Author Archive: Graham

By Graham

Parkrun #50 – the one where Thomas, Charlie & Andrew did their first Parkrun!

This Saturday was to be my 50th Parkrun, so, to “spread the joy” (i.e. if I’ve got to get up early on Saturday morning then so should everyone else), on Friday evening we set off for deepest, darkest Yorkshire. I could have stayed in East Anglia (flat, no hills) but the grandchildren all wanted to join in, which is what prompted the hike northwards. Whether they really wanted to join in, or were prodded in that direction by parents who suspected that one more Parkrun (with hills) would finish the Old Man off, is a moot point – given a choice, how many 9-year-olds these days would put down “Call of Duty” to run around a park, unless there was some kind of financial incentive?

These things have a habit of growing organically (which is probably the best way to grow) and the Cambridge crew decided that if the old fella was going to be having a cardiac arrest up north, then they probably ought to be there to make sure that they got their fair share of whatever bounty there might be and that the elder sisters didn’t get first dibs at the will. Incidentally, I don’t mean “elder sisters” in a Cinderella-like context. They didn’t travel up north in a pumpkin pulled by mice (which would be illegal), it’s just that they’re sisters and, er, older. The added incentive is that once these gatherings reach a certain critical mass, then alcohol often gets involved, which takes the edge off a bit for them. And so it came to pass that the southern contingent of the Family e-Quip set off for an appointment with both destiny and Skipton Park.

The first problem, not counting the M6, was the accommodation. Mrs e-Quip is not often called upon to book hotels. Now, I’ve been doing this for years and in the early days I must have chosen some right lemons. On the well-tried principle of trial and error, Mrs e-Quip went directly to “error”. It took Edison over 1,000 stabs to perfect the light bulb. He didn’t regard these as mistakes – his view was that inventing the light bulb is a 1,000-step process. I guess that he started off with something a bit like a light bulb which gradually got better and better. I’m trying to be generous here, but no matter what definition of “hotel” you use, where we stayed on Friday night probably didn’t match it. I suppose that having been brought up with All Creatures Great & Small, Emmerdale & Heartbeat, it hadn’t occurred to Mrs e-Quip that a) there would be a nightclub in Skipton and b) it would also be a “Boutique” hotel. Years of cynicism have taught me to avoid any brand with an adjective in its name. I just naturally assume that a hotel with the word “Comfort” in its name will not be comfortable and any product name which includes the word “Fun” will be anything but. I’m not sure what particular image the word “Boutique” conjured up for Mrs e-Quip, who is as-yet untarnished by years of scepticism, but I suspect that a learning process may have begun (Warning: Steep Learning Curve Ahead). Still, we were able to sleep between 03:00 and 06:45.

It turned out that we weren’t the only ones who had a late night. Bloomy (in Yorkshire adult males always seem to be referred to by a contraction of their surname with “y” stuck on the end) had decided to go straight to the “alcohol often gets involved” phase of the weekend – good lad! It was great to see that everyone had turned up: Bloomy, Wardy, Avery-y (I’m not sure how the algorithm is supposed to work in this case) – the whole crew in fact, apart from Zaris-y (I’m also unsure how the algorithm is supposed to work with Greek names). Some were even wearing PE kit! Not the Misses e-Quip though. All four of them had a note from their mum to say that they were excused games. We bimbled over to the start line for the briefing. “Welcome to Parkrun …” (applause). “Any First Timers?” – Charlie, Thomas and Andrew-y all indicated in the approved fashion (applause). “Any Visitors?”. This is a tricky one. We always say we’re from Cambridge but Mrs Bloom (née e-Quip) always insists that we’re from Peterborough. We were saved from having to make a decision by someone admitting that they had come from Detroit (applause). No-one could compete with that, so we let that pass. “Any milestones?” – at last, my moment of glory, and I got to announce my 50th run. Fortunately there were no 6-year-olds doing their 200th runs, so my 15 seconds in the sun was undimmed. So, we all huddled in the starting funnel: Thomas, Milly, Charlie, Bloomy, Wardy, Andrew-y and me-y, waiting for the crack of the starter’s pistol. Then, we’re off.

We’ve all heard stories of people playing golf with their bosses and deliberately letting them win to make them feel good, and I was wondering how the inter-generational competitive streak would be handled by sons-in-law. Would they politely lag behind in deference to my years or run at their usual pace? How would the Wardy-Bloomy brother-in-law competition play out? Fortunately, I already had a plan. I was going to be in charge of Charlie. Charlie is 5 and this was his first ever run. He had practised by running round the field behind the Co-op and had declared himself ready to run with Granddad. This meant that I didn’t have to worry about impressing anyone. Unless the unthinkable happened and Charlie was faster than me, I was safe. Bloomy was to be Milly’s escort so he didn’t have to worry about racing against Wardy. This was perhaps just as well – bearing in mind the “alcohol often gets involved” bit of the previous evening. So, I’m safe, Bloomy’s safe – result! There was a small cloud of dust left behind as Wardy shot off into the distance, and we were off.

I have to say that one of the reasons that I love Parkrun so much is how friendly everyone is. Charlie is so tiny it’s hard to imagine him actually completing a 5k run. So many people were urging him on and encouraging him, giving him “high-fives” and generally making him feel very happy, even though he was wearing a Chelsea football shirt. Needless to say the shirt attracted a good few comments. I have no idea what running the London Marathon is like (and I have no plans to find out) but I suspect that it is the support of the crowd that gets you over the line. That’s pretty-much how it was with Charlie on Saturday. I don’t know how his tiny little legs carried him 4 times around Skipton Park. We could see that Thomas & Milly were also going great guns as our paths crossed on each lap.

As we approached the Finish Line I held back a few yards to let Charlie run in by himself, to the applause of the family. So, my 50th Parkrun turned out to be a fantastic day, followed, naturally, by breakfast at Billy Bob’s. Thomas, Charlie & Andrew all successfully completed their first Parkrun and Milly finished her 2nd. Just like me, Bloomy was able to blame his finish time on escort and baby-sitting duties. Apart from an overnight stay in a nightclub we had a really great weekend.

(Andrew-y, Bloomy, Wardy (+ Zachary-y), Me-y, Charlie, Thomas & Milly)

As for Wardy – he came 12th. Grrrr!

 

By Graham

The 2017 Annual User-Group Meeting

 

 

Yesterday’s Annual User-Group Meeting was a great success – thank you to everyone who managed to get there. The attendance was the highest we’ve ever had with users travelling from as far afield as Dublin and Truro, with a sizeable cohort from Scotland. The day was unusual, not only in the large attendance, but also with the exceptional weather and the varied mix of users. The meeting is usually predominantly attended by engineers but yesterday we had a planner/equipper, a nurse trainer and a contract manager.

A lot has happened since last year’s meeting. We have a new office in Stafford where Phil & Sarah are based, and they have been joined on the Help Desk by Jack Foulkes.

We started off with a review of the user-base. We had around a dozen new customers joining us over the last year and e-Quip is now being used in over 90 hospitals.

Next, Graham went through the developments that were completed last year (in versions 3.5 and 3.6). You can see a full list here: http://www.e-quip.uk.net/blog/version-3-6-0-will-be-released-this-week/

Phil then did a presentation of the 2 new web applications: a completely revamped version of the ward users app and a nurse training & competence app that nurses and their trainers can use to update their training records. You can see a demo of the ward users app here. That will give you an idea what it can do but Phil’s presentation highlighted its flexibility, both in terms of how it can be configured for individual users and also in how it responds to being run on different platforms. It looks great whether you run it on a phone, tablet or desktop. One of these days I’ll see if I can get Phil to put up a blog article about it.

The nurse training & competence app allows virtually the same functionality as the desktop system, even down to being able to produce cross-tab TNA reports. Phil or Sarah will give everyone a link to a demo version as soon as they have published it, probably in the next couple of days. There were some useful suggestions from the floor about how training managers can be more easily identified for nurses based on work location, so we will be adding those into the system shortly.

The next presentation introduced the new interactive dashboard. First, the design goals were introduced. The dashboard is intended to be:

  • A reporting tool
  • A navigation facility
  • A KPI generator

Starting with PPM compliance we demonstrated each of these features:

First, the dashboard is a reporting tool. It shows similar data to other reports within e-Quip; it just presents it in a more graphical way.

Second, the dashboard is a navigation tool. We can see above that there are 278 PPM compliant high-risk devices. Double-clicking on the gauge opens the equipment screen and shows those 278 assets. Similarly, when you move the mouse over the high-risk PPM non-compliance pie chart you can see that there are 43 high-risk devices which are more than 60 days overdue for PPM. Double-clicking on the red wedge in the pie chart opens the equipment screen and displays them. This applies to virtually every gauge, graph or chart in the dashboard.

Finally, the dashboard is a KPI generator. Whether 68% compliance for high-risk equipment (i.e. 278 out of 408) is good, bad or otherwise is determined by local policies. If you look at the gauge you will see that it has 3 sections:

  • Red: 0 – 33%
  • Yellow: 33 – 67%
  • Green: > 67%

Both the values and the colours can be set for each gauge (as you may well have different KPIs for medium- and low-risk devices). This is done on one of the Settings screens.

Changing these settings won’t change the position of the gauge needle, but it will change the appearance of the gauge.
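To make the banding concrete, here is a minimal sketch of how a compliance percentage maps onto the three gauge bands. This is purely illustrative – the function name and the way thresholds are stored are assumptions, not e-Quip internals – but the boundary values are the defaults described above.

```python
def gauge_band(value_pct, thresholds=(33, 67)):
    """Map a compliance percentage onto a gauge band.

    `thresholds` is the (red/yellow, yellow/green) boundary pair,
    mirroring the configurable per-gauge settings described above.
    """
    low, high = thresholds
    if value_pct > high:
        return "Green"
    if value_pct > low:
        return "Yellow"
    return "Red"

# 278 of 408 high-risk devices PPM-compliant -> roughly 68%
compliance = 100 * 278 / 408
print(gauge_band(compliance))  # Green with the default 33/67 bands
```

Changing the thresholds passed in (just like editing the Settings screen) moves the band boundaries without changing the underlying percentage.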

Having shown the basic idea we then went on to show all of the dashboard screens that we have created so far. Naturally these reports can all be saved as PDF documents. Click here for a copy.

There are too many to show all of them here (the PDF shows all of them) but here are a few samples to give you the general idea. Each one is configurable and can be used for navigation.

The new dashboard was extremely well-received by the users.

Next we went on to demonstrate the new procurement functionality of e-Quip which has been moved from the old e-Quip PM (Procurement Management) system. Having shown the basic ideas, Colette from Dublin then gave a presentation to show how she is using this new functionality in the new-build project at the National Rehabilitation Hospital in Dun Laoghaire. We got some good pointers from Colette and will be adding her suggestions to e-Quip as soon as we can. By the way, Colette was the first ever e-Quip customer, back in 2009. This was when the only screens that e-Quip had were equipment, brand, model, category, location, site, service and provider. We’ve certainly come a long way, with your help, since then!

So, that’s it for another year. We ran out of time and didn’t get a chance to show all the other things that we have planned for this year. If next year’s meeting is as successful as this year’s, we’ll have to look for an alternative location.

 

By Graham

Version 3.6.0 will be Released this Week

Here’s an update on what’s in version 3.6.0. We plan to start building the release today (4th April, 2017).

Reporting Changes

New Man-Hours Report 

A new report has been added to the Quick Report feature.  The report shows a row for each person, with each row containing:

 

  • Name
  • Days Worked in the Period
  • Total Hours Worked in the Period
  • Average Hours per Day
  • Week 1: Hours Worked for Days 1 – 7, Week 1 Total Hours, Week 1 Average Hours per Day
  • Weeks 2, 3 & 4: same columns as Week 1
The hours are taken from Jobs (Technician Hours, Assistant Hours, Supervisor Hours) and also from Tasks.

[Screenshot: the man-hours report]

If no hours have been worked then the column is displayed with a yellow fill.

If the average number of hours falls below the Target Daily Hours on the staff tab of the personnel property page, then the cell is displayed with a red fill.

Quick Report Changes – Contract Financials

In the Quick Report utility the way that contract renewal costs are calculated has been changed. The label for this report item has been changed from “Renewal Cost” to “Cost in Period”. Previously this value was simply the sum of the Total Cost field from all contracts which started in the reporting period. This did not accurately reflect multi-year contracts. It also did not distinguish between contract renewals and new contracts.

The new calculation reflects the funding that is required for contract payments, regardless of whether they are/were new contracts, renewals, or multi-year contracts. For single-period contracts (i.e. contracts where the Year 1 Start Date, Year 2 Start Date etc. are empty) the calculation is the same as before, i.e. the sum of the Total Cost field from all contracts which started in the reporting period.

However, for multi-year contracts the value reported is the sum of the Year 1 Cost, Year 2 Cost etc, for all dates which lie within the reporting period.

For example, consider the single-period contract below.

If Quick Report is run for any period starting 1/1/2017 this contract will report a Cost in Period of £1000. i.e. the entire contract cost, assumed payable as soon as the contract commences.

However, now suppose that the same contract has the following multi-year values:

(Note: for the sake of simplicity the dates below are for months within a year, rather than multiple years)

In this case the Quick Report will ignore the Total Cost and will report on the individual year values as shown below.
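The two calculations can be sketched in a few lines. This is only an illustration of the rule as described above – the dictionary layout and field names here are assumptions, not the actual e-Quip data model.

```python
from datetime import date

def cost_in_period(contract, period_start, period_end):
    """Sketch of the 'Cost in Period' rule (data layout assumed).

    Single-period contracts (no year dates) contribute their Total Cost
    if they start inside the reporting period; multi-year contracts
    contribute each Year N Cost whose Year N Start Date lies in the period.
    """
    years = contract.get("years", [])           # [(start_date, cost), ...]
    if not years:                               # single-period contract
        if period_start <= contract["start"] <= period_end:
            return contract["total_cost"]
        return 0
    return sum(cost for start, cost in years
               if period_start <= start <= period_end)

# A multi-year contract: two payments fall in 2017, one in 2018.
contract = {"start": date(2017, 1, 1), "total_cost": 1000,
            "years": [(date(2017, 1, 1), 250), (date(2017, 7, 1), 250),
                      (date(2018, 1, 1), 500)]}
print(cost_in_period(contract, date(2017, 1, 1), date(2017, 12, 31)))  # 500
```

Note how the multi-year contract reports 500 for 2017 rather than its Total Cost of 1000, which is exactly the change in behaviour described above.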


The TNA Report

This now includes the staff grade and the training date for each model.

Equipment Changes

There have been some significant changes to the equipment screen.

1. The appearance of the general tab of the equipment property page has changed in order to reduce overcrowding on the screen and to allow more fields to be added. 3 separate tabs have been added within the general tab, as shown below:

a. Risks

b. Notes & Description

c. Miscellaneous

2. Three new fields have been added to the general tab of the equipment property page, Training Risk, Risk Likelihood & Risk Consequence.

The Training Risk field is read-only and is copied from the model. The likelihood & consequence fields are initially copied from the corresponding fields on the model record but can be subsequently edited.

3. The appearance of the financial tab of the equipment property page has changed in order to reduce overcrowding on the screen and to allow more fields to be added. Purchase and replacement information have been split into 2 separate tabs within the financial tab, as shown below:

a. Purchasing

b. Replacement

4. Three new fields, Order Value, Associated Costs & Commission Request, have been added to the financial tab of the equipment property page (see above)

5. The appearance of the data tab of the equipment property page has changed in order to reduce overcrowding on the screen and to allow more fields to be added. There are now 3 separate tabs, as shown below:

a. Data Connections

This tab holds information about the various databases that this device connects to. This information is all new for version 3.6.

b. Communication Ports

c. Data Security

The following new fields have been added to this tab:

Data User (Service) – The service or department which uses the data on this device

Licensing – How the associated databases are licensed.

6. A new tab, Loans, has been added to the equipment property page to show the loan history of the device.

7. A new field, Audit Date, has been added to the miscellaneous tab of the equipment property page.

In addition, the field labelled “e-Quip (PM) Code” has been relabelled “BOQ Code”. The reason is that e-Quip PM no longer exists as a stand-alone product since its functionality has been incorporated into e-Quip. The name BOQ Code also more accurately reflects the purpose of the field.

Some cosmetic reorganisation has been required in order to make room for this field.

8. A new field, Calibration Date, has been added to the technical tab of the equipment property page.

Job Changes

1. A new tab, Courier(s), has been added to the job property page.

This can be used to record up to 2 courier pickups & deliveries for a job.

2. A new job warning has been added to notify users if the last PPM job for the device was not completed.

Bulk Update – Jobs & Equipment

1. For the equipment and job property pages, bulk update now supports the ability to append field contents rather than replacing them.

This is fully described in this blog article.

Adding Devices to PPM Schedules

1. A new option, Add the Selected Assets to a PPM Schedule, has been added to the equipment summary screen context menu.

This allows the 1st PPM date to be scheduled based on the installation date of each selected device.

Models

1. A new tab, Features/Risks, has been added to the model property page.

While it was originally intended to be used to identify multiple, model-specific risks, it can also be used to define any list of model-specific attributes.

Sites

1. A new reference data type, Site Type, has been added.

2. The new Site Type field has been added to the miscellaneous tab of the site property page.

Contract Changes

1. A new field, Contact, has been added to the general tab of the contract property page.

2. Some additional fields had been requested for multi-year contracts and the financial tab was getting very full, so we have added the new fields and moved everything relating to multi-year contracts onto its own tab. As you can see it’s still a busy screen!

[Screenshot: the multi-year contract tab]

Many of the values were originally found on the financial tab of the contract property page, while the following new fields have been added:

Multi-Year Discount Amount
Multi-Year Discount %
Per-Year Discount Amount
Per-Year Discount %
Per-Year Invoice No

3. When scheduling jobs from a contract using the Contract Scheduler, there is now an option to create a project to group the jobs together.

Contract-Based PPM Scheduling

We have added an option to the Contract Visit Scheduler to allow all of the jobs that are created to be linked together into Projects.

[Screenshot: the Contract Visit Scheduler]

The example above will create 2 projects as shown below:

[Screenshot: the two projects created by the scheduler]

Not only does this make the jobs easier to find but the projects are also a useful place to link documents associated with the visit (rather than with the individual jobs).

Personnel

1. A new field, Target Daily Hours, has been added to the staff tab of the personnel property page.

This field is used by the new man-hours report (see above).

2. The personnel import utility now allows email addresses to be imported.

3. When a person’s work location changes and the person is added to competence groups, the model policy competence requirements are applied automatically.

Customers

1. A new field, Site, has been added to the details tab of the customer property page

Customer Contracts

1. A new field, Contract Status, has been added to the general tab of the customer contract property page.

2. A new field, Invoice, has been added to the financial tab of the customer contract property page.

3. The Cover tab of the Customer Contract property page has been enhanced to allow callout charges to be defined for each cover period as well as labour charges for work done both inside & outside of those periods.

4. On the Financial tab of the Customer Contract property page the Parts Discount (%) now allows a maximum of 10 characters. Previously it was not possible to add negative values with multiple decimal places.

Commission Requests

1. The following fields have been added to the additional items tab of the commission request property page.

Purchase Date
Installation Date
Location
Service
Recalc. PPM Date

This now allows a single commission request to be used to commission similar devices for different locations in separate batches. For example, you might set up a commission request to be used whenever a particular model is commissioned. This would ensure that all new devices would be correctly commissioned, assigned to the correct PPM schedule, etc.

2. Until this version, when future PPM jobs were created the Planned Date for the jobs was taken from the First PPM Date field on the technical tab. This was not suitable when new batches of devices were added to an existing commission request. A new option, Recalc. PPM Date, has been added to the additional items tab which forces the PPM planned date for each item to be:

Installation Date + PPM Schedule Frequency

This option can also be set when adding batches of additional items.

3. On the technical tab, whenever either the Installation Date or PPM Schedule is changed the user is asked if the First PPM Date field should be changed. The date suggested is:

Installation Date + PPM Schedule Frequency
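The date arithmetic above is straightforward to sketch. The function name is hypothetical and a weekly schedule frequency is assumed here purely for illustration; e-Quip schedules may of course express frequency differently.

```python
from datetime import date, timedelta

def suggested_first_ppm(installation_date, frequency_weeks):
    """Suggested First PPM Date: the installation date plus the PPM
    schedule frequency (assumed here to be expressed in weeks)."""
    return installation_date + timedelta(weeks=frequency_weeks)

# A device installed on 4th April 2017 on a 26-weekly PPM schedule.
print(suggested_first_ppm(date(2017, 4, 4), 26))
```

This is the same calculation whether it is triggered by the Recalc. PPM Date option on the additional items tab or by editing the technical tab.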

Competence Groups

1. A Locations tab has been added to the competence group property page. This now allows a competence group to relate to multiple, unrelated, locations. Previously it was possible to associate multiple locations to a competence group only by using child locations.

2. A new utility, Refresh Competence Groups, has been added to the personnel summary screen. This will reset the person’s membership of competence groups based on location.

3. First Name, Middle Name & Surname columns have been added to allow more flexibility in how personnel are sorted.

4. When a personnel record is edited and the Leaving Date is set, that person is now removed from all competence groups. The personnel import does the same thing.

5. A button has been added to the TNA tab of the competence group property page labelled Apply Policies.

This resets the Required Competence Level for all TNA records for the competence group to match the model / staff grade policy (which is set on the model property page).

[Screenshot: the TNA tab of the competence group property page]

Training Events

1. A new field, PO No, has been added to the financial tab of the training event property page.

PPQs (Pre-Purchase Questionnaires)

1. The PPQ property page window caption now says “PPQ/PAQ Properties” to reflect the new term, Pre-Acquisition Questionnaire, which is becoming more widespread.

2. Two new lookups, Model & Supplier, have been added to the Details tab of the PPQ property page. The model brand & supplier are also displayed and can be filtered on.

[Screenshot: the Details tab of the PPQ property page]

Jobs created in this way now also have the default job status and priority specified by the role.

Loans

1. A new field, Tested With, has been added to the Miscellaneous tab of the loan property page. This is for use by libraries which routinely perform electrical safety tests on all devices as they are returned.

2. Two new tabs have been added for community-based loans, Delivery & Collection. These tabs contain an address (which might be different to the patient’s address) and additional delivery/collection notes.

3. A system option has been added to prevent the issue of loans on a device if its PPM is due or overdue (see below). If this option is not set then a warning will be displayed whenever a device which is due maintenance is loaned.

4. A new Quick Loan utility has been added for use by libraries which routinely perform electrical safety tests on all devices as they are returned.

New Loan Consumables Screen

We needed to implement loan consumables very quickly and so initially we did this as reference data. This allowed us to record the issue and return of consumables with loans but ruled out some of the features that we knew would eventually be needed. Most importantly, consumables weren’t able to support stock. We have now added a loan consumable screen to address this.

[Screenshot: the new loan consumable screen]

Consumables were previously managed using the Reference Data Manager. This is no longer the case.

Out of Hours Loan Application

1. The ability to enter free text in the “2. Who are you?” field has been added.

2. It is now possible to enter a Bed or Bay No.

Leases

1. Two new fields, Capital Asset Value & Residual Asset Value have been added to the financial tab of the lease property page.

Capital Asset Value: this is the capital price of the assets that would have been paid had they been purchased. This is used by the lessor to determine the lease payments and to calculate the residual asset value. This is also the value that is added to the hospital’s asset books.

Residual Asset Value: this is the amount the assets are worth at the end of the lease. It is determined by the initial capital value, the depreciation rate and term, and the residual percentage that is decided at the start of the lease. This residual value is used to calculate the buyout cost of the asset at the end of the lease.

Suppliers

1. Two new fields, Certification Review Date and Certification Review By have been added to the general tab of the supplier property page.

Spare Parts

1. A new field, Shelf Life (Weeks), has been added to the general tab of the order property page.

2. On the locations tab of the spare part property page, the lookup which was previously labelled Cost Centre is now labelled Ownership Type. The purpose of this field is to indicate the ownership of parts held in this location, such as ‘Owned by Trust’ or ‘Owned by Customer’.

Spare Part Orders

1. A new field, Requisition No, has been added to the general tab of the order property page.

2. An auto-generate button has been added next to the PO No field

Equipment Categories

1. Bulk Update support has been added to the equipment category property page

The User Manager

1. It is now possible for a user administrator to set a default filter for any screen for a user.

The User Manager utility now has an additional tab, Default Filters. This grid shows one row for each summary screen.

To set the user’s default filter simply double-click on the appropriate row and select the filter or click the Set Default Filter button. Either method will show the filter selector below.

Not sure of the difference between a base filter and a default filter? A base filter cannot be removed by the user; i.e. it will always be added to every filter that the user sets. Suppose that the user has a base filter of:

[Equipment Status <> ‘Decommissioned’]

If the user sets the filter:

[Location = ‘Holly Ward’]

Then the actual filter that will be applied will be:

[Equipment Status <> ‘Decommissioned’] AND [Location = ‘Holly Ward’]

A default filter can be changed or deleted at any time by the user. It simply specifies the data that should be shown when the user first displays that screen. In the example above the user’s default filter is ‘Devices with Unfinished non-PPM Jobs’. Each time this user logs in to e-Quip, the equipment summary screen will initially show all devices with unfinished non-PPM jobs, but the user can apply other filters at any time.
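The base-filter rule is simple enough to sketch: whatever the user types is ANDed onto the base filter. This is only an illustration of the combination rule described above, using the filter text format from the examples; the function name is an assumption.

```python
def effective_filter(base_filter, user_filter):
    """Combine a base filter with the user's own filter.

    The base filter, if any, is always ANDed onto whatever the user
    sets; a default filter (not modelled here) is merely the user's
    starting filter and can be replaced at any time.
    """
    if base_filter and user_filter:
        return f"{base_filter} AND {user_filter}"
    return base_filter or user_filter

base = "[Equipment Status <> 'Decommissioned']"
print(effective_filter(base, "[Location = 'Holly Ward']"))
# [Equipment Status <> 'Decommissioned'] AND [Location = 'Holly Ward']
```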

2. Bulk Update support has been added to the User Manager.

This allows you to set any of the following information for a selection of users:

Note: The check-boxes on this screen are interesting in that they are tri-state boxes. When they are displayed as a small black square this means “leave this value as it is”. When displayed as a tick this means “set this value” and when displayed as unticked this means “clear this value”.

It is likely that we will also use tri-states for all other bulk update utilities.
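The tri-state semantics can be sketched with Python's `None`/`True`/`False`, which map naturally onto "leave as is"/"set"/"clear". The function and field names here are hypothetical illustrations of the behaviour described above, not e-Quip internals.

```python
def apply_bulk_update(current, choice):
    """Tri-state bulk-update semantics: None (the small black square)
    leaves the existing value alone, True ticks it, False clears it."""
    return current if choice is None else choice

# Three users' flags and one bulk-update choice per flag.
flags = {"a": True, "b": False, "c": True}
choices = {"a": None, "b": True, "c": False}
updated = {k: apply_bulk_update(v, choices[k]) for k, v in flags.items()}
print(updated)  # {'a': True, 'b': True, 'c': False}
```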

Roles

1. It is now possible to prevent users from clicking Show All Records on a per-screen basis. In the past this could be either enabled or disabled for the entire application. You could always make e-Quip give you a warning on certain screens but you couldn’t prevent a user from ignoring the warning.

[Screenshot: the Show All Records warning]

2. Support for a shared dictionary (by role) has been added. You can also grant permissions to add new words to the dictionary.

When using the spellchecker if the user is able to edit the dictionary then the “Add to Dictionary” button (see below) will be enabled.

3. A new role setting, Set Technician to Current User, has been added. If this option is set then whenever a user in this role creates a new job, the personnel record associated with the current user is assigned to the job technician, provided that the personnel record has a personnel type class of Can be Assigned to Jobs.

4. New role options have been added to control the behaviour of the Column Chooser. These options determine whether or not system and calculated columns are displayed in the Column Chooser.

Standard, System & Calculated Columns Visible 

Standard & Calculated Columns Visible

Standard & System Columns Visible

Only Standard Columns Visible

5. New role options have been added to define the job types for jobs created from within commission & decommission requests.

These options apply when jobs are created by clicking the “Create …” button on the commission & decommission request property pages.

System Options

1. A new system option has been added to prevent loans for devices whose PPM is due or overdue.

2. A system option has been added which forces the budget associated with a location to be copied to new assets.

3. A system option has been added which forces child assets to be relocated if the location of the parent device changes.

As you can see, this is quite a big release. As always, any feedback would be appreciated.

By Graham

Using Excel in Filters

We have recently had a couple of requests for a way of copying equipment serial numbers from a spreadsheet into an e-Quip filter. Surprisingly, this is very easy to do, it just needs a little bit of knowledge about Excel formulae.

Let’s suppose that you have a spreadsheet with a list of devices:

Now you need to try to find these equipment records in e-Quip, maybe so that you can add them all to a Medical Device Alert or perhaps to a contract. How can we use the information from Excel to create an e-Quip filter? Of course, you could copy and paste each individual cell, but that would be annoying if there were 100+ cells.

Let’s have a look at what a filter on several Serial No’s would look like in the Advanced Query Builder of the Filter Wizard. I’ve manually created a filter for 2 different Serial No’s and just changed the AND for an OR.

One way to approach this would be to keep on adding “[Serial No = ‘xxx’]” blocks until we have the full list. Again, difficult if we have several hundred, but let’s have a go. What happens if we just copy and paste from Excel into the Filter Wizard? In the example below I have just selected all of the Serial No’s in the spreadsheet and pasted them into the Filter Wizard:

Hmm, not a lot of use; this leaves us with lots of typing to do to put “[Serial No =” etc. around each value and to add lots of “OR”s. Still, there is the germ of an idea here. Excel is very good at doing this kind of thing. This time, I’m going to add an extra column to the spreadsheet and put a formula in it, like so:

 

Look at the formula that I have put into cell B2:

=”[Serial No = ‘” & A2 & “‘] OR “

The “=” just tells Excel that what follows is a formula.

Then we have some text enclosed in double quotes: “[Serial No = ‘“. This means to literally insert the text [Serial No = ‘ (that’s why it’s called a string literal).

Then we have & A2. In Excel “&” means “join two pieces of text together“, so this takes whatever text is in cell A2 and appends it to [Serial No = ‘. This gives [Serial No = ‘1234.

The last part of the formula, & “‘] OR “, simply sticks ‘] OR onto what we have so far, giving [Serial No = ‘1234’] OR.

So far so good. Now we just have to copy that formula to every row. This is easy in Excel. You could use Copy & Paste (Ctrl+C and Ctrl+V) but Excel makes it even easier. Just put the mouse over the bottom right-hand corner of cell B2 (in the red box shown below).

The cursor will change to a small “+” sign. Just double-click on that and the formula will be copied to every row.

Now all we need to do is to select the 2nd column and paste it into the Filter Wizard, as shown:

We just need to delete the last “OR” which is hanging off the end of our filter:

Hey presto! We have a filter that could be several hundred or even thousands of lines, all as a result of a few clicks. You haven’t needed to know anything about SQL or about the structure of the database.
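Incidentally, if you’re comfortable with a scripting language you don’t even need Excel for this bit. Here’s a quick sketch in Python which builds exactly the same filter text (the serial numbers are invented for the example):

```python
serial_nos = ["1234", "5678", "9012"]  # example values; use your own list

# Build one "[Serial No = 'xxx'] OR " block per serial number,
# exactly as the Excel formula in column B does.
blocks = "".join(f"[Serial No = '{s}'] OR " for s in serial_nos)

# Trim the trailing " OR ", just as we delete it in the Filter Wizard.
filter_text = blocks[: -len(" OR ")]
print(filter_text)
```

Paste the printed text into the Filter Wizard and you’re done.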

Can we improve on this? Yes & No. You can save fractionally on the typing and marginally on execution speed, but you need a bit more knowledge about SQL and the database structure. Is it worth it? You decide.

First, you need to know that the Serial No of a device is held in a database field named SerialNo, and secondly you need to know about the SQL IN clause.

This time, in the formula in B2 put:

=”‘” & A2 & “‘,”

As you can see, this involves marginally less typing than before. Then copy & paste the formula to all rows as before. Now your spreadsheet should look like:

If you now open the Filter Wizard and switch to the SQL tab, type in:

SerialNo IN ()

like so:

Now just copy & paste the Excel column into the space between the two brackets and delete the trailing comma:
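Again, for the script-minded, building the comma-separated list is a couple of lines in Python (serial numbers invented for the example):

```python
serial_nos = ["1234", "5678", "9012"]  # example values; use your own list

# Quote each serial number, join them with commas, then wrap
# the whole lot in the SQL IN clause.
quoted = ", ".join(f"'{s}'" for s in serial_nos)
sql = f"SerialNo IN ({quoted})"
print(sql)
```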

Is this an improvement? You typed slightly less in Excel and slightly more in e-Quip. Offset against that you had to know a database field name and you had to be familiar with the SQL IN clause.

Use whichever way you are most comfortable with. Either way, if you have to search on a list of several hundred Serial No’s then a tiny bit of familiarity with Excel could save you a lot of time.

This same technique can be extended to search on a wide range of fields for different data types.

ByGraham

The Footprint Manager

If you haven’t already, it is strongly recommended that you read this blog article about groups & permissions before you look into group footprints. Just to recap, a footprint defines the parts of the database that a group of users can see.

venn

The “Hospital A” group can perhaps see only their own data, and likewise the “Hospital B” group might only be able to see theirs, but an administrator would be able to see data across both sites. This is not the same as a permission: all three groups have read access to equipment (for example); the footprint defines the data that they can actually see.

Discriminators

If you are going to split a dataset up into different chunks that can be seen by different people, then you will need some discriminator, i.e. some way of identifying which users can see which chunks. For example, the e-Quip database might be used by several completely independent sets of users, such as Medical Physics, EBME, Facilities etc. How is it best to say “this supplier is here for the Facilities users“, “this model is here for Medical Physics users“, and so on? In some cases this is easy. You might, for example, be able to say, “this data should only be visible to users at Hospital A“. However, it can rapidly get quite complex. It might be that a spare part must be visible to users at both “Hospital A” and “Hospital B”, but the individual bins should only be visible to the appropriate site.

Far and away the most commonly-used discriminator in e-Quip is the team. It is for this reason that almost every screen in e-Quip has a team field. In the simplest case we simply define teams for “Hospital A” and “Hospital B“. We will dig into how this works, why it sometimes doesn’t and then some more complex alternatives.

How is a Footprint Different from a Filter?

Would it be sufficient just to create a base filter (i.e. a filter that the user can’t remove) for each team? Could you just add “… AND Team = ‘Hospital A’” onto every search that “Hospital A” users make? Well, you could, but you would have to do this for every single screen. e-Quip allows you to create a base filter for every screen for a group, but this would be quite time-consuming to manage, and it wouldn’t handle lookups properly, since lookups can be opened from any screen. For example, you might have an equipment filter that says “… AND Team = ‘Hospital A’” so you can only see “Hospital A” equipment, but if you were selecting the location for a device using the location lookup, the location lookup isn’t aware of equipment filters and so the lookup would show locations at all hospitals.

What we really need is an “entity-specific” mechanism rather than a “screen-specific” approach. This means that when we specify “you can only see information for Hospital A“, this restriction applies not only to every screen, but also to lookups. This, essentially, is the purpose of the Footprint Manager. While filters are screen-specific (and do not apply to lookups), footprints are entity-specific (and do apply to lookups).

Inclusions & Exclusions

The Footprint Manager takes discriminators and says “you can only see …” (an inclusion) or “you can never see …” (an exclusion). If you use a footprint to say “members of this group will only see data where the Team = Hospital A“, then this applies across all screens and also applies to lookups. These users will only see equipment, jobs, contracts etc., which have been assigned to the team “Hospital A“.

You might also want to specify a footprint by saying what users can’t see. A footprint might say “members of this group will never see decommissioned equipment“. If we combined these this might mean that “members of this group will only see data where Team = Hospital A but never show decommissioned equipment“. This would apply to every screen, so if the user clicked Show All Records on the equipment screen they would only see equipment assigned to the team Hospital A which was not decommissioned. The same reasoning would apply when raising a job; it would not be possible for a user with this footprint to create a job for a device for a team other than Hospital A, or for a decommissioned device.

So, an inclusion means “you can only ever see …”, while an exclusion means “you will never see …”.

Would you ever have an Inclusion and Exclusion on the Same Data type?

Basically, no! Is it meaningful to say “You can only ever see data from Hospital A and you will never see data from Hospital B“? If you can only see data from Hospital A there is no need to say that you cannot see data from Hospital B.
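To make the only-ever-see / never-see distinction concrete, here is a little Python sketch of the logic. This is purely an illustration of the rules described above, not how e-Quip implements footprints internally, and the records are invented:

```python
# Hypothetical records; only the team name matters for this sketch.
records = [
    {"id": 1, "team": "Hospital A"},
    {"id": 2, "team": "Hospital B"},
    {"id": 3, "team": "Hospital C"},
]

def visible(record, inclusions=None, exclusions=None):
    """Apply simplified footprint rules to a single record."""
    if inclusions and record["team"] not in inclusions:
        return False   # inclusion: "you can only ever see ..."
    if exclusions and record["team"] in exclusions:
        return False   # exclusion: "you will never see ..."
    return True

# Inclusion on Hospital A: only record 1 survives.
assert [r["id"] for r in records if visible(r, inclusions={"Hospital A"})] == [1]

# Exclusion on Hospital C: records 1 and 2 survive.
assert [r["id"] for r in records if visible(r, exclusions={"Hospital C"})] == [1, 2]
```

Notice that the inclusion alone already hides Hospital B and C, which is exactly why adding an exclusion on top of it would be redundant.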

Using the Footprint Manager

The Footprint Manager is extremely simple to use. It is an external application, found under the Tools entry of the program files menu (the exact location depends on your operating system). When you run it you will be asked to enter a username and password. Once you have logged in you will see a list of groups. To edit a group footprint just double-click on the group.

At the top of the screen you will see a drop-down list which allows you to choose the discriminator that you have decided to use.

footprint1

In order to specify a team inclusion on “Hospital A“, click the lookup at the top of the screen and select team “Hospital A“. For a manager you might also select an inclusion on “Hospital B“.

footprint3

This means that these users can only see data which relates to teams “Hospital A” and “Hospital B“.

You might, alternatively, choose to say that these users can see all data except that which relates to “Hospital C“. This would be achieved with an exclusion.

footprint4

The differences between these choices can be very subtle.

What about NULLs?

If you specify an exclusion, that is fairly unambiguous, i.e. “you will never see any data where the Team = Hospital A“. Any data at all: equipment, job, contract, spare part, location, anything at all; if the team is “Hospital A” then you won’t see it.

Inclusions are a bit more problematic. Let’s say that we have a spare part which is used in both Hospital A and Hospital B, and there are bins for this part at each site. The bins are no problem, we just assign the appropriate team, but what about the spare part itself? If the part is associated with only a single team (maybe it is a boiler spare part for the Facilities team) then there is no problem, but in this example the part needs to be visible to several teams.

This forces us to interpret inclusions as “you will only ever see data where either the Team = Hospital A or the Team is empty“. You can also see why this is the case when you look at entity types which appear several times within one record. For example, on the equipment property page there is not a single supplier field; rather, there are fields for Original Purchase Supplier, New Item Purchase Supplier, Spare Part Supplier, etc.

suppliers

This is a device which was originally purchased from Beaver Medical but which is now purchased from Cardiac Services. Contracts, however, are purchased from Philips. Could we just rely on inclusions for Beaver Medical, Cardiac Services and Philips Healthcare? No, because the spare part supplier, callout supplier, training supplier and loan supplier are all blank. In order to display this record the inclusions would have to be interpreted as:

Supplier = Cardiac Services

OR

Supplier = Beaver Medical

OR

Supplier = Philips Healthcare

OR

Supplier is Empty

This will cause some headaches when we are looking at complex (or realistic) scenarios.
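In pseudo-logic, then, an inclusion test has to treat an empty value as a pass. A tiny Python sketch of that interpretation (illustrative only, not e-Quip’s internal code):

```python
def passes_inclusion(team, included_teams):
    # An empty (None) team always passes an inclusion;
    # otherwise the team must be one of the included teams.
    return team is None or team in included_teams

included = {"Hospital A"}

assert passes_inclusion("Hospital A", included)      # explicitly included
assert passes_inclusion(None, included)              # empty team is visible
assert not passes_inclusion("Hospital B", included)  # excluded by omission
```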

Alternative Discriminators

The team entity is the clear winner when it comes to choosing a discriminator which helps in “sharing out” your e-Quip database between different functional groups of users. However, there are alternative candidates.

Sites are commonly used for this purpose, i.e. “you can only see data where the Site = Hospital A“. For our non-NHS users the customer entity can also be useful, i.e. “you can only see data where the Customer = BUPA“.

e-Quip can handle many complex scenarios, but sometimes you may need to be very imaginative in how you decide to chop up your data.

ByGraham

Group Permissions

You may have come across the series of articles about configuring e-Quip using the Role Manager. This is a really big subject and, even when tackled in small chunks, the articles tend to become long, which might give the impression that this is more complex than it actually is. For that reason we have moved some of the discussions about groups and permissions, along with the Footprint Manager, into separate posts.

In order to use e-Quip you must have a user account and that account will belong to a single group. Essentially a group defines your permissions or rights. i.e. what you are allowed to do within e-Quip. Almost always a group will have multiple members (i.e. user accounts) but remember, a user account is a member of a single group.

Item Permission Masks

Every data item in e-Quip is protected by a permissions mask which can be any combination of:

Read –  This data can be seen

Write – This data can be edited

Add – New records can be created

Delete – Records can be deleted

Control – This grants the read, write, add & delete permissions and also allows records to be archived

Is the term data item the same as saying screen? For example, do permissions on the Location data type just refer to the location screen? No – these permissions apply across the whole of e-Quip. If a group has Add rights on locations, for example, then not only will the Create New Location menu on the location screen be enabled, but the Add New button on the location Lookup control will be enabled everywhere it occurs, even in places like the QBE (Query by Example) utility.

Why is it called a mask? This is a bit of IT terminology used to describe something where the overall effect is defined by adding individual items, often called flags. Suppose that the read flag was represented by the number 1, write by 2, add by 4 and delete by 8. Using these flags, the mask 15 represents: 1 + 2 + 4 + 8. Thus 15 = read + write + add + delete. They are held this way (as ascending powers of 2) because computers can compare such masks very quickly.
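The flag arithmetic is easy to try out in a few lines of Python. The numeric values below are the ones from the example above; e-Quip’s actual internal values may well differ:

```python
from enum import IntFlag

class Rights(IntFlag):
    READ   = 1
    WRITE  = 2
    ADD    = 4
    DELETE = 8
    # Control grants all of the above plus archiving; its actual
    # numeric value in e-Quip isn't documented here, so it's omitted.

mask = Rights.READ | Rights.WRITE | Rights.ADD | Rights.DELETE
assert int(mask) == 15          # 1 + 2 + 4 + 8

# Testing for a single flag is one fast bitwise AND:
assert mask & Rights.WRITE      # write permission is present
```

This is why the flags are ascending powers of two: each right occupies its own bit, so any combination packs into a single number and can be tested with one AND operation.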

Setting Permissions

The User Manager is responsible for defining the permissions associated with a group. To run this utility, click the Office menu (the round menu button in the top left-hand corner) on the dashboard screen and select Manage Users.

A screen will appear with two tabs, one to manage groups and another to manage individual users.

grpadmin

Group creation is trivial: simply click the New button and enter a code, name & description for the group. The code is largely unimportant and is sometimes used for advanced system customisations.

To open or edit a group, simply double-click on it in the grid. When a group is displayed, click the Permissions tab to see the rights mask for every data type.

mask

Editing permissions is as easy as you would think; simply tick the boxes for the rights that you want to set. You can set the rights for multiple data types by selecting them in the list and then clicking Edit Selection. This will display the following screen:

mask2

Just tick the appropriate rights then click the Apply button and those rights will be applied to every selected data type.

Delete Rights

When is it appropriate to delete a record from a database? I would start from “never” and then argue back very vociferously from there! My personal view is that if you create a job by mistake, then you should just set its status to Cancelled, not delete it, but that’s just my personal view. Incidentally, it isn’t possible in e-Quip to delete something that is referenced by something else.

Links Between Entity Types

Let’s suppose that a group has write & add permissions on jobs, so they can create new and edit existing jobs. What happens if those users need to a) add parts to a job and b) remove parts from a job that they may have added accidentally?

There are three separate data types involved here:

Jobs

Spare Parts

Part / Job Links

You can’t add parts to a job unless you can actually edit jobs in the first place. You cannot select from a list of spare parts if you don’t have at least read access to spare parts. But, to add a spare part to a job you also need write access to the Spare Part / Job Link entity. These are three distinct permissions. You might not be able to add new jobs, or create new spare parts, but you can still add spare parts to a job. There are separate rights masks for all three. Have a look at the permissions below:

masklinks3

This group can see (i.e. read) spare parts but has read + write + add + delete rights to spare part job links, spare part model links and spare part supplier links.

This means that these users can:

Add a spare part to a job (“this spare part was used on this job”)

Add a spare part to a model (“this spare part is applicable to this model”)

Add a spare part to a supplier (“this spare part can be purchased from this supplier”)

It should be reasonably obvious that adding (i.e. creating) a spare part is not the same as linking an existing spare part to a job, model or supplier.
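As a sketch of how those three separate masks combine, here is a little Python illustration. The entity names and mask values are invented for the example, not e-Quip’s internal names, and the exact rights e-Quip checks may differ:

```python
READ, WRITE, ADD, DELETE = 1, 2, 4, 8

# Hypothetical rights masks for one group (values for illustration only).
masks = {
    "Job":              READ | WRITE,                 # edit jobs, not create them
    "SparePart":        READ,                         # see parts, not create them
    "SparePartJobLink": READ | WRITE | ADD | DELETE,  # manage the links freely
}

def can_add_part_to_job(m):
    # Needs to: edit the job, see the part, and create the link record.
    return bool(m["Job"] & WRITE) \
        and bool(m["SparePart"] & READ) \
        and bool(m["SparePartJobLink"] & ADD)

assert can_add_part_to_job(masks)
```

The point of the sketch is that the group passes the check without having Add rights on either jobs or spare parts; only the link entity needs them.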

Deleting Links

Notice that this group has permission to delete spare part links. This is not the same as being able to delete spare parts. If a user adds a part by mistake to a job then they need to be able to remove the spare part from the job. Similarly, if a supplier stops selling a particular part, removing it from the supplier is not the same as deleting the spare part itself. It will almost always be the case that if a group has the rights to add a link then they will also be able to delete that link.

Footprints

 

Although groups primarily exist as a permissions mechanism, each group also has a Footprint, which defines the data which is visible to that group. If you’ve ever seen a Venn diagram then you’ll have a good idea what this means.

venn

The “Hospital A” group can maybe just see their own data, and likewise the “Hospital B” group might be only able to see theirs, but an administrator would be able to see data across both sites. This is not the same as a permission: all three groups have read access to equipment (for example); the footprint defines the data that they can actually see.

This blog article explains how the Footprint Manager operates.

 

ByGraham

A Little Database Theory – Part 3

At last! My original intention was to write an article about the difference between relational calculus and relational algebra. In order to get here we needed to see what databases looked like before the relational model and what the relational model actually means. Now we can get to the nub of the grist of the nitty, and/or gritty.

Let’s look at a simple mathematical problem: quadratic equations. Just in case you can’t remember back to your first week at big school, a quadratic equation is an equation of the form:

ax² + bx + c = 0

There are lots of approaches that you can take to solve these equations. The Egyptians didn’t have algebra and so they solved these problems in a geometric way, by drawing. I have a fantastic book on the history of numbers which explains how to do this. If I can ever find it I will post the explanation. It also explains how Sumerians did crosswords and Sudoku (that’s actually a lie, it does nothing of the sort). You might have learnt how to solve these equations by factorisation (I never got the hang of that) or by using a formula:

x = (−b ± √(b² − 4ac)) / 2a

Clearly, in the cat-skinning department, the number of solutions > 1. Well, the same goes for set-theoretic problems. The 2 cat-skinning approaches I learnt (yonks ago), vis-a-vis set theory, were relational algebra and relational calculus.

As I mentioned in the last post, the database world has pretty much decided on a language called SQL (Structured Query Language) to process relational data. This language is a practical implementation of some features of both the algebra and calculus.

Relational Algebra

I find this the easiest approach to conceptualise, largely because the notation is much closer to English (well, if not English, then at least closer to the kind of algebra that we all understand).

Just as normal algebra combines and manipulates operators symbolically, so does relational algebra. The relational algebra operators are things like:

SELECTION – σ

PROJECTION – π

RENAME – ρ

UNION – ∪

CARTESIAN PRODUCT – x

Selection is about finding stuff. To find Tom in the People relation we do:

σ Name = ‘Tom’ (People)

Projection is about choosing the attributes (columns) that we want to see.

π Name, Age, Height (People)

It is (sometimes) easy to see how relational algebra maps to SQL statements:

π Name, Age, Height (σ Name = ‘Tom’ (People))

Becomes:

SELECT Name, Age, Height

FROM People

WHERE Name = ‘Tom’

Very often, as in the example of finding the employer for each person, there are multiple steps which have to be done in a particular order. Think back to the last post where we identified who worked for whom. This took several steps:

  1. Take the Cartesian Product of People and Employers (x)
  2. Ignore tuples (rows) where the People(Employer) is not the same as the Employers(Employer) (SELECTION σ)
  3. Choose the columns that we want from the results (PROJECTION π)

The key point here is that relational algebra is procedural. In this example there are 3 steps and they have to be carried out in the correct sequence.

Another important point is that with relational algebra you specify what you want and how to get it.
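If it helps, the operators are easy to mimic in a few lines of Python, treating a relation as a list of dictionaries. This is a toy illustration of the idea, not a real query engine:

```python
# A toy "relation": a list of tuples with named attributes.
people = [
    {"Name": "Tom",  "Age": 20, "Height": "5' 8\""},
    {"Name": "Dick", "Age": 21, "Height": "6' 2\""},
]

def selection(rel, pred):
    # sigma: keep only the tuples matching a predicate
    return [t for t in rel if pred(t)]

def projection(rel, *attrs):
    # pi: keep only the named attributes of each tuple
    return [{a: t[a] for a in attrs} for t in rel]

# pi Name, Age, Height ( sigma Name = 'Tom' (People) )
result = projection(selection(people, lambda t: t["Name"] == "Tom"),
                    "Name", "Age", "Height")
assert result == [{"Name": "Tom", "Age": 20, "Height": "5' 8\""}]
```

Note how the Python version forces you to nest the calls in a particular order: selection first, then projection. That is the procedural nature of the algebra showing through.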

Relational Calculus

Relational calculus is a formal logic. There are many examples of formal logics, such as Predicate Calculus. Had I paid more attention at university I would be able to tell you the difference between 1st-order logics and higher-order logics, but I didn’t and so I can’t. This does not seem to have had an adverse effect on my day-to-day life. If my wife has noticed my shortcomings in this department then she has never mentioned this (at least not in my presence).

The thing that puts people off formal logic is the notation – it doesn’t look like English. For example, the meaning of the expression below doesn’t exactly leap off the page:

{t.Name, t.Age, t.Height | People(t) AND t.Name = ‘Tom’}

This is the art of jargon – invent a language that very few people can understand and then all pat yourselves on the head because you understand it. This is perhaps a little unkind to computer scientists. If you are going to describe things in very precise, unambiguous ways then you need some special notation that perhaps the man on the Clapham omnibus might not understand.

Things get a bit simpler once you know how to “pronounce” the special, magic words and symbols. The “|” symbol is pronounced as “such that“, the ∃ symbol means “there exists“ and ∀ means “for all“.

If you want to skip the complicated stuff, the key points about relational calculus are:

  1. It is not procedural
  2. The order in which expressions are evaluated is not important

Put simply, while in relational algebra you specify a) what you want and b) how to get it, in the calculus you just specify what you want. There are different types of relational calculus (well, there would be, wouldn’t there?). The types I am aware of are Tuple Relational Calculus (TRC) and Domain Relational Calculus (DRC). Again, perhaps I should have paid more attention at school. I don’t really know much about DRC and so I’ll do the polite (and sensible) thing and skip over it. Incidentally, don’t bother Googling DRC – you will find lots of references to what, when I was a youngster, was called the Belgian Congo.

The TRC expression above translates into SQL as:

SELECT Name, Age, Height

FROM People

WHERE Name = ‘Tom’

A TRC expression is generally of the form:

{t | COND(t)}

i.e. define a new relation made up of all tuples t which satisfy the condition COND. The condition can be made up of multiple sub-conditions separated by AND and OR.

Here’s a slightly more complex example. It finds all people who work for Smiths Ltd.

{p.Name, p.Age, p.Height | PEOPLE(p) AND (∃e) (EMPLOYERS(e) AND e.Name = ‘Smiths Ltd’ AND p.Employer = e.Employer) }

Which means (informally):

Create a new tuple containing Name, Age & Height from the PEOPLE relation (i.e. table), including only those people p in the PEOPLE relation for whom there exists an employer e in the EMPLOYERS relation whose Name is ‘Smiths Ltd’ and whose identifier matches the Employer identifier in PEOPLE.

In SQL this translates into:

SELECT Name, Age, Height

FROM People

INNER JOIN Employers

ON People.Employer = Employers.Employer
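Interestingly, a Python comprehension reads almost exactly like the TRC expression: you say what you want and the condition it must satisfy, with no explicit steps. A toy sketch using the People and Employers data from these posts:

```python
people = [
    {"Name": "Tom",   "Age": 20, "Height": "5' 8\"", "Employer": 1},
    {"Name": "Dick",  "Age": 21, "Height": "6' 2\"", "Employer": 1},
    {"Name": "Harry", "Age": 30, "Height": "6' 0\"", "Employer": 2},
]
employers = [
    {"Employer": 1, "Name": "Smiths Ltd"},
    {"Employer": 2, "Name": "Browns Ltd"},
]

# The any(...) clause mirrors the TRC "there exists an employer e whose
# Name is 'Smiths Ltd' and whose identifier matches p's Employer".
result = [
    (p["Name"], p["Age"], p["Height"])
    for p in people
    if any(e["Name"] == "Smiths Ltd" and p["Employer"] == e["Employer"]
           for e in employers)
]
assert result == [("Tom", 20, "5' 8\""), ("Dick", 21, "6' 2\"")]
```

We never said “take a product, then filter, then project”; we simply described the tuples we wanted. That is the calculus in a nutshell.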

I hope that you have found this article to be moderately interesting. It’s curious that writing this kind of article is an almost cathartic experience. You know full well that nobody is ever going to read it, but you somehow feel better for having written it. I hope at least that I have successfully (and correctly) explained the difference between relational algebra and the relational calculus, which is what I set out to do.

Hey-ho! It’s better than working for a living.

 

ByGraham

A Little Database Theory – Part 2

The first part of this article discussed what databases looked like before Codd’s seminal paper in 1969 introduced the relational model. This post describes what the relational model actually is. By the way, the last article gave some examples from non-relational databases that I’d worked on in the past. I don’t want to give the impression that I was writing software before 1969! Just as there are plenty of dBase applications still around (what do you mean, “what’s dBase?”), when I was first learning database programming there were still plenty of ISAM applications running – there probably still are.

Codd’s paper revolutionised the data processing world. Data processing was now based on mathematics. There were now rules which everyone knew and understood. That’s why virtually all databases that you see now are relational and why the old-style ISAM databases disappeared.

So hands up who knows what a relation actually is? Unless you are a computer scientist or mathematician you would probably be wrong. Many people think that relations describe relationships between entities, such as equipment and models. Wrong. A relation is a concept from set theory, which you probably learnt about in junior school.

Suppose we have 2 sets: A = Names and B = Ages. A common way to define the contents of a set is as a comma-separated list between curly brackets. e.g.

A = {Tom, Dick, Harry}

B = {20, 21, 30}

A relation is a new set created by making ordered pairs from corresponding elements of the original sets. R1 below is a relation on A and B.

R1 = { (Tom, 20), (Dick, 21), (Harry, 30) }

Notice that R1 is itself actually a set which contains:

{

(the 1st member of set A, the 1st member of set B),

(the 2nd member of set A, the 2nd member of set B),

(the 3rd member of set A, the 3rd member of set B)

}

You might see this referred to as:

{

(A1, B1),

(A2, B2),

(A3, B3)

}

The values inside the parentheses are called tuples, or n-tuples, and we will see more of them in the next post. In this case n = 2, so these are 2-tuples.

Suppose we had a 3rd set C = Heights:

C = { 5′ 8″, 6′ 2″, 6′ 0″ }

The relation R2 over A, B & C would be:

R2 = { (Tom, 20, 5′ 8″), (Dick, 21, 6′ 2″), (Harry, 30, 6′ 0″) }

(In this case the tuples are 3-tuples)
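If you want to play with this, a relation maps directly onto a Python set of tuples (toy data from above):

```python
# The sets A (names) and the relation R2 as literal Python sets.
names = {"Tom", "Dick", "Harry"}
R2 = {("Tom", 20, "5' 8\""), ("Dick", 21, "6' 2\""), ("Harry", 30, "6' 0\"")}

# Every tuple's first element comes from the Names set ...
assert all(t[0] in names for t in R2)

# ... and each member of the relation is a 3-tuple.
assert all(len(t) == 3 for t in R2)
```

A set, whose members are tuples drawn from the original sets: that is all a relation is.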

You should now be able to spot how relations are important. Let’s just represent this a different way, with the names of the sets at the top.

table

We can see now that a relation is effectively a database table. This is the fundamental element that underpins the entire relational database model. RDBMS’s (Relational DataBase Management Systems) all manage data in what have become known as tables, where table is synonymous with relation. All of the data is defined by relations and everything that you can conceivably do to or with these relations is defined by set theory.

Incidentally, RDBMS’s are not the same thing as ROUS’s (Rodents of Unusual Size). They’re a whole different kettle of ballgames, as they say.

Just step back and ponder the enormity of this. We have 3 sets, Names, Ages and Heights, each having the same cardinality (i.e. number of members). By using some fairly elementary maths we can combine those into a mathematical model of the now-familiar concept of a database table. There are some rules that these sets and relations have to follow but the power and flexibility this model gives us is truly amazing (compared with the disadvantages of the older database systems discussed in the last article).

So far this simple relation describes a single, real-world entity, namely a person. Ok, a person is defined by more than their name, age & height, but getting a model that more-closely resembles reality simply involves adding more sets for things like hair colour, shoe size etc. The relational model allows us to define relations for many different entities and to link them together in all kinds of ways.

As an example, let’s look at some set theory and see how this can extend our model. In set theory the Cartesian Product (normally written as “x”) over 2 sets gives a new set containing every element of set 1 combined with every element of set 2.

If:

A = {a, b}

B = {x, y}

Then:

A x B = { {a, x}, {a, y}, {b, x}, {b, y} }
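Python’s standard library will even compute Cartesian Products for you, which makes the definition easy to check:

```python
from itertools import product

A = {"a", "b"}
B = {"x", "y"}

# Every element of A paired with every element of B.
AxB = set(product(A, B))
assert AxB == {("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")}
```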

How is this useful? Let’s create a new relation, Employers, to list companies that people can work for, and rename our earlier relation R2 to People. We can add a new column to the People relation, called Employer.

Employers = { (1, Smiths Ltd), (2, Browns Ltd) }

People = { (Tom, 1), (Dick, 1), (Harry, 2) }

If we take the Cartesian Product of these we get:

join

Now we just need to get rid of every row where Employer (from People) doesn’t match Employer (from Employers). This will give (I have got rid of the columns I’m not interested in):

join2

It’s hard to grasp how revolutionary this was back in 1969, now that all databases work like this. We can now keep different lumps of data and combine them using trivial set theory. Each real-world entity (like a person or employer) now only ever needs to be defined once. If Smiths Ltd changes its name to Jones Ltd, that change only needs to be made in a single place, in the employers relation. This is one of the key requirements of properly-designed relational databases: “one fact – one place”.
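The whole product-then-restrict-then-project recipe fits in a few lines of Python (a toy illustration using the same data):

```python
from itertools import product

employers = [(1, "Smiths Ltd"), (2, "Browns Ltd")]   # (Employer, Name)
people    = [("Tom", 1), ("Dick", 1), ("Harry", 2)]  # (Name, Employer)

# Take the Cartesian Product, discard rows where the employer identifiers
# don't match, then keep only the columns we are interested in.
who_works_where = [
    (p_name, e_name)
    for (p_name, p_emp), (e_emp, e_name) in product(people, employers)
    if p_emp == e_emp
]

assert who_works_where == [("Tom", "Smiths Ltd"),
                           ("Dick", "Smiths Ltd"),
                           ("Harry", "Browns Ltd")]
```

This is, in miniature, exactly what a relational join does.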

Set theory contains many operations that let you combine sets (UNION), identify things which several sets have in common (INTERSECTION), subtract one set from another and so on. It would be tiresome for programmers if they had to process data using set-theoretical concepts. Just as high-level programming languages (Fortran, COBOL, C, C++, C#, Pascal, BASIC etc. [Look out – I feel another article coming on!]) shield programmers from having to know how the chips inside the computer work, so high-level data processing languages shield programmers from the grubby maths.

The database world has pretty much decided on a language called SQL (Structured Query Language) to process relational data. There is no consensus as to how this is pronounced; one man’s S-Q-L is another man’s Sequel.

In our simple example above the query “who works where” is expressed in SQL by:

SELECT People.Name, Employers.Name

FROM People INNER JOIN Employers

ON People.Employer = Employers.Employer

If we only wanted to see who works for Smiths Ltd we could add a restriction, such as:

WHERE Employers.Name = ‘Smiths Ltd’

This is hiding the mathematical complexities from us and allowing programmers to work in a language which is close to English. However, scratch the surface and the maths is still there.

In the next post I’ll discuss both relational calculus and algebra and dig a bit deeper into what SQL is doing.

 

ByGraham

A Little Database Theory – Part 1

The other day I came across a forum on the web and spotted a post by someone who wanted to know the difference between relational calculus and relational algebra. As happens so often on the web, all of the responses were written by people who had virtually no idea what they were talking about. So, having a couple of hours to kill I thought I’d give it a go.

Almost everyone has come across the term “relational database” and probably to many people “relational databases” and “databases” are the same thing. They’re not, but the relational model has become so popular that non-relational databases have (except in very specialist situations) pretty much disappeared.

I thought I’d split this article into 3 separate posts:

  1. What did databases look like before the relational model?
  2. What does “relational model” actually mean?
  3. A comparison between relational calculus and relational algebra

Let’s have a look at what the e-Quip database might have looked like before Ted Codd presented his ground-breaking paper in 1969. Imagine that you have a book and you have decided to have one page per device. At the front of the book is an index which tells you the page number for a particular device. To get information about Equipment No 12345 you would look in the index and see that this device record can be found on page 253. You then quickly flick through to that page and Bob’s your uncle, you can see all the information you need.

If the book becomes a computer disk file then there were, and still are, plenty of ways of maintaining an index and quickly getting to the right part of the file based on the value from the index. Perhaps the most common was known as ISAM, or Indexed Sequential Access Method. This meant that you used the index to get to the right place and then read each character sequentially from that point onward.
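The "book with an index" idea can be sketched in a few lines of Python: the index maps a key to a byte offset, and seek() jumps straight to the record. The record contents and sizes here are invented for illustration, with an in-memory buffer standing in for the disk file.

```python
import io

RECORD_SIZE = 30
records = {"12345": "12345     SN-998     Defib",
           "67890": "67890     SN-123     Pump"}

# Write the "file" and build the index (key -> byte offset) as we go.
data = io.BytesIO()
index = {}
for key, record in records.items():
    index[key] = data.tell()
    data.write(record.ljust(RECORD_SIZE).encode("ascii"))

# Look up Equipment No 12345: one index lookup, one seek, one read.
data.seek(index["12345"])
print(data.read(RECORD_SIZE).decode("ascii").rstrip())
```

That is essentially all ISAM gives you: fast access by the indexed key, then raw sequential bytes you must interpret yourself.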

If you’re in charge of the book you might come up with some rules, like:

  1. The first 10 characters on line 1 are the Equipment No
  2. The next 10 characters on line 1 are the Serial No
  3. The 2nd line holds the model name and description
  4. The first 10 characters on the 3rd line hold the purchase date in the format dd/mm/yyyy
  5. The next 10 characters on the 3rd line hold the installation date in the same format

So far so good(-ish). How about jobs? You might decide to have 2 lines per job, starting from line 20, which might give you enough space for 20 jobs.

[Image: an example ISAM equipment “page” layout]
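The rules above translate directly into code that slices fixed columns out of each line. Here is a sketch in Python; the sample record text is made up, but the column positions follow rules 1, 2, 4 and 5.

```python
from datetime import datetime

# Lines 1 and 3 of an imaginary equipment "page".
line1 = "12345     SN-00998  "
line3 = "01/12/201514/01/2016"

equipment_no = line1[0:10].strip()    # rule 1: chars 1-10 of line 1
serial_no = line1[10:20].strip()      # rule 2: chars 11-20 of line 1
purchase = datetime.strptime(line3[0:10], "%d/%m/%Y").date()    # rule 4
installed = datetime.strptime(line3[10:20], "%d/%m/%Y").date()  # rule 5

print(equipment_no, serial_no, purchase, installed)
```

Note that nothing stops a program slicing the wrong columns, or writing text where a date should be; the "rules" live only in the programmers' heads.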

I have actually worked on plenty of applications like this. The biggest problems that we had were a) the dates were in dd/mm/yy format (oops! what’s going to happen the day after 31/12/99?) and b) nobody ever got round to writing down the rules, so if we ever needed to add anything we just picked a bit of space somewhere on the “page” that looked vacant and used that. Unfortunately, after a few years the vacant spaces become few and far between.

When space gets really tight things get difficult. Suppose we need to add another date (e.g. Replacement Date) but there aren’t 10 contiguous unused spaces in the file. There might be 3 spaces available on line 3, one at the start of line 4, another 2 at the end of line 4, and 4 in the middle of line 7. To read this date the programmer needs to read data from 4 different places in the file, join the pieces together in memory and then interpret the whole as a date. Writing a date is even more complex. The programmer would start with “01/12/2015”, split it into 4 separate pieces (“01/”, “1”, “2/” & “2015”) and then write each piece to the appropriate place in the file.
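The fragmented-date gymnastics can be sketched in Python. The slot sizes (3, 1, 2 and 4 characters) match the example above; where those gaps actually sit on the "page" is invented, so only the split-and-join logic is shown.

```python
SLOT_SIZES = [3, 1, 2, 4]   # the spare gaps scattered around the "page"

def split_date(date_str):
    """Split a dd/mm/yyyy string into the pieces each gap can hold."""
    pieces, pos = [], 0
    for size in SLOT_SIZES:
        pieces.append(date_str[pos:pos + size])
        pos += size
    return pieces

def join_date(pieces):
    """Reassemble the pieces read back from the four gaps."""
    return "".join(pieces)

pieces = split_date("01/12/2015")
print(pieces)              # the four fragments from the text
print(join_date(pieces))   # back to a readable date
```

Every program that touches the file has to repeat this dance, in exactly the same way, or the date is silently corrupted.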

What happens when you finally run out of space on the page? Suppose a device has 21 jobs. You can’t just write this 21st job underneath the 20th because you will be overwriting the next equipment record. Well, there are things you can do, like starting another book (with its own index) just for all the additional information, but it all gets complicated and the programmers have to handle the complexity.

Searching for things was very difficult. While you could easily find the right page for a particular device, how do you find all jobs completed last month? You start at page 1, look at every job line (from line 20 onward) and check the date in columns 10–20. If that date is in last month, then add the Job No to your list. Then turn to page 2, and keep going until you have read the whole file. You could get clever and create more indexes at the front of the book, but this all added to the complexity.
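The whole-file scan described above looks something like this in Python, with an in-memory list of "pages" standing in for the file. The job-line layout is an assumption consistent with the rules earlier: jobs start at line 20, with the completion date in columns 10–20.

```python
# Two imaginary "pages"; lines 1-19 hold the equipment details,
# job lines start at line 20 (index 19).
pages = [
    ["..."] * 19 + ["JOB-001   15/11/2015", "JOB-002   03/12/2015"],
    ["..."] * 19 + ["JOB-003   28/11/2015"],
]

def jobs_in_month(pages, year, month):
    """Scan every page and every job line - there is no shortcut."""
    found = []
    for page in pages:
        for line in page[19:]:                 # job lines only
            day, mon, yr = line[10:20].split("/")
            if int(yr) == year and int(mon) == month:
                found.append(line[:10].strip())
    return found

print(jobs_in_month(pages, 2015, 11))
```

With two pages this is instant; with a hundred thousand devices it is why these questions were answered once a month by a batch report rather than on demand.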

It’s not really surprising that these databases were often processed in batch, i.e. once a month you run a program which produces a whole load of printed reports. If you want to know how many jobs there were last month, then get last month’s report off the shelf and read it. Interactive software applications where you could ask any question any time you liked were not that common.

What do you do if some eejit writes the wrong thing in the wrong space? You have positioned the file at a place where you are expecting to see a date, but instead of “21/01/2015” you find “HELLO”. Just letting your program collapse in a heap wouldn’t be a good idea, although plenty did. You ended up having to write complex code to do simple things, just to take care of possible errors.

Another problem is that the file (i.e. the virtual equipment book) is completely separate from the application. A file is just a file – any programmer can write whatever they like to it, regardless of the format that you have specified. To keep the contents of the file meaningful you have to rely on every program that accesses it a) knowing the rules and b) sticking to the rules. Even if everyone does this, programs will inevitably have bugs. The programmer might have intended to write “Graseby MS26 Syringe Driver” starting from the 1st character on the 2nd line of the particular equipment record, but programmers make mistakes, especially when keeping track of where they are in a large, complicated file. The end result is that it is hugely difficult to prevent the file from being corrupted.

As an aside, I once worked on a database which managed the bonuses paid to travel agents depending on how quickly they submitted details of the travellers’ cheques that they had sold. The database was exactly like the ISAM file described above. In order to handle errors, the standard technique was:

1000 IF something goes wrong

1010 THEN LET X = 1/0

All of the applications were written in Data General Business BASIC (which conveniently had ISAM built into the language). If you’ve ever used BASIC you’ll know that it is an interpreted language. The interpreter reads a line, translates it into machine code then executes it. It then moves on to the next line and does the same again. If something goes wrong the interpreter just stops and displays a message, then sits and waits for someone to tell it what to do. LET X = 1/0 is going to cause an error, since you can’t divide anything by 0. The interpreter will display something like “Divide by zero error at line 1010” then just sit and wait.

The computer operator could then call the programmers, who could then a) sit in front of the computer, b) find out what has gone wrong, c) fix it, then d) tell the interpreter to carry on.

From a programmer’s perspective this is awful. Every single database application is different, even in the same organisation. You might be an expert on one system and not have the slightest idea about another. Codd’s paper describing his new relational model, published in 1969, revolutionised the data processing world. Data processing was now based on mathematics. There were now rules which everyone knew and understood. That’s why virtually all databases that you see are relational and why the old-style ISAM (Indexed Sequential Access Method) databases disappeared.

My next post will look at what we actually mean by “relational model”.

 

ByGraham

Turing Machines and the Turing Test

 

Everyone these days has heard of Alan Turing, the father of computer science, the man who cracked the Enigma code and the eponymous “inventor” of both Turing Machines and the Turing Test. Science fiction tends to conflate the two, but they are completely different things, both emanating from that same genius.

The Turing Machine

I don’t work as a computer scientist. I know people who do, but because I have a mortgage to pay I work as a software engineer, even though my discipline is computer science. Computer scientists live in colleges, generally in Cambridge although there may be one or two in Oxford. Back in the 70’s there were even some in the US of A, copying the stuff they learnt from Cambridge (Ok, Kernighan, Ritchie, Thompson et al, this isn’t meant to be serious, please don’t sue me) . They don’t do anything as demeaning as writing software. So what do they do?

How to be a genius: step 1. Think in abstract terms. Einstein worked with thought experiments, like, “if I were sitting on a photon, what would I see if another photon went past me, and at what time would we both think it was time for tea?”. How did Alan Turing imagine computers before they existed? What’s even more astonishing is that Turing worked out the kinds of problems that you could and couldn’t solve with a computational engine before there was any such thing as a computational engine. That is the power of abstraction in the hands of a genius. I’ve only ever met one true genius in my life and his gift was to explain the complex in very simple terms. There are not many people in the world with this gift. Think Feynman, Turing, Einstein…

Can you imagine a machine that can add 3 and 4, and reason about what it can do, before any such machine has ever been built?

Turing suggested that you imagine a machine which:

a. Takes its input from an infinite tape made up of cells

b. Has a read/write head which can read the cell under the head

c. Has some instructions (a stored program) which tell it what to do depending on what is under the head.

For example, the instruction might say:

Do the following forever:
    If there is a non-blank under the head Then
        Move the head right to the 2nd blank cell
        Write “1”
        Move the head left to the 2nd blank cell
        Move the head right one space
        Erase the cell contents
    Otherwise
        Stop

These are the instructions to add two numbers together. When the machine stops the answer should be on the tape.

(I can’t do graphics so the following images are from https://vimeo.com/46913004)

[Image: the tape before the program runs, from the video above]

If you just follow the instructions (i.e. program) then you will end up with:

[Image: the tape after the program has run]

It’s worthwhile just working through this example. It has nothing to do with binary notation or real tapes – it doesn’t matter whether each cell on the tape has 0’s, 1’s, apples or bananas in it. The model is purely conceptual.
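If you'd rather let a machine do the working through, here is a tiny Turing-machine simulator in Python. The transition table is one standard unary-addition program (fill in the separating blank, then erase the last 1), not necessarily the exact machine from the video, and it assumes both numbers are at least 1 and written as blocks of 1s with a single blank between them.

```python
def run_turing_machine(tape, rules, state="scan_first", blank=" "):
    """Run until a rule leads to 'halt'. Rules map (state, symbol)
    to (symbol_to_write, head_move, next_state)."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip()

rules = {
    ("scan_first", "1"): ("1", "R", "scan_first"),    # skip first block
    ("scan_first", " "): ("1", "R", "scan_second"),   # join the blocks
    ("scan_second", "1"): ("1", "R", "scan_second"),  # skip second block
    ("scan_second", " "): (" ", "L", "erase_last"),   # step back to last 1
    ("erase_last", "1"): (" ", "N", "halt"),          # remove the extra 1
}

print(run_turing_machine("111 1111", rules))  # 3 + 4 -> "1111111"
```

The point stands: the machine knows nothing about numbers or addition; it just shuffles symbols according to a fixed table, and arithmetic falls out.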

Not only did Alan Turing come up with this fantastically elegant idea, he also defined the limits of this wonderful abstract machine; not only what it could do, but what it couldn’t do. Adding is fine, so is multiplication, division etc. But one very important question is, “if the input is infinite, is the machine ever guaranteed to reach the ‘Otherwise Stop‘ state?”. In computer science this is called the Halting Problem. It says a lot that Turing was aware of this even back in 1937. Things have moved on a lot since then but computer science is still the science of the nature of problems that can and can’t be solved by computational engines.

It turns out that there is a whole class of problems that can be solved with Turing machines. If you’re interested you might want to check out the Game of Life, invented by John Conway, which, as well as being a mathematical model for evolution, turns out to be able to solve any problem solvable by a Turing machine.
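The Game of Life's rules fit in a few lines, which makes its Turing-completeness all the more remarkable. A sketch in Python, storing live cells as a set of (x, y) coordinates: a live cell survives with 2 or 3 live neighbours, and a dead cell comes alive with exactly 3.

```python
from itertools import product

def step(live):
    """One generation of Conway's Game of Life."""
    neighbours = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbours[cell] = neighbours.get(cell, 0) + 1
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

blinker = {(1, 0), (1, 1), (1, 2)}  # a vertical bar of three cells
print(step(blinker))                # flips to a horizontal bar
```

Run step twice and the blinker returns to its starting shape; build the right starting pattern (gliders, glider guns and so on) and you can compute anything a Turing machine can.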

 

The Turing Test

Even back in the 1930’s as a young under-graduate, Turing could see the logical consequences of his assertions about what could and could not be computed with a computational engine. Naturally (if you’re a genius) his thoughts turned to machine intelligence. Imagine that you were having a conversation with another “person” on the other side of a wall. How could you be certain that you were talking with a person rather than a computational engine, or robot of some kind?

This is similar to the famous “cogito” problem in philosophy. We all know that external factors can affect our thoughts. Descartes put this in terms of a mischievous imp but we can imagine that things like brain tumours, hallucinogenic drugs etc. can induce visions. The question is, how can we prove that what we see is real? Descartes reduced this to Cogito ergo sum (I think therefore I am). i.e. the fact that I am thinking means that I must exist. Everything else could be down to the mischievous imp. So imagine a conversation through a brick wall. How could you be certain that the entity (mischievous imp or otherwise) on the other side of the wall is intelligent? In Turing’s terms, “is it possible for machinery to show intelligent behaviour?”.

This is the so-called Turing Test. The basic idea is that if you can’t tell the difference then artificial intelligence has been achieved (this is a massive generalisation! Many computer scientists would disagree with this statement. Fortunately, this is a blog, not a PhD thesis). The really important point is that someone was thinking these ideas before computers actually existed.

So, this is computer science. Turing Machines and Turing Tests are completely different things. Perhaps you did a physics degree but are working as a clinical scientist, bio-med or EBME engineer. The difference between being a “physicist” and a bio-med is similar to the difference between being a computer scientist and a software engineer. I studied the former, but my mortgage is paid by the latter.

It’s the difference between SQL and relational calculus/algebra (I’m sure there’ll be a post about this soon!).

As a post-script, many years ago when I was at university, like most students I developed a skill for knowing what was likely to be in the exams. Exam questions, by their very nature, have to be answerable in a given time or number of paragraphs. Artificial Intelligence is so mathematically complex that only the most general areas lend themselves to exam questions. Thus, when it came to revision, AI was not top of my list, as the maths was too complex for a 4-hour exam. Imagine my horror when the AI question required a mathematical explanation of the differentiation between circles and tubes in satellite photographs (i.e. rocket launchers versus sewage works). It’s a good job I wasn’t working for JFK in the 60’s as I would have recommended a pre-emptive strike on the Ely sewage works!