ESN coverage: put to the test
In this taster for the October issue, BAPCO Journal Editor Philip Mason talks to the National Fire Chiefs Council’s ESN coverage lead Pete Walker about the first phase of network testing, and the level of user reassurance that should be derived from the results so far.
The pilot was launched in November last year, with 100 devices/apps going out to the South West, West Midlands, Wales and North East regions.
The first three formed emergency services coverage groups for the project – consisting of police, ambulance and FRS – which worked collaboratively, with dedicated resources to carry out the task. The North East was a little different, in that it chose to deploy ‘fit and forget’ devices in 20 ambulances, which then operated in a business-as-usual fashion while letting the technology soak. However, it has now also stood up a three-emergency-services team, in line with the other regions.
To my mind, Wales had an advantage in all this, in that it already had a really tight three-emergency-services group, which is how it works on a national basis as a matter of course. That really lends itself to this kind of task, and we’re trying to get all the English regions to adopt the model. We’re not in a position to dictate to the regions how they should operate, but that is certainly the approach we prefer.
Could you boil down how the work is being taken forward in the field? How are the coverage testing teams structured and skilled?
It differs from region to region, but taking West Midlands as an example, they have a coverage lead sat within each emergency service. Those leads then report to a single point of contact within the region, which in the case of West Mids was a police inspector.
In terms of skilling, it’s not one size fits all – there’s no major prerequisite that people need in order to carry out the work. Again, in the West Midlands there were a number of very good radio engineers and planners working with the team, but the real skill comes in analysing the data on the portal rather than collecting it. Collection can be done totally autonomously – as in the North East – or by a manual walk or drive around.
What has certainly been proven is that the more collaboratively people work, the less chance there is of duplication. We then get a better understanding of what’s going on across the regions.
Thinking of the devices in particular, were there any issues which the pilot brought up prior to the start of its imminent second phase (Assure 1.1)?
We found a variety of issues, which is exactly the point of conducting a pilot. Some of those were with the devices themselves, some were with the app, and some were with the portal.
Can you give examples of what some of them were?
There was a problem with a significant proportion of the pilot devices, which became apparent on the first day. The issue had slipped through the testing carried out prior to the roll-out.
Again, that was actually quite useful because, as well as fixing the issue, it allowed us to test the remedial-action part of the process. The issue was raised through the help desk and subsequently investigated by all parties, and a daily bridge was set up through which users could track progress. An over-the-air fix for that initial bug was found relatively quickly.
In terms of timescale, some of the problems were turned around in a matter of hours or days, while others took a little longer due to the amount of investigation required. It’s quite a complicated arena, given how many suppliers are working on the same project – Samsung providing the devices, telent providing the portal and the app, with the testing taking place on EE’s network.
Read the rest of the interview in the October issue of the BAPCO Journal.
Editor, Critical Communications Portfolio
Tel: +44 (0)20 3874 9216