Insightinar #1: Rapid application development and extreme manufacturing challenges

Ian Quest, Director — Quick Release_
Nick Solly, COO & Head of Special Projects — Quick Release_
Chaired by Rob Ferrone, Founding Director — Quick Release_

This is a machine-generated transcript, lightly tidied for readability. Speaker attributions and paragraph breaks have been added; the substance of the talk is preserved.

---

ROB FERRONE: Right, it's two o'clock UK time, three o'clock Germany time, so let's make a start. Hello, welcome to those who have joined the webinar, and thank you very much for joining. The world seems to be filling up with webinars these days — we value your time, so we've created something high-impact that fits into the next 25 minutes. This is an insight into our experiences of the UK Ventilator Challenge: you'll hear about the unique project itself, and we'll also be sharing some of the interesting lessons learned. The first I knew of this challenge was when I got a WhatsApp from our CEO on a Sunday morning. As I watched the project unfold from lockdown in Germany, like many of the team, I was captivated. Every day we got snippets of news as if they were being reported from battle. The presenters today, Ian and Nick, were key individuals on that project — they worked long hours every day, without a day off, for 100 days. Nick Solly, our COO, is wearing his CTO hat today, and Ian Quest heads up the consulting group. For those of you who aren't familiar with Quick Release_, we operate within complex engineering environments, and to draw a health-related parallel: product data is the lifeblood of an organisation, and we are product data management doctors. We address health issues through direct surgical interventions, and we also help our clients with long-term rehabilitation and fitness plans. 
Every day we work both on the ground and strategically with established OEMs and start-ups — but this project required something else entirely, and took the way we work to new extremes. I'll now hand over to Ian, who'll tell you more. --- IAN QUEST: Hi, thanks Rob. Before we talk about the specifics of what we did on this project, I just want to put it into the context of the scale of what was done and the number of people involved. We were just some of the cogs in a much bigger machine, and without the herculean efforts and the brilliance of many of the people involved — from a wide range of organisations — this challenge would never have been met. To put a little bit of context around the specific thing we'd like to talk about today: we often get involved in projects that have the intent to address a specific programme delay or programme issue, or projects which are there to address systemic process issues. Those two things should be incredibly complementary — and are, to a large degree — but in practice those projects often end up quite isolated. The programme projects don't have lasting impact from programme to programme, and the tactical fixes don't get carried forward. The process projects often grow as the specification grows, becoming longer and bigger, and their actual impact on the programme, particularly the programme of today, becomes very limited. In a perfect world, processes and systems would simply unfold to meet your needs day by day as you drove programme progress — when you needed to order a part or report an issue, that functionality would just appear and be very intuitive. There are a number of things that stand in the way of that. Firstly, the process and systems teams are often not the people who deeply understand the day job, so they can't make intuitive decisions — they need specifications, and they need to go back and forth with wireframes and so on. 
Secondly, the people doing the day job often don't have the time to focus on getting the process team up to speed — they're focused on the programme gateway or the next step for the product. And thirdly, it takes time to develop things, and if you can't respond within a day or a few days, people will have developed their own spreadsheet and found their own work-around to the problem. I give this context to the project because the speed and size of the challenge on the Penlon project forced us to think differently. There was no way of developing processes first and then using them — the whole thing had to be done in parallel. --- IAN QUEST: Just to give you a background to the project — you'll probably know a lot about this and have read a lot about it in the news, so I won't spend long on it. Covid arrived in the UK in late January this year. It was understood very early on that the advanced stage of treatment required mechanical ventilation. By mid-March, we'd been able to muster about eight-and-a-half thousand ventilators in the UK — from existing NHS stock, from private healthcare, and from converting juvenile units to adult units — but this was way below the predicted requirement of around 20,000. That's when the PM issued the call to arms to UK industry, the Ventilator Challenge UK consortium was formed, and the Penlon option was agreed. So, a little bit about that Penlon option. Traditionally, Penlon build much more complex anaesthetic machines, which have many more components than just a ventilator — but within them is a fully functioning and fairly sophisticated ventilator. The proposal was to take those ventilator elements, repackage them — the spec changed somewhat as well, to meet Covid needs — and get that certified and produced. So this was not a certified machine when we went into this, and whilst it hasn't got the complexity of a car or an aircraft, you'll see from some of the numbers there that it wasn't a simple task. 
It wasn't just a basic process of ramping up a few suppliers and making a few more. In terms of the pace, I think this graph says more than any other about what happened. Back on 16th March, the call to arms. By the end of March we had a unit going through trial with the MHRA — the body that certifies it's safe to use. Authorisation was received mid-April, and by the start of May we were starting on the proper ramp-up of that long curve. And whilst there were many, many elements that contributed to this, one of the things that was absolutely necessary here was a data-driven approach. If any one of the items required to build that ventilator wasn't there, we were stuck. So we needed to know, for every individual part, what the supply was, what the issues were, and what the alternatives were that could be used — so we could make sure we were ready to meet demand. I'm going to hand over to Nick Solly, who'll explain a bit more about the approach we took to manage that data. --- NICK SOLLY: Thanks Ian. To achieve this kind of data-driven approach, there were some key challenges we needed to overcome. Firstly, geography: across the consortium we had members with sites across the UK, many of those companies had never worked together, and staff had never met. Secondly, processes and systems — or the lack of them: each company involved had internal systems managing their own data, but no overall process existed to bring it all together for the scale and challenge of building thousands of ventilators in a matter of weeks. And thirdly, pace: as Ian described, the project had to move very quickly, with hour-by-hour updates and status reports, and timelines that wouldn't permit any repeated lengthy ad-hoc analysis. We needed to pick an approach to solve these problems, and there were some key aspects in choosing that approach. 
On one hand we needed to consider ease of implementation, adoption, configuration and customisation — and on the other, robustness, scalability and ease of maintenance. As I'm sure you can guess, one way we could have approached this is just by using spreadsheets, probably with some macros, with all their ease, speed and familiarity — but they're a nightmare to keep maintained or connected. On the other hand, we could have picked an out-of-the-box enterprise ERP package and tried to lay that on top of the project — the opposite extreme to the spreadsheet. So we were thinking: is there another option? Can we create the system and the associated processes along with the project itself? This isn't just about bespoke software — this is about highly needs-driven rapid application development, created from within the project teams themselves. Something we started referring to as pop-up software. QR_ has a history of building smart tools to support our teams and our clients. But being honest, looking back, if at the beginning we'd seen the full spec of everything this tool came to do, and the timeline we needed to do it in, we might have been pretty daunted. A journey of a thousand miles, though, starts with a single step, and that led us to create QR MRP. You can see here some of the different aspects that it ended up managing. --- NICK SOLLY: The first case study we're going to talk about is scrap and returns management, and then we'll talk about issue management. I've chosen scrap and returns management because it illustrates one of the emergent needs that we addressed on the project. To put it in context: due to the pace of the pandemic, we had many new parts, engineering changes, suppliers and tools being brought on board, and we expected a considerable number of parts not to make it through first time around. Let's have a look at that in action. 
On the left-hand side you can see QR MRP, the web application we spun up, and on the right-hand side I'll talk you through the story of how we integrated the programme and the process, developing them in parallel. There was no agreed way of managing scrappage, so that made us bring the right people together, take the existing Penlon scrap process, and develop and evolve it for the needs of this consortium. We launched that initial process, but we still needed visibility of the quantities and the potential impact on stock levels. QR MRP was already maintaining our bill of materials and our MRP management — part by part, we were checking we had enough parts to meet the ramp-up curve we were aiming for. Now we had scrapped parts moving out of that pool of available parts, and we needed to make sure we updated it for that. One of the first things we created for scrappage was a simple way for users to come into the tool and log a scrappage. One of the benefits of also maintaining your part data and your bill of materials here is that you have access to all the current parts — you can use the search function to find the part you want. Once you select it, it pulls back the image of the part, so you can check it's the right one. You just fill in this form and hit "log", and that builds itself into the MRP calculations, so we're not going to run short on parts. That was great, but many of the parts being raised weren't immediately being scrapped — they needed to be returned to Penlon for further assessment. So we updated the tool to handle returns. If I open up a particular return, you can see someone's raised this part number with a quantity. It's then been received by the team at Penlon. There's an image here taken from the record. 
And because everything's connected together, I can click on this part number, move through to the part query screen, and see information about where it exists in the bill of materials, part data, commentary from various meetings, and MRP information. I can even click through to the drawing. We started receiving parts back at Penlon. We got a team in place to manage those, and we got the facilities required — two shipping containers here to start with, to store those parts, affectionately known as the containers of doom. Then we started receiving the parts, but the quality of some of the packaging and labelling was quite variable. That prompted us to update the process again and provide more guidelines — and to make the labelling easier, as you create a return, the tool generates a shipping label for you that you can just print out, stick on the box and send. We were receiving parts, and the backlog of returns was starting to build up. We needed a way of processing them more quickly, and that's where we came up with the idea of using QR codes — little 2D barcodes — so that when a part arrived with this QR code on the shipping label, we could just scan it and mark it as received. For further optimisation, we updated the site to be more mobile-friendly, meaning users could scan the codes there and then rather than having to revert to their laptops. We also did some site visits — got some of our teams going to the various sites to understand the requirements from their side. We found that Joe and Dan, two of our team, once you put them in a mask and a white coat, look almost identical. Then we needed to understand the causes and values of that scrap and return: why were these parts being marked as scrap or return? That's where, again, the power of linking information together came in. If I go into a particular scrappage, you can see it's been linked to a particular issue — I can click through into the issue management system and find more information about that issue. 
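The kind of linking just described — a scrap record that carries both a part number and an issue reference, so a user can hop between records, with scrappage netted out of available stock — can be sketched in a few lines. All of the record shapes, part numbers and field names below are invented for illustration; they are not the actual QR MRP data model.

```python
# Illustrative sketch only: hypothetical records linked by shared keys.
parts = {
    "P-100": {"description": "eighth-inch BSP brass filter", "stock": 250},
}

issues = {
    "ISS-42": {"part_no": "P-100", "summary": "new supply differs from golden part"},
}

scrap_log = [
    {"part_no": "P-100", "qty": 12, "issue_id": "ISS-42"},
]

def scrap_detail(record):
    """Follow the links from a scrap record to its part and its issue."""
    part = parts[record["part_no"]]
    issue = issues.get(record.get("issue_id"), {})
    return {
        "part": part["description"],
        "qty_scrapped": record["qty"],
        "linked_issue": issue.get("summary", "no linked issue"),
    }

def net_stock(part_no):
    """Available stock after scrappage, as fed into an MRP calculation."""
    scrapped = sum(r["qty"] for r in scrap_log if r["part_no"] == part_no)
    return parts[part_no]["stock"] - scrapped

detail = scrap_detail(scrap_log[0])
remaining = net_stock("P-100")  # 250 in stock minus 12 scrapped
```

The point is not the data structures themselves but that every record holds the key of the thing it relates to, so navigation and reporting both fall out of the same links.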
It's all about linking information together and getting insight from it. Off the back of that, in terms of reporting, we started trying to create some more innovative reports. This is a Sankey diagram as an example, which shows the flow of stock through the various stages — you can cut this either by quantity or by value. In the end, we processed a million parts through the scrap system. Hopefully that illustrates a bit about how we built the process and the programme together in parallel. I'll hand back to Ian now, who'll talk about our second case study, which is around issue management. --- IAN QUEST: Thank you, Nick. This case study is less about how we built it — which I hope Nick's given you a flavour of — and more about how this worked in practice, and how it helped. This is an example issue that popped up around a filter — a sort of eighth-inch BSP brass filter — at 9:36 that morning in Dagenham, where Ford were assembling the ventilator assembly. They found that the new supply of this filter that had come in was not the same as the one identified as the golden part. So that was raised in QR MRP — mobile-optimised, so they could take some photos on the phone and submit it as an issue. It then popped up immediately on the large touchscreen in the Penlon office in Abingdon. The supply chain team could have a look at it and hand it over to engineering with the datasheets that suggested these parts were equivalent. The engineers looked at it, confirmed, and raised the deviation; that deviation was then approved and communicated to Ford, who could continue their build. That all happened within 50 minutes. About an hour-and-a-half later, after the daily issue meeting, it was closed off — lessons learned, and put to bed. One of the things about this: it was such an intuitive system that we didn't need those meetings to drive issue resolution. Issues were automatically allocated to the hardware owners. 
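The lifecycle of that filter issue — raised on the line, reviewed by supply chain, assessed by engineering, a deviation approved, then closed after the daily review — can be modelled as a simple state machine. The states, transitions and the auto-allocated owner below are an illustrative simplification, not the actual workflow definition used in QR MRP.

```python
# Illustrative issue lifecycle: each state names the states it may move to.
ALLOWED = {
    "raised": {"supply_chain_review"},
    "supply_chain_review": {"engineering_assessment"},
    "engineering_assessment": {"deviation_approved"},
    "deviation_approved": {"closed"},
}

class Issue:
    def __init__(self, part_no, owner):
        self.part_no = part_no
        self.owner = owner          # hardware owner allocated automatically on raise
        self.state = "raised"
        self.history = ["raised"]   # audit trail of every state the issue passed through

    def advance(self, new_state):
        """Move to the next state, rejecting any transition the workflow forbids."""
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

# Walk one issue through the full lifecycle described in the filter example.
issue = Issue("P-100", owner="filters")
for step in ("supply_chain_review", "engineering_assessment",
             "deviation_approved", "closed"):
    issue.advance(step)
```

Encoding the allowed transitions explicitly is what lets a dashboard show, at a glance, exactly which team an issue is waiting on.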
And the reason the process worked like that was because of the way it evolved, with those people directly involved asking for the things that they needed. Again, under the time pressures we had, there wasn't an option to manage by committee or manage by meeting — it had to be intuitive and quick. Nick. --- NICK SOLLY: So, why was this any different to just using spreadsheets or using an out-of-the-box system? Firstly, it meant the data was maintained live. Meetings and interactions took place directly supported by the system — no one had to have their own spreadsheet version of the universe. We had those big touchscreen dashboards in the main offices to increase engagement and allow broadcast of important information. Secondly, users were an intrinsic part of defining and shaping the tool. If someone can see that, after suggesting a new feature, it appears live in the tool within hours, they feel really invested in using that tool and being an advocate for it. Thirdly, every feature was created to address a particular need. Because those features grew piece by piece, we didn't need to spend huge amounts of time training people or writing user guides — what they saw one day was only a few steps different to what they'd seen the previous day. Fourthly, data and information was available to all without barriers. There was access for everyone. We had instant and permissive licensing — people were enabled to do their jobs by providing the information they needed in the way they needed it. Next, we tried to build some personality into the tool. There were some funny Easter eggs that we created for people to find. Serious work doesn't need to be serious — humans are great — and having a bit of a coping mechanism when there's a lot of pressure helps. We also linked all the key data together, so that it was easy for people to interrogate, navigate and explore. And last of all, we had the ability to create the specific reporting and insights anyone needed, fast. 
Those reports were always aligned with the latest data, which was changing on an hourly basis. --- NICK SOLLY: What actually enables this approach? How would you actually do this? Well, first, it's about challenging yourself with audacious goals. With smart, motivated people and modern software development technologies, you can move incredibly quickly. We were able to push code through testing and introduction within minutes, and create and roll out processes within hours. Don't be afraid to shoot for something particularly big, especially if you can define a really clear objective around it. Secondly, and really crucially: build process teams who deeply understand the domain and the day-to-day. At QR_, for example, everyone — including all our software developers — has either previously been a PDM analyst or has spent time as one. This shortcuts the requirements-gathering process, as they already understand the relevant needs, challenges, subtleties, language and design patterns for the projects they're working on. We didn't have lengthy spec documents or iterations of wireframes — often a sketch or a quality conversation was enough. Thirdly, adopt an agile 80/20 approach. I mean agile with a small "a" here — not necessarily a formalised agile framework like Scrum or DSDM — but really using the core principles of the Agile Manifesto: you always need to be working on the most important things. Get data and tools out there earlier than you feel they're ready — that means you get really early feedback, and it helps drive completion of the data and the tools. Architect a really strong digital thread. If you've got to bring all these different sources of data together, identify your primary keys, your relevant fields, and the refresh schedule of that data, and work out how they all best fit together. Identify where your master data is. 
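The digital-thread advice — explicit primary keys, explicit ownership of fields, and not silently dropping records that don't match — can be sketched with a simple join of two hypothetical sources on a shared part number. The source names, fields and part numbers are invented for illustration; the technique is the point.

```python
# Illustrative join of two data sources on a shared primary key (part number).
master = {  # master part data: the owning source for descriptions
    "P-100": {"description": "brass filter"},
    "P-200": {"description": "flow sensor"},
}

supplier_feed = {  # supplier system: the owning source for lead times
    "P-100": {"lead_time_days": 5},
    "P-300": {"lead_time_days": 12},  # key missing from master: must be surfaced
}

def join_sources(master, feed):
    """Merge feed fields onto master rows by key; report unmatched feed keys."""
    merged, orphans = {}, []
    for part_no, data in master.items():
        row = dict(data)
        row.update(feed.get(part_no, {}))  # feed wins for the fields it owns
        merged[part_no] = row
    for part_no in feed:
        if part_no not in master:
            orphans.append(part_no)  # flagged for data cleanup, never silently dropped
    return merged, orphans

merged, orphans = join_sources(master, supplier_feed)
```

Surfacing the orphaned keys rather than discarding them is what keeps the thread trustworthy when the underlying feeds refresh on different schedules.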
Part of this is just about making a start — picking a specific problem you have and running hard at it with your best people. There's probably nothing in that list that's particularly new, but applying these things well really does take a specific focus. --- IAN QUEST: OK, so for the takeaways. Firstly, I think there's a really strong case for bringing the programme and process sides ever closer together — reduce that divide between the teams, and try to get process people deeply immersed in programmes, and programme people immersed in what we're doing with the processes. Secondly, the pop-up software approach we've described: it's not a gimmick. It is a real and practical thing that we can do, and there are many frameworks and tools that support it, which mean we can actually build real, usable, sustainable, supportable software in very short timescales. And thirdly, we should set our goals high. Whether or not we believe something to be possible, part of what forced us to think differently on this was just the size and speed of the challenge. That forces you into a different mindset where you have to think differently — you don't have to think about the 80% of things you won't use very often. It's focused very heavily on the current constraint, and really resolving that. I hope that's been interesting and helpful, and that there's something to take away from it. Now we'd just invite any questions. --- ROB FERRONE: Thanks Nick and Ian. I think we've done well with the time — we promised to be snappy with it — so we have got some time left if there are any questions. Just while we're waiting for a question to come in: a question to both yourself, Nick, and Ian — what would you say were the highlights and the most challenging parts of the project for you? 
IAN QUEST: I think for me, the highlight was that because the goal was so clear and compelling, and because we had lots of great people who came together, it was a restoration of a certain faith in humanity, in a way — all of those smart people got together, organised themselves, and jumped on the challenge in a really effective way. At the end there was this really quite joined-up team, with some very strong relationships between people that had only been formed during the project and under those project conditions. How little effort was required to manage and organise people was quite extraordinary. So that idea of setting the big challenge was really great. Getting people in, empowering them, and not trying to manage them to a solution — just seeing the power of that was a very big one for me. ROB FERRONE: OK Nick — and just before we come to you, a couple of questions have come in on the Q&A section. One question: how did you manage the changes to the processes and the software tools themselves? Was this managed from a separate system, or integrated within the same tool? NICK SOLLY: On the developer side, all the software was under version control, with a series of CI/CD-type processes that allowed us to go from written code to deployment into production. On the user side, we communicated our changes daily on the Microsoft Teams channel we had — just a very short post saying "right, here are the new things up today, have you seen this in the tool, this might help you". Plus, as you might have seen on the front page of the tool, we also had a list of release changes — again, for people to see the new features coming in. We also experimented with a side drawer on every page, which showed relevant short snippet videos for the particular page you're on, to direct people to the help and the changes as quickly as possible. ROB FERRONE: OK, thank you. 
Another question: how much has Penlon taken away learnings from this for their day-to-day business? IAN QUEST: I think they have — certainly, from conversations with them, very much so. They've taken some elements of this tool forward for the next run of production. They've got some international orders for the ESO2, which is great. From a lot of the conversations I've had with them, they've really got a lot from the experience, and from being connected with different people in different organisations. So yes, I'd say they definitely have taken a lot of the learnings away. ROB FERRONE: OK. One more question: Jan says, this looks brilliant, but how sustainable is this approach and solution long-term? Or why do we still need the monolithic enterprise systems approach? IAN QUEST: I would challenge the monolithic enterprise-system approach to some degree. There are core systems where it probably doesn't make sense to start redeveloping a whole ERP or PLM system or CAD system — but all of the things that we do to link those systems together, and to make them work for ourselves personally, I think we can engineer in this way. This software is continuing to be used, as I explained, and is very much supportable, so I think it is a sustainable way of building. If you're talking about those bigger building blocks where it's helpful for people to be familiar with them, it might make sense to purchase those base systems — but don't lean on them to do everything. Sometimes configuring those systems to do something specific you need is far more costly and difficult than engineering your own link or your own reporting approach, or whatever it may be. 
So I think it does have a place going forward, and I think people do need to think carefully: do you go for the big monolithic "one system does everything" approach, or is there a different way of tackling your real problems without introducing all the new ones that come with a new system? ROB FERRONE: Thanks Ian. One final question — and if there are any questions afterwards, reach out to us; we're happy to talk about this more and share any further insights. The question is: looking back at the project, is there anything else you would change? NICK SOLLY: It's quite a technical one from my side. Going into this, I knew the importance of breaking your code and your data model down into as many detachable pieces as possible — making things really self-contained. Even telling myself that when I started, I think I could still have done that more. There are still things that expanded in scope during the course of the project which I probably would have set up as something separate in the first place. But, good lessons learned. IAN QUEST: For me, looking back: what people were able to do, not just around this topic but around all of the topics, was always beyond what I had expected. So I think there are a number of areas where, if we went back, we would do more of our detailed work up front, not be daunted by it, and probably do a little bit more preparation. But the fundamental drive of tackling each constraint as it arrived — focusing on the immediate programme constraints, and leaving a trail of working process behind us — that fundamental approach did work quite well. But yes, there are lots and lots of things we'd do differently. And we'd install extra coffee machines, I think, as well. ROB FERRONE: Right, well, thanks very much guys. A final question asks if we're going to be developing a concern-management system to roll out in a production environment — and I think the answer is yes, isn't it Nick? NICK SOLLY: Yep. 
ROB FERRONE: OK, well, if you want to know more about that, then do reach out to us. Otherwise, thank you so much for attending — I hope that you got what you wanted from this webinar, and you found it interesting and thought-provoking. We look forward to speaking to you in the future. Thank you very much.