Analytical 26: Risk

20 April, 2018


Risk assessments are a critical part of engineering, but overly sensitive ones can paralyse an organisation. We look at how to make risk assessments useful and how to avoid some of the pitfalls.

Transcript available
[Music] Everything can be improved, iterated and refined. And if you don't think that's true, maybe you haven't analysed it enough. Calculated choices, carefully considered, absolutely analytical. This episode is sponsored by Makers for Good and their impressive Helio solar-powered light, flashlight and power bank, perfect for camping, hiking, emergencies, as a nightlight, or for wherever your adventures may take you. We'll talk more about them during the show. Analytical is part of the Engineered Network, and to support our shows, including this one, head over to our Patreon page, and for other great shows visit engineered.network today.

Risk. In recent times, I was involved in a risk assessment as part of my job. The assessment was supposed to highlight, funnily enough, risks for upcoming works to be executed. And it was, in some regards, to be honest, a bit of a sham. And it's been bothering me. Quite a bit, actually. The dictionary definition of risk is a probability or threat of damage, injury, liability, loss or any other negative occurrence that is caused by external or internal vulnerabilities, and that may be avoided through preemptive action. Bit long-winded, but it kind of says it. So from that we conclude that if you can't preemptively avoid it, it's not a risk. But we also conclude it is driven by probability. What I've learned about risk perception is that it's more often driven by fear than by fact. And I suppose the issue is that facts about the future aren't really facts, they're predictions, because you can't tell what's going to happen in the future. So the trick, it seems, is balancing real future risks against unlikely or worst-case scenarios. It comes back to being honest about the potential outcome, as well as being honest in your assessment of the likelihood that that outcome will actually occur.
At most companies that try to be quantitative about these things, they'll pull together a table or a matrix that tries to provide bounds on each of the different aspects that they want you to consider. Now, I'm not going to list all the ones that are possible, but here are some common ones that I've come across, starting with extent of injuries. They could range from as little as minor, non-hospitalised scratches that need a band-aid to fix them, all the way up to multiple fatalities. Another one is cost of company damage: it could be $5 to many, many millions of dollars or more. Another one's legal damages, the same kind of thing as company damages, but how badly you get sued by someone else. And another common one is reputation. So something happens, and the impact is that there's a footnote on page 100 of the Saturday paper, if anyone reads the paper anymore, or it's buried on a website on a Thursday and hardly anyone knew about it, all the way up to an international incident where everybody knows the company name, or worse still, the company name and your name. No matter what your metrics are, though, it's still down to humans to try to estimate the likely scale and the likelihood of the outcome occurring. And there may not be an equivalent scenario you can draw from, like "when the train derailed last week, three people were hurt because of the exact same risk that we're trying to mitigate". When there isn't something that you can reasonably draw a parallel from, you have to come up with your own specific scenarios. And it's at this moment that opinion comes in and the blend of people in the risk review comes into play. And it all becomes about dreaming up a scenario that those people in the room can agree on. Before we go any further, I'd like to talk about our sponsor for this episode, and that's Makers For Good, formerly Exosensory Devices.
They're an innovative company based in Palo Alto, California, and they've recently released the Helio, a solar-powered lantern light, flashlight and power bank. The Helio has an intensely bright flashlight at 150 lumens, but if that's too bright or you want longevity, it has medium and low light settings as well. The same goes for its lantern light, but as a bonus, the lantern also provides a red light as well as a white light. Also, there's an emergency flashing mode in case you need to signal for help. The Helio takes 17 hours of full sunlight to fully charge from flat, and yes, it would take a couple of days to charge, but after that you can get 15 hours of lantern or flashlight at full brightness. At 5,200mAh, a fully charged Helio can recharge an iPhone X 1.5 times from flat. Now, if that isn't impressive enough, I mentioned low light before: if you're using the medium light output, you'll get 5 continuous days' worth of light, and in low light you'll get a whopping 1 month of light without needing to recharge at all. If solar isn't available, it can be charged from a standard USB port from flat in 6 hours using a standard micro USB connector if you need to. The Helio comes in a variety of colours, Redwoods, Moonrock Grey and Adventure Green, and all models come with a convenient flip-out stand and hook that can be used to either suspend the Helio from the hook or to stand it upright on any flat surface. It also comes with a convenient lanyard for carrying, and 3 low-power LEDs indicate the overall charging level. All the ports are protected by a tight water-resistant cover, and the stand is all metal with a solid metal ratchet mechanism that holds it in place. The first thing that strikes you about the Helio when you hold it is how solid and strong it feels. It's made from a high-strength polycarbonate case that's impact resistant, and the unit weighs in at 370g (13oz).
All of that, and it's not much bigger than the average-size Maglite, measuring 8 inches long, 2.5 wide and just over an inch thick. For a solid product like this, it's not built down to a price but rather to its performance, which speaks for itself. The Helio is only $89.95. But there's something different we really need to mention. Makers for Good have a non-profit arm, and as part of that, all profits from sales of the Helio are used to support non-profit organisations through their ShareLight program. Their ultimate goal is to help bring renewable and safer light and energy to parts of the world still reliant on kerosene and candles, in a package that's just as at home anywhere in the world. So if you'd like to check one out, just head over to makers4good.com/engineered, that's makers, the number 4, good, all one word, slash engineered, to learn more, and enter the coupon code ENGINEERED for 20% off your Helio in your choice of colour, and shipping is free anywhere in the continental United States. Thank you to MAKERS4GOOD for sponsoring the Engineered Network. So my observation is that if there are people in that room doing the risk review that want to execute the work, and no one else cares too much about it or is too concerned about it, then they'll push a very low-risk scenario. Alternatively, if there's anyone in the room with an objection to the work being executed, they'll come up with as extreme a scenario as they possibly can with the most dire consequences, and very often there's an inflation of risk as it goes around the room, as other people pile on and push the scenario beyond what is actually plausible. What I've witnessed is that people in risk reviews see more risks than those same people do outside of the risk review. It's similar to asking an optometrist why you've got a headache, and they say you need a new pair of glasses. It doesn't matter that you've got a blood clot behind your eye.
As you walk out of the optometrist with a bright, shiny new set of glasses, you fall over dead from a stroke. We see what we expect to see, to an extent. In the case of risk reviews, we tend to see risks and inflate them out of proportion, because we're going through the process of performing a risk review and we tell ourselves that that's our job, to find risks. So we need to, even if they're unlikely. Now, the funny thing is that in risk-averse organisations, this is actually seen as a good thing, despite the fact that it makes companies less agile, less responsive, and ultimately less profitable, meaning being overly risk-averse will eventually put you out of business. So is it actually possible to have a genuinely objective risk review? I'm not entirely convinced that it is, because it's about predicting the future. And in many ways, and many lines of work, equivalent scenarios can be difficult to find. Or if you can find them, they're just subtly different enough that they don't apply. But you could probably try a few things, and I think they would improve the quality of risk reviews. So, let's have a look at a few ideas. Test the stated consequences. By that I mean, has the consequence ever happened anywhere else before, either within the company, in anyone's experience in the room, or anywhere in the world that we're aware of? And are the circumstances the same? Now, if it's similar, that's fine. But what are the differences? If we take this action that's proposed to mitigate this risk, and we go forward or backwards in time, does that change the consequences? Because sometimes it can be time-specific. Do we have confidence that the outcome that was stated was actually accurate? Be specific about the outcome. A vague "some people might be injured last time" could, on investigation, be more like "last time, one person cut their finger". Be specific. So, that is testing the consequences that are stated. The next thing to test is the probability.
If the outcome that was stated in this scenario has occurred, how many times has it occurred? And it's not just how many times: it needs to be normalised, weighed against how many instances exist where it could have occurred. Normalising probability is critical. Think about if there were 100 cyclists killed on the road last year versus 250 vehicle drivers. Well, that's terrible, and the raw counts suggest driving is the more dangerous activity. But when you factor in that motorists drove 1 million miles versus cyclists riding only 100,000 miles, that completely changes what those numbers tell you: per mile travelled, cycling comes out four times riskier. So it's important to test the probability of that outcome. The next thing to test is the applicability. It's okay to highlight a risk that can be mitigated by other factors. It might not apply here, since the consequence that happened somewhere else involved different people with different training and different backgrounds. Does it really apply here if you have rigorous training programs, if you have an actual training simulator, if you regularly test and retest people? When it happened in that other place, they had no testing and no training. Is it really applicable here, or do you have sufficient controls in place already? The next thing: only let qualified people into the room, specifically to determine the cost outcomes. I have seen people say, "It's gonna cost us a million dollars a nanosecond." Okay, maybe not a nanosecond, but certainly lots of money if the thing we're trying to mitigate the risk for were to happen. Make sure that people can stand behind their cost estimates. A lawyer should be evaluating legal costs, not an engineer. And an engineer should be evaluating damage to process and mechanical equipment, not a lawyer. Funnily enough, speaking of lawyers, the next point: try to think like a lawyer.
And by that I mean, you know, what's the expression, would that stand up in court? Now, obviously that assumes that you've been in a courtroom and you understand how courts work, but broadly speaking, I think enough people have watched shows about the law, Law & Order and the like, or just how the law works. And it all comes down to thinking in that way, a methodical way, taking emotion out of it as much as possible. Make the person that raises a risk provide both the pros and the cons of the mitigation they're proposing. And remember that a risk review, on the whole, isn't a brainstorming session. It needs to be balanced, and it cannot be extremist. If you feel it is being too extreme, then it's not a genuine risk review. Don't be afraid to challenge the unlikely risks with the "so what" test. Someone says to you, well, if we don't do this, something else will happen. So, so what? No one's going to get hurt. It's not going to cost us much money. That's fine. Just ignore it. The "so what" test kills off a lot of stupid. Every person in that room should be open to having what they're saying challenged. Remember again, it's not brainstorming, not entirely. It needs to be balanced, and everyone needs to be able to be challenged. So risk reviews that dig up fake and fantasy risks, really, they're not useful, they're not valuable to anybody. Risk reviews that are thorough, honest and realistic, though, that's what we should be aiming for. It's not easy, but it's really important, if you're going to do a risk review, that you try your best to get it right. Doesn't sound too risky? Nah, didn't think so. It's low risk. Move on. If you're enjoying Analytical and want to support the show, you can, like some of our backers, Chris Stone and Carsten Hansen. They and many others are patrons of the show via Patreon, and you can find it at patreon.com/johnchidgey, all one word.
Patron rewards include a named thank you on the website, a named thank you at the end of episodes, access to pages of raw show notes, as well as ad-free, high-quality releases of every episode. So if you'd like to contribute something, anything at all, there's lots of great rewards, and beyond that, it's all very much appreciated. I'd like to thank Makers for Good for sponsoring the Engineered Network. Visit makers4good.com/engineered for more information about their impressive Helio solar-powered light, flashlight and power bank, and use the coupon code ENGINEERED for 20% off, exclusively for Engineered Network listeners. Analytical is part of the Engineered Network. You can find it at engineered.network, and you can follow me on Mastodon at [email protected], or the network on Twitter at engineered_net. Question everything. It's always a good time to analyse something. I'm John Chidgey, thanks so much for listening. [Music]
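As an aside on the consequence/likelihood matrix discussed in the episode: such matrices are often reduced to a simple score lookup, multiplying a likelihood band by a consequence band and bucketing the result. The category names and thresholds below are invented purely for illustration; real organisations define their own scales.

```python
# Illustrative risk-matrix sketch: score = likelihood band x consequence band,
# bucketed into a rating. All labels and thresholds here are made up.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_rating(likelihood: str, consequence: str) -> str:
    """Combine the two bands into a single qualitative rating."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# A rare but catastrophic scenario scores 1 x 5 = 5, landing in "medium" --
# the bucketing is what stops worst-case dreaming from automatically dominating.
print(risk_rating("rare", "catastrophic"))   # -> medium
print(risk_rating("likely", "minor"))        # -> high
```

Note how the thresholds, not the worst imaginable consequence, decide the rating; that is exactly the discipline the episode argues inflated scenarios tend to defeat.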
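The episode's point about normalising probability can be made concrete with its own cyclist-versus-motorist figures (which are illustrative numbers, not real statistics):

```python
# Normalising raw fatality counts by exposure, using the episode's
# illustrative figures: raw counts favour cycling, per-mile rates do not.

def rate_per_mile(fatalities: int, miles: int) -> float:
    """Fatalities per mile travelled."""
    return fatalities / miles

cyclist_rate = rate_per_mile(100, 100_000)     # 100 deaths over 100,000 miles
motorist_rate = rate_per_mile(250, 1_000_000)  # 250 deaths over 1,000,000 miles

print(f"Cyclists:  {cyclist_rate:.5f} deaths/mile")   # 0.00100
print(f"Motorists: {motorist_rate:.5f} deaths/mile")  # 0.00025
print(f"Cycling is {cyclist_rate / motorist_rate:.0f}x riskier per mile")  # 4x
```

The same normalisation applies in a risk review: "it has happened three times" means little until you divide by how many opportunities there were for it to happen.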
Duration: 17 minutes and 7 seconds. Direct Download
Episode Sponsor:
Makers4Good: Makers4Good are an innovative company based in Palo Alto, California and they’ve recently released the HELIO: A Solar Light, Torch and Powerbank that’s perfect for camping, hiking, emergencies, gazebos, use as a night light or wherever your adventures might happen to take you. All profits from the sale of the HELIO go to providing light and power for those in need. Visit makers4good.com/engineered and use the Coupon Code ENGINEERED for 20% off the total price of your order.

Show Notes

Premium supporters have access to high-quality, early-released episodes with a full back-catalogue of previous episodes

People


John Chidgey


John is an Electrical, Instrumentation and Control Systems Engineer, software developer, podcaster, vocal actor and runs TechDistortion and the Engineered Network. John is a Chartered Professional Engineer in both Electrical Engineering and Information, Telecommunications and Electronics Engineering (ITEE) and a semi-regular conference speaker.

John has produced and appeared on many podcasts including Pragmatic and Causality and is available for hire for Vocal Acting or advertising. He has experience and interest in HMI Design, Alarm Management, Cyber-security and Root Cause Analysis.

You can find him on the Fediverse and on Twitter.