When you compare people who are unfamiliar with a certain task to those who are experienced at it, you might think that the unfamiliar ones make more errors. However, if you are new to a task or a procedure, you are often carefully supervised by an experienced user, and you are also much more aware of the task at hand, perhaps even following a manual. This greatly reduces the risk of you making an error or a mistake.
In contrast, no one supervises an experienced user, and they are often so familiar with the task that they do not need to constantly concentrate on everything they do; instead, their mind can wander off. This means that, perhaps counterintuitively, experienced people are often more susceptible to errors.
This is in line with my experience. As a task becomes more and more routine in my research, I become more and more susceptible to small mistakes, and in hindsight it feels silly that I was even able to make such a stupid error.
In my case this means that I must spend extra time in the lab, but when you are talking about, for example, big airline companies, even a small mistake can lead to an accident that might cost people their lives. This is why it is important to understand why errors happen and how we can avoid them, and it was the topic of the second DSII course lecture, “Human Factors: Why we screw up - learning from aviation and experience” by Stephen Wright.
As pointed out in the lecture, no matter how careful you are, humans make mistakes. Thus, we should not begin a witch hunt to find and punish people for every error they make. In the worst-case scenario, this leads to a toxic work environment where people hide their mistakes, which causes problems for the company in the long run.
Instead, we should find the underlying reasons for the errors and mistakes and try to address those. This is what error management aims to do. A crucial part of its success is delivering the message effectively.
A bad example given in the lecture is how British Nuclear Fuels Limited (BNFL) has documented the Quality Assurance (QA) procedures for handling reprocessed fuels in the lab. The QA documentation started out very simple, but every time a problem occurred in the lab's history, a supplement was added to it.
This has led to a complex QA document that now contains over 100 pages worth of procedures to follow, making it totally impractical and basically useless. Yet people are still expected to read and follow everything mentioned in it. This sounds like failed communication between management and the people working in the lab.
How, then, should one do error management? This is obviously not an easy question to answer. One thing that comes to mind is that instead of trying to address every possible scenario, one should concentrate on so-called latent failures: failures that are not spotted immediately, which makes them the most dangerous type of error.
In contrast, active failures are errors that are immediately identified (e.g. snapping the head off a screw). Various models exist here, but I found the Swiss Cheese Model by Prof. James Reason the most intuitive and useful. The model consists of layers of defenses, each containing holes (hence the name Swiss Cheese) that represent the ways that defense can fail. If the holes in the layers line up, a hazard breaches all of the defenses, which leads to an incident or accident.
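The lecture did not go into the numbers, but the intuition behind the model can be made concrete with a small back-of-the-envelope sketch of my own (the failure probabilities below are made up purely for illustration): if the layers fail independently, the chance of all the holes lining up is the product of the individual failure probabilities, which shrinks very quickly with each added layer.

```python
import random

# Toy illustration of the Swiss Cheese Model (hypothetical numbers):
# each defense layer independently "has a hole" with some probability,
# and an accident only happens when every layer fails at once.
LAYER_FAILURE_PROBABILITIES = [0.05, 0.10, 0.02, 0.08]  # assumed values

def hazard_becomes_accident() -> bool:
    """Return True if the holes in all defense layers line up."""
    return all(random.random() < p for p in LAYER_FAILURE_PROBABILITIES)

trials = 1_000_000
accidents = sum(hazard_becomes_accident() for _ in range(trials))
print(f"Accidents per million hazards: {accidents}")

# Independent layers multiply: 0.05 * 0.10 * 0.02 * 0.08 = 8e-6,
# i.e. roughly 8 accidents per million hazards, even though each
# individual layer fails fairly often on its own.
```

Of course, real defenses are rarely fully independent, which is exactly why latent failures are so dangerous: they quietly punch aligned holes through several layers at once.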
- Samu-Pekka Ojanen, Doctoral Researcher