In 1979 the first students were admitted into a postgraduate course entitled Occupational Hazard Management, run by the then Ballarat College of Advanced Education in Ballarat, Victoria, Australia (now Federation University). I had been given the job of designing and running the course in 1976. After developing the course, and in my new role as course co-ordinator, it fell to me to identify lecturers who could contribute to its delivery. This was relatively easy in subject areas such as law, statistics, toxicology and the like, but someone had to develop content (suited to tertiary education, in a field in which it was clearly lacking) that captured the essence of the subject, and as co-ordinator I felt obliged to take on this role. The two units I chose to satisfy this need were called Accident Phenomenology and Risk Philosophy.
Risk Philosophy was offered for the first time in the second semester of 1979. I spread the lectures over the three weeks of a residential period, presenting material that I had structured around the then well-known risk management steps of identification, assessment, evaluation and control. I had located some interesting approaches to what one might loosely call risk assessment in the work of Kinney and Wiruth (Kinney, G. F. & Wiruth, A. D. 1976, Practical Risk Analysis for Safety Management, Naval Weapons Center, California, USA), which you may know as the risk assessment nomogram. There was one other, a much simpler two-variable cross tabulation that had been published in a similar period by (I think) Steele. If I am correct, this was subsequently republished as:
Steel, C. 1990, ‘Risk estimation’, The Safety and Health Practitioner, June, pp. 20–21.
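For readers unfamiliar with the Kinney and Wiruth approach, it combines three rated factors multiplicatively into a single risk score. A minimal sketch follows; the numeric ratings and action-band thresholds are illustrative placeholders, not the published scales, so treat this as the shape of the method rather than the method itself.

```python
# Sketch of a Kinney & Wiruth style risk score:
#   Risk Score = Likelihood x Exposure x Consequence
# The example ratings and the action bands below are
# illustrative only, not the published tables.

def risk_score(likelihood, exposure, consequence):
    """Multiply the three ratings to give a single score."""
    return likelihood * exposure * consequence

def action_band(score):
    """Map a score to an (illustrative) action band."""
    if score >= 400:
        return "very high - consider discontinuing the activity"
    if score >= 200:
        return "high - immediate correction required"
    if score >= 70:
        return "substantial - correction needed"
    if score >= 20:
        return "possible - attention indicated"
    return "low - perhaps acceptable"

score = risk_score(likelihood=6, exposure=6, consequence=15)
print(score, action_band(score))
```

The nomogram itself is simply a graphical way of performing this multiplication and reading off the band, so that no arithmetic is needed in the field.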
The 1970s was a period in which such ideas flourished in the USA, but they had been ignored in Australia and, as far as I could tell in that pre-www period, everywhere else.
In the last lecture of that residential period I had got to the point of explaining the three parameters on which an objective understanding of Risk (and the Risk Diagram) could be based, and then showed how two of them, Frequency and Consequence Value, could be expressed as a cross tabulation, as indeed Steele (if it was him) had done. There on the blackboard (or was it the overhead projector screen?) was the two-variable cross tabulation, and I could tell from their body language that the students in the class were enthralled by it. I distinctly remember looking at the clock and the remaining two minutes and hesitating: should I give them the “BUT” now, or wait until next semester’s residential course, a full six months away? My BUT was based on the fact that the tool was mostly irrelevant to the needs of risk assessment. To my everlasting regret, I decided to wait.
When the students returned six months later, in the first minute of the first lecture on Risk Philosophy two students proudly told me how on their return to work they had taken immediate action on the cross tabulation (the risk matrix) and so promoted it in their jobs with the mining industry regulator in the State of Queensland, Australia, that its use was now required of mine managers in the new mining regulations that had just been released. I was aghast, and probably managed a weak smile before turning to begin the lecture, reminding the group of what we had finished with six months earlier and then launching into “….. BUT…..”, occasionally glancing at the faces of the enthusiastic duo to see their reaction.
These days we’d say the Risk Matrix idea went internationally viral from that moment on – regulators love to see what others are doing and to do the same.
The term ‘matrix’ gives the idea rather too much credit, I think. In mathematics a matrix has a specific meaning, loosely as a shorthand aid to certain types of computation, and in that role it has a character not to be found in a simple cross tabulation of variables. But then, people do like to embellish simple ideas with complex names. If anything, the Risk Matrix is just a means of showing the Risk Diagram space using ordinal or nominal scales.
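To make concrete just how little the tool actually computes, a cross tabulation of this kind can be sketched as a plain lookup table. The category labels and the rating assigned to each cell below are invented for illustration; any real tool defines its own.

```python
# A risk matrix is just a lookup table over two ordinal scales:
# one for frequency, one for consequence value. Nothing is
# computed; each cell's rating is assigned by convention.
# All labels and cell ratings here are illustrative only.

FREQUENCIES = ["rare", "occasional", "frequent"]
CONSEQUENCES = ["minor", "major", "severe"]

# Rows: frequency; columns: consequence value.
CELLS = [
    ["low",    "low",    "medium"],   # rare
    ["low",    "medium", "high"],     # occasional
    ["medium", "high",   "high"],     # frequent
]

def rate(frequency: str, consequence: str) -> str:
    """Look up the cell for a (frequency, consequence) pair."""
    i = FREQUENCIES.index(frequency)
    j = CONSEQUENCES.index(consequence)
    return CELLS[i][j]

print(rate("frequent", "minor"))   # -> medium
```

The cell values are ordinal labels assigned by fiat, not measurements of anything; which is part of why the term ‘matrix’ flatters the tool.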
With the genie out of the bottle, nothing was going to put it back in quickly.
In 2000, two colleagues (Dr David Borys and Dr Jack Harvey) and I tested the responses of a group of 35 students at Ballarat to a number of risk matrix tools, as much to see how such testing might be done as to definitively answer the question ‘do they work?’. We wrote this work up in a paper (available on request):
Viner, D., Harvey, J. & Borys, D., An Evaluation of Risk Assessment Methods, VIOSH Australia, University of Ballarat, Ballarat, Victoria, Australia.
The statistical analysis by Dr Harvey answered the question of whether these tools give a consistent result with a very clear “NO”. The paper was submitted for publication but was rejected for reasons that appeared to be based on dislike of the message rather than on any lack of intrinsic value in the method by which such tools could be evaluated. None of us authors could be bothered to respond to the critique. Not put off by this, I presented the results at an Australian conference in 2002 and received strong criticism from the other presenters on the panel, at least one of whom had a vested commercial interest in the risk matrix. It is most unusual for presenters to attack each other! At the end of the session a woman from the large audience approached me to thank me for saying what most of them knew: that the tool did not work in practice.
As an end note to this particular episode, I have recently (2016) heard that the panel member who objected most strongly to my paper has since retracted his views and now agrees. It is more common now for people to express these views openly, formed as they are by simple observation of the tool in use. I hope we are seeing the end of it. A thirty-something-year run for an ineffective tool is a long time! Genie, get back in!
One reason for my contention that the Risk Matrix should be discarded is that it is the wrong answer to the question. In another post (Chapter 11) I have also said it is the wrong answer to the wrong question, but the reason for it being the wrong question (that is, largely irrelevant) is another matter.
I have commented on the risk matrix and its use in an interview with SafeWork Australia:
I can vividly remember those VIOSH days and thought it was all blue-sky thinking until it came to putting the concepts into practice.
The course title and its structure were extremely well articulated. Law and legislation was the first topic, and it sorted out all the bush lawyers. The ineffectiveness of legislation to expedite change, whether prescriptive or performance-based, was soon realised.
Then we were introduced to systems thinking, energy damage and risk control and the dark ominous fog clouds came rolling in. This was followed by descriptive and inferential statistics and hypothesis testing.
However, almost 20 years later, after applying much of what I learnt, it is all so simple and clear; but the plethora of soft-systems change-management processes has generated obscurantism and agnotology, and complicated a relatively simple subject.
Much of your frustration is reiterated by Jean Cross and Ross Trethewy in the SIA OHS BOK Chapter 31 Risk, p. 34:
Cross and Trethewy (2002) sum the issue up as follows: “Current practice in risk assessment is highly unreliable…. a simple qualitative description of magnitude of risk does not perform the function (of requiring managers to understand and take responsibility for the risks in their workplace)… Legislation requires employers to eliminate hazards and minimise all risks to health and safety. Ranking risks is an administrative convenience to allow a sensible consideration of where to start when a range of actions are required, but it has become the core of OHS risk management activity….”
The purpose of a ranking tool is to draw attention to the most important risks and to risks that might need more detailed analysis. Ranking is a starting point for analysis not the end result.