The Theory of Perfect People – part 2

I suggested in part 1 that imperfect people are normal and perfect people are a figment of our collective imagination.  This is a catchy idea, but I am sure the reality is that personal qualities exist on a scale that ranges (in theory at least) from entirely imperfect at one end to entirely perfect at the other.  A moment's contemplation makes it clear that to devise such a scale one must be able to define what makes up perfection.  A second moment makes us aware that this perfection scale must be a value judgement.  What is perfect decision-making in an organisational setting concerning routine operations without any immediate threat (a democratic and informed decision involving many people, for example) may be entirely imperfect in a military situation involving an immediate threat (which requires strong and experienced leadership making a decision based on incomplete information, and definitely not the forming of a committee).

What personal qualities are we looking at in making this judgement about perfection?  My inexpert (many books have been written on this) but practical view is that they are, or at least include, matters such as the influences listed below against each quality:

The way a person thinks … is influenced by:

  1. intentions (let us assume good intent, not criminal)
  2. knowledge and awareness
  3. decision-making practices: rule-, knowledge- or experience-based problem solving

I expect the way a person thinks is a complex outcome of prior conditioning, group norms (involving culture, beliefs and values) and external, possibly conflicting, influences.

The way a person acts … is influenced by:

  1. skill (taught and practised)
  2. habit (operating skilfully on 'autopilot')
  3. novelty requiring problem solving ('let's do it this way')

As above, this may be conformance to rules, or to a general approach that the individual, or a group of which they are part, judges to be best in the circumstances (one could say culture).  There may or may not be time to consider possible actions in the context of a multitude of complex operational rules.  Where there is no time, actions will be checked after the event with the luxury of both time and hindsight (and very often found wanting by the armchair sitter).

The time to act in operational situations (industrial and defence control room operators; spacecraft, aircraft and ship pilots; road vehicle and train drivers; and the like) may be very brief, and the decision may subsequently be found not to have been the best it could have been, or even to have contravened a rule somewhere, or a specified intent in a legislative rule.

Someone who, moments before the challenging event, would have been judged perfect (in the sense of properly trained, skilful, knowledgeable, experienced, licensed, of good intent and hence selected for the post) is post-hoc shown to have made a less than optimum judgement.  This may be so by definition: the law, for example, tends to define lack of safety as evidently having been the case when damage has occurred (safe means the absence of injury or damage, therefore unsafe means there has been injury or damage).  With performance-based (rather than prescriptive) regulations this is highly likely.  Or it may be so by comparison with another set of rules, probably those created within the organisation.  The more complex these rules, and the more their statement makes use of judgemental terms (such as 'safe'), the more likely this is to be found post-hoc.

For example, aircraft pilots flying under the Visual Flight Rules are required to maintain a perfect lookout and fly according to the see-and-avoid principle.  This is despite the fact that in many very predictable situations this is impossible, and it has been shown to be so both scientifically and practically.  Nevertheless, in the event of a mid-air collision the rule can be used to say that one or both pilots were negligent in not looking out "properly" (for this read perfectly).  I keep a record of my near-misses in the air and in eight out of eleven cases it was not physiologically possible to see and avoid – the avoidance came about purely because of the small size of aircraft and the large volume of air.

On whom does the responsibility rest to determine where prescriptive rules (e.g. 'isolate this plant by following these steps: 1, 2, 3') should be used and where reliance should be placed on knowledge, skill and experience?  Prescription of actions has an interface somewhere with actions based on skill, experience, knowledge and judgement, which is obviously a much larger and more complex field.  I have only ever seen this boundary determined in a conscious and well-informed manner in the aviation industry.  An example that immediately comes to mind is an industry with huge and powerful fixed mechanical equipment whose managers struggled for years to understand this boundary; eventually the site manager was required every morning to check the security of the bolts that held guards in place throughout the process.  Am I alone in thinking this is astonishing?  We see evidence of the boundary everywhere we see rules being enforced, and it is often visible because of the incongruities that arise, e.g. hard hats being required in an open field with no overhead work, or eye protection being required everywhere and at all times because one or two occasional activities in a small area of the plant require its use.

The assumption of perfection is not limited to those at the action end of operational decisions.  It exists also at much higher levels in the organisation, among the people responsible for creating the environment and the system within which work takes place.  These responsibilities are imposed from outside, by legislators.  Responsibilities exist in legislation for the designers of equipment too.  This set of assumptions about the possibility of perfection (of being responsible) needs a great deal of unpacking, something for another occasion.

I’d like now to look at this perfection matter in terms of reliability mathematics:

 The mathematical definition of reliability is:

reliability = 1 – probability of failure  

This may be restated for the present purposes as:

probability of a perfect result = 1 – probability of an imperfect result.

(As a small digression from the main point of this article,  I’m sure you’ll immediately recognise in this the basic origin of what Hollnagel calls “Safety II”.  In other words:  

“Safety II” = 1 – “Safety I”)

"Safety II is the system's ability to function as required under varying conditions, so that the number of intended and acceptable outcomes (in other words, everyday activities) is as high as possible."  (https://www.england.nhs.uk/signuptosafety/wp-content/uploads/sites/16/2015/10/safety-1-safety-2-whte-papr.pdf, seen 13/8/2020)

If you know one of these parameters the other is, of course, obvious.  You either focus on what contributes to failure or on what contributes to success.  The one is simply the obverse of the other, so you are looking at the same thing as long as you know how to define that thing, and a sensible person would look at both what worked and what did not work.  People on the whole love to be informed by accidents (who'd 'a thought it?), I think because it requires little effort (see Daniel Kahneman (2011) Thinking, Fast and Slow.  Doubleday Canada), and to define perfection by their absence.  In a probabilistic world this is obviously not wise: everything tends to appear perfect until the accident happens.  So how does one define perfection?  By looking at what could lead to an undesired and hence imperfect outcome.  The boundary with what it is not defines it.

A rational person (that's us, isn't it?) would rather focus on control measures, given that the 'accident' is capable of being recognised (before it happens) with a small thought experiment that need be no more complex than "what if?".  It can be more complex, but often does not have to be.  Just look for the large energy sources and begin there.  However, this is a very poorly developed skill.

Let’s look at this from the perspective of the imperfect/perfect person.    

probability of suitable intentions × probability that the required knowledge and awareness exist × probability that the correct decision was made × probability that the actions suited the decision = probability of a 'perfect' result.

It is interesting to see what the numbers look like.  If all input probabilities are pretty good (say 0.99), the probability of a perfect result as defined above is about 0.96 (assuming no dependence between these factors).  However, if the inputs are all 0.9 the end result is only about 0.66.  It says a lot about us, on the whole, that we manage to achieve such low probabilities of imperfect results and therefore such high probabilities of perfect results.  I suppose that is why we have survived as a species, so far…
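As a minimal sketch of that arithmetic (my own illustration rather than anything from the original newsletter piece, and assuming, as above, that the four factors are independent):

```python
# A sketch of the compound-probability arithmetic discussed above.
# Assumption (as in the text): the four factors are independent, so the
# probability of a 'perfect' result is simply the product of the inputs.
from math import prod

def probability_of_perfect_result(factors):
    """Multiply the individual probabilities together (independence assumed)."""
    return prod(factors)

# Four factors: suitable intentions, knowledge and awareness,
# correct decision, actions suited to the decision.
print(round(probability_of_perfect_result([0.99] * 4), 2))  # 0.96
print(round(probability_of_perfect_result([0.9] * 4), 2))   # 0.66
```

Any dependence between the factors would change the numbers, but the general point stands: several individually quite good probabilities multiply out to a noticeably poorer one.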

It follows, does it not, that when something bad does happen we could, instead of frowning at the evidence of 'abnormal imperfection', realise that here is a chance to learn how to improve our species' generally already good record of controlling such risks – this unwanted thing has just illuminated the boundary.  Instead we usually collectively choose to punish the newly discovered imperfect person, the one who showed us the boundary.  This unhelpful response is embedded in our legislators' minds and in the legal system they create and support, as well as in the mob psychology of some peoples.

Because it all involves value judgements, it is evident that it is impossible in any one situation to define unambiguously what perfect action means (other than saying nothing should go wrong) in the context of people (and, one may assume, organisations).  Before the unwanted event happens, the same behaviour could be regarded as excellent because it benefits productivity.  It follows that imperfect people and organisations are bound to be normal (there will always be a probability of an unwanted outcome).

Once something unwanted has happened there is a coalescing of values amongst affected or interested people, all of whom agree it should not have happened; an example is the recent spectacular explosion in Beirut.  A compelling similarity of judgement arises, along with the concomitant ability to define unambiguously what imperfect means.  But only after the Occurrence.  This psychological confluence does not exist before the unwanted thing occurs, and it is notoriously incapable of determining unambiguously what perfect means before the Occurrence.  Beirut, Space Shuttle Challenger, Deepwater Horizon, ad nauseam…

This nexus is at the heart of the way in which our society attempts to govern risk.  These post-hoc values need a place in which to coalesce, and this is the courtroom or the lynch/stoning gang, depending on the culture.  Linguistically we have many derogatory adjectives with which to describe the system or people on whom our judgement rests and whom we wish to blame – hasty, inattentive, ignorant, careless, thoughtless, inadequate, negligent and so on.

We are all socialised to believe that normally things go the way we want them to (a statistical fact) and that it is only when some abnormal person or organisation does something we don't like or don't want that things go wrong.  Evidence for this is to be found in Heinrich's judgement (he admits it was a judgement), and in the regrettably widespread belief today, nearly 100 years after his time, that 88% of accidents are caused by unsafe acts.  Using the mirror of "Safety II" we would say that all non-accidents are caused by safe acts – wow!  Another way of looking at this is that to better understand what limits safe acts it is useful to look at what promotes unsafe acts – once again simply the obverse of safe acts.

An understanding of the origins of our social perception of risk is to be found in the doctrines of employment in the United Kingdom during the early days of the industrial revolution.  These included the notion that employees accepted the risk (and hence the injury or disease that arose) of their employment by entering employment with a mill, foundry or wherever.  It was only later in the revolution that the law placed more responsibility on employers to control risks and took the burden off employees.  At this time the law began to require machines to be guarded and dust to be allayed at source – all engineering controls and therefore a history of interest to us risk engineers even though some or a lot of it was ignored or treated with disbelief by factory owners.  

Things were tough in those days.  Lord Robens writes of "…waste and the human tragedies that ensued…" and that in 1852 office workers were expected to display Godliness, cleanliness and punctuality, enjoyed reduced working hours of only 11 hours a day, held prayers and brought their own coal in cold weather, amongst nine other conditions of a similar nature.

See Lord Robens (1970) Human Engineering, Jonathan Cape Ltd, London, page 52.

However, things were not all bad: they could be provided with a pen sharpener (they had to supply their own pens) if they made suitable application to the right person.  In Weindling (Weindling, P. (ed.) (1985) The Social History of Occupational Health. Beckenham: Croom Helm) is to be found a story of the disbelief of Welsh slate quarry owners that they should be required to provide respiratory protection to workers – 'fantastical', they said.  The notion that they should comply with a long-standing (decades-old) regulation that required them to allay dust at source (an engineering control) was greeted with shaking heads and rolling eyes.

Of course, this is astonishing and fascinating today, but it is interesting too as it illustrates the social trajectory of our laws.  From the beginnings of the general duty of care (which we all have towards one another) and its uncomfortable juxtaposition with the medieval relationship between landholders and serfs (requiring stern authority and cap-doffing), it should be a matter of pride to us that the law upheld what was right: decent behaviour towards one another, especially when one party (the factory owner) had more power than the other (the worker).  Practicality and decency (it makes more sense for a factory owner to control risk than to expose the worker to uncontrolled risks) replaced stern authority.  That our current employment laws are children of these times is evident in current efforts to redefine the factory owner.  Owners are now, in many cases, a multitude of investors, so enter the rather awkward "person conducting a business or undertaking" (PCBU) in some states and the simple 'employer' elsewhere.  Who, me?  The people at the top of the tree and in the crosshairs are defined by the wording of the relevant act.  This is an inevitable outcome of the complexity of the modern world.

In the tragic Dreamworld Occurrence, the inquest (https://www.claytonutz.com/knowledge/2020/august/the-dreamworld-tragedy-the-coroners-findings-the-prosecution-and-lessons-learnt?, seen 14/8/2020) was concerned about "frighteningly unsophisticated" safety systems, "unqualified staff" and the "absence of holistic risk assessments".  In this case the lawyers in my source say that the "officers" (being the name used to identify those in the crosshairs) are "uniquely positioned to influence the behaviour and culture of an organisation".  The coroner got it right in my view (but almost certainly unwittingly) in talking of "a systemic failure to ensure the safety of patrons and staff".  To Rowe (Rowe, W.D. (1977) An Anatomy of Risk, New York, John Wiley & Sons) and to me, systemic means something a lot broader than just Dreamworld.  Further, the coroner said "such a culpable culture can exist only when leadership from the board down are careless in respect of safety".  There are those old adjectives: culpable ("deserving of blame" – Oxford English Dictionary) and careless.  So here, in this most recent and tragic of cases to afflict our nation, we have the people who failed to be perfect now identified and blamed – the board.

In the time before the modern consultative post-Robens era, the essence of the common law expectation was summarised by the relevant Government department of labour (in Victoria I believe) as:

  1. A danger cannot be made safe by relying on the good behaviour of an attentive worker.

  2. Both the likely and the unlikely actions of workers have to be taken into account when devising barriers against dangers.

  3. The provision of hardware barriers is required whether or not such barriers are commercially practicable or mechanically feasible. While the legal meaning of 'reasonable precautions' is strict, in practice the level of risk which remains after the provision of barriers of various sorts is subject to commercial and mechanical feasibility and may be made respectable by publication of acceptable designs by government departments, standards bodies, professional bodies or even trade groups.

  4. With regard to operator behaviour, dangers are disregarded only if they are the result of deliberate action or action that could not be reasonably anticipated. Responsibility with regard to 'reasonable' is satisfied when the only danger remaining is due to the unlikely and unforeseeable actions of the incalculable individual. In essence this means that when reliance is placed on a worker for the safe operation of the system then the demands on the worker should be realistically assessed in terms of the worker's response to all the stressors of life (for example hangovers, lack of sleep) and work (for example piecework rates, peer group pressures).

 I regret I am unable to substantiate this, despite attempts to do so.  I’d be delighted if someone reading this could either direct me to the source (it must be in an archive somewhere) or tell me this is nonsense and a figment of my imagination.  I don’t think the latter is likely.  These four points are clearly directed at engineers and managers of industry and they clearly say that one should not expect a perfect person to be in the workforce.  Should we nevertheless assume that the board and senior managers will be perfect?  I don’t think so.

Engineers have in the past approached this whole subject with care and consideration.  Witness the wonderfully simple and helpful summary by Bahr (Nicholas J. Bahr (1997) System Safety Engineering and Risk Assessment: A Practical Approach, p. 152.  Taylor and Francis, Philadelphia, USA).  We need to:

  1. Understand how people act and react (human factors)
  2. Design equipment to help them do their jobs better not worse (ergonomics)
  3. Understand how to make the whole system more reliable in spite of the human element (human reliability)

Applying this instead to the board and senior managers, we could paraphrase Bahr's summary as:

  1. Understand how boards and senior managers act and react (management factors)
  2. Design a system within which they can do their jobs better not worse (systemic risk management)
  3. Understand how this system can be improved over time (process improvement of systemic risk management) 

So why would we expect a perfect person to be in management, or even on the board?  Why would we expect a board member to understand the intricacies and minutiae of technology that are often to be found at the origin of something serious?   Why would Government expect them to do other than rely on the qualifications and experience of the people they employ to manage the hardware and operate the system in accordance with good practice? 

Why indeed?  We’ve been doing this for a very long time yet each year we uncover (or think we do) more imperfect people.

(This article was first published in the Australian Risk Engineering Society newsletter in
