Examples and discussion on Class A control measures in the EDM

Table 7.1 defines a number of different categories of control measures of the Class A type (controls over the process leading to damage) in the Energy Damage Model. These categories are concerned with the rationale for the choice of control measure. Obviously, control measures can be associated with the physical manifestation of the system, with the administrative processes within which the system operates, or with the behaviour of the people who interact with the system.

No examples are given in the text for type EC1 controls to eliminate the energy.

While eliminating the energy is desirable in principle, it is generally the energy source and form that make the function of a system possible. It is hard to imagine eliminating the energy while retaining the function. Examples include:

Passive controls:

  1. Don’t mine and concentrate uranium (or other toxic materials) – this is the example Haddon gave in his 1973 paper Energy Damage and the Ten Countermeasure Strategies.
  2. Use non-toxic materials for functions such as decaffeination of tea or coffee, water reticulation (lead pipes were once common), food storage (glass containers), etc.
  3. Perform assembly and surface coating work on steel structures when they are at ground level.

Active controls: 

  1. Shut down power sources if a Recipient approaches a damaging energy space, or only allow access when the system is de-energised.
  2. Removal of mosquito (or rat, etc.) breeding grounds so that fewer disease carriers exist.

I thought it would be helpful to readers if I provided a longer discussion on Event Mechanism Controls.

EVENT MECHANISM CONTROLS

Type MC1 controls – Reduce the rate of deterioration of the physical properties of the system that would enable an Event to occur

Under common law, the physical properties of a system that affect the likelihood of an Event create the “safe place of work”. The standards of physical design determine whether the standard of care has been satisfied. Whatever standard is actually achieved (whether or not, in someone’s view, it satisfies the required standard of care), there is a need to maintain it at that level, and that is what MC1 is about.

For example, in one country I visited, much work was done at height to complete the in-place welding of steel structural sections of a building. Welders had created small baskets to support them as they worked, using small offcuts of concrete reinforcing rods. These baskets had scrap wood for a base. In no other country I am aware of would such devices be regarded as acceptable, but in this country they were the common “standard of care”. Such baskets therefore needed a standard to state acceptable dimensions, the method of fastening to steel beams, the securing of the floor boards, etc. Once the design standard is determined and complied with, unless some effort is made to ensure it is maintained over time, it will certainly deteriorate. It is very common for no monitoring of standards to occur and hence for standards to deteriorate quite quickly. The late Trevor Kletz was well known for pointing out that this inadequacy was commonly to be found in industry.

There is a multitude of these very common control measures. The underlying rate of deterioration of a physical thing is very dependent on its basic design – compare brick houses with wooden houses, for example. The actual rate of deterioration is dependent on the environment in which the thing exists and on the rate of inspection and the type of maintenance or renewal practiced.

Listed below are but a few examples.

  1. Purchasing and equipment renewal practices.  Buy cheaply and expect more maintenance effort.  Operate past the design life and expect more failures.
  2. Corrosion-proofing of metals.
  3. Use, in commercial kitchens, of electrical cable insulation materials that are not susceptible to attack by cleaning chemicals.
  4. Use of materials for external applications (eg. insulation on electrical cables, surface coatings) that are resistant to ultraviolet light.
  5. Use of fail-safe mechanisms for micro switches used on machine guards (a minimal logic sketch follows this list).
  6. Use of hard-wired safety interlocks on machinery controlled otherwise by programmable logic controllers.
  7. Condition monitoring as a maintenance strategy, eg. vibration monitoring on rotating equipment, sampling of oil and testing it for contamination, crack identification and crack length monitoring where fatigue is a failure mechanism, inspection of work-at-height equipment prior to use, inspection and testing of insulating equipment used for live electrical work.
  8. Inspection and testing of sensors and alarm systems.
  9. Use of durable high-traction surfaces on metal decks subject to water or fuel contamination.
  10. Use of durable high-visibility strips on the edges of steps and platforms to aid identification.
  11. Construction of steps with relatively low risers and long treads (but still within the standard requirements) rather than the opposite.
  12. Design features aimed at reducing overload stresses, such as fuses (electricity), blast doors (overpressure reduction), safety valves (pressure vessels).
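
To illustrate the fail-safe idea behind items 5 and 6, here is a minimal sketch in Python (purely illustrative – the signal names, function names and wiring convention as coded here are my own assumptions, not from the text). A guard switch wired as normally-closed is proved closed only while the guard is shut; a broken wire, a jammed actuator or a lost signal all read as an open circuit, so the hard-wired logic removes the run permissive rather than relying on software to notice the fault.

```python
# Illustrative sketch only: a normally-closed (fail-safe) guard interlock.
# Any fault that opens the circuit (open guard, broken wire, failed switch)
# reads as "not proved safe" and removes the run permissive.

def guard_circuit_closed(signal: str) -> bool:
    """The circuit is proved closed only on an explicit CLOSED signal;
    'OPEN', 'WIRE_BREAK' or any unknown state reads as an open circuit."""
    return signal == "CLOSED"

def run_permitted(guard_signal: str, start_command: bool) -> bool:
    """Power is available only when the guard circuit is proved closed
    AND the operator has requested a start."""
    return guard_circuit_closed(guard_signal) and start_command

if __name__ == "__main__":
    for signal in ("CLOSED", "OPEN", "WIRE_BREAK"):
        print(f"guard signal = {signal:10s} -> run permitted: {run_permitted(signal, True)}")
```

The value of the convention is that the dangerous failure modes (broken wire, stuck switch, lost signal) default to the safe state; a normally-open arrangement would instead fail to danger.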


Type MC2 controls – Reduce the rate of deterioration of compliance with required methods and practices of human interaction with the system

Under common law, the “safe system of work” idea is in the second rank of expectations. The “system” includes the way people behave as they interact with its physical features. If design features are there for the purpose of ensuring high levels of compliance, those features have the effect of reducing the rate of deterioration of compliance. All of this is the domain of MC2 controls. Other aspects of the system include Class B control measures (supervision, training etc.).

Again, there is a multitude of examples, ranging from physical design features that encourage or enforce compliance with a preferred form of behaviour, to signs conveying warning and advisory information, to formally defined behavioural expectations and the ways in which they are encouraged or enforced.

  1. Tamper-proof (child-proof) lids on containers of medicines or cleaning materials. These reduce the probability that a particularly sensitive group of potential Recipients will fail to comply with requirements, and hence belong to this category because they ensure a high probability of compliance.
  2. Gates in swimming pool fences and around kindergartens that can only be unlatched by a tall person. Same effect as above.
  3. Design of access controls, eg. around large power equipment, that enforce compliance with procedural isolation practices (a minimal sketch follows this list).
  4. High and impenetrable fences on road and pedestrian bridges to prevent suicide attempts and objects being thrown on to vehicles or people below.
  5. Operator skill testing and recency requirements, e.g. control room personnel, pilots.
  6. Operator fatigue-based operating time limitations.
  7. Observations of compliance with behaviour-based control measures.
  8. Signs advising of the presence of hazards or of which behavioural control measures are required.
  9. Fences that act as absolute barriers, fences that are visible but not physical barriers, obstructions that make access time-consuming or extremely uncomfortable, painted lines that demarcate the limits of access areas, signs advising who may enter and under what circumstances.
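
As a simple illustration of item 3 above, the sketch below (Python, purely illustrative – the class, the point names and the trapped-key arrangement as coded here are my own assumptions) models an access control that only releases the access key once every isolation point on the permit is confirmed locked, so compliance with the isolation procedure is enforced by the design rather than left to memory.

```python
# Illustrative sketch: access is granted only when every isolation point
# listed on the permit is confirmed locked (a trapped-key style arrangement).

from dataclasses import dataclass

@dataclass
class IsolationPoint:
    name: str
    locked: bool   # True once the isolation has been applied and locked off

def access_key_released(permit_points: list) -> bool:
    """The access key (and hence the gate) is released only when all points are locked."""
    return all(point.locked for point in permit_points)

if __name__ == "__main__":
    permit = [
        IsolationPoint("main breaker", locked=True),
        IsolationPoint("control supply", locked=False),   # one isolation missed
    ]
    print("access permitted:", access_key_released(permit))   # False until every point is locked
```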


Type MC3 controls – Reduce the occurrence of unusual situations, improve the likely success of people faced with responding to them, or predict and pre-plan activities associated with non-routine types of operation

In the text, the term ‘situation’ comprises both conditions (things that don’t vary to any appreciable degree over time) and circumstances (things that often do vary over time, like weather, crew size, urgent demands, equipment serviceability). Unusual situations include those that are uncommon but otherwise regarded as normal (in the sense that the system is designed to cope with them and the operators are trained to manage them as a normal part of their skills) and those that are uncommon and abnormal (typically an emergency situation). Both normal and abnormal uncommon situations are to a large extent amenable to prediction.

The uncommon but normal situation is one in which operators may be expected to respond correctly but which is seldom experienced, so their responses may not be very skilful or as hoped for, due to a lack of recent experience or because of the pressures of time and conflicting demands. Examples are typically to be found in the operation of control rooms (chemical plants, power stations, etc.), of aircraft or ships, in operating theatres and other medical situations, accident response teams, fire fighters, etc. What these situations have in common is that each is strongly influenced by circumstances (abnormal plant operation; weather, equipment problems and traffic conflicts; unexpected patient response, etc. respectively). The situation may arise from equipment failure, which is mostly unintentional but may also be intentional (for example, when there is a desire to test the response to predictable failures), or from environmental conditions or changes to the context of the operation.

One example, from flying operations, is of a flight sector that was to occur in poor weather, over mountainous terrain near the destination, with a failed en route navigational aid and only one pilot licensed to fly in such circumstances, who therefore lacked assistance from the copilot. The plane struck the mountain on descent. A second example (medical) involved an operation on the throat of a patient. Heart failure occurred shortly after the anaesthetic was administered and before the operation began, leading to the eventual need to use adrenaline. In the urgency of the situation (the clock was ticking) an incorrect dose of adrenaline was administered. A third example is the uncontrolled nuclear energy release at Chernobyl, which illustrates a situation that is not uncommon in power generation – there is a desire or a need to test the responsiveness of system protection devices, and this is achieved by disabling the various layers of protection up to the one that needs to be tested.

As an aside – in each of these cases the simple, immediate and thought-free response is that the ‘cause’ is human error. The legal response is to blame the person making the error and to punish them. Thankfully, in the case of aircraft operations, a more enlightened attitude and practice exists in many countries, with an agreement that pilots are not to be blamed and punished. I understand that in the medical case, in some jurisdictions, it is quite likely that the responsible surgeon would be taken to court! The problem with this reaction is that the organisation and the profession as a whole avoid the heartless glare of the legal 20/20 hindsight searchlight on themselves. As always, this approach ensures the real lessons are not learned.

A rational control measure is not to look at how more reliable pilots and surgeons can be recruited. Rational control measures look at the implementation of practices that recognise failure possibilities and take steps to organise things so that failures are less likely to arise, more likely to be detected and less serious in their resulting Consequences.

It is very unusual, in my experience in industry, for the uncommon and abnormal (emergency) situation to be subject to any real prediction much beyond a few generic scenarios (fire, first aid, etc.). A conscious study of credible emergency scenarios is of great value to an organisation, as it is far, far better to plan for an emergency before you have it rather than while it is happening. I have two examples from a number I could cite. In one case a grave digger was inundated by wet ground. A number of different strategies to extract him were unsuccessful, leading to a great deal of anxiety for everyone, but most of all for the victim. The responsible engineer later told me that if there were a next time (they did eventually get the man out alive) he would approach the problem in the exact opposite sequence to that which seemed appropriate at the time. In another case a plant experienced a fire in a coal mill. The standard practice that applied to all fires was to call the fire brigade. By the time they arrived the fire was well established, but it was only then that the fire fighters announced they would not use water unless the mill was electrically isolated. This could not easily be done, as the isolator was behind the mill and the fire was too intense to approach it. Equipment above the mill, on which production was heavily dependent, was severely threatened by the fire.

In the first case, interestingly, no-one suggested that the grave digger was at fault for standing in the excavation. In the second case, there was no specific act of a person that precipitated the fire and no-one suggested that there was any organisational failing – which is not surprising.

It is interesting to reflect on the differences between the common and the uncommon abnormal situation. The difference appears to be limited to how easy each is to imagine, assuming no special effort is made to do so. Evidently a conscious effort is needed to determine the credible emergencies/abnormal situations that are relevant to the nature of the operation and to then decide on the control measures that are appropriate for each of them.

Those controls that reduce the probability or frequency of experiencing the abnormal situation, or improve the success of people responding to it, are the domain of MC3 controls.

Reducing the occurrence of unusual situations generally requires consideration of the reliability of both equipment and people. Equipment reliability improvements may be justified, implying better quality equipment and more cautious approaches to inspection and maintenance. By understanding how people can create unusual situations, it may be possible to provide skill training and independent verification of critical task completion, to refresh the knowledge base of operators where knowledge is the basis of action in the system, and to change the design of the physical environment to minimise error.

Improving the likely success of people faced with responding to an unusual situation involves careful consideration of the way in which real people recall requirements that are seldom practiced when faced with a threatening situation.  Strategies may include simulation of predictable scenarios, talking through response methods for them, providing checklists to direct responses into preferred paths and the like.

Pre-planning activities associated with non-routine types of operation is the basis of the previous two strategies.  Careful pre-planning identifies the equipment and people strategies needed.

Type MC4 controls – Reduce the probability of the system being subject to conditions that exceed its intended and designed capabilities

These controls protect the system from ‘overload’ situations, whatever that might mean in the context of the system. In principle, overload can mean the imposition of excess loads on a ‘healthy’ (as built and as intended) structure, or a normal load applied to an ‘unhealthy’ (deteriorated capability, or manufactured not to specification) structure. For example:

  • The physical loading of structures, over-current in electrical circuits, over- or under-temperature of electronics or of containments, over-speeding of moving equipment, over- or under-pressure in containments, over-reaching of articulated equipment, excess angle to the horizontal, excess static or dynamic force or torque application and so on.
  • Cycling of stresses (due to temperature, pressure, force) beyond the design expectations of such cycles.

Control measures of this type are:

  1. Those that aim to limit the extent of overload.  Physically they may involve pressure relief valves and technology of a similar purpose for other forms of load.  Administratively they may involve cessation of operations in unfavourable circumstances.  Work methods may include limiting speeds of mobile equipment depending on the circumstances.
  2. Those that monitor stress cycles – an administrative practice (a minimal cumulative-damage sketch follows this list).
  3. Condition monitoring of the system for evidence of deterioration mechanisms (eg. corrosion, embrittlement, cracking, de-lamination, damage).
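
As an illustration of what monitoring stress cycles (item 2 above) can involve, the sketch below applies the widely used Miner's rule of cumulative fatigue damage: each counted block of cycles at a given stress range consumes a fraction of life equal to the cycles experienced divided by the cycles to failure at that range, and the component is flagged for inspection or retirement as the summed fraction approaches 1. The Python code, the allowable-cycle figures and the warning threshold are illustrative assumptions only, not values from the text.

```python
# Illustrative sketch: cumulative fatigue damage by Miner's rule, D = sum(n_i / N_i).
# The allowable-cycle table and the 0.8 warning threshold are assumed values.

ALLOWABLE_CYCLES = {   # stress range (MPa) -> cycles to failure, from an assumed S-N curve
    100: 2_000_000,
    150: 500_000,
    200: 100_000,
}

def cumulative_damage(counted_cycles: dict) -> float:
    """Sum the life fractions consumed by the counted cycles at each stress range."""
    return sum(n / ALLOWABLE_CYCLES[stress] for stress, n in counted_cycles.items())

if __name__ == "__main__":
    counted = {100: 400_000, 150: 150_000, 200: 30_000}   # eg. from strain-gauge cycle counting
    damage = cumulative_damage(counted)
    print(f"cumulative damage D = {damage:.2f}")
    if damage >= 0.8:   # assumed administrative trigger for action before D reaches 1.0
        print("-> schedule inspection / consider retirement")
```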

