How Do People Contribute To The Catastrophic Breakdown Of Complex Automated Technologies?
As scientific knowledge progresses and technological advances are made, greater dependence is placed upon automated systems and their complexity necessarily increases. Whilst the systems themselves may be rigorously tested to ensure they operate correctly, errors can enter the system via the weak link in the chain: the human designers and operators. Unlike the machines that they operate, humans are not very good at performing the same task for a prolonged period, or at doing two things at once, and their performance becomes impaired when they are asked to do so (e.g. Casali & Wierwille, 1984). Human error is therefore almost inevitable in a complex system, and this has led to much research into the causal factors behind errors and into ways of designing systems that minimise their occurrence. Reason (1990) distinguishes between two types of error: latent errors, problems caused by poor design or implementation at a high level which may not be immediately apparent, and active errors, errors committed by front-line operators, which are often inherited from latent errors but whose consequences are usually seen on site and are more immediately apparent. Latent errors are the more serious category for complex automated systems, as they may not be apparent when the system is first brought into service and can lie dormant until triggered by an active error (giving rise to the 'pathogen metaphor'). As Reason observes, these errors "constitute the primary residual risk to complex, highly-defended technological systems."

Errors may also be exacerbated by the increasing opacity of automated systems, and this theme is central to the issue of automated systems breakdown. As automated systems become more complex, the human operators become increasingly distanced from the actual processes and lose their 'hands-on' knowledge of the system. Such distancing and complexity of function can lead to 'mode errors', in which the human operators perform the action appropriate to one mode when the system is in fact in another; for example, the pilots of an Aero Mexico DC-10 made a mode error in using the autopilot, causing the aircraft to stall in mid-air and damaging the plane (Norman, 1983).

Such complicated systems are often 'safeguarded' by features designed to accommodate errors without breakdown, though ironically these features often tend to exacerbate the problem. Automated systems frequently have inbuilt defence mechanisms and can compensate for errors. Often, such systems operate on the 'defence in depth' principle, in which a system with a hierarchical structure of processes has a corresponding defence at each level. In such cases, an active error by the operator may be subjected to attempts by the system to compensate at several levels, only returning to the operator as a last resort if the error cannot be fully compensated for. However, this mechanism is often invisible to the operator, who may be unaware that an error has even occurred until compensation is no longer possible and the system breaks down. At this point the error may have been compounded by the system's attempts to cope with the situation, and may be much larger, more complex and more obscure than when first encountered. The delay in informing the operator of the error may also have allowed the system to pass the point at which the operator is able to save it.
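To make this layered-compensation argument concrete, the following minimal Python sketch is offered purely as an illustration: the class names, layer labels, capacities and numbers are invented here and do not describe any of the systems discussed. It shows how each defensive layer can silently absorb part of a disturbance, so that the operator only hears about the problem once every layer is exhausted, by which time a substantial deviation has already been masked.

    # Hypothetical sketch of 'defence in depth': each layer quietly absorbs
    # what it can, and the operator is alerted only when all layers fail.
    class Layer:
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity          # deviation this layer can absorb

        def compensate(self, deviation):
            absorbed = min(deviation, self.capacity)
            return deviation - absorbed       # residual passed to the next layer

    def run_defence_in_depth(deviation, layers):
        for layer in layers:
            deviation = layer.compensate(deviation)
            if deviation == 0:
                return "handled silently"     # operator never learns anything happened
        return "OPERATOR ALERT: uncompensated deviation %s" % deviation

    layers = [Layer("local loop", 5), Layer("supervisory loop", 10), Layer("autopilot trim", 20)]
    print(run_defence_in_depth(3, layers))    # absorbed at the first layer; no feedback given
    print(run_defence_in_depth(50, layers))   # alert arrives only after 35 units have been hidden

The point of the sketch is not the arithmetic but the structure: feedback to the human is the last step rather than the first, which is precisely the property the essay identifies as dangerous.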
Norman (1990) cites just such a situation, in which the loss of power to one aeroplane engine is compensated for by the autopilot until such compensation is no longer possible and it is too late to prevent the plane from rolling. Norman argues that in such cases automation is not the problem; rather, it is the inappropriate level of feedback given by the system. Considering a similar scenario to the one outlined above, we can envisage problems even if the system informs the operator of the fault whilst it is still possible to act. Humans have the unique ability to apply knowledge-based problem-solving routines to novel stimuli, such as would be returned by a machine faced with an error its programming does not equip it to cope with. However, this ability of the human operator is far from optimal, especially when the individual is under stress, as would be the case if an error in the automated system could have catastrophic consequences, for example in an air traffic control system. The operator is therefore likely to be under considerable pressure to produce a solution, and this pressure is likely to interfere with already imperfect heuristic problem-solving techniques. In attempting to match the situation with previously experienced ones (a technique known as 'similarity matching') and thus reuse previously successful solutions, the individual is quite likely to distort the problem space and arrive at a solution which does not fully meet the requirements of the problem. Here again, then, the strategy of using the ill-informed operator as the last resort in rectifying errors which may have been made more complex by the system's own attempts to correct them is a seriously flawed and potentially catastrophic one.

Reason (1990) has highlighted another area in which humans can cause the breakdown of automated systems: violations. Reason identifies intentionality as the feature that differentiates errors from violations. Within the violations category, routine violations are a consequence of the natural human tendency to take the path of least effort; the problem here is not a subconscious mistake but a decision taken by the individual in response to an indifferent environment in which such practices can go unnoticed. Daily safety violations were made for a long time in the lead-up to the Chernobyl disaster. Human factors such as sloppiness of procedure, mismanagement and the practice of placing economic considerations above safety can all contribute to system failure. In the Bhopal tragedy, for example, the staff were insufficiently trained, the rising reading of a pressure gauge was not seen as abnormal, and factory inspectors' warnings were ignored; with such large latent violations of procedure, the disaster was not entirely unpredictable (Stix, 1989). Exceptional violations are those which the operating circumstances make all but inevitable: in the Zeebrugge ferry disaster, the blame was first thought to rest with the crew member who did not close the bow doors, though later evidence showed that poor time management and a lack of checking by senior staff were also at fault. Again, though, the less than optimal performance of the people running the operation is seen to be the root cause of breakdown.

Human errors will always occur. Attention lapses, performance limitations, slips and the like
are all far too unpredictable ever to be eliminated altogether, and so perhaps the aim of systems designers should be to minimise the effects of errors and to maximise their early detection. Automation is an area in which efficiency can be greatly improved, safety standards raised and economies made, though as Wiener & Curry (1980) observe, attempting to remove errors through automation is a flawed idea in itself, since humans will be monitoring the systems and the errors are thus merely relocated. In conclusion, it is perhaps ironic to note that with the continued implementation of more advanced technologies, humans are increasingly assigned the role of monitors. Here we find ourselves in a situation where each half of the system is engaged in doing what the other half does best: computers are excellent at repeatedly performing mundane and tedious tasks without becoming distracted, whilst humans have very limited attention spans and become bored very easily. As Bainbridge (1983) points out, vigilance studies have shown that it is impossible to maintain effective visual attention on a source of information on which very little happens for more than half an hour. Perhaps if more emphasis were placed on implementing technologies to monitor the performance of humans, rather than the reverse, accidents such as Chernobyl and Bhopal might be avoided.

Bibliography

Bainbridge, L. (1983) Ironies of automation. Automatica, 19(6), pp. 775-779.
Eysenck, M.W. & Keane, M.T. (1991) Cognitive Psychology. Lawrence Erlbaum.
Norman, D.A. (1983) Design rules based on analyses of human error. Communications of the ACM, 26, pp. 254-258.
Norman, D.A. (1990) The problem with automation: Inappropriate feedback and interaction, not over-automation.
Reason, J. (1990) Human Error, ch. 7. Cambridge University Press.
Stix, G. (1989) Bhopal: A tragedy in waiting. IEEE Spectrum.