Introduction to the Problem
It can hardly be doubted that the development of technology has affected nearly every sphere of contemporary life. This statement is particularly accurate with respect to medicine, since numerous tools, technologies, and mechanisms for facilitating the provision of care have been developed in recent decades (Catchpole et al. 3749). However, it is also evident that new technologies in this sphere should be applied with caution, because there is an immense potential for adverse consequences, including various accidents and incidents that pose a threat to human safety (Adedigba et al. 169). This paper investigates the topic of robotic surgery through different safety-related models and frameworks in order to conduct a profound analysis of the adverse factors that could influence the area of public health.
The Swiss Cheese Accident Model
To begin the discussion appropriately, it is important to identify an accident model suitable for investigating the topic under consideration. Accident models, which are often derived from theories of accident causation, share three primary premises (Sanchez et al. 428). First, humans have an inherent tendency to make mistakes; second, big and small accidents are correlated, with the big ones usually escalating from the small ones; and third, the majority of adverse incidents and failures result primarily from organizational factors rather than from isolated operator errors (Sanchez et al. 428).
In their study, Sanchez et al. note that James Reason, an industrial psychologist, argues that the majority of accidents can be categorized into at least one of four failure domains: “organizational influences, supervision, preconditions, and specific acts” (428). This statement is the basis on which the Swiss Cheese Model is built. In this approach, the defenses against failure in human systems are represented as slices of Swiss cheese. The holes in each slice represent the weaknesses of the existing defense system, and when these holes align, the likelihood of an accident increases (Sanchez et al. 428).
This model is widely recognized and used in various industries, including healthcare. The Swiss Cheese Model is particularly appropriate for robotic surgery because surgical robots, even though they are programmed to function automatically, remain considerably dependent on human control. As previously mentioned, humans tend to make mistakes, and thus a system of protection against human failure is needed.
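The alignment of holes described above can be given a simple quantitative reading. The following sketch assumes the four failure domains act as independent defensive layers; the layer names come from Reason's taxonomy, but the probability values are purely illustrative and not drawn from the cited studies.

```python
# Hypothetical Swiss Cheese sketch: if each defensive layer ("cheese
# slice") fails independently, the probability that all holes align is
# the product of the per-layer failure probabilities. Values below are
# illustrative assumptions, not figures from the cited sources.

def alignment_probability(layer_failure_probs):
    """Probability that every defensive layer fails at once."""
    p = 1.0
    for prob in layer_failure_probs:
        p *= prob
    return p

layers = {
    "organizational influences": 0.05,
    "supervision": 0.10,
    "preconditions": 0.20,
    "specific acts": 0.30,
}

# 0.05 * 0.10 * 0.20 * 0.30 = 0.0003
p_accident = alignment_probability(layers.values())
print(f"P(holes align) = {p_accident:.6f}")
```

The multiplication also illustrates why adding or strengthening a single layer lowers the overall accident probability even when every individual layer remains imperfect.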
Further, it is possible to discuss how various latent factors influence the performance of surgical robots and the overall environment of health institutions. In their research, Alemzadeh et al. performed a comprehensive analysis of publicly reported adverse events (1). The authors employed “an automated natural language processing tool” to retrieve the information maintained by the U.S. Food and Drug Administration about safety-threatening factors (Alemzadeh et al. 1). One of the most evident elements influencing the probability of adverse outcomes for the patient was the complexity of surgical operations: such specialties as gynecology and urology had considerably lower rates of injuries and deaths than more complicated fields such as cardiothoracic and head and neck surgery (Alemzadeh et al. 9). Another important area of concern is the malfunctions and system errors of the surgical robots themselves. Overall, these findings indicate the necessity for more advanced robots.
Fault-Tree (FT) Modeling
Another model widely used to investigate the safety of various environments is the fault-tree model (Adedigba et al. 172). This is a deductive, graphic methodology that can determine the probability of adverse effects and failures in complex systems (Adedigba et al. 172). The model is structured as follows. First, it is essential to determine the top event, which represents “a major accident initiating hazard” (Adedigba et al. 172). The top event is placed at the top of the fault tree, and the tree is then modeled downward to represent the different combinations of basic events and logic gates that cause the top event to happen (Adedigba et al. 172).
Considering robotic surgery, an error in heart surgery that causes the injury or death of a patient could serve as the top event. Basic events include various programming errors, operator mistakes, and device malfunctions. Logic gates describe different aspects of the insufficient interaction between humans and surgical robots, with the primary focus on input mistakes.
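The structure just described can be sketched numerically. The sketch below assumes the basic events are independent and feed the top event through a single OR gate; the event names and probabilities are hypothetical placeholders, not values from the sources.

```python
# Illustrative fault-tree gate arithmetic under an independence
# assumption. Event names and probabilities are hypothetical.

def p_or(probs):
    """OR gate: output occurs if any input event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def p_and(probs):
    """AND gate: output occurs only if all input events occur."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Assumed basic events feeding the top event
# "error in robotic heart surgery causing patient harm".
p_programming_error = 0.001
p_operator_input_mistake = 0.01
p_device_malfunction = 0.005

# Top event fires if any basic event occurs (OR gate).
p_top = p_or([p_programming_error,
              p_operator_input_mistake,
              p_device_malfunction])
print(f"P(top event) = {p_top:.5f}")
```

In a fuller tree, intermediate AND gates would model defenses that fail only when several conditions coincide, which is where the method connects back to the Swiss Cheese view of layered protection.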
Event-Tree (ET) Modeling
The event-tree analysis differs in its approach, despite being used for the same purposes as fault-tree modeling. The event-tree model is an inductive type of analysis used to investigate the sequences of events caused by an initiating event (Adedigba et al. 172). This method could be used in robotic surgery to obtain qualitative or quantitative data and to assess the potential effect of various sequences of events (Adedigba et al. 172). The primary advantage of this method lies in the higher predictability of accidents, owing to its forward-looking analysis of current data.
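The forward-looking character of the method can be sketched as follows: starting from an initiating event, the probability of each end state is the product of the conditional branch probabilities along its path. The initiating event and branch values below are assumptions for illustration, not data from the cited literature.

```python
# Event-tree sketch: one end state's probability is the initiating
# event probability times each conditional branch probability on the
# path. All names and values are hypothetical.

def sequence_probability(p_initiating, branch_probs):
    """Probability of one end state along a single event-tree path."""
    p = p_initiating
    for b in branch_probs:
        p *= b
    return p

# Assumed initiating event: the surgical robot reports a system
# error mid-procedure.
p_init = 0.01

# Branches on the favorable path: alarm is noticed (0.95),
# then manual takeover succeeds (0.90).
p_safe_recovery = sequence_probability(p_init, [0.95, 0.90])

# All remaining end states (alarm missed, or takeover fails).
p_adverse_paths = p_init - p_safe_recovery

print(f"P(safe recovery)  = {p_safe_recovery:.5f}")
print(f"P(adverse paths)  = {p_adverse_paths:.5f}")
```

Because the paths partition the outcomes of the initiating event, the adverse-path probability can be obtained by subtraction, which keeps even a quick qualitative tree internally consistent.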
Failure Modes and Effects Analysis (FMEA) Framework
The study by Veronese et al. is one of the best examples of research in the field of robotic surgery and safety, since it was conducted by a multidisciplinary, multi-institutional group of researchers who employed the Failure Modes and Effects Analysis (FMEA) framework to investigate the particular issue of patients undergoing Stereotactic Body Radiation Therapy (SBRT) (132). After completing the two steps of FMEA, the authors state that the failure modes of greatest concern are the competence of the clinical staff and the continuous evaluation of the main critical parameters of the process (Veronese et al. 141). It is evident that the FMEA framework is highly useful for assessing clinical risk probability.
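In standard FMEA practice, each failure mode is scored for severity, occurrence, and detectability, and the product of the three scores (the risk priority number, RPN) ranks the modes for attention. The failure modes and scores below are illustrative placeholders, not Veronese et al.'s actual results.

```python
# FMEA sketch: risk priority number (RPN) = severity x occurrence x
# detectability, each scored here on a 1-10 scale. The failure modes
# and scores are hypothetical examples, not the cited study's data.

failure_modes = [
    # (name, severity, occurrence, detectability)
    ("staff competence gap",                8, 4, 6),
    ("critical parameter not re-evaluated", 9, 3, 7),
    ("patient positioning error",           7, 2, 3),
]

# Rank the failure modes from highest to lowest RPN.
ranked = sorted(
    ((s * o * d, name) for name, s, o, d in failure_modes),
    reverse=True,
)

for rpn, name in ranked:
    print(f"RPN {rpn:3d}: {name}")
```

The ranking step is what makes the framework practical: corrective actions are directed first at the modes with the highest RPN rather than being spread thinly across every conceivable failure.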
Hazard and Operability Study (HAZOP) Framework
Another framework that could be used in the area under consideration is the Hazard and Operability Study (HAZOP) framework (Mohammadfam et al. 17). This approach was initially developed to analyze chemical process systems; however, it has lately become widely recognized in other spheres (Mohammadfam et al. 18). The framework is structured around four primary aspects: the deviation, its possible causes, the consequences of the deviation, and the actions required to handle the adverse effects (Mohammadfam et al. 18). Thus, the model is relatively simple to use, and it also produces quick and highly descriptive results. Concerning robotic surgery, this model could be utilized to acquire an overall understanding of the situation, which would serve as the basis for further improvements.
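The four aspects above map naturally onto one row of a HAZOP worksheet. The sketch below models such a row as a small data structure; the example deviation and its entries are hypothetical, chosen only to show the structure.

```python
# HAZOP sketch: one worksheet row per deviation, holding the four
# aspects described by Mohammadfam et al. The example row below is a
# hypothetical robotic-surgery scenario, not data from the sources.

from dataclasses import dataclass

@dataclass
class HazopEntry:
    deviation: str            # departure from the design intent
    causes: list[str]         # possible causes of the deviation
    consequences: list[str]   # resulting adverse effects
    actions: list[str]        # measures to handle the effects

row = HazopEntry(
    deviation="robot arm moves outside the planned trajectory",
    causes=["calibration drift", "operator input mistake"],
    consequences=["tissue damage", "procedure abort"],
    actions=["pre-operative calibration check", "enforced motion limits"],
)

print(f"Deviation: {row.deviation}")
print(f"Actions:   {', '.join(row.actions)}")
```

Because each row is self-contained, a worksheet of such entries yields the quick, descriptive overview that makes the framework attractive as a first-pass analysis.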
Incident Reporting Systems
The study by Boone et al. indicates another approach that could be employed in relation to robotic surgery (416). The authors state that “the use of the robotic platform for complex pancreatic resections,” combined with a properly developed incident reporting system, considerably improves patient outcomes and clinical safety (Boone et al. 416). It can hardly be denied that the continuous reporting of the results of robotic surgery operations will produce a profound database that can later be used for various assessments.
Accident Investigation Using Root Cause Analysis
Root cause analysis is another technique applicable to robotic surgery in the context of clinical safety. For example, Alemzadeh et al. mention that it is not possible to reduce such adverse events as patients’ injuries and deaths to a single root cause; a complex investigation is needed instead (13). Sanchez et al. argue that communication failure appears to be one of the leading causes of accidents (430). Root cause analysis could also be employed in combination with the HAZOP approach, since deviations are better explained by interdependent root causes.
Conclusion
Finally, it is essential to summarize this report’s findings. First, the topic of robotic surgery, despite being evidently present in the academic literature, needs further elaboration due to the constant development of the technologies involved. Second, it is also apparent that the various safety models, approaches, and frameworks are not applied evenly; it is therefore essential to investigate more thoroughly the employment of such models as HAZOP and root cause analysis. Overall, the topic of robotic surgery will remain a prevalent subject of safety concerns and investigations in the coming decades, and thus it is of high importance to analyze the current state of the clinical environments where surgical robots are present.
Works Cited
Adedigba, Sunday A., et al. “Dynamic Safety Analysis of Process Systems Using Nonlinear and Non-sequential Accident Model.” Chemical Engineering Research and Design, vol. 111, 2016, pp. 169-183.
Alemzadeh, Homa, et al. “Adverse Events in Robotic Surgery: A Retrospective Study of 14 Years of FDA Data.” PLoS One, vol. 11, no. 4, 2016, e0151470.
Boone, Brian A., et al. “Assessment of Quality Outcomes for Robotic Pancreaticoduodenectomy: Identification of the Learning Curve.” JAMA Surgery, vol. 150, no. 5, 2015, pp. 416-422.
Catchpole, Ken, et al. “Safety, Efficiency and Learning Curves in Robotic Surgery: A Human Factors Analysis.” Surgical Endoscopy, vol. 30, no. 9, 2016, pp. 3749-3761.
Mohammadfam, Iraj, et al. “Application of Hazard and Operability Study (HAZOP) in Evaluation of Health, Safety and Environmental (HSE) Hazards.” International Journal of Occupational Hygiene, vol. 4, no. 2, 2012, pp. 17-20.
Sanchez, Juan A., et al. “Patient Safety Science in Cardiothoracic Surgery: An Overview.” The Annals of Thoracic Surgery, vol. 101, no. 2, 2016, pp. 426-433.
Veronese, Ivan, et al. “Multi-institutional Application of Failure Mode and Effects Analysis (FMEA) to CyberKnife Stereotactic Body Radiation Therapy (SBRT).” Radiation Oncology, vol. 10, no. 1, 2015, pp. 132-142.