GRAPHICAL USER INTERFACE SOFTWARE FOR AUTOMATIC DYNAMIC SYSTEM IDENTIFICATION, FAULT DIAGNOSIS AND FAULT TOLERANT CONTROL IN COMPLEX DISTRIBUTED PROCESSES
SIMANI, Silvio; BENINI, Matteo
2006
Abstract
The increasing use of automation has generated interest in more sophisticated and intelligent systems. The main requirement for any manufacturing or process control system is to ensure reliable and continuous operation, since faults or failures can cause unacceptable danger or undesirable economic consequences. There is therefore great interest in the development of Fault Tolerant Control (FTC) systems, which degrade gracefully when faults arise, enabling human operators or automatic systems to put corrective measures in place before the system fails completely. High levels of reliability, maintainability and performance are now needed to ensure safe operation in hazardous human or environmental situations; the consequences of faults and failures in flight controls, chemical plants, nuclear plants, vehicle systems, etc. are well known. The fault diagnosis function is one of the critical elements of a fault-tolerant control system. In general, the design of a fault-tolerant control system consists of two stages: the design of the Fault Detection and Isolation (FDI) system and the design of the Fault Tolerant Control (FTC) unit. Research on the fault detection and isolation problem concerns the development of dynamic systems (so-called filters) that generate signals (so-called residuals) from which it is possible to detect any fault occurrence (the fault detection stage) and to determine the system component affected by the fault (the fault isolation task). In the current literature, many different FDI and FTC tools and methods can be employed for more reliable control, fault monitoring and diagnosis. In particular, the detection, isolation and diagnosis of fault conditions in process or manufacturing systems can be approached from two perspectives. The first approach uses control engineering theory and ``quantitative'' modelling. 
The second employs ``qualitative'' modelling and reasoning based on techniques developed within the artificial intelligence community, such as fuzzy logic, neural networks, neuro-fuzzy systems and model-based reasoning systems. Failure detection algorithms normally use hardware techniques or analytical redundancy to detect anomalies in the system behaviour. A first, model-based approach employs analytical redundancy, which exploits the functional relationships between measured variables to provide a cross-checking test. Additional equipment may not be needed in this case, since existing measurements are simply used to provide estimates of other variables. A residual signal is defined from the differences produced by these consistency checks: the residual is close to zero during normal operation and diverges from zero in the presence of fault conditions. Because this approach relies on a model, it falls into the category of model-based fault diagnosis methods. An alternative approach to fault diagnosis is so-called hardware redundancy. A voting method is often used for hardware redundancy checking, but it involves the duplication of physical devices, which is expensive. Multiple sensors can be used with the voting method, and their outputs compared to check for discrepancies between the measured signals. This solution, though it can be expensive, achieves more reliable performance than analytical redundancy schemes, since it does not rely on perfect knowledge of a system model, which in some circumstances can be difficult to obtain. On the other hand, the main advantage of model-based FDI algorithms is that additional sensors are often not required. Once a fault has been detected, the fault-tolerant control system requires a fault accommodation algorithm. The current literature presents two main approaches. 
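The two redundancy principles above can be sketched in a few lines. The following is a minimal illustration, not any specific method from the literature: the residual is the difference between a measured variable and a model-based estimate, flagged when it exceeds a threshold, while hardware redundancy compares each redundant sensor against the group median. All function names and numeric values are hypothetical, chosen only for illustration.

```python
import statistics

def residual_check(measured, predicted, threshold=0.5):
    """Analytical redundancy: the residual (measured minus model estimate)
    stays near zero in normal operation and diverges under a fault."""
    r = measured - predicted
    return r, abs(r) > threshold  # (residual value, fault flag)

def voting_check(readings, tol=0.5):
    """Hardware redundancy: flag any redundant sensor whose reading
    deviates from the group median by more than tol."""
    med = statistics.median(readings)
    return [abs(x - med) > tol for x in readings]
```

In this toy form, `residual_check` needs only the existing measurement and a model-based estimate (no extra hardware), whereas `voting_check` needs the duplicated sensors but no process model, mirroring the trade-off discussed above.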
The first scheme is based on an ``explicit'' controller reconfiguration, while the second provides an ``implicit'' one. The first approach requires the design of several controllers, each based on a different faulty operating model, which are discriminated by a supervision unit (this is sometimes referred to as ``control falsification''). After fault identification, the FDI unit performs the fault falsification stage by means of a switching control scheme. The second approach, on the other hand, is based on the design of a controller that is ``robust'' with respect to any fault occurrence; in this case, controller reconfiguration is achieved from a robustness point of view. The design of such a robust controller can be performed using several approaches, such as classical adaptive control schemes, neuro-fuzzy systems and the theory of non-linear regulators. It is worth noting that one of the most important aspects of control design has been neglected by the current literature: the safety of the control system design and its application to large-scale processes (distributed systems). These systems, which are important from a practical point of view, are difficult to deal with because of the large number of process variables, components and connections in the system, which give rise to the so-called fault propagation phenomenon. The study of the FDI and FTC units cannot be performed locally, and the possible connections among the different components have to be taken into account during the system analysis stage. The main goal is to define mathematical methods and schemes able to describe the whole distributed system, in order to manage the FDI, FTC, reconfiguration, supervision and reliability problems for the entire system. 
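The ``explicit'' reconfiguration scheme can be illustrated as a bank of pre-designed controllers plus a supervisor that switches among them according to the fault isolated by the FDI unit. The sketch below is a deliberately simplified assumption of how such a switching scheme might look; the controller laws, gains and fault labels are all hypothetical, not taken from any particular design.

```python
def nominal_controller(error):
    # Controller designed for the fault-free plant (illustrative gain).
    return 2.0 * error

def actuator_fault_controller(error):
    # Controller designed for a degraded-actuator model: a higher gain
    # compensates for the assumed loss of actuator effectiveness.
    return 4.0 * error

# Bank of controllers, one per (hypothetical) isolated fault mode.
CONTROLLER_BANK = {
    None: nominal_controller,            # no fault detected
    "actuator_loss": actuator_fault_controller,
}

def supervisor(fault_label, error):
    """Switching supervisor: select the control law matching the fault
    mode reported by the FDI unit and compute the control action."""
    return CONTROLLER_BANK[fault_label](error)
```

The ``implicit'' alternative would instead use a single control law whose parameters adapt (or are designed robustly) so that no explicit switching is needed.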
Only a few contributions addressing these topics can be found in the current literature, even though important theoretical and practical questions remain open and can be investigated.