CN116149999A - Safety analysis method for complex software system - Google Patents


Info

Publication number
CN116149999A
Authority
CN
China
Prior art keywords
control
software system
complex software
analysis
security
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211716742.5A
Other languages
Chinese (zh)
Inventor
闫陈静
张伟
薛琼
董子铭
高金梁
赵迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Aerospace Academy Of Systems Science And Engineering
Original Assignee
China Aerospace Academy Of Systems Science And Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Aerospace Academy Of Systems Science And Engineering filed Critical China Aerospace Academy Of Systems Science And Engineering
Priority to CN202211716742.5A priority Critical patent/CN116149999A/en
Publication of CN116149999A publication Critical patent/CN116149999A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3604Software analysis for verifying properties of programs
    • G06F11/3608Software analysis for verifying properties of programs using formal methods, e.g. model checking, abstract interpretation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/10Requirements analysis; Specification techniques
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

The invention provides a safety analysis method for complex software systems that applies a top-down, hierarchical approach from the system level down to the configuration-item level. The method comprises analysis of the software system's accident generation mechanism, system-level safety analysis, and configuration-item-level safety analysis. In the system-level stage, system hazards are determined to obtain system-level safety requirements and hazardous events, a control structure model of the software system is constructed, unsafe control actions and controller constraints are identified, and causal scenarios are identified. In the configuration-item-level stage, information-flow transfer and fault-tree analysis techniques convert the system-level safety requirements and hazardous events into configuration-item-level ones, and configuration-item-level safety analysis is performed by combining the system-level results with bidirectional safety analysis and software fault tree analysis, yielding more accurate and complete safety requirements for the complex software system.

Description

Safety analysis method for complex software system
Technical Field
The invention belongs to the technical field of software system safety research, and particularly relates to a complex software system safety analysis method.
Background
Owing to rapid innovation in software technology, many complex systems have become software-intensive. As software scale and complexity grow, such systems exhibit complex functions, high integration, many structural layers, many components and elements, many state variables, complex feedback structures, and nonlinear input-output characteristics (that is, high order, multiple loops, and nonlinearity), and are highly complex in structure, function, behavior, and evolution. Unpredictable behavior in a software system may lead to catastrophic consequences such as functional failure, system accidents, and casualties. To further improve software system safety and reduce the rate of software-induced accidents, the control behavior of the software must be systematically analyzed so that accidents caused by software errors are avoided.
Traditional safety analysis methods such as FTA and FMEA work well for accidents caused by component failures, but most accidents today arise from abnormal interactions among components, which these methods cannot address. Traditional models such as the accident chain are also limited when describing system accidents involving complex structures and interaction relationships, and struggle to describe system defects and accident mechanisms accurately. For accidents that have occurred, building an accident model can accurately describe the accident's mechanism and process, systematically summarize defects in safety design and management, and produce targeted design rules or management measures that prevent similar accidents from recurring. Given the complexity of present-day software systems, only a deep understanding of the accident mechanism allows a comprehensive grasp of the overall system structure, so that similar problems can be avoided when designing a new system and system safety can be improved.
Disclosure of Invention
The technical problem solved by the invention is to provide a method for performing safety analysis on a complex software system composed of a group of software configuration items and for obtaining the safety requirements of the software system. The method is simple and easy to implement, has a wide application range and high analysis accuracy, and can effectively ensure the safety of the software system.
The technical scheme of the invention is as follows:
A safety analysis method for a complex software system applied to an aircraft measurement and control system is provided, comprising the following steps:
(1) Determining a system-level dangerous event according to a system-level accident related to the control behavior of the complex software system;
(2) Constructing a safety control structure model of the complex software system by adopting a multi-level control structure method;
(3) Identifying unsafe control behaviors of the complex software system based on the system-level dangerous event and the complex software system safety control structure model;
(4) Obtaining a cause scene of a system-level accident related to the control behavior of the complex software system according to the unsafe control behavior and the safe control structure model of the complex software system and the system-level dangerous event;
(5) Determining the system-level security requirement of a complex software system according to the cause scene;
(6) Based on the system-level security requirements, the security requirements of the configuration items of the complex software system are analyzed, and the security requirements of the configuration items of the complex software system are formed.
Preferably, in step (2), the constructed safety control structure model represents the control relationships among the components of the measurement and control system. The upper layer of the model is the controller, which contains a process model; the process model generates control instructions through a control algorithm, the current state, and state transitions, thereby controlling the activity of the controlled process in the lower layer, where the actuator resides. The sensor feeds execution information back to the upper layer; the upper-layer process model corrects its internal state according to this feedback, and dynamic balance between the controller and the actuator is maintained through the internal state and the feedback information.
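The control loop described in this step can be sketched as a minimal simulation. This is an illustrative sketch, not part of the patent: the class names, the integer-valued state, and the simple threshold control algorithm are all assumptions made for demonstration.

```python
class ProcessModel:
    """Controller's internal belief about the controlled process state."""
    def __init__(self, state):
        self.state = state

    def command(self, target):
        # Control algorithm: decide an action from the current believed state.
        return "increase" if self.state < target else "hold"

    def correct(self, feedback):
        # Update the internal state from sensor feedback so the model
        # stays in step with the actual process (dynamic balance).
        self.state = feedback


def control_step(model, plant_state, target):
    """One controller -> actuator -> sensor feedback cycle."""
    action = model.command(target)      # controller issues an instruction
    if action == "increase":            # actuator applies it to the process
        plant_state += 1
    model.correct(plant_state)          # sensor feeds execution info back
    return plant_state


model = ProcessModel(state=0)
plant = 0
for _ in range(3):
    plant = control_step(model, plant, target=2)
```

When the feedback path is removed (no `correct` call), the process model and the real process state drift apart, which is precisely the inconsistency the patent identifies as an accident precursor.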
Preferably, in step (3), the unsafe control actions of the complex software system are identified as follows:
an unsafe control action is a potential safety hazard in the control behavior and falls into four types: a required control action is not provided, causing a hazard; a control action is provided and causes a hazard; a potentially safe control action is provided but too early, too late, or out of sequence; or a control action lasts too long or is stopped too soon.
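The four unsafe-control-action (UCA) types above can be catalogued with a small record type. This is a hedged sketch: the enumeration names, the `UCA` fields, and the example entry (a hypothetical ignition-enable action linked to hazard H-1 from the embodiment) are illustrative assumptions, not notation prescribed by the patent.

```python
from dataclasses import dataclass
from enum import Enum

class UCAType(Enum):
    """The four UCA types named in the text."""
    NOT_PROVIDED = "required control action not provided causes a hazard"
    PROVIDED = "providing the control action causes a hazard"
    WRONG_TIMING = "safe action provided too early, too late, or out of sequence"
    WRONG_DURATION = "action applied too long or stopped too soon"

@dataclass
class UCA:
    """One row of a UCA catalogue such as Table 1."""
    controller: str
    action: str
    kind: UCAType
    linked_hazard: str

# Hypothetical example entry, linked to hazard identifier H-1
# used later in the embodiment.
uca1 = UCA("front-end software", "send ignition enable",
           UCAType.NOT_PROVIDED, "H-1")
```

Tabulating UCAs this way keeps each entry traceable to a system-level hazard, which is what the later causal-scenario refinement consumes.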
Preferably, in step (4), the causal scenarios of system-level accidents related to the control behavior of the complex software system specifically include:
causal scenarios related to unsafe control actions of the controller, inadequate feedback and information-related causal scenarios, causal scenarios related to the control path, and causal scenarios related to the controlled process.
Preferably, the causal scenarios related to unsafe control actions of the controller include:
faults associated with the controller, an inadequate control algorithm, unsafe control inputs from other controllers, and an inadequate process model.
Preferably, the inadequate feedback and information-related causal scenarios include:
the controller does not receive feedback information sent by the sensor; the sensor does not send feedback information although the information has been received by or applied to the sensor; the feedback information is not received by or applied to the sensor; the feedback information does not exist in the control structure; the sensor responds adequately but the controller receives inadequate feedback information; the sensor responds inadequately to the feedback information received by or applied to it; the sensor cannot provide the necessary feedback information or lacks the function of providing it.
Preferably, the causal scenarios related to the control path include:
the control action has been sent by the controller but is not received by the actuator; the control action has been received by the actuator but the actuator does not respond; the actuator responds but the action is not applied to or received by the controlled process; the control action has been sent by the controller but is received improperly by the actuator; the control action has been received properly by the actuator but the actuator responds inadequately; the actuator responds adequately but the action is not applied to the controlled process or is received incorrectly by it; the controller does not send a control action but the actuator or another element responds anyway.
Preferably, the causal scenarios related to the controlled process include:
the control action is applied to or received by the controlled process but the controlled process does not respond; the control action is not applied to or received by the controlled process but the controlled process responds anyway.
Preferably, in step (6), based on the system-level safety requirements, the safety requirements of the configuration items of the complex software system are analyzed and formed as follows:
(71) Taking the system-level safety requirements and the configuration item design requirements of the complex software system as input, performing SFMEA analysis on all data and function items in the configuration item design requirements, and determining the key data, function items, and their failure modes that affect reliability and safety;
(72) Performing SFTA analysis on the SFMEA results to judge whether each failure mode can occur; if it can, determining the bottom failure event corresponding to that failure mode and feeding it back to the SFMEA analysis;
(73) Obtaining the safety requirements of the configuration items of the complex software system through repeated iterative analysis.
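Steps (71) through (73) form an iterative loop: forward SFMEA surfaces failure modes, backward SFTA finds their bottom events, and the bottom events are fed back as new analysis input until no new safety requirements appear. A minimal sketch of that loop follows, with stub analysis functions standing in for the real engineering activities; all names and the string-based encoding are illustrative assumptions.

```python
def sfmea(items):
    """Stub forward analysis: every analyzed item yields one failure mode."""
    return {f"{i} fails" for i in items}

def sfta(failure_mode):
    """Stub fault-tree analysis: each failure mode has one bottom event."""
    return {f"root cause of {failure_mode}"}

def analyze_configuration_item(requirements, max_rounds=2):
    """Iterate steps (71)-(73): SFMEA -> SFTA -> feed bottom events back,
    until no new safety requirements are derived or the round limit hits."""
    safety_requirements = set()
    for _ in range(max_rounds):
        failure_modes = sfmea(requirements)          # step (71)
        bottom_events = set()
        for fm in failure_modes:
            bottom_events |= sfta(fm)                # step (72)
        derived = {f"prevent {e}" for e in bottom_events}
        if derived <= safety_requirements:           # step (73): fixpoint
            break
        safety_requirements |= derived
        requirements = requirements | bottom_events  # feed back to SFMEA
    return safety_requirements

reqs = analyze_configuration_item({"telemetry processing"})
```

The fixpoint test (`derived <= safety_requirements`) is what terminates the "repeated iterative analysis" of step (73) once the two techniques stop producing new requirements.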
Compared with the prior art, the invention has the advantages that:
the present invention utilizes a combination of system and configuration items for security analysis where security is considered a control problem: i.e., accidents, may occur when component failures, external disturbances, and/or abnormal interactions between system components are not adequately addressed. The root cause of the accident is not part failure, but is that proper control is not implemented to restrict the behavior of the system in the running process of the system; and combining the superiority of the system accident model, carrying out system-level security analysis to obtain a dangerous event of a system layer, providing more effective dangerous event input for security analysis of software, and acquiring system-level security requirements. And at the configuration item level, safety analysis is carried out by utilizing a BDA software safety bidirectional analysis technology and an SFTA software fault tree analysis technology. The analysis of the two layers is integrated to obtain more complete system safety requirements, so that the defect that the traditional safety analysis method cannot analyze system accidents caused by abnormal interaction among components is overcome, and the safety of a software system is effectively improved.
Drawings
FIG. 1 is a schematic flow chart of a security analysis method of a complex software system according to the present invention;
FIG. 2 is a schematic diagram of an accident generation mechanism model of the complex software system of the present invention;
FIG. 3 is a schematic diagram of a complex software system risk determination process according to the present invention;
FIG. 4 is a schematic diagram of a safety control structure model of the power measurement and control system of the present invention;
FIG. 5 is a schematic diagram of a cause analysis framework of the present invention.
Detailed Description
The features and advantages of the present invention will become more apparent and clear from the following detailed description of the invention.
The invention provides a complex system security analysis method, as shown in figure 1, comprising the following steps:
s1, constructing a complex software system accident generation mechanism model by means of elements such as a controller, an actuator, a sensor, a controlled process, a software configuration item defect, hardware abnormality, software-hardware interaction abnormality and the like, and analyzing a complex system software accident generation mechanism as shown in FIG. 2; on the basis of defining the complex system accident generation mechanism model, formally describing the complex system accident generation mechanism model.
Specifically, the accident generation mechanism of a complex system is as follows: software configuration item defects, hardware anomalies, abnormal software-hardware interactions, or other configuration-item-level anomalies can cause anomalies in the controller, actuator, sensor, or controlled process of the system-level control structure. During system control, an inadequate control algorithm, a control action sent by the controller but received improperly by the actuator, a sensor failing to provide necessary feedback, a controlled process receiving control actions improperly, and similar faults produce abnormal system control; system control failures then occur and lead to system accidents. The accident generation mechanism model of the complex software system is expressed as a four-tuple: SF = <SCF, ATF, SEF, CPF>
SCF represents a controller anomaly, caused by controller software configuration item defects, controller hardware faults, controller software-hardware interaction faults, and other controller faults.
ATF represents an actuator anomaly, caused by actuator software configuration item defects, actuator hardware faults, actuator software-hardware interaction faults, and other actuator faults.
SEF represents a sensor anomaly, caused by sensor software configuration item defects, sensor hardware faults, sensor software-hardware interaction faults, and other sensor faults.
CPF represents a controlled process anomaly, caused by controlled process software configuration item defects, controlled process hardware faults, controlled process software-hardware interaction faults, and other faults of the controlled process.
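The four-tuple SF = <SCF, ATF, SEF, CPF> can be transcribed directly into a data structure, one anomaly source per component, each collecting the cause kinds named above. The field abbreviations follow the patent; the class names, the `CAUSE_KINDS` strings, and the example entry are illustrative assumptions.

```python
from dataclasses import dataclass, field

# The four cause kinds each anomaly source is attributed to in the text.
CAUSE_KINDS = ("software configuration item defect", "hardware fault",
               "software-hardware interaction fault", "other fault")

@dataclass
class AnomalySource:
    component: str              # controller / actuator / sensor / controlled process
    causes: list = field(default_factory=list)

    def add(self, kind, detail=""):
        assert kind in CAUSE_KINDS   # only the four listed cause kinds
        self.causes.append((kind, detail))

@dataclass
class AccidentMechanism:
    """SF = <SCF, ATF, SEF, CPF> from the description."""
    SCF: AnomalySource          # controller anomaly
    ATF: AnomalySource          # actuator anomaly
    SEF: AnomalySource          # sensor anomaly
    CPF: AnomalySource          # controlled-process anomaly

sf = AccidentMechanism(
    SCF=AnomalySource("controller"),
    ATF=AnomalySource("actuator"),
    SEF=AnomalySource("sensor"),
    CPF=AnomalySource("controlled process"),
)
# Hypothetical recorded cause: a configuration-item defect in the controller.
sf.SCF.add("software configuration item defect", "stale process model state")
```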
S2. Determine system hazards by defining system-level accidents related to control behavior in the software system. Hazard determination comprises four parts: 1) defining losses (accidents); 2) identifying system-level hazards; 3) determining system-level safety constraints; 4) refining the hazards; the detailed process is shown in FIG. 3. System-level accidents that any control action may cause are defined and analyzed to obtain the related system hazards leading to those accidents. Taking the ground software of a power measurement and control system as the safety analysis example, as shown in FIG. 4, system problems during launch can easily cause accidents. The system-level accidents related to the software controller are: personnel injury or death (A-1), system damage (A-2), ground facility damage (A-3), and launch mission failure (A-4). The hazards obtained from the functions of the power measurement and control system and the system-level accident analysis include: the system breaks away from the launch frame (H-1), the system collides with ground facilities (H-2), and the system loses control and falls to the ground (H-3).
S3. Construct the safety control structure model with a multi-level control structure modeling method, using the control structure to describe responsibilities and authority among the system components. The upper layer of the control structure is the controller, which contains a process model. The process model generates instructions through the control algorithm, the current state, and state transitions, and these instructions control the activity of the lower-layer controlled process. The lower layer executes the upper layer's instructions and feeds back execution information; the process model corrects its internal state according to the feedback, and dynamic balance between the controller and the actuator is maintained through the internal state and information feedback. Accidents may occur when the internal state of the process model is inconsistent with the feedback information. A control structure model is built for the ground software of the power measurement and control system, as shown in FIG. 4. The front-end software of the power measurement and control system serves as the interaction hub with the power measurement and control actuators: commands from the back-end software generate control signals for the system, sensors distributed at key parts of the system feed system state information back to the front-end software, and the front-end software processes this information and transmits it to the actuators, forming a closed-loop safety control structure comprising a control path and a feedback path.
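The closed-loop control structure just described can be recorded as a small directed graph in which control edges run down the hierarchy and feedback edges run back up, and closure of the loop can then be checked mechanically. The component names mirror the power measurement and control example; the graph encoding and the reachability check are illustrative assumptions, not part of the patent.

```python
# Edges: (source, destination) -> kind of signal carried.
control_structure = {
    ("back-end software", "front-end software"): "command",
    ("front-end software", "actuator"): "control signal",
    ("actuator", "controlled process"): "actuation",
    ("controlled process", "sensor"): "measured state",
    ("sensor", "front-end software"): "feedback",
}

def is_closed_loop(edges, controller):
    """True if the controller sits on a cycle, i.e. it can reach itself
    via a control path followed by a feedback path."""
    reachable, frontier = set(), {controller}
    while frontier:
        node = frontier.pop()
        for (src, dst) in edges:
            if src == node and dst not in reachable:
                reachable.add(dst)
                frontier.add(dst)
    return controller in reachable

closed = is_closed_loop(control_structure, "front-end software")
```

A check like this catches a model where feedback was never drawn in, which corresponds to the "feedback does not exist in the control structure" causal factor analyzed later.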
S4. Analyze the control actions of the software system and identify unsafe control actions. Control action analysis takes the system-level hazards, sub-hazards, control constraints, and responsibilities as input, defines controller constraints on the basis of the identified unsafe control actions, and finally outputs the unsafe control actions and control constraints. An unsafe control action (UCA) is a control action that may lead to a hazard in certain situations and worst-case environments. A control action may be a safety hazard in the following four cases: 1) the absence of the control action causes a hazard; 2) providing the control action causes a hazard; 3) a potentially safe control action is provided too early, too late, or out of sequence; 4) the control action lasts too long or is stopped too soon (applicable only to continuous control actions, not discrete ones). Taking the key control actions of each controller in the power measurement and control system diagram as examples, the hazardous behaviors identified in the software are given in Table 1:
TABLE 1. Hazardous software behaviors identified by an embodiment of the invention
[Table 1 is provided as images in the original publication; its content is not reproduced here.]
S5. Obtain causal scenarios based on the system-level hazards, sub-hazards, control structure, and unsafe control action analysis. Causal scenarios cover four aspects: 1) scenarios involving unsafe control actions; 2) scenarios involving inadequate feedback or information; 3) scenarios involving the control path; 4) scenarios involving the controlled process, with 21 general factors in total, as shown in FIG. 5. In the causal analysis, potential causal scenarios are generated from the system architecture and application scenario in combination with the general factors. Following the causal analysis framework of the power measurement and control system, the causal scenarios of UCA-1 are refined one by one, yielding 5 potential causal scenarios involving the controller BES, 4 potential causal scenarios involving the sensor SCS, 4 potential control path causal scenarios, and 3 potential controlled process causal scenarios.
In particular, the reasons why the controller may (or may not) provide unsafe control actions include the following four points: faults associated with the controller, an inadequate control algorithm, unsafe control inputs (from other controllers), and an inadequate process model;
inadequate feedback and information-related causal scenarios may include the following: the controller does not receive feedback information sent by the sensor; the sensor does not send feedback information although the information has been received by or applied to the sensor; the feedback information is not received by or applied to the sensor; the feedback information does not exist in the control structure; the sensor responds adequately but the controller receives inadequate feedback information; the sensor responds inadequately to the feedback information received by or applied to it; the sensor cannot provide the necessary feedback information or lacks the function of providing it;
the causal scenarios involving the control path include: the control action has been sent by the controller but is not received by the actuator; the control action has been received by the actuator but the actuator does not respond; the actuator responds but the control action is not applied to or received by the controlled process; the control action has been sent by the controller but is received improperly by the actuator; the control action has been received normally by the actuator but the actuator's response is inadequate; the actuator responds adequately but the control action is not applied to the controlled process or is received incorrectly by it; the controller does not send a control action but the actuator or another element responds anyway;
the causation scenario associated with the controlled process may include: the control actions are applied or received by the controlled process but the controlled process is not responsive; the control actions are applied or received by the controlled process but the controlled process responds improperly; the control actions are not applied or received by the controlled process but the controlled process still responds.
S6. Further analyze the causal scenarios to obtain the system-level safety requirements;
S7. Perform configuration-item-level safety analysis to form the complete software system safety requirements.
Specifically, this comprises the following steps:
S7(a) Software fault tree analysis: establish a fault tree with the software failure modes identified in the failure mode and effects analysis table as top events, further analyze the possible causes of each software failure mode and their logical relationships, and extract the corresponding software safety requirements on that basis;
S7(b) Software safety bidirectional analysis: first, SFMEA is applied to the software design to determine whether abnormal inputs (such as out-of-range values) or abnormal events (such as unexpected suspension) can produce unsafe system behavior, that is, forward analysis determines whether an abnormal input or event may cause an undesired result (fault or failure); these undesired results are then analyzed with SFTA to determine that they cannot occur under the system design, that they can be safely controlled even if they do occur, or to find the set of causes that lead to them.
Further, taking the system-level safety analysis results and the software requirements as input, SFMEA is applied to all data and function items in the software requirements to find the data and function items, and their failure modes, that significantly affect system reliability and safety, that is, all reliability- and safety-critical data, function items, and their failure modes. SFTA is then applied to the SFMEA results to determine whether each failure mode can occur; if it can, the analysis identifies the conditions under which the failure mode occurs, that is, the bottom failure events, which are fed back to the SFMEA for analysis. After repeated iterative analysis, a complete safety analysis result for the power measurement and control ground software is obtained.
The foregoing is merely one specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present invention should be included in the scope of the present invention.
What is not described in detail in the present specification is a well known technology to those skilled in the art.

Claims (9)

1. A safety analysis method for a complex software system, characterized in that the complex software system is applied to an aircraft measurement and control system, and the method comprises the following steps:
(1) Determining a system-level dangerous event according to the system-level accident related to the control behavior of the complex software system;
(2) Constructing a safety control structure model of the complex software system by adopting a multi-level control structure method;
(3) Identifying unsafe control behaviors of the complex software system based on the system-level dangerous event and the complex software system safety control structure model;
(4) Obtaining a cause scene of a system-level accident related to the control behavior of the complex software system according to the unsafe control behavior and the safe control structure model of the complex software system and the system-level dangerous event;
(5) Determining the system-level security requirement of a complex software system according to the cause scene;
(6) Based on the system-level security requirements, the security requirements of the configuration items of the complex software system are analyzed, and the security requirements of the configuration items of the complex software system are formed.
2. The method for analyzing the safety of the complex software system according to claim 1, wherein in step (2), the constructed safety control structure model represents the control relationships among the components of the measurement and control system; the upper layer of the model is the controller, which contains a process model; the process model generates control instructions through a control algorithm, the current state, and state transitions, thereby controlling the activity of the controlled process in the lower layer; and the lower layer is the actuator. The sensor feeds execution information back to the upper layer, the upper-layer process model corrects its internal state according to the feedback, and dynamic balance between the controller and the actuator is maintained through the internal state and the feedback information.
3. The method for analyzing the safety of the complex software system according to claim 2, wherein in step (3), the unsafe control actions of the complex software system are identified as follows:
an unsafe control action is a potential safety hazard in the control behavior and falls into four types: a required control action is not provided, causing a hazard; a control action is provided and causes a hazard; a potentially safe control action is provided but too early, too late, or out of sequence; or a control action lasts too long or is stopped too soon.
4. The method for analyzing the safety of the complex software system according to claim 3, wherein in step (4), the causal scenarios of system-level accidents related to the control behavior of the complex software system specifically include:
causal scenarios related to unsafe control actions of the controller, inadequate feedback and information-related causal scenarios, causal scenarios related to the control path, and causal scenarios related to the controlled process.
5. The method for analyzing the safety of the complex software system according to claim 4, wherein the causal scenarios related to unsafe control actions of the controller include:
faults associated with the controller, an inadequate control algorithm, unsafe control inputs from other controllers, and an inadequate process model.
6. The method of claim 4, wherein the improper feedback and information-related causation scenario comprises:
the controller does not receive the feedback information sent by the sensor; the sensor does not send the feedback information although the feedback information has been received or applied by the sensor; the feedback information is not received by or applied to the sensor; the feedback information does not exist in the control structure; the sensor responds adequately, but the controller receives inadequate feedback information; the sensor responds inadequately to the feedback information received by or applied to the sensor; the sensor is unable to provide the necessary feedback information, or is not designed to provide the necessary feedback information.
7. The method for analyzing the security of the complex software system according to claim 4, wherein the cause scene related to the control path comprises:
the control action has been sent by the controller but is not received by the actuator; the control action has been received by the actuator but the actuator does not respond; the actuator responds but the control action is not applied to, or not received by, the controlled process; the control action has been sent by the controller but is received improperly by the actuator; the control action has been received properly by the actuator but the actuator responds improperly; the actuator responds properly but the control action is applied improperly to, or received improperly by, the controlled process; the controller has not sent a control action but the actuator or another element responds anyway.
8. The method of claim 4, wherein the causation scenario associated with the controlled process comprises:
the control behavior is applied to or received by the controlled process, but the controlled process does not respond; the control behavior is not applied to or received by the controlled process, but the controlled process responds anyway.
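Taken together, claims 4 to 8 enumerate a two-level taxonomy of causation scenes. A hypothetical data structure grouping them might look as follows; the category keys and scenario strings are condensed paraphrases of the claim language, not normative text:

```python
# Condensed two-level taxonomy of causation scenes from claims 4-8
CAUSAL_SCENARIOS: dict[str, list[str]] = {
    "controller": [                       # claim 5
        "controller fault",
        "improper control algorithm",
        "unsafe control input from another controller",
        "improper process model",
    ],
    "feedback": [                         # claim 6
        "feedback not sent by the sensor",
        "feedback sent but not received by the controller",
        "sensor responds adequately but controller receives inadequate feedback",
        "sensor responds inadequately to the feedback it receives",
        "feedback path missing from the control structure",
    ],
    "control_path": [                     # claim 7 (condensed)
        "action sent but not received by the actuator",
        "action received but actuator does not respond",
        "actuator responds but action never reaches the controlled process",
        "actuator responds although no action was sent",
    ],
    "controlled_process": [               # claim 8
        "action applied or received but process does not respond",
        "action not applied or received but process responds anyway",
    ],
}

total = sum(len(v) for v in CAUSAL_SCENARIOS.values())
print(total)  # number of scenario entries across the four categories
```

Enumerating the taxonomy this way makes step (4) of the method mechanical: each identified unsafe control action is checked against every scenario entry in turn.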
9. The method for analyzing the security of a complex software system according to any one of claims 1 to 8, wherein in the step (7), the security requirements of the configuration items of the complex software system are analyzed based on the system-level security requirements, so as to form the configuration-item security requirements of the complex software system, and the method specifically comprises:
(71) Taking the system-level security requirement and the configuration item design requirement of the complex software system as input, carrying out SFMEA analysis on all data and function items in the configuration item design requirement, and determining key data, function items and failure modes thereof affecting the reliability and the security;
(72) Carrying out SFTA analysis on the SFMEA analysis result, judging whether each failure mode can occur, and if so, determining the bottom failure event corresponding to the failure mode and feeding the result back to the SFMEA analysis;
(73) Obtaining the security requirements of the configuration items of the complex software system through repeated iterative analysis.
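Steps (71) to (73) describe an iterative loop between SFMEA and SFTA. A toy sketch of that control flow is given below; the item names, the failure-mode generator and the credibility test are invented placeholders, not the patented analysis itself:

```python
def sfmea(items: list[str]) -> dict[str, list[str]]:
    """Toy SFMEA: map each critical data/function item to candidate failure modes."""
    return {item: [f"{item}: loss", f"{item}: corruption"] for item in items}


def sfta_credible(mode: str) -> bool:
    """Toy SFTA check: decide whether a failure mode can actually occur."""
    return "corruption" not in mode  # pretend corruption modes are ruled out


def derive_requirements(items: list[str]) -> set[str]:
    """Iterate SFMEA/SFTA over the pending items; credible bottom events
    become configuration-item security requirements. In a real analysis,
    new items discovered by SFTA would be appended back onto `pending`."""
    requirements: set[str] = set()
    pending = list(items)
    while pending:
        item = pending.pop()
        for mode in sfmea([item])[item]:
            if sfta_credible(mode):
                requirements.add(f"mitigate {mode}")
    return requirements


reqs = derive_requirements(["telemetry_frame", "thrust_command"])
print(sorted(reqs))
```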
CN202211716742.5A 2022-12-29 2022-12-29 Safety analysis method for complex software system Pending CN116149999A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211716742.5A CN116149999A (en) 2022-12-29 2022-12-29 Safety analysis method for complex software system


Publications (1)

Publication Number Publication Date
CN116149999A true CN116149999A (en) 2023-05-23

Family

ID=86351908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211716742.5A Pending CN116149999A (en) 2022-12-29 2022-12-29 Safety analysis method for complex software system

Country Status (1)

Country Link
CN (1) CN116149999A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117555217A (en) * 2024-01-09 2024-02-13 华侨大学 Design method and device of safety brake control system oriented to redundant structure
CN117555217B (en) * 2024-01-09 2024-04-12 华侨大学 Design method and device of safety brake control system oriented to redundant structure

Similar Documents

Publication Publication Date Title
Demetriou et al. Incipient fault diagnosis of dynamical systems using online approximators
Goupil et al. Advanced diagnosis for sustainable flight guidance and control: The European ADDSAFE project
Xu et al. Analysis of operator support method based on intelligent dynamic interlock in lead-cooled fast reactor simulator
CN116149999A (en) Safety analysis method for complex software system
Shin et al. Performance analysis on fault tolerant control system
CN113093706A (en) Flight control system actuator tiny fault diagnosis method based on comprehensive observer
EP4036018A1 (en) Method of testing a system model
CN108595959B (en) AADL model security evaluation method based on deterministic stochastic Petri network
Wang et al. Reliability analysis for flight control systems using probabilistic model checking
Zibaei et al. Diagnosis of safety incidents for cyber-physical systems: A uav example
CN104699067A (en) System fault comprehensive declare processing method
EP4068031A1 (en) Health assessment method for an aircraft system and apparatus therefor
CN113703419B (en) Automatic testing method and device for redundancy management algorithm of flight control system
Sülek et al. Computation of supervisors for fault-recovery and repair for discrete event systems
Sklyar Application of reliability theory to functional safety of computer control systems
Belcastro et al. On the validation of safety critical aircraft systems, part i: An overview of analytical & simulation methods
Alwi et al. Second order sliding mode observers for the ADDSAFE actuator benchmark problem
Zhao et al. Fault detection and diagnosis for sensor in an aero-engine system
Ma et al. A novel RFDI-FTC system for thrust-vectoring aircraft undergoing control surface damage and actuator faults during supermaneuverable flight
Zhang et al. Reliability technology using FTA, FMECA, FHA and FRACAS: A review
Jianzhong et al. Application Research of Markov in Flight Control System Safety Analysis
US20150205271A1 (en) Automated reconfiguration of a discrete event control loop
Li et al. Reliability monitoring of fault tolerant control systems with demonstration on an aircraft model
You et al. Man-machine interaction reliability modeling method based on Markov model
Ding et al. An Airborne Software CMA Application Method Based on ARP4761

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination