WO2024049474A1 - Continuous safety assessment with graphs - Google Patents

Continuous safety assessment with graphs

Info

Publication number
WO2024049474A1
WO2024049474A1 (PCT/US2022/075757)
Authority
WO
WIPO (PCT)
Prior art keywords
safety
engineering
design
models
architecture
Prior art date
Application number
PCT/US2022/075757
Other languages
English (en)
Inventor
Oswin Noetzelmann
Thomas Waschulzik
Original Assignee
Siemens Mobility GmbH
Siemens Corporation
Priority date
Filing date
Publication date
Application filed by Siemens Mobility GmbH and Siemens Corporation
Priority to PCT/US2022/075757
Publication of WO2024049474A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/15 Vehicle, aircraft or watercraft design
    • G06F 2111/00 Details relating to CAD techniques
    • G06F 2111/02 CAD in a network environment, e.g. collaborative CAD or distributed simulation
    • G06F 2111/04 Constraint-based CAD

Definitions

  • Requirements come from a customer.
  • Requirements can include a list of functions that a particular system needs to perform.
  • Requirements can also indicate details of an environment of the system that is being designed.
  • Requirements often include safety aspects related to the system. Such safety aspects or safety requirements are often extracted by a safety engineer from regulations and procedures.
  • TCMS Train Control and Monitoring System
  • a safety computing system can automatically generate reports and continuously update such reports, so as to define the current state of safety engineering at any given time.
  • a safety computing system includes one or more processors and a memory having a plurality of application modules stored thereon.
  • the safety computing system can be configured to generate safety states associated with an engineering design.
  • the system can obtain a safety requirements specification.
  • the safety requirements specification can define safety constraints based on project requirements and standards associated with the engineering design.
  • the system can generate a plurality of engineering models. Each of the plurality of engineering models can define a digital representation of respective engineering domains of the engineering design.
  • the system can build a graph (e.g., semantic model) of the engineering design.
  • the graph or semantic model can define nodes related to safety and connections between the nodes related to safety so as to link the plurality of engineering models with each other and the safety requirements specification.
  • the system can generate the safety states associated with the engineering design at any time during the engineering design.
  • the safety states can indicate a status associated with the safety constraints relative to each of the engineering domains of the engineering design.
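The four steps above (obtain a safety requirements specification, generate engineering models, build the graph, derive per-domain safety states) can be sketched as follows. This is a minimal illustrative sketch only; the class and function names are not from the publication.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyConstraint:
    cid: str            # constraint identifier, e.g. from the SRS
    domain: str         # engineering domain the constraint applies to
    satisfied: bool = False

@dataclass
class EngineeringModel:
    domain: str
    artifacts: list = field(default_factory=list)

def generate_safety_states(constraints, models):
    """Return a status per engineering domain: 'complete' when every
    constraint for that domain is satisfied, otherwise 'open'."""
    states = {}
    for model in models:
        applicable = [c for c in constraints if c.domain == model.domain]
        states[model.domain] = (
            "complete" if applicable and all(c.satisfied for c in applicable)
            else "open"
        )
    return states

constraints = [
    SafetyConstraint("SR-1", "electrical", satisfied=True),
    SafetyConstraint("SR-2", "mechanical", satisfied=False),
]
models = [EngineeringModel("electrical"), EngineeringModel("mechanical")]
states = generate_safety_states(constraints, models)
```

Because the states are computed from the current constraint links, re-running the function at any time yields the current safety state, matching the "at any time during the engineering design" property described above.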
  • FIG. 1 is a block diagram of safety computing system configured to dynamically connect a safety requirements model with safety ontologies and functional, logical, and physical system models, in accordance with an example embodiment.
  • FIG. 2 illustrates example operations that can be performed by the system of FIG. 1, in accordance with an example embodiment.
  • FIG. 3 illustrates an example safety model that can be generated by the system of FIG. 1, in accordance with an example embodiment.
  • FIG. 4 shows further example operations that can be performed by the system using safety models, for instance the safety model of FIG. 3, in accordance with another example embodiment.
  • FIG. 5 shows an example of a computing environment within which embodiments of this disclosure may be implemented.
  • SRS safety requirements specification
  • Example sources for identifying safety-relevant requirements include hazard trees and fault tree analysis (FTA).
  • FTA fault tree analysis
  • the requirements are often learned by the engineers performing the product or system design and considered by each engineer during the design process.
  • documentation of the implementation of safety features that are compiled is typically another manual task, in some cases, performed by a dedicated person responsible for parsing the regulatory processes.
  • the complete process is usually manual and often requires the engineers as well as a person responsible for inspections.
  • An inspection person might create connections from the safety requirements to the implemented design. The inspection person might also verify that safety requirements are met in a given implementation by manually going through each safety requirement one by one to check or update the status.
  • an example safety computing system 100 can be configured to dynamically connect a safety requirements model with safety ontologies and functional, logical, and physical system models, so as to define a connected knowledge graph model or module 102.
  • the various system designs can be queried or otherwise evaluated in a domain-specific manner, such that the system designs can be dynamically evaluated for safety-relevant flaws, inconsistencies, violations, or the like.
  • the safety ontologies, and thus the safety computing system 100, can define a semantic safety model or module 104 and an application safety architecture 106.
  • the safety computing system 100 can include various functional models or modules, for instance a control software model or module 108, which can define a hazard tree.
  • the safety computing system 100 can further include various logical and physical system models or modules, such as an electrical engineering model or module 110, a wiring and mechanical model or module 112, a bill of material structure model or module 114, and other sources or models 116.
  • the safety computing system 100 can include one or more processors and memory having stored thereon applications, agents, and computer program modules including, for example, the control software module 108, the electrical engineering module 110, the wiring and mechanical module 112, the bill of material structure module 114, the knowledge graph module 102, the safety module 104, the mobility safety architecture 106, and a safety proofing module 118.
  • Each of the control software module 108, the electrical engineering module 110, the wiring and mechanical module 112, the bill of material structure module 114, the safety module 104, the mobility safety architecture 106, and the safety proofing module 118 can be communicatively coupled to the knowledge graph module 102.
  • the safety proofing module 118 can be configured to query the knowledge graph module 102.
  • program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 1 are merely illustrative and not exhaustive, and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module.
  • various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 1 and/or additional or alternate functionality.
  • functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 1 may be performed by a fewer or greater number of modules.
  • program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth.
  • any of the functionality described as being supported by any of the program modules depicted in FIG. 1 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
  • safety engineering is performed based on defining or obtaining project requirements, at 202.
  • system safety requirements 101 that are specific to the given project can be extracted or generated, for instance by safety engineers 201.
  • the project requirements are assessed to determine which safety standards and procedures are applicable to the project, so as to define project-specific safety requirements 101a.
  • the project requirements, and thus the safety requirements 101a, can define an input to the safety proofing module 118 that can continuously assess safety of the respective system (perform safety assessment), for example, at 214.
  • the initial input defined by the safety requirements 101 can change during various phases of an engineering project.
  • changes to the safety requirements 101 can impact various design domains of the engineering project.
  • system engineering is performed by systems engineers so as to generate the systems architecture of a given system.
  • the systems engineering of a given product or system defines a model-based systems engineering (MBSE) approach, in which the product or system is organized into different aspects, for instance according to functionalities or logical and physical compositions.
  • the system or product can further be organized according to connections and interfaces defined by the various compositions of the system, so as to define the systems architecture.
  • the systems architecture generated at 204 can define an input for performing the safety assessment, at 214.
  • the system or product engineering process involves multiple engineering disciplines, for example mechanical, electrical, and automation, performed in parallel with each other.
  • automation engineering can be performed by automation engineers 205 (at 206)
  • electrical engineering can be performed by electrical engineers 207 (at 208)
  • mechanical engineering can be performed by mechanical engineers 209 (at 210).
  • Performance of each of the respective engineering disciplines can involve its own set of engineering tools, such as computer programs or simulations having respective digital results.
  • mechanical engineering results might include bills of material (e.g., the bill of material structure model 114), CAD models (e.g., the wiring and mechanical model 112), diagrams or physical simulations, or the like.
  • the results of the electrical engineering performed at 208 might include, by way of example and without limitation, electrical drawings (e.g., the wiring and mechanical model 112), circuit diagrams, electrical simulations (e.g., the electrical engineering model 110), or the like.
  • the automation engineering results might include, for example and without limitation, control programs, I/O lists or configurations 109, hardware configurations, or the like.
  • the automation engineering performed at 206 can result in the I/O lists 109, control software or PLC code 111, and safety control and service documentation 113.
  • the engineering processes performed at 204, 206, 208, and 210 can take significant time and can include multiple iterations with information flowing between the disciplines.
  • the results of each of the engineering processes performed at 204, 206, 208, and 210 can define respective engineering artifacts.
  • the engineering artifacts can include the I/O lists 109, the control software 111, the safety control and service documentation 113, the electrical engineering models 110, the wiring and mechanical models 112, the bill of material structure models 114, and the other sources or models 116.
  • the computing system 100 can combine the engineering artifacts with the safety architecture 106, so as to define the semantic safety model 104.
  • the knowledge graph 102 can be generated using the safety model 104 and the safety architecture 106, such that safety assessments can be continuously performed (at 214) using the knowledge graph 102.
  • the safety assessments can determine the state of a given product or system with respect to its safety implementation.
  • the safety computing system 100 includes the knowledge graph 102 that defines information from the different engineering domains in semantic relation, and in relation to the safety architecture 106 and safety requirements 101.
  • the engineering artifacts include data from computer-managed repository systems and domain-specific engineering tool sets.
  • the Siemens NX CAD tool set and the Siemens Teamcenter PLM/PDM can be sources of the engineering artifacts.
  • a semantic representation of the tool data model can be used to allow natively semantic connections between the semantic safety architectures, engineering, and other information.
  • only a representation of a transformation of the tool-specific data model is available (e.g., an export in PLMXML or office documents and spreadsheets). Achieving a semantic representation of those representations allows the system 100 to integrate and link the information.
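The transformation described above — lifting a tool-specific export into a semantic representation that can be linked — can be sketched as a flat-export-to-triples conversion. The row structure and key names here are illustrative stand-ins for a PLMXML or spreadsheet export, not an actual tool format.

```python
def to_triples(export_rows, id_key="id"):
    """Lift a flat export (a list of dicts, one per artifact) into
    simple subject-predicate-object triples so the information can be
    integrated and linked with other semantic models."""
    triples = []
    for row in export_rows:
        subject = row[id_key]
        for key, value in row.items():
            if key != id_key:
                triples.append((subject, key, value))
    return triples

# Illustrative export row standing in for one CAD artifact record.
rows = [{"id": "body-001", "type": "TrainBody", "material": "steel"}]
triples = to_triples(rows)
```

A real integration would map keys to ontology properties rather than reusing the export's column names verbatim; the sketch only shows the structural lift.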
  • engineering domains can define respective modeling languages that include data or information that can be transferred into a semantic representation.
  • Example modeling languages include, without limitation, SysML (Systems Modeling Language), SysML2, BIM (Building Information Model), and GIS (Geographic Information System).
  • SysML Systems Modeling Language
  • SysML2 Systems Modeling Language v2
  • BIM Building Information Model
  • GIS Geographic Information System
  • semantic representations are already created and ready to use.
  • semantic representations for a given domain specific language can be created.
  • the semantic safety model 104 can define an ontology that describes various entities involved in the given safety engineering and safety assessment.
  • an example semantic safety model 300 is presented, though it will be understood that the semantic safety model 104 can be alternatively implemented; all such semantic safety models are contemplated as being within the scope of this disclosure.
  • semantic safety models are specific to a particular domain, such that the safety architecture 106 defines a plurality of safety models 104.
  • the safety architecture 106 can define a plurality of safety models 104, or a combined semantic safety model, that can define semantic representations of domain specific safety aspects, safety standards, and semantic safety models as previously described.
  • the computing system 100 can perform semantic modeling of domain-specific safety aspects (at 216), based on safety standards 201 and domain-specific safety requirements 101b.
  • the resulting semantic model can be further enriched with the project specific requirements 101a, so as to define the safety architecture 106.
  • the safety architecture 106 can also contain rules for linking the safety architecture to the engineering design artifacts, at 212.
  • a linking rule might stipulate: “All controller representations, as detected by their classification in the automation and electrical designs, are controller targets as defined in the safety architecture.” Such links can be used when a safety assessment is performed (at 214), for example, to assess the state of the associated software programs of the controllers in terms of safety.
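The quoted linking rule can be sketched as a classification test over graph nodes: every node classified as a controller in the automation or electrical design gets linked to a controller-target concept from the safety architecture. The node properties, edge representation, and the `safety:ControllerTarget` name are all illustrative assumptions.

```python
def apply_linking_rule(nodes, edges):
    """nodes: {node_id: {"classification": ..., "domain": ...}}
    Appends an is_a edge for every controller detected in the
    automation or electrical design, per the linking rule."""
    for node_id, props in nodes.items():
        if (props.get("classification") == "controller"
                and props.get("domain") in ("automation", "electrical")):
            edges.append((node_id, "is_a", "safety:ControllerTarget"))
    return edges

nodes = {
    "plc-7": {"classification": "controller", "domain": "automation"},
    "relay-3": {"classification": "relay", "domain": "electrical"},
}
edges = apply_linking_rule(nodes, [])
```

A later safety assessment (at 214) could then follow the added `is_a` edges to find every controller whose software programs must be checked.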
  • the safety proofing module 118, using the knowledge graph module 102, can perform safety assessments (at 214), so as to generate one or more safety reports 120, which can include domain-specific safety reports 120a.
  • the safety assessment can be based on a semantic model, which can include a systems engineering model, domain artifacts from each engineering discipline involved in a given design, the linked safety model 104, and the linked safety architecture 106.
  • domain-specific algorithms or semantic reasoning principles are applied to the combined model, for instance at 216.
  • the semantic model can be queried for completion in terms of safety requirement fulfillment and safety architecture referencing.
  • the safety architecture 106 can be evaluated using standard semantic reasoning, so as to identify any logical shortcomings. After any shortcomings are addressed, for instance by safety officers 113, the semantic model can be queried (at 214) for the completion of the safety architecture implementation and safety standard implementation. The query might surface inconsistencies or violations. Such inconsistencies or violations can be recorded in the associated safety state or safety evaluation 216. The results of the query can be recorded, reported, and/or archived for further semantic processing.
  • the semantic queries to the knowledge graph 102, in particular to safety state nodes and the model defined by the knowledge graph 102, result in a given safety report 120 that details the safety state in an optimized, human-readable form.
  • the report 120 can be presented in a tabular form, textual form, graphical form, or a combination thereof.
  • the safety assessment at 214 is performed each time a change to any of the input domains is detected. Changes can be detected by the safety computing system 100 by input interception, CRC hashes, or the like. Additionally, or alternatively, the safety assessment can be performed at 214 in response to a user querying the safety proofing module 118 for the latest safety state.
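The change-detection trigger described above can be sketched as a digest per input domain: the assessment re-runs only for domains whose artifact bytes changed. The publication mentions CRC hashes; SHA-256 is used here purely for convenience, and the function names are illustrative.

```python
import hashlib

def detect_changed_domains(artifacts, known_digests):
    """artifacts: {domain: bytes of the current artifact export}.
    Returns the domains whose content changed since the last call and
    updates known_digests in place."""
    changed = []
    for domain, payload in artifacts.items():
        digest = hashlib.sha256(payload).hexdigest()
        if known_digests.get(domain) != digest:
            changed.append(domain)
            known_digests[domain] = digest
    return changed

digests = {}
first = detect_changed_domains({"electrical": b"rev-1"}, digests)   # new input
second = detect_changed_domains({"electrical": b"rev-1"}, digests)  # unchanged
third = detect_changed_domains({"electrical": b"rev-2"}, digests)   # changed
```

A caller would invoke the safety assessment (at 214) for exactly the domains the function returns, and separately on demand when a user queries the safety proofing module 118.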
  • semantic safety models, for instance the example safety model instance 300
  • the model instance 300 can further include various data, for instance a design simulation 301 and a design artifact 303, that can define data from the engineering processes illustrated in FIG. 2.
  • the design simulation 301 and design artifact 303 can define a crash test simulation and a train body model, respectively, used in the simulation.
  • a safety relevant simulation 304 and safety relevant design artifact 302 can represent these instances in the combined safety/engineering model generated at 212 (see FIG. 2).
  • the safety requirement 101 can define an instance of the project specific safety requirements 101a shown in FIG. 2.
  • the safety requirements 101a might define the crash parameters to withstand a crash at a given speed for the specific train project.
  • a safety architecture link 305 can represent the instance of linking with the safety architecture model 106.
  • a train body safety architecture can specify various parameters (e.g., limits, tolerances, train body type standards, etc.).
  • a safety audit 308 can represent an instance of a safety audit, demonstrating a mechanism for persisting the information from safety audits.
  • an example safety audit is a crash test audit.
  • a safety requirement state 306 can represent the success or failure of the associated safety requirement fulfillment according to the linked design state.
  • a safety authority 312 can represent an example instance of a regulatory body (e.g., ETSC) requiring the linked audit.
  • ETSC regulatory body
  • a safety metric 310 is an example instance of collecting/aggregating information from various safety requirement states, such as a "Mechanical Safety Summary for Train Body," for example.
  • the model 300 can also indicate an engineering or design artifact 302 that is relevant, from a safety perspective, to the safety requirement 101.
  • Safety relevant generally refers to information being relevant for a safety requirement, for example a requirement from a safety standard.
  • the relation is_relevant is used to denominate completeness and information relevance (e.g., for an audit or determining safety state).
  • An example for a safety relevant simulation artifact is a crash test simulation performed to the specifications of a safety standard.
  • An example for a safety relevant design artifact is the train body model used for the crash simulation.
  • the model 300 can also indicate a simulation 304 that is relevant, from a safety perspective, to the safety requirement 101. In some cases, not every design simulation has such a representation, as not all design simulations are safety relevant.
  • the semantic safety model 300 can indicate safety requirement states 306 associated with the safety requirement 101. Example states include, without limitation: satisfies the requirement fully, satisfies the requirement partially, or does not satisfy the requirement. In some cases, additional details are attached to the states. By way of example, and without limitation, additional details might include a measurement related to whether the requirement was met (e.g., crash deformation tolerance exceeded by 5 inches). Thus, the states 306 can define measurements that are used in safety audits 308.
  • the safety audits 308 might require a specific number of safety requirements 101 to be met. Furthermore, the safety audits 308 may contain safety metrics 310, which can refer to the related safety requirement states 306, as indicated by the model 300. The safety audits 308 may be required by various safety authorities 312 (e.g., ETSC, TSA, OSHA, etc.), as also indicated by the model 300.
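The relations among the instances of FIG. 3 described above can be encoded as labeled edges; node names follow the reference numerals, but the exact relation names below are illustrative assumptions, not the publication's vocabulary.

```python
# Edge list sketch of the FIG. 3 instance model: design data is linked
# to safety-relevant representations, which fulfill a requirement whose
# state feeds audits and metrics required by an authority.
model_300 = [
    ("design_artifact_303", "is_relevant", "safety_relevant_artifact_302"),
    ("design_simulation_301", "is_relevant", "safety_relevant_simulation_304"),
    ("safety_relevant_artifact_302", "fulfills", "safety_requirement_101"),
    ("safety_requirement_101", "has_state", "safety_requirement_state_306"),
    ("safety_requirement_state_306", "used_in", "safety_audit_308"),
    ("safety_audit_308", "required_by", "safety_authority_312"),
    ("safety_metric_310", "aggregates", "safety_requirement_state_306"),
    ("safety_requirement_101", "linked_to", "safety_architecture_link_305"),
]

def neighbors(graph, node, relation):
    """Follow one labeled relation from a node."""
    return [o for s, r, o in graph if s == node and r == relation]
```

Queries over this structure (e.g., "which states feed which audits") are then simple edge traversals.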
  • the safety assessment performed at 214 can include example operations 400 performed by the safety computing system 100. As described herein, the safety assessments can be performed using a semantic representation of various multi-disciplinary engineering artifacts, the safety architectures, and any additional domain-specific safety requirements and models, such as domain-specific languages, engineering tool specific representations, and the like.
  • the system 100 can apply various domain-specific algorithms and reasoning that define various inter-safety facts 401 to a combined semantic model, so as to define an enriched semantic or safety model 403.
  • the enriched semantic model or safety model 403 can contain additional or more complete facts and additional or more complete links (between different portions of the model 403) as compared to the original combined semantic model.
  • reasoning can be applied to the combined semantic model so as to generate or determine additional safety facts that were not included in the original model.
  • Semantic reasoning generally includes reasoning and inference being applied to a knowledge graph data structure, so as to generate results that can enrich the knowledge in the graph. An example is inferring that a train locomotive design is safety relevant after recording that the body design is safety relevant, and that the body design is part of the locomotive design.
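The inference example above (the body design is safety relevant, the body is part of the locomotive, hence the locomotive design is safety relevant) can be sketched as one forward-chaining rule applied to triples until a fixpoint. The triple vocabulary is illustrative.

```python
def infer_safety_relevance(triples):
    """Propagate the safety_relevant marker along part_of edges:
    if X is safety relevant and X is part_of Y, then Y is safety
    relevant. Repeats until no new facts are derived."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        for part, rel, whole in list(triples):
            if rel == "part_of" and (part, "is", "safety_relevant") in triples:
                fact = (whole, "is", "safety_relevant")
                if fact not in triples:
                    triples.add(fact)
                    changed = True
    return triples

facts = {
    ("body_design", "is", "safety_relevant"),
    ("body_design", "part_of", "locomotive_design"),
}
enriched = infer_safety_relevance(facts)
```

The fixpoint loop matters: relevance derived for the locomotive can itself propagate further up a deeper part_of chain in a single call.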
  • the semantically represented safety architectures can be queried, for example, to evaluate compliance with respective safety models or architectures.
  • the knowledge graph 102 can be queried to complete a safety architecture ontology.
  • the safety model 403 can be queried for ontology connection to the safety architecture.
  • an Enriched Safety Model can contain links between the general Safety Model, the domain specific Safety Architecture, and the design artifacts (as shown in the example in FIG. 3). Those links can be utilized in queries to assess which requirements require which types of Design Artifacts.
  • a query can be used to establish additional links between the general Safety Model and the domain specific Safety Architecture, for example to denote overlaps in modeling (e.g., both have elements for crash requirements).
  • a query analyzing the knowledge graph 404 may also check directly whether a standard safety architecture is implemented completely, correctly, and consistently. This query then reports directly to the safety architecture inconsistencies/violations 405. For example, proceeding directly from 404 to 405 can occur when there is no flexibility to address changing safety models due to changing safety standards.
  • the completeness of safety facts can be checked according to the safety models.
  • Checking the completeness can refer to different mechanisms. For example, checking the completeness can include comparing the ontology of the knowledge graph to the instance model and noting missing links/nodes. Alternatively, or additionally, checking the completeness can include comparing two instance models, for instance one being a reference model and one being the compared target. As another example, a special architecture build of I/Os including a correct wiring pattern can be checked to ensure the architecture is correctly implemented in the associated electrical model.
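The second completeness mechanism above — comparing a reference instance model against a compared target — can be sketched as a set difference over links. The wiring-pattern link names are illustrative.

```python
def missing_links(reference_model, target_model):
    """Return links present in the reference instance model but absent
    from the compared target, i.e. the completeness gaps."""
    return sorted(set(reference_model) - set(target_model))

# Reference: the architecture's required I/O wiring pattern.
reference = [("io_block_1", "wired_to", "controller_a"),
             ("io_block_2", "wired_to", "controller_a")]
# Target: what the electrical model actually implements.
target = [("io_block_1", "wired_to", "controller_a")]
gaps = missing_links(reference, target)
```

The ontology-versus-instance variant works the same way once required links are expanded from the ontology into concrete expected links.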
  • if a software module is reading these I/Os, thereby proving that consistent electrical states exist during runtime, that module can also ensure that operational warnings are sent during runtime if an error occurs, and that the operational warning is correctly included in the handbook for the operator.
  • each statement in a given safety model should have a corresponding fact in the semantic safety architecture.
  • the completeness of the links between the safety architectures and the safety models can be checked.
  • each statement in a given safety model should have a link to a corresponding fact in the safety architecture.
  • Statements can also be checked by comparing the field values of linked safety facts from the architecture and the safety models.
  • text or multi-value fields can be compared to each other, and numbers and ranges can be compared to each other.
  • date fields can be checked for correct date values or ranges.
  • complex domain-specific facts in the architecture can be compared with domain-specific algorithms. In some cases, the algorithms are provided during architecture planning and attached to the semantic model.
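The field-by-field statement checks described above (text equality, numeric ranges, date ranges) can be sketched with a small dispatch function. The `spec` structure is an assumption for illustration, not a format from the publication.

```python
from datetime import date

def check_field(value, spec):
    """Compare one field value from a safety fact against the linked
    specification: text for equality, numbers against a range, and
    dates against an allowed date range."""
    kind = spec["kind"]
    if kind == "text":
        return value == spec["expected"]
    if kind == "range":
        return spec["low"] <= value <= spec["high"]
    if kind == "date_range":
        return spec["earliest"] <= value <= spec["latest"]
    raise ValueError(f"unknown field kind: {kind}")

ok_text = check_field("steel", {"kind": "text", "expected": "steel"})
ok_num = check_field(4.2, {"kind": "range", "low": 0.0, "high": 5.0})
ok_date = check_field(date(2022, 6, 1),
                      {"kind": "date_range",
                       "earliest": date(2022, 1, 1),
                       "latest": date(2022, 12, 31)})
```

Complex domain-specific facts would instead dispatch to an attached algorithm rather than one of these built-in comparisons.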
  • the system 100 can perform semantic reasoning over the safety architecture model, so as to generate an enriched safety architecture model.
  • domain-specific algorithms can be executed against the safety architecture and may add additional facts or links to the safety architecture. It is recognized herein that executing additional algorithms from different domains enables the system 100 to support various applications and domains.
  • the combined, enriched semantic safety model can be queried for compliance with the safety architecture implementation.
  • the system 100 can perform semantic queries for safety architecture implementation completeness.
  • the system 100 can perform semantic queries for safety standard implementation and domain compliance.
  • each statement in the safety architecture can be checked to determine whether it has a respective corresponding fact in the semantic model, thereby verifying the completeness of safety facts according to the safety architectures.
  • each safety statement in the safety architecture can be checked to determine whether it has a link to a satisfying fact in the semantic model, thereby verifying the completeness of the links between the safety architectures and the semantic models.
  • text or multi-value fields can be compared to each other, and numbers and ranges can be compared to each other.
  • complex domain-specific facts in the architecture can be compared with domain-specific algorithms. In some cases, the algorithms are provided during architecture planning and attached to the semantic model.
  • the queries at 410 and 412 can define check results that result from the queries.
  • the results can include inconsistencies or violations 405 associated with the safety architecture, inconsistencies or violations 407 associated with the safety ontology, and inconsistencies or violations 409 associated with safety standards or domains.
  • the check results can be logically linked to a safety state 411.
  • the inconsistencies or violations 405, 407, and 409 can be caused by various shortcomings, for instance a missing or faulty safety implementation, faulty semantic enrichment processes, missing data, incomplete safety models, or other causes.
  • the cause of each of the inconsistencies or violations 405, 407, and 409 can also be recorded in the safety state 411.
  • the safety state 411 defines a report, an explorable model, or other format, which can be provided to a user (e.g., engineer) who can address any issues.
  • the operations 400 can be repeated in response to an event or query, for instance responsive to a state change in the semantic model, and any number of safety states 411 can be generated as the engineering project matures.
  • a safety engineer can track or follow the project completeness with respect to safety in a very detailed manner. For example, an engineer can use the complete set of generated safety states 411 to retrospectively inspect the safety state at any point of time in the project.
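The retrospective inspection described above — recovering the safety state at any past point in time from the set of generated states 411 — can be sketched as a timestamped history with a most-recent-at-or-before lookup. The class is an illustrative sketch, assuming states are recorded in increasing timestamp order.

```python
import bisect

class SafetyStateHistory:
    """Append-only record of generated safety states, queryable for
    the state in effect at any past instant."""

    def __init__(self):
        self._timestamps = []   # must be appended in increasing order
        self._states = []

    def record(self, timestamp, state):
        self._timestamps.append(timestamp)
        self._states.append(state)

    def state_at(self, timestamp):
        """Return the most recent state recorded at or before the
        given timestamp, or None if nothing was recorded yet."""
        i = bisect.bisect_right(self._timestamps, timestamp)
        return self._states[i - 1] if i else None

history = SafetyStateHistory()
history.record(1, {"electrical": "open"})
history.record(5, {"electrical": "complete"})
```

With every safety state 411 persisted this way, a safety engineer can replay project safety maturity without re-running past assessments.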
  • embodiments described herein can connect multiple engineering disciplines that have safety-critical requirements to related complex designs. For example, engineers can continuously monitor the state of safety in a given engineering project. The maturity of safety-relevant features can be assessed automatically, in accordance with various embodiments, so as to reduce effort and errors associated with manually performing safety assessments and audits.
  • fully connected digital twins (engineering models) of various engineering processes can be used to automatically evaluate the state of safety engineering during various engineering processes, so as to reduce efforts (e.g., time) by about 90%.
  • FIG. 5 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented.
  • a computing environment 600 includes a computer system 610 that may include a communication mechanism such as a system bus 621 or other communication mechanism for communicating information within the computer system 610.
  • the computer system 610 further includes one or more processors 620 coupled with the system bus 621 for processing the information.
  • the safety computing system 100 may include, or be coupled to, the one or more processors 620.
  • the processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device.
  • CPUs central processing units
  • GPUs graphical processing units
  • a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
  • a processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth.
  • processor(s) 620 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like.
  • the microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets.
  • a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
  • a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
  • a user interface comprises one or more display images enabling user interaction with a processor or other device.
  • the system bus 621 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 610.
  • the system bus 621 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth.
  • the system bus 621 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
  • the computer system 610 may also include a system memory 630 coupled to the system bus 621 for storing information and instructions to be executed by processors 620.
  • the system memory 630 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 631 and/or random access memory (RAM) 632.
  • the RAM 632 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the ROM 631 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • system memory 630 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 620.
  • a basic input/output system 633 (BIOS) containing the basic routines that help to transfer information between elements within computer system 610, such as during start-up, may be stored in the ROM 631.
  • RAM 632 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 620.
  • System memory 630 may additionally include, for example, operating system 634, application modules 635, and other program modules 636.
  • Application modules 635 may include aforementioned modules described for FIG. 1 and may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.
  • the operating system 634 may be loaded into the memory 630 and may provide an interface between other application software executing on the computer system 610 and hardware resources of the computer system 610. More specifically, the operating system 634 may include a set of computer-executable instructions for managing hardware resources of the computer system 610 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 634 may control execution of one or more of the program modules depicted as being stored in the data storage 640.
  • the operating system 634 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
  • the computer system 610 may also include a disk/media controller 643 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and/or a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive).
  • Storage devices 640 may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • Storage devices 641, 642 may be external to the computer system 610.
  • the computer system 610 may include a user input interface or graphical user interface (GUI) 661, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 620.
  • the computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium of storage 640, such as the magnetic hard disk 641 or the removable media drive 642.
  • the magnetic hard disk 641 and/or removable media drive 642 may contain one or more data stores and data files used by embodiments of the present disclosure.
  • the data store 640 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security.
  • the processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
  • the term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media.
  • Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 630.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621.
  • Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • the computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680.
  • the network interface 670 may enable communication, for example, with other remote devices 680 or systems and/or the storage devices 641, 642 via the network 671.
  • Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 610.
  • computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet. Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.
  • Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680).
  • the network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art.
  • Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.
  • program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 5 as being stored in the system memory 630 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module.
  • various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 610, the remote device 680, and/or hosted on other computing device(s) accessible via one or more of the network(s) 671 may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 5.
  • functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 5 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module.
  • program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth.
  • any of the functionality described as being supported by any of the program modules depicted in FIG. 5 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
  • the computer system 610 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 610 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 630, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality.
  • This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules.
  • any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
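The note above — that two blocks shown in succession may in fact execute substantially concurrently when their functions are independent — can be sketched as follows. This is an illustrative Python example only; the block names and return values are assumptions, not part of the disclosure:

```python
from concurrent.futures import ThreadPoolExecutor

def block_a():
    # First flowchart block; it shares no state with block_b,
    # so it may run before, after, or concurrently with it.
    return "report-section-A"

def block_b():
    # Second, independent flowchart block.
    return "report-section-B"

# The order depicted in a figure: A then B.
sequential = [block_a(), block_b()]

# The same two blocks executed substantially concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(block_a), pool.submit(block_b)]
    concurrent_results = [f.result() for f in futures]

# Independence guarantees the same overall outcome either way.
assert sorted(concurrent_results) == sorted(sequential)
```

Because neither block depends on the other's output, the results are identical regardless of execution order.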

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Stored Programmes (AREA)

Abstract

Disclosed are methods and systems for generating, maintaining, and tracking programmatically accessible information regarding safety requirements of a system design. For example, a safety computing system can automatically generate reports and continuously update such reports, so as to define the current state of safety engineering at any given moment, automatically or on demand.
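The abstract above describes tracking safety requirements as programmatically accessible information and regenerating a safety-engineering status report at any moment. A minimal Python sketch of that idea follows; the class, field, and node names are hypothetical illustrations, not the patented implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyNode:
    """A node in the safety graph: a requirement, hazard, or mitigation."""
    name: str
    status: str = "open"                 # "open" or "verified"
    depends_on: list = field(default_factory=list)

def report(nodes):
    """Derive the current safety-engineering state from the graph."""
    verified = [n.name for n in nodes if n.status == "verified"]
    open_items = [n.name for n in nodes if n.status != "verified"]
    return {"verified": verified, "open": open_items,
            "complete": not open_items}

# Hypothetical graph: a verified requirement and a still-open hazard.
brake = SafetyNode("brake-command-integrity", status="verified")
hazard = SafetyNode("overspeed-hazard", depends_on=[brake])

# The report can be regenerated on demand as the graph evolves.
snapshot = report([brake, hazard])
```

The point of the sketch is that the report is computed from the graph rather than maintained by hand, so it always reflects the current engineering state.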
PCT/US2022/075757 2022-08-31 2022-08-31 Continuous safety evaluation with graphs WO2024049474A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2022/075757 WO2024049474A1 (fr) Continuous safety evaluation with graphs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2022/075757 WO2024049474A1 (fr) Continuous safety evaluation with graphs

Publications (1)

Publication Number Publication Date
WO2024049474A1 (fr)

Family

ID=83457459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/075757 WO2024049474A1 (fr) Continuous safety evaluation with graphs

Country Status (1)

Country Link
WO (1) WO2024049474A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120143570A1 (en) * 2010-12-03 2012-06-07 University Of Maryland Method and system for ontology-enabled traceability in design and management applications
WO2022035427A1 (fr) * 2020-08-12 2022-02-17 Siemens Aktiengesellschaft Automatic functional grouping of design project data with compliance checking

Similar Documents

Publication Publication Date Title
Ross et al. Why Functional Safety in Road Vehicles?
Nair et al. An extended systematic literature review on provision of evidence for safety certification
Panesar-Walawege et al. Characterizing the chain of evidence for software safety cases: A conceptual model based on the IEC 61508 standard
Falessi et al. SafeSlice: a model slicing and design safety inspection tool for SysML
dos Santos et al. Software requirements testing approaches: a systematic literature review
Yang et al. An industrial case study on an architectural assumption documentation framework
Silva et al. A field study on root cause analysis of defects in space software
de la Vara et al. Model-based assurance evidence management for safety–critical systems
Hübner et al. Interaction-based creation and maintenance of continuously usable trace links between requirements and source code
Gario et al. Fail-safe testing of safety-critical systems: a case study and efficiency analysis
Yue et al. Towards requirements engineering for digital twins of cyber-physical systems
Amalfitano et al. Using tool integration for improving traceability management testing processes: An automotive industrial experience
Ratiu et al. Safety. lab: Model-based domain specific tooling for safety argumentation
Sannier et al. Formalizing standards and regulations variability in longlife projects. A challenge for Model-driven engineering
Tundis et al. Model‐Based Dependability Analysis of Physical Systems with Modelica
Alenazi et al. Assuring virtual PLC in the context of SysML models
Tommila et al. Conceptual model for safety requirements specification and management in nuclear power plants
Altinger State of the Art Software Development in the Automotive Industry and Analysis Upon Applicability of Software Fault Prediction
CN109389407B (zh) Method for assuring and verifying the functional safety of automotive electronic products
WO2024049474A1 (fr) Continuous safety evaluation with graphs
Kaleeswaran et al. A domain specific language to support HAZOP studies of SysML models
Fumagalli et al. Mind the gap!: Learning missing constraints from annotated conceptual model simulations
Damm et al. Traffic sequence charts for the ENABLE-S3 test architecture
Uludağ et al. Integration of systems design and risk management through model‐based systems development
Bailey et al. A framework for automated model interface coordination using SysML

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22777909

Country of ref document: EP

Kind code of ref document: A1