CN113590460A - Method and device for checking a technical system

Method and device for checking a technical system

Info

Publication number
CN113590460A
CN113590460A
Authority
CN
China
Prior art keywords
test
simulation
classification
classifier
measure
Prior art date
Legal status
Pending
Application number
CN202110471517.9A
Other languages
Chinese (zh)
Inventor
尹智洙
J·佐恩斯
Current Assignee
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of CN113590460A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 17/00 Systems involving the use of models or simulators of said systems
    • G05B 17/02 Systems involving the use of models or simulators of said systems: electric
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F 11/26 Functional testing
    • G06F 11/261 Functional testing by simulating additional hardware, e.g. fault simulation
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation


Abstract

Method and device for checking a technical system. A method (10) for checking a technical system, characterized by the following features: a test (12) is performed by means of a simulation (11) of the system; the test (12) is analyzed with respect to a satisfaction measure (13) of a quantitative requirement of the system and an error measure (14) of the simulation (11); the test (12) is provisionally classified (15) as reliable (16) or unreliable (17) according to the satisfaction measure (13) and the error measure (14); and a classifier (18) for the classification (15) is optimized by stepwise refinement of the classification (15).

Description

Method and device for checking a technical system
Technical Field
The invention relates to a method for checking a technical system. The invention also relates to a corresponding device, a corresponding computer program and a corresponding storage medium.
Background
In software engineering, the umbrella term "model-based testing" (MBT) covers the use of models to automate test activities and to generate test artifacts (Testartefakte) during testing. For example, it is well known to generate test cases from a model that describes the nominal behavior of the system under test.
Embedded systems in particular rely on plausible input signals from sensors and in turn stimulate their surroundings through output signals to a wide variety of actuators. During the verification and the preceding development phases of such a system, a model of the system is therefore simulated together with a model of its surroundings in a control loop, as model in the loop (MiL), software in the loop (SiL), processor in the loop (PiL) or hardware in the loop (HiL). In vehicle technology, simulators that check electronic control units according to this principle are sometimes referred to as component, module or integration test benches, depending on the test phase and the test object.
DE10303489A1 discloses a method for testing software of a control unit of a vehicle, a power tool or a robot system, in which a controlled object that can be controlled by the control unit is at least partially simulated by a test system: output signals are generated by the control unit and transmitted via a first connection to a first hardware module, while signals of a second hardware module are transmitted via a second connection to the control unit as input signals. The output signals are provided in the software as first control values and are additionally transmitted to the test system via a communication interface in real time with respect to the controlled object.
Such simulations are widespread in various technical fields and are used, for example, to check in their early development stages whether embedded systems in power tools, in engine control units for drive, steering and braking systems, in camera systems, in systems with artificial-intelligence and machine-learning components, in robotic systems or in autonomous vehicles are fit for purpose. However, the results of simulation models according to the prior art are included in release decisions only to a limited extent, owing to a lack of confidence in their reliability.
Disclosure of Invention
The invention provides a method for checking a technical system, a corresponding device, a corresponding computer program and a corresponding storage medium according to the independent claims.
The solution according to the invention is based on the recognition that the quality of the simulation model is crucial for correctly predicting the test results obtainable with it. In the MBT field, common validation methods address the task of comparing real measurements with simulation results. For this purpose, different metrics (Metrik), measures (Maßzahl) or other comparison operators that relate signals to one another are used; in the following they are collectively referred to as signal metrics (SM). Examples of such signal metrics are metrics that compare magnitude, phase shift and correlation. Some signal metrics are defined by relevant standards, for example ISO 18571.
In general, uncertainty quantification techniques support the estimation of simulation and model quality. In the following, the result of evaluating the model quality for a specific input x (this input may be a parameter or a scenario) by means of a signal metric, or more generally by means of an uncertainty quantification method, is called the simulation-model error metric, in short the error metric e(x). To estimate e(x') for inputs, parameters or scenarios x' not considered before, a machine learning model based on a so-called Gaussian process, for example, may be used.
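By way of illustration only, the following minimal sketch shows how such a generalizing error model could be formed with a Gaussian process; the data values, the choice of kernel and all names are hypothetical assumptions, not part of the disclosure.

```python
# Sketch: generalize the error metric e(x) to unseen inputs with a
# Gaussian process (scikit-learn). All values are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Inputs x for which simulation and real measurement both exist, and the
# error metric e(x) computed from a signal metric (e.g. per ISO 18571).
X_observed = np.array([[0.1], [0.4], [0.5], [0.9]])
e_observed = np.array([0.02, 0.05, 0.04, 0.10])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_observed, e_observed)

# Predict the error metric, with uncertainty, for a not yet tested input x'.
e_mean, e_std = gp.predict(np.array([[0.7]]), return_std=True)
e_conservative = e_mean + 2.0 * e_std  # worst-case style upper estimate
```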
In verification, the system under test (SUT) is typically checked against requirements, specifications or performance indicators. It should be noted that a requirement or specification in Boolean form can often be converted into a quantitative measure, for example by means of signal temporal logic (STL). Such a formalization can serve as the basis for a quantitative semantics that generalizes verification in the sense that positive values indicate that a requirement is met and negative values indicate that it is violated. In the following, such requirements, specifications or performance measures are collectively referred to as "quantitative requirements" φ.
Such quantitative requirements can be checked either on the real SUT or on a model of that SUT, i.e. a "virtual SUT". For this verification, a catalog of test cases that the SUT must satisfy is compiled in order to determine whether the SUT has the desired performance and safety characteristics. Such test cases may be parameterized and can therefore encompass any number of individual tests.
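As an illustration of such a quantitative semantics, the following sketch computes a robustness-style satisfaction measure for the simple requirement "the signal stays above a threshold at all times"; the trace and the threshold are hypothetical values, not taken from the disclosure.

```python
# Sketch: quantitative satisfaction measure in the spirit of STL robust
# semantics; positive means the requirement is met, negative that it is
# violated, and the magnitude indicates the margin.
import numpy as np

def satisfaction_always_geq(signal: np.ndarray, threshold: float) -> float:
    """Robustness of 'signal(t) >= threshold for all t': the worst margin."""
    return float(np.min(signal - threshold))

distance_trace = np.array([12.0, 9.5, 8.2, 10.1])    # hypothetical distances
print(satisfaction_always_geq(distance_trace, 8.0))  # 0.2 > 0: requirement met
```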
In this context, the proposed solution addresses the need for reliable test results in order to ensure the performance and safety characteristics of the SUT. Precisely when tests are performed on the basis of a simulation of the system or of a subcomponent rather than on the real system, it must be ensured that the simulation results are trustworthy.
Validation techniques that check the deviation between simulated behavior and real system behavior form the part of this approach that evaluates simulation and model quality. As described above, to quantify this deviation, a valid error metric e(x) is calculated for an input x. On the other hand, the system is checked for satisfaction of the requirement φ; the satisfaction measures determined in this way help to better understand test results and system behavior. To take both quality aspects into account, a so-called virtual test classifier is introduced below in order to delimit those partitions of the feature space in which the simulation serves the specified purpose.
For this purpose, feature combinations are classified according to the reliability, in the simulation, of the tests they define. To obtain a clear understanding of the resulting classification and to optimize the classifier itself, a sophisticated scheme is required to resolve or simplify the scatter distribution defined by the virtual test classifier.
Compared to designs based on validation alone or on verification alone, the solution according to the invention has the advantage of judiciously combining both approaches. To this end, the virtual test classifier described above is introduced, which combines the requirements of model validation and of product testing. This is achieved by relating information from the simulation and model quality (e(x)) on the one hand with information from the test requirements (φ) on the other hand.
Corresponding tests are of interest in a wide variety of fields. For example, applications in verifying the functional safety of automated systems, for instance automated driving functions (automated driving), are conceivable.
Advantageous embodiments and refinements of the basic idea specified in the independent claims are possible through the measures mentioned in the dependent claims. In this way, an automated, computer-implemented test environment can be provided that largely automatically improves the quality of the hardware or software product under test.
Drawings
Embodiments of the invention are illustrated in the drawings and are further described in the following description. Wherein:
FIG. 1 illustrates a virtual test classifier.
Fig. 2 shows a scheme for generating decision boundaries for classifiers based on data.
FIG. 3 illustrates a pre-classification of the virtual test space with unreliable ranges highlighted.
Fig. 4 shows another possible classification within the unreliable range.
FIG. 5 shows exemplary results of optimization-based virtual test classification in two different diagrams.
Fig. 6 shows the reliability and classification in the parameter space at an early stage of the method.
FIG. 7 illustrates the reliability and classification of the classifier in the parameter space, improved by refining ranges of the parameter space; the distinction between passed and failed test cases is omitted.
Fig. 8 shows the mapping of information previously presented in the parameter space into the coordinate system of the classifier. For clarity, only unreliable test cases are shown here.
FIG. 9 shows an extended view in which the reliability of test cases near the decision boundary of the classifier is called into question.
FIG. 10 shows a diagram of a classifier that combines information from a parameter space for risk estimation.
Fig. 11 shows a description of the method according to the invention from an application point of view.
Fig. 12 shows the visualization of the classification result in the feature space spanned by the test parameters.
Fig. 13 schematically shows a workstation.
Detailed Description
According to the invention, for a test x, which may be taken from a test catalog as a test case or obtained as an instance of a parameterized test, the simulation-model error e(x) is analyzed and the quantitative specification φ is evaluated on the basis of a simulation of the SUT. The virtual test classifier uses e(x) and the satisfaction measure of φ as inputs and makes a binary decision as to whether the simulation-based test result is trustworthy.
In line with common usage in computer science and in particular in pattern recognition, any algorithm or mathematical function that maps the feature space onto a set of mutually disjoint classes formed during the classification process can be understood as a classifier here. To determine into which class an object should be sorted (colloquially also: classified), the classifier uses so-called class or decision boundaries. Where the distinction between method and entity is unimportant, the term "classifier" is also used below partly synonymously with "classification".
Fig. 1 illustrates this classification for the present application example. Each point corresponds to a test performed in the simulation for which a satisfaction measure (13) of the requirement φ and an error measure (14) e(x) have been calculated. Preferably, the satisfaction measure (13) is defined such that it takes a positive value when the test supports the conclusion that the system meets the corresponding requirement (reference numeral 24) and a negative value when the system does not meet the requirement (reference numeral 25).
As shown, the decision boundary (19) of the classifier (18) separates the space into four categories A, B, C and D. Tests of class A are passed with high reliability. For tests of classes B and C, the simulation provides only unreliable results; such tests should therefore be performed on the real system. Tests of class D fail with high reliability.
The virtual test classifier (18) is based on the following consideration: a requirement that is met only marginally in the simulation can replace verification on the real system only if a very small model error (14) can be assumed. Conversely, with a numerically large satisfaction measure (13) of the quantitative requirement φ, i.e. when the specification is met or clearly violated by a wide margin, a certain deviation of the simulation results from the corresponding experimental measurements can be tolerated.
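A minimal sketch of this four-way decision, assuming a linear decision boundary |s| = m * e through the origin (discussed below with reference to Fig. 2) and assuming that B and C denote the unreliable cases with positive and negative satisfaction measure respectively:

```python
# Sketch of the virtual test classifier of Fig. 1 under the assumption of a
# linear boundary. s: satisfaction measure (13), e: error measure (14),
# m: boundary slope. A: passes reliably, D: fails reliably, B/C: unreliable.
def classify(s: float, e: float, m: float) -> str:
    if abs(s) <= m * e:  # satisfaction too small relative to the model error
        return "B" if s >= 0 else "C"
    return "A" if s > 0 else "D"
```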
Since this approach presupposes that the model error e(x) of the simulation model is known, it is assumed that the simulation model has been verified and validated before the virtual test classifier (18) is used. Within the framework of this validation, a generalizing model should be formed, for example based on a Gaussian process or by other machine learning methods, which provides e(x) for a given test x. It should be noted that the reliability of the simulation depends mainly on the correctness of this generalizing model.
Fig. 2 illustrates a possible scheme for deriving the decision boundary (19, Fig. 1) of the classifier (18) from data. In the simplest case, the boundary (19) is a straight line through the origin. Its slope should preferably be chosen such that all points whose satisfaction measure (13) of the quantitative requirement φ differs in sign between the simulation (11) and the real measurement (21), i.e. all tests (12) on which the simulation model fails, lie within the regions C and B, and such that these regions are as small as possible.
More general decision boundaries (19) can also be considered, for example polynomials whose function curve is fitted by means of linear programming such that it satisfies the criteria of the classifier (18). Here, too, all points whose satisfaction measure (13) of the quantitative requirement φ differs in sign between the simulation (11) and the real measurement (21), i.e. all tests (12) on which the simulation model fails, lie within the regions C and B.
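For the linear case, this criterion admits a simple reading: the smallest slope is chosen such that every test whose satisfaction measure (13) changes sign between simulation (11) and measurement (21) falls into the unreliable wedge B/C. The following sketch, with hypothetical data triples, illustrates this; it is not the only admissible construction.

```python
# Sketch: derive the minimal slope m of a linear decision boundary such that
# all sign-mismatched tests satisfy |s_sim| <= m * e, i.e. land in B or C.
def fit_boundary_slope(tests):
    """tests: iterable of (s_sim, s_real, e) triples with e > 0."""
    slopes = [abs(s_sim) / e for s_sim, s_real, e in tests
              if s_sim * s_real < 0 and e > 0]
    return max(slopes, default=0.0)

tests = [(0.5, 0.6, 0.10), (-0.2, 0.1, 0.15), (0.3, -0.1, 0.20)]
m = fit_boundary_slope(tests)  # max(0.2/0.15, 0.3/0.20) = 1.5
```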
The first approach explained with reference to Fig. 2 is based in particular on a linear classification that distinguishes between reliable and unreliable ranges. This procedure can be used to obtain a first overview of the relationship between the satisfaction measure (13) and the error measure (14) from the representation in the feature space spanned by the test parameters (hereinafter: "parameter space"). The classification results also carry different meanings depending on the criteria used to construct the decision boundaries. Here, the worst case must be considered in order to cover the most critical individual cases. This seemingly "conservative" character of the classification allows it to be used for a preselection of unreliable value ranges.
Within this range, preselected for a given satisfaction measure (13) and error measure (14), the categories already described with reference to Fig. 1 can be distinguished: first tests, for which the simulation reliably yields the result that they pass; second tests, for which the simulation reliably yields the result that they fail; and third tests, for which the simulation does not provide reliable results.
The conservative character of the classification, however, leads with high probability to a loss of information through misclassification, and thus to growing inefficiency when proposals for selecting sensible real measurements are made. To avoid such a loss of valuable information, which depends on the given criteria and the nature of the classification method, sets of unresolved information have to be identified and taken into account accordingly in a preselection phase. This procedure is explained in more detail using the following example.
With a simple linear classification based on worst-case considerations, too many points fall within the unreliable range even when the given criteria are met, for example with respect to whether the signs of the satisfaction measure (13) determined in the simulation and in the real measurement agree. As shown in Fig. 3, the class of tests defined by an exemplary linear classification with two decision boundaries contains a certain number of misclassified points, marked by triangles in the illustration. Within this triangle there are ranges, such as ranges 38 and 39, that clearly belong to a particular category, as outlined in Fig. 4. A linear classification does not take such a subdivision into account, which in information-theoretic terms can be interpreted as an entropy loss. On the other hand, there are ranges in which different classes lie closely next to one another, for example the range denoted by reference numeral 40.
Thus, the initially unresolved triangular range of unreliable test cases contains some entropy that is difficult to exploit. To properly distinguish the various partitions of the triangular range, an iterative approach based on measures of information content has proven effective. Based on the results of such an iterative classification of the test cases within the above ranges, various recommendations can be made.
This method can be structured as follows: first, a search grid is defined according to the size of the range under consideration in order to delimit preselected subsets of the unreliable partition. An entropy-based information measure of each subset is then computed for the property of interest by iteratively considering the partitions with information content according to the search grid, refining the decision boundaries on that basis, assigning the classified range to the parameter space, labeling the modified ranges and updating the search-grid geometry; a sketch of this refinement follows below.
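The following sketch illustrates one possible reading of this refinement, assuming a quadtree-style subdivision of a two-dimensional range whose cells are split while their class entropy remains high; grid handling, thresholds and stopping criteria are simplifying assumptions.

```python
# Sketch: entropy-based iterative refinement of a preselected range.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels (0 for a pure cell)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def refine(cell, points, depth=0, max_depth=4, threshold=0.3):
    """Split a cell ((x0, x1), (y0, y1)) while its label entropy is high.

    points: list of (x, y, label); returns (cell, majority_label) leaves.
    Points on the upper cell edges are ignored for brevity.
    """
    (x0, x1), (y0, y1) = cell
    inside = [p for p in points if x0 <= p[0] < x1 and y0 <= p[1] < y1]
    labels = [p[2] for p in inside]
    if not labels:
        return [(cell, None)]
    if entropy(labels) <= threshold or depth >= max_depth:
        return [(cell, Counter(labels).most_common(1)[0][0])]
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    quads = [((x0, xm), (y0, ym)), ((xm, x1), (y0, ym)),
             ((x0, xm), (ym, y1)), ((xm, x1), (ym, y1))]
    return [leaf for q in quads
            for leaf in refine(q, inside, depth + 1, max_depth, threshold)]
```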
An exemplary classification result and its mapping into the parameter space are shown in Fig. 5. The range denoted by reference numeral 39 contains only those tests that clearly fail the requirements. From the number of these test cases and the iterative classification steps in the completely unreliable range, the ratio of reliable to unreliable tests can be calculated for the remaining ranges. This ratio makes it easy for an engineer to derive indications or recommendations for an effective strategy; for example, subsequent real measurements can be performed primarily in ranges of low reliability and only partly in ranges of complete unreliability.
In summary, the advantages of this stepwise approach can be described as follows: the loss of information within the framework of the classification, and hence the number of ultimately unnecessary real measurements, is reduced; the unreliable ranges are delimited more sharply and the understanding of the simulation model is locally deepened on the basis of the examination in the parameter space; and a possibility is provided to derive recommendations for validation.
Instead of classifying test cases in the feature space spanned by the satisfaction measure (13) and the error measure (14) (hereinafter: "test space") and then mapping the classification onto the parameter space, the classification can also be carried out directly in the parameter space and in turn mapped onto the test space. Figs. 6 and 7 illustrate this procedure by way of example as a function of the speed (26) and distance (27) of a vehicle cutting into the ego lane; test cases considered reliable, irrespective of whether the quantitative requirement is met, lie outside the marked ranges.
Similarly to the approach described above, the parameter space is again divided into different ranges: the range denoted by reference numeral 41 contains only test cases whose results differ between simulation and reality, i.e. the former is not trustworthy. The range denoted by reference numeral 42 contains both reliable and unreliable test cases. Test cases outside these ranges (41, 42) are considered reliable, where a further distinction between passed and failed tests can be made. Optionally, a range of suitable test candidates also lies near the decision boundary.
Within the "mixed" range (42) and at its boundaries, points representing unreliable tests lie close to reliable ones, so the risk of misclassification is considerably higher there. This knowledge can be used to improve the classification and to reduce the risk of misclassification.
The above information can be used here in three different ways: similarly to Figs. 3 to 5, for refining the classification by selecting smaller ranges; for iteratively selecting new test candidates (test cases within or near the mixed or unreliable ranges in the sense described above); or for improving the robustness of the classifier.
Following the classification in the parameter space, the points are mapped, as already mentioned, onto the test space shown in Fig. 8. (For clarity, reliable test cases are omitted from the illustration.) The representation clearly shows differences between the conservative classification (42) derived from the parameter space and the classification (43) obtained when the parameter space is ignored; test candidates suitable according to the above criteria are denoted by reference numeral 44 by way of example. The points (45) close to the range boundary in the extended version according to Fig. 9 allow the conclusion that the frequency of unreliable tests increases within the mixed range.
Fig. 10 illustrates the combination of the information from the test space and the parameter space. Ranges (46) in which both classification methods yield the same result indicate a low risk of misclassification. Some tests with increased risk according to the analysis in the parameter space, however, are classified (47) as reliable by the classifier. This can result in ranges (48) with a high risk of misclassification lying outside the classification.
Fig. 11 illustrates the method (10) according to the invention from an application point of view under the following assumptions:
· The set of models and tests (12) for the simulation (11) is given, together with defined input parameters.
· The requirements φ are quantifiable, predefined and implemented within the framework of a monitoring system that analyzes the tests (12) with respect to a satisfaction measure (13) of these requirements. In the illustration, both satisfaction measures (13) relate to the same requirement φ; however, the requirement is evaluated once on the basis of the simulation (11) and once in the course of experimental measurements (21) on the system.
· e(x) is a predefined error measure (14). That is, for some test inputs, simulations (11) and measurements (21) have already been performed, and the error measure (14) generalizes from the corresponding tests (12) to new, not yet executed experiments with a certain reliability, for example determined by upper and lower bounds of the error measure (14). For the classifier (18, Figs. 1 to 3), only the most unfavorable, i.e. the highest, error measure (14) is used. It should be noted that the classifier (18) can be used to further refine the error measure (14).
Under these assumptions, the method (10) can proceed as follows in the variant according to Figs. 3 to 5 (a code sketch follows the list):
1. The classifier (18) is defined as set forth above.
2. A test (12) is performed, in which output signals are generated.
3. These output signals are analyzed with respect to the requirements φ according to the satisfaction measure (13), and with respect to the error model of the simulation (11) according to the error measure (14) e(x), and are fed to the classifier (18).
4. In the preselection stage (20), the classifier (18) assigns each test (12) a classification (15) into one of the following categories (A, B, C, D; Fig. 1): the test (12) passes in the simulation (11) and its result is reliable (16); the test fails in the simulation (11) and its result is reliable (16); or the result of the simulation (11) is unreliable (17).
5. An optimized classifier (18) can then be applied on the basis of the preselection; it yields a number of unreliable ranges (39) together with the associated ratios of reliable to unreliable tests.
6. Reliable (16) test results for which the simulation (11) is now deemed trustworthy are added (36) to the corresponding database.
7. Unreliable (17) tests (12) may prompt a recommendation to the user, in accordance with these ratios, to perform corresponding measurements (21) on the system, the final decision being left to the user.
8. Alternatively, a new experimental measurement (21) may be performed actively or automatically.
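By way of illustration, the steps above can be strung together as in the following hypothetical sketch; all callables passed in (run_simulation, satisfaction, error_model, classify) are assumed placeholders for the components described in this document.

```python
# Sketch of steps 2 to 7: simulate each test, compute the measures, classify,
# then either archive the trustworthy result or recommend a real measurement.
def check_system(tests, run_simulation, satisfaction, error_model, classify):
    reliable_results, needs_measurement = [], []
    for x in tests:
        trace = run_simulation(x)        # step 2: execute test in simulation
        s = satisfaction(trace)          # step 3: satisfaction measure (13)
        e = error_model(x)               # step 3: error measure (14)
        label = classify(s, e)           # step 4: category A, B, C or D
        if label in ("A", "D"):          # step 6: simulation result trustworthy
            reliable_results.append((x, s, label))
        else:                            # step 7: recommend real measurement
            needs_measurement.append(x)
    return reliable_results, needs_measurement
```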
The variant described with reference to Figs. 6 to 10, in contrast, provides for the following steps (a sketch of the candidate selection follows the list):
1. First, some tests are performed in simulation and in reality.
2. In a first step, each test is classified as reliably passing, i.e. consistently passing in simulation and experiment; as reliably failing, i.e. consistently failing in simulation and experiment; or as unreliable whenever simulation and experiment yield different test results.
3. According to this pre-qualification (Vorqualifizierung), the parameter space is divided into a reliable range, an unreliable range (41), a mixed range (42) and optional boundary ranges at the edges of the reliable range. On the basis of this division, each test case is reclassified according to its membership in one of the ranges mentioned.
4. The classification is refined on the representation in the parameter space described above by dividing the ranges mentioned into ever smaller partitions and optimizing the information content of each of these parts.
5. The classification for each point is mapped onto the test space.
6. New test candidates are selected (44) based on the distribution of the points in the test space.
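A minimal sketch of step 6, assuming that membership in the mixed range (42) and the signed distance to the decision boundary are available as functions; the threshold eps is a hypothetical tuning parameter.

```python
# Sketch: pick new test candidates from the mixed range or near the boundary,
# where the risk of misclassification is highest.
def select_candidates(points, in_mixed_range, boundary_distance, eps=0.05):
    return [p for p in points
            if in_mixed_range(p) or abs(boundary_distance(p)) < eps]
```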
The results of such an optimized classification (15) are preferably used to distinguish (31) between tests (12) suitable for simulation (11) and tests (12) whose execution requires experimental measurement (21). They can furthermore be used to improve the test database (32), the simulation model (33), the validation model (34) or the classifier (35) itself.
Fig. 12 outlines a possible visualization of classification results in the parameter space for a further example. The satisfaction measure (13) and the error measure (14) are each represented as a point in the parameter space for specific parameters (26, 27) of the test (12), in the illustration the distance and the mass of a vehicle cutting into the ego lane. In a virtual test environment (29), a visualization (28) of the classification (15) of the tests (12) in the parameter space is then effected by means of the classifier (18).
The method (10) can be implemented in software or hardware, or in a mixed form of software and hardware, for example in a workstation (30), as illustrated in the schematic diagram of Fig. 13.

Claims (10)

1. Method (10) for inspecting a technical system, in particular an at least partially autonomous robot or vehicle,
the method is characterized by comprising the following steps:
-performing a test (12) by means of a simulation (11) of the system,
-the test (12) is analyzed with respect to a satisfaction measure (13) of the quantitative requirements of the system and an error measure (14) of the simulation (11),
-provisionally classifying (15) the test (12) as reliable (16) or unreliable (17) according to the satisfaction measure (13) and the error measure (14), and
-optimizing a classifier (18) for said classification (15) by progressively refining said classification (15).
2. The method (10) of claim 1,
the method is characterized by comprising the following steps:
-validating the simulation (11) by experimental measurements (21) on the system in a preselection phase (20),
-the provisional decision boundary (19) of the classifier (18) is derived as a straight line such that the satisfaction measure (13) obtained in the simulation (11) on the one hand deviates as little as possible from the satisfaction measure obtained in the measurement (21) on the other hand, and
-optimizing the classifier (18) by iteratively adapting the decision boundary (19).
3. The method (10) of claim 1,
the method is characterized by comprising the following steps:
-pre-evaluating tests (12) performed by means of the simulation (11) against corresponding measurements (21) on the system,
-dividing the feature space spanned by the parameters (26, 27) of the test (12) into ranges (41, 42) on the basis of the pre-evaluation,
-refining said classification (15) by repeatedly subdividing said ranges (41, 42), and
-mapping the classification (15) of the test (12) onto the classifier (18) once the refinement is completed.
4. The method (10) of claim 3,
the method is characterized by comprising the following steps:
-automatically selecting (22) other tests (12) to be performed after optimizing the classifier (18).
5. The method (10) of claim 3 or 4,
the method is characterized by comprising the following steps:
-said analysis is performed such that said satisfaction measure (13) is positive when said system satisfies (24) said requirement and negative when said system does not satisfy (25) said requirement.
6. The method (10) according to any one of claims 3 to 5,
the method is characterized by comprising the following steps:
-visualizing (28) the classification (15) in the feature space in accordance with the analysis.
7. The method (10) according to any one of claims 1 to 6, characterized in that errors of the system identified by the check are automatically remedied.
8. Computer program which is set up to carry out the method (10) according to any one of claims 1 to 7.
9. Machine readable storage medium having stored thereon a computer program according to claim 8.
10. Device (30) set up to carry out the method (10) according to any one of claims 1 to 7.
CN202110471517.9A 2020-04-30 2021-04-29 Method and device for checking a technical system Pending CN113590460A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020205526.2A DE102020205526A1 (en) 2020-04-30 2020-04-30 Method and device for testing a technical system
DE102020205526.2 2020-04-30

Publications (1)

Publication Number Publication Date
CN113590460A true CN113590460A (en) 2021-11-02

Family

ID=78243004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110471517.9A Pending CN113590460A (en) 2020-04-30 2021-04-29 Method and device for checking a technical system

Country Status (2)

Country Link
CN (1) CN113590460A (en)
DE (1) DE102020205526A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10303489A1 (en) 2003-01-30 2004-08-12 Robert Bosch Gmbh Motor vehicle control unit software testing, whereby the software is simulated using a test system that at least partially simulates the control path of a control unit

Also Published As

Publication number Publication date
DE102020205526A1 (en) 2021-11-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination