CN116665174A - Visual perception algorithm-oriented dangerous test case generation method and related equipment - Google Patents

Visual perception algorithm-oriented dangerous test case generation method and related equipment

Info

Publication number
CN116665174A
Authority
CN
China
Prior art keywords
causal
visual perception
test case
algorithm
oriented
Prior art date
Legal status
Pending
Application number
CN202310694282.9A
Other languages
Chinese (zh)
Inventor
蒋拯民
李慧云
潘毅
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202310694282.9A
Publication of CN116665174A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/36: Preventing errors by testing or debugging software
    • G06F11/3668: Software testing
    • G06F11/3672: Test management
    • G06F11/3684: Test management for test design, e.g. generating new test cases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a dangerous test case generation method for visual perception algorithms, together with related equipment. The method comprises the following steps: acquiring images captured by the vision sensor of an autonomous vehicle, cleaning and labeling the collected images, and obtaining an observation dataset from no-reference image-quality-assessment features; constructing, from the observation dataset and fused domain knowledge, a causal structure graph of the influence of environmental factors on perception performance; separating the key environmental factors from the causal structure graph, and quantitatively calculating the degree to which each environmental factor influences visual perception performance; and, based on the qualitative analysis of the causal relationships between key environmental variables and visual perception performance and the quantitative estimation of causal effects, proposing a challenge index to generate perception-oriented dangerous test cases in batches. The method offers notable advantages in causal-relationship mining, challenge-index measurement, dangerous-scene recognition, and search-space modeling, and can accurately evaluate and verify the performance of autonomous-driving vision algorithms in complex environments.

Description

Visual perception algorithm-oriented dangerous test case generation method and related equipment
Technical Field
The invention relates to the technical fields of artificial intelligence and autonomous driving, and in particular to a dangerous test case generation method, system, terminal, and computer-readable storage medium oriented to visual perception algorithms.
Background
Visual perception is a core foundational module of an autonomous driving system and the key component through which an autonomous vehicle acquires environmental information and makes decisions. Through visual perception, the autonomous driving system can identify road signs, vehicles, pedestrians, and so on. Good visual perception capability is therefore critical to safe and reliable autonomous driving.
Existing methods typically employ discrete, manually selected test scenarios to evaluate the robustness of visual perception algorithms. These methods rely on experience or expert judgment to choose challenging scenes, such as different weather conditions, road-sign occlusions, and illumination changes, and then evaluate through offline testing or simulation; the chosen scenes are often fixed or limited and cannot cover the full range of possible situations. In addition, prior-art schemes lack systematic, quantitative metrics for evaluating the test environment and therefore have difficulty accurately assessing the challenge posed by real, complex environments.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
The invention mainly aims to provide a dangerous test case generation method, system, terminal, and computer-readable storage medium for visual perception algorithms, in order to solve the problems that prior-art visual perception evaluation lacks systematic, quantitative metrics for assessing the test environment and has difficulty accurately evaluating real, complex environments.
To achieve the above object, the invention provides a method for generating dangerous test cases for a visual perception algorithm, comprising the following steps:
acquiring images captured by the vision sensor of an autonomous vehicle, cleaning and labeling the collected images, and obtaining an observation dataset from no-reference image-quality-assessment features;
constructing, from the observation dataset and fused domain knowledge, a causal structure graph of the influence of environmental factors on perception performance;
obtaining the key environmental factors from the causal structure graph, and quantitatively calculating the degree to which each environmental factor influences visual perception performance;
based on the qualitative analysis of the causal relationships between key environmental variables and visual perception performance and the quantitative estimation of causal effects, generating perception-oriented dangerous test cases in batches based on the challenge index.
Optionally, in the method for generating dangerous test cases for a visual perception algorithm, acquiring images captured by the vision sensor of an autonomous vehicle, cleaning and labeling the collected images, and obtaining an observation dataset from no-reference image-quality-assessment features specifically includes:
the environment variables under examination form combination conditions, and a certain number of images are captured for each environment combination by the autonomous vehicle's vision sensor;
the collected images are cleaned and labeled, and no-reference image-quality-assessment features are extracted from them;
an observation dataset is obtained from the no-reference image-quality-assessment features;
wherein the resulting observation dataset is expressed as D = {(X_i, W_i, Y_i)}, i = 1, …, N, where N is the number of cases in the observation dataset, i indexes a case, X is the covariate vector, W is the vector of environmental factors under examination, and Y is the measured performance index of the visual perception algorithm.
Optionally, in the method for generating dangerous test cases for a visual perception algorithm, constructing a causal structure graph of the influence of environmental factors on perception performance from the observation dataset and fused domain knowledge specifically includes:
representing the causal relationships among a set of variables by a directed acyclic graph G(v, ε), where v denotes the random variables and ε denotes the causal links between them;
starting from an empty graph with a fast greedy equivalence search algorithm, iteratively adding or deleting edges through a scoring function until convergence, and merging domain knowledge to construct the causal structure graph.
Optionally, in the method for generating dangerous test cases for a visual perception algorithm, applying the fast greedy equivalence search algorithm starting from a blank graph and iteratively adding or deleting edges through a scoring function until convergence includes:
initializing a blank graph G_init from the observation dataset D;
iteratively processing each pair of variables <W, Y>, <W, X>, and <X, Y>, adding an edge between the variables;
for each potential edge, testing whether adding it to the current graph produces a cyclic graph, and if so, skipping the potential edge;
for each potential edge that does not produce a cyclic graph, testing whether adding it improves the scoring index, and if so, adding the potential edge;
iteratively processing all edges in the current graph, checking whether removing a potential edge improves the scoring index without disconnecting the graph, and if so, removing that edge;
repeating the iterative process until the scoring index no longer improves;
wherein whether to remove a potential edge is judged by computing whether the Bayesian information criterion (BIC) value increases:
BIC = 2·ln P(data | θ, G) - c·k·ln(n)
where θ denotes the parameters of the directed acyclic graph G(v, ε), c is a constant set to 1, k is the number of graph parameters, and n is the sample size of the observed data;
if the BIC value of the current causal structure increases, the potential edge is removed.
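As a sketch of the scoring rule above: the BIC-style score can be computed directly from a graph's log-likelihood, parameter count, and sample size, and an edge removal is kept only when the score increases. The function names below are illustrative, not from the patent.

```python
import math

def bic_score(log_likelihood: float, k: int, n: int, c: float = 1.0) -> float:
    """BIC-style score as defined above: 2*ln P(data | theta, G) - c*k*ln(n).

    Higher is better: the score rewards fit and penalizes parameter count.
    """
    return 2.0 * log_likelihood - c * k * math.log(n)

def keep_edge_removal(score_before: float, score_after: float) -> bool:
    # An edge is removed only if doing so strictly increases the score.
    return score_after > score_before
```

For instance, with equal fit, a graph with 4 parameters scores higher than one with 5, so the sparser structure is preferred.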
Optionally, in the method for generating dangerous test cases for a visual perception algorithm, obtaining the key environmental factors from the observation dataset and quantitatively calculating the degree to which each environmental factor influences visual perception performance specifically includes:
the difference between the potential intervention result and the control result defines the individual treatment effect (ITE):
ITE_i = Y_i(W_i = 1) - Y_i(W_i = 0)
wherein Y_i(W_i = 1) denotes the potential intervention result for the i-th record in the observation dataset with the fog intensity set to mild, and Y_i(W_i = 0) denotes the control result without fog; W_i = 0 means the fog intensity of the i-th record in the observed data is 0, and W_i = 1 means the fog intensity of the i-th record is mild;
the average causal effect (ACE) of light fog on the mean average precision is expressed as:
ACE = E[ITE_i] = E[Y_i(W_i = 1) - Y_i(W_i = 0)]
wherein E[·] denotes the mathematical expectation over the data;
when the treatment group and the control group are not randomly assigned, the naive difference in group means carries a selection bias:
(1/N_T)·Σ_{i: W_i = 1} Y_i - (1/N_c)·Σ_{i: W_i = 0} Y_i ≠ ACE
wherein N_T and N_c are the sample sizes of the treatment group and the control group, respectively;
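The selection bias can be made concrete with synthetic data (hypothetical numbers, not from the patent): when a confounder drives both the environmental "treatment" and the perception score, the naive group-mean difference diverges from the true causal effect.

```python
import random

random.seed(0)

# Hypothetical setup: a confounder x (say, scene complexity) makes fog (W=1)
# more likely AND lowers the perception score Y on its own, so treatment and
# control groups are not comparable.
N = 20000
true_effect = -10.0  # ground-truth causal effect of fog on the score

treated, control = [], []
for _ in range(N):
    x = random.random()                  # confounder in [0, 1]
    w = 1 if random.random() < x else 0  # fog is more likely in complex scenes
    y = 80.0 - 30.0 * x + true_effect * w + random.gauss(0.0, 1.0)
    (treated if w else control).append(y)

# Naive difference in group means: (1/N_T) sum(Y_treated) - (1/N_c) sum(Y_control)
naive_ace = sum(treated) / len(treated) - sum(control) / len(control)
print(round(naive_ace, 1))  # about -20 under this design, versus the true -10
```

The naive estimate absorbs the confounder's contribution, which is exactly why the method estimates causal effects with a dedicated statistical learning model instead.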
estimating causal effects using a dedicated statistical learning model:
the outcomes of the treatment group and the control group are estimated with any statistical regression method and are denoted μ̂_1(x) and μ̂_0(x), respectively;
in the control group the difference is defined as D_c = μ̂_1(X_c) - Y_c, and in the treatment group as D_T = Y_T - μ̂_0(X_T); D_c and D_T are known as imputed treatment effects;
using any regression method on the new datasets (X_c, D_c) and (X_T, D_T), the treatment-effect models τ_0(x) and τ_1(x) are computed, where τ_0(x) is the estimate fitted on the control group, τ_1(x) is the estimate fitted on the treatment group, and X_c and X_T are the covariates of the control and treatment groups, respectively; combining τ_0(x) and τ_1(x) yields the final individual causal-effect estimate τ_X(x):
τ_X(x) = g(x)·τ_0(x) + (1 - g(x))·τ_1(x)
where g(x) ∈ [0, 1] is a weighting function, typically taken as a propensity score;
the ACE is then computed as the mathematical expectation of τ_X(x).
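A minimal numerical sketch of this estimator follows. It implements the imputed-treatment-effect pattern described above; the linear outcome models, the constant weight g = 0.5 in place of a propensity score, and all function names are simplifying assumptions, not the patent's implementation.

```python
import numpy as np

def x_learner_ace(X, W, Y, g=0.5):
    """Estimate the ACE via imputed treatment effects, as described above.

    X: (n, d) covariates; W: (n,) binary treatment; Y: (n,) outcomes.
    g: weighting value in [0, 1]; a propensity model could replace the constant.
    """
    Xt, Yt = X[W == 1], Y[W == 1]       # treatment group
    Xc, Yc = X[W == 0], Y[W == 0]       # control group

    def fit_linear(A, b):
        # Least-squares linear regression with an intercept term.
        A1 = np.hstack([A, np.ones((len(A), 1))])
        coef, *_ = np.linalg.lstsq(A1, b, rcond=None)
        return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ coef

    mu1 = fit_linear(Xt, Yt)            # outcome model of the treatment group
    mu0 = fit_linear(Xc, Yc)            # outcome model of the control group

    Dc = mu1(Xc) - Yc                   # imputed treatment effects, control group
    Dt = Yt - mu0(Xt)                   # imputed treatment effects, treatment group

    tau0 = fit_linear(Xc, Dc)           # tau_0(x), fitted on (X_c, D_c)
    tau1 = fit_linear(Xt, Dt)           # tau_1(x), fitted on (X_T, D_T)

    tau_x = g * tau0(X) + (1.0 - g) * tau1(X)
    return float(tau_x.mean())          # ACE = expectation of tau_X(x)
```

On synthetic data with a known additive effect and randomized treatment, the estimate recovers that effect closely.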
Optionally, in the method for generating dangerous test cases for a visual perception algorithm, generating perception-oriented dangerous test cases in batches based on the challenge index, from the qualitative analysis of the causal relationships between key environmental variables and visual perception performance and the quantitative estimation of causal effects, specifically includes:
defining a challenge index:
challenge index = [r_1, r_2, …, r_m] · c
wherein c = [c_1, c_2, …, c_m]^T holds the relative weights of the key environmental impact factors remaining after multiple rounds of screening and reflects the quantitative contribution of the different environmental factors to the perception result; m is the number of key environmental factors; r_i is the normalized causal effect corresponding to c_i, obtained by normalizing the average causal effects ACE_w of the interventions w applied to the environmental node c_i, where N_i is the number of possible intervention types for c_i;
identifying test cases whose driving score falls below the danger threshold d_thr; the algorithm test score is computed as:
d_score = 100 × R_i × P_i
wherein d_score is the test score of the system under test in the scene; R_i is the percentage of the planned path completed by the system under test in the i-th test scene; and P_i is the violation penalty term of the system under test;
combining the surrogate model and causal knowledge to maximize an acquisition function, and taking the test-parameter combination that maximizes the acquisition function as the next test case, wherein the quantities entering the acquisition function are: the driving score and variance predicted by the surrogate model; the lowest driving score predicted by the surrogate model; φ, the probability density function of the standard normal distribution; CI, the defined challenge index; and dis, the minimum Euclidean distance between a candidate test case and the already explored test cases;
if the current experiment count k exceeds the preset number of search iterations N_limit, generating the perception-oriented dangerous test case set T_set in batch.
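The quantities above can be sketched as follows: the challenge index is the dot product of normalized causal effects and factor weights, and the driving score is 100·R_i·P_i. The acquisition function below is only an illustrative stand-in (the patent's exact acquisition formula is not reproduced here) that rewards a high challenge index, low predicted driving scores, and distance from already explored cases.

```python
import math

def challenge_index(r, c):
    """challenge index = [r_1, ..., r_m] · c, with both vectors of length m."""
    assert len(r) == len(c)
    return sum(ri * ci for ri, ci in zip(r, c))

def driving_score(route_completion, penalty):
    """d_score = 100 * R_i * P_i (route-completion fraction times penalty term)."""
    return 100.0 * route_completion * penalty

def select_next_case(candidates, explored, predict, r, c):
    """Pick the candidate test case maximizing an illustrative acquisition score."""
    ci = challenge_index(r, c)

    def acquisition(case):
        # dis: minimum Euclidean distance to already-explored test cases.
        dis = min(math.dist(case, e) for e in explored) if explored else 1.0
        mean, std = predict(case)   # surrogate model's predicted score and spread
        return ci * dis * (100.0 - mean + std)   # favor low predicted scores
    return max(candidates, key=acquisition)
```

With a surrogate that predicts lower driving scores for larger parameter values, the farther, more dangerous candidate is selected.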
Optionally, in the method for generating dangerous test cases for a visual perception algorithm, a dangerous test case is a test environment with a set of determined parameter values.
In addition, in order to achieve the above object, the present invention further provides a system for generating a hazard test case for a visual perception algorithm, where the system for generating a hazard test case for a visual perception algorithm includes:
the observation data set collection module is used for acquiring images captured by the vision sensor of the automatic driving automobile, cleaning and marking the collected images and obtaining an observation data set according to the non-reference image quality evaluation characteristics;
the causal structure diagram construction module is used for constructing a causal structure diagram of the influence of the environmental factors on the perception performance according to the observation data set and fusion of domain knowledge;
the causal effect estimation module is used for acquiring key environmental factors from the causal structure chart and quantitatively calculating the influence degree of each environmental factor on visual perception performance;
the dangerous test case generation module is used for generating dangerous test cases oriented to perception in batches based on the challenge indexes based on qualitative analysis of causal relation between key environment variables and visual perception performance and quantitative estimation of causal effect.
In addition, to achieve the above object, the present invention also provides a terminal, wherein the terminal includes: the system comprises a memory, a processor and a visual perception algorithm-oriented hazard test case generating program which is stored in the memory and can run on the processor, wherein the visual perception algorithm-oriented hazard test case generating program realizes the steps of the visual perception algorithm-oriented hazard test case generating method when being executed by the processor.
In addition, in order to achieve the above object, the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a visual perception algorithm-oriented hazard test case generation program, and the visual perception algorithm-oriented hazard test case generation program implements the steps of the visual perception algorithm-oriented hazard test case generation method described above when executed by a processor.
According to the method, images captured by the vision sensor of an autonomous vehicle are acquired, the collected images are cleaned and labeled, and an observation dataset is obtained from no-reference image-quality-assessment features; a causal structure graph of the influence of environmental factors on perception performance is constructed from the observation dataset and fused domain knowledge; the key environmental factors are obtained from the causal structure graph, and the degree to which each environmental factor influences visual perception performance is quantitatively calculated; and, based on the qualitative analysis of the causal relationships between key environmental variables and visual perception performance and the quantitative estimation of causal effects, perception-oriented dangerous test cases are generated in batches based on the challenge index. Through quantitative estimation and normalization of causal effects, the method effectively measures how severely an environmental condition challenges the vision algorithm; by introducing the causal knowledge of the challenge index, it guides the search strategy toward more challenging test cases; and by combining the challenge index with the driving score, it effectively judges whether the performance of the autonomous driving system in a specific scene falls below a preset threshold, thereby identifying potentially dangerous situations.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the visual perception algorithm-oriented hazard test case generation method of the present invention;
FIG. 2 is a schematic framework diagram of analysis, screening and test scene generation of key environmental elements facing the automatic driving visual perception algorithm in a preferred embodiment of the visual perception algorithm-oriented hazard test case generation method of the present invention;
FIG. 3 is a schematic diagram of the process of constructing the observation data set in the preferred embodiment of the visual perception algorithm-oriented dangerous test case generating method of the present invention;
FIG. 4 is a schematic diagram of a causal structure diagram construction process in a preferred embodiment of the visual perception algorithm-oriented hazard test case generation method of the present invention;
FIG. 5 is a schematic diagram of a challenge index guided search flow in a preferred embodiment of a visual perception algorithm-oriented method for generating a hazard test case according to the present invention;
FIG. 6 is a schematic diagram of a preferred embodiment of the visual perception algorithm-oriented hazard test case generation system of the present invention;
FIG. 7 is a schematic diagram of the operating environment of a preferred embodiment of the terminal of the present invention.
Detailed Description
Autonomous vehicles are of great significance for improving traffic safety and efficiency and enabling low-carbon travel, and have become a strategic direction for the transformation and upgrading of the global automotive industry. However, frequent accidents in recent years have severely damaged public confidence and become a major obstacle to the commercialization of the autonomous driving industry; in particular, robustness testing of visual perception algorithms has long been underappreciated.
Existing visual-perception-oriented testing methods can be roughly divided into four types. (1) Data enhancement: a technique that attempts to expand the training set of a perception algorithm. For example, one automobile manufacturer adopts a crowdsourcing-based data collection strategy to obtain training data from actual on-road driving. Its vehicles are equipped with various sensors that record diverse data during operation; after anonymization, vehicle owners choose whether to participate in the data-sharing plan and upload the data to the manufacturer's servers, forming a huge and diversified dataset. However, dataset expansion essentially explores sparse dangerous scenes blindly in a high-dimensional test space, which is inefficient. (2) Dangerous-scene search based on neuron coverage: a neuron is considered covered when a test input passes through it and contributes to a particular output. For example, the white-box testing framework DeepXplore uses neuron coverage to detect behavioral inconsistencies between the algorithms under evaluation. Building on DeepXplore, a greedy search method was proposed to increase neuron coverage, further generalizing the concept and defining a series of neuron-coverage metrics. Recent studies have combined fuzzing concepts and optimization algorithms with these neuron-coverage metrics to build a variety of test frameworks such as DLFuzz, DeepHunter, TensorFuzz, and Test4Deep. However, neuron-coverage testing is still at an early research stage, and has in particular been questioned for deviating substantially from traditional software-testing concepts.
(3) Image-conversion generation: representative schemes, such as deep-neural-network-based style-transfer algorithms, use a generative adversarial network to convert raw inputs into severe weather conditions. However, image-conversion techniques lack clear interpretability, and the generation of synthetic images is inefficient. (4) Adversarial attack: a malicious image-attack method that affects the model's output by adding perturbations to the visual input. Early research on adversarial attacks mainly focused on perturbing all pixels; since perturbations do not occur on all pixels in a real scene, current research focuses on patch attacks against specific regions of an image. For example, methods have been proposed for generating printable adversarial billboards that cause visual perception to output erroneous results under dynamically changing driving conditions. Notably, adversarial attacks lack a mapping to the real-world environment.
In general, most existing approaches aim to induce erroneous behavior in deep-neural-network-based vision algorithms, and pay insufficient attention to the environmental factors that cause these errors.
The existing testing techniques for the autonomous-driving vision module have the following defects:
(1) Lack of interpretability: some methods, such as neuron coverage and adversarial attacks, cannot reasonably explain the mapping between the discovered vulnerabilities and the real environment, and thus cannot effectively guide algorithm iteration.
(2) High consumption of test resources: image conversion is an inefficient and resource-intensive testing method, often requiring significant computing resources to extract diverse adversarial weather features. Moreover, lacking any screening of the environmental factors that actually degrade visual perception performance, it can only exhaust all environment combinations, further exacerbating the computational burden of image synthesis.
For a long time, the industry has lacked a good scheme for identifying the key environmental factors affecting autonomous-driving visual perception, and prior-art schemes cannot effectively generate high-risk perception test scenes.
The invention provides a method for key environmental element analysis, screening, and test scene generation oriented to autonomous-driving visual perception algorithms, comprising: (1) collecting an observation dataset; (2) building a structural causal graph to screen key environmental variables; (3) quantitatively estimating the degree to which each environmental factor influences visual perception performance, and proposing a challenge index as a surrogate metric for evaluating the algorithm's deployment environment; (4) generating perception-oriented risk test scenes online based on the challenge index.
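The four stages can be arranged into a single pipeline. The sketch below is purely structural: every helper is a hypothetical stub standing in for the corresponding stage, not the patent's implementation.

```python
def collect_observation_dataset(images):
    # Stage (1): clean and label images, extract no-reference IQA features.
    return [{"X": [0.5], "W": {"fog": 1}, "Y": 0.7} for _ in images]

def build_causal_graph(dataset, domain_knowledge):
    # Stage (2): causal structure discovery fused with domain knowledge.
    return {("fog", "Y")} | set(domain_knowledge)

def estimate_causal_effects(graph, dataset):
    # Stage (3): quantitative causal effect per key environmental factor.
    return {cause: 0.3 for cause, _ in graph}

def search_with_challenge_index(effects, n_cases):
    # Stage (4): challenge-index-guided batch generation of test cases.
    return [{"fog": k % 3} for k in range(n_cases)]

def generate_hazard_test_cases(images, domain_knowledge, n_cases=5):
    dataset = collect_observation_dataset(images)
    graph = build_causal_graph(dataset, domain_knowledge)
    effects = estimate_causal_effects(graph, dataset)
    return search_with_challenge_index(effects, n_cases)

print(len(generate_hazard_test_cases([None] * 10, [("rain", "Y")])))  # 5
```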
In order to make the objects, technical solutions and advantages of the present invention more clear and clear, the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The method for generating the dangerous test case facing the visual perception algorithm according to the preferred embodiment of the invention, as shown in fig. 1 and 2, comprises the following steps:
and S10, acquiring an image captured by an automatic driving automobile vision sensor, cleaning and marking the collected image, and obtaining an observation data set according to the non-reference image quality evaluation characteristics.
The invention separates key environmental elements from a high-dimensional parameter space based on causal inference theory. Ordinarily, this requires randomized controlled experimental data, i.e., the environmental variables under examination are randomly assigned to a control group or an analysis group; however, acquiring such data is costly or even impractical.
To this end, the invention proposes to acquire the observation dataset using the principle of combinatorial testing. Specifically, the environment variables under examination form combination conditions. Combinatorial testing, also called combinatorial coverage testing, is a software-testing technique that aims to efficiently test the interactions between different input parameters or configuration settings of a system. In the invention, as many of the possible environment combinations as feasible are sampled at random via the combinatorial testing principle, which balances the coverage of the collected data against the cost.
As shown in fig. 3, a number of images (captured by the autonomous vehicle's vision sensor) are then collected for each combination. The collected data are cleaned and labeled, and no-reference image-quality-assessment features are extracted from the images to measure the influence of various environmental factors on image quality. A no-reference image-quality-assessment feature is an index of image quality that requires neither reference images nor subjective evaluation; it estimates image quality from the features and statistics of the image itself by analyzing its low-level, perceptual, or structural characteristics.
Extracting the no-reference image-quality indices is an intermediate step in constructing the observation dataset; its purpose is to help the subsequent causal analysis better understand how the environmental nodes degrade image quality and ultimately undermine the performance of the visual algorithm.
The flow of constructing the observation dataset is shown in FIG. 3. The resulting observation dataset is expressed as D = {(X_i, W_i, Y_i)}, i = 1, …, N, where N is the number of cases in the observation dataset, i indexes a case, X is the covariate vector, W is the vector of environmental factors under examination, and Y is the measured performance index of the visual perception algorithm.
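A sketch of building such a dataset under a combinatorial design follows. The variable names, levels, and placeholder feature/score functions are illustrative assumptions; a full factorial enumeration is used here for brevity, whereas true combinatorial (e.g. pairwise) coverage would sample fewer combinations.

```python
from itertools import product
import random

random.seed(7)

# Hypothetical environment variables W under examination:
fog = [0, 1, 2]              # none / mild / heavy
rain = [0, 1]
time_of_day = ["day", "night"]

def iqa_features(image):
    # Placeholder for no-reference image-quality features (e.g. blur or
    # contrast statistics); a real system computes these from the image.
    return [random.random(), random.random()]

def perception_score(env):
    # Placeholder for the measured perception metric Y (e.g. mAP).
    return random.random()

dataset = []
for i, (f, r, t) in enumerate(product(fog, rain, time_of_day)):
    env = {"fog": f, "rain": r, "time": t}   # W_i: environmental factor vector
    X = iqa_features(image=None)             # X_i: covariate vector
    Y = perception_score(env)                # Y_i: performance index
    dataset.append({"i": i, "X": X, "W": env, "Y": Y})

print(len(dataset))  # 12 combinations: 3 fog levels x 2 rain levels x 2 times
```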
Step S20: constructing a causal structure graph of the influence of environmental factors on perception performance from the observation dataset and fused domain knowledge.
The invention builds on the observation dataset collected in step S10, specifically using a causal structure discovery method (a method for inferring causal relationships between variables, which identifies them from statistical patterns and dependencies in observational data, revealing the causal structure behind phenomena). A causal structure discovery method concerns the causal relationships among a group of variables, represented by a directed acyclic graph G(v, ε), where v denotes the random variables and ε denotes the causal links between them. A causal structure can be expressed in several ways; one of the most common is a graph structure in which nodes correspond to the variables in the observational data (including environment variables, image-quality-index variables, and visual perception detection precision) and each edge is a directed arrow, A → B, which can be read simply as: A has a direct causal effect on B, i.e., A is a cause of B.
For example, W_fog → Y represents the direct causal influence of fog on the visual perception algorithm, where Y denotes the performance index of the vision algorithm and W_fog denotes the fog node in the structural causal graph. The invention employs the Fast Greedy Equivalence Search (FGES) algorithm, a parallelized and optimized version of the Greedy Equivalence Search (GES) algorithm. FGES starts from a blank graph and iteratively adds or deletes edges via a scoring function until convergence, as shown in fig. 4.
As shown in fig. 4, the detailed steps for creating a causal structure are as follows:
step (1), in the observation data setObtain a blank G init
Step (2): iteratively process each pair of variables <W, Y>, <W, X>, and <X, Y>, adding an edge between the variables (an edge represents a causal relationship).
Step (3): for each potential edge (a possible causal relationship that still requires confirmation by the subsequent steps: if it improves the scoring index BIC and does not violate the rules of the graph structure, it becomes a confirmed edge), test whether adding it to the current graph produces a cyclic graph (a graph structure containing one or more loops); if so, skip the potential edge.
Step (4): for each potential edge that does not produce a cyclic graph, test whether adding it improves the scoring index, and if so, add the potential edge.
In score-based causal graph structure discovery, the scoring index measures how well a candidate causal graph model fits the data; it is used to compare the relative merits of different models and to determine the optimal causal graph structure.
Determining whether adding a potential edge improves the scoring index generally involves comparing models: the quality of different graph structures is judged by comparing their scores. If the score increases after a potential edge is added, the edge can be considered to improve the fit or quality of the model.
Step (5): iteratively process all edges in the current graph, checking whether removing an edge improves the scoring index without disconnecting the graph; if so, remove the edge.
Repeat steps (2)–(5) until the scoring index no longer improves.
The invention decides whether to remove a potential edge by checking whether the Bayesian Information Criterion (BIC) value increases:
BIC=2·ln P(data|θ,G)-c·k·ln(n)
Where θ represents a parameter of the directed acyclic graph G (v, ε), c is a constant taking 1, k is the number of directed acyclic graph parameters, and n represents the sample size of the observed data.
If the BIC value of the current causal structure increases after removal, the potential edge is removed.
In general, a higher BIC value indicates that the identified graphical causal model G(v, ε) explains the causal relationships effectively without being overly complex (constructing a causal graph structure is an iterative process: it amounts to finding the structure with the highest BIC score among all structures explored so far). Notably, the present invention does not rely entirely on the data-driven causal structure discovery paradigm; as shown in fig. 4, it incorporates domain knowledge when constructing the causal structure of environmental factors affecting perceptual performance.
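The greedy search of steps (1)–(5) can be sketched in code. The following is a minimal, self-contained illustration rather than FGES itself: real FGES searches over equivalence classes with cached local scores, whereas this sketch does a plain hill-climb over directed edges, scoring each node with a linear-Gaussian fit and the BIC form used above (2·ln P(data | θ, G) − k·ln n, with c = 1). The variable names W, X, Y and the synthetic data are assumptions made for the demo.

```python
import math
import random

def has_cycle(nodes, edges):
    """Return True if the directed graph (nodes, edges) contains a cycle (DFS)."""
    children = {n: [] for n in nodes}
    for a, b in edges:
        children[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def visit(n):
        color[n] = GRAY
        for c in children[n]:
            if color[c] == GRAY or (color[c] == WHITE and visit(c)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in nodes)

def _resid_var(y, xs):
    """Residual variance of an OLS fit y ~ 1 + xs (normal equations, tiny solver)."""
    n = len(y)
    cols = [[1.0] * n] + xs
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):  # Gaussian elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    resid = [y[t] - sum(beta[j] * cols[j][t] for j in range(k)) for t in range(n)]
    return max(sum(r * r for r in resid) / n, 1e-12)

def bic(data, nodes, edges):
    """BIC = 2 ln P(data | theta, G) - k ln(n), one Gaussian linear model per node."""
    n = len(next(iter(data.values())))
    ll, n_params = 0.0, 0
    for node in nodes:
        parents = [a for a, b in edges if b == node]
        var = _resid_var(data[node], [data[p] for p in parents])
        ll += -0.5 * n * (math.log(2 * math.pi * var) + 1.0)
        n_params += len(parents) + 2  # coefficients + intercept + variance
    return 2.0 * ll - n_params * math.log(n)

def greedy_search(data):
    """Steps (1)-(5): add edges that raise BIC, then prune, until convergence."""
    nodes = list(data)
    edges = set()  # step (1): blank graph G_init
    score = bic(data, nodes, edges)
    improved = True
    while improved:
        improved = False
        for a in nodes:  # forward phase, steps (2)-(4)
            for b in nodes:
                if a == b or (a, b) in edges or has_cycle(nodes, edges | {(a, b)}):
                    continue  # step (3): skip cycle-inducing edges
                s = bic(data, nodes, edges | {(a, b)})
                if s > score:  # step (4): keep score-improving edges
                    edges.add((a, b)); score = s; improved = True
        for e in list(edges):  # backward phase, step (5)
            s = bic(data, nodes, edges - {e})
            if s > score:
                edges.remove(e); score = s; improved = True
    return edges, score

# toy observational data: fog W degrades image quality X, and both degrade mAP Y
rng = random.Random(0)
W = [rng.uniform(0, 4) for _ in range(400)]
X = [2.0 - 0.4 * w + rng.gauss(0, 0.1) for w in W]
Y = [0.9 - 0.1 * w - 0.2 * x + rng.gauss(0, 0.05) for w, x in zip(W, X)]
edges, score = greedy_search({"W": W, "X": X, "Y": Y})
```

On this synthetic data the learned graph links fog, image quality, and mAP; edge directions within a score-equivalence class are not identifiable from observational data alone, so only the skeleton should be trusted — which is exactly why the text fuses domain knowledge into the structure.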
And step S30, acquiring key environmental factors in the observation data set, and quantitatively calculating the influence degree of each environmental factor on the visual perception performance.
Specifically, this step estimates the degradation of the perception performance Y caused by the critical environmental factors, i.e., the "causal effect". Under the study paradigm of the potential outcome framework, the causal effect is the difference in the outcome Y between "treatment" and "no treatment" in a given observational dataset. Taking the environmental factor "fog" as an example, "treatment" refers to different fog intensity levels, which may be represented by {0, 1, 2, …, N_w}, where N_w denotes the number of possible values of a variable in the graph structure; in this example the fog node takes different intensities, e.g., five levels ranging from no fog upward.
Here, W_i = 0 means that the fog intensity of the i-th record in the observed data is 0, and W_i = 1 means that the fog intensity of the i-th record is mild; Y_i(W_i = 1) denotes the potential intervention result for that record with the fog intensity set to mild, and Y_i(W_i = 0) denotes the control result without fog. Y measures the performance of the visual perception algorithm, e.g., mean Average Precision (mAP). The difference between the two (intervention and control) is the Individual Treatment Effect (ITE):
ITE_i = Y_i(W_i = 1) − Y_i(W_i = 0)
According to the causal graph model introduced in step S20, the non-critical environmental factors can be regarded as a covariate vector X; they are not affected by the treatment. The image quality indices are post-treatment variables whose values are affected by the treatment. The Average Causal Effect (ACE) of mild fog on the mAP can be further expressed as:
wherein ,representing mathematical expectations.
However, in practice the intervention result and the control result can never be observed simultaneously for the same record in the observational data, so the ACE cannot be obtained by simply subtracting the control group's mAP from the intervention group's mAP; a selection-bias problem may arise. That is, when the treatment group and the control group are not randomly assigned, the naive difference of group means,
(1/N_T) Σ_{i: W_i = 1} Y_i − (1/N_C) Σ_{i: W_i = 0} Y_i,
is a biased estimate of the ACE, where N_T and N_C are the sample sizes of the treatment group and the control group, respectively.
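The selection-bias problem can be seen concretely in a toy simulation (all numbers below are assumed for illustration): when the "treatment" is assigned more often in scenes that are already difficult, the naive difference of group means overstates the true effect, while stratifying on the confounder recovers it.

```python
import random

rng = random.Random(7)
tau, beta = 2.0, 3.0  # assumed true treatment effect and confounder effect

records = []
for _ in range(20000):
    x = rng.random() < 0.5                     # confounding covariate
    w = rng.random() < (0.8 if x else 0.2)     # treatment depends on x: not random
    y = tau * w + beta * x + rng.gauss(0, 0.5) # outcome, e.g. a performance score
    records.append((x, w, y))

def mean(v):
    return sum(v) / len(v)

# naive difference of group means: biased, because x differs across the groups
naive = (mean([y for x, w, y in records if w])
         - mean([y for x, w, y in records if not w]))

# stratifying on the confounder x recovers the true effect
strata = []
for xv in (False, True):
    t = [y for x, w, y in records if x == xv and w]
    c = [y for x, w, y in records if x == xv and not w]
    weight = sum(1 for x, _, _ in records if x == xv) / len(records)
    strata.append(weight * (mean(t) - mean(c)))
adjusted = sum(strata)
```

Here `naive` lands well above the true effect of 2.0 while `adjusted` is close to it; stratification is only for illustration — the invention instead estimates the effect with the X-learner described next.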
When the treatment and control groups are not randomly assigned, selection bias produces systematic differences between the covariates of the two groups, which may lead to an inaccurate estimate of the ACE. For this reason, the invention uses a dedicated statistical learning model to estimate causal effects — the meta-learning algorithm X-learner, which has been shown to produce accurate estimates even in the presence of confounding variables and unobserved heterogeneity. The X-learner estimates the treatment effect in three stages, as follows:
Step (11), modeling the treated/control outcomes: estimate the outcomes of the treatment group and the control group using any statistical regression method (e.g., random forest); denote the fitted models by μ̂₁(x) and μ̂₀(x), respectively.
Step (12), imputing treatment effects: impute a treatment effect for each individual in the treated/control group. In the control group the difference is defined as D_i^C = μ̂₁(X_i) − Y_i, denoted D_C; in the treatment group the difference is defined as D_i^T = Y_i − μ̂₀(X_i), denoted D_T. D_C and D_T are known as the imputed treatment effects.
Step (13), combining the treatment effects into a final estimate: using any regression method, fit the treatment effects τ₀(x) and τ₁(x) on the new datasets (X_C, D_C) and (X_T, D_T), where τ₀(x) is the estimate obtained from the control group, τ₁(x) is the estimate obtained from the treatment group, and X_C and X_T are the covariates of the control and treatment groups, respectively. Combining τ₀(x) and τ₁(x) yields the final individual causal effect estimate τ_X(x):
τ_X(x) = g(x)·τ₀(x) + (1 − g(x))·τ₁(x)
where g(x) ∈ [0, 1] is a weighting function, typically taken to be the propensity score. The ACE is then obtained as the mathematical expectation of τ_X(x).
Step S40: based on the qualitative analysis of the causal relationship between key environment variables and visual perception performance, and on the quantitative estimation of the causal effect, generate perception-oriented hazard test cases in batches guided by the challenge index.
Specifically, based on the qualitative analysis of the causal relationship between key environment variables and visual perception performance, and on the quantitative estimation of the causal effect, the invention further designs a brand-new evaluation index, called the challenge index, for assessing how challenging the environmental conditions of a visual algorithm deployment are. The challenge index is defined by the following equation:
challenge index = [r₁, r₂, …, r_m] · c
where c = [c₁, c₂, …, c_m]^T contains the relative weights of the key environmental impact factors retained after screening, reflecting the quantitative contribution of the different environmental factors to the perception result; m is the number of critical environmental factors; r_i is the normalized causal effect corresponding to c_i. If an intervention w applied to c_i yields the causal effect ACE_w, then:
where N_i is the number of possible interventions on the environmental node c_i.
The challenge index provides, for the first time in the industry, a method for estimating the deployment difficulty of a visual algorithm from causal knowledge; collecting training and test samples under the key environment combinations can greatly accelerate algorithm verification and iteration. Note that the vector c in the challenge index reflects the quantified contribution of different environmental factors to the perception result; to determine these relative weights, the entropy weighting method is suggested.
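A sketch of the weight and index computation follows. The entropy weighting step uses the method's standard form (column-wise normalization, entropy, weights proportional to 1 − entropy); the normalization producing r_i is not fully specified in the text, so a simple sum-normalization of per-factor causal effects is assumed here, and the observation matrix and ACE values are invented for illustration.

```python
import math

def entropy_weights(matrix):
    """Entropy weighting method: rows = observed cases, columns = factors."""
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]
        lo, hi = min(col), max(col)
        norm = [(v - lo) / (hi - lo) + 1e-9 for v in col]  # min-max normalize
        s = sum(norm)
        p = [v / s for v in norm]
        e = -sum(pi * math.log(pi) for pi in p) / math.log(n)  # entropy in [0,1]
        raw.append(1.0 - e)  # lower entropy -> more information -> higher weight
    total = sum(raw)
    return [w / total for w in raw]

def challenge_index(ace_by_factor, weights):
    """CI = [r_1, ..., r_m] . c, with r_i a sum-normalized causal effect (assumed)."""
    total = sum(ace_by_factor)
    r = [a / total for a in ace_by_factor]
    return sum(ri * ci for ri, ci in zip(r, weights))

# hypothetical per-scene mAP-degradation observations for (fog, rain, glare)
M = [[0.9, 0.2, 0.1],
     [0.7, 0.3, 0.1],
     [0.5, 0.1, 0.2],
     [0.8, 0.4, 0.1]]
c = entropy_weights(M)
ci = challenge_index([0.30, 0.15, 0.05], c)  # hypothetical ACE per factor
```

The resulting `ci` lies in (0, 1): a higher value means the environment combination is more challenging for the deployed visual algorithm.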
Furthermore, the invention provides a hazard test scenario generation method, "challenge-index-guided search", to verify the safety of vision-based end-to-end autonomous driving solutions. Any search-based scenario generation method must answer two key questions: what the search space is, and what search strategy is used to generate test cases. Because a huge search space makes existing search techniques inefficient and blind, the invention first selects the key environmental factors screened by causal inference as the dimensions of the search space, eliminating many non-causal test dimensions. Second, a surrogate model is used to model the search space probabilistically: spatial correlation captures the similarity between explored test samples and neighboring unexplored regions, supporting outcome prediction and uncertainty estimation. In addition, by integrating the discovered causal knowledge (the challenge index), the search strategy is guided toward more challenging key regions, so that perception-oriented risky test scenarios (i.e., hazard test cases) are generated in batches. The online scenario search process is shown in fig. 5.
In fig. 5, k is the current experiment count; N_limit is the preset number of search iterations; d_score and d_thr are, respectively, the test score of the system under test in the scenario and the risk score threshold. Note that the objective of the invention is to generate as many scenarios as possible whose driving score falls below the threshold d_thr. The algorithm test score is computed as follows:
d_score = 100 × R_i × P_i
where R_i is the percentage of the planned route completed by the system under test in the i-th test scenario, and P_i is the violation penalty term for the system under test, covering route deviation, collision, simulation timeout, and so on. As shown in fig. 5, the search method dynamically adjusts the search region in the test space online according to the results of previous test cases. Specifically, it combines the surrogate model with causal knowledge to maximize an acquisition function, and takes the parameter combination that maximizes the acquisition function as the next test case (a test case is a test environment with a set of definite values — e.g., a specific sunlight intensity and a specific fog concentration; formula (9) simply selects, from the candidates, the one with the largest acquisition value):
where d̂ and s are, respectively, the driving score and variance predicted by the surrogate model; d_min denotes the lowest driving score predicted by the surrogate model; φ is the probability density function of the standard normal distribution; CI is the challenge index computed with formulas (6) and (7); and dis is the minimum Euclidean distance between a candidate test case and the explored test cases. When the current experiment count k exceeds the preset number of search iterations N_limit, the perception-oriented hazard test case set T_set is produced in batch.
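An end-to-end sketch of the challenge-index-guided loop of fig. 5. Everything concrete here is an assumption, not the patented formula (9): the toy drive-score simulator, the k-nearest-neighbour surrogate (standing in for the proxy model's predicted score and variance), the expected-improvement-flavoured acquisition combining predicted improvement, the standard-normal density φ, a challenge term, and the distance bonus dis, and all constants. What it shows is the control flow: evaluate the acquisition-maximizing candidate, record it, and keep scenarios whose d_score falls below d_thr.

```python
import math
import random

rng = random.Random(3)
D_THR, N_LIMIT = 40.0, 120  # risk-score threshold and iteration budget (assumed)

def drive_score(fog, glare):
    """Toy stand-in simulator for d_score = 100 * R * P on a fog/glare space."""
    R = max(0.0, 1.0 - 0.22 * fog - 0.12 * glare)  # route-completion fraction
    P = max(0.0, 1.0 - 0.05 * fog * glare)         # violation penalty factor
    return 100.0 * R * P

def surrogate(cand, explored):
    """k-NN mean/std over explored cases: a cheap proxy-model stand-in."""
    near = sorted(explored, key=lambda e: math.dist(e[0], cand))[:5]
    mu = sum(s for _, s in near) / len(near)
    var = sum((s - mu) ** 2 for _, s in near) / len(near) + 1e-6
    return mu, math.sqrt(var)

def challenge(cand):
    """Assumed challenge index: normalized severity of the factor combination."""
    return (cand[0] / 4.0 + cand[1] / 4.0) / 2.0

def acquisition(cand, explored):
    mu, s = surrogate(cand, explored)
    d_min = min(sc for _, sc in explored)
    z = (d_min - mu) / s
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    dis = min(math.dist(cand, p) for p, _ in explored)     # exploration bonus
    return (d_min - mu) + s * phi + 5.0 * challenge(cand) + 2.0 * dis

explored = []
for _ in range(5):  # a few initial random evaluations to seed the surrogate
    p = (rng.uniform(0, 4), rng.uniform(0, 4))
    explored.append((p, drive_score(*p)))

T_set = []  # perception-oriented hazard test case set
for k in range(N_LIMIT):
    cands = [(rng.uniform(0, 4), rng.uniform(0, 4)) for _ in range(64)]
    best = max(cands, key=lambda c: acquisition(c, explored))
    score = drive_score(*best)
    explored.append((best, score))
    if score < D_THR:  # keep scenarios the system under test handles poorly
        T_set.append((best, score))
```

With this toy simulator the loop accumulates low-score (hazard) scenarios concentrated in the heavy-fog/strong-glare corner of the search space.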
The key contribution of the invention is a test method for autonomous driving visual perception algorithms, comprising the qualitative study of causal relationships and the quantitative estimation of causal effects; a new evaluation index, the challenge index, for measuring how challenging environmental conditions are for the visual algorithm; and a challenge-index-guided search method that generates risky test scenarios and verifies the performance of the end-to-end visual algorithm in various challenging environments.
The main protection points of the invention are as follows:
(1) The challenge index calculation method: the calculation method of the challenge index provided by the invention is protected; based on quantitative estimation and normalized calculation of the causal effect, it effectively measures how challenging the environmental conditions are for the visual algorithm.
(2) The dangerous scene identification method: the method of identifying dangerous scenes with the challenge index and the driving score is protected; by combining the two, it can effectively judge whether the performance of the automatic driving system in a specific scene falls below a preset threshold, thereby identifying potentially dangerous situations.
(3) Algorithm flow and related details: the surrogate-model-based search space modeling method is protected, which builds a model of the potential test sample space from spatial correlation and introduces the causal knowledge of the challenge index to guide the search strategy toward more challenging test cases; the algorithm flow and related details provided by the invention, including the specific steps of the challenge-index-guided search and the construction and update of the surrogate model, are also protected, and are important technical and innovative features for implementing the method.
Compared with the best prior art, the invention has the following advantages (namely beneficial effects):
(1) Deep mining of causal relationships: the invention firstly proposes the application of causal inference theory in the industry, and deeply digs the causal relation between the environmental condition and the visual algorithm performance, thereby being capable of more accurately evaluating the influence of different environmental factors on the algorithm, and having more scientificity and reliability compared with the prior art.
(2) Innovative measure of challenge index: the invention provides the challenge index as a new index for evaluating the challenge degree of the algorithm under various environmental conditions, and the index combines the normalized calculation and weight distribution of the causal effect, so that the challenge of the environment to the algorithm can be more comprehensively measured, and the requirements of actual application scenes can be more accurately reflected compared with the prior art.
(3) Search space modeling and test case generation: the search space modeling method based on the proxy model can more efficiently generate the challenging test cases, is helpful for more comprehensively verifying and evaluating the robustness and performance of the visual algorithm, and has higher efficiency and feasibility compared with the prior art.
(4) Effective dangerous scene identification: the invention uses causal reasoning to screen the key factors influencing the visual perception algorithm and introduces, for the first time, the concept of the challenge index to quantify their causal effects. The proposed efficient dangerous scene generation method, combining the challenge index with the driving score, can rapidly and accurately identify the performance of the automatic driving system in dangerous scenes and give timely early warning of potentially dangerous situations. Compared with the prior art, it generates challenging test scenes in a principled way and provides an important guarantee for the safety and reliability of automatic driving systems.
In summary, compared with the best prior art, the method has remarkable advantages in aspects of causal relation mining, challenge index measurement, dangerous scene recognition, search space modeling and the like, and can more accurately evaluate and verify the performance of the visual algorithm in a complex environment.
In addition, extensive experiments have been carried out on the method: compared with random search and random neighborhood search, it discovers 3.75–12.3 times and 1.5–9.25 times as many dangerous scenes, respectively. Moreover, the generated scenes balance test coverage and danger, posing a greater threat to the system under test.
Further, as shown in fig. 6, the present invention further provides a system for generating a hazard test case for a visual perception algorithm based on the method for generating a hazard test case for a visual perception algorithm, where the system for generating a hazard test case for a visual perception algorithm includes:
the observation data set collecting module 51 is used for acquiring images captured by the vision sensor of the automatic driving automobile, cleaning and marking the collected images, and obtaining an observation data set according to the non-reference image quality evaluation characteristics;
the causal structure diagram construction module 52 is configured to construct a causal structure diagram of the influence of the environmental factor on the perceptual performance according to the observation data set and by fusing domain knowledge;
The causal effect estimation module 53 is configured to acquire key environmental factors in the observation data set, and quantitatively calculate the influence degree of each environmental factor on the visual perception performance;
the dangerous test case generating module 54 is configured to generate dangerous test cases oriented to perception in batches based on the challenge indexes based on qualitative analysis of causal relation between the key environment variables and visual perception performance and quantitative estimation of causal effect.
Further, as shown in fig. 7, the application further provides a terminal based on the method and the system for generating the dangerous test case facing the visual perception algorithm, and the terminal comprises a processor 10, a memory 20 and a display 30. Fig. 7 shows only some of the components of the terminal, but it should be understood that not all of the illustrated components are required to be implemented and that more or fewer components may alternatively be implemented.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 20 may in other embodiments also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal. Further, the memory 20 may also include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing application software installed in the terminal and various data, such as program codes of the installation terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In an embodiment, the memory 20 stores a hazard test case generating program 40 facing the visual perception algorithm, where the hazard test case generating program 40 facing the visual perception algorithm may be executed by the processor 10, so as to implement the hazard test case generating method facing the visual perception algorithm in the present application.
The processor 10 may in some embodiments be a central processing unit (Central Processing Unit, CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 20, such as executing the visual perception algorithm-oriented hazard test case generation method, etc.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like in some embodiments. The display 30 is used for displaying information at the terminal and for displaying a visual user interface. The components 10-30 of the terminal communicate with each other via a system bus.
In an embodiment, the steps of the visual perception algorithm-oriented hazard test case generation method described above are implemented when the processor 10 executes the visual perception algorithm-oriented hazard test case generation program 40 in the memory 20.
The invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a visual perception algorithm-oriented dangerous test case generation program, and the visual perception algorithm-oriented dangerous test case generation program realizes the steps of the visual perception algorithm-oriented dangerous test case generation method when being executed by a processor.
In summary, the present invention provides a method for generating a dangerous test case facing a visual perception algorithm and related devices, where the method includes: acquiring an image captured by an automatic driving automobile vision sensor, cleaning and marking the collected image, and obtaining an observation data set according to the non-reference image quality evaluation characteristics; according to the observation data set, and by fusing domain knowledge, constructing a causal structure diagram of influence of environmental factors on perception performance; acquiring key environmental factors from the causal structure diagram, and quantitatively calculating the influence degree of each environmental factor on the visual perception performance; based on qualitative analysis of causal relation between key environment variables and visual perception performance and quantitative estimation of causal effect, dangerous test cases oriented to perception are generated in batches based on challenge indexes. The method and the system can effectively measure the challenge degree of the environmental condition to the visual algorithm based on quantitative estimation and normalization calculation of the causal effect, guide the search strategy to generate more challenging test cases by introducing causal knowledge of the challenge index, and effectively judge whether the performance of the automatic driving system in a specific scene is lower than a preset threshold value or not by combining the challenge index and the driving score, so that potential dangerous situations are identified.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal comprising the element.
Of course, those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by a computer program for instructing relevant hardware (e.g., processor, controller, etc.), the program may be stored on a computer readable storage medium, and the program may include the above described methods when executed. The computer readable storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (10)

1. The dangerous test case generation method facing the visual perception algorithm is characterized by comprising the following steps of:
acquiring an image captured by an automatic driving automobile vision sensor, cleaning and marking the collected image, and obtaining an observation data set according to the non-reference image quality evaluation characteristics;
according to the observation data set, and by fusing domain knowledge, constructing a causal structure diagram of influence of environmental factors on perception performance;
acquiring key environmental factors from the causal structure diagram, and quantitatively calculating the influence degree of each environmental factor on the visual perception performance;
based on qualitative analysis of causal relation between key environment variables and visual perception performance and quantitative estimation of causal effect, dangerous test cases oriented to perception are generated in batches based on challenge indexes.
2. The visual perception algorithm-oriented hazard test case generation method according to claim 1, wherein the steps of acquiring the image captured by the visual sensor of the automatic driving automobile, cleaning and marking the collected image, and obtaining the observation data set according to the no-reference image quality evaluation feature comprise:
The method comprises the steps that environmental variables to be inspected form combination conditions, a certain number of images are captured for each environmental combination, and the images are acquired through an automatic driving automobile vision sensor;
cleaning and marking the collected images, and extracting non-reference image quality assessment features from the images;
obtaining an observation data set according to the non-reference image quality evaluation characteristics;
wherein the obtained observation dataset is expressed as D = {(X_i, W_i, Y_i)}_{i=1}^{N}, where N is the number of cases in the observation dataset, i indexes the cases, X is the covariate vector, W is the vector of environmental factors under inspection, and Y is the measured visual perception algorithm performance index.
3. The visual perception algorithm-oriented dangerous test case generation method according to claim 2, wherein the constructing a causal structure diagram of influence of environmental factors on perception performance according to the observation data set and by fusing domain knowledge specifically comprises:
representing causal relationships between a set of variables by a directed acyclic graph G (v, epsilon), where v represents random variables and epsilon represents causal links between variables;
starting from a blank graph by adopting a quick greedy equivalent search algorithm, iteratively adding or deleting edges through a scoring function until convergence, and constructing a causal structure diagram by combining necessary domain knowledge.
4. The visual perception algorithm-oriented dangerous test case generating method according to claim 3, wherein the applying a fast greedy equivalent search algorithm starts from a blank graph, iteratively adds or deletes edges through a scoring function until convergence, and specifically includes:
initializing a blank graph G_init from the observation dataset;
Iteratively processing each pair of variables: < W, Y >, < W, X > and < X, Y >, adding an edge between the variables;
for each potential edge, testing whether adding the potential edge to the current graph results in a loop graph, and if so, skipping the potential edge;
for each potential edge which does not lead to the cyclic graph, testing whether the addition of the potential edge improves the scoring index, and if so, adding the potential edge;
iteratively processing all edges in the current graph, checking whether the potential edges are removed and improving the scoring index without breaking the graph, and if so, executing the operation of removing the edges;
repeating the iterative process until the scoring index is no longer improved;
wherein, checking whether to remove the potential edge is judged by calculating whether the Bayesian information criterion BIC value is increased:
BIC = 2·ln P(data | θ, G) − c·k·ln(n)
wherein θ represents a parameter of the directed acyclic graph G (v, ε), c is a constant taking 1, k is the number of directed acyclic graph parameters, and n represents the sample size of observed data;
If the BIC value of the current causal structure is increased, this indicates that the potential edge is removed.
5. The visual perception algorithm-oriented risk test case generation method according to claim 4, wherein the obtaining key environmental factors in the observation data set quantitatively calculates the influence degree of each environmental factor on visual perception performance, and specifically includes:
the difference between the potential intervention result and the control result represents the individual treatment effect ITE:
ITE_i = Y_i(W_i = 1) − Y_i(W_i = 0)
where Y_i(W_i = 1) denotes the potential intervention result for the record in the observation dataset with the fog intensity set to mild; Y_i(W_i = 0) denotes the control result without fog; W_i = 0 means that the fog intensity of the i-th record in the observed data is 0, and W_i = 1 means that the fog intensity of the i-th record is mild;
the average causal effect ACE of light fog on visual perception performance is expressed as:
ACE = E[Y_i(W_i = 1) − Y_i(W_i = 0)]
where E[·] denotes the mathematical expectation;
when the treatment group and the control group are not randomly assigned, there is a selection bias: the naive difference of group means, (1/N_T) Σ_{i: W_i = 1} Y_i − (1/N_C) Σ_{i: W_i = 0} Y_i, is a biased estimate of the ACE,
where N_T and N_C are the sample sizes of the treatment group and the control group, respectively;
estimating causal effects using a dedicated statistical learning model:
the results of the treatment group and the control group are estimated using any statistical regression method and are denoted μ̂₁(x) and μ̂₀(x), respectively;
in the control group, the difference is defined as D_i^C = μ̂₁(X_i) − Y_i, denoted D_C; in the treatment group, the difference is defined as D_i^T = Y_i − μ̂₀(X_i), denoted D_T; D_C and D_T are known as the imputed treatment effects;
fitting the treatment effects τ₀(x) and τ₁(x) on the new datasets (X_C, D_C) and (X_T, D_T) using any regression method, where τ₀(x) is the control-group estimate, τ₁(x) is the treatment-group estimate, and X_C and X_T are the covariates of the control and treatment groups, respectively; combining τ₀(x) and τ₁(x) yields the final individual causal effect estimate τ_X(x):
τ_X(x) = g(x)·τ₀(x) + (1 − g(x))·τ₁(x)
where g(x) ∈ [0, 1] is a weighting function, typically taken to be the propensity score;
the ACE is determined by computing the mathematical expectation of τ_X(x).
6. The visual perception algorithm-oriented risk test case generation method according to claim 5, wherein the qualitative analysis based on the causal relationship between the key environment variable and the visual perception performance and the quantitative estimation of the causal effect generate the perception-oriented risk test case in batches based on the challenge index, specifically comprising:
defining a challenge index:
challenge index = [r₁, r₂, …, r_m] · c
where c = [c₁, c₂, …, c_m]^T contains the relative weights of the key environmental impact factors retained after screening, reflecting the quantitative contribution of the different environmental factors to the perception result; m is the number of critical environmental factors; r_i is the normalized causal effect corresponding to c_i; if an intervention w applied to c_i yields the causal effect ACE_w, then:
wherein ,Ni Is the environmental node c i The type of intervention possible;
generating dangerous test cases whose driving score falls below the threshold d_thr; the test score of the automatic driving perception algorithm under test in a given test scenario is:

d_score = 100 × R_i × P_i

where d_score is the test score of the system under test in the scenario; R_i is the percentage of the planned path that the system under test completes in the i-th test scenario; P_i is the violation penalty term of the system under test;
combining the surrogate model and causal knowledge to maximize an acquisition function, and taking the test parameter combination that maximizes the acquisition function as the next test case; the acquisition function is defined in terms of d̂ and s, the driving score and variance predicted by the surrogate model; d_min, the lowest driving score predicted by the surrogate model; φ, the probability density function of the standard normal distribution; CI, the challenge index defined above; and Dis, the minimum Euclidean distance between the candidate test case and the already explored test cases;

if the current experiment count k exceeds the preset number of search iterations N_limit, the perception-oriented dangerous test case set T_set is generated in batches.
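The scoring and candidate-selection steps can be sketched as follows. The exact acquisition formula is not reproduced in the text, so the multiplicative combination of the standard-normal density φ, the challenge index CI, and the distance Dis below is an assumption, as are all function names:

```python
import math

def driving_score(route_completion, violation_penalty):
    """d_score = 100 * R_i * P_i: route-completion fraction times a
    violation penalty term, both in [0, 1]."""
    return 100.0 * route_completion * violation_penalty

def acquisition(d_hat, s, d_min, ci, dis):
    """Assumed acquisition: standard-normal density of the predicted
    improvement over the lowest predicted score d_min, scaled by the
    challenge index CI and the exploration distance Dis."""
    z = (d_hat - d_min) / max(s, 1e-9)
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return phi * ci * dis

def select_test_case(candidates):
    """Pick the candidate maximizing the acquisition; each candidate is a
    dict carrying the surrogate prediction and causal-knowledge terms."""
    return max(candidates, key=lambda c: acquisition(
        c["d_hat"], c["s"], c["d_min"], c["ci"], c["dis"]))
```

In a full loop, the selected candidate would be simulated, its d_score compared against d_thr, and the surrogate refitted until the iteration budget N_limit is exhausted.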
7. The visual perception algorithm-oriented dangerous test case generation method of claim 6, wherein a dangerous test case is a test environment specified by a set of determined values.
8. A dangerous test case generation system oriented to a visual perception algorithm, characterized by comprising:

an observation data set collection module, configured to acquire the images captured by the vision sensor of an automatic driving automobile, clean and annotate the collected images, and obtain an observation data set according to no-reference image quality assessment features;

a causal structure diagram construction module, configured to construct a causal structure diagram of the influence of environmental factors on perception performance from the observation data set fused with domain knowledge;

a causal effect estimation module, configured to obtain the key environmental factors from the causal structure diagram and quantitatively calculate the degree of influence of each environmental factor on visual perception performance;

a dangerous test case generation module, configured to generate perception-oriented dangerous test cases in batches based on the challenge index, on the basis of qualitative analysis of the causal relationship between the key environment variables and visual perception performance and quantitative estimation of the causal effect.
9. A terminal, comprising: a memory, a processor, and a visual perception algorithm-oriented dangerous test case generation program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the visual perception algorithm-oriented dangerous test case generation method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a visual perception algorithm-oriented dangerous test case generation program which, when executed by a processor, implements the steps of the visual perception algorithm-oriented dangerous test case generation method according to any one of claims 1 to 7.
CN202310694282.9A 2023-06-12 2023-06-12 Visual perception algorithm-oriented dangerous test case generation method and related equipment Pending CN116665174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310694282.9A CN116665174A (en) 2023-06-12 2023-06-12 Visual perception algorithm-oriented dangerous test case generation method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310694282.9A CN116665174A (en) 2023-06-12 2023-06-12 Visual perception algorithm-oriented dangerous test case generation method and related equipment

Publications (1)

Publication Number Publication Date
CN116665174A true CN116665174A (en) 2023-08-29

Family

ID=87722286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310694282.9A Pending CN116665174A (en) 2023-06-12 2023-06-12 Visual perception algorithm-oriented dangerous test case generation method and related equipment

Country Status (1)

Country Link
CN (1) CN116665174A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056746A (en) * 2023-10-11 2023-11-14 长春汽车工业高等专科学校 Big data-based automobile test platform and method


Similar Documents

Publication Publication Date Title
CN109978893B (en) Training method, device, equipment and storage medium of image semantic segmentation network
Dervilis et al. On robust regression analysis as a means of exploring environmental and operational conditions for SHM data
CN113272827A (en) Validation of classification decisions in convolutional neural networks
CN110969200B (en) Image target detection model training method and device based on consistency negative sample
CN110874471B (en) Privacy and safety protection neural network model training method and device
WO2021157330A1 (en) Calculator, learning method of discriminator, and analysis system
Luleci et al. Generative adversarial networks for labeled acceleration data augmentation for structural damage detection
CN116665174A (en) Visual perception algorithm-oriented dangerous test case generation method and related equipment
JP2021174556A (en) Semantic hostile generation based on function test method in automatic driving
US20200410709A1 (en) Location determination apparatus, location determination method and computer program
Mohammadi-Ghazi et al. Conditional classifiers and boosted conditional Gaussian mixture model for novelty detection
CN114218998A (en) Power system abnormal behavior analysis method based on hidden Markov model
Sun et al. Reliability validation of learning enabled vehicle tracking
Zhang et al. Anomaly detection of sensor faults and extreme events based on support vector data description
Langford et al. “know what you know”: Predicting behavior for learning-enabled systems when facing uncertainty
CN113343123B (en) Training method and detection method for generating confrontation multiple relation graph network
Noppel et al. Disguising attacks with explanation-aware backdoors
Dederichs et al. Experimental comparison of automatic operational modal analysis algorithms for application to long-span road bridges
JP2021165909A (en) Information processing apparatus, information processing method for information processing apparatus, and program
CN113486754B (en) Event evolution prediction method and system based on video
KR101351997B1 (en) Ecological environment evaluation system and method thereof
CN114039837B (en) Alarm data processing method, device, system, equipment and storage medium
ŞAHİN The role of vulnerable software metrics on software maintainability prediction
Hashemi et al. Runtime monitoring for out-of-distribution detection in object detection neural networks
Ramachandra Causal inference for climate change events from satellite image time series using computer vision and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination