CN110796024A - Automatic driving visual perception test method and device for failure sample - Google Patents

Automatic driving visual perception test method and device for failure sample

Info

Publication number
CN110796024A
CN110796024A (application CN201910955518.3A)
Authority
CN
China
Prior art keywords
failure
visual perception
detection
automatic driving
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910955518.3A
Other languages
Chinese (zh)
Other versions
CN110796024B (en)
Inventor
王军德
周风明
郝江波
Current Assignee
Wuhan Kotei Informatics Co Ltd
Original Assignee
Wuhan Kotei Informatics Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Kotei Informatics Co Ltd
Priority to CN201910955518.3A
Publication of CN110796024A
Application granted
Publication of CN110796024B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Abstract

The embodiment of the invention provides an automatic driving visual perception test method and device for failure samples. Taking perception failure samples as the core, fast and effective verification of visual perception is achieved by organically combining several technical points: failure scene definition, failure picture extraction, failure picture learning, and HIL test construction. The existing natural driving image database is used effectively as the original data source of the test system, and the test system keeps the database continuously and incrementally updated through iterative updating. The visual perception sample set is derived from failure samples of similar products and of the product itself, so it is highly targeted; failure scenes are summarized and distilled through extensive analysis of failure samples, greatly improving effectiveness. Implementing pre-detection and formal detection in stages rapidly advances test progress, and only the extracted, specific samples are labeled, tested, and verified, greatly improving efficiency.

Description

Automatic driving visual perception test method and device for failure sample
Technical Field
The invention relates to the field of automatic driving, and in particular to an automatic driving visual perception test method and device for failure samples.
Background
As mobile internet traffic growth gradually reaches its ceiling, the digital integration of the internet with traditional industries such as agriculture, manufacturing, construction, and services has become a new trend, and combining the industrial internet with 5G, cloud computing, and related technologies can accelerate the transformation of the real economy. The automobile is an indispensable intelligent mobile device in industrial internet scenarios, and with the new generation of automotive revolutions such as new energy, intelligent connectivity, and automatic driving, a replicable and repeatable closed-loop business model is being created in combination with different deployment scenarios.
Automatic driving means that an intelligent vehicle perceives the driving environment around it through sensor equipment installed on the vehicle (including 2D camera vision, lidar, millimeter-wave radar, and the like), performs fast computation and analysis in combination with a high-precision navigation map, continuously simulates, deeply learns, and judges potential road conditions, plans the optimal or most suitable driving route and mode by algorithm, and then feeds the result back through a chip to the control system, which executes actual driving actions such as braking and steering-wheel control.
Automatic driving is currently in a stage of rapid development, and intelligence is an especially hot topic. Perception, fusion, decision, control, and execution are all stages of that intelligence, and perception, as the eyes and ears of the automobile, is especially important. According to market survey reports, front-camera installations in 2019 grew 71.7 percent year-on-year. Thanks to the effectiveness and cost advantages of vision sensors, a large number of ADAS and L3 functions are realized with single or multiple visual perception units: vision structures the traffic world around the vehicle, such as people and vehicles, lane lines, signboards, and parking spaces, and provides accurate information to the control unit.
However, the camera, being an optical component, is easily affected by lighting, and traffic scenes are endlessly varied, so visual perception depends on machine learning and AI, which in turn require extensive training as their backing. Even the world's leading camera enterprises face the following problems: 1. how to obtain a sufficiently large set of original images; 2. how to find a sample set within those images; 3. how to select a suitable training set from the sample set; 4. how to label the sample set quickly and accurately; 5. how to verify the labeled training and test sets at the least cost. The prior art mostly offers methods that solve a single problem: attending only to the labeling process or objects without considering how representative the labeled content is; only to improving quality without reducing investment; only to labeling efficiency without fast acquisition of the content to be labeled; or only to a single batch of technical implementation without efficiently exploiting historical data through iterative optimization. No complete test system has been provided from the overall perspective of "more, faster, better, cheaper". Many Tier 1 suppliers and automakers, limited by the large investment required for perception testing, unsatisfactory results, and excessive time consumption, have made little progress in later stages.
Disclosure of Invention
In order to solve the above problems, embodiments of the present invention provide an automated driving visual perception test method and apparatus for failure samples that overcome or at least partially solve the above problems.
According to a first aspect of embodiments of the present invention, there is provided an automated driving visual perception test method for a failure sample, comprising:
taking the automatic driving visual perception failure picture as original input, and performing hardware-in-the-loop (HIL) environment pre-detection on the detected visual perception product to obtain a pre-detection failure sample of the detected product;
and based on the trained failure picture expansion retrieval model, acquiring an automatic driving vision perception failure expansion picture matched with the characteristics of the pre-detection failure sample of the detected product in a natural driving image database, and carrying out HIL environment formal detection on the detected vision perception product based on the automatic driving vision perception failure expansion picture.
Preferably, before the automatic driving visual perception failure picture is taken as an original input, the method further comprises the following steps:
natural driving image data are acquired based on historical natural driving data or are verified and recorded based on historical visual perception products, and a natural driving image database is formed;
and collecting the automatic driving visual perception failure data based on the natural driving environment verification of the historical visual perception product or the verification of the early stage of the historical visual perception product to obtain a perception failure set.
Preferably, the method further comprises the following steps:
and extracting the characteristic of the automatic driving visual perception failure scene in the perception failure set, and extracting the automatic driving visual perception failure picture in the natural driving image database based on the characteristic of the automatic driving visual perception failure scene.
Preferably, after obtaining the sample for the product to be tested to detect the failure, the method further comprises:
and performing machine learning training based on the pre-detection failure sample of the detected product and the automatic driving image database to obtain a failure image expansion retrieval model for extracting an automatic driving visual perception failure expansion image from the automatic driving image data set.
Preferably, after obtaining the extended picture of the automated driving visual perception failure corresponding to the sample of the pre-detection failure of the detected product in the natural driving image database, the method further comprises:
and labeling the automatic driving visual perception failure expansion picture to obtain a labeled training set.
Preferably, after the augmented picture of the automatic driving visual perception failure is labeled, the method further includes:
and taking the marked automatic driving visual perception failure expansion picture as an automatic driving visual perception failure picture so as to carry out HIL environment pre-detection on the detected visual perception product.
Preferably, after performing HIL environment formal detection on the detected visual perception product based on the automatic driving visual perception failure extended picture, the method further includes:
and outputting a detection failure result of the detected product, and taking the detection failure result of the detected product as a pre-detection failure sample of the detected product.
According to a second aspect of the embodiments of the present invention, there is provided an automatic driving visual perception testing apparatus for a failure sample, including a pre-detection module and a formal detection module;
the pre-detection module is used for performing hardware-in-the-loop (HIL) environment pre-detection on the detected visual perception product by taking the automatic driving visual perception failure picture as original input, to obtain a pre-detection failure sample of the detected product;
and the formal detection module is used for expanding the retrieval model based on the trained failure picture, acquiring an automatic driving vision perception failure expansion picture matched with the characteristics of the pre-detection failure sample of the detected product in a natural driving image database, and carrying out formal detection on the HIL environment of the detected vision perception product based on the automatic driving vision perception failure expansion picture.
According to a third aspect of embodiments of the present invention, there is provided an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the automated driving visual perception test method for failure samples as provided in any one of the various possible implementations of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of automated driving visual perception testing for failure samples as provided in any one of the various possible implementations of the first aspect.
The embodiment of the invention provides a method and a device for testing automatic driving visual perception for failure samples. Taking perception failure samples as the core, fast and effective verification of visual perception is achieved by organically combining several technical points: failure scene definition, failure picture extraction, failure picture learning, and HIL test construction. The existing natural driving image database is used effectively as the original data source of the test system, and the test system keeps the database continuously and incrementally updated through iterative updating. The visual perception sample set is derived from failure samples of similar products and of the product itself, so it is highly targeted; failure scenes are summarized and distilled through extensive analysis of failure samples, greatly improving effectiveness. Implementing pre-detection and formal detection in stages rapidly advances test progress, and only the extracted, specific samples are labeled, tested, and verified, greatly improving efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from these without inventive effort.
FIG. 1 is a block diagram of an automated driving visual perception test method for failure samples according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an exemplary embodiment of an automated driving visual perception testing method for failure samples according to the present invention;
fig. 3 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, intelligent connected automobiles are developing rapidly, and intelligence is an especially hot topic. Perception, fusion, decision, control, and execution are all stages of that intelligence, and perception, as the eyes and ears of the automobile, is especially important. According to market survey reports, front-camera installations in 2019 grew 71.7 percent year-on-year. Thanks to the effectiveness and cost advantages of vision sensors, a large number of ADAS and L3 functions are realized with single or multiple visual perception units: vision structures the traffic world around the vehicle, such as people and vehicles, lane lines, signboards, and parking spaces, and provides accurate information to the control unit.
However, the camera, being an optical component, is easily affected by lighting, and traffic scenes are endlessly varied, so visual perception depends on machine learning and AI, which in turn require extensive training as their backing. Even the world's leading camera enterprises face the following problems: 1. how to obtain a sufficiently large set of original images; 2. how to find a sample set within those images; 3. how to select a suitable training set from the sample set; 4. how to label the sample set quickly and accurately; 5. how to verify the labeled training and test sets at the least cost. The prior art mostly offers methods that solve a single problem: attending only to the labeling process or objects without considering how representative the labeled content is; only to improving quality without reducing investment; only to labeling efficiency without fast acquisition of the content to be labeled; or only to a single batch of technical implementation without efficiently exploiting historical data through iterative optimization. No complete test system has been provided from the overall perspective of "more, faster, better, cheaper". Many Tier 1 suppliers and automakers, limited by the large investment required for perception testing, unsatisfactory results, and excessive time consumption, have made little progress in later stages.
Therefore, the automatic driving visual perception test method and device for failure samples provided by the embodiments of the invention take perception failure samples as the core and achieve fast, effective verification of visual perception by organically combining failure scene definition, failure picture extraction, failure picture learning, HIL test construction, and other technical points. The invention is described below with reference to the various embodiments.
As shown in fig. 1 and 2, an embodiment of the present invention provides an automated driving visual perception test method for a failure sample, including:
taking the automatic driving visual perception failure picture as original input, and performing hardware-in-the-loop (HIL) environment pre-detection on the detected visual perception product to obtain a pre-detection failure sample of the detected product;
and based on the trained failure picture expansion retrieval model, acquiring an automatic driving vision perception failure expansion picture matched with the characteristics of the pre-detection failure sample of the detected product in a natural driving image database, and carrying out HIL environment formal detection on the detected vision perception product based on the automatic driving vision perception failure expansion picture.
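The two-stage flow above (HIL pre-detection, then retrieval-driven formal detection) can be sketched as plain functions. Everything below — the function names, the stubbed detector, and the toy scene-matching "retrieval model" — is a hypothetical illustration, not the patent's actual implementation.

```python
# Illustrative sketch of the two-stage test flow: pre-detect with known
# failure pictures, retrieve matching extended pictures, formally detect.
# All names and the toy matching logic are hypothetical.

def hil_pre_detection(failure_pictures, product_detect):
    """Run the product under test on known failure pictures; keep the
    pictures it still gets wrong (the pre-detection failure samples)."""
    return [p for p in failure_pictures if product_detect(p) != p["truth"]]

def retrieve_extended_pictures(failure_samples, database, match):
    """Stand-in for the trained retrieval model: pull pictures from the
    natural driving image database whose features match a failure sample."""
    return [img for img in database
            if any(match(img, s) for s in failure_samples)]

def hil_formal_detection(extended_pictures, product_detect):
    """Formal detection on the extended set; failures feed the next round."""
    return [p for p in extended_pictures if product_detect(p) != p["truth"]]

# Toy data: each "picture" is a dict of scene features plus ground truth.
database = [
    {"scene": "tunnel", "truth": "sign"},
    {"scene": "rain",   "truth": "pedestrian"},
    {"scene": "night",  "truth": "lane"},
]
known_failures = [{"scene": "tunnel", "truth": "sign"}]

# A deliberately weak detector that misses everything in tunnels.
detect = lambda p: None if p["scene"] == "tunnel" else p["truth"]
same_scene = lambda img, sample: img["scene"] == sample["scene"]

pre_fail = hil_pre_detection(known_failures, detect)
extended = retrieve_extended_pictures(pre_fail, database, same_scene)
formal_fail = hil_formal_detection(extended, detect)
```

In this sketch the retrieval step is a simple scene match; in the patent it is a trained model, but the data flow between the two HIL stages is the same.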
In this embodiment, perception failure samples are taken as the core, and fast, effective verification of visual perception is achieved by organically combining failure scene definition, failure picture extraction, failure picture learning, HIL test construction, and other technical points. The existing natural driving image database is used effectively as the original data source of the test system, and the test system keeps the database continuously and incrementally updated through iterative updating. The visual perception sample set is derived from failure samples of similar products and of the product itself, so it is highly targeted; failure scenes are summarized and distilled through extensive analysis of failure samples, greatly improving effectiveness. Implementing pre-detection and formal detection in stages rapidly advances test progress, and only the extracted, specific samples are labeled, tested, and verified, greatly improving efficiency.
On the basis of the above embodiment, before the automatic driving visual perception failure picture is taken as an original input, the method further comprises the following steps:
natural driving image data are acquired based on historical natural driving data or are verified and recorded based on historical visual perception products, and a natural driving image database is formed;
and collecting the automatic driving visual perception failure data based on the natural driving environment verification of the historical visual perception product or the verification of the early stage of the historical visual perception product to obtain a perception failure set.
In this embodiment, as a preferred implementation, natural driving image data is obtained from historical natural driving data collection or recorded during natural driving environment verification of historical visual perception products, forming a natural driving image database.
In this embodiment, as a preferred implementation, automatic driving visual perception failures are collected during natural driving environment verification of historical visual perception products or during early-stage product verification.
1. Collecting test data from the final application scenario of the visual perception product is the first step of the test system. Taking TSR/ACC/LDW as examples, the front camera used for environment perception in ADAS (Advanced Driver Assistance System) functions is deployed on users' commercial and passenger vehicles and may appear anywhere on millions of kilometers of road sections in China, at any time and in any weather. The vision module required for perception is mounted precisely at the specified position on the test vehicle, and the software and hardware requirements are determined through parameter adjustment and calibration. By driving across multiple cities, more than fifty thousand kilometers of road sections can be covered.
In addition to the above embodiments, as a preferred embodiment, the method further includes:
2. The "natural driving image data" of step 1 above is stored and preserved on an HDD (Hard Disk Drive).
3. Besides the pure data acquisition of stage 1, there is generally a perception unit or product under test in a DEMO or R&D version, which is verified in a natural driving environment.
In addition to the above embodiments, as a preferred embodiment, the method further includes:
4. The "natural driving image data" of step 3 is likewise stored on an HDD.
In addition to the above embodiments, as a preferred embodiment, the method further includes:
5. The data of steps 2 through 4 are stored in a data management room, and the natural driving image database is maintained through a data management platform.
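Steps 1 through 5 amount to funneling recordings from several sources into one continuously maintained database. A minimal catalog sketch follows; the class, its methods, and the metadata fields are hypothetical, since the patent does not prescribe a schema.

```python
# Minimal sketch of a natural driving image database catalog that merges
# recordings from pure acquisition (steps 1-2) and DEMO-version product
# verification (steps 3-4) into one store (step 5). Field names are
# hypothetical; the patent does not prescribe a schema.

class NaturalDrivingImageDB:
    def __init__(self):
        self._records = {}          # clip_id -> metadata

    def ingest(self, clip_id, source, city, weather):
        """Add a clip; re-ingesting the same id updates it in place, which
        mirrors the incremental, iterative updating described above."""
        self._records[clip_id] = {
            "source": source,       # "acquisition" or "product_verification"
            "city": city,
            "weather": weather,
        }

    def query(self, **conditions):
        """Return clip ids whose metadata matches all given conditions."""
        return sorted(
            cid for cid, meta in self._records.items()
            if all(meta.get(k) == v for k, v in conditions.items())
        )

db = NaturalDrivingImageDB()
db.ingest("clip-001", "acquisition", "Wuhan", "rain")
db.ingest("clip-002", "product_verification", "Beijing", "night")
db.ingest("clip-001", "acquisition", "Wuhan", "fog")   # iterative update
```

A dictionary keyed by clip id keeps the sketch self-contained; a real data management platform would sit over the HDD stores of steps 2 and 4.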
On the basis of the above embodiments, the method further includes:
and extracting the characteristic of the automatic driving visual perception failure scene in the perception failure set, and extracting the automatic driving visual perception failure picture in the natural driving image database based on the characteristic of the automatic driving visual perception failure scene.
In this embodiment, as a preferred embodiment, the method further includes:
6. During data acquisition, the output of step 3 contains a large number of "automatic driving visual perception failures" along with a large amount of structured data behind the video, and failure samples can be extracted manually or automatically. First, a small portion of samples is extracted online or offline, and their characteristics are then classified, summarized, and distilled. Based on these characteristics, the steps of rule making, suspected-failure locking, suspected-failure extraction, and manual re-judgment are carried out automatically, gradually forming the automatic driving visual perception failure set.
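Step 6's chain of rule making, suspected-failure locking, and manual re-judgment can be sketched as a filter pipeline. The rule form (object/scene pairs), the record fields, and the confirmation callback are all invented for illustration.

```python
# Sketch of step 6: rules derived from a small seed set lock suspected
# failures out of the structured data, then a manual re-judgment pass
# confirms them. The rule itself and the record fields are invented.

def make_rules(seed_failures):
    """From a small, manually extracted seed set, derive simple rules:
    here, just the set of (object, scene) pairs seen to fail."""
    return {(f["object"], f["scene"]) for f in seed_failures}

def lock_suspects(records, rules):
    """Lock any record matching a failure rule as a suspected failure."""
    return [r for r in records if (r["object"], r["scene"]) in rules]

def re_judge(suspects, confirm):
    """Manual re-judgment stand-in: keep only confirmed failures."""
    return [s for s in suspects if confirm(s)]

seed = [{"object": "sign", "scene": "backlight"}]
records = [
    {"id": 1, "object": "sign", "scene": "backlight"},
    {"id": 2, "object": "lane", "scene": "rain"},
    {"id": 3, "object": "sign", "scene": "backlight"},
]
rules = make_rules(seed)
suspects = lock_suspects(records, rules)
failure_set = re_judge(suspects, confirm=lambda s: s["id"] != 3)
```

The confirmation lambda stands in for the human re-judgment; in practice rules would be richer than exact pair matching.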
In this embodiment, the definition of automatic driving visual perception failure scenes is developed from the automatic driving visual perception failures, and pictures of automatic driving visual perception failures are acquired from the natural driving image database by semi-automatic means.
In this embodiment, as a preferred implementation, the automatic driving visual perception failure pictures are taken as original input, HIL environment pre-detection is performed on the detected visual perception product, and the pre-detection failure samples of the detected product are generated.
In this embodiment, as a preferred embodiment, the method further includes:
7. The "automatic driving visual perception failure" set is similar to a defect set: the root causes of failure can be distilled only through extensive statistical analysis, classification, and summarization. In this embodiment these root causes are defined as failure scenes. A scene comprises scene factors and their values; the comprehensiveness of the scene factors and their effectiveness are distilled through data analysis, and the "automatic driving visual perception failure scene" definition is formed through a series of methods and tools.
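A failure scene as described in step 7 — scene factors and their values, distilled from statistics over the failure set — might be represented as follows. The factor names ("light", "road") and the frequency-counting heuristic are hypothetical examples, not the patent's method.

```python
# Sketch of step 7: summarize the failure set into failure scene
# definitions, i.e. which factor-value combinations co-occur with
# failures most often. Factor names are hypothetical examples.
from collections import Counter

def define_failure_scenes(failure_set, factors, top_n=2):
    """Count factor-value combinations across failures and keep the most
    frequent ones as the failure scene definition."""
    combos = Counter(
        tuple((f, sample[f]) for f in factors) for sample in failure_set
    )
    return [dict(combo) for combo, _ in combos.most_common(top_n)]

failures = [
    {"light": "backlight", "road": "highway"},
    {"light": "backlight", "road": "highway"},
    {"light": "night", "road": "tunnel"},
]
scenes = define_failure_scenes(failures, factors=("light", "road"))
```

Frequency counting is the simplest possible distillation; the patent's "series of methods and tools" would go well beyond it, but the output shape — a list of factor-value scene definitions — is the same.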
In this embodiment, as a preferred embodiment, the method further includes:
8. Based on the failure scene definitions of step 7, it is basically known which factors limit visual perception or are representative of it. On that basis, "automatic driving visual perception failure pictures" are extracted, manually or automatically, from the database of step 5 to form an effective test question set, which is then labeled accurately in a semi-automatic manner to form an effective test question set with answers. This partial set need not be large. At this point the "initial data sample stage" is complete.
In this embodiment, as a preferred embodiment, the method further includes:
9. Once the visual perception product under test is obtained, an HIL environment is built by means such as video dark-box playback and video injection, the "automatic driving visual perception failure pictures" output by step 8 are input, and the answers of the product under test are compared against the standard answers.
10. A pre-evaluation report is produced from the comparison results, and the samples that failed pre-detection are classified.
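Steps 9 and 10 — injecting the test question set, comparing the product's answers against the standard answers, and classifying what failed — reduce to a comparison loop. The report fields and the toy product below are invented for illustration.

```python
# Sketch of steps 9-10: compare the product's answers on the injected
# failure pictures against the standard answers, then classify the
# pre-detection failures by scene. Report fields are illustrative.

def pre_evaluate(question_set, product_answer):
    """Compare answers, compute a pass rate, and group failures by scene."""
    failures = [q for q in question_set
                if product_answer(q["picture"]) != q["answer"]]
    by_scene = {}
    for f in failures:
        by_scene.setdefault(f["scene"], []).append(f["picture"])
    total = len(question_set)
    return {
        "pass_rate": (total - len(failures)) / total,
        "failures_by_scene": by_scene,
    }

questions = [
    {"picture": "img_a", "scene": "backlight", "answer": "speed_60"},
    {"picture": "img_b", "scene": "night", "answer": "no_overtaking"},
    {"picture": "img_c", "scene": "backlight", "answer": "stop"},
]
# Toy product under test that cannot read backlit signs.
answers = {"img_a": None, "img_b": "no_overtaking", "img_c": None}
report = pre_evaluate(questions, product_answer=answers.get)
```

The scene grouping in `failures_by_scene` is what makes the classified failure samples directly usable by the retrieval step that follows.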
On the basis of the above embodiments, after obtaining the sample for the pre-detection failure of the detected product, the method further includes:
and performing machine learning training based on the pre-detection failure sample of the detected product and the automatic driving image database to obtain a failure image expansion retrieval model for extracting an automatic driving visual perception failure expansion image from the automatic driving image data set.
In this embodiment, as a preferred embodiment, the method further includes:
11. Multiple rounds of machine learning are performed on the failure samples of the product under test using a self-developed tool. Machine learning (more than 100 rounds) is applied to the failure samples with YOLO v3, which self-learns the failure characteristics.
On the basis of the above embodiments, after obtaining the automatic driving visual perception failure extended pictures corresponding to the pre-detection failure samples of the detected product in the natural driving image database, the method further includes:
and labeling the automatic driving visual perception failure expansion picture to obtain a labeled training set.
In this embodiment, as a preferred embodiment, the method further includes:
12. After learning, "extended pictures for the failures of the product under test" are extracted automatically from the database of step 5; this set is far more valuable than that of step 8 and forms a new test question set. After the pictures are extracted automatically, positive and negative samples are separated manually, and the positive samples are labeled accurately in a semi-automatic manner, forming an effective test question set with answers. (This test question set is merged into step 8 and kept ready for subsequent products under test.) A closed-loop iterative update is thus formed, and the sample library is optimized.
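Steps 11 and 12 use learned failure characteristics to pull an extended set from the database and then split positive from negative samples. The sketch below is a feature-similarity stand-in: nothing here reproduces YOLO v3, and the feature vectors, threshold, and "manual" split are all invented.

```python
# Stand-in for steps 11-12: instead of a trained YOLO v3 model, score
# database pictures by cosine similarity of hand-made feature vectors to
# the failure samples, keep close matches as the extended set, then
# split positives from negatives "manually". Features, threshold, and
# the manual judgment are all invented for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def extend_failures(failure_feats, db_items, threshold=0.95):
    """Keep database pictures whose features are close to any failure."""
    return [name for name, feat in db_items
            if any(cosine(feat, f) >= threshold for f in failure_feats)]

failure_feats = [(1.0, 0.1, 0.0)]          # learned failure signature
db_items = [
    ("img_x", (0.9, 0.1, 0.0)),            # very similar: extended set
    ("img_y", (0.0, 1.0, 0.2)),            # dissimilar
    ("img_z", (1.0, 0.2, 0.0)),            # similar: extended set
]
extended = extend_failures(failure_feats, db_items)
positives = [n for n in extended if n != "img_x"]   # manual split stand-in
```

The point of the sketch is the data flow — learned signature in, extended test question set out — not the model; a detector trained over 100+ rounds replaces the cosine score in the patent's flow.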
On the basis of the above embodiments, after labeling the extended picture of the automatic driving visual perception failure, the method further includes:
and taking the marked automatic driving visual perception failure expansion picture as an automatic driving visual perception failure picture so as to carry out HIL environment pre-detection on the detected visual perception product.
In this embodiment, as a preferred embodiment, the method further includes:
13. In the HIL environment of step 9, built by means such as video dark-box playback and video injection, the result of step 12 is input, and the answers of the product under test are compared against the standard answers.
On the basis of the above embodiments, and after performing HIL environment formal detection on the detected visual perception product based on the automatic driving visual perception failure extended picture, the method further includes:
and outputting a detection failure result of the detected product, and taking the detection failure result of the detected product as a pre-detection failure sample of the detected product.
In this embodiment, as a preferred embodiment, the method further includes:
14. A formal evaluation report is produced for the comparison results, and the samples that failed detection are classified. (The failure samples are merged into step 6 and kept ready for subsequent products under test.) This forms a closed-loop iterative update and optimizes the failure set.
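The closed loops of steps 12 and 14 — test question sets fed back into step 8, failure samples fed back into step 6 — can be summarized as an iteration whose failure count shrinks as the product improves. The toy product below learns each failure it is shown once; that convergence behavior is an assumption of the sketch, not a claim about real products.

```python
# Sketch of the closed loop across steps 6-14: each round, detection
# failures are merged back into the failure set and drive the next
# round's test questions. The "product" is a toy that fixes each failure
# it is shown once; real convergence behavior will differ.

def run_closed_loop(initial_failures, rounds):
    failure_set = set(initial_failures)
    learned = set()
    history = []                         # failures found per round
    for _ in range(rounds):
        # Formal detection: anything not yet fixed fails.
        new_failures = {s for s in failure_set if s not in learned}
        history.append(len(new_failures))
        learned |= new_failures          # product fixed for these samples
        failure_set |= new_failures      # failures merged back (step 14)
    return history

history = run_closed_loop({"backlight_sign", "night_lane"}, rounds=3)
```

The per-round failure counts make the iterative optimization of the failure set observable: once the product handles a scene, it stops contributing failures.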
The engineering flow of the whole test system is divided into two major parts comprising 14 nodes in total. Before the "automatic driving visual perception failure pictures" lies the initial data sample stage; after them lies the verification stage of the product under test. The two parts are linked through the failure samples and the picture test sets, respectively.
In the figures, solid lines represent the business flow (which also carries data flow), and dashed lines represent data flow only. The results of each product under test become input to the test set, so the outputs accumulate across successive products under test, achieving data accumulation.
Based on the method of this embodiment, the quality of automatic driving visual perception products can be verified comprehensively, effectively, quickly, and at low cost, with the following advantages:
1. Comprehensiveness:
1.1 The existing natural driving image database is used effectively as the original data source of the test system.
1.2 The test system guarantees incremental updating of the database by means of iterative updating.
2. Effectiveness:
2.1 The visual perception sample set is derived from failure samples of similar products and of the product itself, so it is highly targeted.
2.2 By analyzing a large number of failure samples and summarizing and distilling failure scenes, effectiveness is greatly improved.
3. Rapidity:
3.1 Implementing pre-detection and formal detection in stages rapidly advances test progress.
3.2 Only the distilled, specific samples are labeled and tested, greatly improving efficiency.
3.3 HIL replaces public-road environment verification, improving execution efficiency.
4. Low cost:
4.1 Sample extraction and labeling are based on failure scenes; the small scale keeps investment small.
4.2 Machine learning replaces manual sample extraction, reducing cost.
According to a second aspect of the embodiments of the present invention, there is provided an automatic driving visual perception testing apparatus for failure samples, based on the testing method of the above embodiments. The apparatus includes a pre-detection module and a formal detection module:
the pre-detection module is used for performing hardware-in-the-loop (HIL) environment pre-detection on the tested visual perception product, taking the automatic driving visual perception failure pictures as original input, to obtain pre-detection failure samples of the tested product;
and the formal detection module is used for, based on the trained failure picture expansion retrieval model, acquiring from the natural driving image database automatic driving visual perception failure expansion pictures that match the characteristics of the pre-detection failure samples of the tested product, and performing HIL environment formal detection on the tested visual perception product based on those expansion pictures.
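The two-module structure of the apparatus can be sketched as follows. This is a structural illustration under assumed interfaces — the patent does not specify how the HIL runner or retrieval model is invoked, and all names here (`PreDetectionModule.run`, `hil_runner`, `retrieval_model`) are hypothetical.

```python
# Minimal structural sketch of the two-module testing apparatus.
# Module interfaces and names are assumptions for illustration only.

class PreDetectionModule:
    """Runs HIL pre-detection on the tested product with failure pictures."""
    def __init__(self, hil_runner):
        self.hil_runner = hil_runner  # callable: picture -> bool (True = passed)

    def run(self, failure_pictures):
        # Keep only the pictures the tested product still fails on.
        return [p for p in failure_pictures if not self.hil_runner(p)]

class FormalDetectionModule:
    """Expands pre-detection failures via a retrieval model, then re-tests."""
    def __init__(self, retrieval_model, database, hil_runner):
        self.retrieval_model = retrieval_model  # callable: (sample, db) -> matches
        self.database = database
        self.hil_runner = hil_runner

    def run(self, pre_detection_failures):
        # Gather expansion pictures matching each pre-detection failure.
        expanded = []
        for sample in pre_detection_failures:
            expanded.extend(self.retrieval_model(sample, self.database))
        # Formal HIL detection on the expanded picture set.
        return [p for p in expanded if not self.hil_runner(p)]

# Toy wiring: the product fails on any picture whose id contains "glare",
# and the (stand-in) retrieval model matches pictures by scene prefix.
pre = PreDetectionModule(hil_runner=lambda pic: "glare" not in pic)
pre_failures = pre.run(["glare_01", "clean_01"])
formal = FormalDetectionModule(
    retrieval_model=lambda sample, db: [p for p in db if sample.split("_")[0] in p],
    database=["glare_02", "clean_02"],
    hil_runner=lambda pic: "glare" not in pic,
)
formal_failures = formal.run(pre_failures)
```

The design point illustrated here is the staging: the pre-detection module narrows the input to confirmed failures before the (costlier) retrieval and formal HIL pass runs over the expanded set.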
An embodiment of the present invention provides an electronic device, as shown in fig. 3, including: a processor (processor) 501, a communication interface (Communications Interface) 502, a memory (memory) 503, and a communication bus 504, wherein the processor 501, the communication interface 502, and the memory 503 communicate with each other via the communication bus 504. The processor 501 may invoke a computer program stored on the memory 503 and executable on the processor 501 to perform the automatic driving visual perception test method for failure samples provided by the various embodiments described above, the method including, for example:
taking the automatic driving visual perception failure picture as original input, performing hardware-in-the-loop (HIL) environment pre-detection on the tested visual perception product to obtain a pre-detection failure sample of the tested product;
and based on the trained failure picture expansion retrieval model, acquiring from the natural driving image database an automatic driving visual perception failure expansion picture matching the characteristics of the pre-detection failure sample, and performing HIL environment formal detection on the tested visual perception product based on the expansion picture.
In addition, the logic instructions in the memory 503 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the automatic driving visual perception test method for failure samples provided by the foregoing embodiments, the method including, for example:
taking the automatic driving visual perception failure picture as original input, performing hardware-in-the-loop (HIL) environment pre-detection on the tested visual perception product to obtain a pre-detection failure sample of the tested product;
and based on the trained failure picture expansion retrieval model, acquiring from the natural driving image database an automatic driving visual perception failure expansion picture matching the characteristics of the pre-detection failure sample, and performing HIL environment formal detection on the tested visual perception product based on the expansion picture.
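The core retrieval step above — finding database pictures "matching the characteristics" of a pre-detection failure sample — can be sketched as a nearest-neighbor search over picture feature vectors. This is a minimal illustration under the assumption that pictures are represented by feature vectors and ranked by cosine similarity; the patent does not specify the retrieval model's internals, and all names and data here are hypothetical.

```python
# Hedged sketch of the expansion-picture retrieval step: given the feature
# vector of a pre-detection failure sample, rank database pictures by cosine
# similarity and return the closest ones as expansion pictures.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (0.0 if degenerate)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_expansion_pictures(failure_feature, database, top_k=2):
    """database: list of (picture_id, feature_vector) pairs.
    Returns the top_k picture ids most similar to the failure sample."""
    ranked = sorted(database,
                    key=lambda item: cosine_similarity(failure_feature, item[1]),
                    reverse=True)
    return [pic_id for pic_id, _ in ranked[:top_k]]

# Toy database with hand-made 3-dimensional "features".
db = [("night_07", [0.9, 0.1, 0.0]),
      ("rain_03",  [0.1, 0.9, 0.1]),
      ("noon_12",  [0.0, 0.1, 0.9])]
# A failure sample whose features resemble the night scene.
night_matches = retrieve_expansion_pictures([1.0, 0.0, 0.0], db, top_k=1)
```

In practice the feature vectors would come from the trained retrieval model rather than being hand-made, but the matching logic — rank the whole natural driving image database against each failure sample and take the closest pictures — is the same.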
In summary, the automatic driving visual perception test method and apparatus for failure samples provided by the embodiments of the present invention take perception failure samples as the core and achieve rapid and effective verification of visual perception by organically combining multiple technical points such as failure scene definition, failure picture extraction, failure picture learning, and HIL test construction. The existing natural driving image database is effectively utilized as the original data source of the test system, and the test system guarantees continuous incremental updating of the database through iterative updates. The visual perception sample set is derived from the failure samples of similar products and of the product itself and is therefore highly targeted; summarizing and refining failure scenes from a large number of analyzed failure samples greatly improves effectiveness; implementing pre-detection and formal detection in stages rapidly advances the test progress; and labeling and testing only the refined specific samples greatly improves efficiency.
The above-described embodiments of the electronic device and the like are merely illustrative, and units illustrated as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the various embodiments or some parts of the methods of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An automated driving visual perception test method for failure samples, comprising:
taking the automatic driving visual perception failure picture as original input, and performing hardware-in-the-loop (HIL) environment pre-detection on the tested visual perception product to obtain a pre-detection failure sample of the tested product;
and based on the trained failure picture expansion retrieval model, acquiring in a natural driving image database an automatic driving visual perception failure expansion picture matching the characteristics of the pre-detection failure sample of the tested product, and performing HIL environment formal detection on the tested visual perception product based on the automatic driving visual perception failure expansion picture.
2. The automated driving visual perception test method for the failure sample according to claim 1, wherein before the automated driving visual perception failure picture is used as an original input, the method further comprises:
acquiring natural driving image data based on historical natural driving data, or based on the verification records of historical visual perception products, to form a natural driving image database;
and collecting automatic driving visual perception failure data based on natural driving environment verification of historical visual perception products or on the early-stage verification of historical visual perception products, to obtain a perception failure set.
3. The automated driving visual perception test method for failure samples of claim 2, further comprising:
and extracting the characteristic of the automatic driving visual perception failure scene in the perception failure set, and extracting the automatic driving visual perception failure picture in the natural driving image database based on the characteristic of the automatic driving visual perception failure scene.
4. The automated driving visual perception test method for failure samples according to claim 1, wherein after obtaining the pre-detection failure sample of the tested product, the method further comprises:
and performing machine learning training based on the pre-detection failure sample of the tested product and the natural driving image database, to obtain a failure picture expansion retrieval model for extracting automatic driving visual perception failure expansion pictures from the natural driving image data set.
5. The automated driving visual perception test method for the failure sample according to claim 3, wherein after obtaining the automated driving visual perception failure extended picture corresponding to the pre-detection failure sample of the tested product in the natural driving image database, the method further comprises:
and labeling the automatic driving visual perception failure expansion picture to obtain a labeled training set.
6. The automated driving visual perception test method for the failure sample according to claim 5, wherein after the labeling of the augmented picture of automated driving visual perception failure, the method further comprises:
and taking the marked automatic driving visual perception failure expansion picture as an automatic driving visual perception failure picture so as to carry out HIL environment pre-detection on the detected visual perception product.
7. The automated driving visual perception test method for the failure sample according to claim 1, wherein after performing HIL environment formal detection on the detected visual perception product based on the automated driving visual perception failure extended picture, the method further comprises:
and outputting a detection failure result of the detected product, and taking the detection failure result of the detected product as a pre-detection failure sample of the detected product.
8. The automatic driving visual perception testing device for the failure sample is characterized by comprising a pre-detection module and a formal detection module;
the pre-detection module is used for performing hardware-in-the-loop (HIL) environment pre-detection on the tested visual perception product, taking the automatic driving visual perception failure picture as original input, to obtain a pre-detection failure sample of the tested product;
and the formal detection module is used for, based on the trained failure picture expansion retrieval model, acquiring in a natural driving image database an automatic driving visual perception failure expansion picture matching the characteristics of the pre-detection failure sample of the tested product, and performing HIL environment formal detection on the tested visual perception product based on the automatic driving visual perception failure expansion picture.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the processor executes the program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201910955518.3A 2019-10-09 2019-10-09 Automatic driving visual perception test method and device for failure sample Active CN110796024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910955518.3A CN110796024B (en) 2019-10-09 2019-10-09 Automatic driving visual perception test method and device for failure sample

Publications (2)

Publication Number Publication Date
CN110796024A true CN110796024A (en) 2020-02-14
CN110796024B CN110796024B (en) 2022-07-29

Family

ID=69440083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910955518.3A Active CN110796024B (en) 2019-10-09 2019-10-09 Automatic driving visual perception test method and device for failure sample

Country Status (1)

Country Link
CN (1) CN110796024B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030093187A1 (en) * 2001-10-01 2003-05-15 Kline & Walker, Llc PFN/TRAC systemTM FAA upgrades for accountable remote and robotics control to stop the unauthorized use of aircraft and to improve equipment management and public safety in transportation
CN104143128A (en) * 2014-07-21 2014-11-12 快威科技集团有限公司 Information system security evaluation index development method and device
CN108694273A (en) * 2017-04-12 2018-10-23 三星电子株式会社 Circuit yield analysis method and system for assessing rare failure event
CN109263826A (en) * 2018-08-30 2019-01-25 武汉理工大学 Ship Intelligent Collision Avoidance system and method based on maneuverability modeling
US20190063881A1 (en) * 2017-08-25 2019-02-28 Aurora Flight Sciences Corporation Aerial Vehicle Interception System
CN109537383A (en) * 2018-11-06 2019-03-29 深圳市路桥建设集团有限公司 A kind of unmanned road structure based on complicated traffic environment
US20190228118A1 (en) * 2018-01-24 2019-07-25 Toyota Research Institute, Inc. Systems and methods for identifying human-based perception techniques
WO2019161076A1 (en) * 2018-02-19 2019-08-22 Digital Global Systems, Inc. Systems, methods, and devices for unmanned vehicle detection and threat management

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAVID CRABB等: "Glaucoma and fitness to drive:Using binocular visual fields to predict a milestone to blindness", 《INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE》 *
贾立山: "体现驾驶员特性的车道偏离预警系统关键技术研究", 《中国优秀博硕士学位论文全文数据库(博士)工程科技Ⅱ辑》 *
韩飞: "虚拟环境下基于视觉传感的汽车换道辅助系统研究", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技Ⅱ辑》 *

Also Published As

Publication number Publication date
CN110796024B (en) 2022-07-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant