CN114879182A - Unmanned scene diagnosis method, electronic device and storage medium - Google Patents
- Publication number
- CN114879182A (application CN202210466396.3A)
- Authority
- CN
- China
- Prior art keywords
- scene
- data
- target
- diagnosis
- unmanned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/867—Combination of radar systems with cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/91—Radar or analogous systems specially adapted for specific applications for traffic control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
- G06F16/2477—Temporal data queries
Abstract
The invention discloses an unmanned-driving scene diagnosis method, an electronic device and a storage medium. The method comprises: S1, determining the scenes to be diagnosed, setting scene parameters, and creating a corresponding configuration file; S2, determining the data required to diagnose the scenes; S3, determining the diagnosis conditions corresponding to the scene parameters; S4, judging whether the data meet the diagnosis conditions and, if so, activating diagnosis of the corresponding scene, generating a diagnosis result and executing the next step; and S5, storing the diagnosis result. The method identifies the special scenes of interest to the fusion module and outputs an identification result; according to that result, the data recording module files the recorded data under a per-scene directory, so that after a road test developers can pull the data for a given scene directly from its directory. This makes it easier to analyze and solve problems and improves development efficiency.
Description
Technical Field
The invention belongs to the technical field of intelligent unmanned driving, and in particular relates to an unmanned-driving scene diagnosis method, an electronic device and a storage medium.
Background
With the development of technologies such as sensors, artificial intelligence, communication, navigation and positioning, pattern recognition, machine vision and intelligent control, the key technologies of unmanned vehicles, such as environment perception, navigation and positioning, path planning and decision control, have also developed rapidly. Among these, environment perception is the foundation: good environment perception provides strong support for upper-level vehicle control.
In environment perception, the surroundings are sensed by sensors such as cameras, millimeter-wave radar and lidar, and a fusion module then analyzes and processes this information to obtain the most reliable description of the environment. Specifically, the targets identified by the individual sensors are fused and the result is output to the back end. During fusion processing, unexpected phenomena sometimes occur in certain special scenes, causing the fused target information output to the back end to be abnormal and affecting decision control.
In current development practice, a developer judges during the road test, using an analysis tool and visual inspection, whether an abnormality has occurred; if so, the scene type at that time point is recorded, and the data at the corresponding moment are copied out for analysis.
Disclosure of Invention
To solve the above problems, the invention provides an unmanned-driving scene diagnosis method, an electronic device and a storage medium, so that developers can analyze and locate problems more conveniently.
To solve the technical problem, the technical solution adopted by the invention is as follows: an unmanned-driving scene diagnosis method comprising the following steps,
S1, determining the scenes to be diagnosed, setting scene parameters, and creating a corresponding configuration file;
S2, determining the data required to diagnose the scenes;
S3, determining the diagnosis conditions corresponding to the scene parameters;
S4, judging whether the data meet the diagnosis conditions and, if so, activating diagnosis of the corresponding scene, generating a diagnosis result and executing the next step;
and S5, storing the diagnosis result.
As an optimization, the scenes requiring diagnosis include,
scene a: the targets identified by the radar and the camera within a preset range are inconsistent;
scene b: the heading angle of a target within a preset range changes by more than a set threshold;
scene c: a target within a preset range changes lane (cuts in or out);
scene d: the curvature of the lane line exceeds a set threshold (i.e. the curvature radius falls below a set value);
scene e: a target is first detected within a preset angle range and the initial detection distance exceeds a preset range;
scene f: a target is first detected within a preset angle range, the radar and camera detections are inconsistent, and the initial detection distance exceeds a preset range;
scene g: a target is identified only by the camera and the confidence is below a preset value;
scene h: the camera fails to identify a target;
and scene i: after the targets identified by the radar and the camera are fused, the absolute value of the speed difference between the fused target and the camera-identified target exceeds a set threshold.
As an optimization, the configuration file comprises the scene name, the scene ID, and whether the scene is active.
As an optimization, the data comprise camera target data, radar target data, fused target data and lane-line data, the data being obtained by calling the get interface function of the data center.
Based on the above method, the invention also provides an electronic device comprising,
a memory for storing an unmanned-driving scene diagnosis program;
and a processor which, when executing the unmanned-driving scene diagnosis program, implements the unmanned-driving scene diagnosis method.
Based on the above method, the invention further provides a storage medium storing one or more programs which, when executed by a processor, perform the steps of the unmanned-driving scene diagnosis method.
Compared with the prior art, the invention has the following advantages:
the method comprises the steps of determining scenes needing to be diagnosed, and distributing corresponding scene Id (scene identification) for the scenes; determining which data need to be used for diagnosing the above scenes; determining a detection standard; according to different characteristics of each scene, after corresponding data are obtained from the data center module, appropriate logic judgment is made, and a scene diagnosis result is updated to the data center; the data recording module acquires a scene diagnosis result from the data center, and if a certain scene is triggered after being inquired, the data recording module names the data by a timestamp and a scene id after the data recording is finished, and stores the data into a corresponding scene path. Developers can conveniently obtain abnormal condition data from the data storage path corresponding to each scene, and the abnormal condition data are analyzed and solved aiming at the problems, so that the development efficiency is improved.
The method judges and identifies the special scenes of interest to the fusion module and outputs an identification result. According to that result, the data recording module files the recorded data into a per-scene directory, so that after the road test developers can extract the data of a given scene directly from its directory, which makes it easier to analyze and solve problems and improves development efficiency.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and an embodiment.
Embodiment: referring to FIG. 1,
an unmanned scene diagnosis method includes the following steps,
and S1, determining the scene needing diagnosis, setting scene parameters and setting a corresponding configuration file. The configuration file comprises a scene name, a scene ID and whether the scene is activated or not. Specifically, scenes needing attention or diagnosis are determined in advance, scene parameters are set for the scenes, ids are allocated for the scenes to identify the scenes, and json configuration files are configured, wherein the configuration of each scene by the configuration files comprises the following steps:
scene name (sceneName): a keyword describing the scene;
scene ID (sceneId): uniquely identifies each scene;
whether the scene is active (sceneActive): true/false, determining whether the scene diagnosis module diagnoses this scene.
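The per-scene configuration described above can be sketched as a small JSON file. This is an illustrative example, not the patent's actual file: the field names follow the keys named in the text (sceneName, sceneId, and the activation flag), while the concrete scene entries and the loader function are assumptions.

```python
import json

# Hypothetical per-scene JSON configuration; entries are illustrative.
SCENE_CONFIG = """
[
  {"sceneName": "radar_only_target", "sceneId": 1, "sceneActive": true},
  {"sceneName": "heading_jump",      "sceneId": 2, "sceneActive": false}
]
"""

def load_active_scenes(text: str) -> dict:
    """Return {sceneId: sceneName} for scenes the diagnosis module should run."""
    return {s["sceneId"]: s["sceneName"]
            for s in json.loads(text) if s["sceneActive"]}

print(load_active_scenes(SCENE_CONFIG))  # {1: 'radar_only_target'}
```

Only active scenes are handed to the diagnosis module; inactive ones are skipped without touching their diagnosis logic.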
The scenes requiring diagnosis include,
the radar in the scenes a and 80m has a stable moving target of the lane, but the vision or the camera does not recognize the target in five continuous frames;
the course angle of the boundary frames in the scene b and 80m is adjacent to that of two frames, and the two frames have large-amplitude transformation (such as 3 deg);
scene c, CIPV cut in/out in 80 m;
the curvature radius of the lane lines detected by the scene d and Vision is less than 250 m;
scene e, the target is detected within +/-10 degrees, and the initial detection distance is beyond 30 m;
scene f, the target is detected within +/-10 degrees, no radar target exists, and the initial detection distance is beyond 30 m;
scene g, only vision is associated to the target and confidence < 40;
the scene h and the vision target are not newly built for 5 continuous frames;
scene i, radar and vision target match, but the absolute value of the speed difference between the fusion target and the vision only target exceeds 20% and is greater than 2 m/s.
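The concrete thresholds listed for scenes a-i can be collected in one parameter table. The numeric values below come from the text; the dictionary layout and key names are assumptions for illustration.

```python
# Illustrative parameter table for scenes a-i; values are from the text,
# the structure and key names are assumptions.
SCENE_THRESHOLDS = {
    "a": {"range_m": 80, "miss_frames": 5},           # radar target, no vision, 5 frames
    "b": {"range_m": 80, "heading_jump_deg": 3},      # bounding-box heading jump
    "c": {"range_m": 80},                             # CIPV cut-in/out
    "d": {"min_curve_radius_m": 250},                 # vision lane-line curvature radius
    "e": {"angle_deg": 10, "first_detect_m": 30},     # first-detection distance check
    "f": {"angle_deg": 10, "first_detect_m": 30},     # same check, with no radar target
    "g": {"max_confidence": 40},                      # vision-only, low confidence
    "h": {"miss_frames": 5},                          # vision target not created
    "i": {"speed_diff_pct": 20, "speed_diff_mps": 2}, # fusion vs vision speed gap
}
print(len(SCENE_THRESHOLDS))  # 9
```

Keeping all thresholds in one structure makes the road-test calibration step (adjusting the parameters per scene) a data change rather than a code change.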
And S2, determining data required for diagnosing the scene. Specifically, it is determined which sensor data are needed for diagnosing the above scenes, wherein FC (front camera) targets, FR (front millimeter wave radar), FUS (fusion), FC lane line data, and the like are involved. The data used include:
fusion_abobj: fused target data after fusion but before target selection; besides the fusion result, it contains the target data of each sensor associated with the target;
fusion_obj: fused target data after fusion and target selection; it differs from fusion_abobj in that, after screening, the targets in fusion_obj are no more than those in fusion_abobj;
fcPrData_lane: FC lane-line data;
fcPrData_obj: FC target data;
frPrData_obj: FR target data.
The above data can be obtained by calling the get interface function of the data center.
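The data center's get interface is only named, not specified, in the text. The following is a minimal sketch of what such a put/get store could look like; the class and method names are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the data center named in the text; the patent does
# not specify this API.
@dataclass
class DataCenter:
    _store: dict = field(default_factory=dict)

    def put(self, key: str, value):
        """Publish the latest frame of a named channel."""
        self._store[key] = value

    def get(self, key: str):
        """Return the latest frame of the named channel, e.g. 'fusion_obj'."""
        return self._store.get(key)

dc = DataCenter()
dc.put("fcPrData_lane", {"curve_radius_m": 310.0})
print(dc.get("fcPrData_lane"))  # {'curve_radius_m': 310.0}
```

The diagnosis module reads its inputs through get, and later writes its results back through put, so producers and consumers stay decoupled.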
S3, determine the diagnosis conditions, i.e. the detection criteria, corresponding to the scene parameters. Specifically, some scenes are judged to have actually occurred only when the diagnosis triggers for several consecutive frames; for example, scene a is considered triggered only if the radar detects an ego-lane target and the camera fails to detect it for five consecutive frames. Each scene has different characteristics, and some scenes are considered to have occurred as soon as the diagnosis triggers once, such as scene b. For anti-jitter, if a scene has been continuously in the triggered state and a single non-triggered frame appears in the middle, the scene is still considered triggered; the criterion for confirming that a scene has disappeared therefore differs from the trigger criterion. Accordingly, the detection criterion differs per scene and is expressed as detection parameters (the numbers of trigger and disappearance frames), which need to be calibrated against road tests so that the judgment is more accurate.
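The trigger/disappearance frame counting with anti-jitter described in S3 can be sketched as a small debouncer. The frame counts are the per-scene detection parameters that the text says must be calibrated by road testing; the class itself is an assumption.

```python
# Hedged sketch of the anti-jitter trigger/disappearance counting in S3.
# trigger_frames and clear_frames are the per-scene detection parameters.
class SceneDebouncer:
    def __init__(self, trigger_frames: int = 5, clear_frames: int = 3):
        self.trigger_frames = trigger_frames
        self.clear_frames = clear_frames
        self._hits = 0
        self._misses = 0
        self.active = False

    def update(self, condition_met: bool) -> bool:
        if condition_met:
            self._hits += 1
            self._misses = 0
            if self._hits >= self.trigger_frames:
                self.active = True
        else:
            self._misses += 1
            self._hits = 0
            # a single non-trigger frame does not clear an active scene
            if self._misses >= self.clear_frames:
                self.active = False
        return self.active

deb = SceneDebouncer(trigger_frames=5, clear_frames=3)
for _ in range(5):
    deb.update(True)
print(deb.active)  # True after five consecutive trigger frames
```

A scene that needs only one trigger frame (like scene b) is the special case trigger_frames=1.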
S4, judge whether the data meet the diagnosis conditions; if so, activate the diagnosis of the corresponding scene, generate a diagnosis result, and execute the next step. Specifically, for each scene, according to its characteristics, obtain the corresponding data from the data center, make the appropriate logical judgment, and update the scene diagnosis result to the data center. The scene diagnosis process is as follows:
as scene a: the radar within 80m has a stable moving target of the lane, but five consecutive frames of vision are not identified; it takes 3 judgment conditions to obtain the diagnosis result, as follows:
longitudinal distance state: the target longitudinal distance is less than 80 m;
FR own lane target status: the area ID of the associated FR is 1;
FC status: finding no FC target in the fusion associated targets;
if the above 3 conditions are satisfied, the scene 1 is regarded as one trigger, and if the trigger is detected in 5 continuous frames, the scene 1 is regarded as occurring, and the corresponding mark position is regarded as 1.
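The three per-frame conditions for scene a can be sketched as a single predicate. The field names on the fused-target record are assumptions; the text names only the checks themselves.

```python
# Sketch of the three per-frame scene-a conditions; field names are assumed.
def scene_a_frame_trigger(fused_target: dict) -> bool:
    in_range = fused_target["longitudinal_m"] < 80      # longitudinal distance state
    own_lane = fused_target.get("fr_area_id") == 1      # FR ego-lane target state
    no_fc    = fused_target.get("fc_id") is None        # no FC target associated
    return in_range and own_lane and no_fc

print(scene_a_frame_trigger({"longitudinal_m": 45.0, "fr_area_id": 1}))  # True
```

This per-frame result is what the 5-consecutive-frame counting of S3 is applied to before the scene-a flag is finally set.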
Take scene b as another example: the heading angle of a bounding box within 80 m changes sharply (> 3 deg) between two adjacent frames. Two judgment conditions yield the diagnosis result, as follows:
heading angle change state: the difference between the heading angles of two consecutive frames is greater than 3 deg;
longitudinal distance state: the target's longitudinal distance is less than 80 m.
Once both conditions are met, the scene is considered to have occurred and the corresponding flag is set to 1.
The other scenes are diagnosed in the same way; the diagnosis result of each scene is stored in an array and the results are updated to the data center.
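The two scene-b conditions and the per-scene flag array can be sketched as follows; the function name and the array layout are illustrative assumptions.

```python
# Sketch of the two scene-b conditions and the per-scene result array.
def scene_b_frame_trigger(prev_heading_deg: float,
                          curr_heading_deg: float,
                          longitudinal_m: float) -> bool:
    heading_jump = abs(curr_heading_deg - prev_heading_deg) > 3.0
    in_range = longitudinal_m < 80.0
    return heading_jump and in_range

results = [0] * 9                      # one flag per scene a..i
if scene_b_frame_trigger(10.0, 15.5, 42.0):
    results[1] = 1                     # scene b occupies flag position 1
print(results)  # [0, 1, 0, 0, 0, 0, 0, 0, 0]
```

The array of flags is what gets written back to the data center each cycle for the recording module to query.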
S5, store the diagnosis result. The data recording module obtains the scene diagnosis results from the data center; if it finds that a scene has been triggered (activated), then once data recording finishes it names the data with a timestamp and the scene ID and stores them under the corresponding scene path, in a custom data format. Developers can conveniently obtain abnormal-case data from the storage path of each scene and analyze it with the corresponding tools, solving problems and improving development efficiency.
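The naming scheme in S5 (timestamp plus scene ID, stored under a per-scene path) can be sketched as follows; the directory layout, timestamp format and file extension are assumptions, since the text says only that the format is custom.

```python
import os
import time

# Sketch of the S5 recording step: name recorded data with a timestamp and
# scene ID under a per-scene directory. Layout and extension are assumed.
def scene_record_path(root: str, scene_id: int, ts: float) -> str:
    stamp = time.strftime("%Y%m%d_%H%M%S", time.localtime(ts))
    scene_dir = os.path.join(root, f"scene_{scene_id}")
    return os.path.join(scene_dir, f"{stamp}_scene{scene_id}.dat")

print(scene_record_path("/data/records", 3, 1651200000.0))
```

Because every file name carries both the time and the scene ID, a developer can locate an anomaly either by scene directory or by when it occurred during the road test.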
Based on the above method, the invention also provides an electronic device comprising,
a memory for storing an unmanned-driving scene diagnosis program;
and a processor which, when executing the unmanned-driving scene diagnosis program, implements the unmanned-driving scene diagnosis method.
Based on the above method, the invention further provides a storage medium storing one or more programs which, when executed by a processor, perform the steps of the unmanned-driving scene diagnosis method.
The method determines the scenes to be diagnosed and assigns each a scene ID (scene identifier); determines which data are needed to diagnose those scenes; determines the detection criteria (diagnosis conditions), under which a scene is considered to have actually been triggered only when it is detected for several consecutive frames, and considered to have actually disappeared only after it has been absent for several frames; for each scene, according to its characteristics, obtains the corresponding data from the data center module, makes the appropriate logical judgment, and updates the scene diagnosis result to the data center. The data recording module obtains the scene diagnosis results from the data center; if it finds that a scene has been triggered, then once data recording finishes it names the data with a timestamp and the scene ID and stores them under the corresponding scene path. Developers can conveniently obtain abnormal-case data from the storage path of each scene, analyze and solve the problems, and improve development efficiency.
The method judges and identifies the special scenes of interest to the fusion module and outputs an identification result. According to that result, the data recording module files the recorded data into a per-scene directory, so that after the road test developers can extract the data of a given scene directly from its directory, which makes it easier to analyze and solve problems and improves development efficiency.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the invention, not to limit it. Those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution of the invention without departing from its spirit and scope, all of which should be covered by the claims of the invention.
Claims (6)
1. An unmanned-driving scene diagnosis method, characterized by comprising the following steps,
S1, determining the scenes to be diagnosed, setting scene parameters, and creating a corresponding configuration file;
S2, determining the data required to diagnose the scenes;
S3, determining the diagnosis conditions corresponding to the scene parameters;
S4, judging whether the data meet the diagnosis conditions and, if so, activating diagnosis of the corresponding scene, generating a diagnosis result and executing the next step;
and S5, storing the diagnosis result.
2. The unmanned-driving scene diagnosis method of claim 1, wherein the scenes requiring diagnosis include,
scene a: the targets identified by the radar and the camera within a preset range are inconsistent;
scene b: the heading angle of a target within a preset range changes by more than a set threshold;
scene c: a target within a preset range changes lane (cuts in or out);
scene d: the curvature of the lane line exceeds a set threshold (i.e. the curvature radius falls below a set value);
scene e: a target is first detected within a preset angle range and the initial detection distance exceeds a preset range;
scene f: a target is first detected within a preset angle range, the radar and camera detections are inconsistent, and the initial detection distance exceeds a preset range;
scene g: a target is identified only by the camera and the confidence is below a preset value;
scene h: the camera fails to identify a target;
and scene i: after the targets identified by the radar and the camera are fused, the absolute value of the speed difference between the fused target and the camera-identified target exceeds a set threshold.
3. The unmanned-driving scene diagnosis method of claim 1, wherein the configuration file comprises the scene name, the scene ID, and whether the scene is active.
4. The unmanned-driving scene diagnosis method of claim 1, wherein the data comprise camera target data, radar target data, fused target data and lane-line data, the data being obtained by calling the get interface function of the data center.
5. An electronic device, characterized by comprising,
a memory for storing an unmanned-driving scene diagnosis program;
and a processor which, when executing the unmanned-driving scene diagnosis program, implements the unmanned-driving scene diagnosis method of any one of claims 1-4.
6. A storage medium storing one or more programs which, when executed by a processor, perform the steps of the unmanned-driving scene diagnosis method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210466396.3A CN114879182A (en) | 2022-04-29 | 2022-04-29 | Unmanned scene diagnosis method, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210466396.3A CN114879182A (en) | 2022-04-29 | 2022-04-29 | Unmanned scene diagnosis method, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114879182A true CN114879182A (en) | 2022-08-09 |
Family
ID=82673221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210466396.3A Pending CN114879182A (en) | 2022-04-29 | 2022-04-29 | Unmanned scene diagnosis method, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114879182A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115951660A (en) * | 2023-02-28 | 2023-04-11 | 中国第一汽车股份有限公司 | Vehicle diagnosis method and device, electronic equipment and storage medium |
- 2022-04-29: application CN202210466396.3A filed in CN; publication CN114879182A, status Pending
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115951660A (en) * | 2023-02-28 | 2023-04-11 | 中国第一汽车股份有限公司 | Vehicle diagnosis method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109212521B (en) | Target tracking method based on fusion of forward-looking camera and millimeter wave radar | |
CN106840242B (en) | Sensor self-checking system and multi-sensor fusion system of intelligent driving automobile | |
CN113119963B (en) | Intelligent ultrasonic system, vehicle rear collision warning device and control method thereof | |
US20220048536A1 (en) | Method and device for testing a driver assistance system | |
US20170080950A1 (en) | Method and device for operating a vehicle | |
CN109099920B (en) | Sensor target accurate positioning method based on multi-sensor association | |
CN108960083B (en) | Automatic driving target classification method and system based on multi-sensor information fusion | |
KR102168288B1 (en) | System and method for tracking multiple object using multi-LiDAR | |
US20200056893A1 (en) | Method and system for localizing a vehicle | |
WO2019208271A1 (en) | Electronic control device, and computation method | |
WO2023050586A1 (en) | Abnormality detection method and apparatus for positioning sensor, and terminal device | |
KR102592830B1 (en) | Apparatus and method for predicting sensor fusion target in vehicle and vehicle including the same | |
CN114879182A (en) | Unmanned scene diagnosis method, electronic device and storage medium | |
US20200223429A1 (en) | Vehicular control system | |
US11900691B2 (en) | Method for evaluating sensor data, including expanded object recognition | |
CN111947669A (en) | Method for using feature-based positioning maps for vehicles | |
KR101818535B1 (en) | System for predicting the possibility or impossibility of vehicle parking using support vector machine | |
CN115457353A (en) | Fusion method and device for multi-sensor data | |
CN114435401B (en) | Vacancy recognition method, vehicle, and readable storage medium | |
CN110745145A (en) | Multi-sensor management system for ADAS | |
US11654927B2 (en) | Method for monitoring a vehicle system for detecting an environment of a vehicle | |
CN109948656B (en) | Information processing method, device and storage medium | |
CN110333517B (en) | Obstacle sensing method, obstacle sensing device and storage medium | |
CN114037967A (en) | Fusion method and device of multi-source lane lines, vehicle and storage medium | |
GB2607299A (en) | Track fusion for an autonomous vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||