CN114743170A - Automatic driving scene labeling method based on AI algorithm - Google Patents

Automatic driving scene labeling method based on AI algorithm

Info

Publication number
CN114743170A
Authority
CN
China
Prior art keywords
scene
data
algorithm
labeling method
recognition
Prior art date
Legal status
Pending
Application number
CN202210435027.8A
Other languages
Chinese (zh)
Inventor
李开兴
郝金隆
张霞
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202210435027.8A
Publication of CN114743170A

Classifications

    • G06F18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/044 Neural networks: recurrent networks, e.g. Hopfield networks
    • G06N3/045 Neural networks: combinations of networks
    • G06N3/08 Neural networks: learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic driving scene labeling method based on an AI algorithm. The method attaches corresponding scene labels to the raw data collected during automatic driving, so that a user can filter the data by label, quickly obtaining rich, targeted scene data, improving data screening efficiency, and reducing the amount of data that must be manually annotated.

Description

Automatic driving scene labeling method based on AI algorithm
Technical Field
The invention relates to the technical field of artificial intelligence and automatic driving, in particular to an automatic driving scene labeling method based on an AI algorithm.
Background
L2-level automatic driving is already in mass production and L3-level automatic driving is entering the market in volume, so the requirements on automatic driving algorithms keep rising and large amounts of data must be processed; in particular, data covering diverse scenes is one of the core factors that determine algorithm performance. The richer the scene data, the better the algorithm adapts to different scenes. However, the amounts of data required for different scenes are severely unbalanced in the collected raw data. Algorithm development generally requires a balanced data set, so the data volumes of different scenes cannot differ too much. If the data are pre-screened before annotation, the annotation workload can be greatly reduced, but manual screening of massive raw data is laborious and time-consuming. Patent CN2018113066861 proposes a method for mining simulation scene data that mines data by comparing the characteristics of the simulation scene with the algorithm's output; patent CN2020110090940 proposes a difficult-data mining method that mines data by comparing the results of cloud-side and vehicle-side algorithms. However, these two methods have the following problems: 1. scene labels and scene data are not stored in correspondence, so fast retrieval is impossible; 2. neither method targets the training data required by a specific algorithm; 3. a baseline method is needed for comparison.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide an automatic driving scene labeling method based on an AI algorithm, so as to solve the problem that the prior art cannot perform targeted screening of raw data according to the data required by a scene.
In order to solve the technical problems, the invention adopts the following technical scheme:
an automatic driving scene labeling method based on an AI algorithm comprises the following steps:
s1: selecting a scene, and labeling data required by the scene according to different data required by different scenes;
s2: selecting a scene recognition algorithm applicable to the scene selected at S1;
s3: training the scene recognition algorithm selected in the step S2 according to the data marked in the step S1 to obtain an algorithm model for recognizing the scene;
s4: acquiring original data in the driving process of the vehicle, and identifying the original data by using the algorithm model of the scene obtained in S3 to obtain an identification result;
s5: screening the recognition result obtained in the step S4;
s6: performing data slicing on the recognition result screened in the S5 according to the type of the scene recognition algorithm selected in the S2;
s7: and according to the selected scene in the step S1, marking the corresponding scene label on the data slice obtained in the step S6.
Compared with the prior art, the invention has the following beneficial effects:
the method can screen the required data according to different scenes and different scene identification algorithms required by the different scenes, label the scene, and enable the user to screen the data according to the label, thereby quickly obtaining the enriched and targeted scene data, improving the screening efficiency of the data and reducing the data quantity required to be labeled.
Drawings
FIG. 1 is a flow chart of the method of the present invention as applied to raw data.
FIG. 2 is a flow chart of the method of the present invention as applied to sliced data.
FIG. 3 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The invention provides an automatic driving scene labeling method based on an AI algorithm, which comprises the following steps as shown in figure 1:
S1: select a scene and label the data it requires; different scenes require different data. The data that needs attention differs from one driving scene to another, so the data required by the chosen scene is labeled in advance.
S2: for the scene selected in S1, select a scene recognition algorithm applicable to that scene. There are many types of scene recognition algorithms, including but not limited to the CNN (convolutional neural network) type and the RNN (recurrent neural network) type.
S3: train the scene recognition algorithm selected in S2 with the data labeled in S1 to obtain an algorithm model for recognizing the scene.
S4: acquire raw data during vehicle driving and run the scene's algorithm model obtained in S3 on the raw data to obtain a recognition result. The method can apply scene labels both to image data and to the numerical data required by the scene.
S5: screen the recognition results obtained in S4. The purpose of screening is to remove results whose recognition is not reliable enough; for example, with some algorithms, data intervals with large fluctuation in the recognition output are detected and removed, improving recognition accuracy.
S6: slice the recognition result screened in S5 according to the type of scene recognition algorithm selected in S2. Slicing must follow a rule, and the rule is adjusted to the algorithm type: for example, when the algorithm selected in S2 is a CNN-type algorithm, its input is a single image; when it is an RNN-type algorithm, its input is a sequence of consecutive images.
S7: according to the scene selected in S1, attach the corresponding scene label to the data slices obtained in S6.
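For clarity, the S1 to S7 flow can be written as a small processing skeleton. The following Python sketch is purely illustrative: the function and parameter names (label_raw_data, predict, slice_fn, thr_conf), the tuple-based recognition result, and the default threshold value are assumptions, not anything prescribed by the patent.

```python
from typing import Callable, List, Sequence, Tuple

def label_raw_data(
    frames: Sequence,                                  # S4: raw frames collected during driving
    predict: Callable[[object], Tuple[int, float]],    # S3: trained model -> (class id, confidence)
    scene_label: str,                                  # S1: scene selected by the user
    slice_fn: Callable[[List[int]], List[List[int]]],  # S6: slicing rule for the chosen algorithm type
    thr_conf: float = 0.8,                             # S5: screening threshold (assumed value)
) -> List[Tuple[List[int], str]]:
    """Apply a trained scene recognizer to raw data and return labeled slices (S4-S7)."""
    # S4: run inference on every frame of the raw data
    results = [(i, *predict(f)) for i, f in enumerate(frames)]

    # S5: screen out recognitions that are not reliable enough
    kept = [i for i, _, conf in results if conf >= thr_conf]

    # S6: slice according to the algorithm type, e.g. one frame per slice for a
    # CNN-type recognizer, runs of consecutive frames for an RNN-type recognizer
    slices = slice_fn(kept)

    # S7: attach the scene label chosen in S1 to every retained slice
    return [(s, scene_label) for s in slices]

def single_frame_slices(indices: List[int]) -> List[List[int]]:
    # Example slicing rule for a CNN-type recognizer (single-image input)
    return [[i] for i in indices]
```

A call such as label_raw_data(frames, model_predict, "curve", single_frame_slices) would then return frame-index slices already tagged with the "curve" label.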
In a specific implementation, the method of the present invention can also label data that has already been sliced. As shown in FIG. 2, the sliced data is processed by the following steps:
(1) repeat steps S1-S3;
(2) run recognition on the sliced data to obtain a recognition result;
(3) according to the scene selected in S1, attach the corresponding scene label to the sliced data.
The method of the invention also comprises the following steps:
The slice data labeled with scene tags in S7 is taken as the raw data of S4, and steps S1-S7 are repeated; the accuracy of the scene labels attached to the data can thus be further improved through multiple iterations.
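As a rough illustration of this iteration, and reusing the hypothetical label_raw_data sketch above, the refinement loop might look as follows; the number of iterations and the way slices are fed back as raw data are assumptions.

```python
def iterate_labeling(frames, predict, scene_label, slice_fn, num_iterations=3):
    # Hypothetical refinement loop: keep only the frames that end up in a labeled
    # slice and re-run the S1-S7 pipeline on them, so the labels stabilize.
    current = list(range(len(frames)))   # indices of the frames still under consideration
    labeled = []
    for _ in range(num_iterations):
        labeled = label_raw_data([frames[i] for i in current], predict, scene_label, slice_fn)
        # Map slice indices (relative to the current subset) back to original frame indices
        labeled = [([current[j] for j in idxs], tag) for idxs, tag in labeled]
        current = [i for idxs, _ in labeled for i in idxs]
    return labeled
```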
The automatic driving scene labeling method based on the AI algorithm according to the present invention is further described through the following embodiment. As shown in FIG. 3, curve scene recognition is taken as an example, and the data are consecutive images obtained by splitting video into frames. The implementation steps are as follows (an illustrative code sketch of steps 4 to 9 is given after Step 9):
Step 1: label curve scene data.
Step 2: adopt CNN + LSTM as the scene recognition algorithm.
Step 3: train the CNN + LSTM algorithm with the labeled data to obtain a model for recognizing curves.
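The embodiment fixes only the high-level choice of CNN + LSTM; the concrete architecture is not specified. A minimal PyTorch sketch of such a curve recognizer is given below, where the resnet18 backbone, the hidden size and the two output classes (curve / not curve) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CurveSceneRecognizer(nn.Module):
    """CNN feature extractor followed by an LSTM over consecutive frames.
    The resnet18 backbone, hidden size and two classes (curve / not curve)
    are illustrative assumptions, not details taken from the patent."""

    def __init__(self, hidden_size: int = 256, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # keep the 512-d pooled CNN features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W), i.e. consecutive frames from video framing
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)                # temporal aggregation over the clip
        return self.head(out[:, -1])             # class logits for the whole clip
```

Training in Step 3 would then be an ordinary supervised loop, e.g. minimizing nn.CrossEntropyLoss between the clip logits and the curve / non-curve labels produced in Step 1.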
Step 4: run inference on the raw consecutive image data with the curve scene recognition model to obtain a recognition result containing, for each image, a class label id and the probability p that it is a curve.
Step 5: remove from the recognition result the images whose prediction confidence is lower than a threshold thr1; thr1 is set according to the actual situation.
Step 6: count the number of frames in each continuous interval of the recognition result belonging to the same class and remove intervals with fewer than thr2 frames; thr2 is set according to the actual situation.
Step 7: remove from the results the intervals with large recognition fluctuation, where fluctuation is measured by the standard deviation of the confidence probabilities over a continuous interval, computed, without loss of generality, as σ = sqrt((1/n)·Σ(p_i − p_avg)²), where p_i is the classification confidence probability of the i-th image in the interval, p_avg is the average confidence probability of the interval, and n is the number of images in the interval; intervals whose standard deviation exceeds a threshold thr3 are removed.
Step 8: slice each continuous interval into a piece of slice data.
Step 9: use the class label id of the continuous interval as the scene label of the slice data.
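Steps 4 to 9 amount to three successive filters over the per-frame predictions (thr1, thr2, thr3), followed by slicing and tagging. A minimal sketch is given below, assuming the recognition result is a list of (frame index, class label id, confidence p) tuples and that the thresholds and the id-to-scene mapping are chosen by the user; none of these representations are prescribed by the patent.

```python
from statistics import pstdev
from typing import Dict, List, Tuple

Frame = Tuple[int, int, float]   # (frame index, class label id, confidence p)

def filter_recognitions(results: List[Frame],
                        thr1: float, thr2: int, thr3: float) -> List[List[Frame]]:
    """Steps 5-7: screen per-frame predictions and return the surviving intervals."""
    # Step 5: drop frames whose prediction confidence is below thr1
    kept = [r for r in results if r[2] >= thr1]

    # Group the remaining frames into continuous intervals of the same class
    intervals: List[List[Frame]] = []
    run: List[Frame] = []
    for r in kept:
        if run and (r[0] != run[-1][0] + 1 or r[1] != run[-1][1]):
            intervals.append(run)
            run = []
        run.append(r)
    if run:
        intervals.append(run)

    # Step 6: drop intervals shorter than thr2 frames
    intervals = [iv for iv in intervals if len(iv) >= thr2]

    # Step 7: drop intervals whose confidence standard deviation exceeds thr3
    return [iv for iv in intervals if pstdev([p for _, _, p in iv]) <= thr3]

def tag_slices(intervals: List[List[Frame]],
               id_to_scene: Dict[int, str]) -> List[Tuple[List[int], str]]:
    """Steps 8-9: each surviving interval becomes one slice, tagged by its class label id."""
    return [([idx for idx, _, _ in iv], id_to_scene[iv[0][1]]) for iv in intervals]
```

For example, tag_slices(filter_recognitions(results, 0.8, 10, 0.1), {1: "curve", 0: "not_curve"}) would return curve-tagged frame-index slices; the threshold values and the id mapping are only placeholders.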
Further, if the data have already been sliced, the steps for tagging the sliced data with scene labels are as follows (a corresponding code sketch is given after Step 14):
Repeat steps 1-3.
Step 10: run inference on the sliced consecutive image data with the curve scene recognition model to obtain a recognition result containing, for each image, a class label id and the probability p that it is a curve.
Step 11: remove from the recognition result the images whose prediction confidence is lower than the threshold thr1; thr1 is set according to the actual situation.
Step 12: count the number of frames in each continuous interval of the recognition result belonging to the same class and remove intervals with fewer than thr2 frames; thr2 is set according to the actual situation.
Step 13: remove from the results the intervals with large recognition fluctuation, where fluctuation is measured by the standard deviation of the confidence probabilities over a continuous interval, computed as in Step 7; intervals whose standard deviation exceeds the threshold thr3 are removed.
Step 14: if the processed recognition result corresponding to a piece of slice data still contains a curve interval, attach the curve label to that piece of slice data.
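Reusing the hypothetical filter_recognitions helper from the sketch above, steps 10 to 14 for already-sliced data reduce to re-running the model over the slice and checking whether any surviving interval is still classified as a curve; the curve class id used here is an assumption.

```python
def retag_slice(slice_frames, predict, thr1, thr2, thr3, curve_id=1):
    # Steps 10-13: re-run the curve recognizer over the slice and apply the same filters
    results = [(i, *predict(f)) for i, f in enumerate(slice_frames)]
    intervals = filter_recognitions(results, thr1, thr2, thr3)
    # Step 14: keep the curve label only if some interval still belongs to the curve class
    return "curve" if any(iv[0][1] == curve_id for iv in intervals) else None
```

A slice for which retag_slice returns None simply keeps no curve label.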
The method screens the required data according to the selected scene and the scene recognition algorithm suited to it, attaches scene labels to the data, and lets the user filter data by label, so that rich, targeted scene data is obtained quickly, data screening efficiency is improved, and the amount of data that needs to be annotated is reduced.
As mentioned above, the system of the present invention is not limited to the configuration described; other systems capable of implementing the embodiments of the present invention also fall within its protection scope.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications shall be covered by the claims of the present invention.

Claims (7)

1. An automatic driving scene labeling method based on an AI algorithm is characterized by comprising the following steps:
s1: selecting a scene, and labeling data required by the scene according to different data required by different scenes;
s2: selecting a scene recognition algorithm applicable to the scene selected at S1;
s3: training the scene recognition algorithm selected in the step S2 according to the data marked in the step S1 to obtain an algorithm model for recognizing the scene;
s4: acquiring original data in the driving process of the vehicle, and identifying the original data by using the algorithm model of the scene obtained in S3 to obtain an identification result;
s5: screening the recognition result obtained in the step S4;
s6: performing data slicing on the recognition result screened in the S5 according to the type of the scene recognition algorithm selected in the S2;
s7: and according to the selected scene in the step S1, marking the corresponding scene label on the data slice obtained in the step S6.
2. The AI algorithm-based auto-driving scene labeling method of claim 1, wherein the slicing-completed data is processed by the following steps:
(1) repeating the steps S1-S3;
(2) carrying out identification processing on the sliced data to obtain an identification result;
(3) according to the selected scene of S1, the data of which the slicing has been completed is labeled with a corresponding scene label.
3. The AI algorithm-based automatic driving scenario labeling method of claim 1, further comprising the steps of:
and taking the slice data marked with the scene label in the S7 as the original data in the S4, and then repeating the steps S1-S7.
4. The AI algorithm-based auto-driving scene tagging method of claim 1, wherein at S4, said raw data includes, but is not limited to, picture data and numerical data.
5. The AI algorithm-based auto-driving scene labeling method of claim 1, wherein in S2, the scene recognition algorithm comprises a CNN convolutional neural network or an RNN recurrent neural network.
6. The AI-algorithm-based auto-driving scene labeling method according to claim 5, wherein in S6, when the scene recognition algorithm selected in S2 is the CNN-type algorithm, the input image is a single image.
7. The AI-algorithm-based automatic driving scene labeling method according to claim 5, wherein in S6, when the scene recognition algorithm selected in S2 is the RNN-type algorithm, the input images are a plurality of consecutive images.
CN202210435027.8A 2022-04-24 2022-04-24 Automatic driving scene labeling method based on AI algorithm Pending CN114743170A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210435027.8A CN114743170A (en) 2022-04-24 2022-04-24 Automatic driving scene labeling method based on AI algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210435027.8A CN114743170A (en) 2022-04-24 2022-04-24 Automatic driving scene labeling method based on AI algorithm

Publications (1)

Publication Number Publication Date
CN114743170A true CN114743170A (en) 2022-07-12

Family

ID=82284660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210435027.8A Pending CN114743170A (en) 2022-04-24 2022-04-24 Automatic driving scene labeling method based on AI algorithm

Country Status (1)

Country Link
CN (1) CN114743170A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439957A (en) * 2022-09-14 2022-12-06 上汽大众汽车有限公司 Intelligent driving data acquisition method, acquisition device, acquisition equipment and computer readable storage medium
CN115439957B (en) * 2022-09-14 2023-12-08 上汽大众汽车有限公司 Intelligent driving data acquisition method, acquisition device, acquisition equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination