CN111143423A - Dynamic scene labeling data mining method and device and terminal - Google Patents


Info

Publication number
CN111143423A
Authority
CN
China
Prior art keywords
data
behavior
scene
dynamic
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811306640.XA
Other languages
Chinese (zh)
Other versions
CN111143423B (en)
Inventor
李皓
王智杰
毛继明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811306640.XA priority Critical patent/CN111143423B/en
Publication of CN111143423A publication Critical patent/CN111143423A/en
Application granted granted Critical
Publication of CN111143423B publication Critical patent/CN111143423B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention provides a dynamic scene labeling data mining method, device and terminal. The method comprises the following steps: acquiring the operation result of an automatic driving algorithm, and recording dynamic behavior features according to that operation result; querying a scene labeling database for the scene labeling data corresponding to the dynamic behavior features; and extracting the queried scene labeling data as dynamic scene labeling data. The method can promptly monitor changes in dynamic behavior caused by changes to the automatic driving algorithm, and label the dynamic scene labeling data obtained after the algorithm runs, so that the operation effect of the automatic driving algorithm can be optimized, analyzed and evaluated according to the labeled dynamic scene labeling data.

Description

Dynamic scene labeling data mining method and device and terminal
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a dynamic scene labeling data mining method, device and terminal.
Background
An autonomous vehicle, also called a driverless car, computer-driven car or wheeled mobile robot, is an intelligent vehicle that drives itself by means of a computer system. The development of an automatic driving system proceeds from simulation to real-vehicle testing, and a large number of simulation scenes are needed to verify and analyze the driverless-driving algorithms before real-vehicle tests are carried out. Building simulation scenes depends on a large amount of scene data, which includes directly acquired real environment data, designed virtual environment data, environment data combining the virtual and the real, and so on. Part of the scene data does not change when the automatic driving algorithm or the perception module changes; this is static scene data. The other part changes along with the automatic driving algorithm and the perception module, and is called dynamic scene labeling data. For example, before the automatic driving algorithm is changed, the host vehicle drives straight and risks colliding with an obstacle; after the algorithm is changed, the host vehicle turns right and the collision risk disappears.
A large amount of scene data is labeled along the time dimension to construct numerous simulation scenes, which are then used for simulation analysis of the driverless-driving algorithms. Labeled objects include, but are not limited to, map elements, host vehicle behavior, areas, weather, obstacle vehicle behavior, and interactions between the host vehicle and obstacles. At present, however, scene data labeling cannot clearly distinguish whether static scene data or dynamic scene labeling data is being labeled, so a user cannot optimize, analyze or evaluate the operation effect of the automatic driving algorithm according to labeled dynamic scene labeling data.
Disclosure of Invention
The embodiments of the invention provide a dynamic scene labeling data mining method, device and terminal, which at least solve the above technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a method for mining dynamic scene labeling data, including:
obtaining an operation result of an automatic driving algorithm;
recording dynamic behavior characteristics according to the running result of the automatic driving algorithm;
querying a scene labeling database for scene labeling data corresponding to the dynamic behavior features;
and extracting the queried scene labeling data as dynamic scene labeling data.
In one embodiment, obtaining the results of the operation of the autonomous driving algorithm includes:
acquiring original scene data;
and processing the original scene data with a multi-sensor fusion algorithm to generate obstacle behavior data.
In one embodiment, obtaining the operation result of the automatic driving algorithm further comprises:
processing the original scene data by using a decision algorithm and a planning algorithm respectively to generate a decision suggestion and a planning suggestion;
and processing the obstacle behavior data, the decision suggestion and the planning suggestion by using a control algorithm, and outputting the behavior data of the main vehicle and the interaction behavior data of the obstacle and the main vehicle.
In one embodiment, recording dynamic behavior characteristics based on results of the operation of the autonomous driving algorithm includes:
recording obstacle behavior characteristics according to the obstacle behavior data;
and recording the behavior characteristics of the main vehicle according to the behavior data of the main vehicle, and recording the behavior characteristics of the interaction of the obstacle and the main vehicle according to the behavior data of the interaction of the obstacle and the main vehicle.
In one embodiment, before querying scene labeling data corresponding to the dynamic behavior feature in a scene labeling database, the method further includes:
and performing similarity analysis among the multiple dynamic behavior features, classifying them according to the similarity analysis result, and querying according to the classification result.
In a second aspect, an embodiment of the present invention provides a dynamic scene labeling data mining device, including:
the dynamic behavior feature recording module is used for acquiring the operation result of the automatic driving algorithm and recording dynamic behavior features according to that operation result;
the dynamic behavior feature query module is used for querying a scene labeling database for scene labeling data corresponding to the dynamic behavior features;
and the dynamic behavior feature labeling module is used for labeling the queried scene labeling data with the dynamic behavior features to obtain dynamic scene labeling data.
In one embodiment, the dynamic behavior feature recording module includes:
and the barrier behavior data acquisition unit is used for acquiring original scene data, and processing the original scene data by using a multi-sensor fusion algorithm to generate barrier behavior data.
In one embodiment, the dynamic behavior feature recording module further comprises:
the behavior suggestion unit is used for respectively processing the original scene data by utilizing a decision algorithm and a planning algorithm to generate a decision suggestion and a planning suggestion;
and the host vehicle behavior data acquisition unit is used for processing the obstacle behavior data, the decision suggestion and the planning suggestion with a control algorithm, and outputting host vehicle behavior data and behavior data of the interaction between the obstacle and the host vehicle.
In one embodiment, the dynamic behavior feature recording module further comprises:
the characteristic recording unit is used for recording the behavior characteristics of the obstacles according to the behavior data of the obstacles, recording the behavior characteristics of the main vehicle according to the behavior data of the main vehicle, and recording the behavior characteristics of the interaction between the obstacles and the main vehicle according to the behavior data of the interaction between the obstacles and the main vehicle.
In a third aspect, an embodiment of the present invention provides a dynamic scene labeling data mining terminal, where the function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In a possible design, the structure of the dynamic scene labeling data mining terminal includes a processor and a memory, the memory is used for storing a program for supporting the dynamic scene labeling data mining terminal to execute the dynamic scene labeling data mining method in the first aspect, and the processor is configured to execute the program stored in the memory. The dynamic scene labeling data mining terminal can also comprise a communication interface, and the communication interface is used for communicating the dynamic scene labeling data mining terminal with other equipment or a communication network.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium for storing computer software instructions for a dynamic scene labeling data mining device, the instructions including a program for executing the dynamic scene labeling data mining method of the first aspect.
One of the above technical solutions has the following advantages or beneficial effects: the method can promptly monitor changes in dynamic behavior caused by changes to the automatic driving algorithm, and label the dynamic scene labeling data obtained after the algorithm runs, so that the operation effect of the automatic driving algorithm can be optimized, analyzed and evaluated according to the labeled dynamic scene labeling data. The method can completely represent the features of a given operation result, flexibly provide the minimum set of data available to a user, improve regression verification performance, and save resources.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a flowchart of a dynamic scene annotation data mining method according to an embodiment of the present invention;
fig. 2 is a flowchart of another dynamic scene labeling data mining method according to an embodiment of the present invention;
fig. 3 is a flowchart of another dynamic scene labeling data mining method according to an embodiment of the present invention;
fig. 4 is a flowchart of another dynamic scene labeling data mining method according to an embodiment of the present invention;
FIG. 5 is a flowchart of another dynamic scene annotation data mining method according to an embodiment of the present invention;
fig. 6 is a block diagram of a dynamic scene labeling data mining device according to an embodiment of the present invention;
fig. 7 is a block diagram of another dynamic scene labeling data mining device according to an embodiment of the present invention;
fig. 8 is a block diagram of another dynamic scene labeling data mining device according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a dynamic scene labeling data mining terminal according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Example one
In a specific embodiment, as shown in fig. 1, a flowchart of a dynamic scene labeling data mining method is provided, where the method includes:
step S10: and acquiring the running result of the automatic driving algorithm, and recording the dynamic behavior characteristics according to the running result of the automatic driving algorithm.
Step S20: and recording dynamic behavior characteristics according to the running result of the automatic driving algorithm.
Step S30: and inquiring scene marking data corresponding to the dynamic behavior characteristics in a scene marking database.
Step S40: and extracting the inquired scene labeling data as dynamic scene labeling data.
Before and after the automatic driving algorithm changes, its operation result changes; mining the dynamic scene labeling data in the changed operation result makes it possible to analyze and evaluate the operation of the automatic driving algorithm. First, the labeling algorithm monitors the operation result of the host vehicle's automatic driving algorithm in real time, which includes tracking and scanning various behavior data such as behavior logs. Dynamic behavior features in the operation result are recorded along the time dimension, including host vehicle behavior, obstacle behavior, the interaction between host vehicle and obstacle behaviors, road conditions, and the like. The original scene data collected by the host vehicle is labeled to build a scene labeling database. The method can completely represent the features of a given operation result, flexibly provide the minimum set of data available to a user, improve regression verification performance, and save resources. The scene labeling database is then queried for scene labeling data corresponding to the dynamic behavior features, and the queried scene labeling data is labeled with those dynamic behavior features to obtain dynamic scene labeling data, so that the operation effect of the automatic driving algorithm can be optimized, analyzed and evaluated according to the labeled dynamic scene labeling data.
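The flow of steps S10 to S40 can be sketched in code. This is a minimal illustration only; the record fields (`actor`, `behavior`, the timestamp keys) and the in-memory list standing in for the scene labeling database are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class BehaviorFeature:
    # Hypothetical record of one dynamic behavior feature from an algorithm run.
    actor: str       # "host", "obstacle", or "interaction"
    behavior: str    # e.g. "straight", "right_turn"
    start_ts: float
    end_ts: float

def mine_dynamic_scene_labels(run_result, scene_label_db):
    """Steps S10-S40: record features from the run result, query the scene
    labeling database, and extract the matches as dynamic scene labeling data."""
    # S10/S20: record dynamic behavior features from the operation result.
    features = [BehaviorFeature(**rec) for rec in run_result["behavior_log"]]
    dynamic_labels = []
    for f in features:
        # S30: query labels whose actor matches and whose time window
        # covers the feature's time window.
        matches = [lbl for lbl in scene_label_db
                   if lbl["actor"] == f.actor
                   and lbl["start_ts"] <= f.start_ts
                   and f.end_ts <= lbl["end_ts"]]
        # S40: extract the queried labels, tagged with the dynamic behavior.
        dynamic_labels.extend({**m, "dynamic_behavior": f.behavior}
                              for m in matches)
    return dynamic_labels
```

The same labeled scene entry can thus be extracted under different dynamic behaviors as the algorithm changes, which is what makes before/after comparison possible.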
In one embodiment, as shown in fig. 2, the step S10 of obtaining the operation result of the automatic driving algorithm includes:
step S101: original scene data is acquired.
Step S102: processing the original scene data with a multi-sensor fusion algorithm to generate obstacle behavior data.
While the host vehicle is running, video cameras, radar sensors, laser range finders and the like are used to capture the surrounding traffic conditions and obtain real original scene data. During verification of the multi-sensor fusion algorithm, the sensing module outputs the algorithm operation result. The operation result computed by the sensing module comprises multiple frames of scenes. Each frame of scene output by the sensing module includes obstacle behavior data and surrounding environment data, such as traffic light signal data. The obstacle behavior data may change as the algorithm changes. Therefore, only the obstacle behavior data output by the sensing module needs to be recorded, which facilitates analyzing how obstacle behavior changes before and after the algorithm change.
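Recording only the per-frame obstacle behavior, then diffing two recordings, might look like the following sketch. The frame schema (`ts`, `obstacles`, `id`, `behavior`) is illustrative, not taken from the patent.

```python
def record_obstacle_behaviors(perception_frames):
    """Record only the obstacle behavior data from each frame output by
    the sensing module (frame schema is illustrative)."""
    log = []
    for frame in perception_frames:
        for obs in frame["obstacles"]:
            log.append({"ts": frame["ts"],
                        "obstacle_id": obs["id"],
                        "behavior": obs["behavior"]})
    return log

def diff_obstacle_behaviors(log_before, log_after):
    # Compare each obstacle's recorded behavior before and after an
    # algorithm change; return (ts, obstacle_id) pairs whose behavior differs.
    before = {(e["ts"], e["obstacle_id"]): e["behavior"] for e in log_before}
    return [(e["ts"], e["obstacle_id"]) for e in log_after
            if before.get((e["ts"], e["obstacle_id"])) not in (None, e["behavior"])]
```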
In one embodiment, as shown in fig. 3, the step S10 of obtaining the operation result of the automatic driving algorithm further includes:
step S103: and respectively processing the original scene data by using a decision algorithm and a planning algorithm to generate a decision suggestion and a planning suggestion.
Step S104: and processing the obstacle behavior data, the decision suggestion and the planning suggestion by using a control algorithm, and outputting the behavior data of the main vehicle and the interaction behavior data of the obstacle and the main vehicle.
The decision algorithm judges the driving behavior of the vehicle according to the obstacle information, traffic sign information and lane line information in the simulation data. The planning algorithm plans the vehicle's driving path according to the host vehicle's driving behavior. The decision and planning algorithms generally do not by themselves cause changes in the operation result; therefore, the results produced by these algorithms may be left unlabeled. The control algorithm determines the control quantities required for driving according to the planned driving path and the host vehicle's positioning and heading information, and inputs these control quantities to the host vehicle to control its driving. The control algorithm processes the obstacle behavior data, the decision suggestion, the planning suggestion and the like, and outputs the host vehicle behavior data and the behavior data of the interaction between the obstacle and the host vehicle. The host vehicle behavior data and the interaction behavior data may change as the control algorithm changes. Therefore, in addition to the obstacle behavior data output by the sensing module, the host vehicle behavior data and the interaction behavior data should also be recorded, which facilitates analyzing how the behavior of the host vehicle, or its interaction with obstacles, changes before and after the algorithm change.
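The data flow of step S104 — control algorithm in, host behavior and interaction records out — can be sketched as follows. This is a hypothetical skeleton: the field names (`maneuver`, `path`, `host_maneuver`) and the one-record-per-obstacle interaction shape are assumptions for illustration.

```python
def run_control_step(obstacle_data, decision, plan):
    """Hypothetical control step: from the perception output and the
    decision/planning suggestions, produce host vehicle behavior data and
    host-obstacle interaction records. Field names are assumptions."""
    host_behavior = {"ts": plan["ts"],
                     "maneuver": decision["maneuver"],   # e.g. "right_turn"
                     "path": plan["path"]}
    # One interaction record per observed obstacle at this control step.
    interactions = [{"ts": obs["ts"],
                     "obstacle_id": obs["obstacle_id"],
                     "host_maneuver": decision["maneuver"],
                     "obstacle_behavior": obs["behavior"]}
                    for obs in obstacle_data]
    return host_behavior, interactions
```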
In one embodiment, as shown in fig. 4, step S20 includes:
step S201: and recording the behavior characteristics of the obstacles according to the behavior data of the obstacles, recording the behavior characteristics of the main vehicle according to the behavior data of the main vehicle, and recording the behavior characteristics of the obstacles interacting with the main vehicle according to the behavior data of the obstacles interacting with the main vehicle.
The operation result includes multiple frames of scenes arranged along the time dimension; that is, each frame of scene corresponds to a pair of timestamps comprising a start time and an end time, and to the data output by each module, for example obstacle behavior data and host vehicle behavior data. The start and end timestamps of a behavior can be determined from the different behavior tasks. For each frame of scene between the start timestamp and the end timestamp, the obstacle behavior features, host vehicle behavior features and obstacle-host-vehicle interaction behavior features corresponding to the behavior task are recorded.
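Windowing frames by a behavior task's timestamps can be sketched as below; the frame and task keys (`ts`, `start_ts`, `end_ts`, and the per-module behavior fields) are illustrative assumptions.

```python
def record_task_features(frames, task):
    """Collect per-frame behavior features whose frame timestamps fall
    inside the behavior task's [start, end] window (schema illustrative)."""
    return [{"ts": f["ts"],
             "obstacle": f.get("obstacle_behavior"),
             "host": f.get("host_behavior"),
             "interaction": f.get("interaction_behavior")}
            for f in frames
            if task["start_ts"] <= f["ts"] <= task["end_ts"]]
```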
In one embodiment, as shown in fig. 5, before step S30, the method further includes:
step S21: and performing similarity analysis among the plurality of dynamic behavior characteristics, classifying the plurality of dynamic behavior characteristics according to the similarity analysis result, and inquiring according to the classification result.
Similarity analysis of the dynamic behavior features yields the dynamic behavior features belonging to the same or similar behavior tasks, for example a lane-change task or a turn at a given intersection. The obstacle behavior features, host vehicle behavior features and obstacle-host-vehicle interaction behavior features are classified according to the behavior task type. When querying the scene labeling database, the data in the database may first be classified as well, for example into host vehicle scene labels, obstacle scene labels and so on. During the query, obstacle behavior features are looked up only within the obstacle scene label category, and host vehicle behavior features only within the host vehicle scene label category, which speeds up the query.
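Classifying first and then querying only the matching partition of the database can be sketched as below. This assumes the database has already been partitioned by category and that each feature carries `category` and `task` keys — both assumptions for illustration.

```python
from collections import defaultdict

def classify_then_query(features, label_db):
    """Classify dynamic behavior features by category first, then query
    only the matching partition of the scene labeling database, so each
    lookup scans a smaller search space. Key names are assumptions."""
    by_category = defaultdict(list)
    for f in features:                            # classification step
        by_category[f["category"]].append(f)
    hits = []
    for category, feats in by_category.items():
        candidates = label_db.get(category, [])   # query only this category
        for f in feats:
            hits.extend(lbl for lbl in candidates
                        if lbl["task"] == f["task"])
    return hits
```

Partitioning the database by category reduces each lookup from scanning all labels to scanning one category's labels, which is the speed-up the text describes.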
Of course, the dynamic behavior features may also be mapped into a corresponding feature space, in order to count and classify the various dynamic behavior features, and to analyze how the dynamic features change and how those changes are distributed.
Finally, the queried scene labeling data is labeled with the dynamic behavior features to obtain dynamic scene labeling data. Supplementing the scene labeling database with this dynamic scene labeling data enriches the database and enables higher-precision mining of dynamic scene labeling data.
Example two
In a specific embodiment, as shown in fig. 6, there is provided a dynamic scene labeling data mining device, including:
an operation result obtaining module 10, configured to obtain an operation result of an automatic driving algorithm;
the dynamic behavior characteristic recording module 20 is used for recording dynamic behavior characteristics according to the running result of the automatic driving algorithm;
a dynamic behavior feature query module 30, configured to query scene labeling data corresponding to the dynamic behavior feature in a scene labeling database;
and the dynamic scene labeling data extracting module 40 is configured to extract the queried scene labeling data as dynamic scene labeling data.
In one embodiment, as shown in fig. 7, the operation result obtaining module 10 includes:
and the obstacle behavior data acquisition unit 101 is configured to acquire original scene data, process the original scene data by using a multi-sensor fusion algorithm, and generate obstacle behavior data.
In one embodiment, the operation result obtaining module 10 further includes:
a behavior suggestion unit 102, configured to process the original scene data by using a decision algorithm and a planning algorithm, respectively, to generate a decision suggestion and a planning suggestion;
and the host vehicle behavior data acquisition unit 103 is used for processing the obstacle behavior data, the decision suggestion and the planning suggestion with a control algorithm, and outputting host vehicle behavior data and behavior data of the interaction between the obstacle and the host vehicle.
In one embodiment, as shown in fig. 7, the dynamic behavior feature recording module 20 includes:
the characteristic recording unit 201 is used for recording the behavior characteristics of the obstacles according to the behavior data of the obstacles, recording the behavior characteristics of the main vehicle according to the behavior data of the main vehicle, and recording the behavior characteristics of the interaction of the obstacles and the main vehicle according to the behavior data of the interaction of the obstacles and the main vehicle.
In one embodiment, as shown in fig. 8, the apparatus further comprises:
the dynamic behavior feature classification module 21 is configured to perform similarity analysis on the multiple dynamic behavior features, and classify the multiple dynamic behavior features according to a similarity analysis result, so as to perform query according to a classification result.
EXAMPLE III
An embodiment of the present invention provides a dynamic scene labeling data mining terminal, as shown in fig. 9, including:
a memory 400 and a processor 500, the memory 400 having stored therein a computer program operable on the processor 500. The processor 500 implements the dynamic scene labeling data mining method in the above embodiments when executing the computer program. The number of the memory 400 and the processor 500 may be one or more.
A communication interface 600 for the memory 400 and the processor 500 to communicate with the outside.
Memory 400 may comprise high-speed RAM memory and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 400, the processor 500, and the communication interface 600 are implemented independently, the memory 400, the processor 500, and the communication interface 600 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 400, the processor 500, and the communication interface 600 are integrated on a single chip, the memory 400, the processor 500, and the communication interface 600 may complete communication with each other through an internal interface.
Example four
A computer-readable storage medium storing a computer program which, when executed by a processor, implements a dynamic scene labeling data mining method as in any of embodiments one included herein.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. A dynamic scene labeling data mining method, characterized by comprising the following steps:
obtaining an operation result of an automatic driving algorithm;
recording dynamic behavior characteristics according to the operation result of the automatic driving algorithm;
querying a scene labeling database for scene labeling data corresponding to the dynamic behavior characteristics;
and extracting the queried scene labeling data as dynamic scene labeling data.
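The four steps of claim 1 can be sketched as a minimal mining flow. Everything concrete below is an illustrative assumption, not the claimed implementation: the feature strings, the plain dictionary standing in for the scene labeling database, and the shape of the run result are all hypothetical.

```python
# Minimal sketch of the claimed mining flow (claim 1). The feature keys
# and the dict standing in for the scene labeling database are hypothetical.

def record_dynamic_behavior_features(run_result):
    # Step 2: derive dynamic behavior features from the algorithm's run result.
    return [obs["behavior"] for obs in run_result["obstacles"]]

def mine_dynamic_scene_labels(run_result, scene_label_db):
    # Steps 3-4: query the database for each recorded feature and
    # extract the matching entries as dynamic scene labeling data.
    features = record_dynamic_behavior_features(run_result)
    return {f: scene_label_db[f] for f in features if f in scene_label_db}

# Toy scene labeling database mapping behavior features to labeled scenes.
scene_label_db = {"cut_in": ["scene_0007", "scene_0231"],
                  "sudden_brake": ["scene_0102"]}

run_result = {"obstacles": [{"behavior": "cut_in"},
                            {"behavior": "u_turn"}]}  # "u_turn" is unlabeled

mined = mine_dynamic_scene_labels(run_result, scene_label_db)
print(mined)  # only features present in the database are extracted
```

Only features that exist in the database yield labeling data; unmatched behaviors simply produce no entry, so the mined set tracks whatever the current algorithm version actually did.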
2. The method of claim 1, wherein obtaining results of the operation of the autonomous driving algorithm comprises:
acquiring original scene data;
and processing the original scene data by using a multi-sensor fusion algorithm to generate obstacle behavior data.
3. The method of claim 2, wherein obtaining results of the operation of the autonomous driving algorithm further comprises:
processing the original scene data by using a decision algorithm and a planning algorithm respectively to generate a decision suggestion and a planning suggestion;
and processing the obstacle behavior data, the decision suggestion and the planning suggestion by using a control algorithm, and outputting host vehicle behavior data and behavior data of the interaction between the obstacle and the host vehicle.
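Claims 2-3 describe a chain of algorithm stages. The stubs below sketch only the data flow between those stages (raw scene data into fusion, decision, and planning; control combining all three outputs); every function body is a placeholder assumption, not real perception or control code.

```python
# Data-flow sketch of claims 2-3; all stage implementations are stubs.

def multi_sensor_fusion(raw_scene_data):
    # Claim 2: fuse raw sensor data into obstacle behavior data.
    return {"obstacle_behaviors": raw_scene_data.get("detections", [])}

def decision_algorithm(raw_scene_data):
    return {"decision": "yield"}        # placeholder decision suggestion

def planning_algorithm(raw_scene_data):
    return {"plan": "slow_to_20kph"}    # placeholder planning suggestion

def control_algorithm(obstacles, decision, plan):
    # Claim 3: combine obstacle data, decision and planning suggestions;
    # output host vehicle behavior and obstacle-host interaction data.
    return {"host_behavior": plan["plan"],
            "interaction": [(b, decision["decision"])
                            for b in obstacles["obstacle_behaviors"]]}

raw = {"detections": ["cut_in"]}
out = control_algorithm(multi_sensor_fusion(raw),
                        decision_algorithm(raw),
                        planning_algorithm(raw))
print(out)
```

The point of the sketch is the fan-out/fan-in shape: the same raw scene data feeds three independent algorithms, and the control stage is the single place where their outputs meet before behavior features are recorded.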
4. The method of claim 3, wherein recording dynamic behavior characteristics based on results of the operation of the autonomous driving algorithm comprises:
recording obstacle behavior characteristics according to the obstacle behavior data;
recording host vehicle behavior characteristics according to the host vehicle behavior data;
and recording behavior characteristics of the interaction between the obstacle and the host vehicle according to the behavior data of the interaction between the obstacle and the host vehicle.
5. The method according to claim 1, wherein before querying the scene labeling database for the scene labeling data corresponding to the dynamic behavior characteristics, the method further comprises:
performing similarity analysis among a plurality of dynamic behavior characteristics, classifying the plurality of dynamic behavior characteristics according to the similarity analysis result, and querying according to the classification result.
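Claim 5 groups features by pairwise similarity before querying, but does not fix a metric or a clustering scheme. One common realization (an assumption here) is greedy clustering of feature vectors under a cosine-similarity threshold:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity of two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify_features(features, threshold=0.95):
    # Greedy clustering: each feature joins the first cluster whose
    # representative is similar enough, otherwise it starts a new cluster.
    clusters = []  # list of (representative, members) pairs
    for f in features:
        for rep, members in clusters:
            if cosine_similarity(rep, f) >= threshold:
                members.append(f)
                break
        else:
            clusters.append((f, [f]))
    return [members for _, members in clusters]

# Two near-identical behavior vectors and one orthogonal outlier.
feats = [(1.0, 0.0), (0.99, 0.05), (0.0, 1.0)]
groups = classify_features(feats)
print(len(groups))  # two behavior classes
```

Classifying first means the subsequent database query runs once per behavior class rather than once per raw feature, which is the efficiency the claim is after.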
6. A dynamic scene labeling data mining device is characterized by comprising:
the operation result acquisition module is used for acquiring the operation result of the automatic driving algorithm;
the dynamic behavior characteristic recording module is used for recording dynamic behavior characteristics according to the running result of the automatic driving algorithm;
the dynamic behavior characteristic query module is used for querying a scene labeling database for scene labeling data corresponding to the dynamic behavior characteristics;
and the dynamic scene labeling data extraction module is used for extracting the queried scene labeling data as dynamic scene labeling data.
7. The apparatus of claim 6, wherein the operation result obtaining module comprises:
and the obstacle behavior data acquisition unit is used for acquiring original scene data, and processing the original scene data by using a multi-sensor fusion algorithm to generate obstacle behavior data.
8. The apparatus of claim 7, wherein the operation result obtaining module further comprises:
the behavior suggestion unit is used for respectively processing the original scene data by utilizing a decision algorithm and a planning algorithm to generate a decision suggestion and a planning suggestion;
and the host vehicle behavior data acquisition unit is used for processing the obstacle behavior data, the decision suggestion and the planning suggestion by using a control algorithm, and outputting host vehicle behavior data and behavior data of the interaction between the obstacle and the host vehicle.
9. The apparatus of claim 8, wherein the dynamic behavior feature recording module comprises:
the characteristic recording unit is used for recording obstacle behavior characteristics according to the obstacle behavior data, recording host vehicle behavior characteristics according to the host vehicle behavior data, and recording behavior characteristics of the interaction between the obstacle and the host vehicle according to the behavior data of the interaction between the obstacle and the host vehicle.
10. The apparatus of claim 6, further comprising:
and the dynamic behavior characteristic classification module is used for performing similarity analysis among the dynamic behavior characteristics and classifying the dynamic behavior characteristics according to the similarity analysis result, so as to query according to the classification result.
11. A dynamic scene labeling data mining terminal is characterized by comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201811306640.XA 2018-11-05 2018-11-05 Dynamic scene labeling data mining method and device and terminal Active CN111143423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811306640.XA CN111143423B (en) 2018-11-05 2018-11-05 Dynamic scene labeling data mining method and device and terminal

Publications (2)

Publication Number Publication Date
CN111143423A true CN111143423A (en) 2020-05-12
CN111143423B CN111143423B (en) 2023-03-24

Family

ID=70516482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811306640.XA Active CN111143423B (en) 2018-11-05 2018-11-05 Dynamic scene labeling data mining method and device and terminal

Country Status (1)

Country Link
CN (1) CN111143423B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115617880A (en) * 2022-12-19 2023-01-17 北京百度网讯科技有限公司 Mining method, device and equipment for automatic driving scene and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101389004A (en) * 2007-09-13 2009-03-18 中国科学院自动化研究所 Moving target classification method based on on-line study
US9443153B1 (en) * 2015-06-12 2016-09-13 Volkswagen Ag Automatic labeling and learning of driver yield intention
CN106339691A (en) * 2016-09-07 2017-01-18 四川天辰智创科技有限公司 Method and device used for marking object
CN107991898A (en) * 2016-10-26 2018-05-04 法乐第(北京)网络科技有限公司 A kind of automatic driving vehicle simulating test device and electronic equipment

Also Published As

Publication number Publication date
CN111143423B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN109598066B (en) Effect evaluation method, apparatus, device and storage medium for prediction module
CN110796007B (en) Scene recognition method and computing device
CN110163176B (en) Lane line change position identification method, device, equipment and medium
JP2022505759A (en) Methods and equipment for testing driver assistance systems
CN109558854B (en) Obstacle sensing method and device, electronic equipment and storage medium
CN109703569B (en) Information processing method, device and storage medium
EP3620945A1 (en) Obstacle distribution simulation method, device and terminal based on multiple models
CN113343461A (en) Simulation method and device for automatic driving vehicle, electronic equipment and storage medium
CN112561859B (en) Monocular vision-based steel belt drilling and anchor net identification method and device for anchoring and protecting
CN113907663A (en) Obstacle map construction method, cleaning robot and storage medium
US10404565B2 (en) Simulation latency indication
CN109377694A (en) The monitoring method and system of community's vehicle
CN115830399A (en) Classification model training method, apparatus, device, storage medium, and program product
CN113762406A (en) Data mining method and device and electronic equipment
CN109886198B (en) Information processing method, device and storage medium
US11354461B2 (en) Method and device for simulating a distribution of obstacles
CN111143423B (en) Dynamic scene labeling data mining method and device and terminal
CN112487861A (en) Lane line recognition method and device, computing equipment and computer storage medium
CN111126154A (en) Method and device for identifying road surface element, unmanned equipment and storage medium
CN111126336B (en) Sample collection method, device and equipment
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN109948449B (en) Information processing method, device and storage medium
CN113393675A (en) Vehicle ID determination method, device, equipment and medium
CN114861793A (en) Information processing method, device and storage medium
CN114651190A (en) Method, device and computer program for approving the use of a sensor system for detecting objects in the environment of a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant