CN109829395B - Data processing method, device and equipment based on unmanned vehicle and storage medium - Google Patents


Publication number
CN109829395B
CN109829395B (application CN201910036390.0A)
Authority
CN
China
Prior art keywords
data
information
unmanned vehicle
target driving
driving scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910036390.0A
Other languages
Chinese (zh)
Other versions
CN109829395A (en
Inventor
慎东辉
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910036390.0A priority Critical patent/CN109829395B/en
Publication of CN109829395A publication Critical patent/CN109829395A/en
Application granted granted Critical
Publication of CN109829395B publication Critical patent/CN109829395B/en


Abstract

According to the unmanned-vehicle-based data processing method, device, equipment and storage medium provided by the embodiments of the invention, data generated while the unmanned vehicle is driving are acquired; at least one piece of tag information is determined for each piece of data, the tag information comprising unmanned vehicle action information and/or unmanned vehicle perception information that characterizes the data; and data conforming to a target driving scene are determined according to the tag information of each piece of data, where the target driving scene corresponds to at least two pieces of tag information. Scene-oriented mining and analysis is thus performed on the data generated during driving, which improves the efficiency and accuracy of scene extraction and, in turn, the efficiency of unmanned vehicle data processing.

Description

Data processing method, device and equipment based on unmanned vehicle and storage medium
Technical Field
Embodiments of the invention relate to the technical field of unmanned vehicles, and in particular to an unmanned-vehicle-based data processing method, device, equipment and storage medium.
Background
With the development of automobile technology, unmanned vehicles are beginning to be applied and developed. The unmanned vehicle has various driving scenes.
However, in the prior art, it remains an open problem how to deeply identify and analyze driving scenes and the data generated during driving, so as to determine which data conform to which driving scenes and thereby analyze and optimize the automatic driving of the unmanned vehicle.
Disclosure of Invention
Embodiments of the invention provide an unmanned-vehicle-based data processing method, device, equipment and storage medium that mine and analyze the data generated during driving with the scene as the target, thereby improving the efficiency and accuracy of scene extraction and, in turn, the efficiency of unmanned vehicle data processing.
A first aspect of the invention provides an unmanned-vehicle-based data processing method, comprising the following steps:
acquiring data generated in the driving process of the unmanned vehicle;
determining at least one piece of tag information for each piece of data, wherein the tag information comprises unmanned vehicle action information and/or unmanned vehicle perception information that characterizes the data;
and determining data conforming to a target driving scene according to the label information of each data, wherein the target driving scene corresponds to at least two pieces of label information.
In some embodiments, determining data that conforms to a target driving scenario according to tag information of each of the data includes:
acquiring label information corresponding to a target driving scene according to the target driving scene;
acquiring data corresponding to the label information according to the label information corresponding to the target driving scene;
and determining data conforming to the target driving scene according to the target driving scene and the data corresponding to the label information.
In some embodiments, determining data that conforms to a target driving scenario from the target driving scenario and data corresponding to the tag information includes:
acquiring a label operation rule according to the target driving scene;
and processing the data corresponding to the tag information with the tag operation rule, taking a tag as the unit of processing, to obtain the data conforming to the target driving scene.
In some embodiments, the tag operation rule comprises a combination of one or more of:
intersection operation, union operation and difference operation.
In some embodiments, the tag information further comprises: timestamp information;
the target driving scene comprises: time overlapping scenes and/or time precedence scenes;
the label information corresponding to the time overlapping scene is label information corresponding to the same timestamp information; and the label information corresponding to the time sequence scene is label information corresponding to one or more adjacent timestamp information.
In some embodiments, before determining the data conforming to the target driving scenario according to the tag information of each of the data, the method further includes:
receiving a retrieval instruction, wherein the retrieval instruction comprises the target driving scene.
In some embodiments, the unmanned vehicle action information includes any one or more of:
driving mode, driving behavior, driving speed.
In some embodiments, the unmanned vehicle awareness information includes any one or more of:
map element perception information, traffic light perception information, weather perception information, and obstacle perception information.
A second aspect of the present invention provides an unmanned vehicle-based data processing apparatus, comprising:
the acquisition module is used for acquiring data generated in the driving process of the unmanned vehicle;
the marking module is used for determining at least one piece of label information for each piece of data, wherein the label information comprises unmanned vehicle action information and/or unmanned vehicle perception information used for indicating the data representation;
and the processing module is used for determining data which accord with a target driving scene according to the label information of each data, wherein the target driving scene corresponds to at least two pieces of label information.
In some embodiments, the processing module is configured to obtain, according to a target driving scene, tag information corresponding to the target driving scene; acquiring data corresponding to the label information according to the label information corresponding to the target driving scene; and determining data conforming to the target driving scene according to the target driving scene and the data corresponding to the label information.
In some embodiments, the processing module is configured to obtain a tag operation rule according to the target driving scenario; and processing the data corresponding to the tag information by using the tag operation rule according to the tag information as a unit to obtain the data conforming to the target driving scene.
In some embodiments, the obtaining module is further configured to receive a retrieval instruction before determining data that conforms to a target driving scene according to tag information of each of the data, where the retrieval instruction includes the target driving scene.
A third aspect of the present invention provides an unmanned vehicle-based data processing apparatus comprising: the data processing system comprises a memory, a processor and a computer program, wherein the computer program is stored in the memory, and the processor runs the computer program to execute the unmanned vehicle-based data processing method provided by any one of the embodiments of the first aspect.
A fourth aspect of the present invention provides a storage medium, where a computer program is stored, and the computer program is used, when executed by a processor, to implement the unmanned vehicle-based data processing method provided in any one of the embodiments of the first aspect.
According to the data processing method, the device, the equipment and the storage medium based on the unmanned vehicle, provided by the embodiment of the invention, data generated in the driving process of the unmanned vehicle are obtained; determining at least one piece of tag information for each piece of data, wherein the tag information comprises unmanned vehicle action information and/or unmanned vehicle perception information used for indicating the representation of the data; and determining data conforming to a target driving scene according to the tag information of each data, wherein the target driving scene corresponds to at least two tag information, and mining and analyzing processing aiming at the scene is carried out on the data generated in the driving process of the unmanned vehicle, so that the efficiency and the accuracy of scene extraction are improved, and the efficiency of unmanned vehicle data processing is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a data processing method based on an unmanned vehicle according to an embodiment of the present invention;
fig. 2 is a schematic diagram of tag information in a straight-ahead turning scene according to an embodiment of the present invention;
fig. 3 is a schematic diagram of tag information in an obstacle parking scene according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an unmanned vehicle-based data processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an unmanned vehicle-based data processing device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects. "Comprises A, B and C" and "comprises A, B, C" mean that all three of A, B and C are comprised; "comprises A, B or C" means that one of A, B and C is comprised; "comprises A, B and/or C" means that any one, any two, or all three of A, B and C are comprised.
It should be understood that, in the present invention, "B corresponding to A" or "A corresponds to B" means that B is associated with A and can be determined from A. Determining B from A does not mean determining B from A alone; B may be determined from A and/or other information. The matching of A and B means that the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context.
In order to optimize or upgrade the driving of the unmanned vehicle, the state of the unmanned vehicle in various scenes needs to be analyzed, so that an optimization point is determined, and the unmanned vehicle is improved by adopting a corresponding technical means. In the process, unmanned vehicle data of a target scene need to be extracted from massive unmanned vehicle data, and corresponding processing is carried out to provide a data basis for searching an optimizable point. However, the existing data processing method is difficult to meet the scene extraction requirement of the unmanned vehicle. In order to solve the existing problems, the invention provides a data processing method, a device, equipment and a storage medium based on an unmanned vehicle, which can perform scene-oriented mining and analysis processing on data generated in the driving process of the unmanned vehicle, thereby improving the efficiency and accuracy of scene extraction and further improving the efficiency of unmanned vehicle data processing. This scheme is illustrated in detail below by means of several specific examples.
Fig. 1 is a schematic flow chart of a data processing method based on an unmanned vehicle according to an embodiment of the present invention, and as shown in fig. 1, an execution subject of the solution may be an unmanned vehicle measurement and control system, an unmanned vehicle optimization server, or other devices with a data processing function. For simplicity of description, the following embodiments will be explained with the execution subject as a server. The unmanned vehicle-based data processing method shown in fig. 1 includes the following steps S101 to S103.
And S101, acquiring data generated in the driving process of the unmanned vehicle.
During driving, the control system of the unmanned vehicle collects the data generated by the vehicle in real time and uploads the collected data to the server in real time, periodically, or according to user instructions, so that the server can perform subsequent data processing.
The data generated during the driving of the unmanned vehicle may be various, such as data indicating a driving mode (e.g., an automatic driving mode, a manual driving mode, a safe driving mode, etc.), data indicating a driving behavior (e.g., steering wheel control data, indicator light control data, etc.), and data indicating a driving speed (e.g., brake amount data, gear position data, accelerator amount data, etc.).
For another example, the data generated during driving may also be data indicating map element perception information (e.g., real-time map position identification data, GPS position data), data indicating traffic light perception information (e.g., traffic light image data captured by a camera), data indicating weather perception information (e.g., temperature sensor data, humidity sensor data, weather data received from external sources), and data indicating obstacle perception information (e.g., obstacle image data captured by a camera, obstacle data sensed by ranging sensors such as radar or ultrasonic sensors).
The data may be obtained online or offline, which is not limited in this embodiment.
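As a purely illustrative sketch (not part of the claimed method), the kinds of raw data enumerated above could be carried in a single timestamped record; all field names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DriveRecord:
    """One timestamped record of raw unmanned-vehicle data (hypothetical layout)."""
    timestamp: float           # time the record was generated
    speed_kmh: float           # data indicating driving speed
    steering_deg: float        # data indicating driving behavior
    driving_mode: str          # e.g. "auto", "manual", "safe"
    perception: dict = field(default_factory=dict)  # e.g. {"weather": "fog"}

# A record uploaded by the vehicle's control system might look like:
record = DriveRecord(timestamp=1.0, speed_kmh=30.0,
                     steering_deg=0.0, driving_mode="auto")
```

Whether records are assembled on the vehicle or on the server, and which fields they contain, are implementation choices the patent leaves open.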
S102, determining at least one piece of label information for each piece of data.
The data generated by the unmanned vehicle can reflect various states of the unmanned vehicle at the time, and the present embodiment represents the states by the tag information, so that the data irrelevant to the scene is converted into the tag information associated with the scene. Wherein the tag information comprises unmanned vehicle action information and/or unmanned vehicle perception information for indicating the data representation.
In some embodiments, the unmanned vehicle action information includes any one or more of: driving mode, driving behavior, and driving speed. For example, from data indicating the driving mode, the obtained tag information may be: automatic driving mode, manual driving mode, safe driving mode, unknown mode, or mode transition timestamp. For another example, from data indicating driving behavior, the obtained tag information may be: straight driving, left turn, right turn, U-turn, left lane change, right lane change, or continuous left-and-right lane change. For another example, from data indicating driving speed, the obtained tag information may be: stopped, driving, constant-speed driving, non-overspeed driving, or overspeed driving. Fig. 2 is a schematic diagram of tag information in a straight-ahead turning scene according to an embodiment of the present invention. In the embodiment shown in fig. 2, timestamps t11 to t12 form the first stage, whose tag information is: constant-speed driving, straight driving, automatic driving mode; timestamps t12 to t13 form the second stage, whose tag information is: driving, left turn, automatic driving mode; timestamps t13 to t14 form the third stage, whose tag information is: constant-speed driving, straight driving, automatic driving mode. The left-turn maneuver of the unmanned vehicle during driving can thus be reflected through the tag information.
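As an illustration of how driving-speed tags such as those above might be derived from raw data, the following sketch maps a measured speed to tag information; the threshold and tag names are hypothetical choices, not taken from the patent:

```python
def speed_tags(speed_kmh, speed_limit_kmh=60.0):
    """Derive driving-speed tag information from raw speed data.

    Tag names follow the examples in the description ("stopped", "driving",
    "overspeed", "non-overspeed"); the speed limit is illustrative.
    """
    if speed_kmh == 0:
        return {"stopped"}
    tags = {"driving"}
    # A moving vehicle is additionally tagged by its relation to the limit.
    tags.add("overspeed" if speed_kmh > speed_limit_kmh else "non-overspeed")
    return tags
```

In a real system, analogous rules would map mode data and steering data to driving-mode and driving-behavior tags for each timestamped stage.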
In other embodiments, the unmanned vehicle perception information includes any one or more of: map element perception information, traffic light perception information, weather perception information, and obstacle perception information. For example, from data indicating map element perception information, the obtained tag information may be: crossroads, T-junction, stop line, or map area. For another example, from data indicating traffic light perception information, the obtained tag information may be: red light, green light turning red, or red light turning green. For another example, from data indicating weather perception information, the obtained tag information may be: foggy weather, rainy weather, windy weather, or high-temperature weather. For another example, from data indicating obstacle perception information, the obtained tag information may be: obstacle type (e.g., pedestrian, stationary object, bicycle, motorcycle), obstacle position (e.g., opposite lane, other lane, forward lane, rear lane), obstacle-to-vehicle relationship (e.g., distance change, speed change), or obstacle behavior (e.g., cut-in/cut-out, reversing, U-turn, etc.). Fig. 3 is a schematic diagram of tag information in an obstacle parking scene according to an embodiment of the present invention. In the embodiment shown in fig. 3, timestamps t21 to t22 form the first stage, whose tag information is: foggy weather, constant-speed driving, straight driving, automatic driving mode; timestamps t22 to t23 form the second stage, whose tag information is: foggy weather, driving, brake deceleration, manual driving mode; timestamps t23 to t24 form the third stage, whose tag information is: foggy weather, stopped, unknown mode.
The tag information thus reflects the process in which the unmanned vehicle brakes urgently after encountering an obstacle in foggy weather and is taken over manually.
S103, determining data conforming to a target driving scene according to the label information of each data, wherein the target driving scene corresponds to at least two pieces of label information.
The target driving scene may be information preset on the server; for example, if tests for the obstacle cut-in scene and the constant-speed driving scene must be performed in each optimization round of the unmanned vehicle, the server may preset these two scenes so that no additional input operation by the user is needed. Alternatively, the target driving scene may be input by the user: before determining the data that conform to a target driving scene according to the tag information of each piece of data, a retrieval instruction is received, where the retrieval instruction includes the target driving scene.
Specifically, there may be multiple implementation manners for determining the data that conforms to the target driving scene according to the tag information of each piece of data, and in some implementation manners, the server may first obtain the tag information corresponding to the target driving scene according to the target driving scene. For example, in the embodiment where the target scene is a straight-going turning scene, obtaining the tag information corresponding to the straight-going turning scene according to the pre-stored correspondence between the scene and the tag may include: "straight" and "turn". For the obstacle parking scene, the obtained tag information includes: "obstacles" and "stops". Wherein the "obstacle" tag information may correspond to all tag information derived from the data indicating the obstacle sensing information.
And then, the server acquires data corresponding to the label information according to the label information corresponding to the target driving scene. For example, from the tag information "go straight" and "turn", data containing data between t11 to t14 shown in fig. 2 is acquired. For another example, data including data between t21 and t24 shown in fig. 3 is acquired from the tag information "obstacle" and "stop".
After the data are acquired according to the tag information, the server determines the data conforming to the target driving scene according to the target driving scene and the data corresponding to the tag information. Specifically, a tag operation rule may be obtained according to the target driving scene, and the data corresponding to the tag information are then processed with the tag operation rule, taking a tag as the unit of processing, to obtain the data conforming to the target driving scene. For example, data carrying the "obstacle" tag may also carry other tag information, while the obstacle parking scene additionally requires the "stop" tag. Rather than searching for "stop" within the data corresponding to the "obstacle" tag, an intersection operation may simply be performed on the data set corresponding to "obstacle" and the data set corresponding to "stop"; the result of the operation is the data conforming to the obstacle parking scene.
In some embodiments, the tag operation rule comprises a combination of one or more of:
intersection operation, union operation and difference operation.
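The tag operation rules above amount to set algebra over per-tag data sets. A minimal sketch of such a rule engine follows; the function name, the data-ID representation, and the rule encoding are assumptions for illustration only:

```python
def extract_scene(data_by_tag, required_tags, rule="intersection"):
    """Apply a tag operation rule to per-tag sets of data identifiers.

    data_by_tag: dict mapping a tag to the set of data IDs carrying that tag.
    required_tags: the tags that the target driving scene corresponds to.
    rule: "intersection", "union", or "difference" (applied left to right).
    """
    sets = [data_by_tag.get(t, set()) for t in required_tags]
    if not sets:
        return set()
    if rule == "intersection":
        return set.intersection(*sets)
    if rule == "union":
        return set.union(*sets)
    if rule == "difference":
        return sets[0].difference(*sets[1:]) if len(sets) > 1 else sets[0]
    raise ValueError(f"unknown tag operation rule: {rule}")

# Obstacle parking scene: data tagged both "obstacle" and "stop".
data_by_tag = {"obstacle": {1, 2, 3}, "stop": {2, 3, 4}}
scene_data = extract_scene(data_by_tag, ["obstacle", "stop"])  # {2, 3}
```

Working on precomputed per-tag sets is what lets the server avoid scanning the raw data for every scene query.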
In some embodiments, the tag information may further include timestamp information, such as t11 to t14 shown in fig. 2 and t21 to t24 shown in fig. 3. The target driving scene includes a time-overlapping scene and/or a time-precedence scene. The tag information corresponding to a time-overlapping scene is tag information that shares the same timestamp information; the tag information corresponding to a time-precedence scene is tag information corresponding to one or more adjacent pieces of timestamp information. The data generated by the unmanned vehicle carry a time attribute, and as time advances the data may correspond to different tags (see fig. 2 and fig. 3); it can be understood that the data reflect changes in the vehicle's actions. For example, the tag information of the first-stage data shown in fig. 2 is: constant-speed driving, straight driving, automatic driving mode, while the tag information of the second-stage data is: driving, left turn, automatic driving mode. That is, across two adjacent timestamps the tag information changes from constant-speed driving to driving and from straight driving to left turn, thereby matching a time-precedence scene (the straight-ahead turning scene).
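A time-precedence scene can be matched by looking for a required sequence of tags across adjacent timestamped stages. The following is a hedged sketch of that idea; the stage representation and function name are invented for illustration:

```python
def matches_precedence_scene(stages, pattern):
    """Check whether consecutive stages match a required tag sequence.

    stages: list of (timestamp_range, tag_set) tuples, in time order.
    pattern: list of tags, one required tag per consecutive stage.
    """
    for start in range(len(stages) - len(pattern) + 1):
        if all(pattern[i] in stages[start + i][1] for i in range(len(pattern))):
            return True
    return False

# The straight-ahead turning scene of fig. 2: straight, then left turn, then straight.
stages = [("t11-t12", {"constant-speed driving", "straight driving", "auto"}),
          ("t12-t13", {"driving", "left turn", "auto"}),
          ("t13-t14", {"constant-speed driving", "straight driving", "auto"})]
found = matches_precedence_scene(
    stages, ["straight driving", "left turn", "straight driving"])  # True
```

A time-overlapping scene would instead test a single stage's tag set for several required tags at once.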
According to the unmanned-vehicle-based data processing method provided by the embodiment of the invention, data generated while the unmanned vehicle is driving are acquired; at least one piece of tag information is determined for each piece of data, the tag information comprising unmanned vehicle action information and/or unmanned vehicle perception information that characterizes the data; and data conforming to a target driving scene are determined according to the tag information of each piece of data, where the target driving scene corresponds to at least two pieces of tag information. Scene-oriented mining and analysis is thus performed on the data generated during driving, which improves the efficiency and accuracy of scene extraction and, in turn, the efficiency of unmanned vehicle data processing.
Fig. 4 is a schematic structural diagram of an unmanned vehicle-based data processing apparatus according to an embodiment of the present invention, and as shown in fig. 4, an unmanned vehicle-based data processing apparatus 40 according to the embodiment includes:
the acquiring module 41 is used for acquiring data generated in the driving process of the unmanned vehicle;
a labeling module 42, configured to determine at least one label information for each of the data, where the label information includes unmanned vehicle action information and/or unmanned vehicle perception information indicating a representation of the data;
and a processing module 43, configured to determine, according to the tag information of each piece of data, data that conforms to a target driving scene, where the target driving scene corresponds to at least two pieces of tag information.
The unmanned-vehicle-based data processing device provided by the embodiment of the invention acquires data generated while the unmanned vehicle is driving; determines at least one piece of tag information for each piece of data, the tag information comprising unmanned vehicle action information and/or unmanned vehicle perception information that characterizes the data; and determines data conforming to a target driving scene according to the tag information of each piece of data, where the target driving scene corresponds to at least two pieces of tag information. Scene-oriented mining and analysis is thus performed on the data generated during driving, which improves the efficiency and accuracy of scene extraction and, in turn, the efficiency of unmanned vehicle data processing.
In some embodiments, the processing module 43 is configured to obtain, according to a target driving scene, tag information corresponding to the target driving scene; acquiring data corresponding to the label information according to the label information corresponding to the target driving scene; and determining data conforming to the target driving scene according to the target driving scene and the data corresponding to the label information.
In some embodiments, the processing module 43 is configured to obtain a tag operation rule according to the target driving scenario; and processing the data corresponding to the tag information by using the tag operation rule according to the tag information as a unit to obtain the data conforming to the target driving scene.
In some embodiments, the tag operation rule comprises a combination of one or more of:
intersection operation, union operation and difference operation.
In some embodiments, the tag information further comprises: time stamp information.
The target driving scene comprises: time overlapping class scenes and/or time precedence class scenes.
The label information corresponding to the time overlapping scene is label information corresponding to the same timestamp information; and the label information corresponding to the time sequence scene is label information corresponding to one or more adjacent timestamp information.
In some embodiments, the obtaining module 41 is further configured to receive a retrieval instruction before determining data that conforms to a target driving scenario according to tag information of each of the data, where the retrieval instruction includes the target driving scenario.
In some embodiments, the unmanned vehicle action information includes any one or more of:
driving mode, driving behavior, driving speed.
In some embodiments, the unmanned vehicle awareness information includes any one or more of:
map element perception information, traffic light perception information, weather perception information, and obstacle perception information.
The data processing device based on the unmanned vehicle provided in this embodiment is the same as the technical solution for implementing the data processing method based on the unmanned vehicle provided in any one of the foregoing embodiments, and the implementation principle is similar and is not described again.
Fig. 5 is a schematic diagram of a hardware structure of an unmanned vehicle-based data processing device according to an embodiment of the present invention, where the device 50 includes: a processor 51, a memory 52 and computer programs; wherein
A memory 52 for storing the computer program, which may also be a flash memory (flash). The computer program is, for example, an application program, a functional module, or the like that implements the above method.
And a processor 51 for executing the computer program stored in the memory to realize the steps executed by the server in the above unmanned vehicle-based data processing method. Reference may be made in particular to the description relating to the preceding method embodiment.
Alternatively, the memory 52 may be separate from, or integrated with, the processor 51. When the memory 52 is a device independent of the processor 51, the apparatus may further include a bus 53 for connecting the memory 52 and the processor 51.
The present invention also provides a readable storage medium storing a computer program which, when executed by a processor, implements the unmanned vehicle-based data processing method provided by the various embodiments described above.
The readable storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC), and the ASIC may reside in user equipment. Of course, the processor and the readable storage medium may also reside as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. At least one processor of the device may read the execution instructions from the readable storage medium, and execution of these instructions by the at least one processor causes the device to implement the unmanned vehicle-based data processing method provided by the various embodiments described above.
According to the embodiments of the present invention, data generated during driving of the unmanned vehicle is obtained; at least one piece of tag information is determined for each piece of data, the tag information comprising unmanned vehicle action information and/or unmanned vehicle perception information indicating what the data represents; and data conforming to a target driving scene is determined according to the tag information of each piece of data, the target driving scene corresponding to at least two pieces of tag information. In this way, scene-oriented mining and analysis is performed on the data generated during driving, which improves the efficiency and accuracy of scene extraction and thereby the efficiency of unmanned vehicle data processing.
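A minimal end-to-end sketch of the flow summarized above — index the data by tag, then apply a tag operation rule (intersection, union, or difference) to extract the data conforming to a target driving scene. The tag→data-id index and all names below are assumptions made for illustration, not the patented implementation:

```python
from collections import defaultdict

def build_tag_index(tagged_records):
    """Map each tag to the set of data ids carrying it."""
    index = defaultdict(set)
    for data_id, tags in tagged_records:
        for tag in tags:
            index[tag].add(data_id)
    return index

def extract_scene(index, rule):
    """Apply a tag operation rule of the form (operation, tag_a, tag_b)."""
    op, tag_a, tag_b = rule
    a, b = index.get(tag_a, set()), index.get(tag_b, set())
    if op == "and":   # intersection operation
        return a & b
    if op == "or":    # union operation
        return a | b
    if op == "not":   # difference operation
        return a - b
    raise ValueError(f"unknown operation: {op}")

records = [
    (101, {"red_light", "braking"}),
    (102, {"red_light", "accelerating"}),
    (103, {"green_light", "accelerating"}),
]
index = build_tag_index(records)
# Hypothetical target scene "stopping at a red light" = intersection of two tags.
print(sorted(extract_scene(index, ("and", "red_light", "braking"))))  # [101]
```

Under this sketch, a retrieval instruction naming a target driving scene would be resolved to one such rule, and the rule is evaluated per tag (taking tag information as the unit) over the indexed data.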
In the above apparatus embodiments, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A data processing method based on unmanned vehicles is characterized by comprising the following steps:
acquiring data generated in the driving process of the unmanned vehicle;
determining at least one piece of tag information for each piece of data, wherein the tag information comprises unmanned vehicle action information and/or unmanned vehicle perception information used for indicating what the data represents;
determining data conforming to a target driving scene according to the tag information of each piece of data, wherein the target driving scene corresponds to at least two pieces of tag information;
wherein determining data conforming to a target driving scene according to the tag information of each piece of data comprises:
acquiring tag information corresponding to the target driving scene according to the target driving scene;
acquiring data corresponding to the tag information according to the tag information corresponding to the target driving scene; and
determining data conforming to the target driving scene according to the target driving scene and the data corresponding to the tag information, which comprises:
acquiring a tag operation rule according to the target driving scene; and
processing the data corresponding to the tag information, taking the tag information as a unit, by using the tag operation rule, to obtain the data conforming to the target driving scene.
2. The method of claim 1, wherein the tag operation rule comprises a combination of one or more of the following:
intersection operation, union operation, and difference operation.
3. The method of claim 1 or 2, wherein the tag information further comprises: timestamp information;
the target driving scene comprises: a time-overlapping scene and/or a time-sequence scene;
the tag information corresponding to the time-overlapping scene is tag information corresponding to the same timestamp information, and the tag information corresponding to the time-sequence scene is tag information corresponding to one or more adjacent pieces of timestamp information.
4. The method according to claim 1 or 2, before determining data conforming to a target driving scene according to the tag information of each piece of the data, further comprising:
receiving a retrieval instruction, wherein the retrieval instruction comprises the target driving scene.
5. The method of claim 1 or 2, wherein the unmanned vehicle action information comprises any one or more of:
driving mode, driving behavior, and driving speed.
6. The method of claim 1 or 2, wherein the unmanned vehicle awareness information comprises any one or more of:
map element perception information, traffic light perception information, weather perception information, and obstacle perception information.
7. An unmanned vehicle-based data processing apparatus, comprising:
the acquisition module is used for acquiring data generated in the driving process of the unmanned vehicle;
the marking module is used for determining at least one piece of tag information for each piece of data, wherein the tag information comprises unmanned vehicle action information and/or unmanned vehicle perception information used for indicating what the data represents;
the processing module is used for determining data conforming to a target driving scene according to the tag information of each piece of data, wherein the target driving scene corresponds to at least two pieces of tag information;
the processing module is specifically configured to: acquire tag information corresponding to the target driving scene according to the target driving scene; acquire data corresponding to the tag information according to the tag information corresponding to the target driving scene; and determine data conforming to the target driving scene according to the target driving scene and the data corresponding to the tag information;
the processing module is further specifically configured to: acquire a tag operation rule according to the target driving scene; and process the data corresponding to the tag information, taking the tag information as a unit, by using the tag operation rule, to obtain the data conforming to the target driving scene.
8. An unmanned vehicle-based data processing device, comprising: a memory, a processor, and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the unmanned vehicle-based data processing method of any of claims 1-6.
9. A readable storage medium storing a computer program which, when executed by a processor, implements the unmanned vehicle-based data processing method according to any one of claims 1 to 6.
CN201910036390.0A 2019-01-15 2019-01-15 Data processing method, device and equipment based on unmanned vehicle and storage medium Active CN109829395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910036390.0A CN109829395B (en) 2019-01-15 2019-01-15 Data processing method, device and equipment based on unmanned vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910036390.0A CN109829395B (en) 2019-01-15 2019-01-15 Data processing method, device and equipment based on unmanned vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN109829395A CN109829395A (en) 2019-05-31
CN109829395B true CN109829395B (en) 2022-03-08

Family

ID=66860888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910036390.0A Active CN109829395B (en) 2019-01-15 2019-01-15 Data processing method, device and equipment based on unmanned vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN109829395B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329500A (en) * 2019-08-05 2021-02-05 北京百度网讯科技有限公司 Scene segment implementation method and device based on discrete frame and storage medium
CN112380137A (en) * 2020-12-04 2021-02-19 清华大学苏州汽车研究院(吴江) Method, device and equipment for determining automatic driving scene and storage medium
CN112905849A (en) * 2021-02-18 2021-06-04 中国第一汽车股份有限公司 Vehicle data processing method and device
CN112818910B (en) * 2021-02-23 2022-03-18 腾讯科技(深圳)有限公司 Vehicle gear control method and device, computer equipment and storage medium
CN112861266B (en) * 2021-03-05 2022-05-06 腾讯科技(深圳)有限公司 Method, apparatus, medium, and electronic device for controlling device driving mode

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102350990A (en) * 2011-06-29 2012-02-15 北京理工大学 Comparison model for obstacle avoidance behaviors of vehicle under manned and unmanned conditions
CN103150786A (en) * 2013-04-09 2013-06-12 北京理工大学 Non-contact type unmanned vehicle driving state measuring system and measuring method
CN105787058A (en) * 2016-02-26 2016-07-20 广州品唯软件有限公司 User label system and data pushing system based on same
CN106651175A (en) * 2016-12-21 2017-05-10 驭势科技(北京)有限公司 Unmanned vehicle operation management system, general control platform, branch control platform, vehicle-mounted computation device and computer readable storage medium
CN108205568A (en) * 2016-12-19 2018-06-26 腾讯科技(深圳)有限公司 Method and device based on label selection data
CN108357496A (en) * 2018-02-12 2018-08-03 北京小马智行科技有限公司 Automatic Pilot control method and device
CN108983788A (en) * 2018-08-15 2018-12-11 上海海事大学 The unmanned sanitation cart intelligence control system and method excavated based on big data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Research on Several Key Technologies of the Perception System of Unmanned Intelligent Vehicles in Urban Environments"; Chen Long; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2015-07-15 (No. 7); main text, p. 11 *

Also Published As

Publication number Publication date
CN109829395A (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN109829395B (en) Data processing method, device and equipment based on unmanned vehicle and storage medium
CN110796007B (en) Scene recognition method and computing device
US9443153B1 (en) Automatic labeling and learning of driver yield intention
CN112069856A (en) Map generation method, driving control method, device, electronic equipment and system
CN111582189B (en) Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN108573611B (en) Speed limit sign fusion method and speed limit sign fusion system
CN110969178A (en) Data fusion system and method for automatic driving vehicle and automatic driving system
CN113643431A (en) System and method for iterative optimization of visual algorithm
CN117079238A (en) Road edge detection method, device, equipment and storage medium
CN115112125A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115841660A (en) Distance prediction method, device, equipment, storage medium and vehicle
CN111126336B (en) Sample collection method, device and equipment
CN110446106B (en) Method for identifying front camera file, electronic equipment and storage medium
CN114898314A (en) Target detection method, device and equipment for driving scene and storage medium
CN115402347A (en) Method for identifying a drivable region of a vehicle and driving assistance method
CN111854770B (en) Vehicle positioning system and method
CN113869440A (en) Image processing method, apparatus, device, medium, and program product
CN111143423A (en) Dynamic scene labeling data mining method and device and terminal
CN114820691B (en) Method, device and equipment for detecting motion state of vehicle and storage medium
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN117870701A (en) Lane positioning method and device, electronic equipment and storage medium
Narasimhan Ramakrishnan Design and evaluation of perception system algorithms for semi-autonomous vehicles
CN116434017A (en) Multi-sensor post-fusion method and device based on single-sensor time sequence tracking result
WO2020129247A1 (en) Information processing device, information processing method, and information processing program
CN114782914A (en) Automatic driving vehicle positioning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant