CN112634610A - Natural driving data acquisition method and device, electronic equipment and storage medium - Google Patents

Natural driving data acquisition method and device, electronic equipment and storage medium

Info

Publication number
CN112634610A
CN112634610A
Authority
CN
China
Prior art keywords
traffic
data
real space
scene
attribute information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011467160.9A
Other languages
Chinese (zh)
Inventor
于鹏
孙亚夫
吴琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Innovation Center For Mobility Intelligent Bicmi Co ltd
Original Assignee
Beijing Innovation Center For Mobility Intelligent Bicmi Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Innovation Center For Mobility Intelligent Bicmi Co ltd filed Critical Beijing Innovation Center For Mobility Intelligent Bicmi Co ltd
Priority to CN202011467160.9A priority Critical patent/CN112634610A/en
Publication of CN112634610A publication Critical patent/CN112634610A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/012 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from other sources than vehicle or roadside beacons, e.g. mobile networks
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • G08G1/054 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed photographing overspeeding vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The embodiment of the invention discloses a natural driving data acquisition method and device, electronic equipment and a storage medium. The method comprises the following steps: collecting traffic scene data by using an unmanned aerial vehicle; and extracting behavior data of traffic participants and traffic environment data from the traffic scene data. The method and the device conceal the natural driving data acquisition process, which improves the authenticity and naturalness of the acquired natural driving data and better protects the privacy of the persons being recorded. Because there is little occlusion between traffic targets, data integrity is also improved; and because no line-of-sight vanishing problem exists, the data need not be restored by other algorithms, which improves data precision.

Description

Natural driving data acquisition method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a natural driving data acquisition method, a natural driving data acquisition device, electronic equipment and a storage medium.
Background
With the continuous development of automatic driving technology, automatic driving systems are evolving toward gradually replacing human drivers or performing part of the driving task in their place. However, the industry currently lacks real-world data to support the training of automatic driving models and the evaluation and testing of automatic driving systems, which hinders the development of automatic driving technology. In addition, establishing relevant standards for automatic driving technology also requires the support of a large amount of natural driving data. Therefore, building a natural driving database is particularly important for the testing, evaluation and standardization of automatic driving systems. Most existing natural driving data acquisition methods fall short in precision, data naturalness, data integrity, privacy protection and the like. A natural driving data collection method that overcomes these deficiencies is therefore needed.
Disclosure of Invention
It is an object of embodiments of the present invention to address at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
The embodiment of the invention provides a natural driving data acquisition method and device, electronic equipment and a storage medium, which can improve the precision, naturalness, authenticity and integrity of natural driving data and better protect the privacy of the person from whom the natural driving data is collected.
In a first aspect, a natural driving data collection method is provided, including:
collecting traffic scene data by using an unmanned aerial vehicle;
and extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data.
Optionally, the collecting traffic scene data with the drone includes:
controlling the unmanned aerial vehicle to hover at a fixed height;
and collecting traffic scene video data from the fixed height by adopting a camera configured by the unmanned aerial vehicle.
Optionally, the extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data includes:
identifying traffic participants and traffic environment objects from the traffic scene video data;
according to the traffic scene video data, determining attribute information and track information of the traffic participants in a real space and attribute information of the traffic environment object in the real space; the behavior data of the traffic participant comprises attribute information and track information of the traffic participant in a real space, and the traffic environment data comprises attribute information of the traffic environment object in the real space.
Optionally, the determining, according to the traffic scene video data, attribute information and trajectory information of the traffic participant in a real space and attribute information of the traffic environment object in the real space includes:
determining the real physical size corresponding to a single pixel point in the traffic scene video data according to the fixed height hovering by the unmanned aerial vehicle and the parameter of the camera configured by the unmanned aerial vehicle;
and determining attribute information and track information of the traffic participants in the real space and attribute information of the traffic environment object in the real space according to the attribute information and track information of the traffic participants represented by the pixel points and the attribute information of the traffic environment object represented by the pixel points in the traffic scene video data based on the real physical dimensions corresponding to the single pixel points in the traffic scene video data.
Optionally, the attribute information of the traffic participant in the real space includes the size of the traffic participant in the real space, and the trajectory information of the traffic participant in the real space includes the speed, the acceleration, the position and the relative position with other traffic participants or traffic environment objects of the traffic participant in the real space; the attribute information of the traffic-environment object in the real space includes a size and a position of the traffic-environment object in the real space.
Optionally, the extracting the behavior data of the traffic participant and the traffic environment data from the traffic scene data includes:
classifying the traffic scene data according to scene elements contained in the traffic scene data;
and respectively extracting the behavior data and the traffic environment data of the traffic participants aiming at various traffic scene data.
Optionally, before the extracting the behavior data and the traffic environment data of the traffic participant from the traffic scene data, the method further includes:
and selecting traffic scene data with the traffic flow above a preset flow threshold and/or the visibility above a preset visibility threshold for extracting behavior data and traffic environment data of traffic participants.
Optionally, the collecting traffic scene data with the drone includes:
and collecting traffic scene data by adopting an unmanned aerial vehicle in clear daytime.
In a second aspect, there is provided a natural driving data acquisition apparatus comprising:
the acquisition module is used for acquiring traffic scene data by adopting an unmanned aerial vehicle;
and the extraction module is used for extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data.
Optionally, the acquisition module includes:
the control submodule is used for controlling the unmanned aerial vehicle to hover at a fixed height;
and the acquisition submodule is used for acquiring the traffic scene video data from the fixed height by adopting the camera configured by the unmanned aerial vehicle.
Optionally, the extraction module includes:
the identification submodule is used for identifying traffic participants and traffic environment objects from the traffic scene video data;
the determining submodule is used for determining attribute information and track information of the traffic participants in a real space and attribute information of the traffic environment object in the real space according to the traffic scene video data; the behavior data of the traffic participant comprises attribute information and track information of the traffic participant in a real space, and the traffic environment data comprises attribute information of the traffic environment object in the real space.
Optionally, the determining sub-module includes:
the first determining unit is used for determining the real physical size corresponding to a single pixel point in the traffic scene video data according to the fixed height hovering by the unmanned aerial vehicle and the parameter of the camera configured by the unmanned aerial vehicle;
and the second determining unit is used for determining the attribute information and the track information of the traffic participant in the real space and the attribute information of the traffic environment object in the real space according to the attribute information and the track information of the traffic participant represented by the pixel points and the attribute information of the traffic environment object represented by the pixel points in the traffic scene video data based on the real physical size corresponding to the single pixel point in the traffic scene video data.
Optionally, the attribute information of the traffic participant in the real space includes the size of the traffic participant in the real space, and the trajectory information of the traffic participant in the real space includes the speed, the acceleration, the position and the relative position with other traffic participants or traffic environment objects of the traffic participant in the real space; the attribute information of the traffic-environment object in the real space includes a size and a position of the traffic-environment object in the real space.
Optionally, the extraction module further includes:
the classification submodule is used for classifying the traffic scene data according to scene elements contained in the traffic scene data;
and the extraction submodule is used for respectively extracting the behavior data and the traffic environment data of the traffic participants aiming at various traffic scene data.
Optionally, the apparatus further comprises:
the selection module is used for selecting traffic scene data of which the traffic flow is above a preset flow threshold and/or the visibility is above a preset visibility threshold, and extracting behavior data and traffic environment data of traffic participants.
Optionally, the acquisition module is specifically configured to:
and collecting traffic scene data by adopting an unmanned aerial vehicle in clear daytime.
In a third aspect, an electronic device is provided, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method described above.
In a fourth aspect, a storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the method described above.
The embodiment of the invention at least comprises the following beneficial effects:
according to the natural driving data acquisition method and device provided by the embodiment of the invention, the unmanned aerial vehicle is adopted to acquire traffic scene data, and behavior data and traffic environment data of traffic participants are extracted from the traffic scene data. Based on the method and the device, the disguise of the natural driving data acquisition process is realized, the authenticity and the naturalness of the acquired natural driving data are further improved, the privacy of the acquired person is better protected, the data integrity is also improved due to less shielding among traffic targets, and in addition, due to the fact that the problem of line of sight disappearance does not exist, the data do not need to be restored by means of other algorithms, and the data precision is improved.
Additional advantages, objects, and features of embodiments of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of embodiments of the invention.
Drawings
FIG. 1 is a schematic diagram of an exemplary system architecture in which embodiments of the present invention may be used;
FIG. 2 is a flow chart of a natural driving data collection method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a layout manner of the unmanned aerial vehicles according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a natural driving data acquisition device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the accompanying drawings so that those skilled in the art can implement the embodiments of the invention with reference to the description.
A real traffic scene is mainly composed of two parts: traffic participants and traffic environment objects. Natural driving data describes the behavior of the traffic participants in the real traffic scene, for example the driving behavior of a motor vehicle driver. Natural driving data may also include traffic environment data, such as attribute information of roads and traffic facilities. At present, natural driving data is mainly acquired by mounting various sensors on a motor vehicle; the sensors collect parameters of the vehicle while the driver drives it on the road. Because this acquisition process is not concealed, the driver may deliberately produce unusually standard driving behavior to cooperate with the data collection, whereas the behavior of traffic participants in a real traffic scene does not fully conform to such standards, so the authenticity and naturalness of the collected natural driving data are affected. In addition, because this collection process requires the participation of the person being recorded, there is a risk of leaking that person's privacy. Furthermore, since the sensors are mounted on a motor vehicle, traffic participants on the road may occlude one another while the vehicle is driving, which affects data integrity. Moreover, for a camera arranged in the cab, the vanishing of sight lines means that the collected data must be restored by other algorithms, which affects data precision.
To address these defects, the embodiment of the invention provides a natural driving data acquisition method that uses an unmanned aerial vehicle to collect traffic scene data and extracts behavior data of traffic participants and traffic environment data from the traffic scene data. The person whose driving is recorded does not participate in the data acquisition process at all, and the acquisition is highly concealed, so the authenticity and naturalness of the natural driving data are improved and the privacy of the recorded person is not involved. Because the unmanned aerial vehicle operates in the air, there is little occlusion between traffic participants and traffic environment objects, so data integrity is improved. And since no line-of-sight vanishing problem exists, no data restoration by other algorithms is needed, so data precision is higher.
The following briefly introduces exemplary system architectures of embodiments of a natural driving data collection method, apparatus, electronic device, and storage medium provided by embodiments of the present invention. Fig. 1 illustrates an exemplary system architecture to which embodiments of the natural driving data collection method, apparatus, electronic device, and storage medium provided by embodiments of the present invention may be applied. As shown in fig. 1, the system architecture may include a drone 110, a camera 120, a network 130, and a server device 140.
The camera 120 is mounted on the drone 110. The unmanned aerial vehicle 110 hovers above a traffic scene to be collected, and video data of the traffic flow and the traffic environment within the camera shooting range are collected through the camera 120 and serve as the traffic scene video data. Here, other data acquisition devices may also be used, for example, a laser radar may acquire laser point cloud data of a traffic flow and a traffic environment below the laser radar, and by processing the laser point cloud data, behavior data and traffic environment data of traffic participants may also be extracted from the laser point cloud data. In addition, the camera 120 may be used to collect video data of a traffic flow and a traffic environment, and may also be used to collect image data of the traffic flow and the traffic environment, and when the camera 120 continuously collects a plurality of pieces of image data, the server device 140 may extract behavior data and traffic environment data of the traffic participants according to the plurality of pieces of image data. The unmanned aerial vehicle can be a multi-rotor unmanned aerial vehicle, and the embodiment of the invention is not particularly limited to this.
The network 130 is a medium for providing communication links between the drone 110, the camera 120, and the server device 140. The unmanned aerial vehicle 110 may receive a control instruction sent by the server device 140 through the network 130 and hover at a fixed height above a road where traffic scene data needs to be collected, and the traffic scene data collected by the camera 120 may be directly sent to the server device 140 through the network 130, or may be sent to the unmanned aerial vehicle 110 first, and then sent to the server device 140 by the unmanned aerial vehicle 110 through the network. Network 130 may include various types of connections, such as wired communication links, wireless communication links, or fiber optic cables, to name a few. The present invention is not particularly limited herein.
The server device 140 may be a server device providing various services, for example, receive traffic scene data sent by the camera 120 or the drone 110 through the network 130, and send a control instruction to the drone 110 through the network, so that the drone 110 hovers at a fixed height above a road where the traffic scene data needs to be collected according to the control instruction, and may further extract behavior data and traffic environment data of traffic participants from the traffic scene data through data analysis capability. A database runs on the server device 140, and the collected traffic scene data and the behavior data of the traffic participants and the traffic environment data extracted from the traffic scene data can be stored in the database for constructing an automatic driving simulation scene. The server device 140 may be implemented as a distributed server device cluster composed of a plurality of server devices, or may be implemented as a single server device. The server device may also be other computing devices with corresponding service capabilities, such as a terminal device like a computer. The embodiment of the invention does not limit the position relation between the server-side equipment and the traffic scene to be collected, and the server-side equipment can be arranged outside the traffic scene to be collected.
It should be understood that the number of the drones, the cameras, the network and the server devices in fig. 1 is only illustrative, and the number of the drones, the cameras, the network and the server devices can be selected according to actual needs. The present invention is not particularly limited in this regard.
Fig. 2 is a flowchart of a natural driving data collection method performed by a system with processing capability, a server, or a natural driving data collection apparatus according to an embodiment of the present invention. As shown in fig. 2, the method includes:
and step 210, collecting traffic scene data by adopting an unmanned aerial vehicle.
Here, the traffic scene data may include data of the traffic flow and the traffic environment formed in a real traffic scene. The traffic scene data may take the form of video data collected by a camera, of a plurality of continuously collected images, or of laser point cloud data collected by a laser radar. Video data is the most suitable for extracting the trajectory information of traffic participants, so collecting traffic scene data in the form of video data is preferred.
In some embodiments, collecting traffic scene data with a drone includes: controlling the drone to hover at a fixed height, and collecting traffic scene video data from that fixed height with the camera mounted on the drone. Fig. 3 shows a layout of the unmanned aerial vehicle provided by the embodiment of the invention. The drone 310 hovers over the road 330 and captures video data of the traffic scene within the coverage of the camera 320. Controlling the drone to hover at a fixed height ensures that a given traffic target keeps a consistent size throughout the traffic scene video data collected by the camera, which reduces the difficulty of extracting behavior data of traffic participants and traffic environment data from the traffic scene data and improves the processing efficiency of the traffic scene data.
For convenience of subsequent data management, the collection time and location information are recorded together with the traffic scene data collected by the unmanned aerial vehicle.
Step 220, extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data.
In some embodiments, the extracting behavior data and traffic environment data of traffic participants from the traffic scene data includes:
and step S11, identifying traffic participants and traffic environment objects from the traffic scene video data.
A traffic target is any target that needs to be identified from the traffic scene data; traffic targets comprise traffic participants and traffic environment objects.
Traffic participants include the people, animals or objects that are moving, or about to start moving, in the real traffic scene, such as motor vehicles, non-motor vehicles, pedestrians and animals. In order to identify traffic participants from the traffic scene video data more accurately, they may be classified in a more detailed manner, for example subdividing motor vehicles into passenger cars, trucks, and so on; the present invention does not specifically limit this. The behavior data of a traffic participant reflects the state and behavior of that participant in the real traffic scene, and may comprise attribute information and trajectory information of the participant in real space.
The traffic environment is the sum of all external influences and forces acting on road traffic participants, including road conditions, traffic facilities, terrain and surface features, weather conditions, and the traffic activities of other traffic participants. The traffic environment data reflects the state of the traffic environment in the real traffic scene and may include attribute information of traffic environment objects in real space. A traffic environment object is any object constituting the traffic environment. Here, since the traffic activities of other traffic participants are already embodied in the behavior data of the traffic participants, in the embodiment of the present invention the traffic environment objects may include roads, traffic facilities, surface features and landmarks, weather conditions, and obstacles.
It should be noted that the real space is the actual physical space captured by the traffic scene video data, as opposed to the pixel space of the video itself. That is, from the traffic scene video data, the embodiment of the present invention determines the attribute information and trajectory information of a traffic participant in real space and the attribute information of a traffic environment object in real space, for example the real size of a certain traffic participant, or of a certain traffic facility, in real space.
In particular, identifying traffic participants and traffic environment objects from the traffic scene video data may be implemented with a target recognition model. The target recognition model may be a supervised, semi-supervised or unsupervised learning model, and may also be a deep learning model, such as a neural network model. Taking neural-network-based identification of traffic participants as an example: traffic participants can be labelled manually to construct a training set; the constructed training set is then used to train a target detection model; and the trained target detection model is then used to detect traffic participants in the traffic scene video data. The identification of traffic environment objects may proceed in substantially the same way as that of traffic participants. However, because traffic environment objects are usually static, they may also be identified from the traffic scene video data with a general image recognition algorithm.
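As a hedged illustration (not part of the patent), once a detection model has returned labelled detections for a frame, separating them into traffic participants and traffic environment objects can reduce to a class-label lookup. The class names and the (label, bbox) detection format below are assumptions made for this sketch:

```python
# Illustrative sketch only: class labels and detection format are assumptions,
# not defined by the patent.
PARTICIPANT_CLASSES = {"passenger_car", "truck", "non_motor_vehicle",
                       "pedestrian", "animal"}
ENVIRONMENT_CLASSES = {"road", "traffic_facility", "marking", "obstacle"}

def split_detections(detections):
    """Split one frame's detections, given as (class_label, bbox) tuples,
    into traffic participants and traffic environment objects."""
    participants, environment = [], []
    for label, bbox in detections:
        if label in PARTICIPANT_CLASSES:
            participants.append((label, bbox))
        elif label in ENVIRONMENT_CLASSES:
            environment.append((label, bbox))
    return participants, environment
```

In a real pipeline the detections would come from the trained target detection model described above, run frame by frame over the traffic scene video data.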
Step S12, according to the traffic scene video data, determining the attribute information and the track information of the traffic participants in the real space and the attribute information of the traffic environment object in the real space.
The attribute information of the traffic participant in real space may include the size, state, and so on of the traffic participant. The attribute information of the traffic environment object in real space may include road geometric information, static traffic information, and quasi-static traffic information. The road geometric information includes road geography, terrain, quality, boundaries, etc.; the static traffic information may include static traffic facilities, boundaries, identification markings, etc.; and the quasi-static traffic information may include temporary changes to the road geometric information and the static traffic information.
In some examples, the attribute information of the traffic participant in the real space includes a size of the traffic participant in the real space, and the trajectory information of the traffic participant in the real space includes a speed, an acceleration, a position and a relative position with other traffic participants or traffic environment objects of the traffic participant in the real space; the attribute information of the traffic-environment object in the real space includes a size and a position of the traffic-environment object in the real space.
In some examples, the determining attribute information and trajectory information of the traffic participant in real space and attribute information of the traffic environment object in real space from the traffic scene video data includes: determining the real physical size corresponding to a single pixel point in the traffic scene video data according to the fixed height hovering by the unmanned aerial vehicle and the parameter of the camera configured by the unmanned aerial vehicle; and determining attribute information and track information of the traffic participants in the real space and attribute information of the traffic environment object in the real space according to the attribute information and track information of the traffic participants represented by the pixel points and the attribute information of the traffic environment object represented by the pixel points in the traffic scene video data based on the real physical dimensions corresponding to the single pixel points in the traffic scene video data.
Specifically, the parameters of the camera mounted on the drone include the real-world extent of the scene captured in a video frame and the camera's other optical parameters. The real physical size corresponding to a single pixel in the traffic scene video data can be determined by combining these camera parameters with the drone's hovering height.
Further, the attribute information and trajectory information of the traffic participants and the attribute information of the traffic environment objects, as expressed in pixels in the traffic scene video data, can be determined. More specifically, the size, speed, acceleration, position, and relative position of a traffic participant with respect to other traffic participants or traffic environment objects are all represented in pixels. For example, for a traffic participant, its size in the traffic scene video data can be represented by the number n of pixels constituting it, its speed can be expressed as m pixels/s, and its acceleration can be expressed as k pixels/s². Its position can be expressed as (p, q), meaning that the participant's abscissa is the p-th pixel and its ordinate is the q-th pixel, and its relative position with respect to another traffic participant or traffic environment object can be expressed as a distance of s pixels.
Then, from the pixel-expressed attribute and trajectory information of the traffic participants and the pixel-expressed attribute information of the traffic environment objects in the traffic scene video data, combined with the real physical size corresponding to a single pixel, the attribute information and trajectory information of the traffic participants in real space and the attribute information of the traffic environment objects in real space can be calculated. Specifically, the calculation may be implemented according to the following formulas:
Area: A = n·e²

Length: l = n_x·e

Width: w = n_y·e

Velocity: v = m·e

Acceleration: a = k·e

where e is the real physical side length corresponding to a single pixel; n is the number of pixels constituting a given traffic target (traffic participant or traffic environment object) in the traffic scene video data; n_x and n_y are the numbers of the target's pixels along the x-axis and y-axis, respectively; A, l, and w are the target's area, length, and width in the traffic scene video data; and v and a are the speed and acceleration of a given traffic participant. Correspondingly, the relative position between a traffic participant and other traffic participants or traffic environment objects can also be obtained by multiplying the number of pixels by the real physical side length corresponding to a single pixel.
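The formulas above can be sketched in code. This is a minimal illustration, not the embodiment's implementation: the per-pixel side length e is derived here from the hovering height under an assumed pinhole model with a hypothetical horizontal field of view, since the patent does not specify the camera's optical model.

```python
import math

def pixel_side_length(hover_height_m: float, fov_deg: float,
                      image_width_px: int) -> float:
    """Real-world side length e (metres) covered by one pixel.

    Assumes a simple pinhole camera pointing straight down with
    horizontal field of view fov_deg (an illustrative assumption).
    """
    ground_width_m = 2.0 * hover_height_m * math.tan(math.radians(fov_deg) / 2.0)
    return ground_width_m / image_width_px

def real_quantities(e: float, n: int, n_x: int, n_y: int,
                    m: float, k: float) -> dict:
    """Apply the formulas A = n·e², l = n_x·e, w = n_y·e, v = m·e, a = k·e."""
    return {
        "area_m2": n * e ** 2,
        "length_m": n_x * e,
        "width_m": n_y * e,
        "speed_mps": m * e,
        "accel_mps2": k * e,
    }
```

For instance, with e = 0.05 m, a vehicle occupying 40 pixels along x moving at 100 pixels/s converts to a 2 m length travelling at 5 m/s.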
It should be noted that attribute information of the traffic participants and traffic environment objects that cannot be measured in terms of size can be extracted directly from the traffic scene video data, without conversion via the real physical size of a pixel. For example, the color of a traffic light or the weather conditions can be recognized by general image processing methods, an object recognition model, or manual judgment.
In some embodiments, the extracting behavior data of the traffic participant and traffic environment data from the traffic scene data further comprises: classifying the traffic scene data according to scene elements contained in the traffic scene data; and respectively extracting the behavior data and the traffic environment data of the traffic participants aiming at various traffic scene data.
In practice, the behaviors of traffic participants and the traffic environments differ greatly across different types of traffic scenes. For example, on a highway the traffic participants are primarily motor vehicles, with essentially no non-motorized vehicles, pedestrians, or animals. On urban roads, the composition of traffic participants is more complex, the traffic environment is also more complex, and the traffic environment objects include more obstacles. At an urban intersection, the traffic environment is more complex still. Therefore, the embodiment of the invention classifies the traffic scene data based on the scene elements they contain and then extracts the behavior data and traffic environment data of the traffic participants separately for each category of traffic scene data. On the one hand, the extracted behavior data and traffic environment data are then specific to each scene category and can be applied directly to construct simulation scenes of the corresponding category. On the other hand, a target recognition method can be selected to match the characteristics of each category of traffic scene data, improving the efficiency of extracting the behavior data and traffic environment data of the traffic participants.
It should be noted that the scene elements contained in the traffic scene data can be understood as the elements constituting a real traffic scene, which may include the number and types of traffic participants as well as, among the traffic environment objects, the weather conditions, lighting conditions, number and types of obstacles, road type, and the types and number of traffic facilities. Road types may further include traffic lanes, intersections, and special areas: traffic lanes may include urban roads, expressways, intercity roads, and country roads; intersections may include crossroads, T-shaped intersections, and Y-shaped intersections; and special areas may include entrances and exits.
In some examples, to simplify the classification process for traffic scene data, traffic scene data may be classified based on road type, since road type has the most significant impact on the behavior of traffic participants.
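A road-type-based classification of this kind could be sketched as follows; the scene records, their `road_type` field, and the category keys are hypothetical, chosen only to mirror the road types listed above:

```python
from collections import defaultdict

# Road-type labels grouped into the three top-level categories named in
# the text; both the labels and the grouping are illustrative assumptions.
ROAD_TYPE_CATEGORIES = {
    "urban_road": "traffic_lane",
    "expressway": "traffic_lane",
    "intercity_road": "traffic_lane",
    "country_road": "traffic_lane",
    "crossroads": "intersection",
    "t_intersection": "intersection",
    "y_intersection": "intersection",
    "entrance_exit": "special_area",
}

def classify_scenes(scenes):
    """Group scene records (dicts with a 'road_type' key) by road category."""
    groups = defaultdict(list)
    for scene in scenes:
        category = ROAD_TYPE_CATEGORIES.get(scene["road_type"], "other")
        groups[category].append(scene)
    return dict(groups)
```

Each category's scene list can then be passed to an extraction pipeline tuned for that road type.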
In some embodiments, before extracting the behavior data and traffic environment data of the traffic participants from the traffic scene data, the method further includes: selecting traffic scene data in which the traffic flow is above a preset flow threshold and/or the visibility is above a preset visibility threshold for the extraction of behavior data and traffic environment data. To extract as much natural driving data usable for the evaluation and testing of automatic driving systems as possible, traffic scene data with traffic flow above a preset flow threshold are preferably selected for subsequent analysis. Likewise, low visibility increases the difficulty and reduces the efficiency of analyzing traffic scene data, so traffic scene data with visibility above a preset visibility threshold are preferably selected. The preset flow threshold and visibility threshold may be set as required, for example a flow threshold of 50 vehicles/minute; the embodiment of the invention does not limit this.
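This pre-filtering step can be sketched as follows. The 50 vehicles/minute flow threshold comes from the example above; the visibility threshold value, its unit (metres), and the record field names are illustrative assumptions:

```python
def select_scenes(scenes, flow_threshold=50.0, visibility_threshold=500.0):
    """Keep only scenes whose traffic flow (vehicles/min) and
    visibility (metres, assumed unit) meet the preset thresholds."""
    return [
        s for s in scenes
        if s["flow_vpm"] >= flow_threshold
        and s["visibility_m"] >= visibility_threshold
    ]
```

Scenes failing either threshold are simply excluded from subsequent extraction rather than discarded from storage.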
Further, collecting the traffic scene data with the unmanned aerial vehicle includes: collecting traffic scene data with the unmanned aerial vehicle in clear daytime conditions. Collecting traffic scene data in clear daytime better ensures data integrity and improves the efficiency of subsequent analysis. It should be noted that, in this case, meteorological conditions need not be analyzed when extracting the traffic environment data from the traffic scene data, and the extracted traffic environment objects may include roads, traffic facilities, surface features, and obstacles.
In some examples, the raw traffic scene data and the extracted behavior data and traffic environment data of the traffic participants may be stored in a database. The extracted natural driving data can be used to construct automatic driving scenes, develop behavior and intention prediction models for automatic driving, imitate and learn automatic driving behavior, develop and verify automatic driving prediction and planning algorithms, and analyze and simulate human driving behavior; the data are therefore highly extensible.
In summary, traffic scene data are collected by an unmanned aerial vehicle, and the behavior data and traffic environment data of the traffic participants are then extracted from them. This makes the natural driving data acquisition process unobtrusive, which improves the authenticity and naturalness of the collected data and better protects the privacy of those being observed. Because occlusion between traffic targets is minimal in the aerial view, data integrity is also improved. In addition, since targets do not disappear from the line of sight, no additional algorithms are needed to reconstruct the data, improving its precision. Compared with other methods, this approach is also highly flexible and low in cost.
Fig. 4 shows a schematic structural diagram of a natural driving data acquisition device provided by an embodiment of the invention. As shown in fig. 4, the natural driving data collecting apparatus 400 includes: an acquisition module 410, configured to acquire traffic scene data using an unmanned aerial vehicle; and the extraction module 420 is used for extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data.
In some embodiments, the acquisition module comprises: the control submodule is used for controlling the unmanned aerial vehicle to hover at a fixed height; and the acquisition submodule is used for acquiring the traffic scene video data from the fixed height by adopting the camera configured by the unmanned aerial vehicle.
In some embodiments, the extraction module comprises: the identification submodule is used for identifying traffic participants and traffic environment objects from the traffic scene video data; the determining submodule is used for determining attribute information and track information of the traffic participants in a real space and attribute information of the traffic environment object in the real space according to the traffic scene video data; the behavior data of the traffic participant comprises attribute information and track information of the traffic participant in a real space, and the traffic environment data comprises attribute information of the traffic environment object in the real space.
In some embodiments, the determining sub-module comprises: the first determining unit is used for determining the real physical size corresponding to a single pixel point in the traffic scene video data according to the fixed height hovering by the unmanned aerial vehicle and the parameter of the camera configured by the unmanned aerial vehicle; and the second determining unit is used for determining the attribute information and the track information of the traffic participant in the real space and the attribute information of the traffic environment object in the real space according to the attribute information and the track information of the traffic participant represented by the pixel points and the attribute information of the traffic environment object represented by the pixel points in the traffic scene video data based on the real physical size corresponding to the single pixel point in the traffic scene video data.
In some embodiments, the attribute information of the traffic participant in the real space includes a size of the traffic participant in the real space, and the trajectory information of the traffic participant in the real space includes a speed, an acceleration, a position and a relative position with other traffic participants or traffic environment objects of the traffic participant in the real space; the attribute information of the traffic-environment object in the real space includes a size and a position of the traffic-environment object in the real space.
In some embodiments, the extraction module further comprises: the classification submodule is used for classifying the traffic scene data according to scene elements contained in the traffic scene data; and the extraction submodule is used for respectively extracting the behavior data and the traffic environment data of the traffic participants aiming at various traffic scene data.
In some embodiments, the apparatus further comprises: the selection module is used for selecting traffic scene data of which the traffic flow is above a preset flow threshold and/or the visibility is above a preset visibility threshold, and extracting behavior data and traffic environment data of traffic participants.
In some embodiments, the acquisition module is specifically configured to: and collecting traffic scene data by adopting an unmanned aerial vehicle in clear daytime.
Fig. 5 shows an electronic device of an embodiment of the invention. As shown in fig. 5, the electronic device 500 includes: at least one processor 510, and a memory 520 communicatively coupled to the at least one processor 510, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method.
Specifically, the memory 520 and the processor 510 are connected by a bus 530. They may be a general-purpose memory and processor, which are not specifically limited here. When the processor 510 executes a computer program stored in the memory 520, the operations and functions described in the embodiments of the present invention in conjunction with fig. 1 to 4 can be performed.
An embodiment of the present invention further provides a storage medium, on which a computer program is stored, which, when executed by a processor, implements the method. For specific implementation, reference may be made to the method embodiment, which is not described herein again.
While embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and embodiments; they are fully applicable to any field for which the embodiments are suitable, and additional modifications will readily occur to those skilled in the art. Therefore, the embodiments of the invention are not limited to the specific details and illustrations shown and described herein, without departing from the general concept defined by the claims and their equivalents.

Claims (11)

1. A natural driving data collection method, comprising:
collecting traffic scene data by using an unmanned aerial vehicle;
and extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data.
2. The nature driving data collection method of claim 1, wherein collecting traffic scene data with the drone comprises:
controlling the unmanned aerial vehicle to hover at a fixed height;
and collecting traffic scene video data from the fixed height by adopting a camera configured by the unmanned aerial vehicle.
3. The natural driving data collection method of claim 2, wherein said extracting behavior data and traffic environment data of traffic participants from said traffic scene data comprises:
identifying traffic participants and traffic environment objects from the traffic scene video data;
according to the traffic scene video data, determining attribute information and track information of the traffic participants in a real space and attribute information of the traffic environment object in the real space; the behavior data of the traffic participant comprises attribute information and track information of the traffic participant in a real space, and the traffic environment data comprises attribute information of the traffic environment object in the real space.
4. The natural driving data collection method of claim 3, wherein said determining attribute information and trajectory information of the traffic participant in real space and attribute information of the traffic environment object in real space from the traffic scene video data comprises:
determining the real physical size corresponding to a single pixel point in the traffic scene video data according to the fixed height hovering by the unmanned aerial vehicle and the parameter of the camera configured by the unmanned aerial vehicle;
and determining attribute information and track information of the traffic participants in the real space and attribute information of the traffic environment object in the real space according to the attribute information and track information of the traffic participants represented by the pixel points and the attribute information of the traffic environment object represented by the pixel points in the traffic scene video data based on the real physical dimensions corresponding to the single pixel points in the traffic scene video data.
5. The natural driving data collection method according to claim 4, wherein the attribute information of the traffic participant in the real space includes a size of the traffic participant in the real space, and the trajectory information of the traffic participant in the real space includes a speed, an acceleration, a position of the traffic participant in the real space, and a relative position with other traffic participants or traffic environment objects; the attribute information of the traffic-environment object in the real space includes a size and a position of the traffic-environment object in the real space.
6. The natural driving data collection method of claim 1, wherein said extracting behavior data of traffic participants and traffic environment data from said traffic scene data comprises:
classifying the traffic scene data according to scene elements contained in the traffic scene data;
and respectively extracting the behavior data and the traffic environment data of the traffic participants aiming at various traffic scene data.
7. The natural driving data collection method of claim 1, wherein prior to extracting the behavior data and traffic environment data of the traffic participants from the traffic scene data, the method further comprises:
and selecting traffic scene data with the traffic flow above a preset flow threshold and/or the visibility above a preset visibility threshold for extracting behavior data and traffic environment data of traffic participants.
8. The nature driving data collection method of claim 1, wherein collecting traffic scene data with the drone comprises:
and collecting traffic scene data by adopting an unmanned aerial vehicle in clear daytime.
9. A natural driving data collection device, comprising:
the acquisition module is used for acquiring traffic scene data by adopting an unmanned aerial vehicle;
and the extraction module is used for extracting the behavior data and the traffic environment data of the traffic participants from the traffic scene data.
10. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any of claims 1-8.
11. A storage medium on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1-8.
CN202011467160.9A 2020-12-14 2020-12-14 Natural driving data acquisition method and device, electronic equipment and storage medium Pending CN112634610A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011467160.9A CN112634610A (en) 2020-12-14 2020-12-14 Natural driving data acquisition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011467160.9A CN112634610A (en) 2020-12-14 2020-12-14 Natural driving data acquisition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112634610A true CN112634610A (en) 2021-04-09

Family

ID=75312895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011467160.9A Pending CN112634610A (en) 2020-12-14 2020-12-14 Natural driving data acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112634610A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022246852A1 (en) * 2021-05-28 2022-12-01 吉林大学 Automatic driving system testing method based on aerial survey data, testing system, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694367A (en) * 2017-04-07 2018-10-23 北京图森未来科技有限公司 A kind of method for building up of driving behavior model, device and system
CN110222667A (en) * 2019-06-17 2019-09-10 南京大学 A kind of open route traffic participant collecting method based on computer vision
CN111179585A (en) * 2018-11-09 2020-05-19 上海汽车集团股份有限公司 Site testing method and device for automatic driving vehicle
CN112069643A (en) * 2019-05-24 2020-12-11 北京车和家信息技术有限公司 Automatic driving simulation scene generation method and device



Similar Documents

Publication Publication Date Title
CN109657355B (en) Simulation method and system for vehicle road virtual scene
CN108694367B (en) Method, device and system for establishing driving behavior model
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN111856963B (en) Parking simulation method and device based on vehicle-mounted looking-around system
CN107576960A (en) The object detection method and system of vision radar Spatial-temporal Information Fusion
DE102017125493A1 (en) TRAFFIC SIGN RECOGNITION
Zhang et al. Roadview: A traffic scene simulator for autonomous vehicle simulation testing
CN108068817A (en) A kind of automatic lane change device and method of pilotless automobile
CN114970321A (en) Scene flow digital twinning method and system based on dynamic trajectory flow
CN111339876B (en) Method and device for identifying types of areas in scene
CN113343461A (en) Simulation method and device for automatic driving vehicle, electronic equipment and storage medium
CN113674523A (en) Traffic accident analysis method, device and equipment
CN114248819B (en) Railway intrusion foreign matter unmanned aerial vehicle detection method, device and system based on deep learning
DE112021006402T5 (en) Estimating automatic exposure values of a camera by prioritizing an object of interest based on contextual input from 3D maps
DE102023104789A1 (en) TRACKING OF MULTIPLE OBJECTS
CN113255553B (en) Sustainable learning method based on vibration information supervision
CN116457800A (en) Architecture for map change detection in an autonomous vehicle
CN112634610A (en) Natural driving data acquisition method and device, electronic equipment and storage medium
DE102021107914A1 (en) IMPROVED VEHICLE OPERATION
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
EP3786854A1 (en) Methods and systems for determining driving behavior
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment
CN114581863A (en) Vehicle dangerous state identification method and system
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
DE102020202342A1 (en) Cloud platform for automated mobility and computer-implemented method for providing cloud-based data enrichment for automated mobility

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210409