CN116186336A - Driving data acquisition and calibration method, device and storage medium - Google Patents


Info

Publication number
CN116186336A
Authority
CN
China
Prior art keywords
data
driving
driver
vehicle
driving data
Prior art date
Legal status
Pending
Application number
CN202310187644.5A
Other languages
Chinese (zh)
Inventor
王建强
王裕宁
李尚怡
李若辰
李晋豪
许庆
Current Assignee
Tsinghua University
Toyota Motor Corp
Original Assignee
Tsinghua University
Toyota Motor Corp
Priority date
Filing date
Publication date
Application filed by Tsinghua University, Toyota Motor Corp filed Critical Tsinghua University

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9027Trees

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Time Recorders, Drive Recorders, Access Control (AREA)

Abstract

The application provides a driving data acquisition and calibration method, a driving data acquisition and calibration device and a storage medium. The method comprises the following steps: collecting driving data in real time while the vehicle is in a natural driving state; aligning the collected driving data in time sequence and organizing the aligned driving data into a tree structure; and performing human-vehicle-road coupling calibration on the driving data to form calibrated driving data. By performing coupling calibration on the driving data of vehicles in the natural driving state, the method and the device form calibrated human-vehicle-road in-the-loop integrated driving data that have a unified data structure and are compatible and extensible.

Description

Driving data acquisition and calibration method, device and storage medium
Technical Field
The application relates to the technical field of intelligent vehicle applications, and in particular to a driving data acquisition and calibration method, a device and a storage medium.
Background
With the development of automatic driving technology, traditional rule-based methods can no longer meet the requirements of complex, dynamically coupled traffic scenarios, nor the standards of high-level automatic driving, while data-driven methods can still achieve good performance under complex working conditions. However, data-driven methods require a large amount of data to support training and solving, and in order to fit actual driving conditions, natural driving data acquisition has become a hot research topic for major automatic driving algorithm companies in recent years. These datasets, while each having its own advantages, share a common problem: a human-vehicle-road integrated data structure cannot be acquired. In the development of high-level automatic driving algorithms, learning driver wisdom is a quite important technical route; however, existing data contain only environment information without dynamic driver information, or only driver pose information without its correspondence to the environment, so complete human-vehicle-road in-the-loop integrated data cannot be obtained.
Disclosure of Invention
The present application has been made in view of at least one of the above-mentioned problems in the prior art. According to an aspect of the present application, there is provided a driving data acquisition and calibration method, the method including:
collecting driving data in real time while the vehicle is in a natural driving state, wherein the driving data comprise driver human factor data, environment data and vehicle control data;
aligning the collected driving data in time sequence, and organizing the aligned driving data into a tree structure; and
performing human-vehicle-road coupling calibration on the driving data to form calibrated driving data.
Collecting driving data in real time while the vehicle is in a natural driving state includes the following steps:
collecting driver human factor data in real time;
collecting environment data in real time; and
collecting vehicle control data in real time.
In some embodiments, the driver human factor data include at least one of: driver attention area data, driver physiological and electroencephalogram (EEG) signal data, and driver personal information.
In some embodiments, the environment data include at least one of: environmental image data, point cloud data, and data collected by roadside devices.
In some embodiments, the vehicle control data include at least one of: Global Navigation Satellite System positioning data, inertial integrated navigation data, and Controller Area Network underlying control information data.
In some embodiments, time-aligning the collected driving data includes:
aligning the driving data according to the timestamps of the driving data.
In some embodiments, the tree structure includes a data acquisition log, a Python toolkit, and a data body.
In some embodiments, performing human-vehicle-road coupling calibration on the driving data includes:
calibrating the interactive objects contained in the attention area; and/or
calibrating the interactive objects influencing the ego-vehicle decision; and/or
calibrating dangerous events.
In some embodiments, the method further comprises: evaluating the driver;
wherein evaluating the driver comprises:
evaluating the historical driving behavior of the driver; and/or
evaluating the real-time risk of a driving event; and/or
subjectively evaluating a driving event; and/or
comprehensively evaluating a driving event.
Another aspect of the embodiments of the present application provides a driving data acquisition and calibration device, the device including:
a memory and a processor, wherein the memory stores a computer program which, when run by the processor, causes the processor to execute the driving data acquisition and calibration method described above.
A further aspect of the embodiments of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the driving data acquisition and calibration method described above.
According to the driving data acquisition and calibration method, device and storage medium, coupling calibration is performed on the driving data of vehicles in the natural driving state, so as to form calibrated human-vehicle-road in-the-loop integrated driving data that have a unified data structure and are compatible and extensible.
Drawings
FIG. 1 shows a schematic flow chart of a driving data collection and calibration method according to an embodiment of the present application;
FIG. 2 shows a schematic diagram of a human-vehicle-road data acquisition module according to an embodiment of the present application;
FIG. 3 shows a schematic diagram of organizing driving data into a tree structure according to an embodiment of the present application;
FIG. 4 shows a schematic diagram of a driver attention heat map according to an embodiment of the present application;
FIG. 5 shows a schematic flow chart of a driving data acquisition and calibration method according to another embodiment of the present application;
fig. 6 shows a schematic block diagram of a driving data collection and calibration device according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the embodiments of the present application, the following detailed description refers to the accompanying drawings and the detailed description.
With the development of automatic driving technology, data-driven methods are gaining wide attention. Traditional rule-based methods can no longer meet the requirements of complex, dynamically coupled traffic scenarios, nor the standards of high-level automatic driving, while data-driven methods can still achieve good performance under complex working conditions.
However, data-driven methods require a large amount of data to support training and solving. In order to fit actual driving conditions, natural driving data acquisition has become a hot research topic for automatic driving algorithm companies in recent years, and numerous public natural driving datasets have emerged, each with its own characteristics.
The HighD dataset uses vehicle-road cooperation technology to record global natural driving data on expressways from a bird's-eye view, including positions, speeds, agent types and the like; the Waymo Motion dataset collects data on strongly interactive driving events on urban roads, such as overtaking and turning, acquiring three-dimensional coordinates through lidar and other equipment and further annotating additional data such as behavior labels. In addition, some datasets focus on driver data collection: for example, the Drive&Act dataset collects the driver's postures and gaze behavior while driving through an in-vehicle camera, and the CoCat dataset collects the driver's attention area through an eye tracker and other devices, so as to obtain the driver's gaze patterns over the driving environment.
The various existing datasets have their respective advantages, but share some common problems:
1) A human-vehicle-road integrated data structure cannot be acquired. In the development of high-level automatic driving algorithms, learning the intelligence of the driver is a quite important technical route; however, existing data contain only environment information without dynamic driver information, or only driver pose information without its correspondence to the environment, so complete human-vehicle-road in-the-loop integrated data cannot be obtained;
2) Lack of evaluation of driving behavior. Existing datasets often record all natural driving behaviors, which are fed uniformly into decision control models for learning; in practice, however, the behavior of a human driver is not always correct or excellent, whereas the behavior an algorithm should learn is excellent driving behavior. Natural driving behaviors therefore need to be evaluated so that a subset worth learning can be screened out;
3) Lack of a unified data structure, so that different datasets cannot be used in parallel, lacking extensibility and compatibility.
Based on at least one of the technical problems described above, the present application provides a driving data acquisition and calibration method, which includes: collecting driving data in real time while the vehicle is in a natural driving state; aligning the collected driving data in time sequence, and organizing the aligned driving data into a tree structure; and performing human-vehicle-road coupling calibration on the driving data to form calibrated driving data. By performing coupling calibration on the driving data of vehicles in the natural driving state, the method and the device form calibrated human-vehicle-road in-the-loop integrated driving data that have a unified data structure and are compatible and extensible.
FIG. 1 shows a schematic flow chart of a driving data collection and calibration method according to an embodiment of the present application; as shown in fig. 1, a driving data collecting and calibrating method 100 according to an embodiment of the present application may include the following steps S101, S102, and S103:
in step S101, driving data is collected in real time in a case where the vehicle is in a natural driving state.
In the embodiments of the present application, it is first determined whether the vehicle is in a natural driving state. The natural driving state here refers to a driving state in which no active driving assistance function or automatic driving function is turned on.
At the beginning of data collection, a data collection safety officer (e.g., the driver) and/or the collection module confirms whether the vehicle has turned on any active driving assistance or automatic driving function such as lane keeping or ACC adaptive cruise; if not, it is confirmed that the vehicle is in a natural driving state and data collection can be performed.
In one embodiment of the present application, as shown in fig. 2, the driving data are collected by a human-vehicle-road data collection module. The driving data here include driver human factor data, environment data and vehicle control data; collecting driving data in real time while the vehicle is in a natural driving state includes the following steps:
a1, acquiring driver human factor data in real time;
a2, collecting environment data in real time; and
a3, collecting vehicle control data in real time.
Wherein the driver human factor data include at least one of: driver attention area data, driver physiological and EEG signal data, and driver personal information. The process and manner of acquisition of these data are described below.
When personal information of the driver is collected, a standard personal information questionnaire can be filled in by the driver before the driving behavior starts, so that the history characteristics of the driver, such as information of driving age, past accident number, driving style and the like, can be collected.
When the driver attention area data are collected, the collection may be performed by an eye tracker. In a specific example, the attention distribution of the driver at the first-person viewing angle may be acquired by a glasses-type eye tracker, and the output data are an attention heat map, gaze area coordinates, gaze time and the like.
When the driver physiological and EEG signal data are collected, in a specific example, five channels of physiological signals can be collected through a portable all-in-one sensor: heart rate, electromyographic signals, electrodermal signals, blood oxygen level and skin temperature. Since the physiological signals reflect the driver's physiological responses while driving, the severity of the driver's reaction at a specific time or in a specific event can be captured by collecting them. The acquired physiological information takes the form of time-series curves, and a timestamp can be recorded at the same time as each physiological signal. Because the EEG signal indicates which brain region has the highest activity intensity when the driver performs a specific operation, the driver's operation behavior patterns can be extracted from it; the data form of the EEG signal is a time-series intensity sequence corresponding to each brain electrode.
Wherein the environment data include at least one of: environmental image data, point cloud data, and data collected by roadside equipment.
The environment data in the embodiments of the present application refer to data describing the real road information around the vehicle, mainly including the basic road information around the vehicle as well as obstacles and other traffic participants, such as pedestrians, lane lines and road edges.
In the embodiments of the present application, a plurality of sensing devices such as cameras, lidars, millimeter-wave radars and roadside devices are used to collect the environment data, jointly gathering multidimensional, complete driving environment data. The working process and principle of each sensing device are described below.
When a lidar is used to collect environment data, the lidar can be mounted on the vehicle roof and is mainly used to collect information about obstacles around the vehicle. The lidar outputs point cloud data which, after processing, can effectively yield target-level obstacle information, thereby providing effective information support for subsequent vehicle decision behaviors.
When cameras are used to collect environment data, the cameras can be mounted on top of the vehicle, and a plurality of cameras can be provided. To facilitate omnidirectional collection of environment data, the cameras can face different directions. The images collected by the cameras in different directions can be combined in groups to form multi-channel image information. The environment data collected by the cameras are image data, from which important traffic elements such as traffic participants, traffic signs and traffic lights can be extracted using neural network algorithms and the like, providing information support for subsequent vehicle decision behaviors.
When millimeter-wave radars are used to collect environment data, they may be mounted on the middle-lower portion of the vehicle, and a plurality of them may be provided. To facilitate omnidirectional acquisition of environment data, the millimeter-wave radars may be distributed around the vehicle. Their main function is to sense obstacle information at short range, and their sensing range is far smaller than that of cameras and lidars. The data collected by millimeter-wave radar are typically used for low-speed scenarios, such as automatic parking; they can of course also be used for complex traffic scenarios, especially for vehicle decision problems in congested traffic.
When roadside devices are used to collect environment data, they may be arranged along the road. Roadside devices are sensing devices that can acquire basic road information and traffic participant information from different angles and transmit the acquired data to the vehicle by wireless communication. Roadside equipment offers many data types, high flexibility and rich formats, and can effectively mitigate problems such as the blind spots that on-board sensing equipment suffers due to its viewing angle.
Wherein the vehicle control data includes at least one of: global navigation satellite system (Global Navigation Satellite System, GNSS) positioning data, inertial integrated navigation data, and controller area network (Controller Area Network, CAN) underlying control information data.
The vehicle control data collected in the embodiments of the present application are various data of the vehicle itself, mainly used as reference ground truth in subsequent decision control model training. The vehicle control data can include real-time geographic position information of the vehicle, obtained by combining a GNSS positioning system with the inertial navigation of an inertial measurement unit (Inertial Measurement Unit, IMU), so as to output time-series global three-dimensional coordinates of the ego vehicle with centimeter-level accuracy.
The vehicle control data may also include vehicle underlying control information, including steering wheel angle, accelerator opening, brake opening, gear information, engine speed, wheel speed, turn signals and the like. These messages are carried on the vehicle's CAN control bus and can therefore be obtained by directly reading CAN bus messages. The data format of these vehicle control data is time-series data.
In step S102, the collected driving data are aligned according to time sequence, and the aligned driving data are organized into a tree structure.
In the embodiment of the application, after the collected driving data is stored in the database, multiple types of data may exist, and alignment and organization of the driving data are required.
In one embodiment of the present application, time-aligning the collected driving data includes: and carrying out alignment processing on the driving data according to the time stamp of the driving data.
Specifically, since the timestamp accuracy and frequency of the acquired data differ from device to device, down-sampling is first required, for example reducing the sampling frequency to 20 Hz. In a specific example, the timestamps of all driving data are binned at intervals of 0.05 seconds; if a stream's sampling frequency is higher, the mean of all its values within each 0.05-second interval is taken as the representative value for that interval. The driving data are arranged in time order with 0.05 seconds as one frame, and alignment of the driving data then begins.
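The down-sampling and alignment step described above can be sketched as follows (a minimal illustration, not the actual Python toolkit of the dataset; the `(timestamp, value)` stream layout and function name are assumptions):

```python
from collections import defaultdict

FRAME = 0.05  # seconds per frame, i.e. a 20 Hz target rate

def align_streams(streams):
    """Bin every (timestamp, value) stream into 0.05-second frames;
    when a stream samples faster than 20 Hz, the mean of all values
    falling into a frame is taken as that frame's representative value."""
    frames = defaultdict(dict)
    for name, samples in streams.items():
        binned = defaultdict(list)
        for t, value in samples:
            binned[round(t / FRAME)].append(value)  # nearest 0.05 s frame
        for idx, values in binned.items():
            frames[idx][name] = sum(values) / len(values)
    # emit frames in time order; streams absent from a frame stay absent
    return [frames[i] for i in sorted(frames)]
```

In this way, for example, a 100 Hz physiological stream and a 10 Hz GNSS stream end up on the same 20 Hz frame grid and can be compared frame by frame.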
The driving data are then organized into the tree structure shown in fig. 3, for subsequent inspection and retrieval by users of the driving data. The first level of the tree structure includes a data acquisition log, a Python toolkit and a data body. The tree structure is described as follows:
(1) The data acquisition log records the following information: key information of the vehicle description, including necessary parameters such as the vehicle's dimensions and drivetrain type; the acquisition conditions, i.e., the spatio-temporal information at the time of data acquisition, including the acquisition location, weather conditions, driving area and the like; and the sensor description file, containing supplementary information such as sensor type, manufacturer, model and basic parameters.
(2) The Python toolkit stores basic preprocessing methods, implementing functions such as data alignment and data calibration. In addition, the collected driving dataset is provided with a visualization tool that can screen related data by specific time or object information and visualize images or point clouds.
(3) The data body is divided by data type into a JPEG image part, a PCD point cloud part, a JSON time-series index part and a JSON target information storage part.
For the JPEG image part, images may be stored separately according to the camera that collected them; for example, the images collected by the front camera, the driver camera, the left camera and the right camera are each stored as one branch, the corresponding JPEG files are stored in each branch, and each file is named with a simplified timestamp of its capture time.
For the PCD point cloud part, each point cloud file can likewise be named with a simplified timestamp, and each point in a point cloud file stores characterization information such as x, y, z and reflectivity.
For the JSON time-series index part, each JSON file is named with a simplified timestamp, and each file stores the vehicle state, driver state, camera data and radar data under the corresponding timestamp. The recorded vehicle state includes the current accelerator opening, brake, speed, GNSS position information and the like; the driver state includes signal description sequences such as the driver's electrodermal and electrocardiographic signals at the current moment, as well as human pose joint-point description information. In addition, for the image data collected by the cameras, the file-name index of the corresponding image is stored per camera. For example, attention heat map information is additionally stored in matrix form for the driver camera, and object detection information in the image is additionally stored for the environment cameras. As another example, the radar data store the PCD point cloud file index, the detected object information, and the total number of objects detected in the point cloud image.
For the JSON target information storage part, each JSON file is stored separately per detection target, and each detection target is assigned a universally unique identifier (UUID) to ensure the validity of the index. Each target information storage file is first associated with the sensors that perceived the target, for example storing the left camera and lidar serial numbers, and records, per sensor and in time order, the relative position, type and parameters of the target object.
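The per-frame index entry and per-target record described above can be sketched as follows (a minimal illustration; the field names such as `vehicle`, `driver` and `cameras` are assumptions, not the dataset's actual schema):

```python
import json
import uuid

def make_frame_index(timestamp, vehicle_state, driver_state, image_files, pcd_file):
    """Build one JSON time-series index entry for a single 0.05 s frame,
    referencing the JPEG and PCD files stored in the other branches."""
    return {
        "timestamp": timestamp,       # simplified timestamp, also used in file names
        "vehicle": vehicle_state,     # accelerator, brake, speed, GNSS position ...
        "driver": driver_state,       # electrodermal / ECG sequences, pose joints ...
        "cameras": image_files,       # camera name -> JPEG file name in that branch
        "lidar": {"pcd": pcd_file},   # point cloud file index for this frame
    }

def make_target_record(sensor_ids):
    """Create a per-detection-target record keyed by a UUID so indices stay valid."""
    return {"uuid": str(uuid.uuid4()), "sensors": sensor_ids, "track": []}

frame = make_frame_index(
    1677650000.05,
    {"accelerator": 0.12, "brake": 0.0, "speed": 8.3},
    {"eda": [1.1, 1.2], "ecg": [0.4, 0.5]},
    {"front": "1677650000_05.jpg", "driver": "1677650000_05_drv.jpg"},
    "1677650000_05.pcd",
)
print(json.dumps(frame, indent=2))
```

One frame thus points at all sensor files for its timestamp, while targets live in their own UUID-keyed files that several frames can reference.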
In the embodiments of the present application, data alignment gives the data in the dataset a unified data structure, so that different datasets can be used in parallel, improving extensibility and compatibility.
In step S103, human-vehicle-road coupling calibration is performed on the driving data to form the calibrated driving data.
In the embodiments of the present application, when training and solving an automatic driving decision model, various driving data often need to be input as features to improve decision performance; therefore, coupling calibration needs to be performed on the driver human factor data, the environment data and the vehicle control data.
In one embodiment of the present application, performing human-vehicle-road coupling calibration on the driving data includes the following steps:
B1, calibrating the interactive objects contained in the attention area; and/or
B2, calibrating the interactive objects influencing the ego-vehicle decision; and/or
B3, calibrating dangerous events.
In step B1, the interactive objects contained in the attention area are calibrated. Because the eye tracker collects data from a first-person perspective and outputs an attention heat map, it cannot by itself determine exactly which object in the environment the driver is interacting with, so coupling calibration is required. Specifically, FIG. 4 schematically shows a driver attention heat map. Combined with the driving environment data, object perception and recognition are performed with the first-person camera's image record as input, yielding the relative coordinates of each object in the driver's field of view with respect to the view center, and thus the actual position of each object. Hot-spot clustering is then performed on the attention heat map, dividing its important attention areas into discrete peak blocks and obtaining each block's geometric center. Finally, the actual relative coordinates of an interactive object are compared with the geometric center of an attention peak block; if the error is smaller than 5% of the width of the field of view, the match is deemed successful, indicating that the driver is focusing on that object at that moment, such as the traffic lights, pedestrians, traffic signs and other vehicles in the figure.
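The peak-to-object matching rule above can be sketched as follows (a minimal illustration; the 5% threshold comes from the text, while the assumption that object positions and peak centers share one view-centered coordinate frame, and the function name, are illustrative):

```python
def match_attention(objects, peak_centres, fov_width):
    """objects: {name: (x, y)} positions relative to the view center;
    peak_centres: geometric centers of the clustered attention-heat peaks.
    An object is matched when its distance to some peak center is below
    5% of the field-of-view width."""
    threshold = 0.05 * fov_width
    attended = []
    for name, (ox, oy) in objects.items():
        for px, py in peak_centres:
            if ((ox - px) ** 2 + (oy - py) ** 2) ** 0.5 < threshold:
                attended.append(name)  # driver is focusing on this object
                break
    return attended
```

A distance-based match keeps the rule symmetric in x and y; a per-axis check against 5% of the width would be an equally plausible reading of the text.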
In step B2, the interactive objects influencing the ego-vehicle decision are calibrated. When making actual decisions in traffic scenarios with complex, coupled interaction dynamics, it is necessary to know which object has the greatest influence on the ego decision. In a specific example of the present application, a control interference algorithm may be used to screen decision interaction objects, as follows:
taking an image u acquired by a look-around camera as an input, the image u can be used as a two-dimensional matrix, each pixel point consists of three channels of RGB, and the x (u) is expressed as follows:
x(u)=[r,g,b] (1)
wherein R, G, B represents the color brightness of the three red, green and blue channels.
An interference pixel μ is defined as a fixed value used to cover or weaken the original RGB values so as to blur the corresponding position of the image; it is expressed as:
μ = [μ0, μ0, μ0] (2)
where μ0 represents the value assigned to each of the R, G and B channels.
Defining m(u) as the distribution of interference pixels, i.e., how many interference pixels there are on the image and where they are located, the image Φ(u) after applying all interference pixels can be expressed as:
Φ(u) = m(u)x(u) + (1 - m(u))μ (3)
where m(u) represents the distribution of interference pixels, μ the interference pixel, and x(u) the original pixel.
The perturbed image is then input to the decision evaluation algorithm to obtain a decision probability distribution D(Φ); inputting the original image u yields the distribution D0. The decision change caused by applying the interference can then be calculated:
f_c(Φ) = -|D(Φ) - D0| (4)
where Φ represents the image after all interference pixels are applied.
The distribution m(u) that produces the largest decision change is the optimal distribution; the area it covers can be regarded as the key decision area, and the traffic objects contained in that area are marked as key decision interaction objects.
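The perturbation of Eqs. (1)-(4) can be sketched as below. Here `decide` stands in for the patent's unspecified decision evaluation algorithm, and all names and defaults (e.g., μ0 = 128) are illustrative assumptions:

```python
import numpy as np

def perturb(x, mask, mu0=128):
    """Apply interference pixels per Eq. (3): phi = m*x + (1 - m)*mu.

    x:    H x W x 3 image array.
    mask: H x W array with 1 = keep original pixel, 0 = interfere.
    mu0:  assumed fixed interference value for all three channels.
    """
    mu = np.full_like(x, mu0, dtype=float)
    m = mask[..., None].astype(float)  # broadcast mask over RGB channels
    return m * x + (1.0 - m) * mu

def decision_change(decide, x, mask):
    """Decision change per Eq. (4): f_c = -|D(phi) - D0|.

    `decide` is a placeholder returning a decision probability
    distribution; more negative output means a larger decision change.
    """
    d0 = decide(x)
    d = decide(perturb(x, mask))
    return -np.abs(d - d0).sum()
```

Screening then amounts to searching over masks m(u) for the one that makes f_c most negative; the region that mask blurs is the key decision area.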
It should be noted that the embodiments of the present application are not limited to the control interference algorithm, and other embodiments may also use other algorithms to screen the interactive objects, which are all within the protection scope of the present application.
In step B3, dangerous events need to be calibrated. After the physiological signals are collected, the peak times of the signals are screened: if three of the five types of physiological signals simultaneously show an abnormally high peak, the corresponding driving environment is located in reverse via the timestamp, and the driving event is segmented out and marked as high risk (for example, a window of 10 seconds before and after the dangerous event). All marked high-risk events together can serve as a strong-interaction natural driving data set for training and solving decision models for dangerous scenarios.
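The peak-screening rule (three of the five signals simultaneously abnormal, a window of 10 seconds before and after) can be sketched as below; the signal layout and threshold representation are assumptions, since the patent does not specify them:

```python
import numpy as np

def mark_high_risk(signals, thresholds, timestamps, window=10.0):
    """Mark high-risk windows where >= 3 of the 5 physiological signals
    simultaneously exceed their abnormal-peak thresholds.

    signals:    5 x N array of physiological traces sampled at `timestamps`.
    thresholds: per-signal abnormal-peak threshold (length 5).
    Returns a list of (start, end) time windows of +/- `window` seconds,
    which can then be used to locate the driving environment in reverse.
    """
    # For each sample, count how many signals exceed their threshold
    exceed = (signals > thresholds[:, None]).sum(axis=0) >= 3
    return [(timestamps[i] - window, timestamps[i] + window)
            for i in np.flatnonzero(exceed)]
```

Overlapping windows from consecutive samples would in practice be merged into a single event before behavior segmentation.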
In yet another embodiment of the present application, the method further comprises: the driver is evaluated.
In one example, the evaluation of the driver includes the steps of:
c1, evaluating the historical driving behavior of a driver; and/or
C2, evaluating real-time risks of driving events; and/or
C3, subjective evaluation is carried out on the driving event; and/or
And C4, comprehensively evaluating the driving event.
In the conventional technology, evaluation of driving behavior is often lacking when driving data are collected and calibrated. However, not all natural driving data are excellent; human drivers also perform irrational and incorrect driving operations, so driving behavior evaluation is important for improving decision and control performance. Whether driving behavior is excellent can be evaluated along three dimensions.
First, in step C1, the historical driving behavior of the driver is evaluated. For example, driver history questionnaires are collected to screen excellent drivers: all driving behaviors of drivers with more than two accidents in the past five years, or with no more than five years of driving experience, are marked as "normal"; otherwise they are evaluated as "good".
In step C2, a real-time risk assessment of the driving event is performed. After each driving event is segmented, the high risk times in each event can be calculated, and the specific calculation method is as follows:
The expected time to collision (TTC) of the host vehicle with another agent is defined as follows:
TTC = Δd / Δv (5)
where Δd represents the relative distance between the ego vehicle and the other agent, and Δv represents their relative speed along the connecting direction. TTC is a common risk metric that measures whether an object is at risk of colliding with the ego vehicle.
After the categories and locations of all agents in the current event are identified, the speed of each agent is calculated by differencing its position over time, and its TTC is then calculated. If the TTC between an agent and the ego vehicle is less than 1 second, one "high risk" instance is marked.
For example, any driving event marked with one or more "high risk" instances is defined as high-risk driving behavior. If a driving event contains no high-risk object, its corresponding driving operation is defined as excellent driving.
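A minimal sketch of the TTC computation of Eq. (5) and the high-risk counting rule (TTC < 1 s); the function names and sign conventions are illustrative assumptions:

```python
def ttc(delta_d, delta_v):
    """Expected time to collision per Eq. (5): TTC = delta_d / delta_v.

    delta_d: relative distance to the other agent (m).
    delta_v: closing speed along the connecting direction
             (m/s, positive = approaching; assumed convention).
    Returns infinity when the agent is not approaching.
    """
    if delta_v <= 0:
        return float("inf")
    return delta_d / delta_v

def count_high_risk(agents, threshold=1.0):
    """Count agents whose TTC with the ego vehicle is below `threshold` seconds."""
    return sum(1 for d, v in agents if ttc(d, v) < threshold)
```

An event with `count_high_risk(...) >= 1` would be labeled high-risk driving behavior; an event with zero counts corresponds to excellent driving under the rule above.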
In step C3, subjective evaluation is performed on the driving events. For example, for each driver participating in natural driving data collection, 10 volunteers are invited to watch 20 minutes of that driver's driving behavior videos and then score the driving along three dimensions: comfort C, efficiency E, and risk R. Each item is scored out of 10 points with a granularity of 1 point. The comprehensive driving score D of an individual driver is calculated as follows:
D=C+E+R (6)
where D represents the driver's comprehensive driving score; C represents comfort; E represents efficiency; R represents risk.
After the scores of all 10 volunteers are obtained, the highest and lowest scores are removed and the remaining 8 are averaged. If D is greater than or equal to 25 points and the mean square error is less than 10, the driver is defined as an "excellent driver"; if D is greater than or equal to 25 points and the mean square error is greater than 10, the data collection personnel individually judge whether the driver is excellent; if D is less than 25 points but greater than or equal to 20 points, the driver is marked as an "ordinary driver"; if D is less than 20 points, the behavior is marked as "negative driving behavior".
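The trimmed-mean aggregation and thresholds above can be sketched as follows. Interpreting the patent's "mean square error" as the variance of the trimmed scores is an assumption, as is the function name:

```python
import statistics

def classify_driver(scores):
    """Aggregate 10 volunteer totals D = C + E + R per Eq. (6).

    Drops the highest and lowest score, averages the remaining 8,
    and classifies per the thresholds in the text. `scores` is a
    list of 10 per-volunteer totals (max 30 each).
    """
    trimmed = sorted(scores)[1:-1]           # remove one highest, one lowest
    mean_d = statistics.mean(trimmed)
    mse = statistics.pvariance(trimmed)      # assumed reading of "mean square error"
    if mean_d >= 25:
        return "excellent" if mse < 10 else "manual review"
    if mean_d >= 20:
        return "ordinary"
    return "negative"
```

The "manual review" branch corresponds to the case where the mean is high but the volunteers disagree strongly, which the patent routes to the data collection personnel.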
In step C4, a comprehensive evaluation is made of the driving event. After the evaluations of steps C1-C3 are obtained, a comprehensive evaluation of the driving event may be performed. For example, if a driving behavior is produced by an "excellent driver", the event contains no "high risk", and the historical driving is evaluated as "good", the driving event is marked as "excellent driving behavior" and recorded in the database.
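The combination rule of step C4 reduces to a simple conjunction of the three labels; a sketch, with the label strings as assumptions:

```python
def is_excellent_driving(driver_rating, history_rating, high_risk_count):
    """Comprehensive evaluation (step C4): an event is "excellent driving
    behavior" when produced by an excellent driver (C3), with no high
    risk in the event (C2), and a "good" historical evaluation (C1)."""
    return (driver_rating == "excellent"
            and high_risk_count == 0
            and history_rating == "good")
```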
Conventional driving data sets often record all natural driving behaviors. Although such data can be fed into a decision-control model for unified learning, in practice the driver's behavior is not always correct or excellent, while the behavior the decision-control model should learn is excellent driving behavior. Natural driving behavior therefore needs to be evaluated to screen a subset worth learning from; the embodiments of the present application mark excellent driving behavior and can provide a reference for decision-control model learning.
Fig. 5 is a schematic flow chart of a driving data acquisition and calibration method according to another embodiment of the present application. The driving data collection and calibration method 500 according to an embodiment of the present application may include the following steps S501, S502, S503, S504, S505, and S506.
in step S501, it is determined whether or not the vehicle is in a natural driving state, and if so, step S502 is executed; otherwise, the process returns to step S501.
In step S502, data are collected by three modules. Before collection, the embodiments of the present application first determine whether the vehicle is in a natural driving state. A natural driving state here refers to a driving state in which no active driving assistance function or automatic driving function is turned on. For example, the driver and/or the acquisition module confirms whether the vehicle has enabled any active driving assistance function such as lane keeping or ACC adaptive cruise, or any automatic driving function.
Step S502 includes steps S5021, S5022 and S5023.
In step S5021, the driver human-factor module collects driver human-factor data, which include at least one of: driver attention area data, driver physiological and brain electrical signal data, and driver personal information.
In step S5022, the driving environment module collects environment data, which include at least one of: environmental image data, point cloud data, and data collected by roadside equipment.
In step S5023, the vehicle control module collects vehicle control data.
In step S503, the data are aligned and organized. In the embodiments of the present application, multiple types of data exist after the collected driving data are stored in the database, so the driving data need to be aligned and organized. For example, the driving data may be organized in a tree structure for direct use in subsequent vehicle decision processes.
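One possible way to align the multi-rate streams by timestamp before organizing them; the nearest-neighbor strategy, tolerance, and data layout are illustrative assumptions, as the patent does not specify the alignment algorithm:

```python
import bisect

def align_streams(streams, ref_times, tol=0.05):
    """Align multi-rate data streams to a reference timeline.

    streams:   dict name -> time-sorted list of (timestamp, value) samples
               (e.g., camera frames, EEG samples, CAN messages).
    ref_times: reference timestamps, e.g., the vehicle-control clock.
    Returns one frame dict per reference time; a stream contributes None
    when its nearest sample is farther than `tol` seconds away.
    """
    aligned = []
    for t in ref_times:
        frame = {"t": t}
        for name, samples in streams.items():
            ts = [s[0] for s in samples]
            i = bisect.bisect_left(ts, t)
            # candidate neighbors on either side of the insertion point
            cands = [j for j in (i - 1, i) if 0 <= j < len(ts)]
            j = min(cands, key=lambda k: abs(ts[k] - t))
            frame[name] = samples[j][1] if abs(ts[j] - t) <= tol else None
        aligned.append(frame)
    return aligned
```

The resulting list of per-timestamp frames can then be grouped into the tree structure (data acquisition log, toolkit, data ontology) described earlier.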
In step S504, data coupling calibration is performed. In the embodiments of the present application, when an automatic driving decision model is trained and solved, multiple types of driving data are often required as input features to improve decision performance; the driver human-factor data, environment data, and vehicle control data therefore need to be coupling-calibrated to obtain complete human-vehicle-road in-the-loop integrated data.
In step S505, driving behavior is evaluated. By evaluating whether driving behavior is excellent, the embodiments of the present application can improve decision-making and control performance.
In step S506, it is determined whether acquisition has ended; if so, collection ends; otherwise, the process returns to step S501.
According to the embodiments of the present application, the driving data of a vehicle in the natural driving state are coupling-calibrated to form calibrated human-vehicle-road in-the-loop integrated data, so that the driving data have a unified data structure and are compatible and extensible.
The driving data collection and calibration device of the present application will be described below with reference to fig. 6, wherein fig. 6 shows a schematic block diagram of the driving data collection and calibration device according to an embodiment of the present application.
As shown in fig. 6, the driving data collection and calibration device 600 includes: one or more memories 601 and one or more processors 602, said memories 601 having stored thereon a computer program to be run by said processors 602, which when run by said processors 602, causes said processors 602 to perform the driving data collection and calibration method as described hereinbefore.
The driving data collection and calibration device 600 may be part or all of a computer device that may implement the driving data collection and calibration method through software, hardware, or a combination of software and hardware.
As shown in fig. 6, the driving data collection and calibration device 600 includes one or more memories 601, one or more processors 602, a display (not shown), and a communication interface, among others, interconnected by a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and configuration of the driving data collection and calibration device 600 shown in fig. 6 are merely exemplary and not limiting, and that the driving data collection and calibration device 600 may have other components and configurations as desired.
The memory 601 is used to store various data and executable program instructions generated during operation of the associated programs, such as algorithms for various applications or for performing various specific functions. It may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like.
The processor 602 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, and may control other components in the driving data acquisition and calibration device 600 to perform desired functions.
In one example, the driving data collection and calibration device 600 further includes an output device that may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display device, a speaker, and the like.
The communication interface may use any currently known communication protocol, such as a wired or wireless interface, and may include one or more serial ports, USB interfaces, Ethernet ports, WiFi, wired network, DVI interfaces, device integration interconnect modules, or other suitable ports, interfaces, or connections.
Furthermore, according to an embodiment of the present application, there is also provided a storage medium on which program instructions are stored, which program instructions, when executed by a computer or a processor, are adapted to carry out the respective steps of the driving data collection and calibration method of the embodiments of the present application. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media.
Since the driving data acquisition and calibration device and the storage medium can implement the driving data acquisition and calibration method described above, they provide the same advantages as that method.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the application and aid in understanding one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. However, the method of this application should not be construed to reflect the following intent: i.e., the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
The foregoing is merely illustrative of specific embodiments of the present application and the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A driving data acquisition and calibration method, the method comprising:
under the condition that the vehicle is in a natural driving state, driving data are collected in real time; wherein the driving data comprises driver human factor data, environment data and vehicle control data;
aligning the collected driving data according to time sequence, and organizing the aligned driving data into a tree structure;
performing human-vehicle-road coupling calibration on the driving data to form calibrated driving data;
under the condition that the vehicle is in a natural driving state, driving data are collected in real time, and the method comprises the following steps:
collecting driver human factor data in real time;
collecting environmental data in real time; and
vehicle control data is collected in real time.
2. The method of claim 1, wherein the driver-human data comprises at least one of: driver attention area data, driver physiological and brain electrical signal data, and driver personal information.
3. The method of claim 1, wherein the environmental data comprises at least one of: environmental image data, point cloud data, and data collected by roadside equipment.
4. The method of claim 1, wherein the vehicle control data comprises at least one of: global navigation satellite system positioning data, inertial integrated navigation data and controller area network bottom layer control information data.
5. The method of claim 1, wherein time-aligning the collected driving data comprises:
and carrying out alignment processing on the driving data according to the time stamp of the driving data.
6. The method of claim 1, wherein the tree structure comprises a data acquisition log, a Python toolkit, and a data ontology.
7. The method of claim 1, wherein performing human-vehicle-road coupling calibration on the driving data comprises:
calibrating the interactive objects contained in the attention area; and/or
Calibrating the interactive object influencing the self-vehicle decision; and/or
And calibrating the dangerous event.
8. The method according to claim 1, wherein the method further comprises: evaluating the driver;
wherein the evaluating the driver comprises:
evaluating the historical driving behavior of the driver; and/or
Evaluating real-time risk of driving event; and/or
Subjective evaluation is carried out on driving events; and/or
And comprehensively evaluating the driving event.
9. A driving data acquisition and calibration device, characterized in that it comprises:
a memory and a processor, the memory having stored thereon a computer program to be run by the processor, which when run by the processor causes the processor to perform the driving data collection and calibration method according to any one of claims 1 to 8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the driving data collection and calibration method according to any one of claims 1 to 8.
CN202310187644.5A 2023-03-01 2023-03-01 Driving data acquisition and calibration method, device and storage medium Pending CN116186336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310187644.5A CN116186336A (en) 2023-03-01 2023-03-01 Driving data acquisition and calibration method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116186336A true CN116186336A (en) 2023-05-30

Family

ID=86440214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310187644.5A Pending CN116186336A (en) 2023-03-01 2023-03-01 Driving data acquisition and calibration method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116186336A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106428000A (en) * 2016-09-07 2017-02-22 清华大学 Vehicle speed control device and method
US20190156668A1 (en) * 2016-08-11 2019-05-23 Jiangsu University Driving service active sensing system and method in internet of vehicles environment
CN109816811A (en) * 2018-10-31 2019-05-28 杭州云动智能汽车技术有限公司 A kind of nature driving data acquisition device
CN111047047A (en) * 2019-11-15 2020-04-21 奇点汽车研发中心有限公司 Driving model training method and device, electronic equipment and computer storage medium
CN210574293U (en) * 2019-08-01 2020-05-19 清华大学苏州汽车研究院(相城) Intelligent network connection-based unstable vehicle speed prevention early warning system for vehicles on ramp road section
CN115140060A (en) * 2022-07-29 2022-10-04 中汽创智科技有限公司 Data processing method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination