CN114852097A - Data processing method and device and vehicle

Info

Publication number
CN114852097A
CN114852097A
Authority
CN
China
Prior art keywords
scene, driving, data, time, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210521753.1A
Other languages
Chinese (zh)
Inventor
陈志新
尚秉旭
张勇
王洪峰
刘洋
金百鑫
何柳
张中举
许朝文
Current Assignee
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202210521753.1A
Publication of CN114852097A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks
    • B60W2050/0001: Details of the control system
    • B60W2050/0002: Automatic control, details of type of controller or control system architecture
    • B60W2050/0004: In digital systems, e.g. discrete-time systems involving sampling
    • B60W2050/0005: Processor details or data handling, e.g. memory registers or chip architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a data processing method, a data processing device, and a vehicle. The method comprises the following steps: while a driver drives an autonomous vehicle, acquiring first driving data generated by the autonomous vehicle and second driving data generated by an automatic driving system, wherein the automatic driving system is arranged on the autonomous vehicle; determining a current driving scenario of the autonomous vehicle in response to the first driving data and the second driving data being different; and acquiring scene data corresponding to the current driving scenario to obtain target scene data. The invention solves the technical problems of data redundancy and low efficiency in collecting scene data for automatic driving in the related art.

Description

Data processing method and device and vehicle
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a data processing method, a data processing device, and a vehicle.
Background
An autonomous vehicle can acquire road traffic environment data, vehicle running state data, driver control behaviors, and other data by means of its environment sensing system. To screen automatic-driving demand scenarios out of a large amount of scene data, the automatic driving system can be run in the manual driving state to output virtual control instructions; the output instructions are compared with the driver's behavior, and when the two are inconsistent, the scene is judged to be a valid scene.
However, in the related art, the types of differences between the driver's behavior and the automatic driving system are poorly distinguished, and the difference data are processed with low accuracy, so the difference target scene data acquired by the automatic driving system suffer from low validity and redundancy.
In view of the above problems, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a data processing method, a data processing device, and a vehicle, so as to at least solve the technical problems, in the related art, of low validity and redundancy of the difference target scene data acquired by an automatic driving system.
According to an aspect of an embodiment of the present invention, there is provided a data processing method, including: while a driver drives an autonomous vehicle, acquiring first driving data generated by the autonomous vehicle and second driving data generated by an automatic driving system, wherein the automatic driving system is arranged on the autonomous vehicle; determining a current driving scenario of the autonomous vehicle in response to the first driving data and the second driving data being different; and acquiring scene data corresponding to the current driving scenario to obtain target scene data.
Optionally, determining a current driving scenario of the autonomous vehicle comprises: acquiring the driving direction of the automatic driving vehicle; determining that the current driving scene is a first preset scene or a second preset scene based on the first driving data and the second driving data in response to the driving direction being a lateral direction; and determining that the current driving scene is a third preset scene or a fourth preset scene based on the vehicle speed and the vehicle acceleration in the historical time period in response to the driving direction being the longitudinal direction.
Optionally, determining that the current driving scene is a first preset scene or a second preset scene includes: determining a first sending time of a first lane changing instruction in the first driving data and a second sending time of a second lane changing instruction in the second driving data; acquiring the time difference between the first sending time and the second sending time; and determining that the current driving scene is a first preset scene in response to the time difference being larger than the first preset time difference.
Optionally, obtaining scene data corresponding to the current driving scene to obtain target scene data includes: determining a lane change time of the autonomous vehicle based on the first sending time and the second sending time; determining a target time period corresponding to the target scene data based on the lane change time; and acquiring scene data in the target time period to obtain the target scene data.
Optionally, determining a target time period corresponding to the target scene data based on the lane change time includes: in response to the time difference being larger than the first preset time difference and smaller than the second preset time difference, determining that the starting time of the target time period is the difference value between the lane changing time and the first preset value, and the ending time of the target time period is the sum value of the lane changing time and the time difference; when the time difference is larger than a second preset time difference, determining that the starting time of the target time period is the difference value between the lane changing time and the first preset value, and the ending time of the target time period is the sum value of the lane changing time and the second preset time difference; and the second preset time difference is greater than the first preset time difference.
Optionally, in response to the time difference being less than or equal to the first preset time difference, the method further comprises: acquiring a first driving track in the first driving data and a second driving track in the second driving data; acquiring the distance between the first running track and the second running track; and determining that the current driving scene is a second preset scene in response to the distance being greater than the preset distance.
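As an illustration only, the lateral trajectory check described above might be sketched as follows in Python. The patent does not specify how the distance between the two tracks is measured, so the maximum pointwise Euclidean gap between time-aligned tracks, together with every function name used here, is an assumption:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) position sample along a track

def max_trajectory_gap(first_track: List[Point], second_track: List[Point]) -> float:
    """Largest pointwise Euclidean distance between two time-aligned trajectories."""
    return max(math.dist(p, q) for p, q in zip(first_track, second_track))

def is_lateral_trajectory_difference(first_track: List[Point],
                                     second_track: List[Point],
                                     preset_distance: float) -> bool:
    """Second preset scene: the driver's track and the simulated track diverge
    by more than the preset distance."""
    return max_trajectory_gap(first_track, second_track) > preset_distance
```

In this sketch the two tracks are assumed to be sampled at the same instants; a real implementation would have to resample or interpolate them first.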
Optionally, obtaining scene data corresponding to the current driving scene to obtain target scene data includes: determining a track starting time based on the first running track and the second running track; and acquiring scene data of the track starting moment to obtain target scene data.
Optionally, determining that the current driving scene is a third preset scene or a fourth preset scene includes: acquiring a first maximum value and a first average value of the vehicle speed, and a second maximum value and a second average value of the vehicle acceleration; determining that the current driving scene is a third preset scene in response to the first maximum value and the second maximum value being within a first preset range and the first average value and the second average value being within a second preset range; and determining that the current driving scene is a fourth preset scene in response to the first maximum value not being within a first preset range, or the second maximum value not being within the first preset range, or the first average value not being within a second preset range, or the second average value not being within the second preset range.
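A minimal sketch of the longitudinal classification above, following the patent text literally: both maxima are tested against the first preset range and both averages against the second. The inclusive (low, high) bounds and all names are assumptions, since the patent leaves the ranges unspecified:

```python
from typing import Tuple

Range = Tuple[float, float]  # inclusive (low, high) bounds

def classify_longitudinal_scene(speed_max: float, accel_max: float,
                                speed_avg: float, accel_avg: float,
                                first_range: Range, second_range: Range) -> str:
    """Third preset scene when both maxima fall in the first preset range and
    both averages fall in the second preset range; otherwise fourth preset scene."""
    def within(value: float, bounds: Range) -> bool:
        return bounds[0] <= value <= bounds[1]

    maxima_ok = within(speed_max, first_range) and within(accel_max, first_range)
    averages_ok = within(speed_avg, second_range) and within(accel_avg, second_range)
    return "third_preset_scene" if maxima_ok and averages_ok else "fourth_preset_scene"
```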
Optionally, determining that the current driving scene is a third preset scene includes: acquiring an acceleration instruction output by an automatic driving system, wherein the acceleration instruction is generated by the automatic driving system under the condition that a first vehicle speed in first running data is different from a second vehicle speed in second running data; acquiring a difference value between the acceleration instruction and a target acceleration in the first running data to obtain a first difference value; and determining that the current driving scene is a third preset scene in response to the first difference being greater than the first preset difference.
Optionally, determining that the current driving scene is the fourth preset scene includes: obtaining a difference value between the first maximum value and the second maximum value to obtain a second difference value, and obtaining a difference value between the first average value and the second average value to obtain a third difference value; and determining that the current driving scene is the fourth preset scene in response to the second difference value being greater than a second preset difference value or the third difference value being greater than a third preset difference value.
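Following the patent's literal definitions of the second and third difference values, the fourth-scene check reduces to two threshold tests; using absolute differences, and the function name itself, are assumptions:

```python
def is_fourth_preset_scene(first_max: float, second_max: float,
                           first_avg: float, second_avg: float,
                           second_preset_diff: float,
                           third_preset_diff: float) -> bool:
    """Fourth preset scene when either the maxima or the averages diverge
    beyond their respective preset difference thresholds."""
    second_difference = abs(first_max - second_max)
    third_difference = abs(first_avg - second_avg)
    return second_difference > second_preset_diff or third_difference > third_preset_diff
```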
Optionally, obtaining scene data corresponding to the current driving scene to obtain target scene data includes: determining a preset time period with the difference time as a center, wherein the difference time is used for representing the time for determining the current driving scene; and acquiring scene data in a preset time period to obtain target scene data.
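The preset period centered on the difference time can be sketched as below; the half-width parameter stands in for the unspecified preset period and is an assumption:

```python
from typing import Tuple

def centered_target_period(difference_time: float, half_width: float) -> Tuple[float, float]:
    """Preset time period centered on the moment the difference was detected."""
    return (difference_time - half_width, difference_time + half_width)
```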
According to another aspect of the embodiments of the present invention, there is also provided a vehicle data processing apparatus including: the automatic driving system comprises a first acquisition module, a second acquisition module and a control module, wherein the first acquisition module is used for acquiring first driving data generated by the automatic driving vehicle and second driving data generated by the automatic driving system in the driving process of the automatic driving vehicle driven by a driver, and the automatic driving system is arranged on the automatic driving vehicle; a determination module to determine a current driving scenario of the autonomous vehicle based on the first driving data and the second driving data being different; and the second acquisition module is used for acquiring scene data corresponding to the current driving scene to obtain target scene data.
According to another aspect of the embodiments of the present invention, there is also provided a vehicle including the vehicle data processing apparatus of any one of the above embodiments.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the data processing method in any one of the above embodiments.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes the data processing method in any one of the above embodiments.
In the embodiments of the invention, while a driver drives the autonomous vehicle, first driving data generated by manually controlling the vehicle and second driving data generated by the automatic driving system are acquired; the automatic driving system compares the two sets of data, determines the current driving scenario of the vehicle according to the type of difference found, and acquires the scene data corresponding to that scenario to obtain target scene data. Because the comparison is performed between data generated by the vehicle in the manual driving state and data generated by the automatic driving system, and the target scene data are selected according to the current driving scenario, the accuracy with which differences are distinguished and difference data are processed is improved, which solves the technical problems, in the related art, of low validity and redundancy of the difference target scene data acquired by the automatic driving system.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of data processing according to an embodiment of the invention;
FIG. 2 is a process flow diagram of an alternative differential analysis model according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an optional lateral decision difference in which the automatic driving system issues a lane change instruction, according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an optional lateral decision difference in which the automatic driving system does not issue a lane change instruction, according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an aspect of embodiments of the present invention, there is provided a data processing method. It should be noted that the steps shown in the flowchart of the figure may be executed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
Fig. 1 is a flow chart of a data processing method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, acquiring first driving data generated by an automatic driving vehicle and second driving data generated by an automatic driving system in the process that a driver drives the automatic driving vehicle to drive, wherein the automatic driving system is arranged on the automatic driving vehicle;
step S104, responding to the difference between the first running data and the second running data, and determining the current running scene of the automatic driving vehicle;
and step S106, acquiring scene data corresponding to the current driving scene to obtain target scene data.
The application scenario of the method is as follows: an autonomous vehicle equipped with an automatic driving system is driven manually by the driver; the automatic driving system is started at the same time but does not control the vehicle, and instead only generates simulated operation data, which are compared against the data generated by manual driving. During driving, the automatic driving system automatically acquires the data related to the driving state of the vehicle, the data related to the manual driving operations, and the data related to its own simulated driving operations, performs the difference comparison, and records a scene label for valid target scene data. The first driving data are acquired from the data generated during manual driving, and the second driving data are acquired from the data generated while the automatic driving system simulates driving.
In an alternative embodiment, as shown in fig. 1, the first driving data may be the operation instructions issued to the vehicle in the corresponding scene when the driver manually drives the autonomous vehicle, together with the operation times, the driving direction and trajectory of the vehicle, the driving speed and acceleration of the vehicle, and so on. The second driving data may be the corresponding quantities produced by the automatic driving system while it simulates control of the vehicle in the same scene. For example, when the driver drives the autonomous vehicle straight at a constant speed and, in some scene during driving, the driver changes lanes at time T0 while the automatic driving system, simulating driving, changes lanes at time T1, the first driving data may be the lane change time T0 and the second driving data may be the lane change time T1.
After automatically capturing the first driving data and the second driving data, the automatic driving system compares them to check whether their difference exceeds a set difference range. If it does not, the data are not recorded; if it does, the current driving scene is determined according to the difference range and the difference type.
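The recording gate just described reduces to a threshold test. A minimal sketch follows; comparing a single scalar per sample, and the names used, are assumptions, since the patent compares whole driving-data records:

```python
def should_record(first_value: float, second_value: float, preset_range: float) -> bool:
    """Record the sample only when the gap between the manually generated value
    and the value simulated by the automatic driving system exceeds the set range."""
    return abs(first_value - second_value) > preset_range
```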
Driving scenes arise because operations such as lane changing and U-turns are required while the vehicle is driving, the driving trajectories differ between operations, and the vehicle may also be driving at a constant speed, accelerating, or decelerating. After the corresponding driving scene is determined from the difference range and difference type, the first driving data and the second driving data are recorded as the scene data corresponding to the current scene and stored in a database, which facilitates big-data analysis.
It should be noted that fig. 2 is a schematic diagram of the flow of an optional difference analysis model according to an embodiment of the present invention. As shown in fig. 2, the differences between the driver's behavior and the automatic driving system are mainly divided, according to the current driving scenario, into lateral differences and longitudinal differences. The driving scenes of the lateral decision difference are lane changing operation scenes of the autonomous vehicle, and the driving scenes of the lateral trajectory difference are straight driving without lane changing, turning or U-turn scenes, and lane changing within a lane. The driving scenes with longitudinal differences are divided into manual constant-speed driving scenes and manual acceleration or deceleration driving scenes.
In the embodiments of the invention, while a driver drives the autonomous vehicle, first driving data generated by manually controlling the vehicle and second driving data generated by the automatic driving system are acquired; the automatic driving system compares the two sets of data, determines the current driving scenario of the vehicle according to the type of difference found, and acquires the scene data corresponding to that scenario to obtain target scene data. Because the comparison is performed between data generated by the vehicle in the manual driving state and data generated by the automatic driving system, and the target scene data are selected according to the current driving scenario, the accuracy with which differences are distinguished and difference data are processed is improved, which solves the technical problems, in the related art, of low validity and redundancy of the difference target scene data acquired by the automatic driving system.
Optionally, according to the method of the embodiment of the invention, determining the current driving scene of the autonomous vehicle includes: acquiring the driving direction of the automatic driving vehicle; determining that the current driving scene is a first preset scene or a second preset scene based on the first driving data and the second driving data in response to the driving direction being a lateral direction; and determining that the current driving scene is a third preset scene or a fourth preset scene based on the vehicle speed and the vehicle acceleration in the historical time period in response to the driving direction being the longitudinal direction.
The automatic driving system automatically acquires the driving direction of the vehicle and can judge from the generated driving data whether the difference is lateral or longitudinal. For the lateral case, for example, the driver changes lanes but the automatic driving system does not issue a lane change instruction, so a decision difference occurs; or the driver changes lanes, the automatic driving system also issues a lane change instruction, but the lane change trajectory driven by the driver differs from the trajectory given by the automatic driving system, so a trajectory difference occurs. Accordingly, when the driving direction is lateral, the first preset scene may be a lateral decision difference scene and the second preset scene may be a lateral trajectory difference scene. For the longitudinal case, the third preset scene may be a manual constant-speed driving scene, and the fourth preset scene may be a manual acceleration or deceleration scene.
In an alternative embodiment, the automatic driving system automatically acquires the driving direction of the vehicle, including the driving trajectory and the driving speed. In the manual driving state, if the driver changes lanes but the automatic driving system issues no lane change instruction, a decision difference occurs; if the driver changes lanes and the automatic driving system also issues a lane change instruction but the two lane change trajectories differ, a trajectory difference occurs. In response to the driving direction being lateral, the current driving scene is determined to be the first preset scene or the second preset scene. In the manual driving state, while the driver drives the autonomous vehicle forward, the automatic driving system automatically acquires the average and maximum actual vehicle speed and the average and maximum actual acceleration; if the difference in these data exceeds a set difference threshold, a speed difference occurs, and, in response to the driving direction being longitudinal, the current driving scene is determined to be the third preset scene or the fourth preset scene.
Optionally, according to the method in the embodiment of the present invention, determining that the current driving scene is the first preset scene or the second preset scene includes: determining a first sending time of a first lane changing instruction in the first driving data and a second sending time of a second lane changing instruction in the second driving data; acquiring the time difference between the first sending time and the second sending time; and determining that the current driving scene is a first preset scene in response to the time difference being larger than the first preset time difference.
Fig. 3 is a schematic diagram of the automatic driving system issuing a lane change instruction under an optional lateral decision difference according to an embodiment of the present invention. As shown in fig. 3, the first lane change instruction in the first driving data (shown as the solid line L in fig. 3) may be the lane change instruction issued during manual driving, and the first sending time is the lane change instruction sending time T0 (shown as a solid five-pointed star in fig. 3). The second lane change instruction in the second driving data (shown as the solid line N in fig. 3) may be the lane change instruction issued by the automatic driving system during simulated driving, and the second sending time is the lane change instruction sending time T1 (shown as a hollow five-pointed star in fig. 3). The first preset time difference may be a threshold on the time difference T_delta between time T1 and time T0. It should be noted that the time axis points to the right (as indicated by the T axis in fig. 3).
In an alternative embodiment, while the driver drives the autonomous vehicle forward, within the same time interval the time at which the driver under manual control sends the lane change instruction is T0, the time at which the automatic driving system in simulated driving sends the lane change instruction is T1, and the time difference T_delta between the first sending time and the second sending time is obtained. When the time difference is greater than the set time difference threshold, manual control has sent a lane change instruction but the automatic driving system has not sent one in the same time domain as manual driving, and the driving scene is judged to be a lateral decision difference scene.
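A hedged sketch of this check in Python; treating a lane change instruction that the system never issues as an unbounded delay is an assumption, as are the names:

```python
from typing import Optional

def is_lateral_decision_difference(t0: float, t1: Optional[float],
                                   first_preset_diff: float) -> bool:
    """First preset scene: the driver sent a lane change instruction at t0, but
    the simulated system sent its instruction at t1 too late, or not at all."""
    if t1 is None:
        return True  # no instruction from the system in this time domain
    return (t1 - t0) > first_preset_diff
```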
Optionally, according to the method of the embodiment of the present invention, acquiring scene data corresponding to the current driving scene to obtain target scene data includes: determining a lane change time of the autonomous vehicle based on the first sending time and the second sending time; determining a target time period corresponding to the target scene data based on the lane change time; and acquiring scene data in the target time period to obtain the target scene data.
The lane change time refers to the moment at which the driver, manually controlling the autonomous vehicle, sends a lane change operation instruction. The target time period is the period over which scene data need to be collected in a lateral decision difference scene. When the driver manually controls the autonomous vehicle, lane change operation instructions are sent many times according to actual road conditions. To accurately obtain the difference data corresponding to a particular lane change instruction, the lane change instruction sent under manual control and the lane change instruction sent by the automatic driving system must be obtained within the corresponding target time period in the same time domain, including a period before the first sending time, so that a valid scene label with analysis value is recorded.
In an alternative embodiment, the driver drives the autonomous vehicle forward. In a certain case, the time at which the driver manually issues the lane change instruction is the first issue time T0, the time at which the automatic driving system issues the lane change instruction during simulated driving is T1, and the lane change time is T0. Based on the time difference between the first issue time and the second issue time, if the time difference is greater than the time difference threshold set for the driving scene, the lane change instruction was issued under manual control but the automatic driving system did not issue one within the set time; the driving scene of the vehicle at this time is determined to be a lateral decision difference scene, and the corresponding target time period can be selected based on the lane change time T0. For example, if the automatic driving system does not issue a lane change instruction within a certain time T-chg after the lane change time, the current scene is identified as a lateral decision difference scene, data from a certain time before the lane change time up to the time T-chg after it is used as the lateral decision difference scene data, and a scene label is recorded.
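Purely as an illustrative sketch (not part of the claimed method), the lateral decision-difference check described above can be expressed as follows; the function name and the threshold values are assumptions introduced here:

```python
# Illustrative sketch of the lateral decision-difference check; the
# threshold values below are assumptions, not values from the patent.
T_DELTA_THRESHOLD = 5.0   # assumed threshold (s) on the issue-time difference
T_CHG = 15.0              # assumed window (s) for the system to also issue a command

def detect_lateral_decision_difference(t0, t1=None):
    """Return True when the manual and simulated lane-change commands diverge.

    t0: time the driver issued the lane-change command (first issue time).
    t1: time the autonomous system issued its command, or None if it issued
        none within the observation window.
    """
    if t1 is None:
        # No system command within T_CHG after t0: a decision-difference scene.
        return True
    return abs(t1 - t0) > T_DELTA_THRESHOLD
```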
Optionally, according to the method in the embodiment of the present invention, determining the target time period corresponding to the target scene data based on the lane change time includes: in response to the time difference being larger than the first preset time difference and smaller than a second preset time difference, determining that the starting time of the target time period is the difference between the lane change time and a first preset value, and the ending time of the target time period is the sum of the lane change time and the time difference; in response to the time difference being larger than the second preset time difference, determining that the starting time of the target time period is the difference between the lane change time and the first preset value, and the ending time of the target time period is the sum of the lane change time and the second preset time difference; wherein the second preset time difference is greater than the first preset time difference.
The second preset time difference may be a period of time T-chg after the first issue time, within which the automatic driving system checks whether an operation instruction at a second issue time exists. The starting time and ending time may be the starting and ending times of the target time period of the automatic driving system in the lateral decision difference scene; with the lane change time as the reference, the scene data required before and after it is collected.
In an alternative embodiment, as shown in fig. 3, when the driver drives the autonomous vehicle forward, the driver lane change time T0 and the automatic driving decision lane change time T1 produce a time difference T-delta. If T-delta is larger than the set time difference threshold, the difference is determined to be a lateral decision difference, and the current driving scene is the first preset scene (namely the lateral decision difference scene). If T-delta is also smaller than the second preset time difference T-chg, the automatic driving system issued a lane change instruction within the period of time after the first issue time; the starting time of the target time period is then the difference between the lane change time and the first preset value, and the ending time is the sum of the lane change time and the time difference, that is, the target time period runs from shortly before the driver lane change time T0 to the automatic driving decision lane change time T1. Fig. 4 is a schematic diagram of the automatic driving system not issuing a lane change instruction under an optional lateral decision difference according to an embodiment of the present invention. As shown in fig. 4, if the time difference is greater than the second preset time difference T-chg (and therefore necessarily greater than the set time difference threshold), the difference is determined to be a lateral decision difference, and the current driving scene is the first preset scene (namely the lateral decision difference scene), which indicates that the automatic driving system did not issue a lane change instruction within the period of time after the first issue time T0 (shown as a solid five-pointed star in fig. 4). The starting time of the target time period is then the difference between the lane change time and the first preset value, and the ending time is the sum of the lane change time and the second preset time difference, that is, the target time period runs from the first preset value before the driver lane change time T0 to the time T-chg after it.
For example, in the manual driving state, if the driver changes lane at a certain time T0 and the automatic driving system issues a lane change instruction at time T1 within a certain time thereafter, determining the starting time and the ending time of the target time period includes the following steps:
1. set the first preset value to 10 s and the second preset time difference T-chg to 15 s;
2. calculate the difference between time T1 and time T0 as T-delta;
3. if 10 s < T-delta < 15 s, the time T1 at which the automatic driving system issues a lane change instruction falls within the second preset time difference;
4. the starting time of the target time period is then 10 s before the lane change time T0, and the ending time is T1;
5. if T-delta is greater than 15 s, the automatic driving system did not issue a lane change instruction at a time T1 within the second preset time difference;
6. the starting time of the target time period is then 10 s before the lane change time T0, and the ending time is 15 s after T0.
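The numbered steps above can be sketched in code as follows; this is a hypothetical illustration using the example values of 10 s and 15 s, not an implementation taken from the patent:

```python
def target_time_period(t0, t1=None, first_preset=10.0, t_chg=15.0):
    """Compute the data-recording window for a lateral decision-difference scene.

    t0: driver lane-change time; t1: time of the autonomous system's lane-change
    instruction, or None if it issued none. Returns (start, end) of the target
    time period, following the numbered steps above.
    """
    start = t0 - first_preset            # steps 4 and 6: start 10 s before T0
    if t1 is not None and (t1 - t0) < t_chg:
        end = t1                         # step 4: system command arrived within T-chg
    else:
        end = t0 + t_chg                 # step 6: no system command within T-chg
    return start, end
```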
It should be noted that the reverse case is handled symmetrically: if the automatic driving system issues a lane change instruction at a certain time T0 and the driver performs a lane change at time T1 within a certain time T-chg after that moment, the difference between time T1 and time T0 is calculated as T-delta, where the threshold set for T-delta is smaller than T-chg. If the difference is larger than the set threshold and smaller than T-chg, the current scene is identified as a lateral decision difference scene, data from the lane change time minus the threshold up to time T1 is used as the lateral decision difference scene data, and a scene label is recorded. If the difference is larger than T-chg, data from the lane change time minus the threshold up to the time T-chg is used as the lateral decision difference scene data, and a scene label is recorded.
Optionally, according to the method of the above embodiment of the present invention, in response to the time difference being less than or equal to the first preset time difference, the method further includes: acquiring a first driving track in the first driving data and a second driving track in the second driving data; acquiring the distance between the first running track and the second running track; and determining that the current driving scene is a second preset scene in response to the distance being greater than the preset distance.
The first travel track in the first travel data may be the motion track X0 of the vehicle after the lane change instruction issued during manually controlled lane changing. The second travel track in the second travel data may be the motion track X1 of the vehicle after the lane change instruction issued while the automatic driving system simulates driving. The second preset scene may be a lateral trajectory difference; the scenes considered may include, for example, a straight-ahead in-lane (no lane change) scene, a turning or U-turn scene, and a lane change scene.
It should be noted that for straight-ahead driving within a lane without a lane change, the lateral trajectory of manual driving differs little from that of the automatic driving system, so no difference comparison is performed in such a scene. For scenes such as turning, U-turns and lane changes, after the operation is finished the historical driving track of the vehicle is compared with the track output by the automatic driving system, the distance between track points at the same moment is calculated, and if the distance exceeds a certain threshold, the difference data in the target scene is recorded.
For example, when the driver manually drives the autonomous vehicle through a lane change, a lane change operation command is issued. During the lane change, the travel track of the manually driven vehicle is X0 and the track of the vehicle as controlled in simulation by the automatic driving system is X1. The automatic driving system automatically compares X0 with X1, and if the distance between track points at the same moment exceeds a set distance threshold, the current driving scene is determined to be a lateral trajectory difference.
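The track comparison just described can be sketched as follows, assuming the two tracks are sampled at the same moments; the function name and the distance threshold are illustrative assumptions:

```python
import math

def lateral_trajectory_difference(track_manual, track_system, dist_threshold=0.5):
    """Compare two trajectories sampled at the same timestamps.

    Each track is a list of (x, y) points; returns True when any same-moment
    pair of track points is farther apart than dist_threshold (a lateral
    trajectory difference scene). The threshold value is an assumption.
    """
    for (x0, y0), (x1, y1) in zip(track_manual, track_system):
        if math.hypot(x1 - x0, y1 - y0) > dist_threshold:
            return True
    return False
```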
Optionally, according to the method of the embodiment of the present invention, acquiring the scene data corresponding to the current driving scene to obtain the target scene data includes: determining a track starting time based on the first travel track and the second travel track; and acquiring scene data from the track starting time to obtain the target scene data.
In an alternative embodiment, when the vehicle executes an instruction given under the driver's manual control, the vehicle begins to change its travel track, the track being X0; when the automatic driving system simulates the vehicle changing its travel track in the same scene, the track is X1. The track starting time is determined as the time at which the operation instruction is issued. From the track starting time, the automatic driving system automatically compares X0 with X1; if the current driving scene is determined to be a lateral trajectory difference, the automatic driving system automatically acquires the data corresponding to the target scene within a period of time and records a scene label.
For example, for a turning or U-turn scene, after the driver completes the turn or U-turn, the vehicle's historical driving track data X0 from the start of the turn or U-turn is compared with the output track X1 of the automatic driving system, and the track point distance at the same moment is calculated. If the distance exceeds a certain threshold, the current scene is identified as a lateral trajectory difference scene, the data from the start of the turn or U-turn is used as the lateral trajectory difference scene data, and a scene label is recorded.
For a lane change scene in which no lateral decision difference occurs, after the driver completes the lane change, the vehicle travel track data X0 from the lane change starting moment is compared with the track X1 output when the automatic driving system decides to change lanes. The driver's lane change starting moment is aligned with the automatic driving system's lane change decision moment, and the track point distance at the same moment is then calculated. If the distance exceeds a certain distance threshold, the current scene is identified as a lateral trajectory difference scene, the data from the lane change starting moment is used as the lateral trajectory difference scene data, and a scene label is recorded.
Optionally, according to the method in the embodiment of the present invention, determining that the current driving scene is a third preset scene or a fourth preset scene includes: acquiring a first maximum value and a first average value of the vehicle speed, and a second maximum value and a second average value of the vehicle acceleration; determining that the current driving scene is a third preset scene in response to the first maximum value and the second maximum value being within a first preset range and the first average value and the second average value being within a second preset range; and determining that the current driving scene is a fourth preset scene in response to the first maximum value not being within a first preset range, or the second maximum value not being within the first preset range, or the first average value not being within a second preset range, or the second average value not being within the second preset range.
Here, the first maximum value and first average value of the vehicle speed may be the maximum value and average value of the actual vehicle speed when the driver performs manual driving. The second maximum value and second average value of the vehicle acceleration may be the maximum value and average value of the actual acceleration when the automatic driving system simulates driving. The first preset range may be the range of the maximum value of the actual acceleration within which the vehicle is judged to be driving at a constant speed. The second preset range may be a preset range of the average value of the actual acceleration. The third preset scene may be a manual driving constant speed scene. The fourth preset scene may be a manual driving acceleration or deceleration scene.
In an alternative embodiment, when the driver drives manually, the automatic driving system automatically acquires the maximum value Am0 and the average value A0 of the actual acceleration of the autonomous vehicle, as well as the maximum value Am1 and the average value A1 of the actual acceleration of the vehicle in simulated driving. A value range is set for the maximum value and for the average value of the actual acceleration. If Am0 and Am1 are within the set range for the maximum value and A0 and A1 are within the set range for the average value, the vehicle is judged to be actually moving at a constant speed, and the scene is determined to be a manual driving constant speed scene. If one or more of the four values exceed the corresponding set range, the scene is determined to be a manual driving acceleration or deceleration scene.
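A minimal sketch of this classification follows; the range values are illustrative assumptions, not values disclosed in the patent:

```python
def classify_longitudinal_scene(am0, a0, am1, a1,
                                max_range=(-0.3, 0.3), avg_range=(-0.1, 0.1)):
    """Classify the longitudinal scene from acceleration statistics.

    am0/a0: maximum/average actual acceleration under manual driving;
    am1/a1: the same quantities under simulated autonomous driving.
    Returns 'constant_speed' when all four values fall inside their set
    ranges (the ranges here are assumed), else 'accel_or_decel'.
    """
    lo_m, hi_m = max_range
    lo_a, hi_a = avg_range
    if (lo_m <= am0 <= hi_m and lo_m <= am1 <= hi_m
            and lo_a <= a0 <= hi_a and lo_a <= a1 <= hi_a):
        return 'constant_speed'
    return 'accel_or_decel'
```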
Optionally, according to the method of the embodiment of the present invention, determining that the current driving scene is the third preset scene includes: acquiring an acceleration instruction output by the automatic driving system, wherein the acceleration instruction is generated by the automatic driving system when a first vehicle speed in the first running data differs from a second vehicle speed in the second running data; acquiring the difference between the acceleration instruction and a target acceleration in the first running data to obtain a first difference; and determining that the current driving scene is the third preset scene in response to the first difference being greater than a first preset difference.
The first vehicle speed in the first travel data may be the speed at which the vehicle travels under the driver's manual control. The second vehicle speed in the second travel data may be the speed at which the automatic driving system would drive the vehicle in simulation. The first difference may be the difference between the acceleration instruction output by the automatic driving system and the actual acceleration of the manually driven vehicle. The first preset difference is a threshold set for this difference.
In an alternative embodiment, the driver drives the autonomous vehicle, controlling its speed to V0 under manual control, while the automatic driving system would control the vehicle's speed to V1 over the same period. When V0 is not equal to V1, the driver's actual driving speed does not match the target speed of the automatic driving system, and the automatic driving system outputs a larger acceleration instruction according to the difference between the actual vehicle speed and the target vehicle speed (a positive acceleration accelerates the vehicle and a negative acceleration decelerates it). After the acceleration instruction is generated, the automatic driving system automatically obtains the output instruction and compares it with the vehicle's actual state. Since the actual acceleration of the vehicle is close to 0 in an approximately constant-speed driving scene, when the difference is greater than the threshold preset by the system, the current scene is identified as a manual driving constant speed difference scene; a certain time, for example 10 s before and after the identified difference moment, is used as the difference scene data, and a scene label is recorded.
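A minimal sketch of this constant-speed difference check, with an assumed threshold standing in for the system's preset value:

```python
def constant_speed_difference(accel_cmd, actual_accel, threshold=0.5):
    """Detect a manual driving constant-speed difference scene.

    accel_cmd: acceleration instruction output by the automatic driving
    system (positive accelerates, negative decelerates); actual_accel: the
    vehicle's actual acceleration, close to 0 at roughly constant speed.
    The threshold value is an assumption.
    """
    return abs(accel_cmd - actual_accel) > threshold
```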
Optionally, according to the method of the embodiment of the present invention, determining that the current driving scene is the fourth preset scene includes: obtaining the difference between the first maximum value and the second maximum value as a second difference, and the difference between the first average value and the second average value as a third difference; and determining that the current driving scene is the fourth preset scene in response to the second difference being greater than a second preset difference or the third difference being greater than a third preset difference.
In an alternative embodiment, when the driver drives manually, the automatic driving system automatically acquires the maximum value Am0 and the average value A0 of the actual acceleration of the autonomous vehicle, and the maximum value Am1 and the average value A1 of the actual acceleration of the vehicle in simulated driving. The second difference may be the difference between Am0 and Am1. The third difference may be the difference between A0 and A1. The second preset difference may be a set threshold for the difference between the actual acceleration maxima Am0 and Am1. The third preset difference may be a set threshold for the difference between the actual acceleration averages A0 and A1.
In an alternative embodiment, the driver drives the autonomous vehicle, producing under manual control a maximum actual acceleration Am0 and an average actual acceleration A0, while the automatic driving system in simulated driving produces a maximum actual acceleration Am1 and an average actual acceleration A1. After the automatic driving system automatically acquires these data, the difference between Am0 and Am1 is calculated as the second difference, and the difference between A0 and A1 as the third difference. If one or both differences exceed the corresponding set threshold, the current scene is identified as a manual driving acceleration/deceleration difference scene, a certain time before and after the difference moment is identified as the difference scene data, and a scene label is recorded.
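The second- and third-difference comparison can be sketched as follows; the two threshold values are illustrative stand-ins for the second and third preset differences:

```python
def accel_difference_scene(am0, am1, a0, a1,
                           max_diff_threshold=0.5, avg_diff_threshold=0.2):
    """Detect a manual driving acceleration/deceleration difference scene.

    The second difference is |Am0 - Am1| (actual acceleration maxima) and
    the third difference is |A0 - A1| (actual acceleration averages);
    either exceeding its (assumed) threshold flags the scene.
    """
    second_diff = abs(am0 - am1)
    third_diff = abs(a0 - a1)
    return second_diff > max_diff_threshold or third_diff > avg_diff_threshold
```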
Optionally, according to the method of the embodiment of the present invention, acquiring scene data corresponding to a current driving scene to obtain target scene data includes: determining a preset time period with the difference time as a center, wherein the difference time is used for representing the time for determining the current driving scene; and acquiring scene data in a preset time period to obtain target scene data.
The difference time can be the time when the second difference or the third difference exceeds the corresponding set threshold, and the vehicle driving scene is a manual driving acceleration or deceleration scene.
In an alternative embodiment, when the driver drives the autonomous vehicle, if the difference between the maximum actual acceleration of the manually controlled vehicle and that of the automatic driving system in simulated driving exceeds the maximum-difference threshold, or the difference between the corresponding average actual accelerations exceeds the average-difference threshold, that moment is set as the difference time. Data from a certain time before and after the difference time, for example a 10 s period on either side, is used as the difference scene data, and a scene label is recorded.
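Extracting the difference-time-centered window can be sketched as follows; the 10 s half-window is taken from the example in the text, and the function name is an assumption:

```python
def extract_scene_window(samples, diff_time, half_window=10.0):
    """Return the samples within half_window seconds on either side of
    the difference time.

    samples: list of (timestamp, data) pairs; the 10 s half-window matches
    the example above, though the actual value is a design choice.
    """
    return [(t, d) for t, d in samples
            if diff_time - half_window <= t <= diff_time + half_window]
```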
In the embodiment of the invention, aiming at the technical problems of data redundancy and low efficiency in scene data acquisition for automatic driving, a scheme is provided that compares the virtual control instructions output by the automatic driving system with the driver's behavior. Lateral differences and longitudinal differences are compared separately according to the vehicle's direction of motion, difference analysis and effective data recording are carried out according to the characteristics of different scenes, and high-quality scene data required by the automatic driving system is obtained. The data processing method comprises the following points:
1. in order to ensure accurate identification and acquisition of real scene data effective for the automatic driving system, the automatic driving system is operated virtually during manual driving, differences are respectively compared in different scenes in a lateral direction and a longitudinal direction according to the moving direction of a vehicle, and the data recording time is set according to the scene characteristics, so that the data effectiveness is greatly improved;
2. in order to effectively analyze the lateral motion difference, firstly, the lateral decision is analyzed, and then the distance difference value analysis is carried out on lateral tracks on different roads and different driver behavior scenes;
3. for a longitudinal motion scene, the uniform speed running scene and the acceleration and deceleration running scene are compared and analyzed respectively according to the vehicle state in a certain time, so that misjudgment possibly caused by unified analysis of the acceleration difference is avoided.
Example 2
According to another aspect of the embodiments of the present invention, a vehicle data processing apparatus is further provided, where the apparatus may perform the vehicle data processing method in the foregoing embodiments, and a specific implementation scheme and a preferred application scenario are the same as those in the foregoing embodiments, and are not described herein again.
Fig. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including:
a first obtaining module 52, configured to obtain first driving data generated by an autonomous vehicle and second driving data generated by an autonomous system during driving of the autonomous vehicle by a driver, where the autonomous system is installed on the autonomous vehicle;
a determination module 54, configured to determine a current driving scenario of the autonomous vehicle in response to the first driving data and the second driving data being different;
the second obtaining module 56 is configured to obtain scene data corresponding to the current driving scene to obtain target scene data.
All of the above modules are included in the automatic driving system; the functions of the modules are realized, and their data stored, by the automatic driving system equipped on the autonomous vehicle. The first acquisition module may be a data acquisition module that collects data related to the driver's manual control of the vehicle and data related to the automatic driving system's simulated driving of the vehicle. The determination module may compare the acquired manual driving data with the automatic driving system's simulation data and determine the current driving scene according to the comparison result. In the embodiment of the invention, the first acquisition module can obtain the lane change instruction times used to determine whether there is a lateral decision difference; can obtain the driving track of the vehicle when it executes operations such as lane changes and U-turns, to determine whether there is a lateral trajectory difference; can obtain the maximum value and average value of the actual vehicle speed at the current moment, to determine whether the scene is a longitudinal-difference manual driving constant speed scene; and can obtain the maximum value and average value of the actual acceleration at the current moment, to determine whether the scene is a longitudinal-difference manual driving acceleration or deceleration scene. The second acquisition module may be configured to confirm, after the difference comparison is completed, the scene data corresponding to the current driving scene to obtain the target scene data and record a scene label, so that the useful difference data corresponding to the target scene can be used in subsequent research and development of the automatic driving system.
For example, when a driver drives an automatic driving vehicle manually, the automatic driving system automatically acquires the lane changing instruction time of manual control driving at the moment and the lane changing instruction time sent by the automatic driving system simulation driving through the first acquisition module; the determining module compares the difference of the two data, and if the difference exceeds a set difference range, the scene is determined to be a lateral decision difference; and the second acquisition module takes data of a period of time before and after the lane change time as the lateral decision difference scene data and records the scene label.
Optionally, according to the method of the above embodiment of the present invention, the determining module includes: a direction acquisition unit for acquiring a traveling direction of the autonomous vehicle; a first scene determination unit configured to determine, in response to the traveling direction being the lateral direction, that the current traveling scene is a first preset scene or a second preset scene based on the first traveling data and the second traveling data; a second scene determination unit configured to determine whether the current travel scene is a third preset scene or a fourth preset scene based on a vehicle speed and a vehicle acceleration within a history time period in response to the travel direction being the longitudinal direction.
Optionally, according to the method in the foregoing embodiment of the invention, the first obtaining module includes: the time acquisition unit is used for acquiring first sending time of a first lane change instruction in the first driving data and second sending time of a second lane change instruction in the second driving data; a time difference acquisition unit for acquiring a time difference between the first issue time and the second issue time; the first scene determination unit in the determination module includes: and the first scene first zone determining unit is used for determining the current driving scene as a first preset scene in response to the time difference being larger than a first preset time difference.
Optionally, according to the method in the foregoing embodiment of the invention, the first obtaining module includes: a time acquisition unit that determines a lane change time of the autonomous vehicle based on the first issue time and the second issue time; determining a target time period corresponding to target scene data based on the lane change time; the second acquisition module is used for acquiring scene data in the target time period to obtain target scene data.
Optionally, according to the method in the embodiment of the present invention, the first obtaining module further includes: the track acquiring unit is used for acquiring a first running track in the first running data and a second running track in the second running data; a distance acquisition unit for acquiring a distance between the first travel track and the second travel track; the first scene determining unit in the determining module further includes: and the first scene two-zone determining unit is used for determining that the current driving scene is a second preset scene in response to the distance being greater than the preset distance.
Optionally, according to the method in the embodiment of the present invention, the first obtaining module further includes: the time acquisition unit is used for determining the track starting time based on the first running track and the second running track; the second acquisition module is used for acquiring scene data of the track starting moment to obtain target scene data.
Optionally, according to the method in the embodiment of the present invention, the first obtaining module further includes: a vehicle state acquisition unit for acquiring a first maximum value and a first average value of the vehicle speed, and a second maximum value and a second average value of the vehicle acceleration; the second scene determination unit in the determination module includes: a second scene first area determining unit for determining that the current driving scene is the third preset scene in response to the first maximum value and the second maximum value being within a first preset range and the first average value and the second average value being within a second preset range; and a second scene second area determining unit for determining that the current driving scene is the fourth preset scene in response to the first maximum value not being within the first preset range, or the second maximum value not being within the first preset range, or the first average value not being within the second preset range, or the second average value not being within the second preset range.
Optionally, according to the method in the embodiment of the present invention, the first obtaining module further includes: an instruction acquisition unit further configured to acquire an acceleration instruction output by the autonomous driving system, wherein the acceleration instruction is generated by the autonomous driving system in a case where a first vehicle speed in the first travel data is different from a second vehicle speed in the second travel data; the difference value acquisition unit is used for acquiring a difference value between the acceleration instruction and the target acceleration in the first running data to obtain a first difference value; the second scene determination unit in the determination module includes: and the second scene one area determining unit is used for determining that the current driving scene is a third preset scene in response to the first difference value being larger than the first preset difference value.
Optionally, according to the method in the embodiment of the present invention, the first obtaining module further includes: the difference value obtaining unit is further configured to obtain the difference between the first maximum value and the second maximum value as a second difference, and the difference between the first average value and the second average value as a third difference; the second scene determination unit in the determination module further includes: a second scene second area determining unit for determining that the current driving scene is the fourth preset scene in response to the second difference being greater than the second preset difference or the third difference being greater than the third preset difference.
Optionally, in the apparatus of the embodiment of the present invention, the first obtaining module further includes: a time acquisition unit, configured to determine a preset time period centered on a difference time, wherein the difference time represents the moment at which the current driving scene is determined, that is, the moment at which the first driving data and the second driving data are found to differ; and the second acquisition module is configured to acquire scene data within the preset time period to obtain the target scene data.
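The time-window extraction above can be sketched as follows. The half-width of the window is an assumed parameter, since the publication only states that the period is centered on the difference time.

```python
def extraction_window(difference_time, half_width=5.0):
    """Return the (start, end) window of scene data to extract, centered on
    the difference time. half_width in seconds is an assumed value."""
    return difference_time - half_width, difference_time + half_width

def extract_scene_data(frames, difference_time, half_width=5.0):
    """frames: iterable of (timestamp, payload) pairs; keep frames whose
    timestamps fall inside the centered window."""
    start, end = extraction_window(difference_time, half_width)
    return [f for f in frames if start <= f[0] <= end]
```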
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a vehicle equipped with the data processing apparatus of the above embodiments, the vehicle being capable of executing the data processing method of any one of the above embodiments.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the data processing method in any one of the above embodiments.
Example 5
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein the program, when running, executes the data processing method of any one of the above embodiments.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (10)

1. A data processing method, comprising:
acquiring first driving data generated by an autonomous vehicle and second driving data generated by an autonomous driving system during a process in which a driver drives the autonomous vehicle, wherein the autonomous driving system is installed on the autonomous vehicle;
determining a current driving scene of the autonomous vehicle in response to the first driving data and the second driving data being different;
and acquiring scene data corresponding to the current driving scene to obtain target scene data.
2. The method of claim 1, wherein determining the current driving scene of the autonomous vehicle comprises:
acquiring a driving direction of the autonomous vehicle;
in response to the driving direction being a lateral direction, determining that the current driving scene is a first preset scene or a second preset scene based on the first driving data and the second driving data;
and in response to the driving direction being a longitudinal direction, determining that the current driving scene is a third preset scene or a fourth preset scene based on the vehicle speed and the vehicle acceleration in the historical time period.
3. The method of claim 2, wherein determining that the current driving scenario is a first preset scenario or a second preset scenario comprises:
determining a first sending time of a first lane changing instruction in the first driving data and a second sending time of a second lane changing instruction in the second driving data;
acquiring the time difference between the first sending time and the second sending time;
and determining that the current driving scene is the first preset scene in response to the time difference being larger than a first preset time difference.
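A minimal sketch of the lane-change timing test in claim 3, under the assumption that instruction send times are numeric timestamps in seconds and that the preset time difference is an illustrative value:

```python
def is_first_scene(first_send_time, second_send_time, first_preset_diff=1.0):
    """Flag the first preset scene when the lane-change instruction in the
    driver's data and the one in the system's data are issued more than the
    preset interval apart (threshold in seconds, assumed)."""
    time_diff = abs(first_send_time - second_send_time)
    return time_diff > first_preset_diff
```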
4. The method according to claim 3, wherein obtaining scene data corresponding to the current driving scene to obtain target scene data comprises:
determining a lane change time of the autonomous vehicle based on the first issue time and the second issue time;
determining a target time period corresponding to the target scene data based on the lane changing time;
and acquiring scene data in the target time period to obtain the target scene data.
5. The method of claim 3, wherein in response to the time difference being less than or equal to the first preset time difference, the method further comprises:
acquiring a first driving track in the first driving data and a second driving track in the second driving data;
acquiring the distance between the first driving track and the second driving track;
and determining that the current driving scene is the second preset scene in response to the distance being greater than a preset distance.
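The trajectory comparison in claim 5 leaves the distance metric unspecified. One plausible choice, sketched below, is the point-wise maximum Euclidean distance between two equally sampled tracks; this metric and the threshold value are assumptions, not the patent's definition.

```python
import math

def max_trajectory_distance(traj_a, traj_b):
    """Point-wise maximum Euclidean distance between two sampled trajectories
    of equal length, given as sequences of (x, y) points."""
    return max(math.dist(p, q) for p, q in zip(traj_a, traj_b))

def is_second_scene(traj_a, traj_b, preset_distance=1.0):
    # Flag the second preset scene when the tracks diverge beyond the
    # (assumed) preset distance in meters.
    return max_trajectory_distance(traj_a, traj_b) > preset_distance
```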
6. The method of claim 5, wherein obtaining scene data corresponding to the current driving scene to obtain target scene data comprises:
determining a track starting time based on the first driving track and the second driving track;
and acquiring scene data at the track starting time to obtain the target scene data.
7. The method of claim 2, wherein determining that the current driving scenario is a third preset scenario or a fourth preset scenario comprises:
acquiring a first maximum value and a first average value of the vehicle speed, and a second maximum value and a second average value of the vehicle acceleration;
determining that the current driving scene is the third preset scene in response to the first maximum value and the second maximum value both being within a first preset range and the first average value and the second average value both being within a second preset range;
determining that the current driving scene is the fourth preset scene in response to the first maximum value not being within the first preset range, or the second maximum value not being within the first preset range, or the first average value not being within the second preset range, or the second average value not being within the second preset range.
8. The method of claim 7, wherein obtaining scene data corresponding to the current driving scene to obtain target scene data comprises:
determining a preset time period with a difference time as a center, wherein the difference time is used for representing the time for determining the current driving scene;
and acquiring scene data in the preset time period to obtain the target scene data.
9. A data processing apparatus, comprising:
a first acquisition module, configured to acquire first driving data generated by an autonomous vehicle and second driving data generated by an autonomous driving system during a process in which a driver drives the autonomous vehicle, wherein the autonomous driving system is installed on the autonomous vehicle;
a determination module, configured to determine a current driving scene of the autonomous vehicle in response to the first driving data and the second driving data being different;
and the second acquisition module is used for acquiring scene data corresponding to the current driving scene to obtain target scene data.
10. A vehicle, characterized by comprising: the data processing apparatus of claim 9.
Application CN202210521753.1A, filed 2022-05-13 (priority date 2022-05-13): Data processing method and device and vehicle. Published as CN114852097A; legal status: Pending.

Publications (1)

Publication Number: CN114852097A, published 2022-08-05.

Family ID: 82636416



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination