CN115346362B - Driving data processing method and device, electronic equipment and storage medium - Google Patents

Driving data processing method and device, electronic equipment and storage medium

Info

Publication number
CN115346362B
CN115346362B (application CN202210654589.1A)
Authority
CN
China
Prior art keywords
data
auxiliary driving
exit
vehicle
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210654589.1A
Other languages
Chinese (zh)
Other versions
CN115346362A (en)
Inventor
范飞跃
孙雷明
王菊
高宇凡
于林男
李伟男
李烁
康璐
王梦露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zebred Network Technology Co Ltd
Original Assignee
Zebred Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zebred Network Technology Co Ltd
Priority to CN202210654589.1A
Publication of CN115346362A
Application granted
Publication of CN115346362B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G07C5/085 Registering performance data using electronic data carriers
    • G07C5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969 Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Time Recorders, Drive Recorders, Access Control (AREA)

Abstract

The invention discloses a driving data processing method and device, an electronic device, and a storage medium. The method includes: acquiring multiple data sets collected by a vehicle during a target trip, the multiple data sets including an auxiliary driving data set, a vehicle body data set, an automobile data recorder data set, and a map data set of the vehicle; based on a preset data fusion mode, screening out from the multiple data sets the N data sets corresponding to that fusion mode (N being an integer greater than 1), and performing data fusion analysis on the N data sets to obtain an analysis result; and visually displaying the analysis result on the vehicle's corresponding interactive interface. The scheme links multiple kinds of data with one another and effectively improves the data utilization rate.

Description

Driving data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of vehicles, and in particular, to a driving data processing method, a driving data processing device, an electronic device, and a storage medium.
Background
As vehicles become more intelligent, vehicle bodies carry more and better-performing sensors, and vehicles are equipped with chips of higher computing power, so intelligent auxiliary driving is increasingly applied on vehicles. In the prior art, the auxiliary driving data, map data, vehicle body data, and the like generated by a vehicle are used only when their respective functions are invoked; the data remain deposited in the in-vehicle system, isolated and fragmented from one another, so the data utilization rate is low and no data support can be provided for further upper-layer applications.
Disclosure of Invention
The embodiment of the application provides a driving data processing method, a driving data processing device, electronic equipment and a storage medium.
In a first aspect, the present application provides a driving data processing method, where the method includes:
acquiring a plurality of data sets acquired by a vehicle in a target journey, wherein the plurality of data sets comprise an auxiliary driving data set, a vehicle body data set, a vehicle recorder data set and a map data set of the vehicle;
based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode are screened out from the multiple data sets, and data fusion analysis is carried out based on the N data sets to obtain an analysis result, wherein N is an integer greater than 1;
and visually displaying the analysis result on the corresponding interactive interface of the vehicle.
Optionally, based on a preset data fusion manner, N data sets corresponding to the preset data fusion manner are screened from the multiple data sets, and based on the N data sets, data fusion analysis is performed to obtain an analysis result, including:
if the preset data fusion mode is a first data fusion mode aiming at auxiliary driving, screening the auxiliary driving data set, the automobile data recorder data set and the map data set from the plurality of data sets to serve as the N data sets;
And analyzing the auxiliary driving data set, the vehicle recorder data set and the map data set based on the preset auxiliary driving parameters corresponding to the first data fusion mode to obtain parameter values corresponding to the preset auxiliary driving parameters as the analysis result.
Optionally, the preset auxiliary driving parameters include one or more of the following: the total duration of auxiliary driving, the total number of times all types of auxiliary driving were performed, the number of times each type of auxiliary driving was performed, the exit reasons of auxiliary driving, the number of occurrences of each exit reason, the auxiliary driving mileage, and the number of video files of the automobile data recorder.
Optionally, the preset driving assistance parameter includes the driving assistance exit reason, and the analyzing the driving assistance data set, the vehicle recorder data set, and the map data set includes:
determining M auxiliary driving exit moments from the auxiliary driving data set, wherein the vehicle automatically exits corresponding auxiliary driving at the M auxiliary driving exit moments, and M is a positive integer;
determining a target shooting image corresponding to each of the M driving support exit moments from the data set of the automobile data recorder, and/or determining a target road state of a road where the automobile is located, corresponding to each driving support exit moment from the data set of the map;
for each auxiliary driving exit moment, performing road condition recognition on the target shot image and/or the target road state corresponding to that exit moment, to obtain a road condition recognition result corresponding to that exit moment;
and, for the auxiliary driving corresponding to each auxiliary driving exit moment, matching the preset auxiliary driving exit reasons against the road condition recognition result corresponding to that exit moment, and taking the successfully matched exit reason as the exit reason corresponding to that auxiliary driving exit moment.
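The matching step above can be sketched in a few lines. This is an illustrative sketch only: the preset reason table, the recognition labels, and the first-match policy are assumptions, not details given in the patent.

```python
# Hypothetical sketch: match preset auxiliary-driving exit reasons against the
# road-condition recognition result obtained for each exit moment.
PRESET_EXIT_REASONS = {
    "lane_lines_missing": "lane markings unclear or missing",
    "sharp_curve":        "road curvature exceeds system limit",
    "congestion":         "heavy traffic ahead",
}

def match_exit_reason(road_condition_labels):
    """Return the first preset reason whose key appears in the recognition result."""
    for key, reason in PRESET_EXIT_REASONS.items():
        if key in road_condition_labels:
            return reason
    return "unknown"

# One recognition result per exit moment (e.g. from image + map-state analysis).
exits = {"09:12": ["sharp_curve"], "09:40": ["rain", "congestion"]}
reasons = {t: match_exit_reason(labels) for t, labels in exits.items()}
print(reasons["09:12"])  # road curvature exceeds system limit
```

In practice the recognition labels would come from the image/road-state analysis described above; any unmatched moment falls back to an "unknown" reason.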
Optionally, after determining M auxiliary driving exit moments from the auxiliary driving data set, the method further includes:
and recording M road sections corresponding to the M auxiliary driving exit moments so as to generate reminding information when the vehicle runs to the M road sections again, wherein the reminding information is used for reminding a user to take over auxiliary driving.
Optionally, after the data fusion analysis is performed based on the N data sets to obtain an analysis result, the method further includes:
and generating an auxiliary driving data analysis report corresponding to the target journey based on the analysis result.
Optionally, based on a preset data fusion manner, N data sets corresponding to the preset data fusion manner are screened from the multiple data sets, and based on the N data sets, data fusion analysis is performed to obtain an analysis result, including:
if the preset data fusion mode is a second data fusion mode aiming at a vehicle data recorder, screening the vehicle data recorder data set and the vehicle body data set from the plurality of data sets to serve as the N data sets;
determining the shooting images of the target quantity corresponding to the target journey and shooting time of each shooting image from the data set of the automobile data recorder;
and determining a target vehicle body parameter corresponding to the shooting time from the vehicle body data set aiming at the shooting time of each shooting image, and taking the target vehicle body parameter as the analysis result.
Optionally, the target body parameters include one or more of the following: the driving gear, braking information, and vehicle body light information of the vehicle.
Optionally, the visually displaying the analysis result on the interactive interface corresponding to the vehicle includes:
And combining each shot image with the corresponding target vehicle body parameter based on the shooting time of each shot image so as to simultaneously display the corresponding target vehicle body parameter when the video shot by the automobile data recorder is played on the interactive interface.
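The timestamp-based combination described above can be sketched as a nearest-sample join. The data layout and field names here are assumptions for illustration; they are not the patent's actual format.

```python
# Hedged sketch: pair each captured frame with the vehicle-body parameters
# recorded nearest to its shooting time, so the interactive interface can
# overlay gear/brake/light state while the recorder video plays.
def nearest_body_params(body_samples, shot_time):
    """body_samples: list of (timestamp_seconds, params). Return nearest params."""
    return min(body_samples, key=lambda s: abs(s[0] - shot_time))[1]

body = [(10.0, {"gear": "D", "brake": False}),
        (10.5, {"gear": "D", "brake": True}),
        (11.0, {"gear": "D", "brake": False})]

frames = [10.1, 10.6]  # shooting times of two frames
overlay = [nearest_body_params(body, t) for t in frames]
print(overlay[1])  # {'gear': 'D', 'brake': True}
```

A real implementation would index the body samples (e.g. binary search) rather than scan them per frame, but the join logic is the same.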
Optionally, based on a preset data fusion manner, N data sets corresponding to the preset data fusion manner are screened from the multiple data sets, and based on the N data sets, data fusion analysis is performed to obtain an analysis result, including:
if the preset data fusion mode is a third data fusion mode aiming at vehicle collision, screening the vehicle body data set, the automobile data recorder data set and the map data set from the plurality of data sets to serve as the N data sets;
determining the collision time of the collision of the vehicle from the vehicle body data set;
determining a collision place corresponding to the collision time from the map data set based on the collision time;
based on the collision moment, determining a collision video corresponding to the collision moment from the data set of the automobile data recorder;
The analysis result is the collision location corresponding to the collision moment together with the collision video.
Optionally, the interactive interface displays a time axis corresponding to the target journey, a driving route corresponding to the target journey and video information shot by a driving recorder corresponding to the target journey;
the step of visually displaying the analysis result on the corresponding interactive interface of the vehicle comprises the following steps:
marking the collision place on the driving route; and playing the collision video on the interactive interface when the time axis is positioned to the collision moment.
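The third fusion mode's lookups can be sketched as follows. The track points, video file names, and recording intervals below are invented for illustration; only the lookup idea (nearest map point by time, video whose interval covers the collision moment) comes from the description above.

```python
# Illustrative lookup: given the collision moment from the body data set, find
# the collision place in the map data set and the recorder video file whose
# recording interval covers that moment.
def locate_collision(collision_t, map_track, videos):
    place = min(map_track, key=lambda p: abs(p[0] - collision_t))[1]
    video = next(name for name, (start, end) in videos.items()
                 if start <= collision_t <= end)
    return place, video

map_track = [(100, (31.23, 121.47)), (160, (31.24, 121.48))]   # (t, lat/lon)
videos = {"rec_0001.mp4": (0, 150), "rec_0002.mp4": (150, 300)}
print(locate_collision(155, map_track, videos))  # ((31.24, 121.48), 'rec_0002.mp4')
```

The returned place would then be marked on the driving route, and the video played when the time axis is positioned at the collision moment.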
In a second aspect, the present application further provides a driving data processing device, including:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring various data sets acquired by a vehicle in a target journey, and the various data sets comprise an auxiliary driving data set, a vehicle body data set, a vehicle recorder data set and a map data set of the vehicle;
the analysis module is used for screening N data sets corresponding to the preset data fusion mode from the plurality of data sets based on the preset data fusion mode, and carrying out data fusion analysis based on the N data sets to obtain an analysis result, wherein N is an integer greater than 1;
And the visualization module is used for carrying out visual display on the analysis result on the corresponding interaction interface of the vehicle.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by one or more processors, where the one or more programs include operation instructions for performing a method as provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps corresponding to the driving data processing method as provided in the first aspect.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
according to the scheme, multiple data sets acquired by a vehicle in a target journey are acquired, wherein the multiple data sets comprise an auxiliary driving data set, a vehicle body data set, a vehicle data set and a map data set of the vehicle; based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode are screened out from a plurality of data sets, data fusion analysis is carried out based on the N data sets, an analysis result is obtained, and N is an integer greater than 1; and visually displaying the analysis result on the corresponding interactive interface of the vehicle. In the scheme, the plurality of data sets come from different data fields of the vehicle, and the data support can be provided for more application scenes of the vehicle by carrying out fusion analysis on the data sets of the different data fields, so that linkage among the plurality of data is realized, and the data utilization rate is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a driving data processing method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a visual interactive interface according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a driving data processing device according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The embodiment of the application provides a driving data processing method, a driving data processing device, electronic equipment and a storage medium.
The technical scheme of the embodiment of the application generally comprises the following steps: acquiring a plurality of data sets acquired by a vehicle in a target journey, wherein the plurality of data sets comprise an auxiliary driving data set, a vehicle body data set, a vehicle data set and a map data set of the vehicle; based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode are screened out from a plurality of data sets, data fusion analysis is carried out based on the N data sets, an analysis result is obtained, and N is an integer greater than 1; and visually displaying the analysis result on the corresponding interactive interface of the vehicle.
According to the scheme, the multiple data sets come from different data fields of the vehicle, the data sets of the different data fields are subjected to fusion analysis, and the analysis result can provide data support for more application scenes of the vehicle, so that linkage among multiple data is realized, and the data utilization rate is effectively improved.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
First, the term "and/or" appearing herein is merely an association relationship describing associated objects, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The embodiment of the specification provides a driving data processing method, as shown in fig. 1, comprising the following steps:
step S101: acquiring a plurality of data sets acquired by a vehicle in a target journey, wherein the plurality of data sets comprise an auxiliary driving data set, a vehicle body data set, a vehicle recorder data set and a map data set of the vehicle;
Step S102: based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode are screened out from the multiple data sets, and data fusion analysis is carried out based on the N data sets to obtain an analysis result, wherein N is an integer greater than 1;
step S103: and visually displaying the analysis result on the corresponding interactive interface of the vehicle.
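Steps S101-S103 can be sketched end to end. Everything in this snippet (the mode table, set names, placeholder analysis) is an assumption made for illustration; the patent does not prescribe this structure.

```python
# Hypothetical sketch of S101-S103: collect the data sets, screen out the N
# sets required by the preset fusion mode, analyze them, return a displayable
# result. The placeholder "analysis" just reports record counts per fused set.
FUSION_MODES = {
    "assisted_driving": ["assist", "recorder", "map"],   # first fusion mode
    "drive_recorder":   ["recorder", "body"],            # second fusion mode
    "collision":        ["body", "recorder", "map"],     # third fusion mode
}

def process_trip(all_sets, mode):
    """Screen the N (>1) data sets for `mode` and run a fusion analysis stub."""
    wanted = FUSION_MODES[mode]
    n_sets = {name: all_sets[name] for name in wanted}   # step S102: screening
    assert len(n_sets) > 1                               # N is an integer > 1
    return {name: len(records) for name, records in n_sets.items()}

trip_data = {"assist": [1, 2], "body": [3], "recorder": [4, 5, 6], "map": [7]}
print(process_trip(trip_data, "assisted_driving"))  # step S103 would render this
```

The real analysis step depends on the fusion mode, as the first/second/third fusion modes below illustrate.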
The execution subject of the method is not limited in the embodiments of the present invention; the method may be applied to a vehicle-mounted terminal, to a server communicatively connected to the vehicle, or to a system including both a vehicle-mounted terminal and a server.
In step S101, the vehicle may be any vehicle that needs driving data processing, and it may be equipped with an auxiliary driving system, a navigation system, an automobile data recorder, and various sensors. Data recorded by the auxiliary driving system can serve as the auxiliary driving data set, data generated by the navigation system as the map data set, video collected by the automobile data recorder as the automobile data recorder data set, and data collected by the sensors as the vehicle body data set.
In particular implementations, the auxiliary driving system may include various types, such as emergency obstacle avoidance, emergency braking, lane keeping, adaptive cruise control (Adaptive Cruise Control, ACC), auto-assisted driving, and the like. The vehicle can record real-time data of various auxiliary driving in the driving process, such as the starting time of the auxiliary driving, the exiting time of the auxiliary driving, the type of the auxiliary driving, the duration of the auxiliary driving and the like, and takes the data as an auxiliary driving data set.
The sensors mounted on the vehicle may include, but are not limited to, a speed sensor, an acceleration sensor, a pedal position sensor, a steering wheel angle sensor, and a collision sensor, and the vehicle may acquire, through various sensors mounted on the vehicle, a running speed, an acceleration, a braking force, steering information, collision information, and the like of the vehicle during running as a vehicle body data set, and of course, the vehicle body data set may also include other data, such as gear information, light information, and the like of the vehicle, which is not limited herein.
Video data in the running process of the vehicle can be recorded through the automobile data recorder, and videos under different visual angles can be collected for different automobile data recorders. For example, some automobile recorders may capture videos in front of and behind the vehicle, and some automobile recorders may capture videos in front of, behind, to the left of, and to the right of the vehicle. The video shot by the automobile data recorder can be used as an automobile data recorder data set.
The map image of the area where the vehicle is located, the real-time position of the vehicle, the vehicle position at the start time of the auxiliary driving, the vehicle position at the end time of the auxiliary driving, the vehicle position at the time of collision, the road information, and the like can be acquired by the navigation system, and these data can be used as a map data set.
In the embodiment of the present disclosure, the target trip may be any trip the vehicle has historically traveled. For example, suppose a user's vehicle usage on a certain day is: driving from place A to place B during 8:00-9:00, from place B to place C during 18:00-19:30, and from place C to place A during 22:00-22:20. The user thus drives 3 trips that day; the target trip may be any one of these 3 trips, or the total trip formed by all 3.
In the embodiment of the present disclosure, in order to make the user more aware of the driving habits and the vehicle details of the user, in each time the user completes a trip, fusion analysis may be performed on multiple data sets generated in the trip by using the method provided in the embodiment of the present disclosure, and the analysis result is fed back to the user. Of course, the data fusion analysis may be performed according to a default data processing period, or according to a data processing period set by the user. For example, if the user manually sets the data processing cycle to be performed once a week, then the fusion analysis can be performed on the various data sets collected during all the trips of the week after the vehicle completes one week of travel.
It should be understood that when various types of data are collected, the vehicle records the collection time of each data at the same time, so that the data and the collection time can be corresponding, and when the data is screened later, the screening can be performed based on the collection time of the data. For example, the target journey is a vehicle journey of 8:00-9:00 of the same day, and then various data collected in the period of 8:00-9:00 can be screened out to be used as various data sets corresponding to the target journey.
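The screening-by-collection-time idea above can be written in a few lines. The record layout (`t` for collection time, `v` for payload) is an assumption for the sketch.

```python
# Illustrative sketch: keep only records whose collection time falls inside
# the target trip's window (e.g. the 8:00-9:00 trip from the example above).
from datetime import time

def records_for_trip(records, start, end):
    """Filter records to those collected within [start, end]."""
    return [r for r in records if start <= r["t"] <= end]

records = [
    {"t": time(7, 55), "v": "before trip"},
    {"t": time(8, 12), "v": "in trip"},
    {"t": time(8, 59), "v": "in trip"},
    {"t": time(9, 5),  "v": "after trip"},
]
trip = records_for_trip(records, time(8, 0), time(9, 0))
print(len(trip))  # 2
```

Each data set (auxiliary driving, body, recorder, map) would be filtered the same way against the same trip window.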
In step S102, multiple data sets can be collected while the vehicle is running, and different data sets are deposited in different data domains: for example, the auxiliary driving data set and the map data set in the intelligent driving domain of the vehicle, the vehicle body data set in the vehicle body domain, and the automobile data recorder data set in the cabin domain. Under the traditional automotive electronic and electrical architecture, such cross-domain data are difficult to open up and fuse. In the embodiment of the present specification, the data of each domain can be opened up and interconnected based on an intelligent-vehicle Service-Oriented Architecture (SOA) to realize cross-domain data calls.
In a specific implementation, a preset data fusion mode can be set per application scenario. Specifically, because cross-domain data calls are realized, the embodiment of this specification can extend many new application scenarios, such as a map-visualization data linkage scenario, an intelligent driving behavior data report analysis scenario, a driving record and map visualization scenario, a cycle number data report analysis scenario, and the like.
Taking the map visual data linkage scene as an example, the preset data fusion mode corresponding to the map visual data linkage scene can be to fuse a map data set, an auxiliary driving data set, a vehicle body data set and a vehicle data set. Taking the intelligent driving behavior data report analysis scene as an example, the preset data fusion mode corresponding to the intelligent driving behavior data report analysis scene can be to fuse the auxiliary driving data set, the map data set and the automobile data recorder data set. Therefore, according to different scenes, a corresponding preset data fusion mode can be set.
Based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode can be screened out from multiple data sets, wherein N is an integer greater than 1. Further, fusion analysis is performed on the N data sets. The specific analysis step corresponding to the preset data fusion mode can be set according to the output target of the application scene, and still takes the map visual data linkage scene as an example, wherein the output target of the scene is the auxiliary driving condition, the vehicle body data and the like in the driving process on the map picture, so that the data acquired at the same moment in each data set can be subjected to the processes of splicing, synthesizing, associating and the like, and the processing result is displayed on the map picture. Taking the intelligent driving behavior data report analysis scene as an example, the output target of the application scene is specific details of auxiliary driving in a target journey, so that data in each data set can be subjected to statistical analysis, and then each statistical result is fused and presented in a report form.
In step S103, in order to better enable the user to know the vehicle condition in the target journey, the analysis result obtained in step S102 may be visually displayed. In the embodiment of the present disclosure, the interactive interface may be a display interface of a vehicle central control screen, a display interface of a display screen of a vehicle recorder, or a display interface of a display screen of a user terminal device bound to a vehicle, which is not limited herein.
According to the scheme in the embodiment of the specification, through fusion analysis of various data sets in the target journey and visual display of analysis results, a user can intuitively and comprehensively understand driving operation and vehicle utilization data in the target journey, and the user can trace back the driving process of the target journey from various angles conveniently.
In the following, in order to better understand the data fusion analysis process in the embodiment of the present disclosure, the preset data fusion manner is exemplified by a first data fusion manner for driving assistance, a second data fusion manner for a vehicle recorder, and a third data fusion manner for vehicle collision, respectively.
First kind: the preset data fusion mode is a first data fusion mode aiming at auxiliary driving
In a specific implementation process, the fusion analysis step corresponding to the first data fusion mode may be: if the preset data fusion mode is a first data fusion mode aiming at auxiliary driving, screening the auxiliary driving data set, the automobile data recorder data set and the map data set from the plurality of data sets to serve as the N data sets; and analyzing the auxiliary driving data set, the vehicle recorder data set and the map data set based on the preset auxiliary driving parameters corresponding to the first data fusion mode to obtain parameter values corresponding to the preset auxiliary driving parameters as the analysis result.
In particular, the first data fusion manner for assisting driving can show the use condition of assisting driving in a target journey from multiple angles. In this embodiment of the present disclosure, the N data sets corresponding to the first data fusion manner include an auxiliary driving data set, a vehicle recorder data set, and a map data set.
For convenience of explanation, specific data of each data set in this embodiment will be explained below.
The set of auxiliary driving data may include: an assist driving start timing, an assist driving exit timing, and an assist driving type corresponding to each execution of the assist driving.
The automobile data recorder data set may include a plurality of video files shot by the automobile data recorder during the target trip. It should be noted that when the automobile data recorder shoots video, the video may be stored in segments of a preset duration; for example, if a video file is saved every 5 minutes of recording and the target trip lasts 30 minutes, the number of corresponding video files is 6. Of course, the preset duration of a video file may be set according to actual needs, which is not limited herein.
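The file-count arithmetic in the worked example above is a simple ceiling division, sketched here (segment length is an assumed, configurable parameter).

```python
# Quick check of the worked example: with 5-minute segments, a 30-minute trip
# yields 6 recorder files; a partial final segment still produces a file.
import math

def file_count(trip_minutes, segment_minutes=5):
    return math.ceil(trip_minutes / segment_minutes)

print(file_count(30))  # 6
print(file_count(32))  # 7
```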
The map data set may include: a vehicle position at a start time of the assist drive corresponding to each execution of the assist drive, and a vehicle position at an exit time of the assist drive. The map data set stores vehicle positions (latitude and longitude information) corresponding to each driving time, and the vehicle positions at the driving start time can be screened out from the map data set based on the driving start time, and similarly, the vehicle positions at the driving exit time can be searched out from the map data set based on the driving exit time.
Further, the first data fusion mode corresponds to preset auxiliary driving parameters, including but not limited to one or more of the following: the total duration of auxiliary driving, the total number of times all types of auxiliary driving were performed, the number of times each type of auxiliary driving was performed, the exit reasons of auxiliary driving, the number of occurrences of each exit reason, the auxiliary driving mileage, and the number of video files of the automobile data recorder. The data required to calculate each preset auxiliary driving parameter may differ, so the data for computing each parameter can be determined from the N data sets according to that parameter's preset calculation mode, and then processed to obtain the parameter's value.
For better explanation, a specific calculation mode of each of the above-mentioned preset auxiliary driving parameters is explained below.
(1) Total duration of auxiliary driving
Based on the auxiliary driving data set, auxiliary driving starting time and auxiliary driving exiting time of each auxiliary driving in the target journey are obtained. For example, three types of assist driving (assist driving a, assist driving B, and assist driving C) are started in the target course, wherein the assist driving start timing of the assist driving a is a1, and the assist driving exit timing is a2; the auxiliary driving starting time of the auxiliary driving B is B1, and the auxiliary driving exiting time is B2; the assisted driving C has an assisted driving start time C1 and an assisted driving exit time C2.
The auxiliary driving whose periods overlap is merged based on the start time and exit time of each auxiliary driving. It should be noted that period overlap may be divided into total overlap and partial overlap. For total overlap, for example, the operation period a1~a2 of auxiliary driving A is completely contained in the operation period b1~b2 of auxiliary driving B, and the merged operation period is b1~b2. For partial overlap, for example, the operation period a1~a2 of auxiliary driving A partially overlaps the operation period b1~b2 of auxiliary driving B: if a1 falls within b1~b2 but a2 does not, the operation periods a1~a2 and b1~b2 are merged into the combined operation period b1~a2.
Further, after the period merging is performed, the duration of each period is calculated, and the sum of the durations of each period is taken as the total duration of the auxiliary driving.
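The period-merging steps above amount to a standard interval merge. The following is a minimal sketch; the function name and the representation of times (seconds from trip start) are illustrative assumptions, not defined in the patent:

```python
from typing import List, Tuple

def total_assist_duration(intervals: List[Tuple[float, float]]) -> float:
    """Merge overlapping (start, exit) periods and sum the merged durations."""
    if not intervals:
        return 0.0
    intervals = sorted(intervals)          # sort by start time
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:         # total or partial overlap with the last period
            merged[-1][1] = max(merged[-1][1], end)
        else:                              # disjoint: start a new merged period
            merged.append([start, end])
    return sum(end - start for start, end in merged)

# Auxiliary driving A runs 10-20, B runs 15-40 (partial overlap, merged to 10-40),
# C runs 50-60; total duration = 30 + 10 = 40.
print(total_assist_duration([(10, 20), (15, 40), (50, 60)]))  # 40
```

Total overlap falls out of the same rule: a period fully contained in the previous merged period only updates the end time to the larger (unchanged) value.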
(2) The total number of times all types of assist driving are performed may be determined by counting the total number of times assist driving is performed in the assist driving data set.
(3) The number of times each type of assist driving is performed may be counted based on the assist driving types corresponding to all the assist driving in the assist driving data set.
(4) Driver assistance exit reason
In the embodiment of the present specification, the determination of the reason for the driver's exit may be achieved by: m auxiliary driving exit moments are determined from the auxiliary driving data set, wherein the vehicle automatically exits corresponding auxiliary driving at the M auxiliary driving exit moments, and M is a positive integer; determining a target shooting image corresponding to each of the M driving support exit moments from the data set of the automobile data recorder, and/or determining a target road state of a road where the automobile is located, corresponding to each driving support exit moment from the data set of the map; aiming at each auxiliary driving exit moment, carrying out road condition recognition on a target shooting image corresponding to the auxiliary driving exit moment and/or a target road state to obtain a road condition recognition result corresponding to the auxiliary driving exit moment; and matching the preset exit reason of the auxiliary driving with the road condition recognition result corresponding to the exit time of the auxiliary driving aiming at the auxiliary driving corresponding to the exit time of each auxiliary driving, and taking the successfully matched exit reason as the exit reason corresponding to the exit time of the auxiliary driving.
Specifically, the exit reasons of auxiliary driving may be classified into manual exit and automatic exit (i.e., non-manual exit). For each auxiliary driving exit, if an operation record of a manual exit by the user is detected within a preset period corresponding to that exit, the user's manual exit is taken as the exit reason of that auxiliary driving; the preset period may be, for example, within 1 s or within 50 ms before the exit of the auxiliary driving, which is not limited herein.
The exit moments corresponding to the user's manual exits are filtered out from all auxiliary driving exit moments, the remaining exit moments (i.e., those corresponding to automatic exits) are taken as the M auxiliary driving exit moments, and the reason for each automatic exit is determined by combining the map and/or the driving recorder images.
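The separation of manual and automatic exits described above can be illustrated as follows; the 1 s preset period, the function name, and the data shapes are assumptions made for the sketch:

```python
from typing import List, Tuple

def split_exits(exit_times: List[float],
                manual_op_times: List[float],
                window: float = 1.0) -> Tuple[List[float], List[float]]:
    """Split auxiliary-driving exit times into manual and automatic.

    An exit counts as manual if a user exit-operation record exists within
    `window` seconds before it; all remaining exits are the M automatic exits.
    """
    manual, automatic = [], []
    for t in exit_times:
        if any(0 <= t - op <= window for op in manual_op_times):
            manual.append(t)
        else:
            automatic.append(t)
    return manual, automatic

exits = [100.0, 250.0, 400.0]
ops = [99.5]                      # user pressed the exit control at t = 99.5
manual, auto = split_exits(exits, ops)
print(manual)  # [100.0]
print(auto)    # [250.0, 400.0]
```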
For each of the M auxiliary driving exit moments, in order to determine the corresponding exit reason, one or more target shot images corresponding to that exit moment may be determined from the video recorded by the driving recorder; each target shot image contains a road image of the position where the vehicle was located at the exit moment, and recognizing the target shot images yields a road condition recognition result, which may include, but is not limited to, unclear lane lines, missing lane lines, construction ahead, an impassable road, a curve angle, and the like. In addition, the target road state at the vehicle position at the exit moment may be determined from the map data set; the target road state may include lane distribution, road construction, and the like, and recognizing the target road state yields a road condition recognition result that may include, but is not limited to, whether lanes merge or diverge, whether a road is passable, a curve angle, and the like.
The road condition recognition of the target shot image and/or the target road state may be performed by an AI data engine, combining image analysis, the driving map segment, and map route-planning data (such as monitored traffic accidents, road maintenance and construction, tidal-lane peak adjustment, lane merging and diverging, and the like).
It should be appreciated that each type of auxiliary driving corresponds to preset exit reasons; for example, for lane keeping, the preset exit reasons may include, but are not limited to, unclear lane lines, missing lane lines, a curve angle greater than a threshold, and the like. In the embodiment of the present disclosure, the preset exit reasons corresponding to each auxiliary driving are matched with the respective road condition recognition result, and a successfully matched exit reason is taken as the final exit reason. Taking lane keeping as an example: based on its exit moment, road condition recognition is performed through the above steps on the target shot image and/or the target road state at that moment, and the road condition recognition result is compared with the preset exit reasons of lane keeping; if both contain unclear lane lines, unclear lane lines is taken as the final exit reason.
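The matching of preset exit reasons against the road condition recognition result can be sketched as a simple set intersection. The reason labels and the lane-keeping reason list below are illustrative placeholders, not values defined in the patent:

```python
from typing import Iterable, Set

# Hypothetical mapping from auxiliary-driving type to its preset exit reasons.
PRESET_EXIT_REASONS = {
    "lane_keeping": {"lane_line_unclear", "lane_line_missing",
                     "curve_angle_over_threshold"},
    # ... other auxiliary-driving types would have their own preset reasons
}

def match_exit_reason(assist_type: str,
                      recognition_results: Iterable[str]) -> Set[str]:
    """Return the preset exit reasons that also appear in the road-condition
    recognition result for this exit moment (empty set if nothing matches)."""
    preset = PRESET_EXIT_REASONS.get(assist_type, set())
    return preset & set(recognition_results)

# Recognition of the target image at the exit moment found an unclear lane line:
print(match_exit_reason("lane_keeping",
                        ["lane_line_unclear", "front_construction"]))
# {'lane_line_unclear'}
```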
(5) The number of occurrences corresponding to each exit cause
The exit reasons corresponding to each auxiliary driving exit moment can be obtained through the step (4), and the same exit reasons are counted to obtain the occurrence times corresponding to each exit reason.
(6) Mileage of auxiliary driving
Based on the map data set, the vehicle position at the auxiliary driving start time and the vehicle position at the auxiliary driving exit time of each auxiliary driving in the target course are acquired. Since the vehicle may start a plurality of auxiliary driving functions in the same period, for auxiliary driving whose periods overlap, the corresponding travel paths may overlap entirely or partially. Therefore, the overlapped road sections may be merged based on the vehicle position at the start time and the vehicle position at the exit time of each auxiliary driving, to ensure that the same road section is not accumulated repeatedly. The total distance of the merged road sections is then calculated as the mileage of auxiliary driving.
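One hedged sketch of this mileage calculation: assuming the map data set provides timestamped latitude/longitude samples, the merged (non-overlapping) assist-driving periods can be mapped onto the GPS trace and the path length summed with the haversine formula, so overlapping sections are counted only once. All names and the data layout are assumptions:

```python
import math
from typing import List, Tuple

def haversine_km(p: Tuple[float, float], q: Tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def assist_mileage_km(trace: List[Tuple[float, Tuple[float, float]]],
                      merged_periods: List[Tuple[float, float]]) -> float:
    """Sum path distance over a GPS trace of (time, (lat, lon)) samples,
    restricted to the merged auxiliary-driving periods."""
    total = 0.0
    for start, end in merged_periods:
        pts = [pos for t, pos in trace if start <= t <= end]
        total += sum(haversine_km(a, b) for a, b in zip(pts, pts[1:]))
    return total

# One degree of latitude is roughly 111.2 km:
trace = [(0, (0.0, 0.0)), (10, (0.5, 0.0)), (20, (1.0, 0.0))]
print(round(assist_mileage_km(trace, [(0, 20)]), 1))  # 111.2
```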
(7) The number of files of the automobile data recorder can be determined by counting the total number of video files recorded by the automobile data recorder in a target journey.
Further, in order to facilitate the user to understand the assisted driving situation in the target course, an assisted driving data analysis report corresponding to the target course may be generated based on the analysis result. Of course, the analysis report may also be displayed on the interactive interface.
It should be noted that, in some cases, the vehicle may not be able to obtain all the above data, and the calculated driving assistance parameters may also be different. For example, if all the data can be acquired, the parameter values of the 7 auxiliary driving parameters can be calculated; if the vehicle position at the start time of the auxiliary driving and the vehicle position at the exit time of the auxiliary driving cannot be obtained, the mileage of the auxiliary driving cannot be calculated; if the vehicle position at the start time of the auxiliary driving, the vehicle position at the exit time of the auxiliary driving, and the type of the auxiliary driving cannot be obtained, the mileage of the auxiliary driving and the number of times of executing each type of auxiliary driving cannot be calculated; if only the data set of the automobile data recorder can be obtained, only the number of files of the automobile data recorder can be calculated. Therefore, when generating a report, there is a difference in specific parameters in the report depending on the acquired data.
In the embodiment of the present disclosure, considering that the M auxiliary driving exits are all automatic rather than manually triggered by the user, in order to improve driving safety, the following steps may further be executed: recording the M road sections corresponding to the M auxiliary driving exit moments, so that reminding information is generated when the vehicle travels to any of the M road sections again, the reminding information being used to remind the user to take over auxiliary driving.
Specifically, for each assisted driving exit time, the corresponding road section may be determined by: the vehicle position at the exit of the assist drive is determined as the start point of the link, and the vehicle position at the next time the user manually starts the same assist drive is determined as the end point of the link. Of course, the corresponding road segments may also be determined by other means, without limitation. After the road sections corresponding to each auxiliary driving exit moment are determined, marking the road sections in a navigation system, and if the vehicle runs to the road sections next time, generating reminding information to remind a user whether the auxiliary driving needs to be exited for manual takeover.
Second kind: the preset data fusion mode is a second data fusion mode aiming at the automobile data recorder
In a specific implementation process, the fusion analysis step corresponding to the second data fusion mode may be: if the preset data fusion mode is a second data fusion mode aiming at a vehicle data recorder, screening the vehicle data recorder data set and the vehicle body data set from the plurality of data sets to serve as the N data sets; determining the shooting images of the target quantity corresponding to the target journey and shooting time of each shooting image from the data set of the automobile data recorder; and determining a target vehicle body parameter corresponding to the shooting time from the vehicle body data set aiming at the shooting time of each shooting image, and taking the target vehicle body parameter as the analysis result.
Specifically, the video shot by the driving recorder can be used in traffic accident analysis, and in order to restore the driving state of the vehicle in the target journey, in the embodiment of the specification, the vehicle body data can be correspondingly added in the video image of the driving recorder so as to intuitively display various data in the driving process.
In the embodiment of the present disclosure, the shooting images of the target number may be extracted from the video files included in the data set of the automobile data recorder, and the shooting time of each shooting image may be acquired. The target number may be set according to actual needs, and the captured images of the target number may be all images included in the video, or may be extracted every preset frame number, which is not limited herein. For each photographing moment, determining a target vehicle body parameter acquired at the photographing moment from a vehicle body data set, wherein the target vehicle body parameter comprises one or more of the following parameters: driving gear, braking information and body light information of the vehicle. The braking information comprises, but is not limited to, braking time and braking force, and when the vehicle is an electric vehicle, the braking information can also comprise automatic energy recovery data of the electric vehicle; the body light information includes, but is not limited to, a low beam state, a high beam state, a left turn light state, a right turn light state, a double flash state.
Further, based on the shooting time of each shooting image, synthesizing each shooting image with the corresponding target vehicle body parameter, so as to simultaneously display the corresponding target vehicle body parameter when the video shot by the automobile data recorder is played on the interactive interface.
Specifically, the image shot at the same moment and the acquired target car body parameters are synthesized, so that real-time target car body parameters can be synchronously displayed in the video file.
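Matching each video frame to the body-data sample nearest in time, as described above, might be done as follows before compositing the overlay; the sampling layout and field names (gear, brake) are illustrative assumptions:

```python
import bisect
from typing import List, Any

def nearest_body_sample(frame_time: float,
                        body_times: List[float],
                        body_samples: List[Any]) -> Any:
    """Pick the body-data sample closest in time to a video frame, so gear,
    braking and light states can be overlaid on that frame.

    `body_times` must be sorted ascending and parallel to `body_samples`.
    """
    i = bisect.bisect_left(body_times, frame_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(body_times)]
    j = min(candidates, key=lambda k: abs(body_times[k] - frame_time))
    return body_samples[j]

times = [0.0, 0.5, 1.0, 1.5]                 # body-data sample timestamps (s)
samples = [{"gear": "D", "brake": 0}, {"gear": "D", "brake": 20},
           {"gear": "D", "brake": 60}, {"gear": "D", "brake": 0}]
print(nearest_body_sample(1.1, times, samples))  # {'gear': 'D', 'brake': 60}
```

The actual frame compositing (drawing the parameters onto each image) would then use any image library; only the timestamp alignment is sketched here.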
It should be noted that, for an electric vehicle, the unique energy recovery mode of the electric vehicle may cause a change in driving mode and driving habit of a driver, and some drivers may be unfamiliar with the driving mode of the electric vehicle to perform misoperation, thereby causing traffic accidents. Based on the method, if target car body parameters such as braking information are added in the video of the automobile data recorder, details of the accident can be more intuitively disclosed, and the accident cause can be determined.
Third kind: the preset data fusion mode is a third data fusion mode aiming at vehicle collision
In a specific implementation process, the fusion analysis step corresponding to the third data fusion mode may be: if the preset data fusion mode is a third data fusion mode aiming at vehicle collision, screening the vehicle body data set, the automobile data recorder data set and the map data set from the plurality of data sets to serve as the N data sets; determining the collision time of the collision of the vehicle from the vehicle body data set; determining a collision place corresponding to the collision time from the map data set based on the collision time; based on the collision moment, determining a collision video corresponding to the collision moment from the data set of the automobile data recorder; and the analysis result is a collision place corresponding to the collision moment and the collision video.
Specifically, the N data sets corresponding to the third data fusion method include a vehicle body data set, a vehicle recorder data set, and a map data set. When a vehicle collides, the vehicle sends out a collision signal, and the collision time corresponding to the collision signal is recorded in a vehicle body data set.
The collision time of the collision of the vehicle can be determined through the vehicle body data set, and the collision place corresponding to the collision time is determined in the map data set so as to be used for marking the subsequent collision place.
Meanwhile, during driving, if a collision signal is detected, the driving recorder may perform emergency recording; the duration of the emergency recorded video may be set according to actual needs, for example, a video covering the period from 6 s before the collision to 12 s after the collision is recorded, and the emergency recorded video is stored into the driving recorder data set as the collision video. Alternatively, the collision video may be obtained by processing, such as cutting and splicing, the video normally shot by the driving recorder, which is not limited herein. In the embodiment of the present disclosure, after the collision time is determined, the corresponding collision video may be determined from the driving recorder data set according to the collision time.
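The emergency-recording window (e.g. 6 s before to 12 s after the collision) reduces to a clamped time interval; the default values follow the example above, while the function name and trip-bound handling are assumptions:

```python
from typing import Optional, Tuple

def collision_clip(collision_time: float,
                   pre: float = 6.0,
                   post: float = 12.0,
                   trip_start: float = 0.0,
                   trip_end: Optional[float] = None) -> Tuple[float, float]:
    """Clip window for an emergency recording: `pre` seconds before the
    collision to `post` seconds after, clamped to the trip bounds."""
    start = max(trip_start, collision_time - pre)
    end = collision_time + post
    if trip_end is not None:
        end = min(end, trip_end)
    return start, end

print(collision_clip(100.0))  # (94.0, 112.0)
print(collision_clip(3.0))    # (0.0, 15.0) -- clamped at the trip start
```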
Further, a time axis corresponding to the target journey, a driving route corresponding to the target journey and video information shot by a driving recorder corresponding to the target journey are displayed on the interactive interface, and when visual display is carried out, the collision place is marked on the driving route; and playing the collision video on the interactive interface when the time axis is positioned to the collision moment.
Specifically, the driving route may be a route in which the target trip is highlighted on a map screen; to facilitate viewing of the collision location, the collision location may be marked on the driving route. When the driving recorder can collect video information at K angles, the playing time of the video information may be aligned with the time axis, so that as time advances the video corresponding to that time is played; when the time axis is positioned at the collision moment, the corresponding collision video is played.
In order to better understand the application scenarios extended by the data fusion in the embodiments of the present disclosure, several application scenarios are specifically described below.
1. Map visual data linkage scene
In this scenario, a map screen corresponding to the target course may be displayed on the interactive interface, and the starting position of the vehicle, the driving route, and the arrival position of the vehicle may be displayed on the map screen. In addition, the real-time speed of the vehicle in the target journey can be displayed on the interactive interface, and the driving route can be visually distinguished according to the speed, for example, the driving route part with the speed smaller than the preset speed is marked with red to indicate slow progress, and the driving route part with the speed larger than or equal to the preset speed is marked with green to indicate fast and smooth traffic.
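The speed-based visual distinction of the driving route can be sketched as tagging each route sample with a color according to a preset speed threshold; the threshold value and the data layout are illustrative assumptions:

```python
from typing import List, Tuple

def color_route(points: List[Tuple[float, float, float]],
                threshold_kmh: float = 20.0) -> List[Tuple[float, float, str]]:
    """Tag each (lat, lon, speed_kmh) route sample red (slow progress) or
    green (fast and smooth) by comparing its speed with the threshold."""
    return [(lat, lon, "red" if v < threshold_kmh else "green")
            for lat, lon, v in points]

route = [(31.2, 121.4, 12.0), (31.21, 121.41, 55.0)]
print(color_route(route))
# [(31.2, 121.4, 'red'), (31.21, 121.41, 'green')]
```

The map layer would then draw each route segment in its tagged color.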
In this scenario, the auxiliary driving analysis result may also be displayed, such as an auxiliary driving start time, an auxiliary driving exit time, an auxiliary driving road segment, and the like. While also providing data linkage to view detailed auxiliary driving data analysis reports for the target trip.
2. Intelligent driving behavior data report analysis scene
In this scenario, the auxiliary driving situation of the target journey is mainly counted, for example, the auxiliary driving data analysis report is generated through data fusion analysis, and the analysis report is sent to the user for viewing.
3. Driving record and map visual scene
In this scenario, a map screen corresponding to the target journey may be displayed on the interactive interface, and the starting position, the driving route, and the arrival position of the vehicle may be displayed on the map screen. Meanwhile, the video shot by the corresponding driving recorder may be viewed, and when a collision occurs, the collision location may be marked on the driving route. In addition, a driving recorder video/image list may be displayed; the user may enter a detail view page by selecting a single video/image, view details through gesture zooming, and control the playing of the video by dragging the video progress bar, so that auxiliary evidence before and after an accident can be found conveniently and quickly.
4. Periodic data report analysis scenario
In this scenario, a periodic user data report may be generated from the various data corresponding to the target trips, where the data report includes, but is not limited to, auxiliary-driving-related data, vehicle electricity or fuel consumption, vehicle electricity consumption distribution (such as air-conditioning, entertainment system, and driving electricity consumption), the number of emergency recordings of the driving recorder, the number of emergency accelerations and their road sections, the number of emergency decelerations and their road sections, and the like, so that the user can understand the vehicle and driving conditions more clearly.
As shown in fig. 2, a schematic view of a visual interactive interface provided in an embodiment of the present disclosure is shown in fig. 2, where the interactive interface displays a time axis of a target trip, a map screen corresponding to the target trip, and a driving route of the target trip is displayed on the map screen, and a "bump" marked on the driving route indicates that a collision occurs at the location. Meanwhile, basic information of the target journey is displayed on the interactive interface, wherein the basic information comprises a starting point position, an end point position and a total mileage, the starting point position is X, the end point position is Y, and the total mileage is Z in FIG. 2. As shown in fig. 2, the "assisted driving" in fig. 2 may be a viewing portal associated with the assisted driving report, and selecting the "assisted driving" may view the assisted driving report of the target course. The "record file" in fig. 2 may be a viewing entry associated with the vehicle event recorder video presentation, and selecting the "record file" may view a detailed video file. The "viewable file" in fig. 2 may be a view portal associated with the crash video, and selection of the "viewable file" may view the emergency recorded crash video. In addition, videos shot by the automobile data recorder under different angles are displayed on the interactive interface, and as shown in fig. 2, videos shot under 5 shooting angles (shooting angles 1-5) can be displayed.
In summary, the solution in the embodiments of the present disclosure can implement fusion between multiple types of data, and provide enabling and data multiplexing analysis for upper layer applications.
Based on the same inventive concept, the embodiment of the present disclosure further provides a driving data processing device, as shown in fig. 3, where the device includes:
an obtaining module 301, configured to obtain multiple data sets collected by a vehicle in a target journey, where the multiple data sets include an auxiliary driving data set, a vehicle body data set, a vehicle recorder data set, and a map data set of the vehicle;
the analysis module 302 is configured to screen N data sets corresponding to a preset data fusion manner from the multiple data sets based on the preset data fusion manner, and perform data fusion analysis based on the N data sets to obtain an analysis result, where N is an integer greater than 1;
and the visualization module 303 is configured to visually display the analysis result on an interaction interface corresponding to the vehicle.
Optionally, the analysis module 302 is configured to:
if the preset data fusion mode is a first data fusion mode aiming at auxiliary driving, screening the auxiliary driving data set, the automobile data recorder data set and the map data set from the plurality of data sets to serve as the N data sets;
And analyzing the auxiliary driving data set, the vehicle recorder data set and the map data set based on the preset auxiliary driving parameters corresponding to the first data fusion mode to obtain parameter values corresponding to the preset auxiliary driving parameters as the analysis result.
Optionally, the preset driving assistance parameters include one or more of the following parameters: the total duration of auxiliary driving, the total number of times all types of auxiliary driving are performed, the number of times each type of auxiliary driving is performed, the exit reason of auxiliary driving, the number of occurrences of each exit reason, the mileage of auxiliary driving, and the number of files of the driving recorder.
Optionally, the preset driving assistance parameter includes the driving assistance exit reason, and the analysis module 302 is configured to:
determining M auxiliary driving exit moments from the auxiliary driving data set, wherein the vehicle automatically exits corresponding auxiliary driving at the M auxiliary driving exit moments, and M is a positive integer;
determining a target shooting image corresponding to each of the M driving support exit moments from the data set of the automobile data recorder, and/or determining a target road state of a road where the automobile is located, corresponding to each driving support exit moment from the data set of the map;
Aiming at each auxiliary driving exit moment, carrying out road condition recognition on a target shooting image corresponding to the auxiliary driving exit moment and/or a target road state to obtain a road condition recognition result corresponding to the auxiliary driving exit moment;
and matching the preset exit reason of the auxiliary driving with the road condition recognition result corresponding to the exit time of the auxiliary driving aiming at the auxiliary driving corresponding to the exit time of each auxiliary driving, and taking the successfully matched exit reason as the exit reason corresponding to the exit time of the auxiliary driving.
Optionally, the apparatus further comprises:
the recording module is used for recording M road sections corresponding to the M auxiliary driving exit moments so as to generate reminding information when the vehicle runs to the M road sections again, and the reminding information is used for reminding a user to take over auxiliary driving.
Optionally, the apparatus further comprises:
and the report generation module is used for generating an auxiliary driving data analysis report corresponding to the target journey based on the analysis result.
Optionally, the analysis module 302 is configured to:
if the preset data fusion mode is a second data fusion mode aiming at a vehicle data recorder, screening the vehicle data recorder data set and the vehicle body data set from the plurality of data sets to serve as the N data sets;
Determining the shooting images of the target quantity corresponding to the target journey and shooting time of each shooting image from the data set of the automobile data recorder;
and determining a target vehicle body parameter corresponding to the shooting time from the vehicle body data set aiming at the shooting time of each shooting image, and taking the target vehicle body parameter as the analysis result.
Optionally, the target body parameters include one or more of the following: the driving gear, braking information, and body light information of the vehicle.
Optionally, a visualization module 303 is configured to:
and combining each shot image with the corresponding target vehicle body parameter based on the shooting time of each shot image so as to simultaneously display the corresponding target vehicle body parameter when the video shot by the automobile data recorder is played on the interactive interface.
Optionally, the analysis module 302 is configured to:
if the preset data fusion mode is a third data fusion mode aiming at vehicle collision, screening the vehicle body data set, the automobile data recorder data set and the map data set from the plurality of data sets to serve as the N data sets;
Determining the collision time of the collision of the vehicle from the vehicle body data set;
determining a collision place corresponding to the collision time from the map data set based on the collision time;
based on the collision moment, determining a collision video corresponding to the collision moment from the data set of the automobile data recorder;
and the analysis result is a collision place corresponding to the collision moment and the collision video.
Optionally, the interactive interface displays a time axis corresponding to the target journey, a driving route corresponding to the target journey and video information shot by a driving recorder corresponding to the target journey;
a visualization module 303, configured to mark the collision location on the driving route; and playing the collision video on the interactive interface when the time axis is positioned to the collision moment.
The specific manner in which the respective modules perform the operations in the apparatus of the above embodiment has been described in detail in the above embodiment of the driving data processing method, and will not be described in detail herein.
Based on the same inventive concept, the embodiments of the present disclosure further provide an electronic device, as shown in fig. 4, including a memory 808, a processor 802, and a computer program stored on the memory 808 and executable on the processor 802, where the processor 802 implements the steps of any of the above-described driving data processing methods when executing the program.
In FIG. 4, a bus architecture is represented by bus 800. Bus 800 may comprise any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 802, and memory, represented by memory 808. Bus 800 may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. Bus interface 806 provides an interface between bus 800 and the receiver 801 and transmitter 803. The receiver 801 and the transmitter 803 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 802 is responsible for managing the bus 800 and general processing, while the memory 808 may be used to store data used by the processor 802 in performing operations.
Based on the same inventive concept, the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any one of the above-described driving data processing methods.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (14)

1. A driving data processing method, characterized by comprising the following steps:
acquiring a plurality of data sets collected by a vehicle during a target journey, wherein the plurality of data sets comprise an auxiliary driving data set, a vehicle body data set, an automobile data recorder data set, and a map data set of the vehicle;
screening, based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode from the plurality of data sets, and performing data fusion analysis based on the N data sets to obtain an analysis result, N being an integer greater than 1, which comprises: determining M auxiliary driving exit moments from the auxiliary driving data set, M being a positive integer; determining, from the automobile data recorder data set, a target captured image corresponding to each auxiliary driving exit moment, and/or determining, from the map data set, a target road state corresponding to each auxiliary driving exit moment; and, for each auxiliary driving exit moment, performing road condition recognition on the target captured image and/or the target road state corresponding to that exit moment to obtain a road condition recognition result, matching the road condition recognition result against preset exit reasons of the auxiliary driving corresponding to that exit moment, and taking the successfully matched exit reason as the exit reason corresponding to that auxiliary driving exit moment;
and visually displaying the analysis result on a corresponding interactive interface of the vehicle.
2. The method of claim 1, wherein the screening, based on the preset data fusion mode, N data sets corresponding to the preset data fusion mode from the plurality of data sets, and performing data fusion analysis based on the N data sets to obtain an analysis result comprises:
if the preset data fusion mode is a first data fusion mode for auxiliary driving, screening the auxiliary driving data set, the automobile data recorder data set, and the map data set from the plurality of data sets as the N data sets;
and analyzing the auxiliary driving data set, the automobile data recorder data set, and the map data set based on preset auxiliary driving parameters corresponding to the first data fusion mode, to obtain parameter values corresponding to the preset auxiliary driving parameters as the analysis result.
3. The method of claim 2, wherein the preset auxiliary driving parameters include one or more of the following: a total duration of auxiliary driving, a total number of times all types of auxiliary driving were performed, a number of times each type of auxiliary driving was performed, exit reasons of auxiliary driving, a number of occurrences of each exit reason, an auxiliary driving mileage, and a number of files recorded by the automobile data recorder.
4. The method of claim 3, wherein the preset auxiliary driving parameters include the exit reason of auxiliary driving, and the analyzing the auxiliary driving data set, the automobile data recorder data set, and the map data set comprises:
determining M auxiliary driving exit moments from the auxiliary driving data set, wherein the vehicle automatically exits the corresponding auxiliary driving at each of the M auxiliary driving exit moments, and M is a positive integer;
determining, from the automobile data recorder data set, a target captured image corresponding to each of the M auxiliary driving exit moments, and/or determining, from the map data set, a target road state of the road on which the vehicle is located at each auxiliary driving exit moment;
for each auxiliary driving exit moment, performing road condition recognition on the target captured image and/or the target road state corresponding to that exit moment, to obtain a road condition recognition result corresponding to that exit moment;
and, for the auxiliary driving corresponding to each auxiliary driving exit moment, matching preset exit reasons of the auxiliary driving against the road condition recognition result corresponding to that exit moment, and taking the successfully matched exit reason as the exit reason corresponding to that exit moment.
5. The method of claim 4, wherein, after the determining M auxiliary driving exit moments from the auxiliary driving data set, the method further comprises:
recording M road sections corresponding to the M auxiliary driving exit moments, so as to generate reminder information when the vehicle travels along the M road sections again, wherein the reminder information is used to remind a user to take over from auxiliary driving.
6. The method of claim 2, wherein, after the performing data fusion analysis based on the N data sets, the method further comprises:
generating an auxiliary driving data analysis report corresponding to the target journey based on the analysis result.
7. The method of claim 1, wherein the screening, based on the preset data fusion mode, N data sets corresponding to the preset data fusion mode from the plurality of data sets, and performing data fusion analysis based on the N data sets to obtain an analysis result comprises:
if the preset data fusion mode is a second data fusion mode for an automobile data recorder, screening the automobile data recorder data set and the vehicle body data set from the plurality of data sets as the N data sets;
determining, from the automobile data recorder data set, a target number of captured images corresponding to the target journey and a shooting time of each captured image;
and, for the shooting time of each captured image, determining a target vehicle body parameter corresponding to that shooting time from the vehicle body data set, and taking the target vehicle body parameters as the analysis result.
8. The method of claim 7, wherein the target vehicle body parameters include one or more of the following: a driving gear, braking information, and body light information of the vehicle.
9. The method of claim 7, wherein the visually displaying the analysis result on the corresponding interactive interface of the vehicle comprises:
combining each captured image with its corresponding target vehicle body parameter based on the shooting time of that image, so that the corresponding target vehicle body parameters are displayed simultaneously while the video captured by the automobile data recorder is played on the interactive interface.
10. The method of claim 1, wherein the screening, based on the preset data fusion mode, N data sets corresponding to the preset data fusion mode from the plurality of data sets, and performing data fusion analysis based on the N data sets to obtain an analysis result comprises:
if the preset data fusion mode is a third data fusion mode for vehicle collisions, screening the vehicle body data set, the automobile data recorder data set, and the map data set from the plurality of data sets as the N data sets;
determining, from the vehicle body data set, a collision moment at which the vehicle collides;
determining, from the map data set and based on the collision moment, a collision place corresponding to the collision moment;
determining, from the automobile data recorder data set and based on the collision moment, a collision video corresponding to the collision moment;
wherein the analysis result comprises the collision place and the collision video corresponding to the collision moment.
11. The method of claim 10, wherein the interactive interface displays a time axis corresponding to the target journey, a driving route corresponding to the target journey, and video information captured by the automobile data recorder during the target journey;
and the visually displaying the analysis result on the corresponding interactive interface of the vehicle comprises:
marking the collision place on the driving route; and playing the collision video on the interactive interface when the time axis is positioned at the collision moment.
12. A driving data processing apparatus, characterized by comprising:
an acquisition module, configured to acquire a plurality of data sets collected by a vehicle during a target journey, wherein the plurality of data sets comprise an auxiliary driving data set, a vehicle body data set, an automobile data recorder data set, and a map data set of the vehicle;
an analysis module, configured to screen, based on a preset data fusion mode, N data sets corresponding to the preset data fusion mode from the plurality of data sets, and perform data fusion analysis based on the N data sets to obtain an analysis result, N being an integer greater than 1, which comprises: determining M auxiliary driving exit moments from the auxiliary driving data set, M being a positive integer; determining, from the automobile data recorder data set, a target captured image corresponding to each auxiliary driving exit moment, and/or determining, from the map data set, a target road state corresponding to each auxiliary driving exit moment; and, for each auxiliary driving exit moment, performing road condition recognition on the target captured image and/or the target road state corresponding to that exit moment to obtain a road condition recognition result, matching the road condition recognition result against preset exit reasons of the auxiliary driving corresponding to that exit moment, and taking the successfully matched exit reason as the exit reason corresponding to that auxiliary driving exit moment;
and a visualization module, configured to visually display the analysis result on a corresponding interactive interface of the vehicle.
13. An electronic device, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising operation instructions for performing the method according to any one of claims 1-11.
14. A computer readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method steps of any one of claims 1-11.
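As an informal illustration (not part of the claimed subject matter), the exit-reason matching described in claims 1 and 4 might be sketched as follows in Python. The data structures, the `recognize_road_condition` stub, and the `PRESET_EXIT_REASONS` table are all hypothetical simplifications; the patent does not prescribe concrete formats for the data sets or the recognition step:

```python
from bisect import bisect_left

# Hypothetical table mapping recognized road conditions to preset exit
# reasons (claim 4: match the recognition result against preset reasons).
PRESET_EXIT_REASONS = {
    "sharp_curve": "curvature exceeds assisted-driving limit",
    "lane_lines_missing": "lane markings lost",
    "ramp": "vehicle entered a ramp",
}

def nearest_frame(frames, t):
    """Return the recorder frame whose timestamp is closest to exit moment t.

    `frames` is a list of (timestamp, image) tuples sorted by timestamp.
    """
    i = bisect_left([ts for ts, _ in frames], t)
    candidates = frames[max(0, i - 1): i + 1]
    return min(candidates, key=lambda f: abs(f[0] - t))

def recognize_road_condition(image, road_state):
    """Stub: a real system would run a vision model on the captured image
    and/or inspect the map road state; here the road state is passed through."""
    return road_state or "unknown"

def exit_reasons(exit_moments, frames, map_states):
    """For each auxiliary-driving exit moment: locate the nearest captured
    image, recognize the road condition, and look up the matching reason."""
    results = {}
    for t in exit_moments:
        _, image = nearest_frame(frames, t)
        condition = recognize_road_condition(image, map_states.get(t))
        results[t] = PRESET_EXIT_REASONS.get(condition, "unmatched")
    return results

frames = [(0.0, "img0"), (5.0, "img5"), (10.0, "img10")]
states = {4.8: "sharp_curve", 9.9: "lane_lines_missing"}
print(exit_reasons([4.8, 9.9], frames, states))
```

In a production system the recognition step would be a trained road-condition classifier and the lookup would likely be fuzzier than an exact dictionary match, but the control flow (exit moment → nearest evidence → recognition → preset-reason matching) follows the sequence recited in the claims.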
CN202210654589.1A 2022-06-10 2022-06-10 Driving data processing method and device, electronic equipment and storage medium Active CN115346362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210654589.1A CN115346362B (en) 2022-06-10 2022-06-10 Driving data processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115346362A CN115346362A (en) 2022-11-15
CN115346362B true CN115346362B (en) 2024-04-09

Family

ID=83948310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210654589.1A Active CN115346362B (en) 2022-06-10 2022-06-10 Driving data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115346362B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007293536A (en) * 2006-04-24 2007-11-08 Denso Corp Accident information collecting system and accident information recording device
WO2011062179A1 (en) * 2009-11-17 2011-05-26 富士通テン株式会社 Information processing device, in-vehicle device, information processing system, information processing method, and recording medium
CN104077819A (en) * 2014-06-17 2014-10-01 深圳前向启创数码技术有限公司 Remote monitoring method and system based on driving safety
CN108257250A (en) * 2018-01-25 2018-07-06 成都配天智能技术有限公司 Travelling data management method and automobile data recorder
CN108860165A (en) * 2018-05-11 2018-11-23 深圳市图灵奇点智能科技有限公司 Vehicle assistant drive method and system
WO2021155694A1 (en) * 2020-02-04 2021-08-12 腾讯科技(深圳)有限公司 Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium
CN113380065A (en) * 2021-06-24 2021-09-10 广汽埃安新能源汽车有限公司 Vehicle management method, system, device, electronic equipment and storage medium
WO2021248301A1 (en) * 2020-06-09 2021-12-16 华为技术有限公司 Self-learning method and apparatus for autonomous driving system, device, and storage medium
CN114492679A (en) * 2022-04-18 2022-05-13 北京远特科技股份有限公司 Vehicle data processing method and device, electronic equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104508719B (en) * 2012-07-17 2018-02-23 日产自动车株式会社 Drive assist system and driving assistance method
US10818110B2 (en) * 2018-04-06 2020-10-27 Nio Usa, Inc. Methods and systems for providing a mixed autonomy vehicle trip summary


Also Published As

Publication number Publication date
CN115346362A (en) 2022-11-15

Similar Documents

Publication Publication Date Title
DE102018104801A1 (en) SUPPORT FOR DRIVERS ON TRIP CHANGES
CN107004363B (en) Image processing device, on-vehicle display system, display device, and image processing method
CN104103100B (en) Driving behavior analysis system
CN111915915A (en) Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
EP3338252A1 (en) Devices, method and computer program for providing information about an expected driving intention
US11003925B2 (en) Event prediction system, event prediction method, program, and recording medium having same recorded therein
WO2021024798A1 (en) Travel assistance method, road captured image collection method, and roadside device
CN111386563B (en) Teacher data generation device
JP2023160848A (en) Vehicle recording device and operation situation recording program
CN115056649A (en) Augmented reality head-up display system, implementation method, equipment and storage medium
CN109166353A (en) Complex crossing guided vehicle road detection method and system in front of a kind of vehicle driving
DE112016003438T5 (en) DRIVING ASSISTANCE DEVICE
CN115346362B (en) Driving data processing method and device, electronic equipment and storage medium
Hamdane et al. Description of pedestrian crashes in accordance with characteristics of Active Safety Systems
Zhu et al. Automatic disengagement scenario reconstruction based on urban test drives of automated vehicles
JP7284951B2 (en) Monitoring support device, monitoring support program, and storage medium
CN114511834A (en) Method and device for determining prompt information, electronic equipment and storage medium
CN116469071A (en) Obstacle detection method, system, equipment and storage medium of shunting scene
CN115134491B (en) Image processing method and device
DE102019217752B4 (en) Method, computer program with instructions and device for gathering information about a person
KR102460623B1 (en) Apparatus and method for detecting no-entry vehicle using dual cameras
CN112669612B (en) Image recording and playback method, device and computer system
JP7432198B2 (en) Situation awareness estimation system and driving support system
JP2024144827A (en) Reproduction system, reproduction method, and reproduction program
CN116691514A (en) Method, device, vehicle and storage medium for displaying environment image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant