CN112287797A - Data processing method and device, electronic equipment and readable storage medium

Data processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN112287797A
Authority
CN
China
Prior art keywords
vehicle
driving
data
scene
recognition network
Prior art date
Legal status
Pending
Application number
CN202011147071.6A
Other languages
Chinese (zh)
Inventor
杨宏达
李国镇
卢美奇
李友增
戚龙雨
吴若溪
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202011147071.6A priority Critical patent/CN112287797A/en
Publication of CN112287797A publication Critical patent/CN112287797A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/008 Registering or indicating the working of vehicles communicating information to a remotely located station
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0808 Diagnosing performance data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a data processing method and apparatus, an electronic device, and a readable storage medium. The method includes: acquiring vehicle driving data; training an initial driving assistance recognition network according to a general training sample generated from the vehicle driving data; inputting the vehicle driving data into the resulting primary driving assistance recognition network to obtain the driving scene, output by that network, in which the vehicle driving data is located; and uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated. With this data processing method, special training samples are generated from the vehicle driving data, so the samples available for training the driving assistance recognition network can be increased.

Description

Data processing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method and apparatus, an electronic device, and a readable storage medium.
Background
At present, automobiles have become an indispensable means of transportation in daily life, and the safety problems that arise while vehicles are driven have drawn wide public concern.
A vehicle driving assistance system can judge the current driving scene of the vehicle from the vehicle driving information, and on that basis provide driving decisions and danger early warnings for the driver.
To enable a vehicle driving assistance system to judge the driving scene, the recognition model in the system generally needs to be trained with training samples generated from a large amount of vehicle driving data covering different scenes, so that the recognition model learns which information corresponds to dangerous situations and which corresponds to safe ones. Vehicle driving data from both common scenes and uncommon scenes are therefore very important for training the recognition model in a vehicle driving assistance system.
Disclosure of Invention
In view of the above, an object of the present application is to provide a data processing method and apparatus, an electronic device, and a readable storage medium, so as to increase the number of training samples and thereby improve the training accuracy of the recognition model in a vehicle driving assistance system.
In a first aspect, an embodiment of the present application provides a data processing method applied to an onboard processor, the method including:
acquiring vehicle driving data generated while a target vehicle is driving;
training an initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate an under-trained primary driving assistance recognition network;
inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located;
uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, wherein the vehicle driving data is at least one of the following: position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, state information of the driver of the vehicle, and danger warning data.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, wherein the vehicle driving data includes position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, and state information of the driver of the vehicle;
the inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located includes:
detecting, by the primary driving assistance recognition network and according to the vehicle driving data, whether a preset dangerous event occurs on the target vehicle;
if the primary driving assistance recognition network detects that a preset dangerous event occurs on the target vehicle, judging whether there is a preceding vehicle and whether the distance between the target vehicle and the preceding vehicle is smaller than a preset distance;
if there is a preceding vehicle and the distance between the target vehicle and the preceding vehicle is smaller than the preset distance, determining that the driving scene in which the vehicle driving data is located is a dangerous scene.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, wherein the preset dangerous event includes any one or more of the following: a collision at the front of the vehicle, sudden deceleration of the vehicle, a sudden lane change of the vehicle, and a dangerous driving state of the driver in the vehicle.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, wherein the vehicle driving data further includes danger warning data;
the uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, includes:
uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the preset dangerous event occurring on the target vehicle and the danger warning data.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation manner of the first aspect, wherein the general training sample is obtained through the following step:
uploading the driving scene in which the vehicle driving data is located to a cloud server, so that a general training sample is generated according to the vehicle driving data and that driving scene.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, wherein after training the initial driving assistance recognition network according to the general training sample generated from the vehicle driving data to generate the under-trained primary driving assistance recognition network, the method further includes:
judging whether a preceding-vehicle switching event occurs;
if no preceding-vehicle switching event occurs, detecting whether the data waveform of the vehicle driving data jitters;
if the data waveform of the vehicle driving data jitters, uploading the vehicle driving data to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the vehicle driving data, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
In a second aspect, an embodiment of the present application further provides a data processing apparatus, including:
the acquisition module is used for acquiring vehicle driving data generated while a target vehicle is driving;
the training module is used for training an initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate an under-trained primary driving assistance recognition network;
the input module is used for inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located;
the first uploading module is used for uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any one of the possible implementation manners of the first aspect.
The data processing method provided by the embodiment of the application includes: acquiring vehicle driving data generated while a target vehicle drives in a general driving environment and vehicle driving data generated while it drives in a special driving environment; training the initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate an under-trained primary driving assistance recognition network; inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the network, in which the vehicle driving data is located; and uploading the vehicle driving data and its driving scene to a cloud server, so that the real driving scene is determined from them and a special training sample is generated accordingly, the special training sample being used for training the primary driving assistance recognition network. With this method, the acquired vehicle driving data can generate special training samples for special driving environments, the pool of samples for training the driving assistance recognition network can be enlarged, the training accuracy of the recognition model in the vehicle driving assistance system can be improved, and a driving assistance recognition network trained with both the general and the special training samples can recognize scenes more accurately.
In the data processing method provided by the embodiment of the application, the driving scene output by the primary driving assistance recognition network is verified according to the temporal relation between the danger warning and the occurrence of the preset dangerous event, so that the real driving scene is obtained; by uploading the vehicle driving data in the cases where the network misjudged, the samples for training the driving assistance recognition network can be increased.
The data processing method provided by the embodiment of the application can also upload the vehicle driving data in the cases where the data waveform jitters although no preceding-vehicle switching event has occurred, which likewise increases the samples for training the driving assistance recognition network.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying figures.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a data processing method provided in an embodiment of the present application;
FIG. 2 is a flow chart of another data processing method provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram illustrating a data processing apparatus according to an embodiment of the present application;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Based on this, embodiments of the present application provide a data processing method, an apparatus, an electronic device, and a readable storage medium, which are described below by way of embodiments.
To facilitate understanding of the present embodiment, the data processing method disclosed in the embodiments of the present application is first described in detail. The flowchart of the data processing method shown in fig. 1 includes the following steps:
s101: acquiring vehicle driving data generated while a target vehicle is driving;
s102: training the initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate an under-trained primary driving assistance recognition network;
s103: inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the network, in which the vehicle driving data is located;
s104: uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
In this data processing method, by increasing the number of training samples, special training samples can be generated for the primary driving assistance recognition network that already has a preliminary scene recognition capability, and the primary network is then further trained with these special training samples. The network trained in this way can recognize the real driving scene corresponding to vehicle driving data from uncommon scenes; compared with the initial driving assistance recognition network, which can only recognize the driving scene corresponding to vehicle driving data in common scenes, its recognition capability is stronger and its recognition more accurate, so the training precision of the driving assistance recognition network is improved.
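As an illustration of how steps S101 to S104 fit together, the following is a minimal, self-contained Python sketch of the loop. All class and function names, the placeholder decision rule, and the stub training and cloud-verification logic are illustrative assumptions for this sketch; the patent does not prescribe any concrete API.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    driving_data: dict   # e.g. {"speed": 22.0, "distance_to_front": 6.5}
    scene: str           # "normal" or "dangerous"

class CloudServer:
    """Stand-in for the cloud server that verifies scenes and stores samples."""
    def __init__(self):
        self.special_samples = []

    def upload(self, driving_data, predicted_scene):
        # The real scene would be determined here by other algorithms or manual
        # labeling (see below); this stub simply accepts the prediction.
        real_scene = predicted_scene
        self.special_samples.append(Sample(driving_data, real_scene))

class PrimaryNetwork:
    """Stand-in for the (initially untrained) recognition network."""
    def train(self, samples):
        pass  # placeholder: a real implementation would update network weights

    def predict(self, driving_data):
        # placeholder decision rule echoing S1031-S1033
        return "dangerous" if driving_data.get("distance_to_front", 1e9) < 10 else "normal"

cloud = CloudServer()
network = PrimaryNetwork()
network.train([Sample({"speed": 30.0, "distance_to_front": 50.0}, "normal")])  # S102
data = {"speed": 22.0, "distance_to_front": 6.5}                               # S101
scene = network.predict(data)                                                  # S103
cloud.upload(data, scene)                                                      # S104
network.train(cloud.special_samples)  # further training with the special samples
```

In a real deployment the prediction stub would be the primary recognition network itself, and the cloud stub would apply the verification logic described later in this section.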
It should be noted that the data processing method provided in the embodiment of the present application may be applied to a vehicle equipped with an Advanced Driver Assistance System (ADAS), and is executed on an onboard processor.
Therefore, in step S101, the target vehicle refers to a vehicle on which an ADAS has been installed, and the sensors in the ADAS mainly refer to aftermarket sensors, i.e., sensors that were not installed at the factory. The target vehicle may be an autonomous vehicle or an ordinary vehicle (i.e., one that requires a human driver).
The vehicle driving data may include at least one of the following: position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, and state information of the driver of the vehicle.
Specifically, the position data of the vehicle may be positioning data acquired by the Global Positioning System (GPS); the driving speed of the vehicle may include speed data for both forward and backward travel; the driving posture of the vehicle mainly refers to motion attitude data measured by an Inertial Measurement Unit (IMU), such as the three-axis attitude angles and accelerations; and the state information of the vehicle driver mainly refers to the driver's body state information and facial state information while driving.
Alternatively, the vehicle driving data may include at least one of the following: position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, state information of the driver of the vehicle, and danger warning data.
The position data, driving speed, driving posture, and driver state information are the same as those described above and are not repeated here.
The danger warning data mainly refers to the warnings that the ADAS issues, based on the data above, to alert the driver that the vehicle may currently be in a dangerous driving scene; it may include information such as the warning content and the warning time.
The vehicle driving data can be collected by devices such as the vehicle-mounted camera, vehicle-mounted sensors, the IMU, and the GPS receiver, and the onboard processor acquires the data from these devices.
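For illustration, one possible in-memory layout for a single frame of vehicle driving data, covering the fields listed above, could look as follows; the field names, types, and units are assumptions made for this sketch, not definitions from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VehicleDrivingData:
    gps_position: Tuple[float, float]           # (latitude, longitude) from the GPS
    speed_mps: float                            # signed: positive forward, negative backward
    attitude_deg: Tuple[float, float, float]    # (roll, pitch, yaw) from the IMU
    accel_mps2: Tuple[float, float, float]      # three-axis acceleration from the IMU
    driver_state: str                           # e.g. "attentive", "drowsy", "distracted"
    danger_warnings: List[dict] = field(default_factory=list)  # {"content": ..., "time": ...}

frame = VehicleDrivingData(
    gps_position=(39.9042, 116.4074),
    speed_mps=16.7,
    attitude_deg=(0.5, -1.2, 88.0),
    accel_mps2=(0.1, -2.8, 9.8),
    driver_state="attentive",
)
```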
In practice, the driving environment of a vehicle can be divided into common scenes and uncommon scenes. Common scenes mainly refer to driving scenes that occur frequently (such as a low-speed rear-end collision in a traffic jam, or driving straight at high speed), and they account for the majority of the driving process. Uncommon scenes mainly refer to driving scenes that occur rarely (such as a rear-end collision in rain or snow, or colliding with the vehicle ahead while backing), and may also include extremely rare driving scenes such as driving on rugged mountain roads or driving at night without lighting.
The general training sample refers to a sample from which the primary driving assistance recognition network can learn to identify the driving scene corresponding to vehicle driving data in common scenes.
In step S102, the initial driving assistance recognition network is the recognition module disposed in the ADAS. It may be an untrained neural network and cannot yet output the real driving scene corresponding to the vehicle driving data.
The primary driving assistance recognition network refers to an under-trained neural network: it already has the capability of outputting, from the vehicle driving data, the driving scene in which that data is located. However, the driving scene it outputs is not always correct, i.e., the output scene may not be the real driving scene, because training samples generated from the real driving scenes of the vehicle driving data are still lacking. Therefore, for the primary driving assistance recognition network to be able to output the real driving scene from the vehicle driving data, it needs to be trained further.
The above-mentioned traveling scene may include a dangerous scene and a normal scene.
In a specific implementation, the general training sample can be obtained by the following steps:
and uploading the vehicle driving data and the driving scene where the vehicle driving data is located to a cloud server, so that the cloud server generates a general training sample according to the vehicle driving data and the driving scene where the vehicle driving data is located.
Here, the driving scene in which the vehicle driving data is located may be determined by other algorithms capable of identifying it, or by manual labeling, as sketched below.
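A minimal sketch of how the cloud side could assemble general training samples follows; the label_scene function is a hypothetical placeholder standing in for the "other algorithms or manual labeling" mentioned above.

```python
def label_scene(driving_data: dict) -> str:
    """Hypothetical placeholder for another scene-identification algorithm
    or a manual annotation step."""
    return "dangerous" if driving_data.get("distance_to_front", 1e9) < 10 else "normal"

def build_general_samples(uploaded_frames: list) -> list:
    # a general training sample pairs the driving data with its scene label
    return [(frame, label_scene(frame)) for frame in uploaded_frames]

samples = build_general_samples([{"distance_to_front": 6.0}, {"distance_to_front": 42.0}])
# -> [({'distance_to_front': 6.0}, 'dangerous'), ({'distance_to_front': 42.0}, 'normal')]
```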
In step S103, the primary driving assistance recognition network may output, based on the input vehicle driving data, the driving scene in which that data is located. The driving scene may be a normal scene or a dangerous scene.
In one possible embodiment, the vehicle driving data may include position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, and state information of the driver of the vehicle.
When executing step S103, the following steps may be performed:
s1031: detecting, by the primary driving assistance recognition network and according to the vehicle driving data, whether a preset dangerous event occurs on the target vehicle;
s1032: if the primary driving assistance recognition network detects that a preset dangerous event occurs on the target vehicle, judging whether there is a preceding vehicle and whether the distance between the target vehicle and the preceding vehicle is smaller than a preset distance;
s1033: if there is a preceding vehicle and the distance between the target vehicle and the preceding vehicle is smaller than the preset distance, determining that the driving scene in which the vehicle driving data is located is a dangerous scene.
In step S1031, the preset dangerous event may include any one or more of the following: a collision at the front of the vehicle, sudden deceleration of the vehicle, a sudden lane change of the vehicle, and a dangerous driving state of the driver in the vehicle.
For a collision at the front of the vehicle, a collision detection algorithm can be used. Specifically, the collision detection algorithm may detect whether the front of the vehicle collides, and the direction of the collision, from the image information of the area in front of the vehicle acquired by the vehicle-mounted camera, together with the IMU data and GPS data.
A vehicle behaviour detection algorithm may be used to detect sudden deceleration or a sudden lane change. Specifically, whether the vehicle suddenly decelerates or suddenly changes lanes can be detected from the image information in front of the vehicle acquired by the vehicle-mounted camera, together with the IMU data and GPS data, as sketched below.
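As a simple illustration of such behaviour detection, the sketch below flags sudden deceleration and sudden lane changes from a single IMU reading using fixed thresholds; the threshold values are assumptions for this example and are not taken from the patent.

```python
HARD_BRAKE_MPS2 = -4.0    # longitudinal deceleration considered "sudden" (assumed)
LANE_CHANGE_MPS2 = 3.0    # lateral acceleration considered a "sudden lane change" (assumed)

def detect_vehicle_behaviour(longitudinal_accel: float, lateral_accel: float) -> list:
    """Return the preset dangerous events suggested by one IMU reading."""
    events = []
    if longitudinal_accel <= HARD_BRAKE_MPS2:
        events.append("sudden_deceleration")
    if abs(lateral_accel) >= LANE_CHANGE_MPS2:
        events.append("sudden_lane_change")
    return events

print(detect_vehicle_behaviour(-5.2, 0.4))   # ['sudden_deceleration']
print(detect_vehicle_behaviour(-0.8, 3.6))   # ['sudden_lane_change']
```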
For the event that the driver is in a dangerous driving state, the in-vehicle camera can be used to acquire image information inside the vehicle, and a deep neural network model can detect the driver's body features, facial features, and current state.
In step S1032, when the primary driving assistance recognition network detects that one or more of the preset dangerous events occur at the front of the target vehicle, it further detects whether there is a preceding vehicle in front of the target vehicle and whether the distance between them is smaller than the preset distance.
In step S1033, if there is a preceding vehicle and the distance between the target vehicle and the preceding vehicle is smaller than the preset distance, the driving scene in which the vehicle driving data is located is determined to be a dangerous scene.
Steps S1031 to S1033 mainly enable the primary driving assistance recognition network to recognize dangerous scenes and to obtain the vehicle driving data corresponding to them; the decision rule is sketched below.
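The S1031 to S1033 rule can be transcribed almost directly into code; the 10 m preset distance below is an illustrative assumption, since the patent only speaks of a "preset distance".

```python
PRESET_DISTANCE_M = 10.0   # assumed value for the "preset distance"

def classify_scene(danger_event_detected: bool,
                   front_vehicle_present: bool,
                   distance_to_front_m: float) -> str:
    # S1031: a preset dangerous event must have been detected first
    if not danger_event_detected:
        return "normal"
    # S1032/S1033: dangerous only if a preceding vehicle is closer than the preset distance
    if front_vehicle_present and distance_to_front_m < PRESET_DISTANCE_M:
        return "dangerous"
    return "normal"

print(classify_scene(True, True, 6.5))    # dangerous
print(classify_scene(True, True, 25.0))   # normal
```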
As for obtaining normal scenes and the corresponding vehicle driving data: when the primary driving assistance recognition network does not detect any preset dangerous event at the front of the target vehicle, vehicle driving data can be sampled randomly at regular intervals, and the primary driving assistance recognition network judges from the input vehicle driving data whether there is a preceding vehicle and whether the distance between the target vehicle and the preceding vehicle is smaller than the preset distance.
If the primary driving assistance recognition network judges from the input vehicle driving data that there is a preceding vehicle but the distance between the target vehicle and the preceding vehicle is not less than the preset distance, the driving scene in which the vehicle driving data is located is determined to be a normal scene.
Through step S103, the primary driving assistance recognition network can thus output dangerous scenes with the corresponding vehicle driving data, and normal scenes with the corresponding vehicle driving data. It should be noted that the output of step S103 is not always correct; misjudgment may occur, i.e., a normal scene may be judged as dangerous, or a dangerous scene as normal.
In a specific implementation, other algorithms can be used to preliminarily check the driving scene output by the primary driving assistance recognition network, so as to judge whether misjudgment has occurred. In practice, most results of such a preliminary check are correct, but misjudgment may still occur.
Therefore, in step S104, the vehicle driving data and the driving scene in which it is located can be uploaded to the cloud server, so that the cloud server determines the real driving scene in which the vehicle driving data is located according to the data and the uploaded scene.
In a specific implementation, the cloud server can determine the real driving scene of the vehicle driving data by certain algorithms, or by manual labeling.
In a feasible implementation manner, the vehicle driving data may further include danger warning data, so that the real driving scene in which the vehicle driving data is located can be determined according to the preset dangerous event occurring on the target vehicle and the danger warning data.
In a specific implementation, it can first be judged whether a danger warning occurred before the preset dangerous event occurred on the target vehicle.
The judgment can be made from the occurrence time of the preset dangerous event and the occurrence time of the danger warning.
If a danger warning occurred before the preset dangerous event occurred on the target vehicle, the real driving scene in which the vehicle driving data is located is judged to be a dangerous scene.
If no danger warning occurred before the preset dangerous event occurred on the target vehicle, the result that the driving scene output by the primary driving assistance recognition network is a dangerous scene is a misjudgment, i.e., a normal scene was misjudged as a dangerous scene.
In a specific implementation, it can also be judged whether a preset dangerous event is detected within a period of time after a danger warning occurs; if no preset dangerous event is detected within that period, the result that the driving scene output by the primary driving assistance recognition network is a normal scene is a misjudgment, i.e., a dangerous scene was misjudged as a normal scene.
After the real driving scene in which the vehicle driving data is located is obtained, the cloud server can generate a special training sample from the vehicle driving data and its real driving scene; the special training sample is used for training the primary driving assistance recognition network. The special training sample refers to a sample from which the primary driving assistance recognition network can learn to identify the driving scene corresponding to vehicle driving data in uncommon scenes.
The primary driving assistance recognition network trained with the special training samples can recognize the real driving scene from the vehicle driving data, so the recognition accuracy of the network can be improved.
In some feasible embodiments, it can further be judged whether the interval between the occurrence time of the danger warning and the time at which the preset dangerous event is detected is greater than a preset interval. If the interval is smaller than the preset interval, the danger warning was not issued in time or left the driver too little reaction time, so the vehicle driving data in this case can also be uploaded to the cloud server to generate a special training sample; the special training sample is used for training the primary driving assistance recognition network. These timing checks are consolidated in the sketch below.
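The following is a consolidated sketch of these timing checks, assuming the interpretation above. The preset reaction interval and the convention that a missing time is represented by None (e.g. no event detected within the observation window) are assumptions of this example.

```python
def verify_scene(predicted_scene: str, alarm_time=None, event_time=None,
                 preset_interval_s: float = 2.0):
    """Return (real_scene, upload_for_special_sample).

    event_time is None when no preset dangerous event was detected within the
    observation window; alarm_time is None when no danger warning was issued.
    """
    if event_time is not None:
        if alarm_time is not None and alarm_time < event_time:
            # warning preceded the event: the dangerous scene is confirmed,
            # but a late warning (short reaction time) still gets uploaded
            return "dangerous", (event_time - alarm_time) < preset_interval_s
        # event detected without a prior warning: "dangerous" was a misjudgment
        return "normal", True
    if alarm_time is not None:
        # warning issued but no event followed: "normal" was a misjudgment
        return "dangerous", True
    return predicted_scene, False

print(verify_scene("dangerous", alarm_time=None, event_time=5.0))   # ('normal', True)
print(verify_scene("normal", alarm_time=3.0, event_time=None))      # ('dangerous', True)
print(verify_scene("dangerous", alarm_time=1.0, event_time=5.0))    # ('dangerous', False)
```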
In a specific implementation, the general training samples and the special training samples can be used together to train the primary driving assistance recognition network, and the network trained in this way can judge driving scenes in uncommon scenes more accurately.
As shown in fig. 2, an embodiment of the present application further provides another data processing method; specifically, after step S102 is executed, the method may further include:
s105: judging whether a preceding-vehicle switching event occurs;
s106: if no preceding-vehicle switching event occurs, detecting whether the data waveform of the vehicle driving data jitters;
s107: if the data waveform of the vehicle driving data jitters, uploading the vehicle driving data to a cloud server, so that a special training sample is generated from the vehicle driving data; the special training sample is used for training the primary driving assistance recognition network.
In step S105, whether a preceding-vehicle switching event occurs can be determined from the image information of the area in front of the target vehicle obtained by the vehicle-mounted camera.
During the judgment, the detection model compares the bounding box of the preceding-vehicle target in the current frame with a predicted bounding box for the current frame derived from the bounding boxes of multiple historical frames; if the intersection-over-union (IoU) of the two is smaller than a preset threshold, target switching is considered to have occurred. In addition, if the IoU of the preceding-vehicle target bounding boxes in two consecutive frames is smaller than a preset threshold, target switching is also considered to have occurred. If either of these two conditions is satisfied, preceding-vehicle switching is considered to have occurred, as transcribed below.
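A direct transcription of the two IoU conditions; boxes are given as (x1, y1, x2, y2), and the 0.5 threshold is an illustrative assumption rather than a value from the patent.

```python
IOU_THRESHOLD = 0.5   # assumed value for the "preset threshold"

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def front_vehicle_switched(current_box, predicted_box, previous_box):
    # condition 1: current detection vs. prediction from historical frames
    # condition 2: current detection vs. the detection in the previous frame
    return (iou(current_box, predicted_box) < IOU_THRESHOLD
            or iou(current_box, previous_box) < IOU_THRESHOLD)

print(front_vehicle_switched((100, 100, 200, 180),
                             (105, 102, 205, 182),
                             (98, 99, 198, 178)))   # False: boxes overlap heavily
```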
In step S106: when the preceding-vehicle target is switched, the waveform of the vehicle driving data may show jitter or jumps.
The data waveform may include any of the following time series of intermediate detection results: the detection-box coordinate waveform, the distance waveform, the relative-speed waveform, the aspect-ratio waveform, and the like.
Waveform jitter detection can be implemented with a Kalman filter, which mainly monitors the four coordinate values (x1, y1, x2, y2) of the preceding-vehicle target bounding box and the actual distance between the preceding vehicle and the target vehicle. In the prediction phase of the filter, the possible value of a variable at time t can be predicted from its values at times t-2 and t-1. At time t, the model obtains the observed value at the current time from the data acquired by the vehicle-mounted sensors. When the absolute error between the predicted value and the observed value is greater than a threshold E, abnormal waveform jitter is considered to have occurred at the current time; a minimal sketch follows.
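A minimal sketch of the jitter check. For brevity it replaces the full Kalman filter with the essence of its prediction phase, a constant-velocity extrapolation from times t-2 and t-1; the threshold E and the example series are illustrative assumptions.

```python
E = 0.8   # assumed absolute-error threshold for "abnormal jitter"

def jitter_at_t(v_tm2: float, v_tm1: float, v_t: float, threshold: float = E) -> bool:
    # predict the value at t from the values at t-2 and t-1, then compare
    predicted = v_tm1 + (v_tm1 - v_tm2)
    return abs(predicted - v_t) > threshold

def waveform_jitters(series, threshold: float = E) -> bool:
    # scan one monitored waveform (e.g. a box coordinate x1/y1/x2/y2, or the distance)
    return any(jitter_at_t(series[i - 2], series[i - 1], series[i], threshold)
               for i in range(2, len(series)))

distance_m = [20.0, 19.6, 19.2, 18.8, 14.1, 18.0]   # sudden jump at index 4
print(waveform_jitters(distance_m))   # True
```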
Therefore, in step S107, if the data waveform of the vehicle driving data jitters while no preceding-vehicle switching event has occurred, the vehicle driving data is abnormal, so it can be uploaded to the cloud server to generate a special training sample; the special training sample is used for training the primary driving assistance recognition network.
Based on the same technical concept, embodiments of the present application further provide a data processing apparatus, an electronic device, a computer-readable storage medium, and the like, and refer to the following embodiments in detail.
Fig. 3 is a block diagram of a data processing apparatus according to some embodiments of the present application; the apparatus implements functions corresponding to the steps of the data processing method performed on the terminal device described above. The apparatus may be understood as a component, including a processor, that is capable of implementing the data processing method; as shown in fig. 3, the data processing apparatus may include:
an acquisition module 301, configured to acquire vehicle driving data generated while a target vehicle is driving;
a training module 302, configured to train an initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate an under-trained primary driving assistance recognition network;
an input module 303, configured to input the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located;
a first uploading module 304, configured to upload the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
In one possible embodiment, the vehicle driving data is at least one of the following: position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, state information of the driver of the vehicle, and danger warning data.
In one possible embodiment, the vehicle driving data includes position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, and state information of the driver of the vehicle;
the input module 303 includes:
a first detection module, configured to detect, through the primary driving assistance recognition network and according to the vehicle driving data, whether a preset dangerous event occurs on the target vehicle;
a judging module, configured to judge, if the primary driving assistance recognition network detects that a preset dangerous event occurs on the target vehicle, whether there is a preceding vehicle and whether the distance between the target vehicle and the preceding vehicle is smaller than a preset distance;
a determining module, configured to determine that the driving scene in which the vehicle driving data is located is a dangerous scene if there is a preceding vehicle and the distance between the target vehicle and the preceding vehicle is smaller than the preset distance.
In one possible embodiment, the preset dangerous event includes any one or more of the following: a collision at the front of the vehicle, sudden deceleration of the vehicle, a sudden lane change of the vehicle, and a dangerous driving state of the driver in the vehicle.
In one possible embodiment, the vehicle driving data further includes danger warning data;
the first uploading module is specifically configured to:
and uploading the driving scene where the vehicle driving data are located to a cloud server so as to determine the real driving scene where the vehicle driving data are located according to the preset dangerous event of the target vehicle and the dangerous alarm data.
In a possible embodiment, the apparatus further includes:
the second uploading module is used for uploading the driving scene where the vehicle driving data are located to a cloud server so as to generate a general training sample according to the vehicle driving data and the driving scene where the vehicle driving data are located.
In a possible embodiment, the apparatus further includes:
the second judgment module is used for judging whether a front vehicle switching event occurs or not;
the second detection module is used for detecting whether the data waveform of the vehicle driving data shakes or not if a front vehicle switching event does not occur;
the third uploading module is used for uploading the vehicle driving data to a cloud server if the data waveform of the vehicle driving data shakes, so as to generate a special training sample according to the vehicle driving data; the special training sample is used for training the primary driving auxiliary recognition network.
As shown in fig. 4, which is a schematic structural diagram of the electronic device provided in an embodiment of the present application, the electronic device includes: a processor 401, a memory 402, and a bus 403. The memory 402 stores execution instructions; when the electronic device is running, the processor 401 and the memory 402 communicate with each other through the bus 403, and the processor 401 executes the steps of the data processing method shown in fig. 1 that are stored in the memory 402.
The computer program product for performing the data processing method provided in the embodiment of the present application includes a computer-readable storage medium storing non-volatile program code executable by a processor; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, to which reference may be made for specific implementations, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A data processing method, applied to an on-board processor, the method comprising:
acquiring vehicle driving data generated while a target vehicle is driving;
training an initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate a primary driving assistance recognition network;
inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located;
uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
2. The data processing method according to claim 1, wherein the vehicle driving data is at least one of the following: position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, state information of the driver of the vehicle, and danger warning data.
3. The data processing method according to claim 1, wherein the vehicle driving data comprises position data of the vehicle, the driving speed of the vehicle, the driving posture of the vehicle, and state information of the driver of the vehicle;
the inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located comprises:
detecting, by the primary driving assistance recognition network and according to the vehicle driving data, whether a preset dangerous event occurs on the target vehicle;
if the primary driving assistance recognition network detects that a preset dangerous event occurs on the target vehicle, judging whether there is a preceding vehicle and whether the distance between the target vehicle and the preceding vehicle is smaller than a preset distance;
if there is a preceding vehicle and the distance between the target vehicle and the preceding vehicle is smaller than the preset distance, determining that the driving scene in which the vehicle driving data is located is a dangerous scene.
4. The data processing method according to claim 3, wherein the preset dangerous event comprises any one or more of the following: a collision at the front of the vehicle, sudden deceleration of the vehicle, a sudden lane change of the vehicle, and a dangerous driving state of the driver in the vehicle.
5. The data processing method according to claim 3, wherein the vehicle driving data further comprises danger warning data;
the uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, comprises:
uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the preset dangerous event occurring on the target vehicle and the danger warning data.
6. The data processing method according to claim 1, wherein the general training sample is obtained by:
uploading the driving scene in which the vehicle driving data is located to a cloud server, so that a general training sample is generated according to the vehicle driving data and that driving scene.
7. The data processing method according to claim 1, wherein after the training an initial driving assistance recognition network according to the general training sample generated from the vehicle driving data to generate the under-trained primary driving assistance recognition network, the method further comprises:
judging whether a preceding-vehicle switching event occurs;
if no preceding-vehicle switching event occurs, detecting whether the data waveform of the vehicle driving data jitters;
if the data waveform of the vehicle driving data jitters, uploading the vehicle driving data to a cloud server, so that a special training sample is generated from the vehicle driving data; the special training sample is used for training the primary driving assistance recognition network.
8. A data processing apparatus, comprising:
the acquisition module is used for acquiring vehicle driving data generated while a target vehicle is driving;
the training module is used for training an initial driving assistance recognition network according to a general training sample generated from the vehicle driving data, so as to generate an under-trained primary driving assistance recognition network;
the input module is used for inputting the vehicle driving data into the primary driving assistance recognition network to obtain the driving scene, output by the primary driving assistance recognition network, in which the vehicle driving data is located;
the first uploading module is used for uploading the driving scene in which the vehicle driving data is located to a cloud server, so that the real driving scene in which the vehicle driving data is located is determined according to the uploaded driving scene, and a special training sample is generated according to the real driving scene; the special training sample is used for training the primary driving assistance recognition network.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the data processing method of any of claims 1 to 7.
10. A readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the data processing method according to any one of claims 1 to 7.
CN202011147071.6A 2020-10-23 2020-10-23 Data processing method and device, electronic equipment and readable storage medium Pending CN112287797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011147071.6A CN112287797A (en) 2020-10-23 2020-10-23 Data processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011147071.6A CN112287797A (en) 2020-10-23 2020-10-23 Data processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112287797A true CN112287797A (en) 2021-01-29

Family

ID=74423722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011147071.6A Pending CN112287797A (en) 2020-10-23 2020-10-23 Data processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112287797A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574537A (en) * 2015-11-23 2016-05-11 北京高科中天技术股份有限公司 Multi-sensor-based dangerous driving behavior detection and evaluation method
US20180307967A1 (en) * 2017-04-25 2018-10-25 Nec Laboratories America, Inc. Detecting dangerous driving situations by parsing a scene graph of radar detections
CN111723608A (en) * 2019-03-20 2020-09-29 杭州海康威视数字技术股份有限公司 Alarming method and device of driving assistance system and electronic equipment
CN110329271A (en) * 2019-06-18 2019-10-15 北京航空航天大学杭州创新研究院 A kind of multisensor vehicle driving detection system and method based on machine learning
CN111694973A (en) * 2020-06-09 2020-09-22 北京百度网讯科技有限公司 Model training method and device for automatic driving scene and electronic equipment
CN111731284A (en) * 2020-07-21 2020-10-02 平安国际智慧城市科技股份有限公司 Driving assistance method and device, vehicle-mounted terminal equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114141009A (en) * 2021-10-31 2022-03-04 际络科技(上海)有限公司 Simulation traffic flow lane changing method and system based on multi-time sequence network

Similar Documents

Publication Publication Date Title
JP6925796B2 (en) Methods and systems, vehicles, and computer programs that assist the driver of the vehicle in driving the vehicle.
CN107867288B (en) Method for detecting a forward collision
US11688174B2 (en) System and method for determining vehicle data set familiarity
CN107590768B (en) Method for processing sensor data for the position and/or orientation of a vehicle
US10703363B2 (en) In-vehicle traffic assist
US20160217688A1 (en) Method and control and detection unit for checking the plausibility of a wrong-way driving incident of a motor vehicle
CN111038522B (en) Vehicle control unit and method for evaluating a training data set of a driver assistance system
US9323718B2 (en) Method and device for operating a driver assistance system of a vehicle
EP3492870B1 (en) Self-position estimation method and self-position estimation device
CN107107821B (en) Augmenting lane detection using motion data
CN109070881B (en) Method for operating a vehicle
WO2017207153A1 (en) Method for providing information regarding a pedestrian in an environment of a vehicle and method for controlling a vehicle
CN111413973A (en) Lane change decision method and device for vehicle, electronic equipment and storage medium
JP2009175929A (en) Driver condition estimating device and program
CN116390879A (en) System and method for avoiding impending collisions
JP2018124789A (en) Driving evaluation device, driving evaluation method and driving evaluation system
JP5895728B2 (en) Vehicle group management device
CN113335311B (en) Vehicle collision detection method and device, vehicle and storage medium
US9747801B2 (en) Method and device for determining surroundings
CN107430821B (en) Image processing apparatus
CN111352414A (en) Decoy removal apparatus and method for vehicle and vehicle including the same
CN112287797A (en) Data processing method and device, electronic equipment and readable storage medium
CN109313851B (en) Method, device and system for retrograde driver identification
US20220292888A1 (en) Filtering of operating scenarios in the operation of a vehicle
CN115017967A (en) Detecting and collecting accident-related driving experience event data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210129)