CN110688877B - Danger early warning method, device, equipment and storage medium - Google Patents


Info

Publication number
CN110688877B
CN110688877B
Authority
CN
China
Prior art keywords
neural network
lane
target vehicle
early warning
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810730839.9A
Other languages
Chinese (zh)
Other versions
CN110688877A (en)
Inventor
余倩
黄洋文
邝宏武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810730839.9A
Publication of CN110688877A
Application granted
Publication of CN110688877B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/40: Scenes; scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the invention provide a danger early warning method, device, equipment and storage medium. The method includes: acquiring consecutive video frame images captured by an image acquisition device; preprocessing the video frame images to obtain feature extraction data of a target vehicle, and feeding the feature extraction data into a neural network combination model as input data; and determining, by the neural network combination model, whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when one occurs. The embodiments can detect dangers that may arise while a vehicle is driving and thereby improve driving safety.

Description

Danger early warning method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a danger early warning method, device, equipment, and storage medium.
Background
With the continuous development of the economy and society and the improvement of living standards, people increasingly rely on automobiles for travel, and driving safety problems have become more and more prominent. While a vehicle is driving on the road, sudden conditions such as driver fatigue, distraction, or inexperience may lead to dangers such as lane departure, collision with the vehicle ahead, or collision with a pedestrian. These conditions can cause traffic accidents and endanger the safety of drivers and passengers.
Therefore, how to detect dangers that may exist while a vehicle is driving, so as to improve driving safety, has become an urgent problem to be solved.
Disclosure of Invention
The embodiments of the invention aim to provide a danger early warning method, device, equipment, and storage medium, so that dangers that may exist during vehicle driving can be detected and driving safety can be improved. The specific technical solutions are as follows:
in a first aspect, an embodiment of the present invention provides a method for early warning of a danger, where the method includes:
acquiring continuous multi-frame video frame images acquired by image acquisition equipment;
preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model;
and determining, by the neural network combination model, whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs.
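The three method steps above can be sketched as a small pipeline (a minimal illustration; the function names and the string-valued warning signal are assumptions, not from the patent):

```python
def danger_warning_pipeline(frames, preprocess, combination_model):
    """Acquire frames -> preprocess into feature data -> combined model decides.

    frames: consecutive video frames from the image acquisition device
    preprocess: callable producing the target vehicle's feature extraction data
    combination_model: callable returning a dangerous-event label or None
    Returns an early warning signal for the event, or None when no danger.
    """
    features = preprocess(frames)          # feature extraction data
    event = combination_model(features)    # e.g. "lane_departure" or None
    return f"warn:{event}" if event else None
```

In a real system the two callables would be the preprocessing stage and the trained neural network combination model; here they are stubs to show the data flow only.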
Optionally, the neural network combination model includes a combination of one or more of a first neural network, a second neural network and a third neural network;
the first neural network is used for determining whether a lane departure event occurs according to the input data, and outputting an early warning signal corresponding to the lane departure event when the lane departure event is determined to occur;
the second neural network is used for determining whether a front vehicle collision event occurs according to the input data, and outputting an early warning signal corresponding to the front vehicle collision event when the front vehicle collision event is determined to occur;
and the third neural network is used for determining whether a pedestrian collision event occurs according to the input data, and outputting an early warning signal corresponding to the pedestrian collision event when the pedestrian collision event is determined to occur.
Optionally, the neural network combination model includes at least the first neural network;
the preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model, comprises:
performing first preprocessing on the video frame image to obtain vehicle characteristic information and lane characteristic information of the target vehicle, and inputting the vehicle characteristic information and the lane characteristic information into a first neural network;
the determining, by the neural network combination model, whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs, includes:
determining, by the first neural network, a lane departure distance and/or a lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information;
and when the lane departure distance meets a preset departure threshold value and/or the lane crossing time meets a preset time threshold value, determining that a lane departure event occurs, and outputting an early warning signal corresponding to the lane departure event.
Optionally, the lane feature information at least includes one or more of a lane line quality, a lane width, a lane radius, a lane curvature, and a yaw angle of a driving lane corresponding to the target vehicle;
the vehicle characteristic information includes at least the travel track of the target vehicle, the relative position and relative speed of the target vehicle with respect to the driving lane, and one or more of a travel speed, a turn signal, a steering wheel angle signal, a brake signal, a yaw rate, a headway, and a head width of the target vehicle.
Optionally, the determining, by the first neural network, the lane departure distance of the target vehicle according to the vehicle characteristic information and the lane characteristic information includes:
calculating, by the first neural network, an estimated lateral displacement of the target vehicle when deviating from the driving lane according to the vehicle characteristic information and the lane characteristic information;
the estimated lateral displacement X_s is determined by:
X_s = X_front + V·T_t·sin(φ) + (1/2)·β·(V·T_t)^2
wherein T_t is the headway of the target vehicle, V is the travel speed of the target vehicle, β is the curvature of the driving lane, X_front is the actual lateral displacement of the target vehicle relative to the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
φ = arctan(dX/dY)
wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' represents the differential value of dX.
Optionally, when the lane departure distance satisfies a preset departure threshold, determining that a lane departure event occurs includes:
when |X_s| ≥ X_L, determining that the target vehicle has a lane departure event; wherein X_L is the departure tendency threshold, X_L = (L - H)/2, L represents the width of the driving lane, and H represents the head width of the target vehicle.
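The displacement check above can be illustrated numerically. This is a minimal sketch: the preview model X_s = X_front + V·T_t·sin(φ) + (1/2)·β·(V·T_t)^2 is a reconstruction (the original formula images did not survive extraction), and all function and parameter names are assumptions:

```python
import math

def estimated_lateral_displacement(x_front, v, t_t, beta, yaw):
    """Predicted lateral displacement X_s after one headway T_t (reconstructed).

    x_front: current lateral displacement relative to the lane (m)
    v: travel speed (m/s); t_t: headway (s); beta: lane curvature (1/m)
    yaw: angle between the lane and the vehicle's longitudinal axis (rad)
    """
    preview = v * t_t  # distance covered during one headway
    return x_front + preview * math.sin(yaw) + 0.5 * beta * preview ** 2

def lane_departure_event(x_s, lane_width, head_width):
    """Departure when |X_s| >= X_L, with X_L = (L - H) / 2."""
    return abs(x_s) >= (lane_width - head_width) / 2.0
```

For example, with a 3.5 m lane and a 1.8 m vehicle head width, X_L is 0.85 m, so a predicted displacement of 0.9 m would trigger the event.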
Optionally, the determining, by the first neural network, the cross-lane time of the target vehicle according to the vehicle characteristic information and the lane characteristic information includes:
calculating, by the first neural network, a crossing time TTLC of the target vehicle according to the vehicle characteristic information and the lane characteristic information by:
TTLC = d / (v·sin(φ))
wherein d is the lateral distance of the target vehicle's wheels from the lane line of the driving lane, v is the longitudinal speed of the target vehicle, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
φ = arctan(dX/dY)
wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' represents the differential value of dX.
Optionally, the determining, by the first neural network, the lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information includes:
calculating, by the first neural network, a crossing time TTLC of the target vehicle according to the vehicle characteristic information and the lane characteristic information by:
TTLC = (-v·sin(φ) + sqrt(v^2·sin^2(φ) + 2·(v·ω - v^2/R_road)·d)) / (v·ω - v^2/R_road)
wherein v is the longitudinal speed of the target vehicle, d is the lateral distance of the target vehicle's wheels from the lane line of the driving lane, ω is the yaw rate, R_road is the lane radius of the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
φ = arctan(dX/dY)
wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' represents the differential value of dX.
Optionally, when the lane crossing time meets a preset time threshold, determining that a lane departure event occurs includes:
and when the lane crossing time is less than or equal to a preset time threshold value, determining that the target vehicle has a lane departure event.
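The crossing-time rule above can be illustrated as follows, assuming the straight-lane form TTLC = d / (v·sin φ); the function names are illustrative, not from the patent:

```python
import math

def ttlc_straight(d, v, yaw):
    """Time to lane crossing on a straight lane: TTLC = d / (v * sin(yaw)).

    d: lateral distance from the wheels to the lane line (m)
    v: longitudinal speed (m/s); yaw: yaw angle toward the line (rad)
    Returns infinity when the vehicle is not drifting toward the line.
    """
    lateral_speed = v * math.sin(yaw)
    return d / lateral_speed if lateral_speed > 0 else math.inf

def departure_by_ttlc(ttlc, time_threshold):
    """Lane departure event when TTLC <= the preset time threshold."""
    return ttlc <= time_threshold
```

With d = 0.5 m, v = 20 m/s and a 0.05 rad yaw angle, TTLC is about half a second, well inside a 1 s threshold; a negative yaw angle (drifting away from the line) never triggers.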
Optionally, the training process of the neural network combination model includes:
obtaining a sample video, and preprocessing the sample video to obtain sample data of a sample vehicle;
dividing the sample data into a plurality of groups of training sets and a plurality of groups of testing sets in different sampling periods;
training a plurality of neural network models on the plurality of training sets by using a sequence learning method;
determining weights of the plurality of neural network models according to the plurality of groups of test sets;
and combining the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combined model.
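The patent does not spell out the weighting rule beyond comparing predicted and actual early warning occasions on the test sets; the inverse-error scheme below is one plausible sketch (an assumption, not the claimed method):

```python
def ensemble_weights(predicted, actual, eps=1e-6):
    """One weight per model: inverse of its mean absolute warning-time error.

    predicted: per-model lists of predicted warning times on the test sets
    actual: the matching actual warning times (driver take-over moments)
    """
    errors = [sum(abs(p - a) for p, a in zip(row, actual)) / len(actual)
              for row in predicted]
    inv = [1.0 / (e + eps) for e in errors]
    total = sum(inv)
    return [w / total for w in inv]

def combined_prediction(per_model_outputs, weights):
    """Combined model output: weighted average of the member predictions."""
    return sum(w * y for w, y in zip(weights, per_model_outputs))
```

A model whose predicted warning times closely match the drivers' actual take-over moments dominates the combination; a consistently late or early model is down-weighted.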
Optionally, the determining weights of the plurality of neural network models according to the plurality of sets of test sets includes:
determining predicted early warning occasions output by the plurality of neural network models for the plurality of test sets, and the actual early warning occasions corresponding to the plurality of test sets;
and comparing the predicted early warning occasions with the actual early warning occasions to determine the weights of the plurality of neural network models.
Optionally, after the determining that the lane departure event occurs, the method further comprises:
and determining the time at which the driver actually takes action according to the turn signal, the vehicle travel track, and the steering wheel angle signal, and using that time as the actual early warning occasion corresponding to the lane departure event.
Optionally, the method further includes:
and when the difference between the actual early warning occasion and the time at which the corresponding warning information is output is greater than a preset time threshold, updating the neural network combination model.
Optionally, the neural network combination model is a neural network combination model based on sequence learning, the neural network combination model based on sequence learning includes a plurality of neural networks, and each neural network corresponds to a weight;
the updating the neural network combination model comprises:
and inputting the actual early warning time and the time for outputting the corresponding warning information into the neural network combination model based on the sequence learning, and adjusting the weight of each neural network in the neural network combination model based on the sequence learning.
Optionally, when the difference between the actual early warning time and the time for outputting the corresponding warning information is greater than a preset time threshold, the neural network combination model is updated, including:
when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is larger than a preset time threshold value, displaying confirmation information for judging whether to update the neural network combination model;
when a confirmation instruction for the confirmation information is received, updating the neural network combination model.
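The confirmation-gated update described above might be sketched as follows (the function names and callback shapes are assumptions, not from the patent):

```python
def needs_update(actual_occasion, output_time, time_threshold):
    """Self-diagnosis: flag the model when the warning it emitted diverges
    from the moment the driver actually took action by more than the
    preset time threshold."""
    return abs(actual_occasion - output_time) > time_threshold

def confirm_and_update(actual_occasion, output_time, time_threshold,
                       ask_driver, update_model):
    """Show confirmation info only when flagged; update only on confirmation."""
    if not needs_update(actual_occasion, output_time, time_threshold):
        return False
    if ask_driver():  # e.g. display "update the combination model?"
        update_model(actual_occasion, output_time)
        return True
    return False
```

The gating keeps online learning opt-in: a small gap leaves the combined model untouched, and even a large gap only triggers retraining after the driver confirms.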
In a second aspect, an embodiment of the present invention provides a danger early warning apparatus, where the apparatus includes:
the image acquisition module is used for acquiring continuous multi-frame video frame images acquired by the image acquisition equipment;
the data extraction module is used for preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model;
and the danger early warning module is used for controlling the neural network combined model to determine whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs.
Optionally, the neural network combination model includes a combination of one or more of a first neural network, a second neural network and a third neural network;
the first neural network is used for determining whether a lane departure event occurs according to the input data, and outputting an early warning signal corresponding to the lane departure event when the lane departure event is determined to occur;
the second neural network is used for determining whether a front vehicle collision event occurs according to the input data, and outputting an early warning signal corresponding to the front vehicle collision event when the front vehicle collision event is determined to occur;
and the third neural network is used for determining whether a pedestrian collision event occurs according to the input data, and outputting an early warning signal corresponding to the pedestrian collision event when the pedestrian collision event is determined to occur.
Optionally, the neural network combination model includes at least the first neural network;
the data extraction module is specifically used for performing first preprocessing on the video frame image to obtain vehicle characteristic information and lane characteristic information of the target vehicle, and inputting the vehicle characteristic information and the lane characteristic information into a first neural network;
the danger early warning module comprises:
the parameter determination submodule is used for controlling the first neural network to determine the lane departure distance and/or the lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information;
and the lane departure early warning submodule is used for determining that a lane departure event occurs when the lane departure distance meets a preset departure threshold value and/or the lane crossing time meets a preset time threshold value, and outputting an early warning signal corresponding to the lane departure event.
Optionally, the lane feature information at least includes one or more of a lane line quality, a lane width, a lane radius, a lane curvature, and a yaw angle of a driving lane corresponding to the target vehicle;
the vehicle characteristic information includes at least the travel track of the target vehicle, the relative position and relative speed of the target vehicle with respect to the driving lane, and one or more of a travel speed, a turn signal, a steering wheel angle signal, a brake signal, a yaw rate, a headway, and a head width of the target vehicle.
Optionally, the parameter determining submodule is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, an estimated lateral displacement of the target vehicle when the target vehicle deviates from the driving lane;
the estimated lateral displacement X_s is determined by:
X_s = X_front + V·T_t·sin(φ) + (1/2)·β·(V·T_t)^2
wherein T_t is the headway of the target vehicle, V is the travel speed of the target vehicle, β is the curvature of the driving lane, X_front is the actual lateral displacement of the target vehicle relative to the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
φ = arctan(dX/dY)
wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' represents the differential value of dX.
Optionally, the lane departure warning sub-module is specifically configured to determine, when |X_s| ≥ X_L, that the target vehicle has a lane departure event; wherein X_L is the departure tendency threshold, X_L = (L - H)/2, L represents the width of the driving lane, and H represents the head width of the target vehicle.
Optionally, the parameter determining submodule is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, a lane crossing time TTLC of the target vehicle by:
TTLC = d / (v·sin(φ))
wherein d is the lateral distance of the target vehicle's wheels from the lane line of the driving lane, v is the longitudinal speed of the target vehicle, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
φ = arctan(dX/dY)
wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' represents the differential value of dX.
Optionally, the parameter determining submodule is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, a lane crossing time TTLC of the target vehicle by:
TTLC = (-v·sin(φ) + sqrt(v^2·sin^2(φ) + 2·(v·ω - v^2/R_road)·d)) / (v·ω - v^2/R_road)
wherein v is the longitudinal speed of the target vehicle, d is the lateral distance of the target vehicle's wheels from the lane line of the driving lane, ω is the yaw rate, R_road is the lane radius of the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
φ = arctan(dX/dY)
wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' represents the differential value of dX.
Optionally, the lane departure warning sub-module is specifically configured to determine that the target vehicle has a lane departure event when the lane crossing time is less than or equal to a preset time threshold.
Optionally, the apparatus further comprises:
the video acquisition module is used for acquiring a sample video and preprocessing the sample video to obtain sample data of a sample vehicle;
the data sampling module is used for dividing the sample data into a plurality of groups of training sets and a plurality of groups of testing sets in different sampling periods;
the sequence learning module is used for obtaining a plurality of neural network models through the plurality of groups of training sets by adopting a sequence learning method;
a weight determination module for determining weights of the plurality of neural network models according to the plurality of sets of test sets;
and the model combination module is used for combining the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combination model.
Optionally, the weight determining module is specifically configured to:
determining predicted early warning occasions output by the plurality of neural network models for the plurality of test sets, and the actual early warning occasions corresponding to the plurality of test sets;
and comparing the predicted early warning occasions with the actual early warning occasions to determine the weights of the plurality of neural network models.
Optionally, the apparatus further comprises:
and the early warning occasion determining module is used for determining the time at which the driver actually takes action according to the turn signal, the vehicle travel track, and the steering wheel angle signal, and using that time as the actual early warning occasion corresponding to the lane departure event.
Optionally, the apparatus further comprises:
and the model updating module is used for updating the neural network combination model when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is greater than a preset time threshold value.
Optionally, the neural network combination model is a neural network combination model based on sequence learning, the neural network combination model based on sequence learning includes a plurality of neural networks, and each neural network corresponds to a weight;
the model updating module is specifically configured to input the actual early warning time and the time for outputting the corresponding warning information into the neural network combination model based on the sequence learning, and adjust the weight of each neural network in the neural network combination model based on the sequence learning.
Optionally, the model updating module is specifically configured to:
when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is larger than a preset time threshold value, displaying confirmation information for judging whether to update the neural network combination model;
when a confirmation instruction for the confirmation information is received, updating the neural network combination model.
In a third aspect, an embodiment of the present invention provides a danger early warning device, including: a processor and a memory;
the memory stores executable program code;
the processor executes the program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the danger early warning method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when executed by a processor, the computer program implements a method for early warning of a danger as described in the first aspect.
The embodiments of the invention provide a danger early warning method, device, equipment and storage medium. The method includes: acquiring consecutive video frame images captured by an image acquisition device; preprocessing the video frame images to obtain feature extraction data of a target vehicle, and feeding the feature extraction data into a neural network combination model as input data; and determining, by the neural network combination model, whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs.
According to the embodiments of the invention, feature extraction data of the target vehicle can be obtained by processing video frame images captured by the image acquisition device, and the neural network combination model determines, based on that data, whether a danger exists, thereby realizing danger early warning during vehicle driving.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a method for early warning of danger according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating the judgment of the warning condition based on the deviation distance according to the embodiment of the present invention;
fig. 3 is a schematic flow chart of a danger early warning method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a danger early warning method according to an embodiment of the present invention;
fig. 5 is a fourth flowchart illustrating a danger early warning method according to an embodiment of the present invention;
fig. 6 is a fifth flowchart illustrating a method for early warning of danger according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a danger early warning device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a danger early warning device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
Interpretation of terms:
Sequence learning: sequence-to-sequence learning based on deep neural networks.
Assisted driving: various sensors mounted on the vehicle collect data on the environment inside and outside the vehicle in real time, and perform processing such as identification, detection, and tracking of static and dynamic objects, so that the driver can perceive possible danger as early as possible.
Driving early warning: when lane departure warning, forward collision warning, pedestrian collision warning, and the like predict a possible danger, a warning is issued in time to remind the driver to take measures to avoid it.
The danger early warning method provided by the embodiments of the invention is used to predict, and warn of, dangers such as lane departure, collision with the vehicle ahead, and collision with a pedestrian that may be caused by sudden conditions such as driver fatigue, distraction, or inexperience while a vehicle drives on the road. The principle is as follows: various sensors acquire road information and vehicle information, the actual early warning occasion is determined in combination with the driver's operation, and a large number of video samples are acquired quickly and automatically. Multiple sequence learning neural network models are trained offline on the sample data set at different sampling periods to predict driving warnings; the predicted early warning occasions are compared with the actual early warning occasions, and the weights of the multiple neural network models are adjusted to form a combined model. The system also has self-diagnosis and repair functions: during product use, the system's predicted warnings and the driver's operation times in a given period (i.e., the actual early warning occasions) are recorded. When the two are close, the model is not updated; when the error is large, the driver is reminded of the system's limitation and can choose whether to perform online learning with the recorded and subsequent data, so that the model better matches the current driver's operating habits.
The danger early warning method flow of the embodiment of the invention comprises a video acquisition flow, an algorithm processing flow and an early warning output flow, and is specifically shown in figure 1.
Video acquisition process: for acquiring a scene video sequence in real time. Road information and vehicle information are collected at a first time using various sensors mounted on a vehicle.
Algorithm processing flow: by analyzing the video data output from the video acquisition unit, early warning event marking is realized, so that samples are obtained, models are learned offline, and the models are updated online.
Early warning output flow: and outputting the early warning signal as a digital signal, so as to be convenient for controlling a subsequent unit to respond correspondingly. The early warning includes but is not limited to lane departure early warning, front vehicle collision early warning, pedestrian collision early warning and the like.
The algorithm processing flow comprises the following steps:
sample acquisition: input information required by each early warning event is determined, road information and vehicle information are obtained by various sensors, and meanwhile the actual early warning time is determined by combining the operation of a driver. The sample is obtained quickly and automatically, and the problem of manual calibration of a large number of videos is solved.
Model off-line learning: dividing the sample data set into a plurality of groups of training sets and test sets in different sampling periods; and training a plurality of neural network models for sequence learning off line by using training sets with different sampling periods, comparing the prediction early warning with the actual early warning opportunity, and adjusting the weights of the plurality of neural network models to form a combined model. The different sampling periods include, but are not limited to, 30 frames, 60 frames, 90 frames, etc.
Model online updating: the system periodically records the predicted early warnings and the driver's operation timings; when the two are close, the model is not updated. When the error is large, the driver is reminded of the system's limitation and chooses whether to perform online learning using the recorded and subsequent data, so that the model better matches the current driver's operation habits; this provides self-diagnosis and repair functions.
The above sample obtaining comprises:
Truth determination for the required samples: determine the input and output signals required by each early warning function. Lane departure early warning is taken as an example of sample truth determination:
For lane departure early warning judgment conditions, the two categories in widest current use are: offset distance and TTLC (Time To Lane Crossing). The offset distance is the distance between the outer side of the wheel and the lane line on the departing side, and the TTLC is the time until the outer side of the wheel crosses the line, calculated from the vehicle's lateral departure speed and its distance from the lane line. The two strategies activate and deactivate lane departure early warning by setting different thresholds, and prompt and alarm the driver.
The departure-distance-based warning condition determination is shown in FIG. 2. Lane mark lines such as white lines are detected from the captured image of the area in front of the vehicle, and from them are calculated the yaw angle φ formed between the driving lane and the longitudinal axis of the vehicle, the lateral displacement X_front relative to the driving lane, the curvature β of the driving lane, and the like.
Yaw angle:

φ = arctan(dX / dY)

where dX represents the change in the lateral displacement X per unit time, dY represents the change in the travel-direction displacement per unit time, and dX′ represents the differential value of dX.
The future lateral displacement X_s, which makes it possible to determine the tendency of the vehicle to deviate from the driving lane, is calculated as follows:

X_s = T_t · V · (φ + T_t · V · β) + X_front

In the above formula, T_t represents the headway time used to calculate the forward observation distance, and V is the vehicle speed.
The threshold value X_L for determining the deviation tendency is calculated by the following formula:

X_L = (L − H) / 2
where L denotes the width of the driving lane and H denotes the vehicle width.
When |X_s| ≥ X_L, the deviation determination flag Fout is set to ON; when |X_s| < X_L, the deviation determination flag Fout is set to OFF.
Here, the actual lateral displacement X_front (i.e. X_s with T_t = 0) may also be used instead of the estimated lateral displacement X_s to judge the tendency of lane departure.
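The offset-distance decision above can be sketched as follows (an illustration only; the parameter names and the units — meters, seconds, radians — are assumptions, not part of the patent):

```python
def lane_departure_flag(x_front, yaw_angle, speed, curvature,
                        headway, lane_width, vehicle_width):
    """Offset-distance strategy: estimate the future lateral displacement X_s
    and compare it with the deviation threshold X_L = (L - H) / 2."""
    # estimated lateral displacement after the forward observation time
    x_s = headway * speed * (yaw_angle + headway * speed * curvature) + x_front
    x_l = (lane_width - vehicle_width) / 2.0
    return "ON" if abs(x_s) >= x_l else "OFF"
```

For example, with a 3.5 m lane, a 1.8 m wide vehicle and a 1 s headway, a small yaw angle keeps the flag OFF while a larger one trips it.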
TTLC-based early warning condition judgment:

TTLC = d / (v · sin φ)

Generally, the TTLC calculation formula is as shown above, where d is the lateral distance of the wheel from the lane line and v is the vehicle longitudinal speed. But the formula does not take into account curves or situations where the vehicle has lateral acceleration. To refine the model, suppose the vehicle moves laterally relative to the lane with relative acceleration a_r and relative velocity v_r; then after time T its lateral displacement d_r is calculated as follows:

d_r = v_r · T + (1/2) · a_r · T²
where a_r is calculated as follows:

a_r = v · ω − v² / R_road

where ω is the yaw rate and R_road is the road radius.
The TTLC calculation formula is then obtained by solving d = v_r · TTLC + (1/2) · a_r · TTLC² for TTLC:

TTLC = (−v_r + √(v_r² + 2 · a_r · d)) / a_r,  with v_r = v · sin φ
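A sketch of the TTLC computation covering both the straight-road and the curved-road case (an illustration only; the near-zero a_r fallback and the infinite-radius convention for a straight road are added assumptions):

```python
import math

def ttlc(d, v, yaw_angle, yaw_rate, road_radius):
    """Time to lane crossing. Solves d = v_r*T + 0.5*a_r*T^2 for T,
    falling back to TTLC = d / (v*sin(yaw_angle)) when the relative
    lateral acceleration a_r is negligible (straight road, no drift)."""
    v_r = v * math.sin(yaw_angle)                  # relative lateral velocity
    a_r = v * yaw_rate - v * v / road_radius       # relative lateral acceleration
    if abs(a_r) < 1e-9:
        return d / v_r
    disc = v_r * v_r + 2.0 * a_r * d
    if disc < 0.0:                                 # drifting away from the line
        return float("inf")
    return (-v_r + math.sqrt(disc)) / a_r
```

Passing road_radius = float("inf") and yaw_rate = 0 reproduces the simple straight-road formula; a yaw rate exceeding the road's curvature shortens the crossing time.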
In summary, combining the two warning condition determination methods, the lane departure warning data flow is as follows: the truth value for lane departure warning takes as input the lane line quality, lane width, yaw angle, lane curvature, vehicle speed, yaw rate, turn signal, brake signal, headway time and vehicle width, and includes the lane departure warning signal as output. The process is shown in fig. 3.
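For illustration only, the truth-value record implied by this data flow could be collected in a structure like the following (all field names and types are assumptions, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class LaneDepartureSample:
    """One truth-value record for lane departure warning:
    ten input signals plus the warning signal as the output label."""
    lane_line_quality: float
    lane_width: float
    yaw_angle: float
    lane_curvature: float
    vehicle_speed: float
    yaw_rate: float
    turn_signal: bool
    brake_signal: bool
    headway_time: float
    vehicle_width: float
    warning: bool  # output: lane departure warning signal
```

One such record would be produced per frame, and sequences of records form the training samples described below.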
Acquiring a true value of a sample: road information and vehicle information are acquired by various sensors:
Direct acquisition: signals such as the turn signal and the brake signal can be read directly from the CAN bus.
And (3) algorithm processing: the lane line quality, lane width, lane curvature and the like need to be obtained by algorithm processing according to images collected by a visual sensor; the actual early warning opportunity determination can be judged by combining the current road information, the vehicle information and the driver operation. Example of actual early warning opportunity determination: and when the vehicle enters the lane departure early warning marking module, determining the action taking time of a driver according to a steering lamp signal, a vehicle running track and a steering wheel turning angle signal, namely the actual early warning time of lane departure.
The model offline learning comprises the following steps:
the original sample data set is divided into multiple sets of training sets and test sets in different sampling periods, where the different sampling periods include, but are not limited to, 30 frames, 60 frames, 90 frames, and the like.
And performing off-line model training on the training sample data set to respectively train a plurality of sequence-learning neural network models with different sampling periods. The sequence learning method includes, but is not limited to, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and the like.
The predicted early warning timings of the sequence-learning neural network models on the data set are obtained and compared with the actual early warning timings, and the weight of each sequence-learning neural network model in the combined model is obtained using a linear regression method, so as to improve the prediction precision of the early warning.
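A minimal sketch of this weighting step, assuming each model's output is its predicted warning timing and fitting the weights by ordinary least squares (plain numpy; not the patent's implementation):

```python
import numpy as np

def fit_combination_weights(per_model_predictions, actual_timings):
    """Least-squares fit of per-model weights so that the weighted sum of
    each model's predicted warning timing best matches the actual timing."""
    X = np.asarray(per_model_predictions, dtype=float)  # (n_samples, n_models)
    y = np.asarray(actual_timings, dtype=float)         # (n_samples,)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

Each column of X would hold one model's predictions (e.g. the 30-, 60- and 90-frame models); the fitted vector w then weights the models inside the combined model.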
The online updating of the model comprises the following steps:
and the periodic recording system predicts the early warning and the operation time of the driver. The periodicity includes, but is not limited to, a duration mode and an early warning frequency mode.
When the predicted early warning timing in the period is close to the driver's operation timing, i.e. the prediction effect of the model is still good, the model is not updated, which reduces the computation required for early warning prediction and improves the operating efficiency of the system.
When the error between the predicted early warning timing and the driver's operation timing in the period is large, the driver is reminded of the system's limitation, and the driver chooses whether to perform online learning using the recorded and subsequent data. When the driver selects online learning, the recorded data and the subsequent data are input into the model as sequences for training, so that the model better matches the current driver's operation habits; this provides self-diagnosis and self-repair functions.
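A minimal sketch of the self-diagnosis decision, with an assumed tolerance of 0.5 s (the patent does not specify the threshold or the error metric):

```python
def should_retrain(predicted_timings, actual_timings, tolerance_s=0.5):
    """Self-diagnosis: compare the predicted warning timings with the
    driver's actual operation timings over one recording period, and flag
    the model for (driver-confirmed) online retraining when the mean
    absolute timing error exceeds the tolerance."""
    errors = [abs(p - a) for p, a in zip(predicted_timings, actual_timings)]
    mean_error = sum(errors) / len(errors)
    return mean_error > tolerance_s
```

When this returns True the system would prompt the driver before starting online learning, in line with the confirmation step described above.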
The danger early warning method provided by the embodiment of the invention performs assisted driving and early warning based on a sequence learning method, and has more feasibility and performance advantages than traditional rule-based and neural-network-based methods. Various sensors are used to acquire road information and vehicle information, while the actual early warning timing is determined in combination with the driver's operations; a large number of video samples are thus acquired quickly and automatically, greatly reducing the cost of sample annotation. Offline sequence-learning models trained at different sampling periods are combined, which better integrates scene context and scene interaction information and improves the prediction precision of the early warning. The model is updated online in combination with the driver's habits, realizing the system's self-diagnosis and repair functions and guaranteeing the early warning precision.
The present invention will be described in detail below with reference to specific examples.
Referring to fig. 4, a flow of a method for early warning of danger according to an embodiment of the present invention is shown, where the method includes the following steps:
s401, acquiring continuous multi-frame video frame images acquired by the image acquisition equipment.
The method provided by the embodiment of the invention can be applied to a vehicle driving assistance system. In particular, the method can be applied to a processor in a vehicle driving assistance system.
In the embodiment of the invention, in order to improve the safety of vehicle running, a vehicle driving assisting system can be installed on the vehicle. The vehicle assisted driving system may include at least an image capture device and a processor. The image acquisition equipment can acquire the video frame in front of the vehicle in running, namely continuous multi-frame video frame images, and then can send the video frame images to the processor for processing.
S402, preprocessing the video frame image to obtain feature extraction data of the target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model.
After the processor acquires continuous multi-frame video frame images acquired by the image acquisition equipment, the video frame images can be preprocessed to obtain feature extraction data of the target vehicle. For example, the processor may identify a target vehicle in the video frame by using any target identification method, and further obtain feature extraction data of the target vehicle by analyzing images of the plurality of video frames.
The feature extraction data may be any data that can reflect the traveling condition of the target vehicle. For example, when performing early warning judgment on lane departure, the processor may acquire the following feature extraction data: lane line quality, lane width, yaw angle, lane curvature, vehicle speed, yaw rate, headway, vehicle width, and the like.
In the embodiment of the invention, the neural network combination model can be obtained by pre-training so as to carry out danger early warning according to the neural network combination model.
Specifically, the processor may obtain a sample video, such as a historical monitoring video of one or more vehicles during driving, as the sample video. After the sample video is obtained, the sample video can be preprocessed to obtain sample data of the sample vehicle included in the sample video. In addition, in order to ensure the accuracy of the danger early warning, the processor can sample the sample data according to different sampling periods to obtain a plurality of groups of training sets. The above-mentioned sampling period may include, but is not limited to, 30 frames, 60 frames, 90 frames, etc.
After the sample data is sampled to obtain a plurality of groups of training sets, the processor can perform sequence learning on the plurality of groups of training sets to obtain a plurality of neural network models. That is, one neural network model can be trained for each group of training sets.
The sequence learning method may include, but is not limited to, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and the like.
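The patent gives no implementation of these sequence models. As a rough illustrative sketch of the recurrence a GRU applies to per-frame feature vectors (dimensions, random weights and all names here are arbitrary assumptions), a minimal cell can be written in plain numpy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalGRUCell:
    """Toy GRU cell: consumes one feature vector per video frame."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wz = rng.normal(0.0, scale, shape)  # update gate weights
        self.Wr = rng.normal(0.0, scale, shape)  # reset gate weights
        self.Wh = rng.normal(0.0, scale, shape)  # candidate state weights
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                # how much to update
        r = sigmoid(self.Wr @ xh)                # how much history to keep
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_cand

    def run(self, sequence):
        """Fold a whole frame sequence into one hidden state."""
        h = np.zeros(self.hidden_dim)
        for x in sequence:
            h = self.step(x, h)
        return h
```

In practice a trained framework implementation would be used; the sketch only shows how a sequence of per-frame features is folded into a single state from which a warning timing could be predicted.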
After obtaining the plurality of neural network models, the processor may further determine weights of the plurality of neural network models, and combine the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combination model. For example, the weights of the plurality of neural network models may be set to be the same; alternatively, a test set may be obtained and weights for a plurality of neural network models determined from the test set.
When the danger early warning is carried out, the processor can input the feature extraction data of the target vehicle into the neural network combination model as input data, and the neural network combination model can determine whether a dangerous event occurs according to the input data.
And S403, determining whether a dangerous event occurs or not by the neural network combined model according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs.
When a dangerous event occurs, certain relation is satisfied between input data. In one implementation, the neural network combination model may determine whether a dangerous event occurs according to whether input data satisfies a preset relationship. When the input data satisfies the preset relationship, it may be determined that a dangerous event has occurred.
When the dangerous event is determined to occur, the early warning signal corresponding to the dangerous event can be output, and specifically, the processor can output sound information, such as a prolonged sound, to remind the driver of possible danger.
In the embodiment of the invention, the video frame image acquired by the image acquisition equipment is processed to acquire the feature extraction data of the target vehicle, and based on the feature extraction data, whether danger exists is determined through the neural network combination model, so that the danger early warning in the driving process of the vehicle is realized.
As an implementation manner of the embodiment of the present invention, the neural network combination model may include a plurality of neural networks, so as to perform early warning on different dangerous events through the plurality of neural networks, respectively.
For example, the neural network combination model may include a combination of one or more of a first neural network, a second neural network, and a third neural network; the first neural network is used for determining whether a lane departure event occurs according to input data, and outputting an early warning signal corresponding to the lane departure event when the lane departure event is determined to occur; the second neural network is used for determining whether a front vehicle collision event occurs according to the input data and outputting an early warning signal corresponding to the front vehicle collision event when the front vehicle collision event is determined to occur; and the third neural network is used for determining whether a pedestrian collision event occurs according to the input data and outputting an early warning signal corresponding to the pedestrian collision event when the pedestrian collision event is determined to occur.
The output early warning signals may be different for different dangerous events. For example, when the warning signal is a sound signal, sounds with different volumes may be used for warning for different events, or sounds with different timbres may be used for warning.
In this embodiment, the neural network combination model can perform early warning respectively for different dangerous events, so that the practicability of the neural network combination model is improved. And different early warning signals are output according to different dangerous events, and the driver can be reminded in a targeted manner, so that the response speed of the driver is improved, and the probability of dangerous events is reduced.
As an implementation manner of the embodiment of the present invention, when the neural network combination model at least includes the first neural network, as shown in fig. 5, the method for early warning of danger according to the embodiment of the present invention may include the following steps:
s501, acquiring continuous multi-frame video frame images acquired by image acquisition equipment;
this step is substantially the same as step S401 in the embodiment shown in fig. 4, and is not described herein again.
S502, performing first preprocessing on the video frame image to obtain vehicle characteristic information and lane characteristic information of a target vehicle, and inputting the vehicle characteristic information and the lane characteristic information into a first neural network;
the lane characteristic information at least comprises one or more of lane line quality, lane width, lane radius, lane curvature and yaw angle of the driving lane corresponding to the target vehicle; the vehicle characteristic information may include at least the traveling track of the target vehicle, the relative position and relative speed of the target vehicle with respect to the driving lane, and one or more of the traveling speed, turn signal, steering wheel angle signal, brake signal, yaw rate, headway time and vehicle width of the target vehicle.
S503, determining the lane departure distance and/or the lane crossing time of the target vehicle by the first neural network according to the vehicle characteristic information and the lane characteristic information;
after the vehicle characteristic information and the lane characteristic information are input into the first neural network, the first neural network may determine whether a dangerous event occurs according to the vehicle characteristic information and the lane characteristic information. In particular, the first neural network may determine a lane departure distance and/or a cross-lane time of the target vehicle.
In one implementation, when the first neural network determines the lane departure distance of the target vehicle, it may specifically calculate an estimated lateral displacement of the target vehicle when departing from the driving lane, based on the vehicle characteristic information and the lane characteristic information.
The estimated lateral displacement X_s can be determined by:

X_s = T_t · V · (φ + T_t · V · β) + X_front

where T_t is the headway time of the target vehicle, V is the travel speed of the target vehicle, β is the curvature of the driving lane, X_front is the actual lateral displacement of the target vehicle relative to the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle, determined by:

φ = arctan(dX / dY)

where dX is the change in the lateral displacement X per unit time of the target vehicle, dY is the change in the travel-direction displacement per unit time of the target vehicle, and dX′ represents the differential value of dX.
In another implementation, when the first neural network determines the cross-lane time of the target vehicle, and curves and lateral acceleration are not considered, it may calculate the cross-lane time TTLC of the target vehicle from the vehicle characteristic information and the lane characteristic information by:

TTLC = d / (v · sin φ)

where d is the lateral distance of a wheel of the target vehicle from the lane line of the driving lane, v is the longitudinal speed of the target vehicle, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle, determined by:

φ = arctan(dX / dY)

where dX is the change in the lateral displacement X per unit time of the target vehicle, dY is the change in the travel-direction displacement per unit time of the target vehicle, and dX′ represents the differential value of dX.
Considering a curve, or a situation in which the vehicle has lateral acceleration, assume that the target vehicle moves laterally relative to the driving lane with relative acceleration a_r and relative velocity v_r; after time T its lateral displacement d_r is calculated as follows:

d_r = v_r · T + (1/2) · a_r · T²

where a_r is calculated as follows:

a_r = v · ω − v² / R_road

where ω is the yaw rate and R_road is the road radius.
The first neural network may then calculate the cross-lane time TTLC of the target vehicle from the vehicle characteristic information and the lane characteristic information by:

TTLC = (−v_r + √(v_r² + 2 · a_r · d)) / a_r,  with v_r = v · sin φ

where v is the longitudinal speed of the target vehicle, d is the lateral distance of the target vehicle's wheels from the lane line of the driving lane, ω is the yaw rate, R_road is the lane radius of the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle, determined by:

φ = arctan(dX / dY)

where dX is the change in the lateral displacement X per unit time of the target vehicle, dY is the change in the travel-direction displacement per unit time of the target vehicle, and dX′ represents the differential value of dX.
S504, when the lane departure distance meets a preset departure threshold value and/or the lane crossing time meets a preset time threshold value, determining that a lane departure event occurs, and outputting an early warning signal corresponding to the lane departure event.
When the first neural network calculates the estimated lateral displacement X_s of the target vehicle, it may determine whether a lane departure event has occurred as follows:

When |X_s| ≥ X_L, it is judged that the target vehicle has a lane departure event, where X_L is the threshold value of the deviation tendency, X_L = (L − H) / 2, L represents the width of the driving lane, and H represents the width of the target vehicle.
When the first neural network calculates the cross-lane time of the target vehicle, it may determine whether a lane departure event occurs according to the following manner: and when the lane crossing time is less than or equal to the preset time threshold, determining that the target vehicle has a lane departure event. The preset time threshold may be determined according to different lanes, and the specific value of the preset time threshold is not limited in the embodiment of the present invention.
In the embodiment, the lane departure event can be accurately pre-warned through the first neural network, so that the safety of the vehicle in the driving process is improved.
Accordingly, as shown in fig. 6, in the embodiment of the present invention, the training process of the neural network combination model may include:
s601, obtaining a sample video, and preprocessing the sample video to obtain sample data of a sample vehicle.
In the embodiment of the invention, the processor can obtain the sample video, for example, historical monitoring video in the running process of one or more vehicles can be obtained as the sample video.
After the sample video is obtained, the sample video can be preprocessed to obtain sample data of the sample vehicle. For example, the processor may identify the sample vehicle in the sample video by using any target identification method, and further obtain the sample data of the sample vehicle by analyzing each frame of image in the sample video.
The sample data may be any data that can reflect the running condition of the sample vehicle. For example, the sample data may include one or more of: lane line quality, lane width, yaw angle, lane curvature, vehicle speed, yaw rate, headway, vehicle width, and the like.
S602, dividing the sample data into a plurality of training sets and a plurality of testing sets in different sampling periods.
In the embodiment of the invention, in order to ensure the accuracy of the danger early warning, the processor can sample the sample data according to different sampling periods to obtain a plurality of groups of training sets and a plurality of groups of test sets. The above-mentioned sampling period may include, but is not limited to, 30 frames, 60 frames, 90 frames, etc.
For example, the processor may sample the sample data according to different sampling periods, and then use a part of the sampling results as a training set, and another part of the sampling results as a test set. The training set is used for training to obtain a plurality of neural network models, and the test set is used for determining the weight of each neural network model.
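A minimal sketch of this sampling-period split (the function names, the stride and the 80/20 ratio are illustrative assumptions):

```python
def make_windows(frames, period, stride=1):
    """Slice per-frame feature rows into fixed-length training sequences,
    one window length per sampling period (e.g. 30, 60 or 90 frames)."""
    return [frames[i:i + period]
            for i in range(0, len(frames) - period + 1, stride)]

def split_train_test(windows, train_ratio=0.8):
    """Ordered split of the windows into a training set and a test set."""
    cut = int(len(windows) * train_ratio)
    return windows[:cut], windows[cut:]
```

Running make_windows once per sampling period yields one data set per period, each of which is then split so that the training part trains one model and the test part serves the weight fitting.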
And S603, training by adopting a sequence learning method through the plurality of groups of training sets to obtain a plurality of neural network models.
After the sample data is sampled to obtain a plurality of groups of training sets, the processor can perform sequence learning on the plurality of groups of training sets to obtain a plurality of neural network models. For example, the processor may train a corresponding neural network model for each group of training sets; that is, one neural network model can be trained for each group of training sets.
The sequence learning method may include, but is not limited to, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and the like.
Specifically, for any group of training sets, the processor may train the neural network model using the sample data in that training set as input. The actual early warning timing is determined from the turn signal, the brake signal and the vehicle running track, and training of the neural network model is considered finished when the early warning timing output by the trained model and the actual early warning timing satisfy a preset condition, e.g. when their time difference is smaller than a preset time threshold.
S604, determining weights of the plurality of neural network models according to the plurality of groups of test sets.
After obtaining the plurality of neural network models, the processor may also determine weights for the plurality of neural network models based on the plurality of sets of test sets.
For example, the processor may input the sample data in each group of test sets into each neural network model, determine the predicted early warning timings of the plurality of neural network models on the plurality of groups of test sets and the actual early warning timings corresponding to those test sets, compare the predicted timings with the actual timings, and determine the weights of the plurality of neural network models by a linear regression method.
The closer the predicted early warning timing of any neural network model is to the actual early warning timing, the higher the weight of that neural network model.
And S605, combining the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combination model.
After obtaining the plurality of neural network models and the weights of the neural network models, the processor may combine the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combination model.
In this embodiment, a neural network combination model can be obtained by training in a sequence learning manner, and the neural network combination model can better combine scene context and scene interaction information, thereby improving the prediction accuracy of early warning. In addition, in the model training process, the road information and the vehicle information are obtained by using the sample videos, and meanwhile, the actual early warning time is determined by combining the operation of a driver, so that a large number of video samples can be quickly and automatically obtained, and the huge cost of marking the samples is reduced.
As an implementation manner of the embodiment of the present invention, the processor may further update the neural network combination model to further improve accuracy of the risk early warning.
Specifically, after it is determined that a lane departure event has occurred, the processor may determine the timing at which the driver actually takes action from the turn signal, the vehicle running track and the steering wheel angle signal in the feature extraction data, and treat this as the actual early warning timing corresponding to the lane departure event. When the difference between the actual early warning timing and the time at which the corresponding warning information was output is greater than a preset time threshold, the neural network combination model is updated.
When the time for outputting the alarm information is closer to the actual early warning time, the prediction effect of the current neural network model is better, the neural network combination model can not be updated, the calculation amount of the danger early warning is reduced, and the operation efficiency of the network system is improved.
When the error between the time of outputting the alarm information and the actual early warning opportunity is larger, the prediction effect of the current neural network model is poor, and under the condition, the processor can update the neural network combination model.
For example, the processor may take the video frames acquired over a recent period as sample videos and retrain to obtain an updated neural network combination model. The training method is similar to that of the embodiment shown in fig. 6 and is not repeated here.
In one implementation, to improve the user experience, the driver may decide whether the neural network combination model is updated. Specifically, when the update condition is satisfied, the processor may first present confirmation information asking whether to update the neural network combination model, and update the model only when a confirmation instruction for that information is received.
In this way, the trained neural network combination model can incorporate the driver's habitual characteristics, the system gains self-diagnosis and self-repair capability, and early-warning accuracy is maintained.
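The update trigger and driver-confirmation flow described above can be sketched as follows. This is an illustrative sketch only; the function names (`maybe_update_model`, `confirm_with_driver`, `retrain_model`) and the callback structure are assumptions, not taken from the patent.

```python
def maybe_update_model(warning_time: float,
                       actual_time: float,
                       time_threshold: float,
                       confirm_with_driver,
                       retrain_model) -> bool:
    """Update the combination model only when the warning fired too far
    from the moment the driver actually took action, and the driver agrees."""
    error = abs(actual_time - warning_time)
    if error <= time_threshold:
        return False      # prediction is good enough; skip the update
    if not confirm_with_driver():
        return False      # driver declined the update
    retrain_model()       # e.g. retrain on recently captured sample video
    return True
```

Here `retrain_model` stands in for the retraining on recent video frames mentioned above.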
In one implementation, the neural network combination model is a sequence learning-based neural network combination model, which includes a plurality of neural networks, each of the neural networks corresponding to a weight;
the updating the neural network combination model comprises:
inputting the actual early-warning time and the output time of the corresponding warning information into the sequence-learning-based neural network combination model, and adjusting the weight of each neural network in the sequence-learning-based neural network combination model.
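One hedged reading of this weight adjustment is sketched below, assuming each member network predicts an early-warning time and weights are shifted toward the members whose predictions land closest to the actual time. The inverse-error rule used here is an illustrative choice; the patent does not specify the adjustment formula.

```python
def adjust_weights(predicted_times, actual_time, weights, eps=1e-6):
    """Re-weight ensemble members by how close each member's predicted
    warning time was to the driver's actual reaction time."""
    scores = [1.0 / (abs(p - actual_time) + eps) for p in predicted_times]
    total = sum(s * w for s, w in zip(scores, weights))
    # normalize so the adjusted weights still sum to 1
    return [s * w / total for s, w in zip(scores, weights)]
```

The combined prediction would then be the weighted sum of the member outputs under the new weights.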
Correspondingly, an embodiment of the present invention further provides a danger early warning apparatus, as shown in fig. 7, the apparatus includes:
the image acquisition module 710 is configured to acquire a continuous multi-frame video frame image acquired by an image acquisition device;
the data extraction module 720 is configured to preprocess the video frame image to obtain feature extraction data of the target vehicle, and input the feature extraction data as input data to the neural network combination model;
and a danger early warning module 730, configured to control the neural network combination model to determine whether a dangerous event occurs according to the input data, and output an early warning signal corresponding to the dangerous event when the dangerous event occurs.
According to the embodiment of the invention, the feature extraction data of the target vehicle can be obtained by processing the video frame image acquired by the image acquisition equipment, and whether a danger exists or not is determined by the neural network combination model based on the feature extraction data, so that the danger early warning in the driving process of the vehicle is realized.
As an implementation manner of the embodiment of the present invention, the neural network combination model includes one or more combinations of a first neural network, a second neural network and a third neural network;
the first neural network is used for determining whether a lane departure event occurs according to the input data, and outputting an early warning signal corresponding to the lane departure event when the lane departure event is determined to occur;
the second neural network is used for determining whether a front vehicle collision event occurs according to the input data, and outputting an early warning signal corresponding to the front vehicle collision event when the front vehicle collision event is determined to occur;
and the third neural network is used for determining whether a pedestrian collision event occurs according to the input data, and outputting an early warning signal corresponding to the pedestrian collision event when the pedestrian collision event is determined to occur.
As an implementation manner of the embodiment of the present invention, the neural network combination model at least includes the first neural network;
the data extraction module 720 is specifically configured to perform a first preprocessing on the video frame image to obtain vehicle characteristic information and lane characteristic information of the target vehicle, and input the vehicle characteristic information and the lane characteristic information to a first neural network;
the danger early warning module 730 includes:
the parameter determination submodule is used for controlling the first neural network to determine the lane departure distance and/or the lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information;
and the lane departure early warning submodule is used for determining that a lane departure event occurs when the lane departure distance meets a preset departure threshold value and/or the lane crossing time meets a preset time threshold value, and outputting an early warning signal corresponding to the lane departure event.
As an implementation manner of the embodiment of the present invention, the lane characteristic information at least includes one or more of a lane line quality, a lane width, a lane radius, a lane curvature, and a yaw angle of a driving lane corresponding to the target vehicle;
the vehicle characteristic information includes at least a travel track of the target vehicle, a relative position and a relative speed of the target vehicle with respect to the driving lane, and one or more of a travel speed, a turn signal, a steering wheel angle signal, a brake signal, a yaw rate, a headway, and a head width of the target vehicle.
As an implementation manner of the embodiment of the present invention, the parameter determining submodule is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, an estimated lateral displacement of the target vehicle when the target vehicle deviates from the driving lane;
the estimated lateral displacement X_s is determined by:
[equation rendered as an image in the original, giving X_s in terms of T_t, V, β, X_front and the yaw angle φ]
wherein T_t is the headway of the target vehicle, V is the travel speed of the target vehicle, β is the curvature of the driving lane, X_front is the actual lateral displacement of the target vehicle relative to the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
[equation rendered as an image in the original, giving φ in terms of dX, dY and dX']
wherein dX is the change per unit time in the lateral displacement X of the target vehicle, dY is the change per unit time in the displacement of the target vehicle along its direction of travel, and dX' denotes the differential of dX.
As an implementation manner of the embodiment of the present invention, the lane departure warning sub-module is specifically configured to determine that the target vehicle has a lane departure event when |X_s| ≥ X_L; wherein X_L is the departure-tendency threshold, X_L = (L - H)/2, L denotes the width of the driving lane, and H denotes the head width of the target vehicle.
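The departure-tendency check X_L = (L - H)/2 described above is simple enough to sketch directly; the function name and example values below are illustrative only.

```python
def lane_departure(x_s: float, lane_width: float, head_width: float) -> bool:
    """Flag a lane departure event when the estimated lateral displacement
    X_s exceeds the departure-tendency threshold X_L = (L - H) / 2."""
    x_l = (lane_width - head_width) / 2.0
    return abs(x_s) >= x_l
```

For a 3.5 m lane and a 1.8 m vehicle head width, the threshold is 0.85 m of lateral displacement to either side.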
As an implementation manner of the embodiment of the present invention, the parameter determining sub-module is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, a lane crossing time TTLC of the target vehicle by:
[equation rendered as an image in the original, giving TTLC in terms of d, v and the yaw angle φ]
wherein d is the lateral distance between a wheel of the target vehicle and a lane line of the driving lane, v is the longitudinal speed of the target vehicle, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
[equation rendered as an image in the original, giving φ in terms of dX, dY and dX']
wherein dX is the change per unit time in the lateral displacement X of the target vehicle, dY is the change per unit time in the displacement of the target vehicle along its direction of travel, and dX' denotes the differential of dX.
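The patent's exact TTLC formula survives only as an image. As a hedged sketch with the same inputs (lateral distance d, longitudinal speed v, yaw angle φ), a widely used straight-road approximation treats the lateral closing speed as v·tan(φ), giving TTLC ≈ d / (v·tan φ). This is an assumption for illustration, not necessarily the patent's formula.

```python
import math

def ttlc_straight(d: float, v: float, yaw_angle: float) -> float:
    """Approximate time to lane crossing on a straight road: the lateral
    closing speed toward the lane line is roughly v * tan(yaw_angle)."""
    lateral_speed = v * math.tan(yaw_angle)
    if lateral_speed <= 0:
        return math.inf  # not drifting toward the lane line
    return d / lateral_speed
```

A warning would then fire when the returned TTLC drops to or below the preset time threshold, as in the decision rule below.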
As an implementation manner of the embodiment of the present invention, the parameter determining sub-module is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, a lane crossing time TTLC of the target vehicle by:
[equation rendered as an image in the original, giving TTLC in terms of v, d, ω, R_road and the yaw angle φ]
wherein v is the longitudinal speed of the target vehicle, d is the lateral distance between the target vehicle's wheels and a lane line of the driving lane, ω is the yaw rate, R_road is the lane radius of the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
[equation rendered as an image in the original, giving φ in terms of dX, dY and dX']
wherein dX is the change per unit time in the lateral displacement X of the target vehicle, dY is the change per unit time in the displacement of the target vehicle along its direction of travel, and dX' denotes the differential of dX.
As an implementation manner of the embodiment of the present invention, the lane departure warning submodule is specifically configured to determine that the target vehicle has a lane departure event when the lane crossing time is less than or equal to a preset time threshold.
As an implementation manner of the embodiment of the present invention, the apparatus further includes:
the video acquisition module is used for acquiring a sample video and preprocessing the sample video to obtain sample data of a sample vehicle;
the data sampling module is used for dividing the sample data into a plurality of groups of training sets and a plurality of groups of testing sets in different sampling periods;
the sequence learning module is used for obtaining a plurality of neural network models through the plurality of groups of training sets by adopting a sequence learning method;
a weight determination module for determining weights of the plurality of neural network models according to the plurality of sets of test sets;
and the model combination module is used for combining the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combination model.
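The train/test split by sampling period, per-fold training, and weight-by-timing-error combination described by these modules can be sketched end to end. The fold construction, the inverse-error weighting, and the `train_fn`/`predict_time_fn` callbacks are illustrative placeholders for whatever sequence-learning method is actually used; none of these names come from the patent.

```python
from statistics import mean

def build_combination(sample_data, n_splits, train_fn, predict_time_fn, eps=1e-6):
    """Split samples into period-wise train/test folds, train one model per
    fold, then weight each model by its warning-timing error on its test set."""
    folds = [sample_data[i::n_splits] for i in range(n_splits)]
    models, errors = [], []
    for i, test in enumerate(folds):
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train_fn(train)
        # mean absolute gap between predicted and actual warning times
        err = mean(abs(predict_time_fn(model, s) - s["actual_time"]) for s in test)
        models.append(model)
        errors.append(err)
    scores = [1.0 / (e + eps) for e in errors]
    total = sum(scores)
    weights = [s / total for s in scores]
    return models, weights
```

Models with smaller timing error receive larger weights, matching the comparison of predicted and actual early-warning times described above.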
As an implementation manner of the embodiment of the present invention, the weight determining module is specifically configured to:
determining the predicted early-warning times produced by the plurality of neural network models for the plurality of training sets, and the actual early-warning times corresponding to the plurality of test sets;
and comparing the predicted early-warning times with the actual early-warning times to determine the weights of the plurality of neural network models.
As an implementation manner of the embodiment of the present invention, the apparatus further includes:
and the early warning opportunity determining module is used for determining the moment at which the driver actually takes action according to the turn signal, the vehicle travel track and the steering wheel angle signal, and determining that moment as the actual early-warning time corresponding to the lane departure event.
As an implementation manner of the embodiment of the present invention, the apparatus further includes:
and the model updating module is used for updating the neural network combination model when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is greater than a preset time threshold value.
As an implementation manner of the embodiment of the present invention, the neural network combination model is a neural network combination model based on sequence learning, the neural network combination model based on sequence learning includes a plurality of neural networks, and each of the neural networks corresponds to a weight;
the model updating module is specifically configured to input the actual early warning time and the time for outputting the corresponding warning information into the neural network combination model based on the sequence learning, and adjust the weight of each neural network in the neural network combination model based on the sequence learning.
As an implementation manner of the embodiment of the present invention, the model update module is specifically configured to:
when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is larger than a preset time threshold value, displaying confirmation information for judging whether to update the neural network combination model;
when a confirmation instruction for the confirmation information is received, updating the neural network combination model.
Correspondingly, an embodiment of the present invention further provides a danger early warning device, as shown in fig. 8, including: a processor 810 and a memory 820;
the memory 820 stores executable program code;
the processor 810 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 820, so as to perform a danger early warning method according to an embodiment of the present invention, where the danger early warning method includes:
acquiring continuous multi-frame video frame images acquired by image acquisition equipment;
preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model;
and determining whether a dangerous event occurs or not by the neural network combined model according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs.
According to the embodiment of the invention, the feature extraction data of the target vehicle can be obtained by processing the video frame image acquired by the image acquisition equipment, and whether a danger exists or not is determined by the neural network combination model based on the feature extraction data, so that the danger early warning in the driving process of the vehicle is realized.
Accordingly, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer-readable storage medium implements a risk early warning method according to an embodiment of the present invention, where the risk early warning method includes:
acquiring continuous multi-frame video frame images acquired by image acquisition equipment;
preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model;
and determining whether a dangerous event occurs or not by the neural network combined model according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs.
According to the embodiment of the invention, the feature extraction data of the target vehicle can be obtained by processing the video frame image acquired by the image acquisition equipment, and whether a danger exists or not is determined by the neural network combination model based on the feature extraction data, so that the danger early warning in the driving process of the vehicle is realized.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus/device/storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (30)

1. A method of hazard warning, the method comprising:
acquiring continuous multi-frame video frame images acquired by image acquisition equipment;
preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model;
determining whether a dangerous event occurs or not by the neural network combined model according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs;
the method further comprises the following steps:
after a lane departure event is determined, updating the neural network combination model when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is greater than a preset time threshold value; wherein the actual early warning timing represents a timing at which the driver actually takes an operation.
2. The method of claim 1, wherein the neural network combination model comprises a combination of one or more of a first neural network, a second neural network, and a third neural network;
the first neural network is used for determining whether a lane departure event occurs according to the input data, and outputting an early warning signal corresponding to the lane departure event when the lane departure event is determined to occur;
the second neural network is used for determining whether a front vehicle collision event occurs according to the input data, and outputting an early warning signal corresponding to the front vehicle collision event when the front vehicle collision event is determined to occur;
and the third neural network is used for determining whether a pedestrian collision event occurs according to the input data, and outputting an early warning signal corresponding to the pedestrian collision event when the pedestrian collision event is determined to occur.
3. The method of claim 2, wherein the neural network combination model includes at least the first neural network;
the preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model, comprises:
performing first preprocessing on the video frame image to obtain vehicle characteristic information and lane characteristic information of the target vehicle, and inputting the vehicle characteristic information and the lane characteristic information into a first neural network;
the determining, by the neural network combined model, whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs includes:
determining, by the first neural network, a lane departure distance and/or a lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information;
and when the lane departure distance meets a preset departure threshold value and/or the lane crossing time meets a preset time threshold value, determining that a lane departure event occurs, and outputting an early warning signal corresponding to the lane departure event.
4. The method of claim 3,
the lane characteristic information at least comprises one or more of lane line quality, lane width, lane radius, lane curvature and yaw angle of a driving lane corresponding to the target vehicle;
the vehicle characteristic information includes at least a travel track of the target vehicle, a relative position and a relative speed of the target vehicle with respect to the driving lane, and one or more of a travel speed, a turn signal, a steering wheel angle signal, a brake signal, a yaw rate, a headway, and a head width of the target vehicle.
5. The method of claim 4,
determining, by the first neural network, a lane departure distance of the target vehicle according to the vehicle characteristic information and the lane characteristic information, including:
calculating, by the first neural network, an estimated lateral displacement of the target vehicle when deviating from the driving lane according to the vehicle characteristic information and the lane characteristic information;
the estimationMeasuring transverse displacement X s Is determined by:
Figure FDA0003597574220000021
wherein, T t Is the headway of the target vehicle, V is the speed of travel of the target vehicle, beta is the curvature of the lane of travel, X front Is the actual lateral displacement of the target vehicle relative to the driving lane,
Figure FDA0003597574220000022
is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle,
Figure FDA0003597574220000023
is determined by:
Figure FDA0003597574220000024
dX is a change in lateral displacement X per unit time of the target vehicle, dY is a change in travel direction displacement per unit time of the target vehicle, and dX' represents a differential value of dX.
6. The method of claim 5, wherein determining that a lane departure event has occurred when the lane departure distance satisfies a preset departure threshold comprises:
when |X_s| ≥ X_L, determining that the target vehicle has a lane departure event; wherein X_L is the departure-tendency threshold, X_L = (L - H)/2, L denotes the width of the driving lane, and H denotes the head width of the target vehicle.
7. The method of claim 4,
the determining, by the first neural network, a lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information includes:
calculating, by the first neural network, a crossing time TTLC of the target vehicle according to the vehicle characteristic information and the lane characteristic information by:
[equation rendered as an image in the original, giving TTLC in terms of d, v and the yaw angle φ]
wherein d is the lateral distance between a wheel of the target vehicle and a lane line of the driving lane, v is the longitudinal speed of the target vehicle, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
[equation rendered as an image in the original, giving φ in terms of dX, dY and dX']
wherein dX is the change per unit time in the lateral displacement X of the target vehicle, dY is the change per unit time in the displacement of the target vehicle along its direction of travel, and dX' denotes the differential of dX.
8. The method of claim 4,
the determining, by the first neural network, a lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information includes:
calculating, by the first neural network, a crossing time TTLC of the target vehicle according to the vehicle characteristic information and the lane characteristic information by:
[equation rendered as an image in the original, giving TTLC in terms of v, d, ω, R_road and the yaw angle φ]
wherein v is the longitudinal speed of the target vehicle, d is the lateral distance between the target vehicle's wheels and a lane line of the driving lane, ω is the yaw rate, R_road is the lane radius of the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
[equation rendered as an image in the original, giving φ in terms of dX, dY and dX']
wherein dX is the change per unit time in the lateral displacement X of the target vehicle, dY is the change per unit time in the displacement of the target vehicle along its direction of travel, and dX' denotes the differential of dX.
9. The method of claim 4, wherein determining that a lane departure event has occurred when the cross-lane time meets a preset time threshold comprises:
and when the lane crossing time is less than or equal to a preset time threshold value, determining that the target vehicle has a lane departure event.
10. The method of claim 1, wherein the training process of the neural network combination model comprises:
obtaining a sample video, and preprocessing the sample video to obtain sample data of a sample vehicle;
dividing the sample data into a plurality of groups of training sets and a plurality of groups of testing sets in different sampling periods;
training by adopting a sequence learning method through the multiple groups of training sets to obtain multiple neural network models;
determining weights of the plurality of neural network models according to the plurality of groups of test sets;
and combining the plurality of neural network models according to the weights of the plurality of neural network models to obtain a neural network combined model.
11. The method of claim 10, wherein determining weights for the plurality of neural network models from the plurality of sets of tests comprises:
determining the corresponding prediction early warning occasions of the multiple groups of training sets in the multiple neural network models and the corresponding actual early warning occasions of the multiple groups of testing sets;
and comparing the predicted early warning opportunity with the actual early warning opportunity to determine the weights of the plurality of neural network models.
12. The method of claim 4, wherein after the determining that a lane departure event has occurred, the method further comprises:
and determining the actual operation taking time of the driver according to the steering lamp signal, the vehicle running track and the steering wheel angle signal, and determining the actual operation taking time of the driver as the actual early warning time corresponding to the lane departure event.
13. The method of claim 1, wherein the neural network combination model is a sequence learning based neural network combination model, the sequence learning based neural network combination model comprising a plurality of neural networks, each of the neural networks corresponding to a weight;
the updating the neural network combination model comprises:
and inputting the actual early warning time and the time for outputting the corresponding warning information into the neural network combination model based on the sequence learning, and adjusting the weight of each neural network in the neural network combination model based on the sequence learning.
14. The method according to claim 1, wherein when a difference between an actual early warning opportunity and a time for outputting corresponding warning information is greater than a preset time threshold, the updating of the neural network combination model comprises:
when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is larger than a preset time threshold value, displaying confirmation information for judging whether to update the neural network combination model;
when a confirmation instruction for the confirmation information is received, updating the neural network combination model.
15. A hazard early warning apparatus, the apparatus comprising:
the image acquisition module is used for acquiring continuous multi-frame video frame images acquired by the image acquisition equipment;
the data extraction module is used for preprocessing the video frame image to obtain feature extraction data of a target vehicle, and inputting the feature extraction data serving as input data into a neural network combination model;
the danger early warning module is used for controlling the neural network combined model to determine whether a dangerous event occurs according to the input data, and outputting an early warning signal corresponding to the dangerous event when the dangerous event occurs;
the device further comprises:
the model updating module is used for updating the neural network combination model when the difference value between the actual early warning opportunity and the time for outputting the corresponding warning information is greater than a preset time threshold value after the lane departure event is determined to occur; wherein the actual early warning timing represents a timing at which the driver actually takes an operation.
16. The apparatus of claim 15, wherein the neural network combination model comprises a combination of one or more of a first neural network, a second neural network, and a third neural network;
the first neural network is used for determining whether a lane departure event occurs according to the input data, and outputting an early warning signal corresponding to the lane departure event when the lane departure event is determined to occur;
the second neural network is used for determining whether a front vehicle collision event occurs according to the input data, and outputting an early warning signal corresponding to the front vehicle collision event when the front vehicle collision event is determined to occur;
and the third neural network is used for determining whether a pedestrian collision event occurs according to the input data, and outputting an early warning signal corresponding to the pedestrian collision event when the pedestrian collision event is determined to occur.
17. The apparatus of claim 16, wherein the neural network combination model comprises at least the first neural network;
the data extraction module is specifically used for performing first preprocessing on the video frame image to obtain vehicle characteristic information and lane characteristic information of the target vehicle, and inputting the vehicle characteristic information and the lane characteristic information into a first neural network;
the danger early warning module comprises:
the parameter determination submodule is used for controlling the first neural network to determine the lane departure distance and/or the lane crossing time of the target vehicle according to the vehicle characteristic information and the lane characteristic information;
and the lane departure early warning submodule is used for determining that a lane departure event occurs when the lane departure distance meets a preset departure threshold value and/or the lane crossing time meets a preset time threshold value, and outputting an early warning signal corresponding to the lane departure event.
18. The apparatus of claim 17,
the lane characteristic information at least comprises one or more of lane line quality, lane width, lane radius, lane curvature and yaw angle of a driving lane corresponding to the target vehicle;
the vehicle characteristic information includes at least a travel track of the target vehicle, a relative position and a relative speed of the target vehicle with respect to the driving lane, and one or more of a travel speed, a turn signal, a steering wheel angle signal, a brake signal, a yaw rate, a headway, and a head width of the target vehicle.
19. The apparatus of claim 18,
the parameter determination submodule is specifically configured to control the first neural network to calculate, according to the vehicle feature information and the lane feature information, an estimated lateral displacement of the target vehicle when the target vehicle deviates from the driving lane;
the estimated lateral displacement X_s being determined by:
[equation rendered as an image in the original, giving X_s in terms of T_t, V, β, X_front and the yaw angle φ]
wherein T_t is the headway of the target vehicle, V is the travel speed of the target vehicle, β is the curvature of the driving lane, X_front is the actual lateral displacement of the target vehicle relative to the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle; φ is determined by:
[equation rendered as an image in the original, giving φ in terms of dX, dY and dX']
wherein dX is the change per unit time in the lateral displacement X of the target vehicle, dY is the change per unit time in the displacement of the target vehicle along its direction of travel, and dX' denotes the differential of dX.
20. The apparatus of claim 19, wherein the lane departure early warning submodule is specifically configured to determine that the target vehicle has a lane departure event when |X_s| ≥ X_L; wherein X_L is the deviation tendency threshold, X_L = (L − H)/2, L denotes the width of the driving lane, and H denotes the width of the head of the target vehicle.
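For illustration, the departure check of claims 19 and 20 can be sketched as follows. The function names are invented, and the prediction formula for X_s is an assumption (the patent publishes its equation only as an image): the current lateral offset is extrapolated over the headway time, with a curvature term for curved lanes.

```python
import math

def yaw_angle(dX: float, dY: float) -> float:
    """Yaw angle between the driving lane and the vehicle's longitudinal
    axis, from per-unit-time displacement changes (phi = arctan(dX/dY))."""
    return math.atan2(dX, dY)

def estimated_lateral_displacement(x_front: float, v: float, t_t: float,
                                   beta: float, phi: float) -> float:
    """Predicted lateral displacement X_s after the headway time T_t:
    current offset + lateral drift over T_t + a curvature contribution.
    This closed form is an assumption, not the patent's own equation."""
    return x_front + v * t_t * math.tan(phi) + 0.5 * beta * (v * t_t) ** 2

def lane_departure(x_s: float, lane_width: float, head_width: float) -> bool:
    """Claim 20: a departure event occurs when |X_s| >= X_L = (L - H) / 2."""
    x_l = (lane_width - head_width) / 2.0
    return abs(x_s) >= x_l

phi = yaw_angle(dX=0.02, dY=1.0)            # slight drift toward the lane line
x_s = estimated_lateral_displacement(x_front=0.3, v=20.0, t_t=1.2,
                                     beta=0.0, phi=phi)
print(lane_departure(x_s, lane_width=3.5, head_width=1.8))   # prints False
```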
21. The apparatus of claim 18,
the parameter determination submodule is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, a lane crossing time TTLC of the target vehicle by:
TTLC = d / (v·sin φ)

wherein d is the lateral distance of a wheel of the target vehicle from a lane line of the driving lane, v is the longitudinal speed of the target vehicle, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle, which is determined by:

φ = arctan(dX/dY)

wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' denotes the differential of dX.
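Assuming the standard time-to-line-crossing form (lateral gap divided by lateral speed), the straight-road case of claim 21 can be sketched as below; all names are illustrative.

```python
import math

def time_to_lane_crossing(d: float, v: float, phi: float) -> float:
    """Claim 21 (straight road): time until the wheel reaches the lane
    line, i.e. the lateral gap d divided by the lateral speed v*sin(phi).
    Returns infinity when the vehicle is not converging on the line."""
    lateral_speed = v * math.sin(phi)
    if lateral_speed <= 0.0:
        return math.inf
    return d / lateral_speed

phi = math.atan2(0.05, 1.0)   # yaw angle from dX/dY, as in the claim
ttlc = time_to_lane_crossing(d=0.5, v=15.0, phi=phi)
print(round(ttlc, 2))         # prints 0.67
```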
22. The apparatus of claim 18,
the parameter determination submodule is specifically configured to control the first neural network to calculate, according to the vehicle characteristic information and the lane characteristic information, a lane crossing time TTLC of the target vehicle by:
TTLC = [−v·sin φ + √((v·sin φ)² + 2d·(v·ω − v²/R_road))] / (v·ω − v²/R_road)

wherein v is the longitudinal speed of the target vehicle, d is the lateral distance of a wheel of the target vehicle from a lane line of the driving lane, ω is the yaw rate, R_road is the lane radius of the driving lane, and φ is the yaw angle formed between the driving lane and the longitudinal axis of the target vehicle, which is determined by:

φ = arctan(dX/dY)

wherein dX is the change in the lateral displacement X of the target vehicle per unit time, dY is the change in the travel-direction displacement of the target vehicle per unit time, and dX' denotes the differential of dX.
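The curved-road case of claim 22 can be sketched as constant-acceleration motion toward the lane line: lateral speed v·sin(φ), and a relative lateral acceleration from the mismatch between the vehicle's yaw rate and the road curvature. The closed form below is an assumption (the patent's own equation appears only as an image), and all names are illustrative.

```python
import math

def curved_ttlc(d: float, v: float, phi: float,
                omega: float, r_road: float) -> float:
    """Claim 22 (curved road), sketched: solve
    d = v*sin(phi)*t + (1/2)*a*t**2 for the smallest positive t, with
    a = v*omega - v**2/r_road (vehicle yaw vs. road curvature)."""
    v_lat = v * math.sin(phi)
    a = v * omega - v ** 2 / r_road
    if abs(a) < 1e-9:                       # straight-road limit
        return d / v_lat if v_lat > 0 else math.inf
    disc = v_lat ** 2 + 2.0 * a * d
    if disc < 0:
        return math.inf                     # never reaches the lane line
    t = (-v_lat + math.sqrt(disc)) / a
    return t if t > 0 else math.inf

t = curved_ttlc(d=0.5, v=15.0, phi=0.03, omega=0.02, r_road=500.0)
print(round(t, 2))
```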
23. The apparatus of claim 18, wherein the lane departure early warning submodule is specifically configured to determine that the target vehicle has a lane departure event when the lane crossing time is less than or equal to the preset time threshold.
24. The apparatus of claim 15, further comprising:
the video acquisition module is used for acquiring a sample video and preprocessing the sample video to obtain sample data of a sample vehicle;
the data sampling module is used for dividing the sample data into a plurality of training sets and a plurality of test sets by different sampling periods;
the sequence learning module is used for obtaining a plurality of neural network models from the plurality of training sets by adopting a sequence learning method;
a weight determination module for determining weights of the plurality of neural network models according to the plurality of test sets;
and the model combination module is used for combining the plurality of neural network models according to their weights to obtain a neural network combination model.
25. The apparatus according to claim 24, wherein the weight determination module is specifically configured to:
determine the predicted early warning occasions of the plurality of training sets in the plurality of neural network models, and the actual early warning occasions corresponding to the plurality of test sets; and
compare the predicted early warning occasions with the actual early warning occasions to determine the weights of the plurality of neural network models.
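A minimal sketch of the weighting scheme in claims 24 and 25: each model is weighted by how closely its predicted warning occasions match the actual occasions on held-out data, and the combined model is a weighted sum. The inverse-error weighting rule and all names are illustrative assumptions; the patent only states that the comparison determines the weights.

```python
def determine_weights(predicted_times, actual_times):
    """Weight each model by the mean absolute timing error between its
    predicted warning occasions and the actual ones (smaller error ->
    larger weight); weights are normalized to sum to 1."""
    errors = [
        sum(abs(p - a) for p, a in zip(preds, actual_times)) / len(actual_times)
        for preds in predicted_times
    ]
    inv = [1.0 / (e + 1e-6) for e in errors]    # inverse-error weighting
    total = sum(inv)
    return [x / total for x in inv]

def combine(model_outputs, weights):
    """Weighted combination of the per-model warning scores."""
    return sum(w * o for w, o in zip(weights, model_outputs))

actual = [2.0, 3.0, 4.0]                        # actual warning occasions (s)
per_model_preds = [[2.1, 3.0, 3.9],             # model 1: close to actual
                   [2.5, 3.6, 4.8]]             # model 2: consistently late
w = determine_weights(per_model_preds, actual)
print(w[0] > w[1])                              # closer model gets more weight
```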
26. The apparatus of claim 18, further comprising:
and the early warning occasion determining module is used for determining, according to the turn signal, the vehicle travel track and the steering wheel angle signal, the time at which the driver actually takes an operation, and determining that time as the actual early warning occasion corresponding to the lane departure event.
27. The apparatus of claim 15, wherein the neural network combination model is a sequence learning based neural network combination model, the sequence learning based neural network combination model comprising a plurality of neural networks, each of the neural networks corresponding to a weight;
the model updating module is specifically configured to input the actual early warning time and the time for outputting the corresponding warning information into the neural network combination model based on the sequence learning, and adjust the weight of each neural network in the neural network combination model based on the sequence learning.
28. The apparatus of claim 15, wherein the model update module is specifically configured to:
when the difference between the actual early warning occasion and the time at which the corresponding warning information was output is larger than a preset time threshold, displaying confirmation information asking whether to update the neural network combination model;
when a confirmation instruction for the confirmation information is received, updating the neural network combination model.
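The confirm-then-update flow of claim 28 can be sketched as below; `confirm` is a stand-in for the device's confirmation UI, and the names are illustrative.

```python
def maybe_update_model(actual_time: float, warning_time: float,
                       threshold: float, confirm) -> bool:
    """Claim 28: when the gap between the actual early warning occasion
    and the time the warning was output exceeds the threshold, ask for
    confirmation; update only if a confirmation instruction is received.
    Returns True when the caller should perform the model update."""
    if abs(actual_time - warning_time) > threshold:
        if confirm("Update the neural network combination model?"):
            return True
    return False

# Warning came 1.5 s after the driver actually acted; threshold is 1.0 s.
print(maybe_update_model(2.0, 3.5, threshold=1.0,
                         confirm=lambda msg: True))   # prints True
```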
29. A hazard early warning device, comprising: a processor and a memory;
the memory stores executable program code;
the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the danger early warning method according to any one of claims 1 to 14.
30. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the danger early warning method according to any one of claims 1 to 14.
CN201810730839.9A 2018-07-05 2018-07-05 Danger early warning method, device, equipment and storage medium Active CN110688877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810730839.9A CN110688877B (en) 2018-07-05 2018-07-05 Danger early warning method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110688877A CN110688877A (en) 2020-01-14
CN110688877B true CN110688877B (en) 2022-08-05

Family

ID=69106737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810730839.9A Active CN110688877B (en) 2018-07-05 2018-07-05 Danger early warning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110688877B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343603B (en) * 2020-01-19 2023-09-01 鲨港科技(上海)有限公司 Data transmission system, method, electronic equipment and helmet
CN111898475A (en) * 2020-07-10 2020-11-06 浙江大华技术股份有限公司 Method and device for estimating state of non-motor vehicle, storage medium, and electronic device
CN114103988B (en) * 2020-08-31 2024-04-19 奥迪股份公司 Safety monitoring device, vehicle comprising same, and corresponding method, device and medium
CN111814766B (en) * 2020-09-01 2020-12-15 中国人民解放军国防科技大学 Vehicle behavior early warning method and device, computer equipment and storage medium
CN112052776B (en) * 2020-09-01 2021-09-10 中国人民解放军国防科技大学 Unmanned vehicle autonomous driving behavior optimization method and device and computer equipment
CN112418157B (en) * 2020-12-08 2022-09-16 北京深睿博联科技有限责任公司 Vehicle speed identification method and device based on differential neural network and image sequence data
CN113562621B (en) * 2021-07-27 2024-02-23 衡水京华制管有限公司 Crown block dangerous suspended object early warning method, terminal equipment and readable storage medium
CN113792598B (en) * 2021-08-10 2023-04-14 西安电子科技大学广州研究院 Vehicle-mounted camera-based vehicle collision prediction system and method
CN114475641B (en) * 2022-04-15 2022-06-28 天津所托瑞安汽车科技有限公司 Lane departure warning method, lane departure warning device, lane departure warning control device, and storage medium
CN115904499B (en) * 2023-02-27 2023-05-09 珠海市鸿瑞信息技术股份有限公司 Dangerous situation awareness real-time early warning system and method based on artificial intelligence

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103832433A (en) * 2012-11-21 2014-06-04 中国科学院沈阳计算技术研究所有限公司 Lane departure and front collision warning system and achieving method thereof
CN106156725A (en) * 2016-06-16 2016-11-23 江苏大学 A kind of method of work of the identification early warning system of pedestrian based on vehicle front and cyclist
CN107463907A (en) * 2017-08-08 2017-12-12 东软集团股份有限公司 Vehicle collision detection method, device, electronic equipment and vehicle
CN108099819A (en) * 2017-12-15 2018-06-01 东风汽车集团有限公司 A kind of Lane Departure Warning System and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311283B2 (en) * 2008-07-06 2012-11-13 Automotive Research&Testing Center Method for detecting lane departure and apparatus thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lane Departure Warning Based on Deep Neural Networks; Tan Dongkui et al.; Proceedings of the 17th Annual Meeting of the China Association for Science and Technology; 2015-05-23; Section 2 of the text *


Similar Documents

Publication Publication Date Title
CN110688877B (en) Danger early warning method, device, equipment and storage medium
EP2201496B1 (en) Inattentive state determination device and method of determining inattentive state
EP3279052B1 (en) Automatic driving control device
EP2174838B1 (en) Drive assistance apparatus for vehicle and vehicle equipped with the apparatus
US11727799B2 (en) Automatically perceiving travel signals
US8924074B2 (en) Driving assistance system for vehicle and vehicle equipped with driving assistance system for vehicle
US8947218B2 (en) Driving support device
JP4396597B2 (en) Dangerous reaction point recording system and driving support system
US20080291276A1 (en) Method for Driver Assistance and Driver Assistance Device on the Basis of Lane Information
EP3759700A1 (en) Method for determining driving policy
CN112389448A (en) Abnormal driving behavior identification method based on vehicle state and driver state
CN105564436A (en) Advanced driver assistance system
CN110077398B (en) Risk handling method for intelligent driving
Chang et al. Onboard measurement and warning module for irregular vehicle behavior
US20180299893A1 (en) Automatically perceiving travel signals
CN111094095B (en) Method and device for automatically sensing driving signal and vehicle
US11282299B2 (en) Method for determining a driving instruction
CN101393034B (en) Traffic lane prediction method and lane bias alarm system
Rodemerk et al. Predicting the driver's turn intentions at urban intersections using context-based indicators
US20180300566A1 (en) Automatically perceiving travel signals
CN115993597A (en) Visual radar perception fusion method and terminal equipment
US11807238B2 (en) Driving assistance system for a vehicle, vehicle having same and driving assistance method for a vehicle
CN118225122A (en) Intelligent lane recommendation navigation method and system
Riera et al. Detecting and tracking unsafe lane departure events for predicting driver safety in challenging naturalistic driving data
CN105976453A (en) Image transformation-based driving alarm method and apparatus thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant