CN114360291A - Driver danger early warning method, device, equipment and storage medium

Driver danger early warning method, device, equipment and storage medium

Info

Publication number
CN114360291A
Authority
CN
China
Prior art keywords
image data
target object
target
driver
frame
Prior art date
Legal status
Pending
Application number
CN202111593442.8A
Other languages
Chinese (zh)
Inventor
李莹盈
覃毅哲
朱智斌
石胜明
莫忠婷
Current Assignee
Dongfeng Liuzhou Motor Co Ltd
Original Assignee
Dongfeng Liuzhou Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Dongfeng Liuzhou Motor Co Ltd filed Critical Dongfeng Liuzhou Motor Co Ltd
Priority to CN202111593442.8A
Publication of CN114360291A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a driver danger early warning method, device, equipment and storage medium, belonging to the technical field of automobiles. According to the invention, visible light image data captured by a first camera and infrared image data captured by a second camera are acquired; region fusion is performed on the visible light image data and the infrared image data to obtain fused image data; the fused image data is recognized through a preset target recognition model to obtain target information; behavior prediction is performed on the target object in the target information through a long short-term memory (LSTM) model to obtain the predicted behavior of the target object; and early warning is performed based on the predicted behavior while the fused image data is projected to a preset position to enhance the driver's line of sight. Vehicles and pedestrians merging into the lane are thus accurately predicted, the driver is warned in advance, the fused image data is projected, and driving safety is improved.

Description

Driver danger early warning method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of automobiles, in particular to a driver danger early warning method, device, equipment and storage medium.
Background
With the popularization of vehicles, transportation problems have become more serious: roads are crowded and traffic accidents occur frequently. Analysis of the causes of traffic accidents shows that most are caused by drivers responding too slowly to environmental changes and road conditions, so danger early warning needs to be provided to drivers while they drive.
Existing driver danger early warning methods cannot predict vehicles and pedestrians merging into the lane, so a driver whose line of sight is poor may fail to notice them, causing traffic accidents.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a driver danger early warning method, device, equipment and storage medium, so as to solve the technical problem that the prior art cannot accurately predict vehicles and pedestrians merging into the lane.
In order to achieve the above object, the present invention provides a driver danger early warning method, comprising the steps of:
acquiring visible light image data shot by a first camera and infrared image data shot by a second camera;
performing region fusion on the visible light image data and the infrared image data to obtain fused image data;
identifying the fused image data through a preset target identification model to obtain target information;
performing behavior prediction on the target object in the target information through a long short-term memory (LSTM) model to obtain the predicted behavior of the target object;
and performing early warning based on the predicted behavior, and projecting the fused image data to a preset position so as to enhance the driver's line of sight.
Optionally, the performing region fusion on the visible light image data and the infrared image data to obtain fused image data includes:
dividing the visible light image data and the infrared image data according to a preset number of pixel points to obtain visible light image data pixel points and infrared image data pixel points;
calculating first region energy according to the visible light image data pixel points;
calculating the energy of a second area according to the infrared image data pixel points;
obtaining a fusion region energy based on the first region energy and the second region energy;
and obtaining fused image data according to the energy of the fused region.
Optionally, the target information includes: a target object and a target frame;
the identifying the fused image data through a preset target identification model to obtain target information comprises the following steps:
inputting the fused image data into a deep learning network in a preset target recognition model for feature extraction to obtain a fused image feature map and a target object;
acquiring the sizes of a preset number of reference anchor point frames and target objects;
obtaining the center of the reference anchor frame and the size of the reference anchor frame based on the reference anchor frame;
adjusting the center of the reference anchor point frame and the size of the reference anchor point frame through a preset clustering algorithm according to the center of the reference anchor point frame and the size of the target object;
when the distance between the center of the adjusted reference anchor point frame and the center of the target object is smaller than a preset distance threshold, taking the reference anchor point frame corresponding to the center of the adjusted reference anchor point frame as a frame to be selected;
calculating the proportion of the target object in the frame to be selected, and taking the frame to be selected corresponding to the maximum proportion as a target frame;
and obtaining target information according to the target frame and the target object.
Optionally, the adjusting the center of the reference anchor frame and the size of the reference anchor frame according to the center of the reference anchor frame and the size of the target object by a preset clustering algorithm includes:
acquiring the coordinates of the center of the reference anchor point frame, the width and the height of the target object;
acquiring the offset of the center of the target object;
calculating and adjusting the coordinates of the center of the reference anchor point frame and the width and height of the target object through a preset clustering algorithm according to the offset, the width and the height of the target object;
and obtaining the coordinate of the center of the adjusted reference anchor point frame, the width of the reference anchor point frame and the height of the reference anchor point frame according to the calculation result.
Optionally, the target information further includes: driving information and environmental information of the target object;
the performing of behavior prediction on the target object in the target information through the long short-term memory (LSTM) model to obtain the predicted behavior of the target object comprises the following steps:
inputting the driving information and the environmental information of the target object into a forgetting gate, an input gate and an output gate in the LSTM model for calculation to obtain a hidden state;
and calculating the hidden state through a preset logistic regression function to obtain the predicted behavior of the target object.
Optionally, the inputting of the driving information and the environmental information of the target object into a forgetting gate, an input gate and an output gate in the LSTM model for calculation to obtain the hidden state includes:
acquiring a weight matrix and an offset of each neural network layer in the LSTM model;
inputting the driving information and the environmental information of the target object into the forgetting gate of the LSTM model for calculation based on the weight matrix and the offset, to obtain the long-term memory data from the previous moment that needs to be retained;
inputting the driving information and the environmental information of the target object into the input gate of the LSTM model for calculation based on the weight matrix and the offset, to obtain the currently input long-term memory data to be stored;
obtaining the long-term memory data at the current moment according to the currently input long-term memory data to be stored and the long-term memory data from the previous moment that needs to be retained;
and calculating the long-term memory data at the current moment through an output gate to obtain the hidden state at the current moment.
Optionally, the performing early warning based on the predicted behavior and projecting the fused image data to a preset position to enhance the driver's sight line includes:
and when the environmental information indicates a crossroad and the visibility is smaller than a preset range, performing early warning on the predicted behavior of the target object, and projecting the fused image data onto the front windshield display of the vehicle so as to enhance the driver's line of sight.
In addition, in order to achieve the above object, the present invention further provides a driver danger early warning device, including:
the acquisition module is used for acquiring visible light image data shot by the first camera and infrared image data shot by the second camera;
the fusion module is used for carrying out regional fusion on the visible light image data and the infrared image data to obtain fused image data;
the identification module is used for identifying the fused image data through a preset target identification model to obtain target information;
the prediction module is used for performing behavior prediction on the target object in the target information through a long short-term memory (LSTM) model to obtain the predicted behavior of the target object;
and the early warning module is used for early warning based on the predicted behavior and projecting the fused image data to a preset position so as to enhance the sight of the driver.
In addition, in order to achieve the above object, the present invention further provides a driver danger early warning apparatus, including: a memory, a processor and a driver hazard warning program stored on the memory and executable on the processor, the driver hazard warning program configured to implement the steps of the driver hazard warning method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium having a driver danger early warning program stored thereon, wherein the driver danger early warning program, when executed by a processor, implements the steps of the driver danger early warning method as described above.
According to the invention, visible light image data captured by a first camera and infrared image data captured by a second camera are acquired; region fusion is performed on the visible light image data and the infrared image data to obtain fused image data; the fused image data is recognized through a preset target recognition model to obtain target information; behavior prediction is performed on the target object in the target information through the LSTM model to obtain the predicted behavior of the target object; and early warning is performed based on the predicted behavior while the fused image data is projected to a preset position to enhance the driver's line of sight. Vehicles and pedestrians merging into the lane are thus accurately predicted, the driver is warned in advance, the fused image data is projected, and driving safety is improved.
Drawings
FIG. 1 is a schematic structural diagram of a driver danger early warning device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a first embodiment of a driver danger early warning method according to the present invention;
FIG. 3 is a flowchart illustrating a driver danger early warning method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a driver danger early warning method according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a driver danger early warning method according to a fourth embodiment of the present invention;
FIG. 6 is a flowchart illustrating a fifth embodiment of the driver danger early warning method according to the present invention;
FIG. 7 is a schematic overall flowchart of the driver danger warning method according to this embodiment;
FIG. 8 is a block diagram showing the structure of the first embodiment of the driver danger early warning apparatus of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a driver danger warning device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the driver danger early warning apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed Random Access Memory (RAM), or a Non-Volatile Memory (NVM) such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the driver hazard warning device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a driver hazard warning program.
In the driver danger early warning apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 of the driver danger early warning apparatus of the present invention may be disposed in the driver danger early warning apparatus, and the driver danger early warning apparatus calls the driver danger early warning program stored in the memory 1005 through the processor 1001 and executes the driver danger early warning method provided by the embodiment of the present invention.
The embodiment of the invention provides a driver danger early warning method, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of the driver danger early warning method.
In this embodiment, the driver danger early warning method includes the following steps:
step S10: and acquiring visible light image data shot by the first camera and infrared image data shot by the second camera.
It should be noted that the execution subject of this embodiment is a controller capable of implementing the driver danger early warning, or any other device capable of implementing the same or similar functions, which is not limited in this embodiment. The first camera is a visible light camera arranged on the upper part of the front windshield of the vehicle and used to capture visible light image data. The second camera is an infrared camera arranged on the upper part of the front windshield of the vehicle and used to capture infrared image data.
In a specific implementation, the dual camera consisting of the visible light camera and the infrared camera is arranged on the upper part of the front windshield of the vehicle, so that the environment information and vehicle condition information in front of and around the vehicle can be captured clearly under any lighting and weather conditions.
Step S20: and performing region fusion on the visible light image data and the infrared image data to obtain fused image data.
In this embodiment, the region fusion is to fuse the visible light image data and the infrared image data by using a region fusion algorithm to obtain fused image data.
The fusion algorithm generally works as follows: the region features at corresponding points of the visible light image and the infrared image are calculated, the value of each point in the fused image is then selected by comparing the magnitudes of the feature values, and fusion is performed to obtain the region features of the fused image and thus the fused image data.
Step S30: and identifying the fused image data through a preset target identification model to obtain target information.
It should be understood that the preset target recognition model is obtained by collecting a large number of labeled image samples in advance and training on those samples with the YOLOv3 (You Only Look Once v3, a target detector based on a deep convolutional neural network) recognition algorithm.
In this embodiment, the target information includes a type of the target object, frame information of the target object, driving information of the target object, current environmental condition information, and the like.
After the fused image data is obtained, it is input into the preset target recognition model for recognition, so as to obtain the type of the target object, the target frame, the driving information of the target object, and the current environment information.
Step S40: and performing behavior prediction on the target object in the target information through a long-term and short-term memory model to obtain the predicted behavior of the target object.
A long short-term memory (LSTM) model can predict the behavior of the target object from its driving information to obtain the predicted behavior of the target object: whether vehicles on the two sides or at an intersection will merge is predicted from the vehicle conditions captured by the visible light camera and the infrared camera, and an early warning is sent to the driver in advance to avoid accidents.
Step S50: and carrying out early warning based on the predicted behavior, and projecting the fused image data to a preset position to enhance the sight of the driver.
In a specific implementation, the preset position is a display in the windshield in front of the vehicle. When the visibility in the environmental information of the target information is smaller than 100 meters and the predicted behavior of the target object is merging into the lane at a crossroad, early warning is performed on the predicted behavior of the target object, and the environmental road condition information around the vehicle captured by the cameras is projected onto the display in the front windshield. This warns the driver and enhances the driver's line of sight, solving the problem of the driver's view being blocked in bad weather and at night when visibility is low.
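As a minimal sketch of this decision logic, the following illustration shows one possible shape; the function and field names (warn_and_project, hud_display, alarm, merge_into_lane, and so on) are assumptions introduced for illustration rather than names from this disclosure, and only the 100-meter visibility threshold comes from the embodiment.

```python
VISIBILITY_THRESHOLD_M = 100  # visibility limit used in this embodiment

def warn_and_project(target_info, fused_image, hud_display, alarm):
    """Warn the driver and project the fused image when the line of sight is poor."""
    env = target_info["environment"]
    if env["visibility_m"] < VISIBILITY_THRESHOLD_M:
        # Project the fused road-condition image onto the front windshield
        # display to enhance the driver's line of sight.
        hud_display.show(fused_image)
        if env["at_intersection"]:
            for obj in target_info["objects"]:
                if obj["predicted_behavior"] == "merge_into_lane":
                    alarm.trigger(f"{obj['type']} merging into the lane ahead")
```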
In this embodiment, visible light image data captured by a first camera and infrared image data captured by a second camera are acquired; region fusion is performed on them to obtain fused image data; the fused image data is recognized through a preset target recognition model to obtain target information; behavior prediction is performed on the target object in the target information through the LSTM model to obtain the predicted behavior of the target object; and early warning is performed based on the predicted behavior while the fused image data is projected to a preset position to enhance the driver's line of sight. Vehicles and pedestrians merging into the lane are thus accurately predicted, the driver is warned in advance, the fused image data is projected, and driving safety is improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating a driver danger early warning method according to a second embodiment of the present invention.
Based on the first embodiment, the step S20 of the driver danger early warning method in this embodiment specifically includes:
step S201: and dividing the visible light image data and the infrared image data according to a preset number of pixel points to obtain visible light image data pixel points and infrared image data pixel points.
In a specific implementation, the preset number may be set as required, for example 3 × 3 pixels, 5 × 5 pixels or 7 × 7 pixels; this embodiment takes 3 × 3 pixels as an example. The visible light image data and the infrared image data are divided into 3 × 3 pixel blocks to obtain the visible light image data pixel points and the infrared image data pixel points.
Step S202: and calculating the energy of the first region according to the visible light image data pixel points.
It should be understood that after the pixels of the visible light image data are obtained, their pixel values are available. The region energy is calculated as:
E(X) = Σ_{(i,j)∈W} X(i, j)^2 (formula 1)
In formula 1, E(X) is the region energy, X(i, j) is the pixel value, and W is the window of pixels around the point. The first region energy is the region energy centered at a point in the visible light image, denoted E_V(x, y): the region energy of the visible light image is obtained from its pixel values X(i, j) according to formula 1, and the region energy E_V(x, y) at a point in the visible light image is selected as the first region energy.
Step S203: and calculating the energy of a second area according to the infrared image data pixel points.
In a specific implementation, as shown in formula 1, after the pixel values X(i, j) of the infrared image are obtained, the region energy of the infrared image is obtained according to formula 1. The second region energy is E_I(x, y), where the point selected in the infrared image is at the same position as the point selected in the visible light image.
Step S204: a fusion zone energy is derived based on the first zone energy and the second zone energy.
It is to be understood that the region energies at the same position in the visible light image and the infrared image are compared, and the larger of the two is taken as the fusion region energy: if the first region energy is greater than the second region energy, the first region energy is used as the fusion region energy; if the second region energy is greater, the second region energy is used.
Step S205: and obtaining fused image data according to the energy of the fused region.
In this embodiment, by comparing the region energy at the same position in the visible light image and the infrared image, the pixel with the larger region energy is put into the fused image, according to the following formula:
I_F(x, y) = I_V(x, y) if E_V(x, y) ≥ E_I(x, y); I_F(x, y) = I_I(x, y) otherwise (formula 2)
In formula 2, I_F(x, y) is the fused image data, I_V(x, y) is the visible light image data, I_I(x, y) is the infrared image data, E_V(x, y) is the first region energy, and E_I(x, y) is the second region energy. When the first region energy is greater than the second, the visible light image data corresponding to the first region energy is used as the fused image data; when the second is greater, the infrared image data corresponding to the second region energy is used. The image with the larger region energy has the better image quality, and taking the better-quality pixels as the fused image data makes the subsequent feature extraction on the fused image data accurate and fast.
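A minimal NumPy sketch of this selection rule follows. It assumes the region energy of formula 1 is the sum of squared pixel values over the window (the exact windowed form is not recoverable from the original formula image) and that both inputs are single-channel arrays of the same shape.

```python
import numpy as np

def region_energy(img, k=3):
    """Region energy per pixel: sum of squared values over a k x k window (formula 1)."""
    pad = k // 2
    sq = np.pad(img.astype(np.float64) ** 2, pad, mode="edge")
    energy = np.empty(img.shape, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            energy[i, j] = sq[i:i + k, j:j + k].sum()
    return energy

def fuse(visible, infrared, k=3):
    """Per pixel, keep the source whose region energy is larger (formula 2)."""
    e_v = region_energy(visible, k)
    e_i = region_energy(infrared, k)
    return np.where(e_v >= e_i, visible, infrared)
```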
In this embodiment, the visible light image data and the infrared image data are divided according to a preset number of pixel points to obtain visible light image data pixel points and infrared image data pixel points; the first region energy is calculated from the visible light image data pixel points; the second region energy is calculated from the infrared image data pixel points; the fusion region energy is obtained based on the first region energy and the second region energy; and the fused image data is obtained according to the fusion region energy. By calculating and comparing the region energies of the visible light image data and the infrared image data, the image with the better quality is taken as the fused image data, so that feature extraction can subsequently be performed on the fused image data more accurately and rapidly.
Referring to fig. 4, fig. 4 is a flowchart illustrating a driver danger early warning method according to a third embodiment of the present invention.
Based on the first embodiment, the step S30 of the driver danger early warning method in this embodiment specifically includes:
the target information includes: a target object and a target frame.
Step S301: and inputting the fused image data into a deep learning network in a preset target recognition model for feature extraction to obtain a fused image feature map and a target object.
It should be noted that the target object includes the kind of object, such as a pedestrian, a vehicle or a non-motor vehicle, and the target frame is a frame that completely contains the target object and identifies and classifies it; the target object may be classified and identified by color and name.
In a specific implementation, the deep learning network in the preset target recognition model is a DarkNet-53 feature extraction network. After the fused image data is obtained, the fused image is scaled to 416x416, that is, each row and each column of the fused image data contains 416 pixels. The DarkNet-53 feature extraction network divides the scaled image into a 13x13 grid, a 26x26 grid and a 52x52 grid, and feature extraction is performed on the images divided into grids of different sizes to obtain the shallow features and deep features of the fused image. In the 13x13 grid each cell contains 32x32 pixels, in the 26x26 grid each cell contains 16x16 pixels, and in the 52x52 grid each cell contains 8x8 pixels; the fewer pixels a cell contains, the deeper the features that can be extracted. The features of the fused image at the different grids are fused to obtain the summarized features, i.e. the fused image feature map, and the target objects in the image are obtained by this feature extraction. For example, if the fused image data contains pedestrians and vehicles, their features can be extracted according to the preset target recognition model to obtain the image feature map and the corresponding pedestrian and vehicle target objects.
Step S302: and acquiring the sizes of the reference anchor point frames and the target objects in a preset number.
It should be understood that after the target object is obtained, the reference anchor frames of the target object under different grids can be predicted according to the size, width and height of the target object; a reference anchor frame is a frame that can completely frame the target object. The proportion of the target object in the fused image, and hence the width and height of the target object in the fused image, can be obtained from the fused image data.
Three predicted frames can be generated for each of the 13x13, 26x26 and 52x52 grids, giving reference anchor frames of different sizes; the frame closest to the size of the target object is then obtained by iteratively adjusting the frames so that they keep approaching the target object.
Step S303: and obtaining the center of the reference anchor frame and the size of the reference anchor frame based on the reference anchor frame.
In a specific implementation, 4 values are predicted for each reference anchor box on each cell by the YOLOv3 algorithm, namely, the size information of the width and height of the reference anchor box and the coordinates of the center of the reference anchor box. When the predicted reference anchor frame is obtained, the center of the reference anchor frame can be obtained according to the width and height of the anchor frame.
Step S304: and adjusting the center of the reference anchor point frame and the size of the reference anchor point frame through a preset clustering algorithm according to the center of the reference anchor point frame and the size of the target object.
For a predicted reference anchor frame, the same loss shifts the position of a large frame only slightly, but can shift the position of a small frame considerably, resulting in poor position regression. Therefore, the reference anchor frame needs to be corrected and adjusted so that it gradually approaches the size of the target object and finally frames the target object completely. Further, resizing the reference anchor frame comprises: acquiring the coordinates of the center of the reference anchor frame and the width and height of the target object; acquiring the offset of the center of the target object; calculating and adjusting the coordinates of the center of the reference anchor frame and the width and height of the target object through a preset clustering algorithm according to the offset and the width and height of the target object; and obtaining the adjusted coordinates of the center of the reference anchor frame, the width of the reference anchor frame and the height of the reference anchor frame according to the calculation result.
It should be noted that the YOLOv3 algorithm can predict the offset of the center of the target object with respect to the upper left corner of the grid cell, and the reference anchor frame is adjusted accordingly from the predicted offset and the width and height of the target object. Let the width and height of the target object be (w, h), the center coordinates of the reference anchor frame be (x, y), the raw predictions for the center and size be (t_x, t_y, t_w, t_h), the offset of the center of the target object (the upper left corner of its grid cell) be (C_x, C_y), and the width and height of the reference anchor frame be (P_w, P_h). The reference anchor frame is then adjusted as follows:
b_x = σ(t_x) + C_x (formula 3)
b_y = σ(t_y) + C_y (formula 4)
b_w = P_w · e^{t_w} (formula 5)
b_h = P_h · e^{t_h} (formula 6)
Here σ is an activation function, e.g. the sigmoid function. t_x is the predicted value for the center abscissa of the reference anchor frame and C_x is the abscissa of the offset; formula 3 gives b_x, the abscissa of the center of the reference anchor frame as it approaches the center of the target object. Likewise, t_y is the predicted value for the center ordinate and C_y is the ordinate of the offset, and formula 4 gives b_y, the ordinate of the center of the reference anchor frame as it approaches the center of the target object. P_w and P_h are the width and height of the reference anchor frame; formulas 5 and 6 give the continuously adjusted width b_w and height b_h of the reference anchor frame.
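Formulas 3-6 translate directly into code; the short sketch below is only a transcription of those equations, with variable names following the formulas.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Adjust one reference anchor frame from the raw predictions (formulas 3-6).

    (tx, ty, tw, th): raw predictions for one anchor frame;
    (cx, cy): offset, i.e. the upper left corner of the grid cell;
    (pw, ph): width and height of the reference anchor frame.
    """
    bx = sigmoid(tx) + cx       # formula 3: adjusted center abscissa
    by = sigmoid(ty) + cy       # formula 4: adjusted center ordinate
    bw = pw * math.exp(tw)      # formula 5: adjusted width
    bh = ph * math.exp(th)      # formula 6: adjusted height
    return bx, by, bw, bh
```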
The preset clustering algorithm is the K-means clustering algorithm. The size and the center coordinates of the adjusted reference anchor frame are calculated iteratively using the K-means strategy, and the reference anchor frame is moved continuously so that it keeps approaching the center of the target object.
Step S305: and when the distance between the center of the adjusted reference anchor point frame and the center of the target object is smaller than a preset distance threshold, taking the reference anchor point frame corresponding to the center of the adjusted reference anchor point frame as a frame to be selected.
In this embodiment, the preset distance threshold is a preset threshold for the distance between the center of the adjusted reference anchor frame and the center of the target object, for example 0.5, 0.3 or 0.1; this embodiment takes 0.1 as an example. The distance between the center of the adjusted reference anchor frame and the center of the target object is calculated from their center coordinates; when the distance is smaller than 0.1, the calculation stops, and the adjusted reference anchor frame obtained at that point is taken as the frame to be selected.
It should be understood that the reference anchor frames generated from different grids differ, and grids of different sizes each generate several reference anchor frames. When calculating with the K-means clustering strategy, the non-conforming frames among the predicted reference anchor frames therefore need to be filtered out continuously until the frame to be selected is obtained; the reference anchor frames are screened with the preset distance threshold to obtain the frame of the best size.
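The disclosure names K-means but not the distance measure. The sketch below uses the 1 − IoU distance over box widths and heights that YOLO-family anchor clustering conventionally uses; that choice, and the function names, are assumptions for illustration.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes and centroids, compared by (width, height) only."""
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    union = boxes[:, None].prod(axis=2) + centroids[None, :].prod(axis=2) - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    """Cluster labeled (width, height) pairs into k reference anchor sizes."""
    boxes_wh = np.asarray(boxes_wh, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centroids = boxes_wh[rng.choice(len(boxes_wh), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the nearest centroid: largest IoU = smallest 1 - IoU.
        assign = np.argmax(iou_wh(boxes_wh, centroids), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centroids[c] = boxes_wh[assign == c].mean(axis=0)
    return centroids
```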
Step S306: and calculating the occupation ratio of the target object in the frame to be selected, and taking the frame to be selected corresponding to the largest occupation ratio as a target frame.
It should be noted that the proportion of the target object in the frame to be selected refers to the share of the frame occupied by the target object, i.e. the probability that the frame to be selected contains the target object; the frame with the highest probability of containing the target object is taken as the target frame. The target frame is the frame predicted by the YOLOv3 algorithm that can completely contain the target object, and at the same time the target object is given a class label.
Step S307: and obtaining target information according to the target frame and the target object.
In this embodiment, the target information includes a target object and a target frame. After the target frame is obtained, it is output through the YOLOv3 algorithm; the target object is labeled by the target frame and its type is marked, so that behavior prediction can subsequently be performed on the target object through the target frame.
In this embodiment, the fused image data is input into the deep learning network in the preset target recognition model for feature extraction to obtain the fused image feature map and the target object, and the target object and its size information are obtained from the combination of its features. A preset number of reference anchor frames and the size of the target object are acquired; the center and size of each reference anchor frame are obtained from the reference anchor frame; the center and size of the reference anchor frame are adjusted through the preset clustering algorithm according to the center of the reference anchor frame and the size of the target object; when the distance between the center of the adjusted reference anchor frame and the center of the target object is smaller than the preset distance threshold, the reference anchor frame corresponding to that center is taken as the frame to be selected. The reference anchor frame is adjusted continuously to obtain the frame closest to the size of the target object; the proportion of the target object in each frame to be selected is calculated, and the frame to be selected with the largest proportion is taken as the target frame. Target information is then obtained from the target frame and the target object: the frame of the best size is screened out so that the frame containing the most of the target object is used as the final output prediction frame, which facilitates fast subsequent behavior prediction for the target object.
Referring to fig. 5, fig. 5 is a flowchart illustrating a driver danger early warning method according to a fourth embodiment of the present invention.
Based on the first embodiment, the step S40 of the driver danger early warning method in this embodiment specifically includes:
the target information further includes: travel information of the target object, and environmental information.
Step S401: and inputting the running information and the environmental information of the target object into a forgetting gate, an input gate and an output gate in the long-short term memory model for calculation to obtain a hidden state.
It should be understood that the target information further includes environmental information collected by the camera and driving information of the target object, the driving information may include straight driving, left lane changing, right lane changing, etc., the driving information of the target object in the target information and the surrounding environmental information are input into an LSTM model for calculation, the LSTM model includes a forgetting gate, an input gate and an output gate, and finally, a hidden state is obtained through calculation.
Step S402: and calculating the hidden state through a preset logistic regression function to obtain the predicted behavior of the target object.
The preset logistic regression function refers to the Softmax logistic regression function. The hidden state is calculated through the Softmax function to obtain the probability of each action the target object may take next; the action with the largest probability is taken as the predicted behavior of the target object, which makes it convenient to perform early warning according to the predicted behavior and remind the driver to pay attention.
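As a small illustration of this step, the sketch below maps a hidden state to a behavior with a linear layer and Softmax; the behavior labels and the linear-layer packaging are assumptions for illustration, not details fixed by the disclosure.

```python
import numpy as np

BEHAVIORS = ["go_straight", "change_lane_left", "change_lane_right", "merge_into_lane"]

def predict_behavior(h_t, W, b):
    """Turn the LSTM hidden state h_t into a predicted behavior via Softmax.

    W: (m, d) weight matrix, b: (m,) bias, with m = len(BEHAVIORS).
    """
    logits = W @ h_t + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return BEHAVIORS[int(np.argmax(probs))], probs  # most probable action wins
```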
When the vehicle is at a crossroad and the visibility judged from the image data captured by the cameras is smaller than the preset range, early warning is performed on the predicted behavior of the target object, and the fused image data is projected onto the front windshield display of the vehicle to enhance the driver's line of sight. The preset range is set according to the specific environment information; this embodiment takes 100 meters as an example. When the visibility reflected by the image is smaller than 100 meters, i.e. the environment is very severe or dark, the image data captured by the cameras is projected onto the front windshield display of the vehicle to enhance the driver's line of sight. At the same time, when the target object is predicted to merge at the intersection, early warning is performed on the predicted behavior of the target object to remind the driver. The vehicle-mounted controller can control the vehicle's warning device to raise an alarm; by warning the driver in advance, the driver is reminded to pay attention so as to avoid accidents.
In this embodiment, the driving information and the environmental information of the target object are input into the forgetting gate, the input gate and the output gate in the LSTM model for calculation to obtain the hidden state, and the hidden state is calculated through the preset logistic regression function to obtain the predicted behavior of the target object. Behavior is thus predicted accurately from the driving information and the environmental information of the target object, and the driver is warned according to the predicted behavior, improving driving safety.
Referring to fig. 6, fig. 6 is a flowchart illustrating a driver danger early warning method according to a fifth embodiment of the present invention. Based on the fourth embodiment, the step S401 of the driver danger early warning method in this embodiment specifically includes:
step S411: and acquiring a weight matrix and an offset of each neural network layer in the long-term and short-term memory model.
It should be understood that the weight matrix and the offset are obtained by model training in an earlier stage: the labeled training set images are input and trained on to obtain the trained LSTM model, from which the weight matrix and the offset of each neural network layer can be obtained.
In training, the loss function J(θ) is defined as the negative log-likelihood of the true label:
J(θ) = −Σ_{i=1}^{m} t_i log y_i + λ‖θ‖^2
where m is the number of target classes, i.e. the number of behaviors the vehicle may perform; t is the true label represented as a one-hot vector, i.e. the representation of the true behavior in the formula; y is the estimated probability of each behavior class produced by the model; λ is a regularization hyper-parameter that is adjusted continuously during training; and θ is the parameter set, representing the weights W and the biases b. Different parameters are tried during training, and the parameter set that minimizes the loss function is kept as the final parameters for subsequent behavior prediction.
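Under this reading of the loss (one-hot label, Softmax output, L2 regularization on θ, reconstructed from the surrounding definitions since the formula image is garbled), a sketch is:

```python
import numpy as np

def loss_j(t, y, theta, lam):
    """Negative log-likelihood of the true label plus L2 regularization.

    t: one-hot true-behavior vector of length m; y: Softmax probabilities;
    theta: flattened parameters (weights W and biases b); lam: lambda.
    """
    nll = -np.sum(t * np.log(y + 1e-12))   # small epsilon guards log(0)
    return float(nll + lam * np.sum(theta ** 2))
```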
Step S412: and inputting the running information and the environment information of the target object into a forgetting gate in a long-term and short-term memory model for calculation based on the weight matrix and the offset to obtain long-term memory data of the last moment needing to be reserved.
In a specific implementation, the forgetting gate is used to control which historical information stored in the hidden layer node at the previous moment is kept. According to the hidden state h_{t-1} of the hidden layer at the previous moment and the input x_t of the node at the current moment, the forgetting gate computes values between 0 and 1 to determine the information to retain and to discard, where 1 indicates complete retention and 0 indicates complete discarding. Through the processing of the forgetting gate, the long-term memory of the hidden layer, i.e. the historical information, can be processed selectively. The forgetting gate is calculated as follows:
f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f) (formula 7)
In formula 7, f_t is the output of the forgetting gate, i.e. the long-term memory data from the previous moment that needs to be retained; σ is the Sigmoid activation function; x_t is the target information input at the current moment; W_{xf}, W_{hf} and W_{cf} are the weight matrices of the forgetting-gate neural network layer; b_f is the offset of the forgetting gate; h_{t-1} is the hidden state from the previous moment received at the current moment t; and c_{t-1} is the long-term memory at time t−1. At the initial moment, h_{t-1} and c_{t-1} are both 0. The Sigmoid activation function sits in the neural network and filters the information to obtain data suitable for participating in the network operation.
Step S413: and inputting the running information and the environmental information of the target object into an input gate in a long-term and short-term memory model for calculation based on the weight matrix and the offset to obtain long-term memory data to be saved in the current input.
It should be noted that the input gate is used to control the input of the long-term memory of the current hidden layer node: it determines whether the input information x_t is written into the long-term memory c_t updated to the current moment, i.e. which parts of the input are worth using and retaining. The output of the input gate is a Sigmoid output between 0 and 1, with 1 indicating complete retention and 0 indicating complete discarding.
The calculation of the input gate is related to the hidden state at the previous time and the currently input target information, and if there is no hidden state at the initial time, the value of the hidden state is 0. The calculation formula for the input gate is:
i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i) (formula 8)
In formula 8, i_t is the long-term memory data to be stored from the input gate; σ is the Sigmoid activation function; x_t is the target information; W_{xi}, W_{hi} and W_{ci} are the weight matrices of the neural network layer; b_i is the offset; h_{t-1} is the hidden state from the previous moment received at the current moment t; and c_{t-1} is the long-term memory at time t−1. At the initial moment, h_{t-1} and c_{t-1} are both 0.
Step S414: and obtaining the long-term memory data of the current moment according to the long-term memory data to be stored in the current input and the long-term memory data of the last moment needing to be reserved.
In a specific implementation, the long-term memory data at the current moment is calculated as follows:
c̃_t = tanh(W_{xc} x_t + W_{hc} h_{t-1} + W_{cc} c_{t-1} + b_c) (formula 9)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t (formula 10)
In formula 9, c̃_t is the new candidate value vector at the current moment, created using the hyperbolic tangent function tanh; x_t is the target information; W_{xc}, W_{hc} and W_{cc} are the weight matrices of the neural network layer; and b_c is the offset. In formula 10, i_t is the long-term memory data to be stored from the input gate, f_t is the long-term memory data from the previous moment retained by the forgetting gate, c_{t-1} is the long-term memory at time t−1, and ⊙ denotes element-wise multiplication; c_t is the new long-term memory updated from the previous long-term memory c_{t-1}, i.e. the long-term memory data at the current moment.
Step S415: and calculating the long-term memory data at the current moment through an output gate to obtain the hidden state at the current moment.
It should be noted that the output gate is used to control the output of the current hidden layer node and to decide whether it is output to the next hidden layer or to the output layer. Through the control of the output gate, the long-term memory can be focused on the applicable information. Its states are likewise between 0 and 1, and the control function of the output gate acts on the current long-term memory c_t to obtain the hidden state at the current moment. The specific calculation is as follows:
O_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o) (formula 11)
h_t = O_t ⊙ tanh(c_t) (formula 12)
In formula 11, O_t is the output of the Sigmoid layer that determines which parts of the long-term memory c_t are output; σ is the Sigmoid activation function; x_t is the target information; W_{xo}, W_{ho} and W_{co} are the weight matrices of the neural network layer; and b_o is the offset. In formula 12, the long-term memory c_t is processed by tanh and multiplied by O_t, finally outputting the hidden state h_t at the current moment.
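Formulas 7-12 together describe one peephole-style LSTM step. The NumPy sketch below transcribes them, keeping the W_{cc} term that the text lists for formula 9; the parameter dictionary p is an illustrative packaging of the weight matrices and offsets, not part of the disclosure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the cell defined by formulas 7-12.

    x_t: driving/environment feature vector at time t; p: dict of weight
    matrices W_* and offsets b_* named after the formulas.
    """
    f_t = sigmoid(p["Wxf"] @ x_t + p["Whf"] @ h_prev + p["Wcf"] @ c_prev + p["bf"])    # formula 7
    i_t = sigmoid(p["Wxi"] @ x_t + p["Whi"] @ h_prev + p["Wci"] @ c_prev + p["bi"])    # formula 8
    c_hat = np.tanh(p["Wxc"] @ x_t + p["Whc"] @ h_prev + p["Wcc"] @ c_prev + p["bc"])  # formula 9
    c_t = f_t * c_prev + i_t * c_hat                                                   # formula 10
    o_t = sigmoid(p["Wxo"] @ x_t + p["Who"] @ h_prev + p["Wco"] @ c_t + p["bo"])       # formula 11
    h_t = o_t * np.tanh(c_t)                                                           # formula 12
    return h_t, c_t
```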
In the specific implementation, after the hidden state at the current moment is obtained, an attention mechanism is introduced: the hidden state h_t at each moment is recorded to form the matrix H = (h_1, h_2, h_3, ..., h_t), and the output vector r of the attention mechanism is obtained as a weighted sum of the hidden states, calculated as follows:
M = tanh(H) (formula 13)
α = softmax(w^T M) (formula 14)
r = H α^T (formula 15)
In formula 13, H is the hidden-state matrix and M is the intermediate vector obtained by applying tanh to H. In formula 14, softmax is the activation function and w^T is a parameter vector trained in the LSTM model; together they give the attention vector α. In formula 15, the output vector r of the attention mechanism is obtained from the attention vector α and the hidden-state matrix H. After the output vector r is calculated, the final representation h^* is obtained from r and gives the final predicted behavior information: the probability of each action the target may take next is calculated, the action with the largest probability is taken as the final predicted behavior, and early warning is performed when a target object is about to merge.
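A compact sketch of formulas 13-15 follows; the column layout of H and the final mapping from r to h^* are assumptions, since the original text garbles that last step.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(H, w):
    """Weighted sum of hidden states (formulas 13-15).

    H: (d, T) matrix whose columns are the hidden states h_1 ... h_T;
    w: (d,) parameter vector trained with the LSTM model.
    """
    M = np.tanh(H)            # formula 13
    alpha = softmax(w @ M)    # formula 14: attention weights over the T moments
    r = H @ alpha             # formula 15: output vector of the attention mechanism
    return r                  # a final representation h* is then derived from r
```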
As shown in fig. 7, fig. 7 is a schematic view of the overall flow of the driver danger early warning method in this embodiment. Images are sensed by the visible light camera. When the visibility of the environment in the sensed image is less than 100 meters, i.e. road conditions beyond 100 meters cannot be seen, the environment around the vehicle needs to be projected: the image data captured by the infrared camera and the image data captured by the visible light camera are fused, and target recognition and behavior prediction on the fused image yield the target object and its predicted behavior. When a vehicle or pedestrian target object is predicted to merge, the warning device is controlled to issue a warning to remind the driver; and when the visibility in the environmental information is less than 100 meters, it is determined that projection is needed, and the fused image is projected onto the front windshield display of the vehicle in real time, enhancing the driver's line of sight.
In this embodiment, the weight matrix and the offset of each neural network layer in the LSTM model are acquired; the driving information and the environmental information of the target object are input into the forgetting gate of the LSTM model for calculation based on the weight matrix and the offset to obtain the long-term memory data from the previous moment that needs to be retained; the driving information and the environmental information are input into the input gate of the LSTM model for calculation based on the weight matrix and the offset to obtain the currently input long-term memory data to be stored; the long-term memory data at the current moment is obtained from the currently input long-term memory data to be stored and the retained long-term memory data from the previous moment; and the long-term memory data at the current moment is calculated through the output gate to obtain the hidden state at the current moment. The possibility that target objects on the two sides of the vehicle merge into the driving lane is thus predicted through the LSTM model, solving the problem that the driver cannot clearly see the target objects when the line of sight is blocked.
Referring to fig. 8, fig. 8 is a block diagram illustrating a first embodiment of a driver danger early warning apparatus according to the present invention.
As shown in fig. 8, the driver danger early warning apparatus according to the embodiment of the present invention includes:
the acquiring module 10 is configured to acquire visible light image data captured by the first camera and infrared image data captured by the second camera.
And the fusion module 20 is configured to perform region fusion on the visible light image data and the infrared image data to obtain fused image data.
And the identification module 30 is configured to identify the fused image data through a preset target identification model to obtain target information.
And the prediction module 40 is configured to perform behavior prediction on the target object in the target information through a long short-term memory (LSTM) model to obtain the predicted behavior of the target object.
And the early warning module 50 is used for early warning based on the predicted behavior and projecting the fused image data to a preset position so as to enhance the sight of the driver.
In this embodiment, visible light image data captured by the first camera and infrared image data captured by the second camera are acquired; region fusion is performed on them to obtain fused image data; the fused image data is recognized through the preset target recognition model to obtain target information; behavior prediction is performed on the target object in the target information through the LSTM model to obtain the predicted behavior of the target object; and early warning is performed based on the predicted behavior while the fused image data is projected to a preset position to enhance the driver's line of sight. Vehicles and pedestrians merging into the lane are thus accurately predicted, the driver is warned in advance, the fused image data is projected, and driving safety is improved.
In an embodiment, the fusion module 20 is further configured to divide the visible light image data and the infrared image data according to a preset number of pixel points to obtain visible light image data pixel points and infrared image data pixel points; calculating first region energy according to the visible light image data pixel points; calculating the energy of a second area according to the infrared image data pixel points; obtaining a fusion region energy based on the first region energy and the second region energy; and obtaining fused image data according to the energy of the fused region.
In one embodiment, the target information includes a target object and a target frame, and the identification module 30 is further configured to input the fused image data into the deep learning network in the preset target recognition model for feature extraction to obtain a fused image feature map and a target object; acquire a preset number of reference anchor frames and the size of the target object; obtain the center of the reference anchor frame and the size of the reference anchor frame based on the reference anchor frame; adjust the center of the reference anchor frame and the size of the reference anchor frame through a preset clustering algorithm according to the center of the reference anchor frame and the size of the target object; when the distance between the center of the adjusted reference anchor frame and the center of the target object is smaller than a preset distance threshold, take the reference anchor frame corresponding to the center of the adjusted reference anchor frame as a frame to be selected; calculate the proportion of the target object in the frame to be selected, and take the frame to be selected corresponding to the largest proportion as the target frame; and obtain target information according to the target frame and the target object.
In an embodiment, the identification module 30 is further configured to acquire the coordinates of the center of the reference anchor box and the width and height of the target object; acquire the offset of the center of the target object; calculate and adjust the center coordinates, width and height of the reference anchor box through the preset clustering algorithm according to the offset and the width and height of the target object; and obtain the adjusted center coordinates, width and height of the reference anchor box from the calculation result.
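The patent leaves the exact adjustment formula unspecified, but an offset-driven update of anchor centers and sizes reads like standard YOLO-style box decoding; the sketch below uses that convention as one plausible interpretation (the variable names and the sigmoid/exponential decoding are assumptions, not taken from the patent).

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def decode_box(cx_cell: float, cy_cell: float,
               t_x: float, t_y: float, t_w: float, t_h: float,
               p_w: float, p_h: float):
    """YOLO-style decoding: predicted offsets (t_x, t_y) shift the anchor
    center within its grid cell at (cx_cell, cy_cell), while (t_w, t_h)
    rescale the anchor's prior width and height (p_w, p_h)."""
    b_x = cx_cell + sigmoid(t_x)   # adjusted center x
    b_y = cy_cell + sigmoid(t_y)   # adjusted center y
    b_w = p_w * math.exp(t_w)      # adjusted width
    b_h = p_h * math.exp(t_h)      # adjusted height
    return b_x, b_y, b_w, b_h
```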
In one embodiment, the target information further includes driving information and environmental information of the target object, and the prediction module 40 is further configured to input the driving information and the environmental information of the target object to a forget gate, an input gate and an output gate in the LSTM model for calculation to obtain a hidden state; and calculate the hidden state through a preset logistic regression function to obtain the predicted behavior of the target object.
In an embodiment, the prediction module 40 is further configured to acquire the weight matrix and bias of each neural network layer in the LSTM model; input the driving information and the environmental information of the target object to the forget gate, based on the weight matrix and bias, to obtain the portion of the previous moment's long-term memory that needs to be retained; input the same information to the input gate, based on the weight matrix and bias, to obtain the long-term memory to be stored from the current input; combine these two quantities to obtain the long-term memory at the current moment; and pass the long-term memory at the current moment through the output gate to obtain the hidden state at the current moment.
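A minimal NumPy sketch of one LSTM step and the logistic-regression readout described above. It assumes the driving and environment features are concatenated into a single input vector and that the predicted behavior is a lane-entry probability; both are assumptions, since the patent does not specify the feature encoding or the output labels.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W and b hold the weight matrix and bias of each
    gate under the keys 'f' (forget), 'i' (input), 'c' (candidate memory)
    and 'o' (output); x is the current feature vector."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(W['f'] @ z + b['f'])        # forget gate: old memory to retain
    i = sigmoid(W['i'] @ z + b['i'])        # input gate: new memory to store
    c_cand = np.tanh(W['c'] @ z + b['c'])   # candidate memory from current input
    c = f * c_prev + i * c_cand             # long-term memory at the current moment
    o = sigmoid(W['o'] @ z + b['o'])        # output gate
    h = o * np.tanh(c)                      # hidden state at the current moment
    return h, c

def predict_behavior(h, w_out, b_out):
    """Logistic-regression readout: probability that the target object will
    enter the lane (assumed label), computed from the hidden state."""
    return sigmoid(w_out @ h + b_out)
```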
In an embodiment, the early warning module 50 is further configured to, when the environmental information indicates an intersection and the visibility is below a preset threshold, issue an early warning for the predicted behavior of the target object and project the fused image data onto the front windshield display of the vehicle to enhance the driver's line of sight.
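The trigger condition of this embodiment reduces to a simple check; the numeric thresholds below are illustrative placeholders rather than values from the patent.

```python
def warning_decision(environment: str, visibility_m: float, p_enter_lane: float,
                     vis_thresh_m: float = 100.0, p_thresh: float = 0.5):
    """Return (warn, project): whether to warn the driver about the predicted
    behavior, and whether to project the fused frame onto the windshield
    display. Thresholds are illustrative assumptions."""
    low_vis_intersection = environment == "intersection" and visibility_m < vis_thresh_m
    warn = low_vis_intersection and p_enter_lane > p_thresh
    project = low_vis_intersection
    return warn, project
```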
In addition, to achieve the above object, the present invention further provides a driver danger early warning apparatus, including: a memory, a processor, and a driver danger early warning program stored on the memory and executable on the processor, the driver danger early warning program being configured to implement the steps of the driver danger early warning method described above.
Since the driver danger early warning apparatus adopts all the technical solutions of the above embodiments, it achieves at least all of the beneficial effects brought by those technical solutions, which are not repeated here.
In addition, an embodiment of the present invention further provides a storage medium, where a driver danger early warning program is stored on the storage medium, and the driver danger early warning program, when executed by a processor, implements the steps of the driver danger early warning method described above.
Since the storage medium adopts all the technical solutions of the above embodiments, it achieves at least all of the beneficial effects brought by those technical solutions, which are not repeated here.
It should be understood that the above is only an example and does not limit the technical solution of the present invention in any way; in specific applications, those skilled in the art may configure it as needed, and the present invention is not limited thereto.
It should be noted that the above-described workflows are only illustrative and do not limit the scope of the present invention; in practical applications, those skilled in the art may select some or all of them according to actual needs to achieve the purpose of the solution of the embodiments, and the present invention is not limited herein.
In addition, for technical details not described in detail in this embodiment, reference may be made to the driver danger early warning method provided in any embodiment of the present invention, which is not repeated here.
Further, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (e.g., read-only memory (ROM)/RAM, magnetic disk, or optical disk) and including several instructions for causing a terminal device (e.g., a mobile phone, computer, server, or network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structural or process modifications made using the contents of the present specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of the present invention.

Claims (10)

1. A driver danger early warning method, characterized by comprising:
acquiring visible light image data captured by a first camera and infrared image data captured by a second camera;
performing region fusion on the visible light image data and the infrared image data to obtain fused image data;
identifying the fused image data through a preset target recognition model to obtain target information;
performing behavior prediction on the target object in the target information through a long short-term memory (LSTM) model to obtain the predicted behavior of the target object;
and performing early warning based on the predicted behavior and projecting the fused image data to a preset position so as to enhance the driver's line of sight.
2. The driver danger early warning method according to claim 1, wherein the performing region fusion on the visible light image data and the infrared image data to obtain fused image data comprises:
dividing the visible light image data and the infrared image data into regions of a preset number of pixel points to obtain visible light image pixel points and infrared image pixel points;
calculating first region energy according to the visible light image data pixel points;
calculating the energy of a second area according to the infrared image data pixel points;
obtaining a fusion region energy based on the first region energy and the second region energy;
and obtaining fused image data according to the energy of the fused region.
3. The driver danger early warning method according to claim 1, wherein the target information comprises: a target object and a target frame;
the identifying the fused image data through the preset target recognition model to obtain the target information comprises:
inputting the fused image data into a deep learning network in the preset target recognition model for feature extraction to obtain a fused image feature map and a target object;
acquiring a preset number of reference anchor boxes and the size of the target object;
obtaining the center of each reference anchor box and the size of each reference anchor box;
adjusting the center and the size of each reference anchor box through a preset clustering algorithm according to the center of the reference anchor box and the size of the target object;
when the distance between the center of an adjusted reference anchor box and the center of the target object is smaller than a preset distance threshold, taking that reference anchor box as a candidate box;
calculating the proportion of the target object in each candidate box, and taking the candidate box corresponding to the maximum proportion as the target frame;
and obtaining target information according to the target frame and the target object.
4. The driver danger early warning method according to claim 3, wherein the adjusting the center and the size of each reference anchor box through a preset clustering algorithm according to the center of the reference anchor box and the size of the target object comprises:
acquiring the coordinates of the center of the reference anchor box and the width and height of the target object;
acquiring the offset of the center of the target object;
calculating and adjusting the coordinates of the center of the reference anchor box and the width and height through the preset clustering algorithm according to the offset and the width and height of the target object;
and obtaining the adjusted coordinates of the center of the reference anchor box, the width of the reference anchor box and the height of the reference anchor box according to the calculation result.
5. The driver danger early warning method according to claim 1, wherein the target information further comprises: driving information and environmental information of the target object;
the performing behavior prediction on the target object in the target information through the LSTM model to obtain the predicted behavior of the target object comprises:
inputting the driving information and the environmental information of the target object into a forget gate, an input gate and an output gate in the LSTM model for calculation to obtain a hidden state;
and calculating the hidden state through a preset logistic regression function to obtain the predicted behavior of the target object.
6. The driver danger early warning method according to claim 5, wherein the inputting the driving information and the environmental information of the target object into a forget gate, an input gate and an output gate in the LSTM model for calculation to obtain the hidden state comprises:
acquiring a weight matrix and a bias of each neural network layer in the LSTM model;
inputting the driving information and the environmental information of the target object into the forget gate for calculation based on the weight matrix and the bias, to obtain the long-term memory data from the previous moment that needs to be retained;
inputting the driving information and the environmental information of the target object into the input gate for calculation based on the weight matrix and the bias, to obtain the long-term memory data to be stored from the current input;
obtaining the long-term memory data at the current moment according to the long-term memory data to be stored from the current input and the long-term memory data from the previous moment that needs to be retained;
and calculating the long-term memory data at the current moment through the output gate to obtain the hidden state at the current moment.
7. The driver danger early warning method according to claim 5, wherein the performing early warning based on the predicted behavior and projecting the fused image data to a preset position to enhance the driver's line of sight comprises:
when the environmental information indicates an intersection and the visibility is below a preset threshold, performing early warning on the predicted behavior of the target object, and projecting the fused image data onto a front windshield display of the vehicle to enhance the driver's line of sight.
8. A driver danger early warning device, characterized in that the driver danger early warning device comprises:
the acquisition module is used for acquiring visible light image data captured by a first camera and infrared image data captured by a second camera;
the fusion module is used for carrying out region fusion on the visible light image data and the infrared image data to obtain fused image data;
the identification module is used for identifying the fused image data through a preset target recognition model to obtain target information;
the prediction module is used for performing behavior prediction on the target object in the target information through a long short-term memory (LSTM) model to obtain the predicted behavior of the target object;
and the early warning module is used for performing early warning based on the predicted behavior and projecting the fused image data to a preset position so as to enhance the driver's line of sight.
9. A driver danger early warning apparatus, characterized by comprising: a memory, a processor, and a driver danger early warning program stored on the memory and executable on the processor, the driver danger early warning program being configured to implement the driver danger early warning method according to any one of claims 1 to 7.
10. A storage medium having stored thereon a driver danger early warning program which, when executed by a processor, implements the driver danger early warning method according to any one of claims 1 to 7.
CN202111593442.8A 2021-12-23 2021-12-23 Driver danger early warning method, device, equipment and storage medium Pending CN114360291A (en)

Priority Applications (1)

Application Number: CN202111593442.8A; Priority Date: 2021-12-23; Filing Date: 2021-12-23; Title: Driver danger early warning method, device, equipment and storage medium


Publications (1)

Publication Number: CN114360291A; Publication Date: 2022-04-15

Family ID: 81101860

Family Applications (1)

Application Number: CN202111593442.8A; Title: Driver danger early warning method, device, equipment and storage medium; Priority Date: 2021-12-23; Filing Date: 2021-12-23

Country Status (1): CN



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170240109A1 (en) * 2016-02-23 2017-08-24 Toyota Jidosha Kabushiki Kaisha Display device
CN107862287A (en) * 2017-11-08 2018-03-30 吉林大学 A kind of front zonule object identification and vehicle early warning method
CN109272050A (en) * 2018-09-30 2019-01-25 北京字节跳动网络技术有限公司 Image processing method and device
CN110443776A (en) * 2019-08-07 2019-11-12 中国南方电网有限责任公司超高压输电公司天生桥局 A kind of Registration of Measuring Data fusion method based on unmanned plane gondola
CN110667576A (en) * 2019-10-18 2020-01-10 北京百度网讯科技有限公司 Method, apparatus, device and medium for controlling passage of curve in automatically driven vehicle
CN110807385A (en) * 2019-10-24 2020-02-18 腾讯科技(深圳)有限公司 Target detection method and device, electronic equipment and storage medium
CN113221928A (en) * 2020-01-21 2021-08-06 海信集团有限公司 Clothing classification information display device, method and storage medium
CN111310861A (en) * 2020-03-27 2020-06-19 西安电子科技大学 License plate recognition and positioning method based on deep neural network
CN112767357A (en) * 2021-01-20 2021-05-07 沈阳建筑大学 Yolov 4-based concrete structure disease detection method
CN113283367A (en) * 2021-06-08 2021-08-20 南通大学 Safety detection method for visual blind area of underground garage in low-visibility environment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909254A (en) * 2022-12-27 2023-04-04 钧捷智能(深圳)有限公司 DMS system based on camera original image and image processing method thereof
CN115909254B (en) * 2022-12-27 2024-05-10 钧捷智能(深圳)有限公司 DMS system based on camera original image and image processing method thereof

Similar Documents

Publication Publication Date Title
CN112418268B (en) Target detection method and device and electronic equipment
CN110188807B (en) Tunnel pedestrian target detection method based on cascading super-resolution network and improved Faster R-CNN
CN111767878B (en) Deep learning-based traffic sign detection method and system in embedded device
JP2019061658A (en) Area discriminator training method, area discrimination device, area discriminator training device, and program
CN109190488B (en) Front vehicle door opening detection method and device based on deep learning YOLOv3 algorithm
KR20210013216A (en) Multi-level target classification and traffic sign detection method and apparatus, equipment, and media
CN107944351B (en) Image recognition method, image recognition device and computer-readable storage medium
US11087477B2 (en) Trajectory prediction
CN109800682B (en) Driver attribute identification method and related product
CN111667512A (en) Multi-target vehicle track prediction method based on improved Kalman filtering
CN114418895A (en) Driving assistance method and device, vehicle-mounted device and storage medium
CN108830131B (en) Deep learning-based traffic target detection and ranging method
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN114820644A (en) Method and apparatus for classifying pixels of an image
Peng et al. Real-time illegal parking detection algorithm in urban environments
CN116630920A (en) Improved lane line type identification method of YOLOv5s network model
CN114360291A (en) Driver danger early warning method, device, equipment and storage medium
Chan et al. The ethical dilemma when (not) setting up cost-based decision rules in semantic segmentation
CN117935388A (en) Expressway charging monitoring system and method based on networking
Hasan Yusuf et al. Real-time car parking detection with deep learning in different lighting scenarios
CN108873097B (en) Safety detection method and device for parking of vehicle carrying plate in unmanned parking garage
CN114360056B (en) Door opening early warning method, device, equipment and storage medium
CN116229443A (en) Vehicle license plate detection method, device, computer equipment and storage medium
Abdi et al. Driver information system: A combination of augmented reality and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220415