CN117681766A - Vehicle collision early warning method, device, equipment, storage medium and program product - Google Patents

Info

Publication number
CN117681766A
CN117681766A CN202311668054.0A
Authority
CN
China
Prior art keywords
vehicle
driving
early warning
image
scene
Prior art date
Legal status
Pending
Application number
CN202311668054.0A
Other languages
Chinese (zh)
Inventor
陈建儒
Current Assignee
Ningbo Lutes Robotics Co ltd
Original Assignee
Ningbo Lutes Robotics Co ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Lutes Robotics Co ltd
Priority to CN202311668054.0A
Publication of CN117681766A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application relates to a vehicle collision early warning method, a vehicle collision early warning apparatus, a computer device, a storage medium, and a computer program product. The method comprises the following steps: acquiring a driving image while the vehicle is driving; predicting the driving image through a pre-trained perception model to obtain a scene prediction result; when the scene prediction result indicates that the vehicle is in an early warning scene, determining early warning information according to the driving state of the vehicle; and controlling a projection device on the vehicle to project the early warning information onto the ground over which the vehicle is driving. With this method, the early warning information is projected onto the driving surface by the projection device, so the driver's intention is displayed visually on the road. The projected information actively prompts nearby pedestrians and other vehicles to yield or wait, reducing the probability of collision between the vehicle and pedestrians or other vehicles and improving driving safety.

Description

Vehicle collision early warning method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of intelligent driving technologies of vehicles, and in particular, to a vehicle collision early warning method, apparatus, computer device, storage medium, and computer program product.
Background
To protect pedestrians and reduce accidents, the radar system of a collision avoidance system (CAS, also called a pre-collision system, forward collision warning system, or collision mitigation system) continuously monitors the vehicle ahead and judges its distance, relative speed, and bearing with respect to the host vehicle. If the host vehicle closes to a dangerous distance while driving and a collision risk exists, the forward collision warning system issues visual and audible alarms to alert the driver and prevent a rear-end collision with the vehicle ahead.
Collision avoidance systems detect impending collisions using radar (all-weather), laser (LIDAR), and cameras (image recognition). However, because of factors such as poor visibility in rain and camera blind spots, such systems are limited in their ability to recognize collision events, so the probability of accidents remains high.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a vehicle collision warning method, apparatus, computer device, storage medium, and computer program product that can reduce the probability of occurrence of an accident.
In a first aspect, the present application provides a vehicle collision warning method. The method comprises the following steps:
acquiring a driving image in the driving process of a vehicle;
predicting the driving image through a pre-trained perception model to obtain a scene prediction result;
when the scene prediction result represents that the vehicle is in an early warning scene, determining early warning information according to the running state of the vehicle;
and controlling a projection device on the vehicle to project the early warning information on the ground on which the vehicle runs.
In one embodiment, the method further comprises:
and controlling the projection device to be in an idle state when the scene prediction result indicates that the vehicle is in a non-early-warning scene.
In one embodiment, the perception model includes a feature extraction network and a scene recognition network, and predicting the driving image through the pre-trained perception model to obtain a scene prediction result includes:
extracting image features of the driving image through a pre-trained feature extraction network;
and predicting, through a pre-trained scene recognition network and based on the image features, the driving scene in which the driving image was captured, to obtain a scene prediction result.
In one embodiment, the feature extraction network includes a target detection sub-model and a lane line detection sub-model, the image features include obstacle features and lane line features, and the extracting, by the pre-trained feature extraction network, the image features of the driving image includes:
extracting obstacle features of the driving image through a pre-trained target detection sub-model;
and extracting the lane line characteristics of the driving image through a pre-trained lane line detection sub-model.
In one embodiment, the determining the early warning information according to the driving state of the vehicle includes:
acquiring driving data of the vehicle;
determining a motion mode of the vehicle based on the driving data;
and determining early warning information corresponding to the motion mode.
In one embodiment, the determining the early warning information corresponding to the motion mode includes:
if the motion mode is linear motion, determining that the early warning information corresponding to the linear motion includes at least one of a straight arrow and prompt text; and a mapping relation exists between the movement speed of the linear motion and the projection length, projection color, and flicker frequency of the straight arrow.
In one embodiment, the method further comprises:
if the motion mode is steering motion, determining that the early warning information corresponding to the steering motion includes at least one of a curved arrow and prompt text; and a mapping relation exists between the steering angle of the steering motion and the bending angle, projection length, projection color, and flicker frequency of the curved arrow.
In one embodiment, the method further comprises:
acquiring training samples, wherein each training sample comprises an input image, image characteristics of the input image and a label; the tag is used for marking a driving scene where the input image is located;
processing the training sample through an initial network to obtain a prediction result of a driving scene where the input image is located;
and adjusting the initial network to train based on the difference between the prediction result and the label, and stopping training when the training stopping condition is reached, so as to obtain a trained scene recognition network.
In a second aspect, the present application further provides a vehicle collision warning device. The device comprises:
the image acquisition module is used for acquiring a running image in the running process of the vehicle;
the scene prediction module is used for predicting the running image through a pre-trained perception model to obtain a scene prediction result;
the early warning determining module is used for determining early warning information according to the running state of the vehicle when the scene prediction result represents that the vehicle is in an early warning scene;
and the early warning module is used for controlling the projection device on the vehicle to project the early warning information on the ground where the vehicle runs.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the vehicle collision early warning method when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle collision warning method described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the vehicle collision warning method described above.
According to the vehicle collision early warning method, apparatus, computer device, storage medium, and computer program product above, a driving image is acquired while the vehicle is driving, and the driving image is predicted through a pre-trained perception model to obtain a scene prediction result. When the scene prediction result indicates that the vehicle is in an early warning scene, early warning information is determined according to the driving state of the vehicle, and a projection device on the vehicle is controlled to project the early warning information onto the ground over which the vehicle is driving. By predicting the driving image, the driving scene of the vehicle is known in real time. When the scene prediction result indicates an early warning scene, the driving state of the vehicle is determined and early warning information matching the current driving scene is projected onto the driving surface by the projection device. The driver's intention is thus displayed visually on the road, and nearby pedestrians or other vehicles are actively prompted to yield or wait, reducing the probability of collision between the vehicle and pedestrians or other vehicles and improving driving safety.
Drawings
FIG. 1 is a diagram of an application environment of a vehicle collision warning method in one embodiment;
FIG. 2 is a flow chart of a vehicle collision warning method according to an embodiment;
FIG. 3 is a schematic view of a mounting position of a projection device on a vehicle in one embodiment;
FIG. 4 is a block diagram of a perception model in one embodiment;
FIG. 5 is a schematic view of a projected position of a straight arrow on a driving surface in one embodiment;
FIG. 6 is a schematic diagram of pre-warning information corresponding to different movement speeds in an embodiment;
FIG. 7 is a schematic view of the projected location of a steering arrow on a driving surface in one embodiment;
FIG. 8 is a schematic diagram of pre-warning information corresponding to different bending angles in an embodiment;
FIG. 9 is a schematic diagram of early warning information of an obstacle detected in one embodiment;
FIG. 10 is a block diagram showing a configuration of a vehicle collision warning apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
If a car or truck runs quietly, pedestrians cannot hear the vehicle approaching from behind, which endangers cyclists and pedestrians. To protect pedestrians and reduce accidents, all new low-emission and electric vehicles must generate a certain level of noise while driving. Therefore, every new four-wheeled electric vehicle requires an Acoustic Vehicle Alerting System (AVAS) that generates an appropriate sound; the AVAS must produce noise when the vehicle moves forward or backward at speeds below 19 km/h.
The radar system of a collision avoidance system (CAS, also known as a pre-crash system, forward collision warning system, or collision mitigation system) monitors the vehicle ahead at all times, determining its distance, relative speed, and bearing with respect to the host vehicle. If the host vehicle closes to a dangerous distance while driving and a collision risk exists, the forward collision warning system issues visual and audible alarms to alert the driver and prevent a rear-end collision with the vehicle ahead.
Collision avoidance systems detect impending collisions using radar (all-weather), laser (LIDAR), and cameras (image recognition). However, because of factors such as poor visibility in rain and camera blind spots, such systems are limited in their ability to recognize collision events, so the probability of accidents remains high.
Therefore, to solve the above problem, embodiments of the present application predict the driving scene of the vehicle from the driving image. When the scene prediction result indicates that the vehicle is in an early warning scene, the driving state of the vehicle is determined and early warning information matching the current driving scene is projected onto the driving surface by a projection device. The driver's intention is thus displayed visually on the road, and nearby pedestrians or other vehicles are actively prompted to yield or wait, reducing the probability of collision between the vehicle and pedestrians or other vehicles and improving driving safety.
The vehicle collision early warning method provided by the embodiments of the application can be applied in the application environment shown in FIG. 1. The vehicle-mounted camera 102 communicates with the vehicle controller 104 through a network. The vehicle-mounted camera 102 captures a driving image while the vehicle is driving and transmits it to the vehicle controller 104; the vehicle controller 104 predicts the driving image through a pre-trained perception model to obtain a scene prediction result; when the scene prediction result indicates that the vehicle is in an early warning scene, the vehicle controller 104 determines early warning information according to the driving state of the vehicle; and the vehicle controller 104 controls a projection device on the vehicle to project the early warning information onto the ground over which the vehicle is driving. The vehicle-mounted camera can be, but is not limited to, a dashcam, a reversing camera, a 360-degree surround-view camera, an in-vehicle monitoring camera, and the like.

In one embodiment, as shown in FIG. 2, a vehicle collision early warning method is provided. Taking its application to the vehicle controller in FIG. 1 as an example, the method includes the following steps:
Step 202, acquiring a driving image in the driving process of the vehicle.
A driving image is an image captured by the vehicle-mounted camera while the vehicle is driving. The driving image includes the vehicle's surroundings, road conditions, other vehicles, traffic signals, road signs, pedestrians, and the like.
Specifically, the vehicle-mounted camera captures a driving image while the vehicle is driving and transmits it to the vehicle controller, which receives the driving image.
And 204, predicting the running image through a pre-trained perception model to obtain a scene prediction result.
The perception model judges the driving scene of the vehicle based on a comprehensive analysis of multiple factors. By fusing and analyzing multiple image features, the perception model can accurately perceive the environment and state around the vehicle and thereby recognize different driving scenes. The perception model may adopt a deep-learning framework such as a convolutional neural network (CNN) or a recurrent neural network (RNN).
In some embodiments, the training process of the perceptual model is as follows: a large amount of running image data is prepared, which contains various driving scenes such as urban roads, highways, mountain roads, rainy and snowy weather, and the like. Meanwhile, marking the position, speed, direction, surrounding vehicles, pedestrians and other information of the vehicle in the running image data; and (3) inputting the marked driving image data into the model for training by using a deep learning frame, and continuously adjusting parameters of the model by using a back propagation algorithm, so that the model can learn the characteristics and rules of different driving scenes better.
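The training loop described above (label driving images, feed them to the model, adjust parameters by backpropagation) can be sketched in miniature. The sketch below is a hypothetical stand-in that substitutes a from-scratch logistic regression for the patent's deep network; it shows the same cycle of forward pass, loss gradient, and parameter update. The 2-D feature vectors, function names, and labels are all illustrative, not taken from the patent.

```python
import math

def train_scene_classifier(samples, epochs=200, lr=0.5):
    """Minimal stand-in for the perception-model training loop.
    samples: list of (feature_vector, label) pairs, label 1 = warning scene."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid forward pass
            g = p - y                             # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]  # parameter update
            b -= lr * g
    return w, b

def predict_scene(params, x):
    """Classify a feature vector with the trained parameters."""
    w, b = params
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "warning" if z > 0 else "non-warning"

# Hypothetical 2-D features: (obstacle density, lane-deviation score)
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
params = train_scene_classifier(data)
```

In the real system the feature vectors would come from the annotated driving images and the update step would be a deep-learning framework's backpropagation rather than this hand-written gradient.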
The scene prediction result refers to a result obtained by predicting a driving scene where the vehicle is located according to a pre-trained perception model. Specifically, the scene prediction results can be classified into two types of early warning scenes and non-early warning scenes. Early warning scenes generally refer to potentially dangerous driving environments, such as road congestion, traffic accidents, bad weather, etc., which may pose a threat to the driving safety of the vehicle. Whereas non-warning scenarios refer to relatively safe driving environments, such as sunny weather, clear roads, etc.
Specifically, the vehicle controller inputs the acquired driving image into a perception model for prediction, the perception model extracts image features in the driving image, and a driving scene where the vehicle is located is predicted according to a rule learned during training.
And 206, determining early warning information according to the running state of the vehicle when the scene prediction result represents that the vehicle is in the early warning scene.
The driving state refers to a movement state of the vehicle on a road. For example, the running state includes information of speed, acceleration, deceleration, running direction, lane position, and the like.
The early warning information is important information for reminding pedestrians or other vehicles to pay attention to the driving state of the current vehicle. The early warning information can be used for representing the driving intention and the driving state of a driver so as to increase driving safety. The warning information may be transmitted in a variety of ways, such as sounds, lights, or images, to draw attention of pedestrians and other vehicles.
Specifically, when the scene prediction result indicates that the vehicle is in an early warning scene, the vehicle controller determines the corresponding early warning information according to the scene prediction result and the vehicle's driving state.
In some embodiments, the projection device is controlled to be in an idle state when the scene prediction results indicate that the vehicle is in a non-pre-warning scene.
Specifically, when the scene prediction result indicates that the vehicle is in a non-early-warning scene, the vehicle controller does not trigger the projection device, leaving it in an idle state.
In this embodiment, keeping the projection device idle in non-early-warning scenes saves energy and reduces the vehicle's energy consumption.
Step 208, controlling a projection device on the vehicle to project the early warning information on the ground on which the vehicle runs.
The projection device is a lighting device mounted on the vehicle that projects patterns, text, and other information onto the driving surface. It can project the required patterns and text clearly and maintain good visibility under different lighting conditions and driving speeds. The driving surface is the ground in the direction of the vehicle's motion, i.e., the ground ahead when moving forward and the ground behind when reversing. FIG. 3 is a schematic view of the mounting positions of the projection devices on the vehicle in one embodiment: one projection device is mounted at each of the front and rear of the vehicle, and their projection ranges are shown in FIG. 3.
According to the method and the device, various early warning information is projected on the driving ground through the projection device, and the driving intention of a driver can be visually displayed on the driving ground, so that the driving safety and the traffic efficiency are improved.
Specifically, when the scene prediction result indicates that the vehicle is in an early warning scene, the vehicle controller transmits the early warning information to be projected to the projection device, and the projection device projects it onto the ground over which the vehicle is driving according to the light display requirements.
In the vehicle collision early warning method above, a driving image is acquired while the vehicle is driving, and the driving image is predicted through a pre-trained perception model to obtain a scene prediction result. When the scene prediction result indicates that the vehicle is in an early warning scene, early warning information is determined according to the driving state of the vehicle, and a projection device on the vehicle is controlled to project the early warning information onto the ground over which the vehicle is driving. By predicting the driving image, the driving scene of the vehicle is known in real time; early warning information matching the current driving scene is projected onto the driving surface, the driver's intention is displayed visually on the road, and nearby pedestrians or other vehicles are actively prompted to yield or wait, reducing the probability of collision and improving driving safety.
In one embodiment, predicting the driving image through a pre-trained perception model to obtain a scene prediction result includes:
extracting image features of a driving image through a pre-trained feature extraction network; and predicting the driving scene where the driving image is based on the image characteristics through a pre-trained scene recognition network to obtain a scene prediction result.
Image features are features used to identify the driving scene. For example, image features include colors in the driving image, obstacles, lane lines, warning signs, and environmental features.
The perceptual model includes a feature extraction network and a scene recognition network. The feature extraction network is an important component in the perception model and is used for extracting image features for scene recognition from the driving image. The image features may be colors in the driving image, obstacles, lane lines, warning signs, environmental features, etc. Feature extraction networks are typically constructed using deep learning techniques, such as Convolutional Neural Networks (CNNs).
The scene recognition network is another important component in the perception model and is responsible for classifying and recognizing the extracted characteristic information so as to determine the specific driving scene of the vehicle. For example, the scene recognition network may classify the extracted image features into different driving scenes of urban roads, highways, mountain roads, and the like. Scene recognition networks are typically built using deep learning techniques, such as Recurrent Neural Networks (RNNs) or Support Vector Machines (SVMs), etc.
Specifically, the vehicle controller extracts image features of the driving image through the pre-trained feature extraction network and inputs the extracted image features into the scene recognition network, which classifies and recognizes them according to the learned rules and patterns to determine the scene prediction result.
In this embodiment, the image features of the driving image are extracted through the pre-trained feature extraction network, the extracted image features are helpful to accurately represent the driving image, and accurate driving scene prediction is performed through the pre-trained scene recognition network based on the extracted image features, so that the accuracy of scene recognition is improved.
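The two-stage prediction described above can be sketched with simple rule-based stubs standing in for the trained networks. Everything here is an assumption for illustration: the field names, the dict-of-detections input format, and the 10 m distance threshold are invented, not taken from the patent.

```python
def extract_features(driving_image):
    """Stand-in for the feature extraction network. Instead of pixels,
    `driving_image` is a dict of hypothetical raw detections."""
    return {
        "obstacles": driving_image.get("detections", []),
        "lane_offset": driving_image.get("lane_offset", 0.0),
    }

def recognize_scene(features, distance_threshold_m=10.0):
    """Stand-in for the scene recognition network: flags a warning scene
    when any detected obstacle is inside the threshold distance."""
    for obs in features["obstacles"]:
        if obs["distance_m"] < distance_threshold_m:
            return "warning"
    return "non-warning"

def predict(driving_image):
    """Full pipeline: feature extraction followed by scene recognition."""
    return recognize_scene(extract_features(driving_image))
```

In the real system both stages would be learned networks; the point of the sketch is only the division of labor between them.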
In some embodiments, extracting image features of the driving image through a pre-trained feature extraction network comprises:
extracting obstacle characteristics of a driving image through a pre-trained target detection sub-model; and extracting lane line characteristics of the driving image through a pre-trained lane line detection sub-model.
FIG. 4 is a block diagram of a perception model in one embodiment; as shown in FIG. 4, the feature extraction network includes a target detection sub-model and a lane line detection sub-model. The target detection sub-model is the part of the feature extraction network responsible for detecting key targets such as other vehicles, pedestrians, and traffic signs in the driving image and providing accurate position and classification information for these targets. The target detection sub-model automatically learns target features through a convolutional neural network (CNN).
The lane line detection sub-model is another part of the feature extraction network. It detects lane lines in the driving image and provides the automatic driving system with lane information for the current vehicle, helping the vehicle keep a correct driving track. Lane line detection combines a convolutional neural network with a recurrent neural network. For example, models based on the Encoder-Decoder architecture or models using an attention mechanism can achieve robust lane line detection under various road conditions.
In some embodiments, the training process for the target detection sub-model is as follows: acquiring training samples, wherein each training sample comprises an input image and a label of the input image; the tag is used for marking the position and the shape of the obstacle in the input image; processing the training sample through the initial target detection sub-model to obtain the characteristics of the obstacle in the input image; based on the difference between the characteristics of the obstacle and the labels, the initial target detection sub-model is adjusted to train, and training is stopped when the training stopping condition is reached, so that the trained target detection sub-model is obtained.
In some embodiments, the training process of the lane line detection sub-model is as follows: acquiring training samples, wherein each training sample comprises an input image and a label of the input image, the label marking the position and shape of the lane lines in the input image; processing the training samples through the initial lane line detection sub-model to obtain the lane line features in the input image; and adjusting the initial lane line detection sub-model for training based on the difference between the lane line features and the label, stopping training when the training stop condition is reached, so as to obtain the trained lane line detection sub-model.
The image features include obstacle features and lane line features. Obstacle features are visual features of obstacles such as other vehicles, pedestrians, and traffic signs in the driving image. These visual features can be used to identify and locate obstacles in the driving image and provide critical environmental awareness information. Obstacle features are typically extracted using computer vision and deep learning techniques; for example, obstacles in the driving image are detected and classified by a deep learning object detection algorithm such as YOLO.
Lane line features are visual features of the lane lines in the driving image, including color, width, continuity, shape, and position features. Road detection techniques are typically used to detect lane line features in the driving image.
Specifically, the vehicle controller inputs the preprocessed driving image into the target detection sub-model, which analyzes the image, generates a bounding box and a class label for each suspected obstacle, and takes the position, size, class, and other information of each obstacle as the obstacle features. The vehicle controller also inputs the driving image into the lane line detection sub-model, which analyzes the image and detects features such as the color, width, shape, start point, end point, centerline position, and curvature of the lane lines, taking these as the lane line features.
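As an illustration of what these two feature sets might look like as data records, here is a hedged sketch; the class names, fields, and units below are hypothetical, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class ObstacleFeature:
    label: str        # e.g. "vehicle", "pedestrian", "traffic_sign"
    bbox: tuple       # (x, y, width, height) bounding box in image pixels
    confidence: float # detection confidence score

@dataclass
class LaneLineFeature:
    color: str        # e.g. "white", "yellow"
    width_px: float   # line width in pixels
    curvature: float  # curvature estimate of the fitted line
    start: tuple      # start point (x, y) in image coordinates
    end: tuple        # end point (x, y) in image coordinates

def merge_image_features(obstacles, lane_lines):
    """Combine the two sub-models' outputs into the image-feature record
    that the scene recognition network consumes (field names illustrative)."""
    return {"obstacles": obstacles, "lane_lines": lane_lines}
```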
In this embodiment, the object detection sub-model and the lane line detection sub-model can more accurately identify and locate the obstacle and the lane line in the image, and the scene identification network can accurately predict the driving scene based on the accurate obstacle characteristics and lane line characteristics, thereby improving the accuracy of scene identification.
In one embodiment, determining the pre-warning information according to the driving state of the vehicle includes:
acquiring driving data of the vehicle; determining a motion mode of the vehicle based on the driving data; and determining early warning information corresponding to the motion mode.
Driving data is data generated while the vehicle is driving. For example, the driving data includes parameters such as gear, steering wheel angle, speed, and rotational speed.
The movement mode refers to the operation mode of the vehicle. For example, the movement modes may generally include straight forward, straight backward, steering forward, and steering backward. The early warning information of each movement mode is different, and the projection direction of the early warning information of each movement mode is consistent with the driving direction of the vehicle.
Specifically, the vehicle controller acquires the driving data of the vehicle through various sensors on the vehicle, determines based on the driving data whether the vehicle is in straight forward, straight backward, steering forward, or steering backward motion, and, after the movement mode of the vehicle is determined, determines the early warning information according to the driving direction corresponding to the movement mode and the corresponding driving data.
In this embodiment, corresponding early warning information is determined according to the different movement modes, so that the early warning information is targeted, thereby improving early warning efficiency, reducing the probability of collision between the vehicle and pedestrians or other vehicles, and improving driving safety.
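The mode determination described above can be sketched as a simple rule over the driving data. The gear encoding ("D"/"R") and the 15-degree straight-line threshold below are illustrative assumptions; the application says only that linear motion keeps the steering angle within a preset angle threshold:

```python
def classify_movement_mode(gear: str, steering_angle_deg: float,
                           straight_threshold_deg: float = 15.0) -> str:
    """Classify the vehicle's movement mode from driving data.
    Gear "R" means reverse; any other gear is treated as forward."""
    direction = "backward" if gear == "R" else "forward"
    # Within the preset angle threshold, the track is essentially a straight line.
    if abs(steering_angle_deg) <= straight_threshold_deg:
        return f"straight_{direction}"
    return f"steering_{direction}"
```

The four return values correspond to the four movement modes enumerated above.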
In one embodiment, determining the pre-warning information corresponding to the movement pattern includes:
if the movement mode is linear motion, it is determined that the early warning information corresponding to the linear motion comprises at least one of a straight arrow and prompt text; there is a mapping relationship between the movement speed of the linear motion and the projection length, projection color, and flicker frequency of the straight arrow.
Linear motion refers to a specific vehicle motion state in which the steering angle of the vehicle is kept within a preset angle threshold during driving, that is, the driving track of the vehicle is essentially a straight line. In this state, the vehicle mainly travels in one direction without significant steering or curved travel. Linear motion can be divided into linear forward motion and linear backward motion.
The direction indicated by the straight arrow characterizes the direction of movement of the vehicle. Fig. 5 is a schematic view showing the projection position of a straight arrow on the driving ground in one embodiment. As shown in fig. 5, when the vehicle is in linear forward motion the straight arrow points in the vehicle's forward direction, and when the vehicle is in linear backward motion the straight arrow points in the vehicle's backward direction.
The prompt text is used for prompting other vehicles or pedestrians to avoid or wait. The prompt text may be inside the straight arrow or beside the straight arrow. For example, as shown in fig. 5, when the vehicle is in an early warning scene and the vehicle is in a straight-line forward motion, the projection device projects a straight-line arrow and corresponding prompt text (for example, please pay attention to the vehicle) in the forward direction of the vehicle.
The mapping relationship between the movement speed of the linear motion and the projection length, projection color, and flicker frequency of the straight arrow represents the association between the speed of the vehicle's linear motion and these projection parameters. Specifically, the mapping relationship may be that the greater the movement speed of the vehicle in linear motion, the greater the corresponding projection length and the higher the flicker frequency of the projection device, with different movement speeds corresponding to different projection colors. For example, fig. 6 is a schematic diagram of early warning information corresponding to different movement speeds in one embodiment. As shown in fig. 6, the movement speeds are divided into three speed levels of low speed, medium speed, and high speed, each level corresponding to a range of linear movement speeds. When the vehicle travels at a low speed, a blue short arrow icon is projected in the driving direction and remains steadily lit without flickering. When the vehicle travels at a medium speed, a yellow medium-length arrow icon is projected in the driving direction and blinks at a medium frequency. When the vehicle travels at a high speed, a red long arrow icon is projected in the driving direction and blinks at a high frequency.
Specifically, when the vehicle is in an early warning scene and in linear motion, the vehicle controller determines the speed level corresponding to the vehicle's movement speed, determines the corresponding early warning information according to the determined speed level, namely the projection length and projection color of the straight arrow and the flicker frequency of the projection device, controls the projection device to project as required by the early warning information, and projects the prompt text on the driving ground within the neighborhood of the straight arrow.
In this embodiment, when the vehicle is in an early warning scene and in linear motion, it is determined that the early warning information corresponding to the linear motion comprises at least one of a straight arrow and prompt text, and a mapping relationship exists between the movement speed of the linear motion and the projection length, projection color, and flicker frequency of the straight arrow. Through the intuitive straight arrow and prompt text, drivers of other vehicles and pedestrians can quickly learn the current linear motion state and early warning information of the vehicle; since the projection length, projection color, and flicker frequency of the straight arrow are directly associated with the movement speed of the vehicle, the current state of the vehicle can be displayed more intuitively, thereby improving early warning efficiency.
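The speed-level mapping of fig. 6 can be sketched as a lookup. The concrete speed bands and flicker rates below are illustrative assumptions, since the application defines the low/medium/high levels only qualitatively:

```python
def linear_warning(speed_kmh: float) -> dict:
    """Map movement speed to the straight-arrow projection parameters
    described for fig. 6. Speed bands (< 30, < 60, >= 60 km/h) and
    flicker rates are assumed values, not taken from the application."""
    if speed_kmh < 30:   # low speed: blue short arrow, steadily lit
        return {"length": "short", "color": "blue", "flicker_hz": 0}
    if speed_kmh < 60:   # medium speed: yellow medium arrow, medium-frequency blink
        return {"length": "medium", "color": "yellow", "flicker_hz": 2}
    # high speed: red long arrow, high-frequency blink
    return {"length": "long", "color": "red", "flicker_hz": 5}
```

The returned dictionary would then drive the projection device's length, color, and flicker settings.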
In one embodiment, the vehicle collision warning method further includes:
if the motion mode is steering motion, determining that the early warning information corresponding to the steering motion comprises at least one of curved arrow and prompt text; there is a mapping relationship between the steering angle of the steering movement and the bending angle, projection length, projection color and flicker frequency of the steering arrow.
The steering movement refers to movement of the vehicle to change a running track by changing the direction of wheels by operating a steering wheel or other steering devices during running of the vehicle. In steering movements, the vehicle no longer remains straight, but follows a curved or arcuate path. The steering motion can be summarized as steering forward motion and steering backward motion.
The direction indicated by the steering arrow characterizes the direction of movement of the vehicle. Fig. 7 is a schematic view of the projection position of a steering arrow on the driving ground in one embodiment. As shown in fig. 7, when the vehicle is in steering forward motion the steering arrow points in the vehicle's forward direction, and when the vehicle is in steering backward motion the steering arrow points in the vehicle's backward direction.
The prompt text is used to prompt other vehicles or pedestrians to give way or wait. The prompt text may be inside the steering arrow or beside it. For example, as shown in fig. 7, when the vehicle is in an early warning scene and in steering forward motion, the projection device projects a curved arrow and corresponding prompt text (for example, please pay attention to the vehicle) in the forward direction of the vehicle.
The mapping relationship between the steering angle of the steering motion and the bending angle, projection length, projection color, and flicker frequency of the steering arrow represents the association between the steering angle of the vehicle's steering motion and these projection parameters. Specifically, the mapping relationship may be that the larger the steering angle of the vehicle's steering motion, the larger the bending angle of the steering arrow, the greater the corresponding projection length, and the higher the flicker frequency of the projection device, with different bending angles corresponding to different projection colors. For example, fig. 8 is a schematic diagram of early warning information corresponding to different bending angles in one embodiment. As shown in fig. 8, the bending angles are divided into three steering levels: steering wheel turned half a turn left/right, one full turn left/right, and two and a half turns left/right, each level corresponding to a range of bending angles. When the steering wheel is turned half a turn left/right, a blue short arrow icon curved at 15 degrees is projected in the driving direction and remains steadily lit without flickering. When the steering wheel is turned one full turn left/right, a yellow medium-length arrow icon with a medium bend is projected in the driving direction and blinks at a medium frequency. When the steering wheel is turned two and a half turns left/right, a red long arrow icon curved at 90 degrees is projected in the driving direction and blinks at a high frequency.
Specifically, when the vehicle is in an early warning scene and in steering motion, the vehicle controller determines the steering level corresponding to the steering angle of the vehicle, determines the corresponding early warning information according to the determined steering level, namely the bending angle, projection length, and projection color of the steering arrow and the flicker frequency of the projection device, controls the projection device to project as required by the early warning information, and projects the prompt text on the driving ground within the neighborhood of the steering arrow.
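The steering-level mapping of fig. 8 can be sketched similarly. The 45-degree medium bend and the flicker rates are illustrative assumptions; fig. 8 specifies only the 15-degree and 90-degree bends for the outer levels:

```python
def steering_warning(wheel_turns: float) -> dict:
    """Map steering-wheel rotation (in turns; sign gives left/right) to the
    curved-arrow projection parameters described for fig. 8. The medium
    bend angle and flicker rates are assumed values."""
    t = abs(wheel_turns)
    if t <= 0.5:   # half a turn: blue, 15-degree short arrow, steadily lit
        return {"bend_deg": 15, "length": "short", "color": "blue", "flicker_hz": 0}
    if t <= 1.0:   # one full turn: yellow, medium bend, medium-frequency blink
        return {"bend_deg": 45, "length": "medium", "color": "yellow", "flicker_hz": 2}
    # beyond one turn (up to two and a half): red, 90-degree long arrow, high-frequency blink
    return {"bend_deg": 90, "length": "long", "color": "red", "flicker_hz": 5}
```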
In some embodiments, if the distance between the vehicle and an obstacle is detected to be smaller than a preset distance during linear or steering motion, the projection device is controlled to project early warning information on the driving ground. The early warning information comprises prompt information and an exclamation mark indicator. Fig. 9 is a schematic diagram of the early warning information when an obstacle is detected in one embodiment. As shown in fig. 9, when an obstacle is detected during forward movement of the vehicle, an exclamation mark indicator is projected in front of the vehicle; when an obstacle is detected during backward movement, an exclamation mark indicator is projected behind the vehicle.
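A minimal sketch of this distance check follows; the 5 m preset distance is an illustrative assumption, as no concrete value is given in the application:

```python
def obstacle_alert(distance_m: float, moving_forward: bool,
                   preset_distance_m: float = 5.0):
    """Return where to project the exclamation-mark indicator ("front" or
    "rear"), or None if the obstacle is beyond the preset distance."""
    if distance_m >= preset_distance_m:
        return None
    return "front" if moving_forward else "rear"
```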
In this embodiment, when the vehicle is in an early warning scene and in steering motion, it is determined that the early warning information corresponding to the steering motion comprises at least one of a steering arrow and prompt text, and a mapping relationship exists between the steering angle of the steering motion and the bending angle, projection length, projection color, and flicker frequency of the steering arrow. Through the intuitive steering arrow and prompt text, drivers of other vehicles and pedestrians can quickly learn the current steering motion state and early warning information of the vehicle; since the bending angle, projection length, projection color, and flicker frequency of the steering arrow are directly associated with the steering angle of the vehicle, the current state of the vehicle can be displayed more intuitively, thereby improving early warning efficiency.
In one embodiment, the training process of the scene recognition network includes:
1. Acquiring training samples, wherein each training sample comprises an input image, image features of the input image, and a label; the label is used to annotate the driving scene in which the input image was captured.
Wherein the image features of the input image refer to features for identifying driving scenes. Specifically, the image features include obstacle features and lane line features.
Driving scenes refer to the current driving environment of the vehicle, and can be divided into early warning scenes and non-early warning scenes. When the training samples are marked, the marked input images are early warning scenes or non-early warning scenes.
Specifically, the vehicle controller acquires a large number of images captured by the vehicle cameras, takes the captured images as input images, extracts the image features of the input images, and annotates the driving scene of each input image as an early warning scene or a non-early-warning scene.
2. Processing the training samples through the initial network to obtain a prediction result of the driving scene of each input image.
Specifically, the initial network classifies and identifies the image features of the input image, and identifies the weather and day/night conditions in the input image according to the brightness and clarity of the image, so as to determine the specific driving scene in which the vehicle is located and obtain a prediction result of the driving scene of the input image based on that specific driving scene.
3. Adjusting the initial network for training based on the difference between the prediction result and the label, and stopping training when the training stop condition is reached, so as to obtain a trained scene recognition network.
Specifically, the initial network determines the difference between the prediction result and the label, if the difference is greater than or equal to a preset error value, the initial network is adjusted, the next iterative training process is entered, when the difference is smaller than the preset error value or the number of iterations reaches the preset number of iterations, the training is stopped, and the trained initial network is used as a trained scene recognition network.
In this embodiment, by learning and training a large number of training samples with labels, the scene recognition network can gradually learn the features and modes of different driving scenes, so as to improve the scene recognition accuracy of unknown images.
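The three training steps can be sketched as a generic loop. The `predict_fn` and `update_fn` callables stand in for the initial network's forward pass and parameter adjustment, and the stop conditions mirror the preset error value and iteration limit described above; all names are illustrative:

```python
def train_scene_network(samples, labels, update_fn, predict_fn,
                        max_iters=100, error_threshold=0.05):
    """Skeleton of training steps 1-3: predict on the samples, compare
    with the labels, adjust the network, and stop when the difference is
    below the preset error value or the iteration budget is exhausted."""
    params, error = None, 1.0
    for _ in range(max_iters):
        preds = [predict_fn(params, s) for s in samples]
        # Fraction of samples whose predicted scene differs from the label.
        error = sum(p != y for p, y in zip(preds, labels)) / len(labels)
        if error < error_threshold:
            break                     # stop condition: error below preset value
        params = update_fn(params, samples, labels)
    return params, error
```

In practice the update would be a gradient step in a deep learning framework; the loop only illustrates the control flow of the training procedure.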
In one detailed embodiment, a vehicle collision warning method includes the steps of:
1. acquiring a driving image in the driving process of a vehicle;
2. extracting obstacle characteristics of a driving image through a pre-trained target detection sub-model;
3. and extracting lane line characteristics of the driving image through a pre-trained lane line detection sub-model.
4. Predicting, through a pre-trained scene recognition network, the driving scene in which the driving image was captured based on the obstacle features and the lane line features, to obtain a scene prediction result.
5. And when the scene prediction result represents that the vehicle is in a non-early warning scene, controlling the projection device to be in an idle state.
6. Acquiring driving data of the vehicle when the scene prediction result represents that the vehicle is in an early warning scene;
7. determining a movement pattern of the vehicle based on the travel data;
8. if the motion mode is linear motion, executing a step nine; if the motion pattern is steering motion, step ten is performed.
9. Determining that the early warning information corresponding to the linear motion comprises at least one of a linear arrow and a prompt text, and executing a step eleventh; there is a mapping relationship between the movement speed of the linear movement and the projection length, projection color and flicker frequency of the linear arrow.
10. Determining that the early warning information corresponding to the steering movement comprises at least one of curved arrow and prompt text, and executing a step eleventh; there is a mapping relationship between the steering angle of the steering movement and the bending angle, projection length, projection color and flicker frequency of the steering arrow.
11. And controlling a projection device on the vehicle to project the early warning information on the ground on which the vehicle runs.
In this embodiment, the driving scene in which the vehicle is located can be known in real time by predicting on the driving image. When the scene prediction result indicates that the vehicle is in an early warning scene, the driving state of the vehicle is determined and early warning information matching the current driving scene is projected onto the driving ground through the projection device, so that the driving intention of the driver is visually displayed on the driving ground. Surrounding pedestrians or other vehicles can be actively prompted by the early warning information on the driving ground to give way or wait, thereby reducing the probability of collision between the vehicle and pedestrians or other vehicles and improving driving safety.
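The eleven steps of this detailed embodiment reduce to the following control flow. All callables and the `projector` object are stand-ins for the sub-systems described above, not an API defined by this application:

```python
def collision_warning_step(image, perceive, motion_mode_of, warning_for, projector):
    """One pass of the detailed embodiment: perceive the scene from the
    driving image, then either idle the projector (non-warning scene) or
    project movement-mode-specific early warning information."""
    if perceive(image) != "warning_scene":
        projector.idle()                 # step 5: non-warning scene, projector idle
        return None
    mode = motion_mode_of()              # steps 6-7: driving data -> movement mode
    info = warning_for(mode)             # steps 8-10: mode-specific warning info
    projector.project(info)              # step 11: project onto the driving ground
    return info
```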
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turns or alternately with at least some of the other steps or sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a vehicle collision early warning device for realizing the vehicle collision early warning method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the vehicle collision warning device provided below may refer to the limitation of the vehicle collision warning method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 10, there is provided a vehicle collision warning apparatus including:
an image acquisition module 1101, configured to acquire a running image during running of the vehicle;
the scene prediction module 1102 is configured to predict a running image through a pre-trained perception model to obtain a scene prediction result;
the early warning determining module 1103 is configured to determine early warning information according to a driving state of the vehicle when the scene prediction result indicates that the vehicle is in an early warning scene;
the early warning module 1104 is used for controlling a projection device on the vehicle to project early warning information on the ground where the vehicle runs.
In one embodiment, the early warning determining module 1103 is further configured to control the projection device to be in an idle state when the scene prediction result indicates that the vehicle is in a non-early warning scene.
In one embodiment, the perception model includes a feature extraction network and a scene recognition network, and the scene prediction module 1102 is further configured to extract image features of the driving image through the pre-trained feature extraction network, and to predict, through the pre-trained scene recognition network, the driving scene in which the driving image is located based on the image features, so as to obtain a scene prediction result.
In one embodiment, the feature extraction network includes a target detection sub-model and a lane line detection sub-model, the image features include an obstacle feature and a lane line feature, and the scene prediction module 1102 is further configured to extract the obstacle feature of the driving image through the pre-trained target detection sub-model; and extracting lane line characteristics of the driving image through a pre-trained lane line detection sub-model.
In one embodiment, the early warning determining module 1103 is further configured to obtain driving data of the vehicle; determining a movement pattern of the vehicle based on the travel data; and determining early warning information corresponding to the movement mode.
In one embodiment, the early warning determining module 1103 is further configured to determine that the early warning information corresponding to the linear motion includes at least one of a linear arrow and a prompt text if the motion mode is linear motion; there is a mapping relationship between the movement speed of the linear movement and the projection length, projection color and flicker frequency of the linear arrow.
In one embodiment, the early warning determining module 1103 is further configured to determine that the early warning information corresponding to the steering motion includes at least one of a curved arrow and a prompt text if the motion mode is steering motion; there is a mapping relationship between the steering angle of the steering movement and the bending angle, projection length, projection color and flicker frequency of the steering arrow.
In one embodiment, the scene prediction module 1102 is further configured to acquire training samples, each training sample including an input image, image features of the input image, and a label, the label being used to annotate the driving scene in which the input image is located; to process the training samples through an initial network to obtain a prediction result of the driving scene of each input image; and to adjust the initial network for training based on the difference between the prediction result and the label, stopping training when a training stop condition is reached, so as to obtain a trained scene recognition network.
The modules in the vehicle collision early warning device can be realized in whole or in part by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a vehicle controller, the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a vehicle collision early warning method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory may include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like, without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments express only a few implementations of the present application, and although they are described in relative detail, they are not to be construed as limiting the scope of the patent application. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, and these would all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

1. A vehicle collision warning method, the method comprising:
acquiring a driving image in the driving process of a vehicle;
predicting the driving image through a pre-trained perception model to obtain a scene prediction result;
when the scene prediction result represents that the vehicle is in an early warning scene, determining early warning information according to the running state of the vehicle;
And controlling a projection device on the vehicle to project the early warning information on the ground on which the vehicle runs.
2. The method according to claim 1, wherein the method further comprises:
and controlling the projection device to be in an idle state when the scene prediction result indicates that the vehicle is in a non-early-warning scene.
3. The method according to claim 1, wherein the perception model comprises a feature extraction network and a scene recognition network, and the predicting the driving image through the pre-trained perception model to obtain a scene prediction result comprises:
extracting image features of the driving image through a pre-trained feature extraction network;
and predicting a driving scene where the driving image is based on the image characteristics through a pre-trained scene recognition network to obtain a scene prediction result.
4. A method according to claim 3, wherein the feature extraction network comprises a target detection sub-model and a lane line detection sub-model, the image features comprise obstacle features and lane line features, and the extracting the image features of the driving image by the pre-trained feature extraction network comprises:
Extracting obstacle characteristics of the driving image through a pre-trained target detection sub-model;
and extracting the lane line characteristics of the driving image through a pre-trained lane line detection sub-model.
5. The method of claim 1, wherein the determining the pre-warning information based on the driving status of the vehicle comprises:
acquiring running data of the vehicle;
determining a movement pattern of the vehicle based on the travel data;
and determining early warning information corresponding to the motion mode.
6. The method of claim 5, wherein the determining the pre-warning information corresponding to the movement pattern comprises:
if the motion mode is linear motion, determining that the early warning information corresponding to the linear motion comprises at least one of linear arrow and prompt text; and a mapping relation exists between the movement speed of the linear movement and the projection length, the projection color and the flicker frequency of the linear arrow.
7. The method of claim 6, wherein the method further comprises:
if the motion mode is steering motion, determining that the early warning information corresponding to the steering motion comprises at least one of curved arrow and prompt text; and a mapping relation exists between the steering angle of the steering movement and the bending angle, the projection length, the projection color and the flicker frequency of the steering arrow.
8. The method according to claim 1, wherein the method further comprises:
acquiring training samples, wherein each training sample comprises an input image, image features of the input image, and a label, the label being used to mark the driving scene in which the input image was captured;
processing the training samples through an initial network to obtain a prediction result of the driving scene in which the input image was captured;
and adjusting the initial network for training based on the difference between the prediction result and the label, and stopping training when a training stop condition is reached, to obtain the trained scene recognition network.
9. A vehicle collision warning device, characterized in that the device comprises:
the image acquisition module is used for acquiring a driving image during the driving of the vehicle;
the scene prediction module is used for predicting the driving image through a pre-trained perception model to obtain a scene prediction result;
the early warning determining module is used for determining early warning information according to the running state of the vehicle when the scene prediction result represents that the vehicle is in an early warning scene;
and the early warning module is used for controlling the projection device on the vehicle to project the early warning information on the ground where the vehicle runs.
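The four modules of the device in claim 9 compose naturally into one per-frame step. The callables below mirror the claimed modules; `project` stands in for the projection-device controller, whose real interface is an assumption:

```python
class VehicleCollisionWarningDevice:
    """Sketch of the module composition in claim 9."""

    def __init__(self, capture_image, predict_scene, determine_warning, project):
        self.capture_image = capture_image          # image acquisition module
        self.predict_scene = predict_scene          # scene prediction module
        self.determine_warning = determine_warning  # early warning determining module
        self.project = project                      # early warning (projection) module

    def step(self, vehicle_state):
        """Process one frame; project and return warning info if warranted."""
        image = self.capture_image()
        if self.predict_scene(image) == "early_warning":
            info = self.determine_warning(vehicle_state)
            self.project(info)   # draw onto the ground ahead of the vehicle
            return info
        return None              # normal scene: nothing projected
```

Wiring the modules through constructor injection keeps each one independently testable, which matches the device-claim decomposition.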
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
CN202311668054.0A 2023-12-07 2023-12-07 Vehicle collision early warning method, device, equipment, storage medium and program product Pending CN117681766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311668054.0A CN117681766A (en) 2023-12-07 2023-12-07 Vehicle collision early warning method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN117681766A 2024-03-12

Family

ID=90127751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311668054.0A Pending CN117681766A (en) 2023-12-07 2023-12-07 Vehicle collision early warning method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN117681766A (en)

Similar Documents

Publication Publication Date Title
US11386673B2 (en) Brake light detection
US11087186B2 (en) Fixation generation for machine learning
US10055652B2 (en) Pedestrian detection and motion prediction with rear-facing camera
US10296796B2 (en) Video capturing device for predicting special driving situations
CN113998034B (en) Rider assistance system and method
US10332292B1 (en) Vision augmentation for supplementing a person's view
CN107487258B (en) Blind area detection system and method
CN112673407B (en) System and method for predictive vehicle accident warning and avoidance
US20170206426A1 (en) Pedestrian Detection With Saliency Maps
CN110400478A (en) A kind of road condition notification method and device
CN115257527B (en) Control method and device for taillight display and vehicle
US11257372B2 (en) Reverse-facing anti-collision system
CN112699862B (en) Image data processing method, device, equipment and storage medium
CN117681766A (en) Vehicle collision early warning method, device, equipment, storage medium and program product
JP2019175372A (en) Danger prediction device, method for predicting dangers, and program
JP3222638U (en) Safe driving support device
US20240127694A1 (en) Method for collision warning, electronic device, and storage medium
GB2624653A (en) A system and method for object detection from a curved mirror
CN117184057A (en) Control method and device for safe running of vehicle, electronic equipment and storage medium
CN116001799A (en) Vehicle line pressing driving early warning method and device, electronic equipment and storage medium
CN115205814A (en) Distance detection method, vehicle high beam control method, device, medium and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination