CN117912289B - Vehicle group driving early warning method, device and system based on image recognition - Google Patents

Vehicle group driving early warning method, device and system based on image recognition

Info

Publication number
CN117912289B
CN117912289B
Authority
CN
China
Prior art keywords
vehicle
image
group
driving
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410308683.0A
Other languages
Chinese (zh)
Other versions
CN117912289A (en)
Inventor
赵永亮
黄志俊
韩丽君
郝志浪
邓博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiuzhi Information Technology Co ltd
Original Assignee
Xi'an Jiuzhi Information Technology Co ltd
Filing date
Publication date
Application filed by Xi'an Jiuzhi Information Technology Co ltd filed Critical Xi'an Jiuzhi Information Technology Co ltd
Priority to CN202410308683.0A
Publication of CN117912289A
Application granted
Publication of CN117912289B
Legal status: Active


Abstract

The application discloses a vehicle group driving early warning method, device and system based on image recognition, and relates to the technical field of image recognition. The method comprises the following steps: acquiring a group of images of the road ahead captured by the vehicle at the head of the group, the image group being referred to as a first driving image group containing multiple frames of first driving images, and the vehicle at the head being referred to as the first vehicle; determining the lane in which the first vehicle is located; and judging whether a dangerous object is present in that lane, and if so, transmitting the license plate number of the first vehicle together with the first driving image to the other vehicles in the vehicle group. The application allows the other vehicles to learn what is happening in front of the first vehicle before the first vehicle itself has reacted in any way, giving the vehicles behind enough time to respond.

Description

Vehicle group driving early warning method, device and system based on image recognition
Technical Field
The application relates to the technical field of image recognition, and in particular to a vehicle group driving early warning method based on image recognition, a vehicle group driving early warning device based on image recognition, and a vehicle group driving early warning system based on image recognition.
Background
In the prior art, a vehicle driving on a road often encounters emergency situations. Suppose three vehicles A, B and C travel one after another in the same lane, with vehicle A at the front, vehicle B between A and C, and vehicle C at the rear. When an accident occurs to vehicle A (for example, emergency braking), vehicle B can see vehicle A braking, but vehicle C cannot. Vehicle B may then execute an avoidance strategy (for example, changing to another lane) after seeing vehicle A brake, while vehicle C has no idea that vehicle A has braked; vehicle C only discovers the situation after vehicle B has partially or completely left the current lane, by which time its reaction is likely to come too late, causing a rear-end collision.
If a further vehicle D is traveling behind vehicle C, vehicle D cannot anticipate vehicle A's emergency braking at all and therefore has no way to react in time, which is one of the reasons why chain rear-end collisions occur so frequently today.
Disclosure of Invention
The invention aims to provide a vehicle group driving early warning method based on image recognition that solves at least one of the technical problems described above.
In one aspect of the present invention, a vehicle group traveling pre-warning method based on image recognition is provided, the vehicle group traveling pre-warning method based on image recognition includes:
Acquiring a vehicle driving front image group, that is, a group of images of the road ahead captured by the vehicle at the head within a preset time period, wherein the vehicle driving front image group is called a first driving image group, the first driving image group comprises a plurality of frames of first driving images, and the vehicle at the head is called the first vehicle;
acquiring a lane where a first vehicle is located according to the first driving image group;
Judging, according to the first driving image group, whether a dangerous object exists on the lane where the first vehicle is located, and if so, transmitting the license plate number of the first vehicle to other vehicles in a vehicle group which is located in the same network as the first vehicle and located in the same lane, and transmitting the first driving image to those same vehicles.
Optionally, before the acquiring the image group of the front part of the vehicle running captured by the first vehicle in the preset time period, the vehicle group running early warning method based on image recognition includes:
acquiring request information sent by a request vehicle and position information of the request vehicle;
respectively acquiring vehicle position information of each vehicle which is positioned in the same network with the request vehicle;
Setting a position range threshold according to the request vehicle;
Acquiring the running direction of each vehicle according to the position information transmitted by that vehicle at multiple times;
Acquiring vehicles with the same running direction as the request vehicle, wherein each vehicle with the same running direction as the request vehicle forms a vehicle group with the same running direction;
acquiring vehicles in the position range threshold value in the vehicle group with the same running direction according to the position range threshold value and the vehicle position information of each vehicle, wherein each vehicle in the position range threshold value forms the vehicle group, and the vehicle group comprises the request vehicle;
And acquiring the vehicle at the first position according to the position information of each vehicle.
Optionally, the acquiring the lane where the first vehicle is located according to the first driving image group includes:
extracting image characteristics of any one or more frames of first driving images in the first driving image group;
Carrying out lane information extraction based on the image features to obtain corresponding lane line features and road surface features;
performing feature fusion based on the image features, the lane line features and the road surface features to obtain road fusion features;
And acquiring a lane where the first vehicle is currently located according to the road fusion characteristics.
Optionally, the determining, according to the first driving image group, whether a dangerous object exists on the lane where the first vehicle is located includes:
acquiring an image of a frame with the latest shooting time and at least one frame before the frame in a first driving image group;
Judging whether an object exists in the image according to the image of the frame with the latest shooting time in the first driving image group and at least one frame before the frame, if so, acquiring a trained image classifier;
extracting image characteristics of any one or more frames of images in the first driving image group;
Inputting the image features into the image classifier so as to acquire object classification information;
and judging whether the object is a dangerous object or not according to the object classification information.
Optionally, the object classification information includes vehicles, pedestrians, and obstacles;
When the object classification information is a vehicle, the determining whether the object is a dangerous object according to the object classification information includes:
Acquiring, for each frame of first driving image, the number of pixels occupied by the object in that frame;
and judging, according to the change in the number of pixels, whether the first vehicle gradually approaches the object, and if so, judging that the object is a dangerous object.
Optionally, when the object classification information is a pedestrian, the determining whether the object is a dangerous object according to the object classification information includes:
Acquiring current position information of the first vehicle;
And judging, according to the current position information of the first vehicle, whether the first vehicle is located on an expressway, and if so, judging that the object is a dangerous object.
Optionally, when the object classification information is a pedestrian, the determining whether the object is a dangerous object according to the object classification information further includes:
judging, according to the current position information of the first vehicle, whether the first vehicle is located on an expressway, and if not, acquiring, for each frame of first driving image, the number of pixels occupied by the object in that frame;
And judging, according to the change in the number of pixels, whether the first vehicle gradually approaches the object, and if so, judging that the object is a dangerous object.
Optionally, when the object classification information is an obstacle, the determining whether the object is a dangerous object according to the object classification information includes:
acquiring a trained obstacle classifier;
Extracting image features in at least five frames of images in the first driving image group;
respectively inputting the image features in each frame of image into the trained obstacle classifier so as to obtain classification labels, wherein the image features in one frame of image correspond to one classification label;
Judging whether the occurrence frequency of one classification label exceeds a preset threshold value, if so, acquiring a classification label database, wherein the classification label database comprises at least one preset classification label and a dangerous class corresponding to each classification label, and the dangerous class comprises dangerous objects and non-dangerous objects;
and acquiring the risk classification corresponding to the preset classification label which is the same as the classification label exceeding the preset threshold value, and judging the risk as a dangerous object if the risk classification is the dangerous object.
The application also provides a vehicle group driving early warning device based on image recognition, which comprises:
A vehicle running front image group acquisition module for acquiring a vehicle running front image group captured by a vehicle located at the head in a preset time period, the vehicle running front image group being referred to as a first running image group including a plurality of frames of first running images, the vehicle located at the head being referred to as a first vehicle;
The lane acquisition module is used for acquiring a lane where a first vehicle is located according to the first driving image group;
the dangerous object judging module is used for judging whether dangerous objects exist on the lane where the first vehicle is located according to the first driving image group;
The sending module is used for transmitting the license plate number of the first vehicle to the other vehicles in a vehicle group which is in the same network as the first vehicle and in the same lane, and for transmitting the first driving image to those same vehicles.
The application also provides a vehicle group driving early warning system based on image recognition, which comprises:
The cloud comprises the vehicle group driving early warning device based on image recognition;
The number of the vehicle ends is multiple, one vehicle end is arranged in one vehicle, and each vehicle end is communicated with the cloud; wherein,
Any one of the vehicle ends sends request information and vehicle position information to the cloud end;
The cloud end acquires a first driving image and a license plate number of a first vehicle through the vehicle group driving early warning method based on image recognition, and transmits the license plate number of the first vehicle to other vehicles in a vehicle group which is in the same network as the first vehicle and is positioned in the same lane and transmits the first driving image to other vehicles in the vehicle group which is in the same network as the first vehicle and is positioned in the same lane, so that a vehicle machine system of the vehicle which receives the first driving image displays the first driving image;
And the vehicle end which acquires the first driving image displays the first driving image on a vehicle display screen.
The vehicle group driving early warning method based on image recognition performs image recognition on the forward-view images captured by the vehicle at the head of the group to judge whether a dangerous object is present. When a dangerous object is present, the images are transmitted to the other vehicles in the same lane behind that leading vehicle, so those vehicles learn of the situation in front of the leading vehicle before the leading vehicle has made any reaction. This gives the vehicles behind enough reaction time and prevents the situation in which, when an emergency occurs in front of the leading vehicle, the vehicles behind have insufficient time to react.
Drawings
Fig. 1 is a flowchart of a vehicle group driving early warning method based on image recognition according to an embodiment of the application.
Fig. 2 is a schematic diagram of an electronic device for implementing the vehicle group traveling warning method based on image recognition shown in fig. 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application become more apparent, the technical solutions in the embodiments of the present application will be described in more detail below with reference to the accompanying drawings in the embodiments of the present application. In the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The described embodiments are some, but not all, embodiments of the application. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application. Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a vehicle group driving early warning method based on image recognition according to an embodiment of the application.
The vehicle group driving early warning method based on image recognition as shown in fig. 1 comprises the following steps:
Step 1: acquiring a vehicle driving front image group shot by a vehicle at the head in a preset time period, wherein the vehicle driving front image group is called a first driving image group, the first driving image group comprises a plurality of frames of first driving images, and the vehicle at the head is called a first vehicle;
step 2: acquiring a lane where a first vehicle is located according to the first driving image group;
Step 3: judging whether dangerous objects exist on a lane where the first vehicle is located according to the first driving image group, if so, transmitting the license plate number of the first vehicle to other vehicles in a vehicle group which is located in the same network as the first vehicle and located in the same lane; the first travel image is communicated to other vehicles in a group of vehicles within the same network and within the same lane as the first vehicle.
The vehicle group driving early warning method based on image recognition performs image recognition on the forward-view images captured by the vehicle at the head of the group to judge whether a dangerous object is present. When a dangerous object is present, the images are transmitted to the other vehicles in the same lane behind that leading vehicle, so those vehicles learn of the situation in front of the leading vehicle before the leading vehicle has made any reaction. This gives the vehicles behind enough reaction time and prevents the situation in which, when an emergency occurs in front of the leading vehicle, the vehicles behind have insufficient time to react.
In this embodiment, each vehicle knows its own license plate number, which is stored in the in-vehicle system in advance; the license plate number can be sent to the cloud either in response to a request from the cloud or when the vehicle connects to the cloud.
In this embodiment, before the acquiring the image group of the front side of the vehicle that is captured by the first vehicle in the preset time period, the vehicle group driving early warning method based on image recognition includes:
acquiring request information sent by a request vehicle and position information of the request vehicle;
respectively acquiring vehicle position information of each vehicle which is positioned in the same network with the request vehicle;
Setting a position range threshold according to the request vehicle;
Acquiring the running direction of each vehicle according to the position information transmitted by that vehicle at multiple times;
Acquiring vehicles with the same running direction as the request vehicle, wherein each vehicle with the same running direction as the request vehicle forms a vehicle group with the same running direction;
acquiring vehicles in the position range threshold value in the vehicle group with the same running direction according to the position range threshold value and the vehicle position information of each vehicle, wherein each vehicle in the position range threshold value forms the vehicle group, and the vehicle group comprises the request vehicle;
And acquiring the vehicle at the first position according to the position information of each vehicle.
In this embodiment, the position information of the requesting vehicle may come from a position signal acquired by the vehicle's own GPS or from high-precision navigation data.
In this embodiment, each vehicle in the same network refers to a vehicle connected to the cloud end of the present application.
In the present embodiment, setting the position range threshold according to the requesting vehicle includes:
The threshold is set based on the position information of the requesting vehicle. For example, if the position of the requesting vehicle is given as longitude and latitude (say 30 degrees east longitude and 23 degrees north latitude), then a circle of 500-meter radius centered on that point defines the position range threshold.
In this embodiment, each vehicle connected to the cloud periodically transmits its position information to the cloud for as long as the connection lasts, and the running direction of the vehicle can be determined from several successive position reports.
In this embodiment, the present application is mainly used for warnings between vehicles traveling in the same direction; therefore, the vehicles with the same running direction as the requesting vehicle are identified first, and vehicles with a different running direction (for example, vehicles traveling opposite to the requesting vehicle) are disregarded.
After the running direction of each vehicle is obtained, the vehicles that are within the position range threshold and have the same running direction as the requesting vehicle can be determined. For example, suppose 100 vehicles are connected to the cloud and vehicle A is the requesting vehicle; from the position information sent to the cloud, 49 of the other 99 vehicles are found to be traveling opposite to vehicle A, so the remaining 50 vehicles form the vehicle group with the same running direction.
Acquiring, according to the position range threshold and the vehicle position information of each vehicle, the vehicles that are within the position range threshold from the vehicle group with the same running direction, where the vehicles within the threshold form the vehicle group, includes:
The position range threshold can be obtained from the position of vehicle A, and from it the vehicles inside the range can be determined; for example, among the 50 vehicles in the same-direction group, the vehicles within the range threshold are B, C and D, so A, B, C and D constitute the vehicle group.
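The grouping logic just described can be illustrated with a short Python sketch. Everything concrete in it is an assumption rather than something the patent fixes: the 500-meter radius, deriving the running direction from the two most recent position reports, the 30-degree heading tolerance, and picking the vehicle at the head by projecting each position onto the requesting vehicle's bearing. All function and field names are hypothetical.

```python
import math

RADIUS_M = 500.0  # assumed position range threshold; the patent leaves the value open

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(prev, curr):
    """Travel direction (compass bearing) from two successive (lat, lon) position reports."""
    p1, p2 = math.radians(prev[0]), math.radians(curr[0])
    dl = math.radians(curr[1] - prev[1])
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def build_vehicle_group(request_vehicle, fleet, max_heading_diff=30.0):
    """fleet: dict of vehicle_id -> list of (lat, lon) fixes sent to the cloud, oldest first."""
    req_track = fleet[request_vehicle]
    req_dir = bearing_deg(req_track[-2], req_track[-1])
    group = [request_vehicle]
    for vid, track in fleet.items():
        if vid == request_vehicle or len(track) < 2:
            continue
        heading_diff = abs((bearing_deg(track[-2], track[-1]) - req_dir + 180) % 360 - 180)
        in_range = haversine_m(*req_track[-1], *track[-1]) <= RADIUS_M
        if heading_diff <= max_heading_diff and in_range:
            group.append(vid)

    def forward_offset(vid):
        # how far ahead of the requesting vehicle this group member is, along the travel direction
        lat, lon = fleet[vid][-1]
        d = haversine_m(*req_track[-1], lat, lon)
        b = bearing_deg(req_track[-1], (lat, lon)) if d > 0 else req_dir
        return d * math.cos(math.radians(b - req_dir))

    # the vehicle at the head ("first vehicle") is the group member farthest ahead
    first_vehicle = max(group, key=forward_offset)
    return group, first_vehicle
```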
In this embodiment, acquiring the lane in which the first vehicle is located according to the first running image group includes:
extracting image characteristics of any one or more frames of first driving images in the first driving image group;
Based on the image features, extracting lane information to obtain corresponding lane line features and road surface features;
performing feature fusion based on the image features, the lane line features and the road surface features to obtain road fusion features;
And acquiring a lane where the first vehicle is currently located according to the road fusion characteristics.
In this embodiment, the lane determination may be performed using a single frame of driving image or multiple frames of driving images; using multiple frames makes the determination more accurate and also makes it possible to determine whether the vehicle has changed lanes.
In this embodiment, whether one frame or multiple frames are used, the frame that is most recent in time, that is, the last driving image transmitted to the cloud, should be included whenever possible.
In the present embodiment, the driving images may be acquired by a front-facing camera of the vehicle.
In this embodiment, extracting lane information based on image features, and obtaining corresponding lane line features and road surface features includes:
performing deconvolution operation based on the image features to generate a road image feature map with the same size corresponding to the current road image;
carrying out semantic segmentation processing based on the road image feature map to generate a corresponding pixel point category prediction result;
Performing instance segmentation processing based on the road image feature map to generate a corresponding lane pixel attribute prediction result;
performing feature clustering according to the pixel point category prediction result and the lane pixel attribute prediction result to obtain clustering clusters corresponding to different pixel point categories;
And determining and obtaining lane line characteristics and road surface characteristics corresponding to the road image characteristic map according to the clustering clusters corresponding to different pixel point categories.
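As an illustration of the clustering step at the end of this list, the sketch below takes the semantic-segmentation class map and the per-pixel lane attribute predictions as given and groups lane-line pixels into per-lane clusters. The class indices, the embedding layout and the use of DBSCAN for the feature clustering are assumptions; the patent does not name a clustering algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # the clustering backend is an assumption

def extract_lane_and_road_features(class_map, lane_attribute_map):
    """class_map: (H, W) pixel-category prediction (assumed: 0=background, 1=lane line, 2=road surface).
    lane_attribute_map: (H, W, D) per-pixel lane attribute prediction from the instance branch.
    Returns one pixel cluster per lane line plus the road-surface mask."""
    lane_mask = class_map == 1
    road_surface_mask = class_map == 2
    lane_pixels = np.argwhere(lane_mask)             # (N, 2) row/col coordinates of lane-line pixels
    lane_embeddings = lane_attribute_map[lane_mask]  # (N, D) features used for clustering
    # feature clustering: pixels belonging to the same lane line fall into the same cluster
    labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(lane_embeddings)
    lane_line_clusters = [lane_pixels[labels == k] for k in sorted(set(labels)) if k != -1]
    return lane_line_clusters, road_surface_mask
```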
In the present embodiment, a boundary distance feature may also be included, that is, the distance between the vehicle-mounted camera of the current vehicle and the current road boundary.
In that case, when the road image feature, the lane line feature and the road surface feature are integrated, the road fusion feature is obtained by fusing the road image features, the lane line features, the road surface features and the boundary distance features.
In this embodiment, feature fusion is performed based on road image features, lane line features and road surface features, and obtaining road fusion features includes:
according to dimension data corresponding to the lane line features and the road surface features, performing first feature dimension mapping on the image features to obtain first splicing features;
splicing the first splicing characteristic, the lane line characteristic and the road surface characteristic to obtain a second splicing characteristic;
and performing second characteristic dimension mapping on the second splicing characteristic according to a preset dimension mapping function to obtain a third splicing characteristic, wherein the third splicing characteristic is a road fusion characteristic.
In the embodiment with the boundary distance feature, the boundary distance feature and the third splicing feature are spliced to obtain the road fusion feature.
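A minimal sketch of the two dimension mappings and the splicing steps above, written with PyTorch linear layers. The feature dimensions, the choice of linear layers for the mappings, and the way the optional boundary-distance feature is appended are all assumptions, not something the patent prescribes.

```python
import torch
import torch.nn as nn

class RoadFeatureFusion(nn.Module):
    """Fuses image, lane-line and road-surface features (optionally plus a boundary-distance feature)."""

    def __init__(self, img_dim=512, lane_dim=128, road_dim=128, fused_dim=256):
        super().__init__()
        # first feature-dimension mapping: project image features to the lane/road dimensionality
        self.first_mapping = nn.Linear(img_dim, lane_dim + road_dim)
        # second feature-dimension mapping applied to the concatenated (spliced) features
        self.second_mapping = nn.Linear(2 * (lane_dim + road_dim), fused_dim)

    def forward(self, img_feat, lane_feat, road_feat, boundary_feat=None):
        first_splice = self.first_mapping(img_feat)                              # first splicing feature
        second_splice = torch.cat([first_splice, lane_feat, road_feat], dim=-1)  # second splicing feature
        fused = self.second_mapping(second_splice)                               # third splicing feature
        if boundary_feat is not None:                                            # boundary-distance variant
            fused = torch.cat([fused, boundary_feat], dim=-1)
        return fused                                                             # road fusion feature

# usage with dummy single-sample features
fusion = RoadFeatureFusion()
road_fusion_feature = fusion(torch.randn(1, 512), torch.randn(1, 128), torch.randn(1, 128))  # (1, 256)
```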
In this embodiment, the obtaining, according to the road fusion feature, the lane in which the vehicle located at the first position is currently located includes:
Performing feature transformation on the road fusion features according to the convolutional neural network to obtain feature vectors of a first preset dimension;
Performing vector transposition on the feature vector of the first preset dimension to obtain a corresponding feature vector of the second preset dimension; the feature vector of the second preset dimension comprises a first feature vector corresponding to the first boundary;
Determining and obtaining first confidence coefficient data corresponding to different lane positions according to the first feature vector;
and carrying out lane position prediction based on the first confidence data, and generating and obtaining the lane position of the current vehicle taking the first boundary as a reference.
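A sketch of the lane-position readout follows. It assumes (the patent does not fix this) that the road fusion feature is laid out with one column per candidate road boundary, so that the first column corresponds to the first boundary; the 1-D convolution, the softmax confidence and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class LanePositionHead(nn.Module):
    """Predicts the lane the first vehicle occupies, counted from the first (e.g. leftmost) road boundary."""

    def __init__(self, fused_channels=256, first_dim=64, max_lanes=6):
        super().__init__()
        # feature transformation -> feature vectors of the first preset dimension
        self.transform = nn.Conv1d(fused_channels, first_dim, kernel_size=1)
        # maps one boundary's feature vector to confidences over candidate lane positions
        self.confidence = nn.Linear(first_dim, max_lanes)

    def forward(self, road_fusion_feature):
        # road_fusion_feature: (batch, fused_channels, num_boundaries)
        feats = self.transform(road_fusion_feature)     # (batch, first_dim, num_boundaries)
        feats = feats.transpose(1, 2)                   # vector transposition -> second preset dimension layout
        first_boundary_vec = feats[:, 0, :]             # first feature vector, for the first boundary
        conf = torch.softmax(self.confidence(first_boundary_vec), dim=-1)  # first confidence data
        return conf.argmax(dim=-1), conf                # predicted lane position and its confidences

head = LanePositionHead()
lane_idx, confidences = head(torch.randn(2, 256, 4))    # batch of two samples, four candidate boundaries
```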
In this embodiment, determining, according to the first driving image group, whether a dangerous object exists on the lane where the first vehicle is located includes:
acquiring an image of a frame with the latest shooting time and at least one frame before the frame in a first driving image group;
Judging whether an object exists in the image according to the image of the frame with the latest shooting time in the first driving image group and at least one frame before the frame, if so, acquiring a trained image classifier;
extracting image characteristics of any one or more frames of images in the first driving image group;
Inputting the image features into the image classifier so as to acquire object classification information;
and judging whether the object is a dangerous object or not according to the object classification information.
In this embodiment, according to the image of the frame with the latest shooting time and at least one frame before the frame in the first driving image group (for convenience of description, the frame with the latest shooting time is called an image F, and the other frame is called an image H), the following method is adopted to determine whether there is an object in the image:
Determining the same key point in the image F and the image H through image recognition, and respectively determining the pixel point coordinates of the key point in the image F and the image H;
Acquiring vehicle displacement corresponding to a time period between an image F and an image H (which can be determined through the speed of the vehicle and the time interval between two frames), and calculating and determining depth coordinate information of a key point mapped to a world coordinate system according to the vehicle displacement and pixel point coordinates of the key point in the image F and the image H;
According to the pixel point coordinates of the key points in the current frame image and the geometric prior information corresponding to the vehicle-mounted camera device, calculating and determining transverse coordinate information and longitudinal coordinate information of the key points mapped to the world coordinate system;
World coordinate information of key points is determined according to the depth coordinate information, the transverse coordinate information and the longitudinal coordinate information, and a depth information density spectrum of a target scene is determined based on the world coordinate information corresponding to a plurality of key points in the image F;
Object detection is carried out on the image F, object contour information of a target object in a target scene is determined, and relative position information of the target object is determined by combining the depth information density spectrum.
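The depth recovery from images F and H plus the known vehicle displacement can be sketched as below, assuming a pinhole camera translating straight ahead with known focal lengths and principal point (the "geometric prior information"). Under that assumption similar triangles give Z = d * r_prev / (r_curr - r_prev), where r is a keypoint's pixel distance from the principal point; the numbers in the example and the interpretation of "longitudinal" as the second image axis are illustrative.

```python
import numpy as np

def keypoint_world_coords(p_prev, p_curr, vehicle_displacement_m, fx, fy, cx, cy):
    """World coordinates of a static keypoint seen in image H (earlier) and image F (latest).

    p_prev, p_curr: (u, v) pixel coordinates of the same keypoint in H and F.
    vehicle_displacement_m: forward camera motion between the two frames (speed * frame interval).
    """
    r_prev = np.hypot(p_prev[0] - cx, p_prev[1] - cy)
    r_curr = np.hypot(p_curr[0] - cx, p_curr[1] - cy)
    if r_curr <= r_prev:        # keypoint is not diverging from the image center: depth not recoverable
        return None
    z = vehicle_displacement_m * r_prev / (r_curr - r_prev)  # depth coordinate
    x = (p_curr[0] - cx) * z / fx                             # transverse coordinate
    y = (p_curr[1] - cy) * z / fy                             # vertical coordinate (the patent's "longitudinal")
    return np.array([x, y, z])

# example: vehicle at 25 m/s, frames 0.1 s apart -> 2.5 m displacement between H and F
point = keypoint_world_coords((700, 420), (760, 450), 25.0 * 0.1,
                              fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```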
When it is determined that an object is present, a trained image classifier is acquired and image features are extracted from any one or more of the images in the first driving image group (it is understood that features may be extracted directly from the image region given by the object contour information).
The image features are input into an image classifier to obtain classification tags including a vehicle tag, a pedestrian tag, and an obstacle tag.
When the tag is a vehicle tag, it indicates that there is a vehicle in front of the first vehicle.
When the tag is a pedestrian tag, it indicates that there is a pedestrian in front of the first vehicle.
When the tag is an obstacle tag, it indicates that there is an obstacle in front of the first vehicle.
It will be appreciated that multiple tags may be owned simultaneously, as vehicles, pedestrians, and obstacles may be present simultaneously in front of a first vehicle.
In this embodiment, when the object classification information is a vehicle (here meaning that only a vehicle is present, with no pedestrians or obstacles), the determining whether the object is a dangerous object according to the object classification information includes:
Acquiring, for each frame of first driving image, the number of pixels occupied by the object in that frame;
And judging, according to the change in the number of pixels, whether the first vehicle gradually approaches the object, and if so, judging that the object is a dangerous object.
For example, on a 640 x 640 canvas, as an object gets closer, it must occupy more and more pixels, for example, 100 x 100 pixels at the first point in time, and if it gets closer gradually, it may become 100 x 200, 200 x 200 or more.
Therefore, whether the first vehicle is gradually approaching can be judged from the pixel counts, and if so, the object is judged to be dangerous.
This is because gradual approach may indicate that the vehicle in front of the leading vehicle has slowed down or stopped; whatever obstacle-avoidance maneuver the leading vehicle performs, the following vehicles should be informed of the current situation.
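The pixel-count test can be written in a few lines; requiring the count to grow monotonically with a 10% margin is an assumption, since the patent only asks whether the object occupies progressively more pixels from frame to frame.

```python
def is_approaching(pixel_counts, growth_margin=1.10):
    """pixel_counts: pixels occupied by the object in each first driving image, oldest frame first.

    Returns True when the object keeps growing in the frame, i.e. the first vehicle is
    closing on it (e.g. 100*100 -> 100*200 -> 200*200 pixels on a 640*640 canvas).
    """
    if len(pixel_counts) < 2:
        return False
    return all(later >= earlier * growth_margin
               for earlier, later in zip(pixel_counts, pixel_counts[1:]))

# the example from the description: the object grows from 100x100 to 200x200 pixels
dangerous = is_approaching([100 * 100, 100 * 200, 200 * 200])   # True -> the object is flagged as dangerous
```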
In this embodiment, when the object classification information is a pedestrian (when there is both a vehicle and a pedestrian, the object classification information is considered to be a pedestrian), the determining whether the object is a dangerous object according to the object classification information includes:
Acquiring current position information of a first vehicle;
And judging, according to the current position information of the first vehicle, whether the first vehicle is located on an expressway, and if so, judging that the pedestrian is a dangerous object.
It will be appreciated that pedestrians should not, in theory, be present on an expressway; if a person is found on the expressway, something unusual has happened, so the pedestrian is directly judged to be a dangerous object.
In this embodiment, when the object classification information is a pedestrian, the determining, according to the object classification information, whether the object is a dangerous object further includes:
judging, according to the current position information of the first vehicle, whether the first vehicle is located on an expressway, and if not, acquiring, for each frame of first driving image, the number of pixels occupied by the object in that frame;
And judging, according to the change in the number of pixels, whether the first vehicle gradually approaches the object, and if so, judging that the object is a dangerous object.
In this way, whether the first vehicle is gradually approaching the pedestrian can again be judged from the pixel counts, and if so, the pedestrian is judged to be a dangerous object.
In this embodiment, when the object classification information is an obstacle, the determining whether the object is a dangerous object according to the object classification information includes:
acquiring a trained obstacle classifier;
Extracting image features in at least five frames of images in the first driving image group;
respectively inputting the image features in each frame of image into the trained obstacle classifier so as to obtain classification labels, wherein the image features in one frame of image correspond to one classification label;
Judging whether the occurrence frequency of one classification label exceeds a preset threshold value, if so, acquiring a classification label database, wherein the classification label database comprises at least one preset classification label and a dangerous class corresponding to each classification label, and the dangerous class comprises dangerous objects and non-dangerous objects;
and acquiring the risk classification corresponding to the preset classification label which is the same as the classification label exceeding the preset threshold value, and judging the risk as a dangerous object if the risk classification is the dangerous object.
When the image classifier labels an object as an obstacle, it may be a genuine obstacle (such as a roadblock or another object that should not appear in the middle of a road and could cause an accident) or something that generally does not affect driving, such as a plastic bag. Further classification by the obstacle classifier is therefore needed, and its output is then looked up in the classification tag database.
For example, a classification tag database may contain two preset classification tags, a roadblock and a plastic bag, where the danger class corresponding to the roadblock is "dangerous object" and the danger class corresponding to the plastic bag is "non-dangerous object".
When the obstacle classifier judges the object to be a roadblock, the object is considered a dangerous object; when it judges the object to be a plastic bag, the object is considered non-dangerous.
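A minimal sketch of the per-frame voting and the database lookup described in this embodiment. The obstacle classifier is treated as an opaque callable, and the vote threshold of three out of five frames, the tag names and the database layout are all assumptions.

```python
from collections import Counter

# hypothetical classification tag database: preset classification tag -> danger class
TAG_DATABASE = {
    "roadblock": "dangerous",
    "plastic_bag": "non-dangerous",
}

def obstacle_is_dangerous(frame_features, obstacle_classifier, min_votes=3):
    """frame_features: image features from at least five frames of the first driving image group.
    obstacle_classifier: callable mapping one frame's image features to a classification tag."""
    tags = [obstacle_classifier(f) for f in frame_features]   # one classification tag per frame
    tag, votes = Counter(tags).most_common(1)[0]
    if votes < min_votes:                                     # no tag exceeds the preset threshold
        return False
    return TAG_DATABASE.get(tag) == "dangerous"               # query the classification tag database
```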
It will be appreciated that in one embodiment, when a vehicle, a pedestrian and an obstacle are present at the same time, that is, when the image classifier finds all three in one image, each of the three is evaluated by its corresponding method described above, and if any one of them is judged to be a dangerous object, a dangerous object is finally determined to exist.
By adopting the vehicle group driving early warning method based on image recognition, the vehicles behind the first vehicle learn of the road condition ahead of the first vehicle regardless of whether the first vehicle is performing emergency obstacle avoidance. This is particularly useful in high-speed driving, where every vehicle is traveling fast: once something happens in front of the first vehicle, the first vehicle often performs an emergency avoidance maneuver (for example a lane change), and the vehicles behind, not knowing what is happening ahead, may be unable to respond, brake or change lanes in time.
In the present embodiment, the first driving image may be rendered by the in-vehicle system of each vehicle and shown on the vehicle's display screen.
The application also provides a vehicle group driving early warning device based on image recognition, which comprises a vehicle driving front image group acquisition module, a lane acquisition module, a dangerous object judgment module and a sending module, wherein,
The vehicle driving front image group acquisition module is used for acquiring a vehicle driving front image group shot by a vehicle at the head in a preset time period, wherein the vehicle driving front image group is called a first driving image group, the first driving image group comprises a plurality of frames of first driving images, and the vehicle at the head is called a first vehicle;
the lane acquisition module is used for acquiring a lane where a first vehicle is located according to the first driving image group;
The dangerous object judging module is used for judging whether dangerous objects exist on the lane where the first vehicle is located according to the first driving image group;
the sending module is used for transmitting the license plate number of the first vehicle to the other vehicles in the vehicle group which are in the same network and in the same lane as the first vehicle, and for transmitting the first driving image to those same vehicles.
The application also provides a vehicle group driving early warning system based on image recognition, which comprises a cloud end and a vehicle end, wherein,
The cloud end comprises the vehicle group driving early warning device based on image recognition;
the number of the vehicle ends is multiple, one vehicle end is arranged in one vehicle, and each vehicle end is communicated with the cloud; wherein,
Any one of the vehicle ends sends request information and vehicle position information to the cloud end;
The cloud acquires the first driving image and the license plate number of the vehicle located at the head through the vehicle group driving early warning method based on image recognition, transmits the license plate number of that vehicle to the other vehicles in the vehicle group that are in the same network and in the same lane as it, and transmits the first driving image to those same vehicles, so that the in-vehicle system of each vehicle that receives the first driving image displays it;
And the vehicle end which acquires the first driving image displays the first driving image on a vehicle display screen.
In this embodiment, a communication protocol is agreed between the cloud and the vehicle ends, and a network server based on socket communication is established so that information can be exchanged between the cloud and the vehicle ends.
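The patent states only that a socket-based network server and an agreed protocol link the cloud and the vehicle ends. The length-prefixed JSON-plus-JPEG framing below is one possible illustration of such a protocol; every field name, address and port is hypothetical.

```python
import json
import socket
import struct

def send_warning(conn, plate_number, image_bytes):
    """Push the first vehicle's license plate number and its latest first driving image to one vehicle end."""
    header = json.dumps({"type": "danger_warning", "plate": plate_number}).encode("utf-8")
    # simple framing: two 4-byte big-endian lengths, then the JSON header, then the JPEG bytes
    conn.sendall(struct.pack("!II", len(header), len(image_bytes)) + header + image_bytes)

def broadcast_to_group(group_addresses, plate_number, image_bytes):
    """Send the warning to every other vehicle in the group (same network, same lane)."""
    for host, port in group_addresses:
        with socket.create_connection((host, port), timeout=2.0) as conn:
            send_warning(conn, plate_number, image_bytes)
```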
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and will not be repeated here.
The application also provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the vehicle group driving early warning method based on image recognition when executing the computer program.
The application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program can realize the vehicle group driving early warning method based on image recognition when being executed by a processor.
Fig. 2 is an exemplary structural diagram of an electronic device capable of implementing the image recognition-based vehicle group traveling warning method according to one embodiment of the present application.
As shown in fig. 2, the electronic device includes an input device 501, an input interface 502, a central processor 503, a memory 504, an output interface 505, and an output device 506. The input interface 502, the central processor 503, the memory 504, and the output interface 505 are connected to one another through a bus 507, and the input device 501 and the output device 506 are connected to the bus 507 through the input interface 502 and the output interface 505, respectively, and thus to the other components of the electronic device. Specifically, the input device 501 receives input information from the outside and transmits it to the central processor 503 through the input interface 502; the central processor 503 processes the input information based on computer-executable instructions stored in the memory 504 to generate output information, stores the output information temporarily or permanently in the memory 504, and then transmits it to the output device 506 through the output interface 505; the output device 506 outputs the output information to the outside of the electronic device for use by the user.
That is, the electronic device shown in fig. 2 may also be implemented to include: a memory storing computer-executable instructions; and one or more processors that, when executing the computer-executable instructions, implement the image recognition-based vehicle group travel warning method described in connection with fig. 1.
In one embodiment, the electronic device shown in FIG. 2 may be implemented to include: a memory 504 configured to store executable program code; the one or more processors 503 are configured to execute executable program codes stored in the memory 504 to perform the vehicle group travel warning method based on image recognition in the above-described embodiment.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and the media may be implemented in any method or technology for storage of information. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps. A plurality of units, modules or apparatuses recited in the device may also be implemented by a single unit or apparatus through software or hardware.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The processor referred to in this embodiment may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be used to store computer programs and/or modules, and the processor may perform various functions of the apparatus/terminal device by executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the handset. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or other volatile solid-state storage device.
While the invention has been described in detail in the foregoing general description and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (9)

1. The vehicle group driving early warning method based on the image recognition is characterized by comprising the following steps of:
Acquiring a vehicle driving front image group shot by a vehicle at the head in a preset time period, wherein the vehicle driving front image group is called a first driving image group, the first driving image group comprises a plurality of frames of first driving images, and the vehicle at the head is called a first vehicle;
acquiring a lane where a first vehicle is located according to the first driving image group;
Judging whether dangerous objects exist on a lane where the first vehicle is located according to the first driving image group, if so, transmitting the license plate number of the first vehicle to other vehicles in a vehicle group which is located in the same network as the first vehicle and located in the same lane; transmitting the first driving image to other vehicles in a vehicle group which is in the same network and in the same lane with the first vehicle;
before the vehicle at the head position is acquired to capture the image group in front of the vehicle running in a preset time period, the vehicle group running early warning method based on image recognition comprises the following steps:
acquiring request information sent by a request vehicle and position information of the request vehicle;
respectively acquiring vehicle position information of each vehicle which is positioned in the same network with the request vehicle;
Setting a position range threshold according to the request vehicle;
Acquiring the running direction of each vehicle according to the multiple times of position information transmitted by each vehicle;
Acquiring vehicles with the same running direction as the request vehicle, wherein each vehicle with the same running direction as the request vehicle forms a vehicle group with the same running direction;
acquiring vehicles in the position range threshold value in the vehicle group with the same running direction according to the position range threshold value and the vehicle position information of each vehicle, wherein each vehicle in the position range threshold value forms the vehicle group, and the vehicle group comprises the request vehicle;
And acquiring the vehicle at the first position according to the position information of each vehicle.
2. The image recognition-based vehicle group traveling warning method according to claim 1, wherein the acquiring the lane in which the first vehicle is located according to the first traveling image group includes:
extracting image characteristics of any one or more frames of first driving images in the first driving image group;
Carrying out lane information extraction based on the image features to obtain corresponding lane line features and road surface features;
performing feature fusion based on the image features, the lane line features and the road surface features to obtain road fusion features;
And acquiring a lane where the first vehicle is currently located according to the road fusion characteristics.
3. The image recognition-based vehicle group traveling warning method according to claim 2, wherein the determining whether the first vehicle has a dangerous object on the lane according to the first traveling image group includes:
acquiring an image of a frame with the latest shooting time and at least one frame before the frame in a first driving image group;
Judging whether an object exists in the image according to the image of the frame with the latest shooting time in the first driving image group and at least one frame before the frame, if so, acquiring a trained image classifier;
extracting image characteristics of any one or more frames of images in the first driving image group;
Inputting the image features into the image classifier so as to acquire object classification information;
and judging whether the object is a dangerous object or not according to the object classification information.
4. The image recognition-based vehicle group traveling warning method according to claim 3, wherein the object classification information includes vehicles, pedestrians, and obstacles;
When the object classification information is a vehicle, the determining the object according to the object classification information includes:
Respectively acquiring the number of pixel points occupied by an object in a first driving image of each frame in a corresponding first driving image;
and judging whether the first vehicle gradually approaches the object according to the number of the pixel points, and if so, judging that the object is a dangerous object.
5. The image recognition-based vehicle group traveling warning method according to claim 4, wherein when the object classification information is a pedestrian, the determining the object according to the object classification information includes:
Acquiring current position information of a first vehicle;
And judging whether the first vehicle is positioned on the expressway or not according to the current position information of the first vehicle, and if so, judging that the object is a dangerous object.
6. The image recognition-based vehicle group traveling warning method according to claim 5, wherein when the object classification information is a pedestrian, the determining the object according to the object classification information, determining whether it is a dangerous object, further includes:
judging whether the vehicle is positioned on the expressway or not according to the current position information of the first vehicle, if not, respectively acquiring the number of pixels occupied by the object in each frame of first driving image in the corresponding first driving image;
And judging whether the vehicle at the first position gradually approaches the object according to the number of the pixel points, and if so, judging the object to be a dangerous object.
7. The image recognition-based vehicle group traveling warning method according to claim 6, wherein when the object classification information is an obstacle, the determining the object according to the object classification information includes:
acquiring a trained obstacle classifier;
Extracting image features in at least five frames of images in the first driving image group;
respectively inputting the image features in each frame of image into the trained obstacle classifier so as to obtain classification labels, wherein the image features in one frame of image correspond to one classification label;
Judging whether the occurrence frequency of one classification label exceeds a preset threshold value, if so, acquiring a classification label database, wherein the classification label database comprises at least one preset classification label and a dangerous class corresponding to each classification label, and the dangerous class comprises dangerous objects and non-dangerous objects;
and acquiring the risk classification corresponding to the preset classification label which is the same as the classification label exceeding the preset threshold value, and judging the risk as a dangerous object if the risk classification is the dangerous object.
8. A vehicle group driving early warning device based on image recognition, comprising:
a vehicle driving front image group acquisition module, configured to acquire a vehicle driving front image group captured within a preset time period by the vehicle at the head, the vehicle driving front image group being referred to as a first driving image group, the first driving image group comprising a plurality of frames of first driving images, and the vehicle at the head being referred to as a first vehicle;
a lane acquisition module, configured to acquire, according to the first driving image group, the lane where the first vehicle is located;
a dangerous object judging module, configured to judge, according to the first driving image group, whether a dangerous object exists on the lane where the first vehicle is located; and
a transmitting module, configured to transmit the license plate number of the first vehicle to other vehicles in the vehicle group that are in the same network and the same lane as the first vehicle,
and to transmit the first driving image to other vehicles in the vehicle group that are in the same network and the same lane as the first vehicle;
wherein, before the vehicle driving front image group captured within the preset time period by the vehicle at the head is acquired, the vehicle group driving early warning device based on image recognition is further configured to:
acquire request information sent by a request vehicle and position information of the request vehicle;
acquire the vehicle position information of each vehicle located in the same network as the request vehicle;
set a position range threshold according to the request vehicle;
acquire the driving direction of each vehicle according to the position information transmitted multiple times by each vehicle;
acquire the vehicles whose driving direction is the same as that of the request vehicle, the vehicles whose driving direction is the same as that of the request vehicle forming a same-direction vehicle group;
acquire, according to the position range threshold and the vehicle position information of each vehicle, the vehicles within the position range threshold in the same-direction vehicle group, the vehicles within the position range threshold forming the vehicle group, the vehicle group comprising the request vehicle;
and acquire the vehicle at the head according to the position information of each vehicle.
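The group-forming steps above lend themselves to a short sketch: vehicles on the same network are filtered by driving direction and by a position range threshold around the request vehicle, and the vehicle farthest ahead along the shared heading is taken as the first vehicle. The field names, the 500 m range and the 20-degree heading tolerance are assumptions for illustration only.

```python
# Hypothetical sketch of the pre-processing in claim 8 (group formation and
# selection of the vehicle at the head).
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VehicleState:
    plate: str
    positions: List[Tuple[float, float]]   # position reports over time (x, y in metres)

def heading(v: VehicleState) -> float:
    (x0, y0), (x1, y1) = v.positions[-2], v.positions[-1]
    return math.atan2(y1 - y0, x1 - x0)

def angle_diff(a: float, b: float) -> float:
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def form_group(request: VehicleState, others: List[VehicleState],
               range_threshold: float = 500.0,
               heading_tolerance: float = math.radians(20)) -> List[VehicleState]:
    same_direction = [v for v in others
                      if angle_diff(heading(v), heading(request)) <= heading_tolerance]
    in_range = [v for v in same_direction
                if math.dist(v.positions[-1], request.positions[-1]) <= range_threshold]
    return [request] + in_range

def first_vehicle(group: List[VehicleState]) -> VehicleState:
    # The vehicle projected farthest along the request vehicle's heading leads the group.
    direction = heading(group[0])
    ux, uy = math.cos(direction), math.sin(direction)
    return max(group, key=lambda v: v.positions[-1][0] * ux + v.positions[-1][1] * uy)

# Example: B requests; A is 120 m ahead in the same direction, so A leads the group.
a = VehicleState("A123", [(0, 0), (50, 0)])
b = VehicleState("B456", [(-120, 2), (-70, 2)])
print(first_vehicle(form_group(b, [a])).plate)   # "A123"
```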
9. A vehicle group driving early warning system based on image recognition, comprising:
a cloud, the cloud comprising the vehicle group driving early warning device based on image recognition according to claim 8; and
a plurality of vehicle ends, one vehicle end being arranged in one vehicle, each vehicle end communicating with the cloud; wherein
any one of the vehicle ends sends request information and vehicle position information to the cloud;
the cloud obtains the first driving image and the license plate number of the first vehicle through the vehicle group driving early warning method based on image recognition according to any one of claims 1 to 7, transmits the license plate number of the first vehicle to other vehicles in the vehicle group that are in the same network and the same lane as the first vehicle, and transmits the first driving image to other vehicles in the vehicle group that are in the same network and the same lane as the first vehicle, so that the in-vehicle system of each vehicle that receives the first driving image displays the first driving image;
and the vehicle end that receives the first driving image displays the first driving image on the vehicle's display screen.
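To make the cloud-to-vehicle-end interaction of claim 9 concrete, here is a minimal sketch of the message flow; all class and method names are illustrative assumptions, not the patent's actual interfaces.

```python
# Hypothetical sketch of the claim 9 flow: the cloud pushes the first vehicle's
# license plate number and front-view images to the other group members, and each
# receiving vehicle end shows the images on its display screen.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Warning:
    first_vehicle_plate: str
    first_driving_images: List[bytes]

@dataclass
class Cloud:
    inbox: Dict[str, List[Warning]] = field(default_factory=dict)

    def broadcast(self, group_plates: List[str], warning: Warning) -> None:
        # Deliver to every group member except the first vehicle itself.
        for plate in group_plates:
            if plate != warning.first_vehicle_plate:
                self.inbox.setdefault(plate, []).append(warning)

@dataclass
class VehicleEnd:
    plate: str

    def poll_and_display(self, cloud: Cloud) -> None:
        for w in cloud.inbox.pop(self.plate, []):
            print(f"{self.plate}: displaying {len(w.first_driving_images)} frame(s) "
                  f"from lead vehicle {w.first_vehicle_plate}")

cloud = Cloud()
cloud.broadcast(["A123", "B456", "C789"],
                Warning("A123", [b"frame-1", b"frame-2"]))
VehicleEnd("C789").poll_and_display(cloud)
```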
CN202410308683.0A 2024-03-19 Vehicle group driving early warning method, device and system based on image recognition Active CN117912289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410308683.0A CN117912289B (en) 2024-03-19 Vehicle group driving early warning method, device and system based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410308683.0A CN117912289B (en) 2024-03-19 Vehicle group driving early warning method, device and system based on image recognition

Publications (2)

Publication Number Publication Date
CN117912289A CN117912289A (en) 2024-04-19
CN117912289B true CN117912289B (en) 2024-06-28

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3110752A1 (en) * 2020-05-25 2021-11-26 Psa Automobiles Sa Decision making of an avoidance strategy in a group of vehicles
WO2023020004A1 (en) * 2021-08-16 2023-02-23 长安大学 Vehicle distance detection method and system, and device and medium
KR20230091210A (en) * 2021-12-15 2023-06-23 현대자동차주식회사 Advanced Driver Assistance System, and Vehicle having the same

Similar Documents

Publication Publication Date Title
US9180814B2 (en) Vehicle rear left and right side warning apparatus, vehicle rear left and right side warning method, and three-dimensional object detecting device
JP7040374B2 (en) Object detection device, vehicle control system, object detection method and computer program for object detection
EP4089659A1 (en) Map updating method, apparatus and device
CN105355039A (en) Road condition information processing method and equipment
CN111262903B (en) Server device and vehicle
JP7362733B2 (en) Automated crowdsourcing of road environment information
CN111319560B (en) Information processing system, program, and information processing method
JP4951481B2 (en) Road marking recognition device
CN113257001A (en) Vehicle speed limit monitoring method and device, electronic equipment and system
CN110784680B (en) Vehicle positioning method and device, vehicle and storage medium
CN112241963A (en) Lane line identification method and system based on vehicle-mounted video and electronic equipment
CN117912289B (en) Vehicle group driving early warning method, device and system based on image recognition
US20210004016A1 (en) U-turn control system for autonomous vehicle and method therefor
CN115762153A (en) Method and device for detecting backing up
CN117912289A (en) Vehicle group driving early warning method, device and system based on image recognition
CN115457486A (en) Two-stage-based truck detection method, electronic equipment and storage medium
CN110677491B (en) Method for estimating position of vehicle
US20220237926A1 (en) Travel management device, travel management method, and recording medium
CN113591673A (en) Method and device for recognizing traffic signs
CN114333414A (en) Parking yield detection device, parking yield detection system, and recording medium
CN114008682A (en) Method and system for identifying objects
US11590988B2 (en) Predictive turning assistant
JP7126629B1 (en) Information integration device, information integration method, and information integration program
US20220105958A1 (en) Autonomous driving apparatus and method for generating precise map
US20240223915A1 (en) Systems and methods for downsampling images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant