CN113743226B - Daytime front car light language recognition and early warning method and system - Google Patents

Daytime front car light language recognition and early warning method and system

Info

Publication number
CN113743226B
CN113743226B · CN202110897194A
Authority
CN
China
Prior art keywords
image, area, vehicle, lamp, language
Prior art date
Legal status
Active
Application number
CN202110897194.XA
Other languages
Chinese (zh)
Other versions
CN113743226A (en)
Inventor
黄妙华
李涵
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology (WUT)
Priority to CN202110897194.XA
Publication of CN113743226A
Application granted
Publication of CN113743226B
Legal status: Active


Classifications

    • G06F18/213 Pattern recognition; analysing; design or setup of recognition systems or techniques; feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06N3/045 Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods


Abstract

The invention discloses a daytime front-vehicle lamp-language recognition and early-warning method and system: a vehicle-lamp detection method combining deep learning with image processing, comprising an (R−G)×H×V lamp extraction method, a lamp-language recognition method using color features, a lamp-centroid-fusion particle-filter tracking method and a graded lamp-language early-warning method. The invention solves the problems that daytime running lamps are difficult to detect, lamp-language recognition is inaccurate and real-time performance is poor, and can give early warnings according to the fed-back lamp-language information.

Description

Daytime front car light language recognition and early warning method and system
Technical Field
The invention belongs to the technical field of automobile safety and relates to an automobile early-warning method and system, in particular to a machine-vision-based front-vehicle lamp-language recognition and early-warning method and system.
Background
With the rapid development of China's transportation industry, automobile ownership has risen year by year; while the wide use of automobiles brings convenience to people's lives, the frequency of traffic accidents has also risen year by year.
Environmental perception is a key technology of unmanned driving, and the accuracy of the acquired information affects the subsequent decision and control process. At present, environmental perception focuses mainly on the detection of vehicles, obstacles, lane lines, pedestrians and the like, while research on vehicle-lamp detection is scarce; yet vehicle lamps are an important feature of a vehicle's running state and play an important role in vehicle detection and information acquisition.
Existing research on vehicle-lamp detection mainly applies traditional image-processing methods to the full image, so the lamps are difficult to extract and the error rate is high. Moreover, traditional image-processing methods lack robustness: they are easily affected by illumination conditions and can hardly detect and extract lamps effectively during daytime driving.
At present, vehicle-lamp detection research is mainly applied to night-time driving assistance, and research on daytime lamp and lamp-language detection is scarce. The lamp language directly reflects the driver's operating intention and plays an important role in the subsequent decision and control of unmanned driving.
Disclosure of Invention
To solve the problems that daytime running lamps are difficult to detect, lamp-language recognition is inaccurate and real-time performance is poor, and to enable early warning according to the fed-back lamp-language information, the invention provides a front-vehicle lamp-language recognition and early-warning method and system based on a neural network and image processing.
The technical scheme adopted by the method of the invention is as follows: a daytime front-vehicle lamp-language recognition and early-warning method, comprising the following steps:
step 1: using an image acquired by a vehicle-mounted camera, detect the vehicle ahead in the daytime based on a neural network, extract a vehicle detection frame, and judge the relative position of the vehicle according to the vehicle detection frame;
inputting the acquired image into the optimized neural network for detection and extracting the vehicle detection frame; preprocessing the vehicle detection frame to a uniform size, with length and width (Img_h, Img_w) and the RGB three-channel standard, recorded as Img_1; recording the four-point coordinates of the rightmost, leftmost, uppermost and lowermost points of the vehicle detection frame on the acquired image; judging the azimuth of the vehicle relative to the vehicle-mounted camera according to the size relation between the vehicle detection frame and the acquired image;
the optimized neural network is based on a MobileNet-Yolo network in which the 3×3 convolutions of the enhanced-feature-extraction network PANet of MobileNet-Yolo are replaced by depthwise separable convolutions;
step 2: based on the RGB and HSV color spaces, extract the lamp-like region image Img_2 using (R−G)×H×V;
step 3: fill the hollow areas of the lamp-like region image, screen out regions smaller than a threshold, and pair the lamps to obtain the lamp detection frame Img_3; crop Img_3 and record the cropped image as Img_3C; record the four-point coordinates (C_r, C_l, C_t, C_b) of Img_3C;
step 4: perform color-channel conversion on Img_3C and recognize the lamp language, including the brake-lamp language and the turn-lamp language;
step 5: perform lamp-language pairing judgment according to the recognition result: lighting on both sides is judged to be a brake-lamp language, lighting on one side a turn-lamp language;
step 6: map the recognition result back into the original acquired image;
step 7: track the recognized turn lamp using centroid-fusion particle filtering;
step 8: perform graded early warning according to the fed-back lamp-language information.
The technical scheme adopted by the system of the invention is as follows: a daytime front-vehicle lamp-language recognition and early-warning system, comprising the following modules:
module 1, for detecting the vehicle ahead in the daytime based on a neural network using images acquired by the vehicle-mounted camera, extracting the vehicle detection frame and judging the relative position of the vehicle according to the detection frame;
inputting the acquired image into the optimized neural network for detection and extracting the vehicle detection frame; preprocessing the vehicle detection frame to a uniform size, with length and width (Img_h, Img_w) and the RGB three-channel standard, recorded as Img_1; recording the four-point coordinates of the rightmost, leftmost, uppermost and lowermost points of the vehicle detection frame on the acquired image; judging the azimuth of the vehicle relative to the vehicle-mounted camera according to the size relation between the vehicle detection frame and the acquired image;
the optimized neural network is based on a MobileNet-Yolo network in which the 3×3 convolutions of the enhanced-feature-extraction network PANet of MobileNet-Yolo are replaced by depthwise separable convolutions;
module 2, for extracting the lamp-like region image Img_2 using (R−G)×H×V based on the RGB and HSV color spaces;
module 3, for filling the hollow areas of the lamp-like region image, screening out regions smaller than a threshold, and pairing the lamps to obtain the lamp detection frame Img_3; cropping Img_3 and recording the cropped image as Img_3C; recording the four-point coordinates (C_r, C_l, C_t, C_b) of Img_3C;
module 4, for performing color-channel conversion on Img_3C and recognizing the lamp language, including the brake-lamp language and the turn-lamp language;
module 5, for lamp-language pairing judgment according to the recognition result: lighting on both sides is judged to be a brake-lamp language, lighting on one side a turn-lamp language;
module 6, for mapping the recognition result back into the original acquired image;
module 7, for tracking the recognized turn lamp using centroid-fusion particle filtering;
module 8, for performing graded early warning according to the fed-back lamp-language information.
The invention provides a lamp detection method and system combining deep learning and image processing, an (R−G)×H×V lamp extraction method, a method for recognizing the lamp language using color features, a lamp-centroid-fusion particle-filter tracking method, and a graded early-warning method based on the lamp language.
The invention has the beneficial effects that:
1. Detecting with a neural network greatly reduces the interference of environmental factors on lamp extraction, improves the accuracy of lamp detection, facilitates color-space conversion and reduces the computation of lamp extraction.
2. Judging the vehicle azimuth from the vehicle detection frame allows front-vehicle information to be extracted effectively on unstructured roads, realizing multi-scene early warning.
3. Color-channel mixing enhancement extracts daytime lamps effectively and with high accuracy; the lamp detection rate exceeds 90% in various environments.
4. The auxiliary detection method combining the high-mounted tail lamp with the brake lamps greatly improves the detection accuracy of the brake-lamp language and reduces false and missed detections caused by factors such as illumination.
5. Tracking the turn lamp with centroid-fusion particle filtering ensures tracking accuracy and reduces tracking failures caused by missed lamp-language detections.
Drawings
FIG. 1 is a lamp-detection flow chart of an embodiment of the present invention;
FIG. 2 is a brake-lamp detection flow chart of an embodiment of the present invention;
FIG. 3 is a turn-lamp detection flow chart of an embodiment of the present invention;
FIG. 4 is a fused-particle-filter tracking flow chart of an embodiment of the present invention;
FIG. 5 is a vehicle early-warning schematic diagram of an embodiment of the present invention;
FIG. 6 is a vehicle-azimuth determination result diagram of an embodiment of the present invention;
FIG. 7 is a lamp-recognition diagram of an embodiment of the present invention;
FIG. 8 is a particle-filter tracking-effect diagram of an embodiment of the present invention;
FIG. 9 is an optimized network structure diagram of an embodiment of the present invention.
Detailed Description
To facilitate understanding and implementation of the invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the drawings and embodiments; it should be understood that the embodiments described herein are for illustration and explanation only and are not intended to limit the invention.
Referring to FIG. 1, the daytime front-vehicle lamp-language recognition and early-warning method provided by the invention comprises the following steps:
step 1: using the image acquired by the vehicle-mounted camera, detect the vehicle ahead in the daytime based on the neural network, extract the vehicle detection frame, and judge the relative position of the vehicle according to the detection frame.
In this embodiment, the acquired image is input into the optimized neural network for detection and the vehicle detection frame is extracted; the vehicle detection frame is preprocessed to a uniform standard of 256 pixels in length and width and three channels, where the length and width are recorded as (Img_h, Img_w) and the RGB three-channel image as Img_1. The four-point coordinates of the rightmost, leftmost, uppermost and lowermost points of the vehicle detection frame on the acquired image are recorded, and the azimuth of the vehicle relative to the vehicle-mounted camera is judged from the size relation between the vehicle detection frame and the acquired image: according to the horizontal position of the detection frame within the image, the vehicle is judged to be directly ahead, ahead to the right or ahead to the left.
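The inequality expressions for the three azimuth zones were rendered as figures in the original text; a minimal sketch of the judgment, assuming a simple image-thirds rule on the frame centre (an assumption, not the patent's exact conditions), might look like this:

```python
# Hypothetical azimuth judgment for step 1; the thirds rule is an assumption.
def judge_azimuth(box, img_w):
    """box = (x_left, y_top, x_right, y_bottom) of the vehicle detection frame."""
    cx = (box[0] + box[2]) / 2.0        # horizontal centre of the frame
    if cx < img_w / 3.0:
        return "left-front"
    if cx > 2.0 * img_w / 3.0:
        return "right-front"
    return "directly ahead"
```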
Referring to FIG. 9, the optimized neural network of this embodiment adopts a MobileNet-Yolo network, which replaces the trunk feature-extraction network CSPDarkNet53 of Yolo v4 with a MobileNet network to reduce the computation. To reduce the computation further while ensuring detection accuracy, the invention replaces the 3×3 convolutions in the MobileNet-Yolo enhanced-feature-extraction network PANet with depthwise separable convolutions, which greatly reduces the parameter count (from 4 million to 1 million), reduces the dimensionality of the 13×13, 26×26 and 52×52 multi-scale feature layers, reduces the number of convolution-kernel channels and adds nonlinear excitation. Up-sampling enlarges the feature maps from 13×13 to 26×26 and from 26×26 to 52×52 for feature-layer fusion, and multi-scale feature extraction ensures the network's detection accuracy. For training, 6032 samples of vehicle tails directly ahead (near and far), ahead-left and ahead-right were taken, uniformly scaled to the VOC2012 dataset size and input to the network, which improves the network's detection of vehicle tails.
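As a rough illustration of the replacement described above, the following PyTorch-style block shows what a depthwise separable substitute for a 3×3 PANet convolution could look like; the layer widths and the ReLU6 activation are illustrative assumptions, not the patent's exact configuration:

```python
import torch.nn as nn

# Sketch of a depthwise-separable block assumed to replace a 3x3 convolution
# in the PANet of MobileNet-Yolo (channel widths are illustrative).
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise 3x3: one filter per input channel, far fewer parameters.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        # Pointwise 1x1: mixes channels and restores the output width.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)   # extra nonlinear excitation

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

A standard 3×3 convolution costs in_ch × out_ch × 9 weights; the split above costs in_ch × 9 + in_ch × out_ch, which is where the parameter reduction claimed in the text comes from.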
Step 2: based on the RGB and HSV color spaces, the lamp-like region image Img_2 is extracted using (R−G)×H×V.
In this embodiment, Img_1 is decomposed into its R, G, B channels; to eliminate the influence of overlapping front vehicles on lamp detection, the decomposed R, G, B channels are cropped, the images being trimmed to rows [80, 200] and recorded as R_C, G_C, B_C respectively.
Img_1 is converted to HSV_1, which is decomposed into its H, S, V channels; to eliminate the influence of overlapping front vehicles on lamp detection, the decomposed H, S, V channels are likewise trimmed to rows [80, 200] and recorded as H_C, S_C, V_C respectively.
First, the lamp region is roughly extracted with (R−G). During daytime driving the lamp features are dark and inconspicuous, and missed and false detections easily occur, so color-gamut conversion and enhancement are applied to the image to highlight the lamp features. The H and V color channels are segmented; for the H channel, the lower bound Lb_1 = [156, 43, 46] and the upper bound Ub_1 = [180, 255, 255] are selected and the image HSV_1 is binarized:
H(x, y) = 255 if Lb_1 ≤ HSV_1(x, y) ≤ Ub_1, otherwise H(x, y) = 0
where (x, y) denotes the pixel coordinates in the image: if the H-channel value of a point is not within this interval, its pixel value is set to 0.
The V feature effectively screens the lit lamp area. The segmented H and V channels are mixed with (R−G) to extract the lamp positions:
Img_2 = (R−G) × H × V
A mask of size (5, 5) is selected and Gaussian filtering is applied to the image Img_2 to remove noise:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where σ is the standard deviation used to adjust the degree of influence of distant pixels on the center pixel, and (x, y) denotes the coordinates of the point at which the Gaussian filtering is performed.
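Steps of this kind map naturally onto OpenCV; the sketch below follows the stated Lb_1/Ub_1 bounds, the 5×5 Gaussian mask and the row crop [80, 200], while the V-channel screening rule (Otsu here) and the use of binary masks in place of the raw pixel product are assumptions:

```python
import cv2
import numpy as np

# Sketch of the (R-G) x H x V extraction of step 2; crop rows and HSV bounds
# follow the text, the V screening (Otsu) is an assumption.
def extract_lamp_regions(img1):                    # img1: 256x256 BGR patch
    b, g, r = cv2.split(img1.astype(np.int16))
    rg = np.clip(r - g, 0, 255).astype(np.uint8)   # rough (R-G) lamp cue

    hsv = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)
    h_mask = cv2.inRange(hsv, (156, 43, 46), (180, 255, 255))  # Lb_1..Ub_1
    _, v_mask = cv2.threshold(hsv[:, :, 2], 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Binary masks, so AND plays the role of the pixel product H x V x (R-G).
    img2 = cv2.bitwise_and(cv2.bitwise_and(rg, h_mask), v_mask)
    img2 = cv2.GaussianBlur(img2, (5, 5), 0)       # 5x5 mask, sigma from size
    return img2[80:200, 0:256]                     # suppress vehicle overlap
```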
Step 3: the hollow areas of the lamp-like region image are filled using a region flood-fill operation, regions smaller than a threshold are screened out, and the lamps are paired to obtain the lamp detection frame Img_3; Img_3 is cropped and the cropped image is recorded as Img_3C; the four-point coordinates (C_r, C_l, C_t, C_b) of Img_3C are recorded.
In this embodiment, Img_2 is copied; the length and width of the copied image are extracted as x and y, the mask size is set to (x+2, y+2), and the image is flood-filled starting from pixel (0, 0). The filled image is inverted and the two images are combined to obtain the foreground. The final output image is Img_3.
Img_3 is cropped, taking rows [80, 200] and columns [0, 255] of the original image; the cropped image is recorded as Img_3C:
Img_3C = Img_3[80:200, 0:255]
The image contours in Img_3C are obtained and contours smaller than a threshold are screened out; in the invention the threshold is taken as 300 pixels. The number of contours in Img_3C is obtained; if it is greater than 1, the tail lamps are paired, the pairing condition requiring the two lamps to be laterally separated beyond threshold_x while longitudinally aligned within threshold_y:
|light_x1 − light_x2| > threshold_x and |light_y1 − light_y2| < threshold_y
where light_x1, light_x2 are the abscissae of the paired tail lamps, light_y1, light_y2 are their ordinates, and threshold_x, threshold_y are the lateral and longitudinal matching thresholds.
Since all paired images have a uniform size, the lateral and longitudinal thresholds are taken as 50 and 30 respectively.
After lamp pairing and screening are completed, the four-point coordinates (C_r, C_l, C_t, C_b) of the lamp detection frame are recorded.
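A possible OpenCV rendering of step 3 is sketched below, using the stated 300-pixel contour threshold and the 50/30 pairing thresholds; the flood-fill idiom and the exact pairing inequalities are assumptions consistent with the text:

```python
import cv2
import numpy as np

# Sketch of step 3: flood-fill hole filling, small-contour screening and
# left/right lamp pairing.
def detect_lamp_pairs(img2):
    filled = img2.copy()
    h, w = filled.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)      # mask of size (x+2, y+2)
    cv2.floodFill(filled, mask, (0, 0), 255)       # fill background from (0,0)
    img3 = img2 | cv2.bitwise_not(filled)          # holes closed -> foreground

    img3c = img3[80:200, 0:256]                    # crop as Img_3C
    cnts, _ = cv2.findContours(img3c, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    cnts = [c for c in cnts if cv2.contourArea(c) >= 300]

    boxes = [cv2.boundingRect(c) for c in cnts]    # (x, y, w, h) per contour
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (x1, y1, _, _), (x2, y2, _, _) = boxes[i], boxes[j]
            # assumed rule: horizontally separated, vertically aligned
            if abs(x1 - x2) > 50 and abs(y1 - y2) < 30:
                pairs.append((boxes[i], boxes[j]))
    return pairs
```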
Step 4: color-channel conversion is performed on Img_3C and the lamp language is recognized, including the brake-lamp language and the turn-lamp language.
In this embodiment, step 4 specifically comprises:
step 4.1: the minimum bounding rectangle of Img_3C and its corresponding coordinates l_r = (l_min, l_max, r_min, r_max) are obtained, where l_min, l_max, r_min, r_max denote respectively the minimum and maximum of the longitudinal coordinate and the minimum and maximum of the lateral coordinate;
to prevent image errors caused by the minimum rectangle exceeding the boundary, the rectangle is regularized by clamping it to the image area;
for the regularized minimum rectangle, its coordinate information is obtained:
l_t = (l_x, l_y, h, w);
step 4.2: the lamp language, including the brake-lamp language and the turn-lamp language, is recognized;
Referring to FIG. 2, for the brake-lamp language this embodiment first screens HSV_1, selecting the lower bound Lb_2 = [0, 43, 46] and the upper bound Ub_2 = [10, 255, 255], and binarizes the image:
HSV_3(x, y) = 255 if Lb_2 ≤ HSV_1(x, y) ≤ Ub_2, otherwise HSV_3(x, y) = 0
HSV_3 is then cropped to [80:200, 0:255] and the lamp region is screened out using the l_r coordinates.
To reduce the effect of illumination on lamp-language recognition, the lamp area is first screened in V_C, the average brightness of the area is calculated, and finally the region above the brightness-average threshold is screened out and binarized:
V_s¹ = V_mean + threshold_V_mean
V_s(x, y) = 255 if V_s(x, y) > V_s¹, otherwise 0
where V_mean denotes the average brightness of the area, threshold_V_mean denotes the brightness threshold, V_s(x, y) denotes the brightness at the coordinate point (x, y), and V_s¹ is the brightness threshold obtained by adding threshold_V_mean to the mean V_mean. The purpose of this step is to screen out the region above the mean threshold, reducing the influence of illumination on the lamp language.
HSV_3 and V_s are multiplied to reduce the influence of different lamp shapes on subsequent judgment errors, and the multiplied image is regularized to [60, 60]. The effective area of the pixel-multiplied image is calculated and compared with a threshold; if the area is greater than the threshold, braking is judged:
area > area_slow
where area denotes the effective brake-lamp-language area detected after pixel multiplication and area_slow denotes the area threshold for brake-lamp-language judgment.
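The brake-lamp judgment could be sketched as follows, with the Lb_2/Ub_2 bounds and the [60, 60] regularisation taken from the text; the helper name, the brightness margin threshold_v_mean and the mask-based multiplication are assumptions:

```python
import cv2
import numpy as np

# Sketch of the brake-lamp-language judgment of step 4.2; area_slow is left
# as a parameter since the text does not fix its value.
def is_braking(img1_hsv, lamp_roi, area_slow, threshold_v_mean=30):
    x, y, w, h = lamp_roi                          # lamp box as (x, y, w, h)
    patch = img1_hsv[y:y + h, x:x + w]
    hsv3 = cv2.inRange(patch, (0, 43, 46), (10, 255, 255))   # red mask

    v = patch[:, :, 2]
    v_s = ((v > v.mean() + threshold_v_mean)       # brightness above mean
           .astype(np.uint8) * 255)

    mixed = cv2.bitwise_and(hsv3, v_s)             # product of binary masks
    mixed = cv2.resize(mixed, (60, 60))            # regularise to [60, 60]
    area = int(np.count_nonzero(mixed))            # effective lamp-language area
    return area > area_slow                        # 'slow' label -> braking
```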
Referring to FIG. 3, for the turn-lamp language this embodiment first screens HSV_1, selecting the lower bound Lb_3 = [10, 43, 46] and the upper bound Ub_3 = [35, 255, 255], and binarizes the image to obtain HSV_4:
HSV_4(x, y) = 255 if Lb_3 ≤ HSV_1(x, y) ≤ Ub_3, otherwise HSV_4(x, y) = 0
Meanwhile, the G component can effectively extract the turn-lamp language; the G channel is screened with a threshold of 130 and the image is binarized:
G(x, y) = 255 if G(x, y) > 130, otherwise G(x, y) = 0
where G(x, y) denotes the G-channel pixel value at (x, y) in the G channel.
HSV_4, G and V_s are pixel-multiplied to reduce the influence of lamp-shape variation on subsequent judgment errors, and the multiplied image is regularized to [60, 60]. The effective area of the multiplied image is calculated and compared with a threshold; if the area is greater than the threshold, a turn is judged:
area > area_turn
where area denotes the effective lamp-language area detected after pixel multiplication and area_turn denotes the area threshold for turn-lamp-language judgment.
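A matching sketch for the turn-lamp judgment, with the Lb_3/Ub_3 bounds and the G-channel threshold of 130 from the text; as above, the helper name and the brightness margin are assumptions:

```python
import cv2
import numpy as np

# Sketch of the turn-lamp-language judgment; area_turn is left as a parameter.
def is_turning(img1_bgr, img1_hsv, lamp_roi, area_turn, threshold_v_mean=30):
    x, y, w, h = lamp_roi
    hsv = img1_hsv[y:y + h, x:x + w]
    bgr = img1_bgr[y:y + h, x:x + w]

    hsv4 = cv2.inRange(hsv, (10, 43, 46), (35, 255, 255))    # yellow/orange mask
    g_mask = (bgr[:, :, 1] > 130).astype(np.uint8) * 255     # G-channel screen
    v = hsv[:, :, 2]
    v_s = (v > v.mean() + threshold_v_mean).astype(np.uint8) * 255

    mixed = cv2.bitwise_and(cv2.bitwise_and(hsv4, g_mask), v_s)
    mixed = cv2.resize(mixed, (60, 60))            # regularise to [60, 60]
    return int(np.count_nonzero(mixed)) > area_turn
```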
Step 5: lamp-language pairing judgment is performed according to the recognition result: lighting on both sides is judged to be a brake-lamp language, lighting on one side a turn-lamp language.
Interference of environmental factors with lamp-color extraction is unavoidable; to improve detection accuracy, the lamp characteristics are used for judgment and decision. Brake lamps light in pairs while turn lamps light on one side, so pairing is performed according to the lamp-language recognition result: lighting on both sides is judged as a brake-lamp language, lighting on one side as a turn-lamp language.
Owing to illumination conditions, the lamp language may be missed in certain situations. For the brake-lamp language, the high-mounted tail lamp accurately reflects the braking state, but because its shape is unconstrained its detection error rate is higher. Therefore, for the high-mounted tail lamp, this embodiment first screens its position (X_Gao, Y_Gao) in Img_3 using the tail-lamp positions, selecting the minimum and maximum contour abscissae and the maximum contour ordinate in Img_3C for the screening.
The screened high-tail-lamp region is circumscribed with a minimum rectangle; the average brightness of the rectangular region is calculated, the region above the average-value threshold is screened out and binarized, and recorded as V_Gao. The R channel is selected, the region above a threshold of 150 is screened out and binarized; V_Gao is multiplied by the R channel, and the multiplied area is compared with a threshold of 20. If it is greater than the threshold, the high-mounted tail lamp is lit and the vehicle is in a braking state. A detection method different from that for the ordinary tail lamps is adopted in order to reduce the interference of environmental factors and improve the accuracy of brake-lamp-language recognition.
Besides the obvious color characteristics, the frequency information of the lamp is also a distinctive feature; to further improve turn-detection accuracy, the lamp is tracked using an improved particle-filter method.
Step 6: the recognition result is mapped back into the original acquired image.
In this embodiment, the length and width of the acquired image are (Img_h, Img_w); since the length and width of Img_1 are 256, a coordinate transformation is performed to map the lamp rectangle from the 256×256 frame back to the scale of the acquired image.
After the coordinate transformation is completed, the lamp rectangular frame is mapped back into the original image, realizing accurate marking of the lamp position.
(Table: lamp-language recognition accuracy of the embodiment, counted in frames.)
Step 7: the recognized turn lamp is tracked using centroid-fusion particle filtering.
Referring to FIG. 4, step 7 of this embodiment specifically comprises:
step 7.1: acquiring the centroid;
During lamp extraction the lamp area is extracted and its centroid coordinates (x_l, y_l) can be obtained; at the same time, the centroid coordinates (x_t, y_t) are obtained by detecting the tail lamp. To ensure tracking continuity, the two centroid coordinates are fused, weighted by a coefficient β, to obtain the centroid coordinates (x_n, y_n);
step 7.2: centroid matching;
The detected tail-lamp information is acquired to extract its centroid coordinates (x_n, y_n) and rectangular frame (height, width). The centroid coordinates (x_i, y_i) of the tail lamp in the preceding frame are searched and paired: when the Euclidean distance between the two centroids is below threshold1 the pair is accepted and the pairable coordinate x is obtained;
where threshold1 denotes the Euclidean-distance threshold, q denotes Gaussian white noise, and x denotes the coordinate finally confirmed after pairing;
step 7.3: particle filtering;
For the paired x, the particle number is initialized to 30 within the rectangular frame, giving the particle set {x_0^i, i = 1, ..., N}, where N is the number of particles. The system state variable is defined as X_(t-1) = (x, y, v_x, v_y), where x, y and v_x, v_y denote the lateral and longitudinal centroid coordinates and velocities at time t−1. The initial weights are set to ω_0^i = 1/N. The vehicle is assumed to move in a straight line at uniform speed, with state equation X_t = A·X_(t-1) + W, where A is the constant-velocity transition matrix and W denotes noise obeying a normal distribution. The particle set is predicted through the state equation:
x_t^i = A·x_(t-1)^i + w
where f_(t-1)^−(x) denotes the prior probability of the obtained particles.
The next frame of observation data y_t is detected and the posterior probability of the particles is obtained:
f_t(x_t | y_t) = η_t · f_R[y_t − h(x_t)] · f_(t-1)^−(x_t)
where η_t denotes a normalizing constant, f_R[·] denotes the likelihood probability, y_t denotes the observation at time t, and h(x) denotes the state-transition equation.
Finally, the particle weights are updated:
ω_t^i = ω_(t-1)^i · f_R[x_t − h(x_(t-1)^i)]
where f_R[x_t − h(x_(t-1)^i)] denotes the contribution of the updating step to the particle weight from time t−1.
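A compact sketch of one filtering step under the constant-velocity model stated above; the Gaussian likelihood used for f_R, the noise scales q and r, and the resampling rule are assumptions:

```python
import numpy as np

N = 30                                             # particles per tracked lamp
A = np.array([[1, 0, 1, 0],                        # standard constant-velocity
              [0, 1, 0, 1],                        # transition matrix (dt = 1)
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def track_step(particles, weights, observation, q=2.0, r=5.0):
    """particles: (N, 4) states (x, y, v_x, v_y); observation: fused centroid."""
    # Predict: propagate each particle through X_t = A X_{t-1} + W.
    particles = particles @ A.T + np.random.normal(0, q, particles.shape)
    # Update: Gaussian likelihood of the observed centroid (x, y).
    d2 = np.sum((particles[:, :2] - observation) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * r ** 2))
    weights /= weights.sum() + 1e-12               # eta_t normalising constant
    # Resample when the effective particle number collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = np.random.choice(N, N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights, weights @ particles  # weighted state estimate
```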
Step 8: graded early warning is performed according to the fed-back lamp-language information.
In this embodiment, the azimuth of the vehicle ahead is first determined, and graded early warning is performed by combining the fed-back vehicle determination result with the vehicle's lamp-language information.
The ego vehicle then carries out the corresponding decision and control according to the fed-back graded early-warning information, ensuring driving safety.
Referring to FIG. 5, the vehicle early-warning schematic diagram of an embodiment of the invention: with the ego vehicle as the center, vehicles in the three azimuths directly ahead, ahead-left and ahead-right can be detected and their lamp-language information obtained, yielding the operating intention of the drivers ahead. Graded early warning is performed according to the different lamp semantics, enabling avoidance and deceleration operations.
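The patent's concrete grading table is not reproduced in this rendering, so the following mapping from azimuth and lamp language to a warning level is purely an illustrative assumption of how step 8 could be wired up:

```python
# Hypothetical graded early-warning decision for step 8; the level assignments
# are illustrative assumptions, not the patent's actual grading table.
def warning_level(azimuth, lamp_language):
    if lamp_language == "slow":                    # braking ahead
        return 2 if azimuth == "directly ahead" else 1
    if lamp_language == "turn":                    # possible cut-in from a side
        return 2 if azimuth in ("left-front", "right-front") else 1
    return 0                                       # no lamp language: no warning
```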
Please refer to fig. 6, which is a diagram of a vehicle azimuth determination result according to an embodiment of the present invention, it can be seen that vehicles in different azimuth are marked in different azimuth, including: left, right, middle. Therefore, the vehicle lane changing and speed reducing system can be combined with an early warning system, and the lane changing and speed reducing operation of the front vehicle is early warned by combining the lamp language information.
Referring to FIG. 7, the lamp-recognition diagram of an embodiment of the invention: on a highway, the lamps of the vehicles ahead are effectively recognized; the ahead-left vehicle shows a brake lamp and a turn lamp, and the system marks "Slow" and "Turn". The method effectively recognizes the vehicle lamp language and passes it to the early-warning system for subsequent graded warning.
Referring to FIG. 8, the particle-filter tracking-effect diagram of an embodiment of the invention: with the centroid-fusion particle-filter method, the turn-lamp detection frame still follows the lamp even when the turn lamp is completely off, effectively tracking the turn lamp and ensuring the continuity and accuracy of lamp-language recognition.
The main technology of the invention is as follows:
1. The method and system improve the target-detection network in a targeted manner, reducing parameters and computation time. Existing networks such as Yolo have too many parameters, so the fps is very low when they are applied on real roads; the present network has low hardware requirements and can meet real-time detection needs. Of course, with fewer parameters the detection accuracy is somewhat below that of the full Yolo network. For training, the invention builds its own dataset, selecting images of vehicle tails ahead-left, directly ahead and ahead-right, which raises the detection accuracy to a certain extent. In addition, the network performs feature-extraction-layer fusion, further ensuring detection accuracy.
2. The (R−G)×H×V lamp-extraction method works well for extracting the tail lamps of vehicles ahead in the daytime; the auxiliary detection of the high-mounted tail lamp together with the ordinary tail lamps ensures the detection accuracy of the brake-lamp language, while centroid-fusion particle filtering tracks the turn-lamp language to ensure the correctness of the detected lamp language. Experimental verification shows that a single color feature extracts the lamp language poorly and is too strongly affected by environmental factors, easily causing false and missed detections.
3. Early-warning the vehicle ahead according to its lamp language is highly effective: the lamp language directly reflects the intention of the driver ahead, and compared with methods such as vehicle-pose detection and trajectory prediction the parameter count is greatly reduced, making this a practical and effective early-warning method.
It should be understood that the above description of preferred embodiments is not to be construed as limiting the scope of protection of the invention, which provides a method for efficiently detecting vehicle lamps and recognizing the lamp language. Those skilled in the art may make substitutions and alterations without departing from the scope of the invention as defined by the appended claims.

Claims (7)

1. A daytime front car light language recognition and early warning method, characterized by comprising the following steps:
step 1: using an image acquired by a vehicle-mounted camera, detecting the vehicle ahead in the daytime based on a neural network, extracting a vehicle detection frame, and judging the relative position of the vehicle according to the vehicle detection frame;
inputting the acquired image into an optimized neural network for detection and extracting the vehicle detection frame; preprocessing the vehicle detection frame to a uniform size, with length and width (Img_h, Img_w) and the RGB three-channel standard, recorded as Img_1; recording the four-point coordinates of the rightmost, leftmost, uppermost and lowermost points of the vehicle detection frame on the acquired image; judging the azimuth of the vehicle relative to the vehicle-mounted camera according to the size relation between the vehicle detection frame and the acquired image;
wherein the vehicle detection frames are all regularized to 256 pixels in length and width and the RGB three-channel standard, recorded as Img_1; according to the horizontal position of the detection frame within the acquired image, the vehicle is judged to be directly ahead, ahead to the right or ahead to the left;
the optimized neural network is based on a MobileNet-Yolo network in which the 3×3 convolutions of the enhanced-feature-extraction network PANet of MobileNet-Yolo are replaced by depthwise separable convolutions;
step 2: based on the RGB and HSV color spaces, extracting the lamp-like region image Img_2 using (R−G)×H×V;
wherein Img_1 is decomposed into R, G, B; the decomposed R, G, B channels are cropped and the images are regularized to a preset size, recorded as R_C, G_C, B_C respectively;
Img_1 is converted to HSV_1, which is decomposed into H, S, V; the decomposed H, S, V channels are cropped and the images are regularized to a preset size, recorded as H_C, S_C, V_C respectively;
the H and V color channels are segmented; for the H channel, the lower bound Lb_1 and the upper bound Ub_1 are selected and the image HSV_1 is binarized:
H(x, y) = 255 if Lb_1 ≤ HSV_1(x, y) ≤ Ub_1, otherwise H(x, y) = 0
where (x, y) denotes the pixel coordinates in the image: if the H-channel value of a point is not within this interval, its pixel value is set to 0;
the segmented H and V channels are mixed with (R−G) to extract the lamp-like region image Img_2:
Img_2 = (R−G) × H × V;
step 3: filling the hollow areas of the lamp-like region image, screening out regions smaller than a threshold, and pairing the lamps to obtain the lamp detection frame Img_3; cropping Img_3 and recording the cropped image as Img_3C; recording the four-point coordinates (C_r, C_l, C_t, C_b) of Img_3C;
step 4: performing color-channel conversion on Img_3C and recognizing the lamp language, including the brake-lamp language and the turn-lamp language;
the specific implementation of step 4 comprises the following sub-steps:
step 4.1: obtaining the minimum bounding rectangle of Img_3C and its corresponding coordinates l_r = (l_min, l_max, r_min, r_max), where l_min, l_max, r_min, r_max denote respectively the minimum and maximum of the longitudinal coordinate and the minimum and maximum of the lateral coordinate;
the minimum bounding rectangle is regularized by clamping it to the image boundary;
for the regularized minimum rectangle, its coordinate information l_t = (l_x, l_y, h, w) is obtained;
step 4.2: for the brake-lamp language, selecting the lower bound Lb_2 and the upper bound Ub_2 and binarizing HSV_1 to obtain HSV_3;
cropping HSV_3 to a preset size and screening out the lamp detection-frame area using the l_r coordinates;
first screening out the lamp detection area in V_C, calculating the average brightness of the area, and finally screening out the region above the brightness-average threshold and binarizing it:
V_s¹ = V_mean + threshold_V_mean
where V_mean denotes the average brightness of the area, threshold_V_mean denotes the brightness threshold, V_s(x, y) denotes the brightness at the coordinate point (x, y), and V_s¹ is the brightness threshold obtained by adding threshold_V_mean to the mean V_mean;
multiplying HSV_3 and V_s and regularizing the multiplied image to [60, 60]; calculating the effective area of the pixel-multiplied image and judging whether it is greater than a threshold; if so, braking is judged; wherein the threshold judgment is:
area > area_slow
where area denotes the effective brake-lamp-language area detected after pixel multiplication and area_slow denotes the area threshold for brake-lamp-language judgment;
for the turn-lamp language, selecting the lower bound Lb_3 and the upper bound Ub_3 and binarizing HSV_1 to obtain HSV_4;
screening the G channel with a selected threshold and binarizing the image:
G(x, y) = 255 if G(x, y) > threshold, otherwise G(x, y) = 0
where G(x, y) denotes the G-channel pixel value at (x, y) in the G channel;
pixel-multiplying HSV_4, G and V_s and regularizing the multiplied image to [60, 60]; calculating the effective area of the multiplied image and judging whether it is greater than a threshold; if so, a turn is judged; wherein the threshold judgment is:
area > area_turn
where area denotes the effective lamp-language area detected after pixel multiplication and area_turn denotes the area threshold for turn-lamp-language judgment;
step 5: performing lamp-language pairing judgment according to the recognition result: lighting on both sides is judged to be a brake-lamp language, lighting on one side a turn-lamp language;
step 6: mapping the recognition result back into the original acquired image;
step 7: tracking the recognized turn lamp using centroid-fusion particle filtering;
step 8: performing graded early warning according to the fed-back lamp-language information.
2. The daytime front car light language recognition and early warning method according to claim 1, characterized in that: in step 2, a mask is selected and Gaussian filtering is applied to the lamp-like region image Img_2 to remove noise from the image:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where σ is the standard deviation used to adjust the degree of influence of distant pixels on the center pixel, and (x, y) denotes the coordinates of the point at which the Gaussian filtering is performed.
3. The daytime front car light language recognition and early warning method according to claim 1, characterized in that: in step 3, Img_2 is copied, the length and width (x, y) of the copied image are extracted, the mask size is set, and the image is flood-filled starting from pixel (0, 0); the filled image is inverted and the two images are combined to obtain the foreground; the final output image is Img_3;
Img_3 is cropped and the cropped image is recorded as Img_3C; the image contours in Img_3C are obtained, contours smaller than a threshold are screened out, and the number of image contours in Img_3C is obtained; if the number of contours is greater than 1, the tail lamps are paired, the pairing condition requiring the two lamps to be laterally separated beyond threshold_x while longitudinally aligned within threshold_y:
|light_x1 − light_x2| > threshold_x and |light_y1 − light_y2| < threshold_y
where light_x1, light_x2 are the abscissae of the paired tail lamps, light_y1, light_y2 are their ordinates, and threshold_x, threshold_y are the lateral and longitudinal matching thresholds.
4. The daytime front car light language recognition and early warning method according to claim 1, characterized in that: in step 5, for the high-mounted tail lamp, its position (X_Gao, Y_Gao) is first screened in Img_3 using the tail-lamp positions, selecting the minimum and maximum contour abscissae and the maximum contour ordinate in Img_3C for the screening;
the screened high-tail-lamp region is circumscribed with a minimum rectangle, the average brightness of the rectangular region is calculated, the region above the average-value threshold is screened out and binarized, and recorded as V_Gao; the R channel is selected, the region above a threshold is screened out and binarized; V_Gao is multiplied by the R channel, and whether the multiplied area is greater than a threshold is judged: if so, the high-mounted tail lamp is lit and the vehicle is in a braking state.
5. The daytime front car light language recognition and early warning method according to claim 1, characterized in that: in step 6, if the length and width of Img_1 are 256, a coordinate transformation is performed to map the lamp rectangle back to the scale of the acquired image;
after the coordinate transformation is completed, the lamp rectangular frame is mapped back into the original image, realizing accurate marking of the lamp position.
6. The daytime front car light language recognition and early warning method according to claim 1, characterized in that the specific implementation of step 7 comprises the following sub-steps:
step 7.1: acquiring the centroid;
extracting the centroid coordinates (x_l, y_l) of the lamp detection area and obtaining, by detecting the tail lamp, the centroid coordinates (x_t, y_t); fusing the two centroid coordinates, weighted by a coefficient β, to obtain the centroid coordinates (x_n, y_n);
step 7.2: centroid matching;
according to (x_n, y_n) and the rectangular frame (height, width) of the turning tail-lamp area, searching the centroid coordinates (x_i, y_i) of the tail lamp in the preceding frame for pairing and obtaining the pairable coordinate x;
where threshold1 denotes the Euclidean-distance threshold, q denotes Gaussian white noise, and x denotes the coordinate finally confirmed after pairing;
step 7.3: particle filtering;
for the paired x, initializing the particle number within the rectangular frame to obtain the particle set {x_0^i, i = 1, ..., N}, where N is the number of particles; defining the state variable X_(t-1) = (x, y, v_x, v_y), where x, y and v_x, v_y denote respectively the lateral and longitudinal centroid coordinates and velocities at time t−1; the initial weights are ω_0^i = 1/N; the vehicle is assumed to move in a straight line at uniform speed, with state equation X_t = A·X_(t-1) + W, where W denotes noise obeying a normal distribution; the particle set is predicted:
x_t^i = A·x_(t-1)^i + w
where f_(t-1)^−(x) denotes the prior probability of the obtained particles;
detecting the next frame of observation data y_t and obtaining the posterior probability of the particles:
f_t(x_t | y_t) = η_t · f_R[y_t − h(x_t)] · f_(t-1)^−(x_t)
where η_t denotes a normalizing constant, f_R[·] denotes the likelihood probability, y_t denotes the observation at time t, and h(x) denotes the state-transition equation;
finally, the particle weights are updated:
ω_t^i = ω_(t-1)^i · f_R[x_t − h(x_(t-1)^i)]
where f_R[x_t − h(x_(t-1)^i)] denotes the contribution of the updating step to the particle weight from time t−1.
7. A daytime front car light language recognition and early warning system, characterized by comprising the following modules:
module 1, for detecting the vehicle ahead in the daytime based on a neural network using images acquired by the vehicle-mounted camera, extracting the vehicle detection frame and judging the relative position of the vehicle according to the detection frame;
inputting the acquired image into an optimized neural network for detection and extracting the vehicle detection frame; preprocessing the vehicle detection frame to a uniform size, with length and width (Img_h, Img_w) and the RGB three-channel standard, recorded as Img_1; recording the four-point coordinates of the rightmost, leftmost, uppermost and lowermost points of the vehicle detection frame on the acquired image; judging the azimuth of the vehicle relative to the vehicle-mounted camera according to the size relation between the vehicle detection frame and the acquired image;
wherein the vehicle detection frames are all regularized to 256 pixels in length and width and the RGB three-channel standard, recorded as Img_1; according to the horizontal position of the detection frame within the acquired image, the vehicle is judged to be directly ahead, ahead to the right or ahead to the left;
the optimized neural network is based on a MobileNet-Yolo network in which the 3×3 convolutions of the enhanced-feature-extraction network PANet of MobileNet-Yolo are replaced by depthwise separable convolutions;
module 2, for extracting the lamp-like region image Img_2 using (R−G)×H×V based on the RGB and HSV color spaces;
wherein Img_1 is decomposed into R, G, B; the decomposed R, G, B channels are cropped and the images are regularized to a preset size, recorded as R_C, G_C, B_C respectively;
Img_1 is converted to HSV_1, which is decomposed into H, S, V; the decomposed H, S, V channels are cropped and the images are regularized to a preset size, recorded as H_C, S_C, V_C respectively;
the H and V color channels are segmented; for the H channel, the lower bound Lb_1 and the upper bound Ub_1 are selected and the image HSV_1 is binarized:
H(x, y) = 255 if Lb_1 ≤ HSV_1(x, y) ≤ Ub_1, otherwise H(x, y) = 0
where (x, y) denotes the pixel coordinates in the image: if the H-channel value of a point is not within this interval, its pixel value is set to 0;
the segmented H and V channels are mixed with (R−G) to extract the lamp-like region image Img_2:
Img_2 = (R−G) × H × V;
module 3, for filling the hollow areas of the lamp-like region image, screening out regions smaller than a threshold, and pairing the lamps to obtain the lamp detection frame Img_3; cropping Img_3 and recording the cropped image as Img_3C; recording the four-point coordinates (C_r, C_l, C_t, C_b) of Img_3C;
module 4, for performing color-channel conversion on Img_3C and recognizing the lamp language, including the brake-lamp language and the turn-lamp language;
the module 4 specifically comprises the following sub-modules:
module 4.1, for obtaining the minimum bounding rectangle of Img_3C and its corresponding coordinates l_r = (l_min, l_max, r_min, r_max), where l_min, l_max, r_min, r_max denote respectively the minimum and maximum of the longitudinal coordinate and the minimum and maximum of the lateral coordinate;
the minimum bounding rectangle is regularized by clamping it to the image boundary;
for the regularized minimum rectangle, its coordinate information l_t = (l_x, l_y, h, w) is obtained;
module 4.2, for selecting, for the brake-lamp language, the lower bound Lb_2 and the upper bound Ub_2 and binarizing HSV_1 to obtain HSV_3;
cropping HSV_3 to a preset size and screening out the lamp detection-frame area using the l_r coordinates;
first screening out the lamp detection area in V_C, calculating the average brightness of the area, and finally screening out the region above the brightness-average threshold and binarizing it:
V_s¹ = V_mean + threshold_V_mean
where V_mean denotes the average brightness of the area, threshold_V_mean denotes the brightness threshold, V_s(x, y) denotes the brightness at the coordinate point (x, y), and V_s¹ is the brightness threshold obtained by adding threshold_V_mean to the mean V_mean;
multiplying HSV_3 and V_s and regularizing the multiplied image to [60, 60]; calculating the effective area of the pixel-multiplied image and judging whether it is greater than a threshold; if so, braking is judged; wherein the threshold judgment is:
area > area_slow
where area denotes the effective brake-lamp-language area detected after pixel multiplication and area_slow denotes the area threshold for brake-lamp-language judgment;
for the turn-lamp language, selecting the lower bound Lb_3 and the upper bound Ub_3 and binarizing HSV_1 to obtain HSV_4;
screening the G channel with a selected threshold and binarizing the image:
G(x, y) = 255 if G(x, y) > threshold, otherwise G(x, y) = 0
where G(x, y) denotes the G-channel pixel value at (x, y) in the G channel;
pixel-multiplying HSV_4, G and V_s and regularizing the multiplied image to [60, 60]; calculating the effective area of the multiplied image and judging whether it is greater than a threshold; if so, a turn is judged; wherein the threshold judgment is:
area > area_turn
where area denotes the effective lamp-language area detected after pixel multiplication and area_turn denotes the area threshold for turn-lamp-language judgment;
module 5, for lamp-language pairing judgment according to the recognition result: lighting on both sides is judged to be a brake-lamp language, lighting on one side a turn-lamp language;
module 6, for mapping the recognition result back into the original acquired image;
module 7, for tracking the recognized turn lamp using centroid-fusion particle filtering;
module 8, for performing graded early warning according to the fed-back lamp-language information.
CN202110897194.XA · filed 2021-08-05 · Daytime front car light language recognition and early warning method and system · Active · CN113743226B (en)

Priority Applications (1)

CN202110897194.XA · priority and filing date 2021-08-05 · Daytime front car light language recognition and early warning method and system

Publications (2)

CN113743226A · published 2021-12-03
CN113743226B · granted 2024-02-02

Family

ID=78730230

Family Applications (1)

CN202110897194.XA · filed 2021-08-05 · Active · CN113743226B · Daytime front car light language recognition and early warning method and system

Country Status (1)

CN (1) CN113743226B (en)

Families Citing this family (1)

CN116828298B · granted 2024-01-02 · 深圳市新城市规划建筑设计股份有限公司 · Analysis system based on vehicle pavement information

Patent Citations (5)

CN103927548A · 2014-07-16 · 北京航空航天大学 · Novel vehicle collision avoiding brake behavior detection method (cited by examiner)
CN108357418A · 2018-08-03 · 河北科技大学 · Front-vehicle driving intention analysis method based on taillight identification (cited by examiner)
CN109523555A · 2019-03-26 · 百度在线网络技术(北京)有限公司 · Front-vehicle brake behavior detection method and apparatus for automatic driving vehicles (cited by examiner)
CN109584271A · 2019-04-05 · 西北工业大学 · High-speed correlation filtering tracking based on a high-confidence update strategy (cited by examiner)
CN112348813A · 2021-02-09 · 苏州挚途科技有限公司 · Night vehicle detection method and device integrating radar and vehicle lamp detection (cited by examiner)

Family Cites Families (1)

CN103778786B · 2016-04-27 · 东莞中国科学院云计算产业技术创新与育成中心 · Traffic violation detection method based on salient vehicle part models


Non-Patent Citations (1)

Li Kun et al., "Lamp language recognition technology based on daytime driving" (基于日间行车的灯语识别技术), Computer Science, Vol. 46, No. 11A, pp. 277-282. (cited by examiner)

Also Published As

CN113743226A · published 2021-12-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant