CN111292529A - Method for identifying fog distribution of expressway - Google Patents

Method for identifying fog distribution of expressway

Info

Publication number
CN111292529A
Authority
CN
China
Prior art keywords
vehicle
tail lamp
time
visibility
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010075695.5A
Other languages
Chinese (zh)
Other versions
CN111292529B (en)
Inventor
王畅
付锐
袁伟
郭应时
吴付威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN202010075695.5A
Publication of CN111292529A
Application granted
Publication of CN111292529B
Active legal status
Anticipated expiration


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Abstract

The invention belongs to the field of traffic safety and discloses a method for identifying the distribution of agglomerate fog on an expressway.

Description

Method for identifying fog distribution of expressway
Technical Field
The invention relates to the field of traffic safety, and in particular to a method for identifying the distribution of agglomerate fog on an expressway.
Background
Agglomerate fog is a major cause of traffic accidents, and severe weather accounts for a large share of accident factors. In foggy weather a driver's state of mind and line of sight are strongly affected, and some drivers are careless or disregard the regulations, which raises the accident probability and can even lead to serious accidents and to secondary accidents caused by multi-vehicle chain rear-end collisions. Identifying severe weather information in time and informing drivers so that they can take precautions is therefore of great significance for reducing the probability of such accidents. Existing research on severe weather such as agglomerate fog stops at identifying visibility at a single moment; for a driver, by the time the visibility at that moment is known, he is already on the hazardous road section, and when an emergency occurs ahead he still cannot respond in time. Since agglomerate fog appears randomly and its position changes over time, drivers care more about knowing in advance over which range of the route ahead the fog appears and how the visibility at each point within that range is distributed, so that they can take more targeted precautions or avoid the road section altogether.
Disclosure of Invention
The invention extracts images of the road-section environment and of passing vehicles in real time with cameras, extracts the motion parameters of passing vehicles in real time with radar, and combines these data in a computer to determine the distribution characteristics of agglomerate fog. Once the fog distribution characteristics of every point along the whole road section are obtained, early-warning information is issued through LED information boards, loudspeakers and the like, helping vehicles about to enter the road section to take countermeasures in advance and reducing the accident rate.
To achieve this purpose, the invention adopts the following technical scheme.
A method of identifying highway agglomerate fog distribution comprising the steps of:
step 1, radar-video integrated monitoring equipment acquires image information of road vehicles in real time and transmits it to a computer; the computer determines, from the image information, the tail-lamp contours of vehicles whose tail lamps are on;
step 2, the radar-video integrated monitoring equipment tracks the vehicles whose tail lamps are on and determines whether the tail-lamp detections at adjacent moments belong to the same vehicle;
step 3, when the detections belong to the same vehicle, the turn-on and turn-off times of the vehicle's tail lamps are tracked and recorded in real time, and whether fog lamps or double-flash lamps are on is determined from the tail-lamp switching pattern;
step 4, the radar-video integrated monitoring equipment identifies the vehicle that first turns on its double-flash lamps and steps back 5 monitoring points; at the first of these points it counts, within the vehicle's tail-lamp contour, the number of pixels whose YCbCr values satisfy 95 ≤ Cb ≤ 135 and Cr ≥ 155, and takes this count as the original reference visibility value D1; the visibility pixel count Dn of the n-th point is obtained in the same way;
after the vehicle has driven out of the fog section and turned off its double-flash lamps, the visibility pixel count DN of the N-th point is obtained; where n = 1, 2, 3, …, N and N is the total number of monitoring points from where the vehicle enters the fog to where it leaves it;
from D1 to DN the visibility pixel counts of all N points are obtained, and their variation law is analyzed to judge whether the section is a complete agglomerate-fog section;
step 5, when the section is a complete agglomerate-fog section, with the original reference visibility value D1 of the first point as reference, the visibility change ratios D2/D1, D3/D1, …, DN/D1 of the successive points are determined; the point with the minimum ratio is the position of lowest visibility within the fog, and the variation characteristics of each agglomerate fog over the whole road section are thereby obtained.
Preferably, step 1 comprises the following substeps:
substep 1.1, threshold filtering is performed on the road-vehicle image in HSV color space to obtain a binary image, the thresholds being 342° < H < 360° or 0° < H < 30°, and 0.45 < S < 1.0; a Gaussian transformation is then applied to the binary image to obtain dispersed gray values of the original image;
substep 1.2, the dispersed gray values of the original image are normalized according to the following formula to obtain the gray values of the transformed image, which are used as weighting scale coefficients; a tail-lamp region is preliminarily determined from these coefficients;
N(i, j) = (I(i, j) − min) / (max − min)
wherein I(i, j) is the gray value of the original image, N(i, j) is the gray value of the transformed image, and min and max denote the minimum and maximum gray values of the original image, respectively;
substep 1.3, dividing the original image into a foreground part and a background part by using a maximum inter-class variance method to obtain a more accurate tail lamp area;
substep 1.4, eliminating the inner hole of the tail lamp area in the image by using morphological transformation to obtain the outline of the candidate tail lamp area;
substep 1.5, when there are 2 candidate tail-lamp contours, the final tail-lamp contour pair is determined using the property that the two tail lamps lie on the same horizontal line and are distributed mirror-symmetrically.
Further preferably, substep 1.5 comprises the following substeps:
substep 1.5.1, let A and B be the two candidate tail-lamp regions obtained, with areas S1 and S2, and let minS be the smaller of the two; if S1 and S2 satisfy the following inequality, the areas of the two candidate tail-lamp regions are considered similar;
|S1 − S2| < K · minS
wherein K is a proportionality coefficient, K = 1.5;
substep 1.5.2, the center positions of the two candidate tail-lamp regions are calculated, and the distance y_distance between the two centers in the vertical direction and the distance x_distance between them in the horizontal direction are obtained;
wherein, if y_distance and x_distance satisfy the following inequalities, the tail-lamp pair is considered correct;
y_distance < G · minHeight
M · minWidth < x_distance < N · minWidth
wherein G is a proportionality coefficient, G = 3; minHeight is the smaller of the heights of regions A and B; M and N are proportionality coefficients, M = 3, N = 10; minWidth is the smaller of the widths of regions A and B.
Preferably, step 2 comprises the following substeps:
substep 2.1, the radar-video integrated monitoring equipment obtains the distance dn of each vehicle at the current moment, the vehicle body length L, the vehicle speed Vn at the current moment, and the interval t between adjacent moments;
the radar-video integrated monitoring equipment acquires image information of road vehicles in real time and determines the R, G and B component values of each vehicle body in RGB color space from that image information;
substep 2.2, two detections belong to the same vehicle when all the following conditions are satisfied:
the lane of the vehicle is judged from the vehicle distance dn, and the same vehicle lies in the same lane in two adjacent frames;
the body length L of the same vehicle changes by no more than 1% between two adjacent frames;
from the current speed Vn and the interval t between adjacent moments, the distance travelled between adjacent moments Sn = Vn × t is determined; the vehicle position at the current moment is taken from the radar-video integrated monitoring equipment, a vehicle search range is determined with Sn as radius, and the vehicle position at the next moment must lie within this search range;
the averages of the R, G and B components of the vehicle body at the current moment and at the next moment are calculated respectively, and they must not change by more than 5%.
Preferably, step 3 is: the turn-on and turn-off times of the vehicle tail lamps are recorded; when the tail lamps stay on for more than 3 s, they are considered to be fog lamps;
when each turn-on lasts less than 3 s and there are more than three on-off cycles, let the first turn-on time be t1 and the first turn-off time be t1′, giving duration T1; the second turn-on time t2 and turn-off time t2′ give duration T2; the third turn-on time t3 and turn-off time t3′ give duration T3; when the standard deviation of T1, T2 and T3 does not exceed 0.5 and the difference between (t2 − t1) and (t3 − t2) is not more than 1 s, the double-flash lamps are confirmed to be on.
Preferably, in step 4, the judgment of whether the section is a complete agglomerate-fog section is specifically:
1) on entering the section, visibility drops sharply, the visibility pixel count falling by more than 50%;
2) the visibility pixel counts of the middle section are relatively stable as a whole and remain more than 50% below D1;
3) on leaving the fog section, the visibility pixel counts begin to rise, and DN is within 10% of D1;
4) the length of the middle section is more than 100 meters and not more than 5 kilometers;
when all these conditions are met, a complete agglomerate-fog section is determined to exist.
Compared with the prior art, the invention has the following beneficial effects:
1) radar-video integrated monitoring equipment is arranged at equal intervals of 100 meters along the whole road section; each unit serves as a sensor comprising a radar and a camera. The camera extracts images of the road-section environment and of passing vehicles in real time, the radar extracts the motion parameters of passing vehicles in real time, and the computer combines the two kinds of data to determine the distribution characteristics of agglomerate fog. Once the fog distribution characteristics of every point along the whole road section are obtained, early-warning information can be issued through LED information boards, loudspeakers and the like, helping vehicles about to enter the road section to take measures in advance and reducing the accident rate.
2) the method takes the point 5 monitoring positions before the vehicle first identified with its double-flash lamps on as the start of the statistics, and the turn-off of the double-flash lamps as the end; within this period it counts, at every point, the number of pixels whose YCbCr values satisfy 95 ≤ Cb ≤ 135 and Cr ≥ 155, takes these counts as the visibility pixels of the N points, analyzes their variation law and determines whether agglomerate fog exists; the variation characteristics of the fog are then determined from the trend of the visibility pixel values at each point within the fog and from their minimum.
Drawings
The invention is described in further detail below with reference to the figures and specific embodiments.
FIG. 1 is a schematic flow chart of the method for identifying expressway agglomerate fog distribution of the present invention;
FIG. 2 shows a tail-lamp pair to be verified.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to examples, but it will be understood by those skilled in the art that the following examples are only illustrative of the present invention and should not be construed as limiting the scope of the present invention.
In the invention, radar-video integrated monitoring equipment is arranged at the road edge line every 100 meters; each unit comprises a radar sensor and a camera sensor. The camera extracts images of the road-section environment and of passing vehicles in real time, and the radar extracts the motion parameters (driving speed and overall dimensions) of passing vehicles in real time.
The signal output of the radar-video integrated monitoring equipment is electrically connected to the signal input of the computer by optical fiber. The equipment transmits the acquired motion parameters and image information of passing vehicles over the fiber to the computer (a high-performance machine located in the control center), which after processing determines the agglomerate-fog distribution characteristics of every point in the whole road section. Once these are obtained, early-warning information can be issued through LED information boards, loudspeakers and the like, helping vehicles about to enter the road section to take measures in advance and reducing the accident rate.
Specifically, as shown in Fig. 1, the method for identifying expressway agglomerate fog distribution comprises the following steps:
Step 1, identifying vehicles with tail lamps on
The radar-video integrated monitoring equipment acquires image information of road vehicles in real time and transmits it to the computer; the computer determines the tail-lamp contours of vehicles whose tail lamps are on from the image information.
Specifically, step 1 comprises the following substeps:
Substep 1.1, threshold filtering is performed on the road-vehicle image in HSV color space to obtain a binary image, the thresholds being 342° < H < 360° or 0° < H < 30°, and 0.45 < S < 1.0; a Gaussian transformation is then applied to the binary image to obtain dispersed gray values of the original image; here H is hue, S is saturation and V is lightness.
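A minimal OpenCV sketch of this substep, assuming OpenCV's HSV conventions (H scaled to 0–179, S to 0–255), so 342°–360° maps to roughly 171–179 and a saturation of 0.45 to about 115; the function name and kernel size are illustrative choices:

```python
import cv2
import numpy as np

def taillight_candidate_mask(bgr: np.ndarray) -> np.ndarray:
    """Substep 1.1: HSV threshold filtering followed by a Gaussian blur."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # 342 deg < H < 360 deg -> roughly 171..179 in OpenCV's 0..179 scale
    red_high = cv2.inRange(hsv, (171, 115, 0), (179, 255, 255))
    # 0 deg < H < 30 deg -> roughly 0..14
    red_low = cv2.inRange(hsv, (0, 115, 0), (14, 255, 255))
    binary = cv2.bitwise_or(red_high, red_low)  # binary image after thresholding
    # Gaussian transformation: disperse the binary values into gray values
    return cv2.GaussianBlur(binary, (5, 5), 0)
```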
Substep 1.2, the dispersed gray values of the original image are normalized according to the following formula to obtain the gray values of the transformed image, which are used as weighting scale coefficients; these coefficients are large in the tail-lamp regions detected by the HSV color threshold and small elsewhere, so the tail-lamp region is preliminarily determined from them.
N(i, j) = (I(i, j) − min) / (max − min)
where I(i, j) is the gray value of the original image, N(i, j) is the gray value of the transformed image, and min and max denote the minimum and maximum gray values of the original image, respectively.
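As a sketch, the normalization above is a plain min-max rescaling; scaling to [0, 1] (an assumption here) lets the result act directly as the weighting scale coefficient:

```python
import numpy as np

def normalize_gray(I: np.ndarray) -> np.ndarray:
    """Min-max normalization of substep 1.2: N(i, j) = (I(i, j) - min) / (max - min)."""
    gmin, gmax = float(I.min()), float(I.max())
    return (I.astype(np.float32) - gmin) / (gmax - gmin + 1e-9)  # epsilon avoids /0
```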
Then, exploiting the fact that RGB color space produces different colors by mixing the components in different proportions, enhanced versions of particular components can be obtained by operations among the components, enabling further segmentation. Since a tail-lamp region usually contains red and yellow pixels, the transformations in equations (1) and (2) can be used to enhance the red and yellow components respectively while suppressing the other color components.
f_R = max(0, min(R(x, y) − G(x, y), R(x, y) − B(x, y)))   (1)
f_Y = max(0, min(R(x, y) − B(x, y), G(x, y) − B(x, y)))   (2)
F(x, y) = f_R + f_Y   (3)
In the formulas, R(x, y), G(x, y) and B(x, y) denote the red, green and blue components at position (x, y); the transformed pixel value F(x, y) at position (x, y) is obtained from equation (3).
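A sketch of equations (1)–(3) on float component arrays; reading the combination in (3) as a simple sum is an assumption made here, since the operator in the original is illegible:

```python
import numpy as np

def enhance_red_yellow(R: np.ndarray, G: np.ndarray, B: np.ndarray) -> np.ndarray:
    f_R = np.maximum(0, np.minimum(R - G, R - B))  # eq. (1): red enhancement
    f_Y = np.maximum(0, np.minimum(R - B, G - B))  # eq. (2): yellow enhancement
    return f_R + f_Y                               # eq. (3): combined map F(x, y)
```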
Substep 1.3, after the tail-lamp region has been detected in the image, the original image is divided into foreground and background with the maximum inter-class variance (Otsu) method to obtain a more accurate tail-lamp region. The method traverses the thresholds 0–255, splits the gray image into two regions A and B at each threshold, computes the between-class variance of the two regions, and finally takes the threshold at which this variance is maximal.
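OpenCV implements exactly this search, so a sketch of substep 1.3 needs no manual loop over 0–255:

```python
import cv2

def otsu_split(gray):
    """Maximum inter-class variance (Otsu) thresholding of substep 1.3."""
    t, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return t, binary  # t: the threshold maximizing between-class variance
```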
Substep 1.4, morphological transformation is used to eliminate the inner holes of the tail-lamp regions in the image and obtain the contours of candidate tail-lamp regions. Dilation expands the original target region and shrinks holes inside the target, while erosion shrinks the original target region but enlarges internal holes. Therefore, to eliminate small or isolated regions in the binary image, the image is first eroded, and the eroded binary image is then dilated so that originally adjacent regions are reconnected.
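A sketch of substep 1.4, erosion followed by dilation (a morphological opening); the 3×3 kernel and single iteration are illustrative choices:

```python
import cv2
import numpy as np

def clean_mask(binary: np.ndarray) -> np.ndarray:
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(binary, kernel, iterations=1)  # removes small isolated regions
    return cv2.dilate(eroded, kernel, iterations=1)   # reconnects adjacent regions
```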
Substep 1.5, when there are 2 candidate tail-lamp contours, the final tail-lamp contour pair is determined using the property that the two tail lamps lie on the same horizontal line and are distributed mirror-symmetrically.
Substep 1.5 specifically comprises the following substeps:
Substep 1.5.1, as in Fig. 2, the two tail lamps lie on the same horizontal line and are distributed mirror-symmetrically. Let A and B be the two candidate tail-lamp regions obtained, with areas S1 and S2, and let minS be the smaller of the two; if S1 and S2 satisfy the following inequality, the areas of the two candidate tail-lamp regions are considered similar.
|S1 − S2| < K · minS
wherein K is a proportionality coefficient.
Substep 1.5.2, as shown in Fig. 2, matching is then performed in the horizontal direction: the center positions of the two candidate tail-lamp regions are calculated, together with the distance y_distance between the two centers in the vertical direction and the distance x_distance between them in the horizontal direction.
y_distance must lie within a certain threshold range for the two candidate regions to form a correct tail-lamp pair; it needs to satisfy the following inequality:
y_distance < G · minHeight
wherein G is a proportionality coefficient and minHeight is the smaller of the heights of regions A and B;
to eliminate mismatches caused by neighboring regions in the x direction, the horizontal distance must also be constrained, i.e. x_distance must lie in a proper range and satisfy the following inequalities:
M · minWidth < x_distance < N · minWidth
wherein M and N are proportionality coefficients and minWidth is the smaller of the widths of regions A and B.
A region pair satisfying these constraints is considered a matched tail-lamp pair. In practice the proportionality coefficients in the formulas are set from prior knowledge; the prior values used here are K = 1.5, G = 3, M = 3 and N = 10.
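A sketch of the full pairing test with those prior values; the (x, y, w, h) bounding-box representation is an assumption for illustration:

```python
def is_taillight_pair(box_a, box_b, K=1.5, G=3, M=3, N=10):
    """Substeps 1.5.1-1.5.2: area similarity plus vertical/horizontal constraints."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    s_a, s_b = wa * ha, wb * hb
    if abs(s_a - s_b) >= K * min(s_a, s_b):          # area similarity
        return False
    y_distance = abs((ya + ha / 2) - (yb + hb / 2))  # vertical center distance
    x_distance = abs((xa + wa / 2) - (xb + wb / 2))  # horizontal center distance
    if y_distance >= G * min(ha, hb):                # same-horizontal-line constraint
        return False
    min_width = min(wa, wb)
    return M * min_width < x_distance < N * min_width  # horizontal spacing constraint
```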
Step 2, the radar-video integrated monitoring equipment tracks the vehicles whose tail lamps are on and determines whether the detections belong to the same vehicle.
Specifically, step 2 comprises the following substeps:
Substep 2.1, the radar-video integrated monitoring equipment obtains the distance dn (n = 1, 2, …) of each vehicle at the current moment and the distance dn′ (n = 1, 2, …) of each vehicle at the next moment, the vehicle body length L, the vehicle speed Vn at the current moment, and the interval t between adjacent moments. It also acquires image information of the road vehicles in real time and determines the R, G and B component values of each vehicle body in RGB color space from that image information.
Substep 2.2, two detections can be considered the same vehicle when they simultaneously satisfy the following characteristics (a sketch of this test follows the list):
1) the lane of the vehicle is judged from the vehicle distance dn, and the same vehicle lies in the same lane in two adjacent frames.
2) the body length L of the same vehicle changes by no more than 1% between two adjacent frames.
3) from the current speed Vn and the interval t between adjacent moments, the distance travelled between adjacent moments Sn = Vn × t is determined; the vehicle position at the current moment is taken from the radar-video integrated monitoring equipment, a vehicle search range is determined with Sn as radius, and the vehicle position at the next moment must lie within this search range.
4) the averages of the R, G and B components of the vehicle body at the current moment and at the next moment are calculated respectively, and they must not change by more than 5%.
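A minimal sketch of this same-vehicle test, assuming each detection is a dict with illustrative keys (lane, length, speed, x, y, mean_rgb); the 1% and 5% thresholds follow the description:

```python
import numpy as np

def is_same_vehicle(prev: dict, curr: dict, t: float) -> bool:
    if prev["lane"] != curr["lane"]:                                  # condition 1
        return False
    if abs(curr["length"] - prev["length"]) > 0.01 * prev["length"]:  # condition 2
        return False
    search_radius = prev["speed"] * t                                 # S_n = V_n * t
    if np.hypot(curr["x"] - prev["x"], curr["y"] - prev["y"]) > search_radius:
        return False                                                  # condition 3
    rgb_prev = np.asarray(prev["mean_rgb"], dtype=float)
    rgb_curr = np.asarray(curr["mean_rgb"], dtype=float)
    return bool(np.all(np.abs(rgb_curr - rgb_prev) <= 0.05 * rgb_prev))  # condition 4
```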
Step 3, identifying vehicles with double-flash lamps on
When the tail-lamp detections belong to the same vehicle, the turn-on and turn-off times of its tail lamps are tracked and recorded in real time, and whether the double-flash lamps are on is determined from the switching pattern.
Specifically, once tracking of the vehicle tail lamps is achieved, their turn-on and turn-off times are recorded; when the tail lamps stay on for more than 3 s, they can be regarded as fog lamps. When each turn-on lasts less than 3 s and there are more than three on-off cycles, let the first turn-on time be t1 and the first turn-off time be t1′, giving duration T1; the second turn-on time t2 and turn-off time t2′ give duration T2; the third turn-on time t3 and turn-off time t3′ give duration T3. When the standard deviation of T1, T2 and T3 does not exceed 0.5 and the difference between (t2 − t1) and (t3 − t2) is not more than 1 s, the double-flash lamps are confirmed to be on.
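A sketch of this classification, with on_times/off_times as the recorded turn-on and turn-off instants in seconds for one vehicle; the 3 s, 0.5 and 1 s thresholds are those given above:

```python
import numpy as np

def classify_taillight(on_times, off_times):
    durations = [off - on for on, off in zip(on_times, off_times)]
    if durations and durations[0] > 3.0:
        return "fog lamp"                        # stays on longer than 3 s
    if len(durations) >= 3:
        T1, T2, T3 = durations[:3]
        t1, t2, t3 = on_times[:3]
        periodic = abs((t2 - t1) - (t3 - t2)) <= 1.0
        if np.std([T1, T2, T3]) <= 0.5 and periodic:
            return "double flash"                # equal-length, evenly spaced flashes
    return "unknown"
```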
Step 4, identifying the agglomerate fog
The radar-video integrated monitoring equipment identifies the vehicle that first turns on its double-flash lamps and steps back 5 monitoring points (i.e. the statistics start 5 point locations before that equipment); at the first point it counts, within the vehicle's tail-lamp contour, the number of pixels whose YCbCr values satisfy 95 ≤ Cb ≤ 135 and Cr ≥ 155, and takes this count as the original reference visibility value D1. The radar also measures the distance d and angle α between the vehicle position and the monitoring equipment. In YCbCr, Y is the luminance component of the color, and Cb and Cr are the blue and red concentration offset (chrominance) components.
For each subsequent point, with the distance d and angle α between the vehicle position and the monitoring equipment known, the visibility pixel count Dn of the n-th point is obtained in the same way.
After the vehicle has driven out of the fog section and turned off its double-flash lamps, the visibility pixel count at that moment is DN; where n = 1, 2, 3, …, N and N is the total number of monitoring points from where the vehicle enters the fog to where it leaves it.
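A sketch of the per-point visibility pixel count; note OpenCV orders the channels Y, Cr, Cb, and reading the Cr bound as a lower bound (red tail lamps have high Cr) is an interpretation of the garbled original:

```python
import cv2
import numpy as np

def visibility_pixels(bgr: np.ndarray, contour_mask: np.ndarray) -> int:
    """Count pixels inside the tail-lamp contour with 95 <= Cb <= 135 and Cr >= 155."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    in_range = (cb >= 95) & (cb <= 135) & (cr >= 155)
    return int(np.count_nonzero(in_range & (contour_mask > 0)))
```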
From D1 to DN the visibility pixel counts of the N points are obtained, their variation law is analyzed, and it is judged whether the following conditions are met (a sketch of this test follows the list):
1) on entering the section, visibility drops sharply, the visibility pixel count falling by more than 50%;
2) the visibility pixel counts of the middle section are relatively stable as a whole and remain more than 50% below D1;
3) on leaving the fog section, the visibility pixel counts begin to rise, and DN is within 10% of D1;
4) the length of the middle section is more than 100 meters and not more than 5 kilometers.
When all these conditions are met, a complete agglomerate-fog section can be considered to exist.
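A sketch of the four-condition test, with D the list D1…DN and 100 m point spacing; how many leading and trailing points count as "entry" and "exit" is not specified above, so the windows here are assumptions:

```python
def is_complete_fog_section(D, spacing_m=100.0):
    if len(D) < 4:
        return False
    D1, DN = D[0], D[-1]
    entry_drop = (D1 - min(D[:3])) / D1 > 0.5        # 1) >50% drop on entry
    middle = D[2:-2] if len(D) > 4 else D[1:-1]
    middle_low = all(d < 0.5 * D1 for d in middle)   # 2) stable middle, >50% below D1
    exit_recovered = abs(DN - D1) <= 0.1 * D1        # 3) DN within 10% of D1
    length_m = len(middle) * spacing_m
    length_ok = 100.0 < length_m <= 5000.0           # 4) middle section 100 m - 5 km
    return entry_drop and middle_low and exit_recovered and length_ok
```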
Step 5, determining the visibility variation characteristics within the agglomerate fog
If agglomerate fog is identified in the road section, the visibility pixel value D1 is taken as reference and the visibility pixel value D2 of the next point is obtained, so that the visibility change ratio between the two points is D2/D1; the change ratios of the subsequent points are determined as D3/D1, D4/D1, …, DN/D1, and the point with the minimum ratio is the position of lowest visibility within the fog, whereby the variation characteristics of each agglomerate fog over the whole road section are obtained.
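A sketch of step 5 in the same terms: ratios relative to D1 and the index of their minimum:

```python
import numpy as np

def fog_profile(D):
    ratios = np.asarray(D, dtype=float) / float(D[0])  # D1/D1, D2/D1, ..., DN/D1
    worst_point = int(np.argmin(ratios))               # lowest-visibility position
    return ratios, worst_point
```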
Although the present invention has been described in detail in this specification with reference to specific embodiments and illustrative embodiments, it will be apparent to those skilled in the art that modifications and improvements can be made thereto based on the present invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (6)

1. A method for identifying the distribution of agglomerate fog on an expressway, characterized by comprising the following steps:
step 1, radar-video integrated monitoring equipment acquires image information of road vehicles in real time and transmits it to a computer; the computer determines, from the image information, the tail-lamp contours of vehicles whose tail lamps are on;
step 2, the radar-video integrated monitoring equipment tracks the vehicles whose tail lamps are on and determines whether the tail-lamp detections at adjacent moments belong to the same vehicle;
step 3, when the detections belong to the same vehicle, the turn-on and turn-off times of the vehicle's tail lamps are tracked and recorded in real time, and whether fog lamps or double-flash lamps are on is determined from the tail-lamp switching pattern;
step 4, the radar-video integrated monitoring equipment identifies the vehicle that first turns on its double-flash lamps and steps back 5 monitoring points; at the first of these points it counts, within the vehicle's tail-lamp contour, the number of pixels whose YCbCr values satisfy 95 ≤ Cb ≤ 135 and Cr ≥ 155, and takes this count as the original reference visibility value D1; the visibility pixel count Dn of the n-th point is obtained in the same way;
after the vehicle has driven out of the fog section and turned off its double-flash lamps, the visibility pixel count DN of the N-th point is obtained; where n = 1, 2, 3, …, N and N is the total number of monitoring points from where the vehicle enters the fog to where it leaves it;
from D1 to DN the visibility pixel counts of all N points are obtained, and their variation law is analyzed to judge whether the section is a complete agglomerate-fog section;
step 5, when the section is a complete agglomerate-fog section, with the original reference visibility value D1 of the first point as reference, the visibility change ratios D2/D1, D3/D1, …, DN/D1 of the successive points are determined; the point with the minimum ratio is the position of lowest visibility within the fog, and the variation characteristics of each agglomerate fog over the whole road section are thereby obtained.
2. The method for identifying expressway agglomerate fog distribution according to claim 1, wherein step 1 comprises the following substeps:
substep 1.1, threshold filtering is performed on the road-vehicle image in HSV color space to obtain a binary image, the thresholds being 342° < H < 360° or 0° < H < 30°, and 0.45 < S < 1.0; a Gaussian transformation is then applied to the binary image to obtain dispersed gray values of the original image;
substep 1.2, the dispersed gray values of the original image are normalized according to the following formula to obtain the gray values of the transformed image, which are used as weighting scale coefficients; a tail-lamp region is preliminarily determined from these coefficients;
N(i, j) = (I(i, j) − min) / (max − min)
wherein I(i, j) is the gray value of the original image, N(i, j) is the gray value of the transformed image, and min and max denote the minimum and maximum gray values of the original image, respectively;
substep 1.3, dividing the original image into a foreground part and a background part by using a maximum inter-class variance method to obtain a more accurate tail lamp area;
substep 1.4, eliminating the inner hole of the tail lamp area in the image by using morphological transformation to obtain the outline of the candidate tail lamp area;
substep 1.5, when there are 2 candidate tail-lamp contours, the final tail-lamp contour pair is determined using the property that the two tail lamps lie on the same horizontal line and are distributed mirror-symmetrically.
3. The method for identifying expressway agglomerate fog distribution according to claim 2, wherein substep 1.5 comprises the following substeps:
substep 1.5.1, let A and B be the two candidate tail-lamp regions obtained, with areas S1 and S2, and let minS be the smaller of the two; if S1 and S2 satisfy the following inequality, the areas of the two candidate tail-lamp regions are considered similar;
|S1 − S2| < K · minS
wherein K is a proportionality coefficient, K = 1.5;
substep 1.5.2, the center positions of the two candidate tail-lamp regions are calculated, and the distance y_distance between the two centers in the vertical direction and the distance x_distance between them in the horizontal direction are obtained;
wherein, if y_distance and x_distance satisfy the following inequalities, the tail-lamp pair is considered correct;
y_distance < G · minHeight
M · minWidth < x_distance < N · minWidth
wherein G is a proportionality coefficient, G = 3; minHeight is the smaller of the heights of regions A and B; M and N are proportionality coefficients, M = 3, N = 10; minWidth is the smaller of the widths of regions A and B.
4. The method for identifying expressway agglomerate fog distribution according to claim 1, wherein step 2 comprises the following substeps:
substep 2.1, the radar-video integrated monitoring equipment obtains the distance dn of each vehicle at the current moment, the vehicle body length L, the vehicle speed Vn at the current moment, and the interval t between adjacent moments;
the radar-video integrated monitoring equipment acquires image information of road vehicles in real time and determines the R, G and B component values of each vehicle body in RGB color space from that image information;
substep 2.2, two detections belong to the same vehicle when all the following conditions are satisfied:
the lane of the vehicle is judged from the vehicle distance dn, and the same vehicle lies in the same lane in two adjacent frames;
the body length L of the same vehicle changes by no more than 1% between two adjacent frames;
from the current speed Vn and the interval t between adjacent moments, the distance travelled between adjacent moments Sn = Vn × t is determined; the vehicle position at the current moment is taken from the radar-video integrated monitoring equipment, a vehicle search range is determined with Sn as radius, and the vehicle position at the next moment must lie within this search range;
the averages of the R, G and B components of the vehicle body at the current moment and at the next moment are calculated respectively, and they must not change by more than 5%.
5. The method for identifying expressway agglomerate fog distribution according to claim 1, wherein step 3 is specifically: the turn-on and turn-off times of the vehicle tail lamps are recorded; when the tail lamps stay on for more than 3 s, they are considered to be fog lamps;
when each turn-on lasts less than 3 s and there are more than three on-off cycles, let the first turn-on time be t1 and the first turn-off time be t1′, giving duration T1; the second turn-on time t2 and turn-off time t2′ give duration T2; the third turn-on time t3 and turn-off time t3′ give duration T3; when the standard deviation of T1, T2 and T3 does not exceed 0.5 and the difference between (t2 − t1) and (t3 − t2) is not more than 1 s, the double-flash lamps are confirmed to be on.
6. The method for identifying expressway agglomerate fog distribution according to claim 1, wherein in step 4 the judgment of whether the section is a complete agglomerate-fog section is specifically:
1) on entering the section, visibility drops sharply, the visibility pixel count falling by more than 50%;
2) the visibility pixel counts of the middle section are relatively stable as a whole and remain more than 50% below D1;
3) on leaving the fog section, the visibility pixel counts begin to rise, and DN is within 10% of D1;
4) the length of the middle section is more than 100 meters and not more than 5 kilometers;
when all these conditions are met, a complete agglomerate-fog section is determined to exist.
CN202010075695.5A 2020-01-22 2020-01-22 Method for identifying fog distribution of expressway Active CN111292529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010075695.5A CN111292529B (en) 2020-01-22 2020-01-22 Method for identifying fog distribution of expressway

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010075695.5A CN111292529B (en) 2020-01-22 2020-01-22 Method for identifying fog distribution of expressway

Publications (2)

Publication Number Publication Date
CN111292529A true CN111292529A (en) 2020-06-16
CN111292529B CN111292529B (en) 2021-03-23

Family

ID=71022378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010075695.5A Active CN111292529B (en) 2020-01-22 2020-01-22 Method for identifying fog distribution of expressway

Country Status (1)

Country Link
CN (1) CN111292529B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111918461A (en) * 2020-08-12 2020-11-10 上海仙塔智能科技有限公司 Road condition sharing method, system, server and storage medium
CN112258832A (en) * 2020-09-15 2021-01-22 北京工业大学 Method for operating vehicle information-based cluster cloud meteorological information perception and release system
CN113435405A (en) * 2021-07-15 2021-09-24 山东交通学院 Expressway night fog monitoring method and system based on video images
CN113706889A (en) * 2021-08-02 2021-11-26 浪潮天元通信信息系统有限公司 Highway agglomerate fog measuring system and method based on target detection and analysis
CN114399924A (en) * 2022-02-15 2022-04-26 青岛海信网络科技股份有限公司 Vehicle, edge computing device, server and information transmission method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778786A (en) * 2013-12-17 2014-05-07 东莞中国科学院云计算产业技术创新与育成中心 Traffic violation detection method based on significant vehicle part model
US20150022331A1 (en) * 2013-07-18 2015-01-22 GM Global Technology Operations LLC Vehicle environment monitoring system
CN108648479A (en) * 2018-05-11 2018-10-12 长安大学 A kind of device and method for reminding night group's mist section in real time using electronic map
CN108663368A (en) * 2018-05-11 2018-10-16 长安大学 A kind of system and method for real-time monitoring freeway network night entirety visibility
CN109872554A (en) * 2019-03-22 2019-06-11 山西省交通科学研究院有限公司 A kind of expressway fog zone promotes the bus or train route early warning system of traffic safety

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022331A1 (en) * 2013-07-18 2015-01-22 GM Global Technology Operations LLC Vehicle environment monitoring system
CN103778786A (en) * 2013-12-17 2014-05-07 东莞中国科学院云计算产业技术创新与育成中心 Traffic violation detection method based on significant vehicle part model
CN108648479A (en) * 2018-05-11 2018-10-12 长安大学 A kind of device and method for reminding night group's mist section in real time using electronic map
CN108663368A (en) * 2018-05-11 2018-10-16 长安大学 A kind of system and method for real-time monitoring freeway network night entirety visibility
CN109872554A (en) * 2019-03-22 2019-06-11 山西省交通科学研究院有限公司 A kind of expressway fog zone promotes the bus or train route early warning system of traffic safety

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111918461A (en) * 2020-08-12 2020-11-10 上海仙塔智能科技有限公司 Road condition sharing method, system, server and storage medium
CN111918461B (en) * 2020-08-12 2023-08-08 上海仙塔智能科技有限公司 Road condition sharing method, system, server and storage medium
CN112258832A (en) * 2020-09-15 2021-01-22 北京工业大学 Method for operating vehicle information-based cluster cloud meteorological information perception and release system
CN112258832B (en) * 2020-09-15 2022-06-24 北京工业大学 Method for operating vehicle information-based cluster cloud meteorological information perception and release system
CN113435405A (en) * 2021-07-15 2021-09-24 山东交通学院 Expressway night fog monitoring method and system based on video images
WO2023283997A1 (en) * 2021-07-15 2023-01-19 山东交通学院 Video image-based method and system for monitoring highway agglomerate fog at night
CN113435405B (en) * 2021-07-15 2023-09-08 山东交通学院 Expressway night fog monitoring method and system based on video images
CN113706889A (en) * 2021-08-02 2021-11-26 浪潮天元通信信息系统有限公司 Highway agglomerate fog measuring system and method based on target detection and analysis
CN114399924A (en) * 2022-02-15 2022-04-26 青岛海信网络科技股份有限公司 Vehicle, edge computing device, server and information transmission method
CN114399924B (en) * 2022-02-15 2023-07-11 青岛海信网络科技股份有限公司 Vehicle, edge computing equipment, server and information transmission method

Also Published As

Publication number Publication date
CN111292529B (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN111292529B (en) Method for identifying fog distribution of expressway
CN104361350B (en) A kind of traffic mark identifying system
Cheng et al. Lane detection with moving vehicles in the traffic scenes
CN108357418B (en) Preceding vehicle driving intention analysis method based on tail lamp identification
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
DE102006041857B4 (en) Method and system for improving traffic safety
US9268740B2 (en) External environment recognizing device for vehicle and light distribution control system using the same
US20080043099A1 (en) Symmetric filter patterns for enhanced performance of single and concurrent driver assistance applications
CN111815959B (en) Vehicle violation detection method and device and computer readable storage medium
US20160335508A1 (en) Vehicle detection system and method
CN111243336B (en) Vehicle rear-end collision early warning system and method based on double-flash-lamp identification
CN111292556B (en) Vehicle early warning system and method based on roadside double-flash-lamp recognition
CN110688907B (en) Method and device for identifying object based on night road light source
EP1887492A1 (en) Symmetric filter patterns for enhanced performance of single and concurrent driver assistance applications
US20230174091A1 (en) Motor-vehicle driving assistance in low meteorological visibility conditions, in particular with fog
CN111275981A (en) Method for identifying starting brake lamp and double-flashing lamp of highway vehicle
CN111688568B (en) Brightness detection method, vehicle lamp control method, system thereof and storage medium
CN111284396B (en) Temperature rise early warning method for truck brake drum
Boonsim et al. An algorithm for accurate taillight detection at night
KR102252913B1 (en) System and method of detecting signal light
US20230368545A1 (en) Method for processing images
CN113743226B (en) Daytime front car light language recognition and early warning method and system
Huang et al. Nighttime vehicle detection and tracking base on spatiotemporal analysis using RCCC sensor
Feng et al. Forward vehicle deceleration detection system for motorcycle at nighttime
JP2005339234A (en) Front vehicle monitoring device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant