CN110826412A - Highway visibility detection system and method - Google Patents

Highway visibility detection system and method

Info

Publication number
CN110826412A
CN110826412A (application CN201910958879.3A)
Authority
CN
China
Prior art keywords
visibility
road image
lane line
lane
highway
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910958879.3A
Other languages
Chinese (zh)
Other versions
CN110826412B (en)
Inventor
陈湘军
徐瑞鹏
成玉荣
杜晨浩
郭丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Institute of Technology
Original Assignee
Jiangsu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Institute of Technology filed Critical Jiangsu Institute of Technology
Priority to CN201910958879.3A priority Critical patent/CN110826412B/en
Publication of CN110826412A publication Critical patent/CN110826412A/en
Application granted granted Critical
Publication of CN110826412B publication Critical patent/CN110826412B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a highway visibility detection system and method. The system comprises a road information acquisition module, a lane line detection and labeling module, a depth profile feature analysis module, and a visibility calculation module. The lane line detection and labeling module processes the road image acquired by the road information acquisition module to obtain the lane lines and the number of lane lines in the image, labels feature regions of the lane lines, and obtains the actual distance of each labeled region; the depth profile feature analysis module processes the labeled regions to obtain their visibility degree coefficients; and the visibility calculation module obtains the visibility of the expressway from the actual distances and visibility degree coefficients of the labeled regions together with the lane lines and their number. The invention obtains visibility detection results with high accuracy at low cost, adapts to complex road scenes, and has wide applicability.

Description

Highway visibility detection system and method
Technical Field
The invention relates to the technical field of visibility detection, in particular to a highway visibility detection system and a highway visibility detection method.
Background
Expressways are an important component of China's road traffic network and carry much of the country's cross-regional traffic and transportation. Because vehicles on an expressway travel at high speed, low visibility severely limits the driver's field of view, and traffic accidents are especially likely when a driver suddenly runs into fog. Monitoring expressway visibility in real time is therefore very important.
At present, the main methods for expressway visibility detection are manual visual inspection and visibility meters. Manual inspection is strongly influenced by subjective factors, has low accuracy, and cannot guarantee real-time performance or coverage. Visibility meters, which are generally based on infrared or laser techniques, measure the extinction coefficient or visibility value fairly accurately, but their detection range is short and they are expensive.
Other approaches include detection based on binocular camera calibration and detection based on a contrast model. The former requires the calibration template and the measuring camera to be installed at a specific angle, which is difficult to achieve in a real road scene; the latter suffers from large detection errors in practice.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems described above. Therefore, an object of the present invention is to provide a highway visibility detection system that obtains visibility detection results with high accuracy, is low in cost, adapts to complex road scenes, and has wide applicability.
The second purpose of the invention is to provide a highway visibility detection method.
In order to achieve the above object, a first embodiment of the present invention provides a highway visibility detection system, comprising: a road information acquisition module for acquiring a road image of the expressway; a lane line detection and labeling module for processing the road image with a lane line detection technique to obtain the lane lines and the number of lane lines in the road image, calculating the affine matrix that projects the road image onto an affine plane from the lane lines, labeling feature regions of the lane lines in the road image, and calculating the actual distance of each labeled region from the affine matrix; a depth profile feature analysis module for processing the labeled regions with a depth profile feature analysis technique to obtain the visibility degree coefficient of each labeled region; and a visibility calculation module for calculating the visibility of the clear lane-line area of the road image from the lane lines and their number, and the visibility of the fuzzy lane-line area of the road image from the actual distance and the visibility degree coefficient.
According to the highway visibility detection system of the embodiment of the invention, the lane line detection and labeling module processes the road image acquired by the road information acquisition module to obtain the lane lines and the number of lane lines, labels the feature regions of the lane lines, and obtains the actual distance of each labeled region; the depth profile feature analysis module processes the labeled regions to obtain their visibility degree coefficients; and the visibility calculation module finally obtains the visibility of the expressway from the lane lines and their number together with the actual distances and visibility degree coefficients of the labeled regions.
In addition, the highway visibility detection system proposed according to the above embodiment of the present invention may further have the following additional technical features:
according to one embodiment of the invention, the lane line detection technique includes edge detection and multi-thresholding.
According to one embodiment of the invention, the depth profile feature analysis module comprises a profile feature analysis model trained on a deep learning neural network.
According to one embodiment of the present invention, labeling the feature region of the lane line in the road image includes: marking the two farthest lane lines visible in the road image, and predicting and marking the next lane line.
Furthermore, the highway visibility detection system also comprises an information management module for grading the visibility of the expressway and issuing early warnings according to the grade.
In order to achieve the above object, a second embodiment of the present invention provides a highway visibility detection method, comprising: acquiring a road image of the expressway; processing the road image with a lane line detection technique to obtain the lane lines and the number of lane lines in the road image; calculating, from the lane lines, the affine matrix that projects the road image onto an affine plane; labeling feature regions of the lane lines in the road image, and calculating the actual distance of each labeled region from the affine matrix; processing the labeled regions with a depth profile feature analysis technique to obtain the visibility degree coefficient of each labeled region; and calculating the visibility of the clear lane-line area of the road image from the lane lines and their number, and the visibility of the fuzzy lane-line area from the actual distance and the visibility degree coefficient.
According to the highway visibility detection method of the embodiment of the invention, the acquired road image is processed with a lane line detection technique to obtain the lane lines and the number of lane lines; the feature regions of the lane lines are labeled and the actual distance of each labeled region is obtained; the labeled regions are processed with a depth profile feature analysis technique to obtain their visibility degree coefficients; and the visibility of the expressway is finally obtained from the lane lines and their number together with the actual distances and visibility degree coefficients of the labeled regions.
In addition, the method for detecting visibility of a highway according to the embodiment of the present invention may further have the following additional technical features:
according to one embodiment of the present invention, processing the road image by a lane line detection technique includes: performing edge detection and multi-threshold processing on the road image features.
According to one embodiment of the invention, the labeled region is subjected to deep profile feature analysis through a profile feature analysis model trained on a deep learning neural network.
According to one embodiment of the present invention, labeling the feature region of the lane line in the road image includes: marking the two farthest lane lines visible in the road image, and predicting and marking the next lane line.
Further, the highway visibility detection method further comprises: grading the visibility of the road and issuing an early warning according to the grade.
Drawings
FIG. 1 is a block schematic diagram of a highway visibility detection system in accordance with an embodiment of the present invention;
FIG. 2 is a schematic view of a road image lane line according to one embodiment of the present invention;
FIG. 3 is a schematic view of a road image projected from a camera plane to an affine plane according to one embodiment of the present invention;
FIG. 4 is a logic diagram of a deep learning neural network-based training model according to an embodiment of the present invention;
FIG. 5 is a block schematic diagram of a highway visibility detection system in accordance with one embodiment of the present invention;
FIG. 6 is a flowchart of a highway visibility detection method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 shows a highway visibility detection system according to an embodiment of the present invention.
As shown in fig. 1, the highway visibility detection system according to the embodiment of the present invention includes a road information acquisition module 10, a lane line detection and labeling module 20, a depth profile feature analysis module 30, and a visibility calculation module 40. The road information acquisition module 10 is used for acquiring a road image. The lane line detection and labeling module 20 is configured to process the road image with a lane line detection technique to obtain the lane lines and the number of lane lines in the road image, calculate the affine matrix that projects the road image onto an affine plane from the lane lines, label feature regions of the lane lines in the road image, and calculate the actual distance of each labeled region from the affine matrix. The depth profile feature analysis module 30 is configured to process the labeled regions with a depth profile feature analysis technique to obtain their visibility degree coefficients. The visibility calculation module 40 is configured to calculate the visibility of the clear lane-line area of the road image from the lane lines and their number, and the visibility of the fuzzy lane-line area from the actual distance and the visibility degree coefficient.
In an embodiment of the present invention, the road information acquisition module 10 may acquire the road image of the expressway by performing background separation on the expressway surveillance video.
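As a minimal illustration of this step (not the patent's concrete implementation), the following sketch uses OpenCV's MOG2 background subtractor to recover a vehicle-free road image from a surveillance clip; the video path, frame budget and parameters are assumptions.

```python
# Hedged sketch of background separation for acquiring the road image;
# the video file name, frame budget and parameters are illustrative assumptions.
import cv2

def extract_road_background(video_path: str, max_frames: int = 500):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=max_frames, detectShadows=False)
    background = None
    read_frames = 0
    while read_frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        subtractor.apply(frame)                       # update the background model
        background = subtractor.getBackgroundImage()  # current estimate without moving vehicles
        read_frames += 1
    cap.release()
    return background

# road_img = extract_road_background("expressway_camera.mp4")
```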
In one embodiment of the present invention, the lane line detection and labeling module 20 may obtain the lane lines and the number of lane lines in the road image through edge detection and multi-threshold processing.
Specifically, the image pixel data of the road image may be normalized to [0, 1] and filtered with a low-pass filter; the image pixel points are then fitted and classified, and the image edge pixels are separated, so as to extract the lane lines and the number of lanes in the road image. (The extraction formula is published only as an image in the original and is not reproduced here.) In that formula, u is the independent variable of the image pixel signal sequence, Δu is the distance between the peak and trough of a lane line, N_v is the number of peaks and troughs, and i is the peak index.
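Because the extraction formula itself is not reproduced above, the following is only an illustrative sketch of the described pipeline (normalize, low-pass filter, separate edge pixels, count peaks and troughs along a scan line); the scan row, filter sizes and peak-spacing threshold are assumptions.

```python
# Illustrative lane-line extraction sketch; parameters are assumptions, and the
# peak/trough counting stands in for the formula published only as an image.
import cv2
import numpy as np
from scipy.signal import find_peaks

def detect_lane_lines(road_img, scan_row: int):
    gray = cv2.cvtColor(road_img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0  # normalize to [0, 1]
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)                                   # low-pass filtering
    edges = cv2.Canny((smoothed * 255).astype(np.uint8), 50, 150)                  # separate edge pixels
    signal = edges[scan_row, :].astype(np.float32)    # pixel signal sequence u along one scan line
    peaks, _ = find_peaks(signal, distance=20)        # edge responses at lane-line crossings
    troughs, _ = find_peaks(-signal, distance=20)
    return peaks, troughs, len(peaks)                 # lane-line positions and their number
```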
In one embodiment of the present invention, the affine matrix that projects the road image from the camera plane onto the affine plane, as shown in fig. 2, may be calculated from the pixel positions of the lane lines in the road image:

W = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}

where the sub-matrix \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} represents the linear transformation, [a_{31}\ a_{32}] the translation, and [a_{13}\ a_{23}]^{T} generates the perspective transformation.
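For illustration only, this matrix can be estimated from four lane-line point correspondences, for example with OpenCV; the coordinates below are hypothetical, and OpenCV's matrix M acts on column vectors, so W in the row-vector notation above corresponds to its transpose.

```python
# Sketch of estimating the projection matrix from lane-line pixel positions;
# all point coordinates are hypothetical examples.
import cv2
import numpy as np

# (u, v) positions of lane-line points on the camera plane
src = np.float32([[420, 700], [860, 700], [560, 420], [720, 420]])
# corresponding (x, y) positions on the affine (bird's-eye) plane
dst = np.float32([[300, 900], [500, 900], [300, 100], [500, 100]])

M = cv2.getPerspectiveTransform(src, dst)  # 3x3 matrix, column-vector convention
W = M.T                                    # row-vector convention used in the description
```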
In one embodiment of the invention, the two farthest lane lines visible in the road image can be labeled, the next lane line is predicted and labeled, and the actual distance of the labeled area is obtained through an affine matrix.
Specifically, as shown in fig. 3, the a and b feature regions of the lane line Ln-1 and the c and d feature regions of the lane line Ln visible in the road image may be labeled, and the next lane line Ln +1 may be predicted and the e and f feature regions may be labeled.
Further, the labeled regions in the road image can be projected onto the affine plane through the affine matrix shown in fig. 2; specifically, a labeled point can be transformed by

[x' \ \ y' \ \ w'] = [u \ \ v \ \ 1]\, W

where u and v are the coordinates of the point on the camera plane, x and y are its coordinates on the affine plane, and W is the weight (affine) matrix. Further, x and y may be calculated from x', y' and w' by the following formulas:

x = \frac{x'}{w'} = \frac{a_{11}u + a_{21}v + a_{31}}{a_{13}u + a_{23}v + a_{33}}, \qquad y = \frac{y'}{w'} = \frac{a_{12}u + a_{22}v + a_{32}}{a_{13}u + a_{23}v + a_{33}}
Further, the actual distance of the labeled region can be calculated from the fixed ratio between the affine plane and the actual road.
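A minimal sketch of this distance computation, assuming the matrix M from the previous sketch and an assumed metres-per-pixel calibration ratio for the affine plane:

```python
# Project two labelled feature points onto the affine plane and convert the pixel
# distance to metres; metres_per_pixel is an assumed calibration constant.
import cv2
import numpy as np

def labelled_region_distance(pt_a, pt_b, M, metres_per_pixel: float) -> float:
    pts = np.float32([[pt_a, pt_b]])              # shape (1, 2, 2): two 2-D points
    warped = cv2.perspectiveTransform(pts, M)[0]  # their positions on the affine plane
    pixel_dist = float(np.linalg.norm(warped[0] - warped[1]))
    return pixel_dist * metres_per_pixel          # fixed ratio of affine plane to actual road

# d = labelled_region_distance((560, 420), (720, 420), M, metres_per_pixel=0.05)
```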
In one embodiment of the present invention, the depth profile feature analysis module 30 includes a profile feature analysis model trained based on a deep learning neural network.
Specifically, as shown in fig. 4, in the profile feature analysis model based on the deep learning neural network, the road images are converted into a sample data set; the sample data set is input into the data layer of the network, passes to the convolutional layer, then through the activation function to the pooling layer and the LRN layer, and finally, after Dropout is applied to prevent overfitting, reaches the linear classifier.
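The patent does not publish layer sizes, so the following PyTorch sketch only mirrors the described layer order (data → convolution → activation → pooling → LRN → Dropout → linear classifier); channel counts, kernel sizes and the input resolution are assumptions.

```python
# Hedged sketch of the profile-feature analysis network; all layer sizes are assumptions.
import torch
import torch.nn as nn

class ProfileFeatureNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2),  # convolutional layer
            nn.ReLU(inplace=True),                       # activation function
            nn.MaxPool2d(kernel_size=2),                 # pooling layer
            nn.LocalResponseNorm(size=5),                # LRN layer
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),                           # Dropout to prevent overfitting
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, num_classes),        # linear classifier f(x, W, b) = Wx + b
        )

    def forward(self, x):                                # x: (N, 3, 64, 64) under these assumptions
        return self.classifier(self.features(x))

# logits = ProfileFeatureNet()(torch.randn(1, 3, 64, 64))
```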
Further, the sample data set may be classified by a linear classifier, specifically by the following linear classifier function:
f(x,W,b)=Wx+b
where x is the input three-dimensional column vector, W is the weight matrix, and b is the offset (bias) value.
It should be noted that the sample data set includes a positive sample data set and a negative sample data set: the positive sample data set may contain road images of the expressway under different visibilities, and the negative sample data set may contain road images of the expressway with obstacles such as vehicles. The sample data set can also be processed with a loss function to reduce the sample loss and extract sample features, while measuring how well the projection produced by the weight matrix fits the true classes.
The loss over the sample data set can be computed by the following formulas:

s_j = f(x_i, W)_j

L_i = \sum_{j \ne y_i} \max\left(0,\ s_j - s_{y_i} + \Delta\right)

where x_i is the i-th sample, f(x_i, W) is the vector of class scores of the i-th sample, f(x_i, W)_j is the score of the j-th class, s_j is the score of a non-true class, s_{y_i} is the score of the true class of x_i, and Δ is a constant margin.

Further, the loss function of the sample data set with regularization may be obtained as

L = \frac{1}{N} \sum_{i} L_i + \lambda \sum_{k} \sum_{l} W_{k,l}^{2}
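The classifier and the (reconstructed) hinge loss with L2 regularization can be written compactly as below; the margin Δ and the regularization strength λ are assumed hyper-parameter values.

```python
# Linear scores f(x, W, b) = Wx + b and multi-class hinge loss with L2 regularisation;
# delta (margin) and lam (regularisation strength) are assumed values.
import numpy as np

def svm_loss(W, b, X, y, delta: float = 1.0, lam: float = 1e-3) -> float:
    """X: (N, D) flattened samples; y: (N,) true class indices; W: (C, D); b: (C,)."""
    scores = X @ W.T + b                                   # shape (N, C)
    correct = scores[np.arange(len(y)), y][:, None]        # s_{y_i}
    margins = np.maximum(0.0, scores - correct + delta)    # max(0, s_j - s_{y_i} + delta)
    margins[np.arange(len(y)), y] = 0.0                    # the true class does not contribute
    data_loss = margins.sum(axis=1).mean()                 # (1/N) * sum_i L_i
    reg_loss = lam * np.sum(W * W)                         # lambda * sum_{k,l} W_{k,l}^2
    return data_loss + reg_loss
```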
in an embodiment of the present invention, the depth profile feature analysis module 30, i.e. the depth profile feature analysis model, may process the labeled region to obtain the visibility coefficient of the labeled region.
In an embodiment of the present invention, the visibility calculation module 40 may calculate the visibility of the clear lane-line area of the road image from the lane lines and the number of lane lines in the road image, specifically from the number of lane lines and the spacing between every two adjacent lane lines, such as the lane lines L1, L2, …, Ln-2 shown in fig. 4. For example, if the distance between every two adjacent lane lines is 15 m, each clearly visible lane-line interval corresponds to 15 m of visibility.
On the other hand, the visibility calculation module 40 may calculate the visibility of the fuzzy lane-line area of the road image from the visibility degree coefficient of the labeled region and its actual distance, for example the visibility of the areas of the lane lines Ln-1 and Ln+1 shown in fig. 4, specifically by the following formulas:
(The formula defining V(β) from the visibility degree coefficient dg is published only as an image in the original and is not reproduced here.)

vis = k·V(β) + d

where dg is the visibility degree coefficient, V(β) is a linear weighting function, d is the actual distance of the labeled region, k is the maximum error coefficient, and β is the contrast threshold used to define meteorological visibility.
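As a minimal sketch of the two computations (assuming V(β) has already been derived from the visibility degree coefficient dg, and reading the clear-area rule as one lane-line spacing of visibility per clearly visible interval):

```python
# Hedged sketch of the clear-area and fuzzy-area visibility computations.
def clear_area_visibility(n_clear_intervals: int, lane_line_spacing_m: float) -> float:
    # assumption: each clearly visible lane-line interval contributes one spacing of visibility
    return n_clear_intervals * lane_line_spacing_m

def fuzzy_area_visibility(v_beta: float, d: float, k: float) -> float:
    # vis = k * V(beta) + d, with d the actual distance of the labelled region
    return k * v_beta + d
```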
In one embodiment of the present invention, as shown in fig. 5, the highway visibility detection system further includes an information management module 50. The information management module 50 may grade the visibility of the expressway and issue early warnings according to the grade: for example, visibility below 300 m may be classified as level one with a blue warning, below 200 m as level two with a yellow warning, below 100 m as level three with an orange warning, and below 50 m as level four with a red warning.
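A minimal sketch of this grading rule, using the thresholds given above:

```python
# Map a visibility value (in metres) to the warning level described in the text.
def visibility_warning(vis_m: float):
    if vis_m < 50:
        return "level 4: red warning"
    if vis_m < 100:
        return "level 3: orange warning"
    if vis_m < 200:
        return "level 2: yellow warning"
    if vis_m < 300:
        return "level 1: blue warning"
    return None  # no warning above 300 m
```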
In addition, the information management module 50 may mark the visibility information in the road images of each section of the expressway, store the visibility information of the current road, compare and analyze it against the historical data of that road, and perform early-warning management.
According to the highway visibility detection system of the embodiment of the invention, the lane line detection and labeling module processes the road image acquired by the road information acquisition module to obtain the lane lines and the number of lane lines, labels the feature regions of the lane lines, and obtains the actual distance of each labeled region; the depth profile feature analysis module processes the labeled regions to obtain their visibility degree coefficients; and the visibility calculation module finally obtains the visibility of the expressway from the lane lines and their number together with the actual distances and visibility degree coefficients of the labeled regions.
Corresponding to the highway visibility detection system provided by the embodiment, the embodiment of the invention also provides a highway visibility detection method.
As shown in fig. 6, the method for detecting visibility on a highway according to the embodiment of the present invention includes the steps of:
and S1, acquiring a road image of the expressway.
In one embodiment of the invention, the road image of the expressway can be acquired by performing background separation processing on the expressway monitoring video.
S2: processing the road image with a lane line detection technique to obtain the lane lines and the number of lane lines in the road image.
Specifically, the image pixel data of the road image may be normalized to [0, 1] and filtered with a low-pass filter; the image pixel points are then fitted and classified, and the image edge pixels are separated, so as to extract the lane lines and the number of lanes in the road image. (The extraction formula is published only as an image in the original and is not reproduced here.) In that formula, u is the independent variable of the image pixel signal sequence, Δu is the distance between the peak and trough of a lane line, N_v is the number of peaks and troughs, and i is the peak index.
S3: calculating, from the lane lines, the affine matrix that projects the road image onto the affine plane.
In one embodiment of the present invention, the affine matrix that projects the road image from the camera plane onto the affine plane, as shown in fig. 2, may be calculated from the pixel positions of the lane lines in the road image:

W = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}

where the sub-matrix \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} represents the linear transformation, [a_{31}\ a_{32}] the translation, and [a_{13}\ a_{23}]^{T} generates the perspective transformation.
S4: labeling the feature regions of the lane lines in the road image, and calculating the actual distance of each labeled region from the affine matrix.
In one embodiment of the invention, the two farthest lane lines visible in the road image can be labeled, the next lane line is predicted and labeled, and the actual distance of the labeled area is obtained through an affine matrix.
Specifically, as shown in fig. 3, the a and b feature regions of the lane line Ln-1 and the c and d feature regions of the lane line Ln visible in the road image may be labeled, and the next lane line Ln +1 may be predicted and the e and f feature regions may be labeled.
Further, the labeled area in the road image can be projected to an affine plane through the affine matrix shown in fig. 2, and specifically, the labeled area can be transformed through the following formula:
wherein u and v are coordinates of the road image on the camera plane, x and y are coordinates of the road image on the affine plane, and W is a weight matrix.
Further, x 'and y' may be calculated by the following formula:
Figure BDA0002228268480000111
Figure BDA0002228268480000112
Further, the actual distance of the labeled region can be calculated from the fixed ratio between the affine plane and the actual road.
S5: processing the labeled regions with a depth profile feature analysis technique to obtain the visibility degree coefficient of each labeled region.
In one embodiment of the invention, the visibility degree coefficient of the labeled region can be obtained by performing depth profile feature analysis on the labeled region through a profile feature analysis model trained on a deep learning neural network.
As shown in fig. 4, in the profile feature analysis model based on the deep learning neural network, the road images are converted into a sample data set; the sample data set is input into the data layer of the network, passes to the convolutional layer, then through the activation function to the pooling layer and the LRN layer, and finally, after Dropout is applied to prevent overfitting, reaches the linear classifier.
Further, the sample data set may be classified by a linear classifier, specifically by the following linear classifier function:
f(x,W,b)=Wx+b
where x is the input three-dimensional column vector, W is the weight matrix, and b is the offset (bias) value.
It should be noted that the sample data set includes a positive sample data set and a negative sample data set: the positive sample data set may contain road images of the expressway under different visibilities, and the negative sample data set may contain road images of the expressway with obstacles such as vehicles. The sample data set can also be processed with a loss function to reduce the sample loss and extract sample features, while measuring how well the projection produced by the weight matrix fits the true classes.
The loss over the sample data set can be computed by the following formulas:

s_j = f(x_i, W)_j

L_i = \sum_{j \ne y_i} \max\left(0,\ s_j - s_{y_i} + \Delta\right)

where x_i is the i-th sample, f(x_i, W) is the vector of class scores of the i-th sample, f(x_i, W)_j is the score of the j-th class, s_j is the score of a non-true class, s_{y_i} is the score of the true class of x_i, and Δ is a constant margin.

Further, the loss function of the sample data set with regularization may be obtained as

L = \frac{1}{N} \sum_{i} L_i + \lambda \sum_{k} \sum_{l} W_{k,l}^{2}
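For illustration, the model can be trained against this loss with PyTorch's built-in multi-class hinge loss and weight decay standing in for the regularization term; the data loader, margin, learning rate and epoch count are assumptions.

```python
# Hedged training-loop sketch; margin, learning rate and epochs are assumptions,
# and `model` may be any classifier producing class scores.
import torch
import torch.nn as nn

def train(model, loader, epochs: int = 10, delta: float = 1.0, lam: float = 1e-3):
    criterion = nn.MultiMarginLoss(margin=delta)   # hinge loss: max(0, s_j - s_y + delta)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=lam)  # L2 regularisation
    for _ in range(epochs):
        for images, labels in loader:              # positive and negative samples
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```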
and S6, calculating the visibility of the clear area of the lane line in the road image according to the lane line and the number of the lane lines, and calculating the visibility of the fuzzy area of the lane line in the road image according to the actual distance and the visibility degree coefficient.
In an embodiment of the invention, on the one hand, the visibility of the clear lane-line area of the road image can be calculated from the lane lines and the number of lane lines in the road image, specifically from the number of lane lines and the spacing between every two adjacent lane lines, such as the lane lines L1, L2, …, Ln-2 shown in fig. 4. For example, if the distance between every two adjacent lane lines is 15 m, each clearly visible lane-line interval corresponds to 15 m of visibility.
On the other hand, the visibility of the fuzzy lane-line area of the road image, for example the areas of the lane lines Ln-1 and Ln+1 shown in fig. 4, can be calculated from the visibility degree coefficient of the labeled region and its actual distance, specifically by the following formulas:
(The formula defining V(β) from the visibility degree coefficient dg is published only as an image in the original and is not reproduced here.)

vis = k·V(β) + d

where dg is the visibility degree coefficient, V(β) is a linear weighting function, d is the actual distance of the labeled region, k is the maximum error coefficient, and β is the contrast threshold used to define meteorological visibility.
In one embodiment of the present invention, the highway visibility detection method further includes grading the visibility of the road and issuing an early warning according to the grade.
For example, a blue warning may be issued when the visibility is less than 300 m, a yellow warning when it is less than 200 m, an orange warning when it is less than 100 m, and a red warning when it is less than 50 m.
In addition, the visibility information can be marked in the road images of each section of the expressway, stored for the current road, and compared and analyzed against the historical data of that road for early-warning management. The early-warning information can be broadcast to mobile terminals, road information boards and the road management system to remind drivers to drive safely and to help expressway management improve its efficiency and level of intelligence.
According to the highway visibility detection method of the embodiment of the invention, the acquired road image is processed with a lane line detection technique to obtain the lane lines and the number of lane lines; the feature regions of the lane lines are labeled and the actual distance of each labeled region is obtained; the labeled regions are processed with a depth profile feature analysis technique to obtain their visibility degree coefficients; and the visibility of the expressway is finally obtained from the lane lines and their number together with the actual distances and visibility degree coefficients of the labeled regions.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A highway visibility detection system, comprising:
a road information acquisition module for acquiring a road image of an expressway;
the lane line detection and marking module is used for processing the road image through a lane line detection technology to obtain lane lines and the number of the lane lines in the road image, calculating an affine matrix of the road image projected to an affine plane according to the lane lines, marking characteristic areas of the lane lines in the road image, and calculating the actual distance of the marked areas according to the affine matrix;
the depth profile feature analysis module is used for processing the marked region through a depth profile feature analysis technology to obtain a visibility degree coefficient of the marked region;
and the visibility calculation module is used for calculating the visibility of a clear area of the lane line in the road image according to the lane line and the number of the lane lines and calculating the visibility of a fuzzy area of the lane line in the road image according to the actual distance and the visibility degree coefficient.
2. The highway visibility detection system of claim 1, wherein the lane line detection technique comprises edge detection and multi-threshold processing.
3. The highway visibility detection system of claim 1, wherein the depth profile feature analysis module comprises a profile feature analysis model trained based on a deep learning neural network.
4. The highway visibility detection system according to claim 2, wherein labeling a characteristic region of the lane line in the road image comprises: marking the two farthest lane lines visible in the road image, and predicting and marking the next lane line.
5. The highway visibility detection system according to claim 4, further comprising an information management module for grading visibility of the highway and giving an early warning according to the grade.
6. A highway visibility detection method is characterized by comprising the following steps:
acquiring a road image of an expressway;
processing the road image by a lane line detection technology to obtain lane lines and the number of the lane lines in the road image;
calculating an affine matrix of the road image projected to an affine plane according to the lane line;
marking a characteristic region of the lane line in the road image, and calculating an actual distance of the marked region according to the affine matrix;
processing the marked area through a depth profile characteristic analysis technology to obtain a visibility degree coefficient of the marked area;
and calculating the visibility of a clear area of the lane line in the road image according to the lane line and the number of the lane lines, and calculating the visibility of a fuzzy area of the lane line in the road image according to the actual distance and the visibility degree coefficient.
7. The highway visibility detection method according to claim 6, wherein processing the road image by a lane line detection technique comprises: performing edge detection and multi-threshold processing on the road image features.
8. The highway visibility detection method according to claim 6, wherein the labeled regions are subjected to depth profile feature analysis through a profile feature analysis model trained on a deep learning neural network.
9. The highway visibility detection method according to claim 7, wherein labeling a characteristic region of the lane line in the road image comprises: marking the two farthest lane lines visible in the road image, and predicting and marking the next lane line.
10. The highway visibility detection method according to claim 9, further comprising: grading the visibility of the road and issuing an early warning according to the grade.
CN201910958879.3A 2019-10-10 2019-10-10 Highway visibility detection system and method Active CN110826412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910958879.3A CN110826412B (en) 2019-10-10 2019-10-10 Highway visibility detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910958879.3A CN110826412B (en) 2019-10-10 2019-10-10 Highway visibility detection system and method

Publications (2)

Publication Number Publication Date
CN110826412A (en) 2020-02-21
CN110826412B CN110826412B (en) 2023-07-11

Family

ID=69549054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910958879.3A Active CN110826412B (en) 2019-10-10 2019-10-10 Highway visibility detection system and method

Country Status (1)

Country Link
CN (1) CN110826412B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005338941A (en) * 2004-05-24 2005-12-08 Fujitsu Ltd Method and device for detecting visibility
CN105261018A (en) * 2015-10-14 2016-01-20 山东交通学院 Visibility detection method based on optical model and dark primary color priori theory
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN107945174A (en) * 2017-12-12 2018-04-20 嘉兴四维智城信息科技有限公司 Fastlink visibility measuring method based on video
CN109409202A (en) * 2018-09-06 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Robustness method for detecting lane lines based on dynamic area-of-interest

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周庆逵等: "Design and Implementation of a Video-Based Road Condition Visibility Detection System" (基于视频的路况能见度检测系统的设计与实现), Electronic Measurement Technology (电子测量技术) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022028383A1 (en) * 2020-08-06 2022-02-10 长沙智能驾驶研究院有限公司 Lane line labeling method, detection model determining method, lane line detection method, and related device
CN113386791A (en) * 2021-06-15 2021-09-14 长安大学 Danger avoiding system based on unmanned transport vehicle train in heavy fog weather
CN113386791B (en) * 2021-06-15 2022-09-23 长安大学 Danger avoiding system based on unmanned transport vehicle train in heavy fog weather
CN114627382A (en) * 2022-05-11 2022-06-14 南京信息工程大学 Expressway fog visibility detection method combined with geometric prior of lane lines
CN114627382B (en) * 2022-05-11 2022-07-22 南京信息工程大学 Expressway fog visibility detection method combined with geometric prior of lane lines
CN114822041A (en) * 2022-06-27 2022-07-29 南京纳尼亚科技有限公司 Lane-level highway driving sight line induction system under ultra-low visibility
CN114822041B (en) * 2022-06-27 2022-09-02 南京纳尼亚科技有限公司 Lane-level highway driving sight line induction system under ultra-low visibility
CN115797848A (en) * 2023-01-05 2023-03-14 山东高速股份有限公司 Visibility detection early warning method based on video data in high-speed event prevention system

Also Published As

Publication number Publication date
CN110826412B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN110826412B (en) Highway visibility detection system and method
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
Guan et al. Using mobile laser scanning data for automated extraction of road markings
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN109670376B (en) Lane line identification method and system
CN103824452B (en) A kind of peccancy parking detector based on panoramic vision of lightweight
CN109284674B (en) Method and device for determining lane line
Nieto et al. Road environment modeling using robust perspective analysis and recursive Bayesian segmentation
Hautière et al. Mitigation of visibility loss for advanced camera-based driver assistance
Yang et al. Image-based visibility estimation algorithm for intelligent transportation systems
CN110060508B (en) Automatic ship detection method for inland river bridge area
CN111027447B (en) Road overflow real-time detection method based on deep learning
Vaibhav et al. Real-time fog visibility range estimation for autonomous driving applications
CN112419745A (en) Highway group fog early warning system based on degree of depth fusion network
CN116109986A (en) Vehicle track extraction method based on laser radar and video technology complementation
Joy et al. Real time road lane detection using computer vision techniques in python
CN109934096B (en) Automatic driving visual perception optimization method based on characteristic time sequence correlation
Koloushani et al. Mobile mapping system-based methodology to perform automated road safety audits to improve horizontal curve safety on rural roadways
CN112017213B (en) Target object position updating method and system
CN116631187B (en) Intelligent acquisition and analysis system for case on-site investigation information
CN117218855A (en) Method and system for evaluating side-impact accident risk
CN115240471B (en) Intelligent factory collision avoidance early warning method and system based on image acquisition
CN115965926A (en) Vehicle-mounted road sign line inspection system
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN116794650A (en) Millimeter wave radar and camera data fusion target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant