CN112183436A - Highway visibility detection method based on eight-neighborhood gray scale contrast of pixel points - Google Patents


Info

Publication number
CN112183436A
CN112183436A (application CN202011088608.6A)
Authority
CN
China
Prior art keywords
pixel point
image
contrast
value
points
Prior art date
Legal status
Granted
Application number
CN202011088608.6A
Other languages
Chinese (zh)
Other versions
CN112183436B (en
Inventor
王保升
江亮
杨成
徐琪
Current Assignee
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202011088608.6A
Publication of CN112183436A
Application granted
Publication of CN112183436B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a highway visibility detection method based on pixel-point eight-neighborhood gray contrast. The method converts an image to grayscale; selects each pixel point in the non-edge area as a central pixel point and computes gray contrast values between it and the eight adjacent pixel points around it; judges from these contrast values whether each pixel point is a non-visible point; counts the number of non-visible points in each row of the image and thereby locates the visible boundary on the image; and, using a landmark object of fixed size on the highway as a distance reference, establishes a geometric model among the camera, the image, and the highway to convert the visible boundary on the image into a real visible distance, finally obtaining the visibility distance. The method is computationally simple, cheap to deploy, and improves the accuracy of visibility-distance estimation.

Description

Highway visibility detection method based on eight-neighborhood gray scale contrast of pixel points
Technical Field
The invention relates to the technical field of machine vision, in particular to a highway visibility detection method based on eight neighborhood gray scale contrast of pixel points.
Background
Visibility is a common index in meteorology, road driving and aviation, usually expressed in meters; the factors that degrade it are mainly fog and haze. Visibility is critical to highway driving safety: when it is very low, highway administrators usually close the road, and when it has not yet reached the road-closing standard, speed limits are imposed instead, with different visibility distances mapped to different speed ranges. The laser visibility meter is a commonly used visibility detection instrument, but with China's expressway network gradually taking shape, covering the whole network with laser visibility meters would be enormously expensive, and the instruments also suffer from low detection accuracy for agglomerate (dense patchy) fog, a small detection range, and high maintenance costs.
In recent years, video-based road-condition visibility detection has attracted attention, as it overcomes these shortcomings of the laser visibility meter to some extent. Video visibility detection combines atmospheric optical analysis with image processing and artificial-intelligence techniques: by analyzing and processing the video image, it establishes a relation between the image and the real scene and then indirectly calculates the visibility value from changes in image features. However, existing video-based visibility detection methods select only a small amount of video and extract only a few inherent image features for estimation, so their accuracy is limited and leaves considerable room for improvement.
Disclosure of Invention
The technical purpose is as follows: aiming at the limited precision of prior-art image-based estimation, the invention discloses a highway visibility detection method based on pixel-point eight-neighborhood gray contrast that makes full use of image features and estimates accurately.
The technical scheme is as follows: in order to achieve the technical purpose, the invention adopts the following technical scheme:
the method for detecting the visibility of the highway based on eight neighborhood gray scale contrasts of the pixel points comprises the following steps:
s01, preprocessing an original image shot by a high-speed road camera, and removing an obviously invisible area around the original image;
s02, carrying out gray processing on the reserved image, obtaining the gray value of each pixel point on the image, and obtaining m multiplied by n pixel points in total, wherein m is the number of pixel point rows, and n is the number of pixel point columns;
s03, selecting pixel points in 2-m-1 rows and 2-n-1 columns on the image, wherein the selected pixel point is a central pixel point, eight adjacent pixel points exist around the central pixel point, each adjacent pixel point is used as a neighborhood, and the eight neighborhoods coexist around the central pixel point; the gray value of the central pixel point is marked as f (x); carrying out gray contrast calculation on the central pixel point and the adjacent pixel points in the eight neighborhoods of the central pixel point to obtain a contrast aggregate D (i) of the central pixel point and the eight adjacent pixel points around the central pixel point;
s04, removing the maximum value and the minimum value in the contrast set D (i) in the step S03, averaging the rest of the contrast values in the contrast set D (i), taking the obtained average as the contrast value at the central pixel point, and marking the contrast value as Cx
S05, repeating the steps S03 and S04 one by one for other pixel points of 2-m-1 rows and 2-n-1 columns to obtain the contrast value of each pixel point, wherein the contrast value C of the central pixel pointxWhen the number is less than or equal to 0.05, the number of the non-visual points in each line in 2-m-1 lines is counted and is recorded as Nvisual (j), the range of j is 2-m-1, and the number of the non-visual points is normalized;
s06, drawing a feature map of the normalized data obtained in the step S05, and confirming a critical value S of the number of the non-visible points according to the feature map;
and S07, determining the visible boundary of the image according to the critical value S of the number of the invisible viewpoints on the image, calibrating on the image, and finally converting the distance on the image and the actual distance to obtain the visibility distance.
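As a concrete illustration of steps S01 and S02, the grayscale conversion can be sketched as follows. The patent performs this step in MATLAB; the NumPy version below with ITU-R BT.601 luma weights is an assumption standing in for the patent's unspecified conversion formula.

```python
import numpy as np

def to_gray(rgb):
    """Convert an m x n x 3 RGB image to an m x n grayscale image.

    The patent uses MATLAB for this step; the BT.601 luma weights
    below are a common stand-in, not the patent's stated formula.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# A tiny synthetic frame: pure red everywhere.
frame = np.zeros((4, 4, 3))
frame[..., 0] = 255.0
gray = to_gray(frame)  # shape (4, 4), every value 0.299 * 255
```

Cropping away the clearly invisible border region (S01) would simply slice this array before the conversion.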
Preferably, in step S02, MATLAB software is used to perform the grayscale processing.
Preferably, in step S03, the gray value of the i-th adjacent pixel point of the central pixel point is recorded as f(x_i), where i is a natural number from 1 to 8; the maximum absolute value of the differences between the gray value of the central pixel point and those of the eight adjacent pixel points is recorded as max, and the contrast calculation formula is:

$$C_{xi} = \frac{|f(x) - f(x_i)|}{\max}$$

where C_{xi} is the contrast value between the central pixel point and the i-th adjacent pixel point.
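A minimal sketch of this per-pixel contrast computation, reading the formula literally (the normaliser max is taken over the eight neighbour differences, as the text defines it; the interior-pixel requirement is the patent's):

```python
import numpy as np

def neighbour_contrasts(img, r, c):
    """C_xi = |f(x) - f(x_i)| / max for the eight neighbours of (r, c).

    max is the largest absolute grey-level difference among the eight
    neighbours, per the definition in the text; (r, c) must be an
    interior pixel (rows 2..m-1, cols 2..n-1 in the patent's indexing).
    """
    centre = float(img[r, c])
    diffs = [abs(centre - float(img[r + dr, c + dc]))
             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    biggest = max(diffs)
    if biggest == 0.0:          # perfectly flat patch: all contrasts zero
        return [0.0] * 8
    return [d / biggest for d in diffs]

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
cxi = neighbour_contrasts(img, 1, 1)
```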
Preferably, in step S04:

$$D(i) = \operatorname{sort}(C_{xi}), \qquad C_x = \frac{1}{6}\sum_{i=2}^{7} D(i)$$

where sort(C_{xi}) arranges the contrast values of the eight adjacent pixel points around the central pixel point in order of magnitude, forming the contrast set D(i).
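Step S04's trimmed mean — sort, drop one maximum and one minimum, average the remaining six — can be sketched without any library:

```python
def trimmed_mean_contrast(cxi):
    """C_x: sort the eight neighbour contrasts (D(i) = sort(C_xi)),
    discard the smallest and largest, and average the middle six."""
    d = sorted(cxi)            # D(i), ascending
    middle = d[1:-1]           # drop D(1) and D(8)
    return sum(middle) / len(middle)

cx = trimmed_mean_contrast([1.0, 0.75, 0.5, 0.25, 0.25, 0.5, 0.75, 1.0])
```

Discarding the extremes is what makes the statistic robust to a single noisy neighbour.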
Preferably, the normalization processing method in step S05 is: dividing the number of non-visible points in each row by the column count n, i.e.:

$$\frac{N_{visible}(j)}{n}$$

where j represents the specific row number, from 2 to m−1.
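Counting and normalising the non-visible points row by row (step S05) might look like the sketch below; the 0.05 threshold and the division by the full column count n follow the text, while the array layout is an assumption:

```python
import numpy as np

def normalised_nonvisible(cx_map, n, threshold=0.05):
    """Nvisible(j)/n for each interior row j.

    cx_map holds the contrast values C_x for rows 2..m-1 and
    columns 2..n-1; n is the image's full column count. A pixel
    with C_x <= threshold counts as a non-visible point.
    """
    counts = (cx_map <= threshold).sum(axis=1)
    return counts / n

cx_map = np.array([[0.01, 0.20, 0.30],
                   [0.02, 0.03, 0.40],
                   [0.01, 0.02, 0.03]])
ratios = normalised_nonvisible(cx_map, n=5)
```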
Preferably, the feature map in step S06 takes the row number j as the abscissa and N_{visible}(j)/n as the ordinate. As visibility falls, the number of non-visible points increases and N_{visible}(j)/n rises gradually; within the invisible region the count of non-visible points levels off, so N_{visible}(j)/n fluctuates back and forth without rising further. The value of N_{visible}(j)/n at this point is selected as the critical value S.
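The patent reads the critical value S off a plotted feature map, at the point where the curve stops rising and merely fluctuates. The windowed-plateau heuristic below is an illustrative automation of that visual judgement, not the patent's procedure; the window size and tolerance are assumptions.

```python
import numpy as np

def critical_value(ratios, window=5, tol=1e-3):
    """Estimate the critical value S from the Nvisible(j)/n curve.

    Scans for the first window of rows over which the curve's range
    is below tol (i.e. it has levelled off) and returns its mean.
    """
    ratios = np.asarray(ratios, dtype=float)
    for k in range(len(ratios) - window + 1):
        seg = ratios[k:k + window]
        if seg.max() - seg.min() < tol:   # curve has levelled off
            return float(seg.mean())
    return float(ratios[-1])              # fallback: last value

S = critical_value([0.10, 0.20, 0.35, 0.50, 0.50, 0.50, 0.50, 0.50])
```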
Preferably, in step S06, the pixel row that is the first (top-most) row on the image to reach the critical value S is selected as the visible boundary. The conversion between image distance and actual distance in step S07 relies on a landmark object of fixed size on the expressway: the ratio of actual distance to image distance is established from it, and the distance between the visible boundary and the line where the camera is located is then calculated from the geometric relationship; this distance is the finally obtained visibility distance.
Beneficial effects: the highway visibility detection method based on eight-neighborhood gray contrast of pixel points has the following beneficial effects:
1. the existing camera on the highway is directly adopted to collect images, no additional equipment is required to be installed, and the use cost is low.
2. The pixel points of the image are converted to gray, and the gray contrast between each central pixel point and its eight surrounding neighbors is computed; after removing the maximum and minimum, the average of the remaining six neighbor contrast values is taken as the contrast value of the central pixel point. This reduces error, locates the visible boundary more accurately, and improves estimation precision.
3. The conversion between the image distance and the actual distance is carried out by utilizing the inherent landmark objects on the existing expressway, so that the universality is strong and the limitation of expressway areas is avoided.
4. By carrying out normalization processing on the obtained contrast value of the pixel point, the conversion difficulty between the numerical value and the characteristic value is reduced, a characteristic diagram is convenient to directly draw, the visualization is realized, and the estimation speed is improved.
5. All effective pixel points of the image are processed, so that errors caused by only selecting local or characteristic images are avoided, and estimation accuracy is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic feature map of the normalized non-visible-point counts of the present invention;
FIG. 2 is an image taken in accordance with an embodiment of the present invention;
fig. 3 is a geometric model of the image distance and actual distance transformation in fig. 2 according to the present invention.
Detailed Description
The present invention will be more clearly and completely described below by way of a preferred embodiment in conjunction with the accompanying drawings, without thereby limiting the scope of the invention to the described embodiment.
The invention provides a highway visibility detection method based on pixel-point eight-neighborhood gray contrast, comprising the following steps. S01: preprocess the acquired image, remove the clearly invisible area along the image border, and retain the visible area together with the transition zone between the visible and invisible areas.
And S02, performing gray processing on the retained image, acquiring the gray value of each pixel point on the image through MATLAB software, and for the image with m rows and n columns, obtaining m multiplied by n pixel points, wherein m is the row number of the pixel points in the image, and n is the column number of the pixel points in the image.
S03: to ensure that every selected pixel point has eight neighborhoods, no pixel point on the image boundary is selected; that is, pixel points are selected from rows 2 to m−1 and columns 2 to n−1, each selected pixel point being a central pixel point, so that eight adjacent pixel points surround it, each adjacent pixel point forming one neighborhood. The gray value of the central pixel point is f(x), and the gray values of the eight neighboring pixel points are recorded as f(x_i), where i is a natural number from 1 to 8 indexing the eight adjacent pixel points in turn. The maximum absolute value of the differences between the gray value of the central pixel point and those of the eight adjacent pixel points is recorded as max, and the contrast calculation formula is:

$$C_{xi} = \frac{|f(x) - f(x_i)|}{\max}$$

where C_{xi} is the contrast value between the central pixel point and the i-th adjacent pixel point; the computed C_{xi} are then sorted in order of magnitude, and the sorted contrast values form the contrast set D(i).
S04: sort the contrast values according to the formula

$$D(i) = \operatorname{sort}(C_{xi})$$

then remove the maximum and minimum contrast values from the contrast set D(i), average the remaining contrast values, and take the average as the contrast value C_x at the central pixel point:

$$C_x = \frac{1}{6}\sum_{i=2}^{7} D(i)$$
S05: apply steps S03 and S04 one by one to the pixel points in rows 2 to m−1 and columns 2 to n−1 of the image to obtain the contrast value of each pixel point. A pixel point whose contrast value C_x is less than or equal to 0.05 is a non-visible point. Count the number of non-visible points in each row as Nvisible(j), where j is the specific row number from 2 to m−1, and normalize all Nvisible(j) by dividing by the column count n, i.e.:

$$\frac{N_{visible}(j)}{n}$$
S06: plot a feature map of the normalized counts, with the row number j as the abscissa and N_{visible}(j)/n as the ordinate. As visibility falls, moving from the camera toward the invisible region, N_{visible}(j)/n increases gradually and then levels off within the invisible region; the value of N_{visible}(j)/n at this plateau is taken as the critical value S. From the feature map, the first pixel row on the image to reach the critical value S can be observed and used as the boundary of the visible region; the feature map is shown in FIG. 1.
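The selection of the visible boundary row at the end of step S06 reduces to a first-crossing search along the normalised curve; the convention that interior-row indexing starts at 2 is taken from the text, while the array layout is an assumption:

```python
def visible_boundary_row(ratios, S, first_row=2):
    """Return the index of the first interior row whose normalised
    non-visible count reaches the critical value S, or None."""
    for offset, r in enumerate(ratios):
        if r >= S:
            return first_row + offset
    return None

row = visible_boundary_row([0.10, 0.30, 0.62, 0.70], S=0.6)
```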
S07, according to the visual boundary determined in the step S06, selecting a fixed landmark object on the actual expressway to be compared with the image, and finally establishing a geometrical relationship to convert the actual visual distance and the visual distance on the image.
In the present embodiment, the actual highway lane markings are used as the reference, as shown in FIG. 2. The dashed lines on the same-direction carriageway divide it into a first lane and a second lane (some highways also have a third lane); these dashed lines are the lane boundary markings. A coordinate system is established with the straight line through the lane boundary between the first and second lanes as the horizontal axis, the straight line through the camera and perpendicular to that lane boundary as the vertical axis, and their intersection as the origin. End points of the lane boundary segments on the image are selected as reference points: points E, F, G in FIG. 2 are end points of lane boundary segments, and point H is the intersection of the pixel row containing the visible boundary on the image with the straight line through the lane boundary. The visibility distance is finally obtained by calculation from the geometric relationship.
As shown in FIG. 3, which depicts the coordinate system established in step S07, point O is the position of the camera; points E, F, G are end points of lane boundary segments on the image, which can be chosen flexibly from the actual image as calculation reference points; and point H is the intersection, confirmed in step S06, of the visible-boundary pixel row with the straight line through the lane boundary.
Rays are drawn from O through E, F, G, H and extended; their intersections with the horizontal axis are, in order, A, B, C, D. Here A, B, C are end points of actual lane boundary segments corresponding to the image points E, F, G, and point D is the boundary point of the actual visible distance, corresponding to point H on the image. For convenience of description, the image plane is taken perpendicular to OD by default, i.e. EH ⊥ OD; when EH is not perpendicular to OD, a perpendicular to OD through point E can be drawn instead, which does not affect the proportional relations in the following calculation.
The calculation process is as follows: through points A and B, draw perpendiculars to OD meeting OD at K and M respectively, i.e. AK ⊥ OD and BM ⊥ OD; the intersections of AK with OB and OC are I and J respectively, and the intersection of BM with OC is L. From the similar triangles with apex O (EF ∥ AI, EG ∥ AJ, EH ∥ AK, FG ∥ BL, FH ∥ BM), the geometric relations give:

$$\frac{AI}{EF} = \frac{AJ}{EG} = \frac{AK}{EH} = \frac{OA}{OE}, \qquad \frac{BL}{FG} = \frac{BM}{FH} = \frac{OB}{OF}$$

and combining these proportions (equivalently, by the invariance of the cross-ratio of the four collinear points under projection from O):

$$\frac{AC \cdot BD}{BC \cdot AD} = \frac{EG \cdot FH}{FG \cdot EH}$$
wherein AB, BC, CD, AC, BD, AD, AI, AJ, AK, BL, BM, EF, EG, EH, FH all represent distances between corresponding points, which are specifically as follows in this embodiment:
point E is the end point of the first visible lane boundary on the image along the lane direction, point F is the start point of the second lane boundary on the image, point G is the end point of the second lane boundary on the image, and point H is the intersection point of the pixel line where the boundary of the visible region on the image is located and the straight line where the lane boundary is located; the corresponding point a is the end point of the actual first visible lane boundary, the point B is the start point of the actual second lane boundary, the point C is the end point of the actual second lane boundary, the point D is the intersection point of the boundary of the actual invisible area and the straight line where the lane boundary is located, and AK and BM respectively represent the minimum distance from the point A, B to the OD.
According to the design rules for highway traffic safety facilities, each dashed lane-boundary segment on an expressway is 6 meters long and the gap between two successive segments is 9 meters; hence AB = 9 meters and BC = 6 meters. Taking the image's own coordinate system, the coordinates of points E, F, G, H on the image can be obtained and are expressed as:
E(uE,vE),F(uF,vF),G(uG,vG),H(uH,vH)
wherein u and v represent the abscissa and ordinate, respectively, of the image coordinate system.
Since the points E, F, G, H lie on one straight line, let dy denote the length per unit height of the line segment EH in the coordinate system established on the image itself; the distance between two points on the image is then the difference of the corresponding ordinates multiplied by dy, i.e.:

$$EH = (v_H - v_E)\,dy, \quad EG = (v_G - v_E)\,dy, \quad FG = (v_G - v_F)\,dy, \quad FH = (v_H - v_F)\,dy$$
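These four image distances can be computed directly from the ordinates; since dy cancels in the later ratio, it can safely be left at 1:

```python
def image_distances(vE, vF, vG, vH, dy=1.0):
    """Distances along the line through the collinear image points
    E, F, G, H: each is the ordinate difference times dy."""
    return {"EH": (vH - vE) * dy, "EG": (vG - vE) * dy,
            "FG": (vG - vF) * dy, "FH": (vH - vF) * dy}

d = image_distances(vE=100, vF=140, vG=160, vH=220)
```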
Substituting these image distances into the relation

$$\frac{AC \cdot BD}{BC \cdot AD} = \frac{EG \cdot FH}{FG \cdot EH} = \frac{(v_G - v_E)(v_H - v_F)}{(v_G - v_F)(v_H - v_E)}$$

(the dy factors cancel) and using AC = AB + BC and BD = AD − AB, the following can be obtained:

$$AD = \frac{AB\,(AB + BC)}{AB + BC\left(1 - \dfrac{(v_G - v_E)(v_H - v_F)}{(v_G - v_F)(v_H - v_E)}\right)}$$
The length of AD is the visibility distance; once the coordinates of the intersection of the visible-boundary pixel row with the lane boundary are obtained in step S07, the conversion between image distance and actual distance yields the visible distance.
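Putting the geometry together, a sketch of the final conversion follows. It uses the cross-ratio relation with AB = 9 m and BC = 6 m from the text; the closed form is a reconstruction of the (garbled) final formula in the source, so treat it as a sketch rather than the patent's verbatim equation.

```python
def visibility_distance(EG, FG, FH, EH, AB=9.0, BC=6.0):
    """Visibility distance AD from the cross-ratio relation
    (AC*BD)/(BC*AD) = (EG*FH)/(FG*EH), with AC = AB + BC and
    BD = AD - AB; AB and BC are the lane-marking gap and segment
    lengths given in the text.
    """
    k = (EG * FH) / (FG * EH)
    return AB * (AB + BC) / (AB + BC * (1.0 - k))
```

If the denominator approaches zero the measured image distances are inconsistent with the assumed geometry, and the estimate should be rejected rather than reported.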
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (7)

1. A method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points, characterized by comprising the following steps:
S01, preprocessing an original image captured by an expressway camera and removing the clearly invisible region around its border;
S02, converting the retained image to grayscale and obtaining the gray value of each of its m × n pixel points, where m is the number of pixel rows and n is the number of pixel columns;
S03, selecting each pixel point in rows 2 to m−1 and columns 2 to n−1 as a central pixel point; eight adjacent pixel points surround the central pixel point, each adjacent pixel point serving as one neighborhood, so eight neighborhoods surround the central pixel point; recording the gray value of the central pixel point as f(x); performing gray contrast calculation between the central pixel point and the adjacent pixel points in its eight neighborhoods to obtain the contrast set D(i) of the central pixel point and the eight adjacent pixel points around it;
S04, removing the maximum and minimum values from the contrast set D(i) of step S03, averaging the remaining contrast values in D(i), and taking the obtained average as the contrast value at the central pixel point, recorded as C_x;
S05, repeating steps S03 and S04 one by one for the other pixel points in rows 2 to m−1 and columns 2 to n−1 to obtain the contrast value of each pixel point; when the contrast value C_x of a pixel point is less than or equal to 0.05, it is a non-visible point; counting the number of non-visible points in each of rows 2 to m−1, recorded as Nvisible(j) with j ranging from 2 to m−1, and normalizing the counts;
S06, drawing a feature map of the normalized data obtained in step S05 and confirming the critical value S of the non-visible-point count from the feature map;
S07, determining the pixel row where the visible boundary of the image is located according to the critical value S, marking it on the image, and finally converting between the distance on the image and the actual distance to obtain the visibility distance.
2. The method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points according to claim 1, characterized in that: in step S02, MATLAB software is used to perform the grayscale processing on the image.
3. The method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points according to claim 1, characterized in that: in step S03, the gray value of the i-th adjacent pixel point of the central pixel point is recorded as f(x_i), where i is a natural number from 1 to 8; the maximum absolute value of the differences between the gray value of the central pixel point and those of the eight adjacent pixel points is recorded as max, and the contrast calculation formula is:

$$C_{xi} = \frac{|f(x) - f(x_i)|}{\max}$$

where C_{xi} is the contrast value between the central pixel point and the i-th adjacent pixel point.
4. The method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points according to claim 3, characterized in that, in step S04:
$$D(i) = \operatorname{sort}(C_{xi})$$

$$C_x = \frac{1}{6}\sum_{i=2}^{7} D(i)$$

where sort(C_{xi}) arranges the contrast values of the eight adjacent pixel points around the central pixel point in order of magnitude, forming the contrast set D(i).
5. The method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points according to claim 4, characterized in that the normalization processing method in step S05 is: dividing the number of non-visible points in each of rows 2 to m−1 by the column count n, the processing formula being:
$$\frac{N_{visible}(j)}{n}$$
wherein j represents the specific number of rows from 2 to m-1.
6. The method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points according to claim 5, characterized in that: the feature map in step S06 takes the row number j as the abscissa and N_{visible}(j)/n as the ordinate; as visibility falls, the number of non-visible points increases and N_{visible}(j)/n rises gradually; within the invisible region the count of non-visible points levels off, so N_{visible}(j)/n fluctuates back and forth without rising further, and the value of N_{visible}(j)/n at this point is selected as the critical value S.
7. The method for detecting highway visibility based on eight-neighborhood gray contrast of pixel points according to claim 6, characterized in that: in step S06, the pixel row that is the first (top-most) row on the image to reach the critical value S is selected as the visible boundary; in step S07, the conversion between image distance and actual distance uses a landmark object of fixed size on the expressway to establish the ratio of actual distance to image distance, and the distance between the visible boundary and the line where the camera is located is calculated from the geometric relationship; this distance is the finally obtained visibility distance.
CN202011088608.6A 2020-10-12 2020-10-12 Expressway visibility detection method based on pixel point eight-neighborhood gray scale comparison Active CN112183436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011088608.6A CN112183436B (en) 2020-10-12 2020-10-12 Expressway visibility detection method based on pixel point eight-neighborhood gray scale comparison


Publications (2)

Publication Number Publication Date
CN112183436A true CN112183436A (en) 2021-01-05
CN112183436B CN112183436B (en) 2023-11-07

Family

ID=73951100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011088608.6A Active CN112183436B (en) 2020-10-12 2020-10-12 Expressway visibility detection method based on pixel point eight-neighborhood gray scale comparison

Country Status (1)

Country Link
CN (1) CN112183436B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709431A (en) * 2021-07-26 2021-11-26 深圳市金研微科技有限公司 Apparatus and method for automatically correcting projection picture
CN115797848A (en) * 2023-01-05 2023-03-14 山东高速股份有限公司 Visibility detection early warning method based on video data in high-speed event prevention system

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4970653A (en) * 1989-04-06 1990-11-13 General Motors Corporation Vision method of detecting lane boundaries and obstacles
CA2301895A1 (en) * 1997-09-19 1999-03-25 Cambridge Management Advanced Systems Corporation Apparatus and method for monitoring and reporting weather conditions
JP2002324666A (en) * 2001-02-22 2002-11-08 Semiconductor Energy Lab Co Ltd Display device and its manufacturing method
EP2228666A1 (en) * 2009-03-11 2010-09-15 Honeywell International Inc. Vision-based vehicle navigation system and method
CN101937637A (en) * 2009-06-30 2011-01-05 Hitachi Displays, Ltd. Display device and display method
CN102230794A (en) * 2011-04-01 2011-11-02 Beihang University Method for dynamically measuring sight distance of drivers based on video
CN102509102A (en) * 2011-09-28 2012-06-20 Hao Hongwei Visibility measuring method based on image learning
KR20120105966A (en) * 2011-03-17 2012-09-26 GS Instrument Co., Ltd. Apparatus and method for measuring visibility
WO2015053997A1 (en) * 2013-10-11 2015-04-16 Qualcomm Mems Technologies, Inc. Region-dependent color mapping for reducing visible artifacts on halftoned displays
CN104777103A (en) * 2015-04-15 2015-07-15 Xi'an Haotong Energy-Saving Engineering Equipment Co., Ltd. Sight distance visibility meter and measuring method thereof
CN104809707A (en) * 2015-04-28 2015-07-29 Southwest University of Science and Technology Method for estimating visibility of a single fog-degraded image
US20160314361A1 (en) * 2014-11-17 2016-10-27 Tandent Vision Science, Inc. Method and system for classifying painted road markings in an automotive driver-vehicle-assistance device
CN108614998A (en) * 2018-04-09 2018-10-02 Beijing Institute of Technology Single-pixel infrared target detection method
CN109285187A (en) * 2018-09-11 2019-01-29 Southeast University Farthest visible point detection method based on traffic surveillance video images
WO2019100933A1 (en) * 2017-11-21 2019-05-31 Jiang Jing Method, device and system for three-dimensional measurement
CN110598613A (en) * 2019-09-03 2019-12-20 Chang'an University Expressway agglomerate fog monitoring method
CN110827355A (en) * 2019-11-14 2020-02-21 Nanjing Institute of Technology Moving target rapid positioning method and system based on video image coordinates
WO2020171344A1 (en) * 2019-02-22 2020-08-27 Samsung Electronics Co., Ltd. Display device and driving method therefor

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
HAO, HONGYAN et al.: "Two-Step Identification of Instantaneous Cutting Force Coefficients and Cutter Runout", 2013 4th International Conference on Advances in Materials and Manufacturing, ICAMMP 2013, pages 887 - 888 *
JIE ZOU et al.: "Visibility Detection Method Based on Camera Model Calibration", 2017 4th International Conference on Information Science and Control Engineering (ICISCE), pages 770 - 776 *
WEN-CHUNG KAO et al.: "Video recording in high dynamic range scenes", 2010 International Conference on Green Circuits and Systems (ICGCS 2010), pages 683 - 688 *
LI YI; ZHU WENTING: "Visibility Detection Based on Digital Camera Technology", Modern Electronics Technique, no. 20, pages 95 - 97 *
WANG BAOSHENG et al.: "Airport Visibility Prediction Based on a CNN Deep Learning Model", Information & Computer (Theoretical Edition), vol. 32, no. 23, pages 43 - 46 *
JING XIAO: "Research on Atmospheric Visibility Measurement Based on Automatic Image Recognition", China Masters' Theses Full-text Database, Information Science and Technology, no. 08, pages 138 - 645 *
ZHAO FANDI: "Research on Video Object Tracking Based on Spatial Neighborhood Constrained Coding", China Masters' Theses Full-text Database, Information Science and Technology, pages 136 - 308 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709431A (en) * 2021-07-26 2021-11-26 深圳市金研微科技有限公司 Apparatus and method for automatically correcting projection picture
CN115797848A (en) * 2023-01-05 2023-03-14 山东高速股份有限公司 Visibility detection early warning method based on video data in high-speed event prevention system
CN115797848B (en) * 2023-01-05 2023-04-28 山东高速股份有限公司 Visibility detection early warning method based on video data in high-speed event prevention system

Similar Documents

Publication Publication Date Title
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
CN109472776B (en) Depth significance-based insulator detection and self-explosion identification method
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN111814686A (en) Vision-based power transmission line identification and foreign matter invasion online detection method
CN112183436A (en) Highway visibility detection method based on eight-neighborhood gray scale contrast of pixel points
CN108764234B (en) Liquid level meter reading identification method based on inspection robot
CN109409205A Aerial video road lane line detection method based on line-spacing feature point clustering
CN105740809A (en) Expressway lane line detection method based on onboard camera
CN113011388B (en) Vehicle outer contour size detection method based on license plate and lane line
CN112070756B (en) Three-dimensional road surface disease measuring method based on unmanned aerial vehicle oblique photography
CN112949398A (en) Lane line detection method, distance measurement method and corresponding device
CN111354047B (en) Computer vision-based camera module positioning method and system
CN112927283A (en) Distance measuring method and device, storage medium and electronic equipment
CN105260559A (en) Paper pulp fiber morphology parameter calculation method based on contour area and contour refinement
CN113239733A (en) Multi-lane line detection method
CN109559356B (en) Expressway sight distance detection method based on machine vision
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image
CN113505793B (en) Rectangular target detection method under complex background
CN114299247A (en) Rapid detection and problem troubleshooting method for road traffic sign lines
CN113807238A (en) Visual measurement method for area of river surface floater
CN108956397A Road visibility detection method based on trace norm
CN114724119A (en) Lane line extraction method, lane line detection apparatus, and storage medium
CN115034577A (en) Electromechanical product neglected loading detection method based on virtual-real edge matching
CN110488320B (en) Method for detecting vehicle distance by using stereoscopic vision
CN113221883A (en) Real-time correction method for flight navigation route of unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant