CN109446917B - Vanishing point detection method based on cascading Hough transform

Vanishing point detection method based on cascading Hough transform

Info

Publication number
CN109446917B
Authority
CN
China
Prior art keywords
vehicle
image
target
frame
point
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811154229.5A
Other languages
Chinese (zh)
Other versions
CN109446917A (en)
Inventor
宋焕生
武非凡
王伟
李婵
严腾
李莹
梁浩翔
云旭
Current Assignee
Changan University
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN201811154229.5A
Publication of CN109446917A
Application granted
Publication of CN109446917B

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of intelligent traffic and specifically relates to a vanishing point detection method based on the cascading Hough transform, comprising the following steps. Step 1: collect road vehicle video and obtain the vehicle targets in each frame image. Step 2: perform Harris corner extraction to obtain feature points on the vehicle targets of each frame. Step 3: acquire straight vehicle trajectories. Step 4: screen the straight vehicle trajectories and denote the set of screened trajectories as L. Step 5: vote the screened straight-trajectory set L from image space into the diamond Hough space through the cascaded Hough transform, and obtain the coordinates of the maximum point of the vote. Step 6: transform the coordinates of the maximum point back into image space to obtain the vanishing point coordinates in image space, completing the vanishing point detection. The method is suitable for various weather conditions, avoids false detection of the vanishing point in special weather, and greatly improves the accuracy of vanishing point detection.

Description

Vanishing point detection method based on cascading Hough transform
Technical Field
The invention belongs to the field of intelligent transportation, and particularly relates to a vanishing point detection method based on the cascaded Hough transform.
Background
A vanishing point is an important feature in many scenes: under perspective projection, parallel lines in a scene appear to converge at a single point at infinity, called the vanishing point. It is obtained by applying a suitable algorithm to certain features of the actual scene, and it can serve machine-vision tasks such as camera calibration, scene reconstruction and scene edge clustering. In short, the vanishing point is important basic data that lays the foundation for subsequent machine-vision work.
Existing vanishing point detection methods for traffic scenes generally detect the vanishing point from the lane lines. Because the lane lines are few in number and have a certain width, and because they are detected inaccurately in special weather, such methods make the vanishing point difficult to detect reliably.
Disclosure of Invention
Aiming at the problems that existing detection methods find the vanishing point difficult to detect or detect it inaccurately, the invention provides a vanishing point detection method based on the cascading Hough transform.
In order to achieve this purpose, the invention adopts the following technical scheme, which specifically comprises the following steps:
Step 1: collect road vehicle video and obtain the vehicle targets in each frame image;
Step 2: perform Harris corner extraction on the vehicle targets of each frame image detected in step 1 to obtain the feature points on the vehicle targets of each frame;
Step 3: obtain straight vehicle trajectories from the vehicle targets of each frame obtained in step 1 and the feature points on the vehicle targets obtained in step 2;
Step 4: screen the straight vehicle trajectories obtained in step 3 and denote the set of screened straight trajectories as L;
Step 5: vote the screened straight-trajectory set L obtained in step 4 from image space into the diamond Hough space through the cascaded Hough transform, and obtain the coordinates of the maximum point of the vote;
Step 6: transform the coordinates of the maximum point obtained in step 5 back into image space to obtain the vanishing point coordinates in image space, completing the vanishing point detection.
Further, step 3 specifically comprises the following:
Using the per-frame vehicle-target images obtained in step 1 and the feature points on the vehicle targets obtained in step 2, an optical flow tracking algorithm is adopted. Two adjacent frames of vehicle-target images together with the feature points of the earlier frame are the inputs of the optical flow tracking algorithm; the outputs are the corresponding positions of those feature points in the later frame and a flag indicating whether tracking succeeded. The initial feature points are the Harris corners of newly appeared targets, from which tracking starts; thereafter the input feature points are the end points of existing tracks. After all inputs have been processed in this way, the set of outputs is the tracked straight vehicle trajectories.
Further, step 4 comprises the following sub-steps:
Step 41: screen the vehicle trajectories obtained in step 3, keeping those with enough track points;
Step 42: fit each trajectory retained in step 41 by least squares, finally screening out the set L of straight vehicle trajectories.
Further, in step 41 the vehicle trajectories obtained in step 3 are screened so that only trajectories containing more than 15 track points are retained.
Further, step 5 specifically comprises the following:
Transform the screened straight-trajectory set L obtained in step 4 from image space into the diamond Hough space through the cascaded Hough transform, rasterize the diamond space, and accumulate votes along the transformed trajectory lines; the cell with the highest vote is the maximum point of the diamond space, whose coordinates are obtained.
Further, step 1 comprises the following sub-steps:
Step 11: collect a road vehicle video in which the foreground is the moving vehicles and the background is the road area, the non-road ground area and the sky;
Step 12: estimate the background of each frame image of the road vehicle video collected in step 11 with a Gaussian mixture model (GMM);
Step 13: obtain the foreground moving targets from the background of each frame obtained in step 12 by image differencing, and obtain the vehicle targets among the foreground moving targets of each frame by median filtering and a closing operation.
Further, step 2 comprises the following:
Classify the foreground vehicle targets of each frame obtained in step 1 into tracked targets and newly appeared targets: if a foreground target extracted in the current frame contains more than 3 tail nodes of existing tracks, it is considered an existing target; otherwise it is a newly appeared target. Three Harris corners, i.e. feature points, are extracted from each newly appeared target. This processing is performed on every frame, yielding the feature points on the vehicle targets of each frame image.
The invention has the following beneficial effects:
(1) The invention adopts the novel straight-line parameterization of the cascaded Hough transform, which elegantly and efficiently turns the infinite image space into a finite diamond space; the diamond space is then rasterized and the final vanishing point position is determined by voting. Verification shows this to be more stable and accurate than extracting the vanishing point directly in the original space.
(2) Because the number of lane lines on a road is limited, lane lines often have to be measured manually, and they are hard to extract accurately in severe weather. By collecting a sufficient number of valid straight vehicle trajectories, the method does not depend on lane-line measurement, which largely avoids the influence of severe weather; at the same time, no manual measurement is required, raising the degree of automation.
(3) The screening of vehicle trajectories rejects short tracks, which keeps more valid trajectory lines and eliminates the influence of abnormal tracks caused by short ones; on the other hand, since vehicles produce curved tracks through overtaking and similar behavior on the road, screening out and rejecting such tracks guarantees the accuracy of the detected vanishing point position.
The embodiments of the invention will be described and explained in further detail below with reference to the figures and the detailed description.
Drawings
FIG. 1 is a schematic diagram of the image vanishing point position in scene A;
FIG. 2 shows the parameter space voting results in scene A;
FIG. 3 is a schematic view of a traffic scene in an embodiment of the invention;
FIG. 4 shows the scene background extraction results in an embodiment of the invention;
FIG. 5 shows the background modeling in an embodiment of the invention;
FIG. 6 shows the median filtering and closing operation results in an embodiment of the invention;
FIG. 7 shows the result of excluding non-vehicle targets in an embodiment of the invention;
FIG. 8 shows the tracks obtained by optical flow tracking in an embodiment of the invention;
FIG. 9 shows the spatial cascaded Hough transform in an embodiment of the invention;
FIG. 10 shows the correspondence between the Cartesian coordinate system and the Hough space under a parallel coordinate system in an embodiment of the invention;
FIG. 11 shows the parameter space voting results in scene B;
FIG. 12 is a schematic diagram of the image vanishing point position in scene B.
Detailed Description
Embodiments of the invention are given below. It should be noted that the invention is not limited to these embodiments; all equivalent modifications based on the technical solutions of the invention fall within its protection scope.
A method for detecting a vanishing point based on a cascaded Hough transform comprises the following steps:
Step 1: collect road vehicle video and obtain the vehicle targets in each frame image;
Step 2: perform Harris corner extraction on the vehicle targets of each frame image detected in step 1 to obtain the feature points on the vehicle targets of each frame;
Step 3: obtain straight vehicle trajectories from the vehicle targets of each frame obtained in step 1 and the feature points on the vehicle targets obtained in step 2;
Step 4: screen the straight vehicle trajectories obtained in step 3 and denote the set of screened straight trajectories as L;
Step 5: vote the screened straight-trajectory set L obtained in step 4 from image space into the diamond Hough space through the cascaded Hough transform, and obtain the coordinates of the maximum point of the vote;
Step 6: transform the coordinates of the maximum point obtained in step 5 back into image space to obtain the vanishing point coordinates in image space, completing the vanishing point detection.
The invention provides a vanishing point detection method based on the cascaded Hough transform. Surveillance video of a traffic scene is used: the GMM algorithm extracts the target foreground, vehicle targets are tracked with an optical flow tracking algorithm, the resulting trajectories are screened, and the screened trajectories vote by means of the cascaded Hough transform to determine the vanishing point of the road. The method is suitable for various weather conditions, avoids false detection of the vanishing point in special weather, and greatly improves the accuracy of vanishing point detection.
The step 1 specifically comprises the following substeps:
Step 11: select a scene and mount a camera beside the road so that it covers vehicles within a certain range of the lanes, then collect a road vehicle video; the scene is shown in fig. 3. The foreground of the video is the moving vehicles, and the background is the road area, the non-road ground area and the sky;
Step 12: estimate the background of each frame image of the road vehicle video collected in step 11 with a Gaussian mixture model (GMM); the background extraction result is shown in fig. 4.
the GMM algorithm is a classic solution of a background modeling problem, the background modeling algorithm is used for distinguishing background pixels and foreground pixels, a Gaussian model (MM) is used for describing the pixel value distribution of a certain pixel P, a time period T is used for observing the pixel value distribution of the P before the pixel value of a moving foreground object is covered, and then the Gaussian model describing the pixel value distribution of the position is calculated. When the gaussian model is calculated for each position in the image, a background model is established, and this time period T is called the modeling time. The basic assumption that the MM is able to model the background is that during the modeling time, the background pixels occur most of the time. For the distinction of foreground from background, if the pixel value at the new overlay P fits the gaussian distribution at that location, it is the background pixel, and vice versa it is the foreground pixel. However, there is a special class of background that is not stationary but moving, but the motion exhibits a certain regularity of the reciprocating cycle, for example: flashing neon lights and sloshing leaves. The GMM algorithm is proposed for such problems, MM describes the pixel distribution using a gaussian model, and GMM describes the pixel distribution using multiple gaussian models.
Its advantage: compared with other methods such as the frame-difference method, this algorithm detects the image background better while the time consumed does not increase.
Step 13: obtain the foreground moving targets from the background of each frame obtained in step 12 by image differencing, and obtain the vehicle targets among the foreground moving targets of each frame by median filtering and a closing operation.
With the background available, the foreground moving targets can be obtained by image differencing, but the raw difference still contains many interference pixels, so part of them are removed using the image background, as shown in fig. 5; the foregrounds of non-vehicle objects are then rejected according to the shape of the foreground pixel blobs, leaving the vehicle targets, as shown in fig. 6.
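By way of illustration, steps 11 to 13 can be sketched with OpenCV's mixture-of-Gaussians background subtractor. This is a minimal sketch, assuming the OpenCV (cv2) package; the history length, kernel size and filter aperture below are illustrative assumptions, not values specified by the invention.

```python
import cv2

def extract_vehicle_foreground(video_path):
    """Sketch of steps 11-13: GMM background modeling, image differencing,
    then median filtering and a morphological closing on the foreground."""
    cap = cv2.VideoCapture(video_path)
    # MOG2 is OpenCV's mixture-of-Gaussians (GMM) background model; its
    # apply() both updates the model and returns the difference mask.
    backsub = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = backsub.apply(frame)                 # foreground of moving vehicles
        fg = cv2.medianBlur(fg, 5)                # median filtering removes speckle
        fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # close holes in blobs
        masks.append(fg)
    cap.release()
    return masks
```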
Step 2 comprises the following substeps:
The core idea of Harris corner detection is that if the gray level within a window changes sharply as the window shifts in any direction, the window position is taken to contain a corner. To raise extraction efficiency, corner detection is performed only on the vehicle targets, which is clearly more efficient than running Harris detection over the whole image.
The foreground vehicle targets of each frame obtained in step 1 are classified into tracked targets and newly appeared targets: if a foreground target extracted in the current frame contains more than 3 tail nodes of existing tracks, it is considered an existing target; otherwise it is a newly appeared target. Three Harris corners, i.e. feature points, are extracted from each newly appeared target as the starting points of new tracks, as shown in fig. 7.
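As an illustration of step 2, the corner extraction can be restricted to the vehicle foreground. A minimal sketch assuming OpenCV and NumPy; connected components of the foreground mask stand in for the individual targets, and the new-versus-tracked bookkeeping described above is omitted.

```python
import cv2
import numpy as np

def seed_harris_corners(gray, fg_mask, corners_per_target=3):
    """Sketch of step 2: extract three Harris corners from each vehicle blob
    as the starting points of new tracks. The quality and distance
    parameters are illustrative assumptions."""
    n, labels = cv2.connectedComponents(fg_mask)      # split the mask into targets
    seeds = []
    for lab in range(1, n):                           # label 0 is the background
        blob = np.uint8(labels == lab) * 255
        pts = cv2.goodFeaturesToTrack(
            gray, maxCorners=corners_per_target, qualityLevel=0.01,
            minDistance=5, mask=blob, useHarrisDetector=True, k=0.04)
        if pts is not None:
            seeds.extend(pts.reshape(-1, 2))
    return np.float32(seeds)                          # starting points of new tracks
```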
The step 3 specifically comprises the following steps:
the optical flow is a reflection of the instantaneous speed of a space moving object on an imaging plane, and is a method for finding out the corresponding relation between the previous frame and the current frame by using the change of pixels in an image sequence on a time domain and the correlation between adjacent frames so as to calculate the motion information of the object between the adjacent frames.
Using the per-frame vehicle-target images obtained in step 1 and the feature points on the vehicle targets obtained in step 2, an optical flow tracking algorithm is adopted. Two adjacent frames of vehicle-target images together with the feature points of the earlier frame are the inputs; the outputs are the corresponding positions of those feature points in the later frame and a flag indicating whether tracking succeeded. The initial feature points are the Harris corners of newly appeared targets, from which tracking starts; thereafter the input feature points are the end points of existing tracks. After all inputs have been processed in this way, the set of outputs is the tracked straight vehicle trajectories, as shown in fig. 8.
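A minimal sketch of this tracking step, assuming OpenCV and NumPy and using pyramidal Lucas-Kanade optical flow; the list-of-point-lists layout of 'tracks' is an assumed data structure, not part of the patent.

```python
import cv2
import numpy as np

def extend_tracks(prev_gray, gray, tracks):
    """Sketch of step 3: the end point of each existing track is the input
    feature; its position in the next frame is appended when tracking
    succeeds, and tracks whose feature is lost are handed on to step 4."""
    if not tracks:
        return [], []
    p0 = np.float32([t[-1] for t in tracks]).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                                winSize=(21, 21), maxLevel=3)
    active, finished = [], []
    for track, new_pt, ok in zip(tracks, p1.reshape(-1, 2), status.ravel()):
        if ok:                                   # tracking succeeded
            track.append(tuple(new_pt))
            active.append(track)
        else:                                    # track ends; screen it in step 4
            finished.append(track)
    return active, finished
```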
Step 4 comprises the following substeps:
Step 41: screen the vehicle trajectories obtained in step 3, keeping those with enough track points;
Step 42: fit each trajectory retained in step 41 by least squares, finally screening out the set L of straight vehicle trajectories.
Preferably, the number of the track points is more than 15.
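A minimal sketch of step 4, assuming NumPy; the residual tolerance used to decide whether a fitted track is straight is an illustrative assumption, since the patent specifies the least-squares fit but no numeric threshold.

```python
import numpy as np

def screen_straight_tracks(tracks, min_points=15, max_rms_residual=1.0):
    """Sketch of steps 41-42: keep tracks with more than 15 points, fit each
    by least squares, and keep the near-straight ones as homogeneous lines
    a*x + b*y + c = 0 (the set L)."""
    lines = []
    for track in tracks:
        if len(track) <= min_points:             # step 41: too short, discard
            continue
        pts = np.asarray(track, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        # Step 42: least-squares fit; fit x as a function of y when the
        # track is closer to vertical, so the fit stays well conditioned.
        if np.ptp(x) >= np.ptp(y):
            m, b = np.polyfit(x, y, 1)
            rms = np.sqrt(np.mean((y - (m * x + b)) ** 2))
            line = (m, -1.0, b)
        else:
            m, b = np.polyfit(y, x, 1)
            rms = np.sqrt(np.mean((x - (m * y + b)) ** 2))
            line = (-1.0, m, b)
        if rms <= max_rms_residual:              # near-straight: keep in L
            lines.append(line)
    return lines
```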
The step 5 specifically comprises the following steps:
Transform the screened straight-trajectory set L obtained in step 4 from image space into the diamond Hough space through the cascaded Hough transform; following the idea of Hough voting, rasterize the diamond space and then accumulate votes along the transformed trajectory lines, finally obtaining the maximum point in the diamond space. The voting results are shown in fig. 2 and fig. 11.
In the conventional Hough transform a line becomes a point or a point becomes a line, whereas the problem at hand requires a line to become a line. Moreover, when the vanishing point lies at or near infinity the original parameter space is unbounded, so a direct computer implementation is time-consuming and impractical. The cascaded Hough transform avoids both factors: it realizes a line-to-line transformation, and its straight-line parameterization maps the unbounded image space into a finite diamond space.
Two consecutive Hough transforms, first from line to point and then from point to line, are referred to as the cascaded Hough transform. A parallel coordinate system is introduced first: it represents each component of high-dimensional data on a series of mutually parallel axes, overcoming the difficulty the traditional Cartesian rectangular coordinate system has in expressing data of more than three dimensions. The cascaded Hough transform is derived below using the Cartesian and parallel coordinate systems:
In the derivation, the subscript p denotes axes of the parallel coordinate system and the subscript c denotes axes of the Cartesian coordinate system. To make the connection between the two systems easier to see, they are superimposed as shown in fig. 9. Data in square brackets denote a point in homogeneous coordinates, e.g. [x, y, w]; data in round brackets denote a straight line, e.g. (a, b, c).
Transforming the points and lines of the $x_c$, $y_c$ plane onto the parallel axes $x_p$ and $y_p$, separated by a spacing $d$, gives formula (1):

$$[x, y, w]_c \rightarrow (x - y,\; d\,w,\; -d\,x)_p, \qquad (a, b, c)_c \rightarrow [d\,b,\; -c,\; a + b]_p \tag{1}$$
The transformation results are shown in fig. 10. Similarly, a second transformation, shown in fig. 9, maps the points and lines of the $u_c$, $v_c$ plane onto the parallel axes $u_p$ and $v_p$, separated by a spacing $D$:

$$[u, v, w]_c \rightarrow (u - v,\; D\,w,\; -D\,u)_p, \qquad (a, b, c)_c \rightarrow [D\,b,\; -c,\; a + b]_p \tag{2}$$
Next, the $y_p$ axis is reversed and placed at position $-d$, giving the $-y_p$ axis. The space between $-y_p$ and $x_p$ is called the T space, and the space between $x_p$ and $y_p$ is called the S space. By analogy with formulas (1) and (2), the transformation of points and lines in the T space is obtained:

$$[x, y, w]_c \rightarrow (x + y,\; -d\,w,\; d\,x)_p, \qquad (a, b, c)_c \rightarrow [d\,b,\; -c,\; a - b]_p \tag{3}$$
Chaining the T-transform and S-transform stages (four branch combinations in total) yields the complete cascaded Hough transform, in which a point is transformed into a point of the Hough space:

$$[x, y, w]_c \rightarrow [\,D\,d\,w,\; d\,x,\; \pm x \pm y \pm d\,w\,]_p \tag{4}$$

where the signs are determined by the S or T branch chosen at each stage, the branch being the one whose image lands inside the finite diamond; for $d = D = 1$ the normalized result can be written compactly as $(u, v) = \big(-\operatorname{sgn}(y)\,w,\; -\operatorname{sgn}(y)\,x\big) / (|x| + |y| + |w|)$.
Through this cascaded transformation, every point of the Cartesian coordinate system is mapped into the finite diamond space, realizing the transformation from an infinite space to a finite one; each quadrant of the Cartesian plane corresponds to a region of the diamond space, as shown in fig. 10.
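To make steps 5 and 6 concrete, the sketch below maps sampled points of each screened trajectory line into the diamond space using the normalized form of formula (4) with d = D = 1, accumulates votes on a grid, and maps the maximum back to image space. The grid size, sample count and sampling span are illustrative assumptions, and dense point sampling is used here as a simple stand-in for the exact polyline rasterization of the diamond space.

```python
import numpy as np

def to_diamond(x, y, w=1.0):
    """Normalized cascaded transform (d = D = 1): an image-space point
    [x, y, w] maps into the diamond |u| + |v| <= 1."""
    s = abs(x) + abs(y) + abs(w)
    sign = -1.0 if y > 0 else 1.0                # branch choice via sgn(y)
    return sign * w / s, sign * x / s

def from_diamond(u, v):
    """Inverse mapping of a diamond-space point back to image space."""
    if u == 0.0:
        return float("inf"), float("inf")        # image of a point at infinity
    return v / u, -(1.0 - abs(u) - abs(v)) / u

def vote_vanishing_point(lines, grid=512, samples=400, span=4000.0):
    """Sketch of steps 5-6: vote each line a*x + b*y + c = 0 of the set L
    into a rasterized diamond space and read off the maximum."""
    acc = np.zeros((grid, grid), dtype=np.int32)
    t = np.linspace(-span, span, samples)
    for a, b, c in lines:
        if abs(b) >= abs(a):                     # parameterize by the safer axis
            xs, ys = t, -(a * t + c) / b
        else:
            ys, xs = t, -(b * t + c) / a
        for x, y in zip(xs, ys):
            u, v = to_diamond(x, y)
            col = int((u + 1.0) / 2.0 * (grid - 1))
            row = int((v + 1.0) / 2.0 * (grid - 1))
            acc[row, col] += 1                   # accumulated voting
    row, col = np.unravel_index(np.argmax(acc), acc.shape)
    u = col / (grid - 1) * 2.0 - 1.0
    v = row / (grid - 1) * 2.0 - 1.0
    return from_diamond(u, v)                    # vanishing point, image space
```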
Finally, the vanishing point detected by the cascaded Hough transform is verified by checking whether the line connecting the vanishing point with a lane line in the near field coincides with most of the lane lines in the scene, as shown in fig. 1 and fig. 12: if it coincides, the detection is accurate; otherwise the error is large.
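This visual check can be scripted as below; a minimal sketch assuming OpenCV and NumPy, where the near-field lane-line base points are assumed to be picked by hand.

```python
import cv2
import numpy as np

def draw_vanishing_point_check(frame, vp, lane_base_points):
    """Draw a ray from each hand-picked near-field lane-line point to the
    detected vanishing point; if the rays hug the painted lane lines, the
    detection is accurate, otherwise the error is large."""
    vis = frame.copy()
    vp = tuple(int(round(c)) for c in vp)
    for p in lane_base_points:
        cv2.line(vis, tuple(int(round(c)) for c in p), vp, (0, 0, 255), 2)
    cv2.circle(vis, vp, 6, (0, 255, 0), -1)      # mark the vanishing point
    return vis
```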

Claims (5)

1. A vanishing point detection method based on the cascading Hough transform, comprising the following steps:
Step 1: collect road vehicle video and obtain the vehicle targets in each frame image; comprising the following sub-steps:
Step 11: collect a road vehicle video in which the foreground is the moving vehicles and the background is the road area, the non-road ground area and the sky;
Step 12: estimate the background of each frame image of the road vehicle video collected in step 11 with a Gaussian mixture model (GMM);
Step 13: obtain the foreground moving targets from the background of each frame obtained in step 12 by image differencing, and obtain the vehicle targets among the foreground moving targets of each frame by median filtering and a closing operation;
Step 2: perform Harris corner extraction on the vehicle targets of each frame image detected in step 1 to obtain the feature points on the vehicle targets of each frame;
Step 3: obtain straight vehicle trajectories from the vehicle targets of each frame obtained in step 1 and the feature points on the vehicle targets obtained in step 2; specifically:
using the per-frame vehicle-target images obtained in step 1 and the feature points on the vehicle targets obtained in step 2, an optical flow tracking algorithm is adopted; two adjacent frames of vehicle-target images together with the feature points of the earlier frame are the inputs of the algorithm, and the outputs are the corresponding positions of those feature points in the later frame and a flag indicating whether tracking succeeded; the initial feature points are the Harris corners of newly appeared targets, from which tracking starts, and thereafter the input feature points are the end points of existing tracks; after all inputs have been processed in this way, the set of outputs is the tracked straight vehicle trajectories;
Step 4: screen the straight vehicle trajectories obtained in step 3 and denote the set of screened straight trajectories as L;
Step 5: vote the screened straight-trajectory set L obtained in step 4 from image space into the diamond Hough space through the cascaded Hough transform, and obtain the coordinates of the maximum point of the vote;
Step 6: transform the coordinates of the maximum point obtained in step 5 back into image space to obtain the vanishing point coordinates in image space, completing the vanishing point detection.
2. The vanishing point detection method based on the cascading Hough transform of claim 1, wherein step 4 comprises the following sub-steps:
Step 41: screen the vehicle trajectories obtained in step 3, keeping those with enough track points;
Step 42: fit each trajectory retained in step 41 by least squares, finally screening out the set L of straight vehicle trajectories.
3. The vanishing point detection method based on the cascading Hough transform of claim 2, wherein in step 41 the vehicle trajectories obtained in step 3 are screened so that only trajectories containing more than 15 track points are retained.
4. The vanishing point detection method based on the cascading Hough transform of claim 1, wherein step 5 comprises: transform the screened straight-trajectory set L obtained in step 4 from image space into the diamond Hough space through the cascaded Hough transform; after rasterizing the diamond space, accumulate votes along the transformed trajectory lines; the cell with the highest vote is the maximum point of the diamond space, whose coordinates are obtained.
5. The vanishing point detection method based on the cascading Hough transform of claim 1, wherein step 2 comprises: classify the foreground vehicle targets of each frame obtained in step 1 into tracked targets and newly appeared targets; if a foreground target extracted in the current frame contains more than 3 tail nodes of existing tracks, it is considered an existing target, otherwise it is a newly appeared target; extract three Harris corners, i.e. feature points, from each newly appeared target, thereby obtaining the feature points on the vehicle targets of each frame image.
CN201811154229.5A 2018-09-30 2018-09-30 Vanishing point detection method based on cascading Hough transform Expired - Fee Related CN109446917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154229.5A CN109446917B (en) 2018-09-30 2018-09-30 Vanishing point detection method based on cascading Hough transform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811154229.5A CN109446917B (en) 2018-09-30 2018-09-30 Vanishing point detection method based on cascading Hough transform

Publications (2)

Publication Number Publication Date
CN109446917A CN109446917A (en) 2019-03-08
CN109446917B true CN109446917B (en) 2022-08-30

Family

ID=65546108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154229.5A Expired - Fee Related CN109446917B (en) 2018-09-30 2018-09-30 Vanishing point detection method based on cascading Hough transform

Country Status (1)

Country Link
CN (1) CN109446917B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222667B (en) * 2019-06-17 2023-04-07 南京大学 Open road traffic participant data acquisition method based on computer vision
CN110675362B (en) * 2019-08-16 2022-10-28 长安大学 Method for acquiring horizon under curved road monitoring environment
CN110909620A (en) * 2019-10-30 2020-03-24 北京迈格威科技有限公司 Vehicle detection method and device, electronic equipment and storage medium
CN111401248B (en) * 2020-03-17 2023-08-15 阿波罗智联(北京)科技有限公司 Sky area identification method and device, electronic equipment and storage medium
CN111798431B (en) * 2020-07-06 2023-09-15 苏州市职业大学 Real-time vanishing point detection method, device, equipment and storage medium
CN112132869A (en) * 2020-11-02 2020-12-25 中远海运科技股份有限公司 Vehicle target track tracking method and device
CN112598665B (en) * 2020-12-31 2022-05-06 北京深睿博联科技有限责任公司 Method and device for detecting vanishing points and vanishing lines of Manhattan scene
CN113781562B (en) * 2021-09-13 2023-08-04 山东大学 Lane line virtual-real registration and self-vehicle positioning method based on road model
CN116682209A (en) * 2023-06-15 2023-09-01 南昌交通学院 Automatic vending machine inventory management method and system based on machine vision


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608924A (en) * 2009-05-20 2009-12-23 电子科技大学 A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform
CN103871079A (en) * 2014-03-18 2014-06-18 南京金智视讯技术有限公司 Vehicle tracking method based on machine learning and optical flow
CN106101485A (en) * 2016-06-02 2016-11-09 中国科学技术大学 A kind of prospect track decision method based on feedback and device
CN107067755A (en) * 2017-04-28 2017-08-18 深圳市唯特视科技有限公司 A kind of method for calibrating traffic monitoring camera automatically based on computer vision
CN107977664A (en) * 2017-12-08 2018-05-01 重庆大学 A kind of road vanishing Point Detection Method method based on single image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Point and line parameterizations using parallel coordinates for hough transform";Marketa Dubska;《BRNO UNIVERSITY OF TECHNOLOGY DOCTORAL THESIS》;20140623;参见正文第4-5章 *
"Real projective plane mapping for detection of orthogonal vanishing points";Marketa Dubska;《Dubska》;20131231;全文 *
"车辆特征点3D参数估计及聚类算法研究";张茜婷;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170215;全文 *
Marketa Dubska."Point and line parameterizations using parallel coordinates for hough transform".《BRNO UNIVERSITY OF TECHNOLOGY DOCTORAL THESIS》.2014, *

Also Published As

Publication number Publication date
CN109446917A (en) 2019-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220830