CN113554646B - Intelligent urban road pavement detection method and system based on computer vision - Google Patents

Intelligent urban road pavement detection method and system based on computer vision Download PDF

Info

Publication number
CN113554646B
CN113554646B CN202111093287.3A CN202111093287A CN113554646B CN 113554646 B CN113554646 B CN 113554646B CN 202111093287 A CN202111093287 A CN 202111093287A CN 113554646 B CN113554646 B CN 113554646B
Authority
CN
China
Prior art keywords
pixel point
matching
image
reference image
next pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111093287.3A
Other languages
Chinese (zh)
Other versions
CN113554646A (en
Inventor
瞿夕凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengjin Decoration Group Co.,Ltd.
Original Assignee
Jiangsu Zhengjin Architectural Decoration Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Zhengjin Architectural Decoration Engineering Co ltd filed Critical Jiangsu Zhengjin Architectural Decoration Engineering Co ltd
Priority to CN202111093287.3A priority Critical patent/CN113554646B/en
Publication of CN113554646A publication Critical patent/CN113554646A/en
Application granted granted Critical
Publication of CN113554646B publication Critical patent/CN113554646B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of road surface detection, in particular to a method and a system for detecting road surfaces of smart cities based on computer vision. Acquiring a reference image and a contrast image of a road surface area to be detected, and preprocessing the reference image and the contrast image; carrying out edge detection on the reference image to obtain each edge area of the reference image; and matching pixel points of the reference image and the comparison image based on each edge region of the reference image, and determining the flatness condition of the pavement to be detected according to the matching result of the pixel points. The invention realizes the real-time detection of the road surface evenness, improves the accuracy of the road detection and shortens the time for the road surface detection.

Description

Intelligent urban road pavement detection method and system based on computer vision
Technical Field
The application relates to the field of road pavement detection, in particular to a method and a system for detecting road surfaces of smart cities based on computer vision.
Background
With the rapid development of society, vehicles in cities are increased, but road safety is becoming a concern. The influence of the road flatness on the vehicle running is relatively serious, and particularly, when an automobile runs on an uneven road at night, the automobile brings great trouble to a driver, and even traffic accidents can be caused. At present, most of small urban roads have the phenomenon of unevenness, so that the phenomenon of bumping and the like in the driving process of a vehicle can be caused, the chassis, the bumper and the like of the vehicle can be damaged in serious cases, and a large amount of time and money are wasted.
At present, road detection is usually manual detection or sensor detection, the limitation is large, the accuracy is low, manual detection wastes a large amount of manpower and material resources, phenomena such as false detection and missing detection are easy to occur, and the real-time performance is low; hardware equipment such as sensors is high in cost and complex in external environment, and the hardware equipment such as the sensors is prone to failure under the influence of the external environment, so that the conditions that detection is not timely or detection results are inaccurate are caused.
Disclosure of Invention
In order to solve the above technical problems, the present invention aims to provide a method and a system for detecting a road surface of a smart city based on computer vision, wherein the technical scheme adopted is as follows:
the invention provides a computer vision-based intelligent urban road surface detection method, which comprises the following specific steps:
an image acquisition step: acquiring binocular images of a road surface area to be detected, taking one of the binocular images as a reference image, and taking the other image as a contrast image;
image edge detection: carrying out edge detection on the reference image to obtain each edge area;
pixel point matching: acquiring a current pixel point in a reference image and a parallax value between the current pixel point and a matched pixel point in a comparison image;
judging whether a next pixel point of a current pixel point in the reference image is located in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point and a parallax gradient range according to a judgment result;
determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching pixel points in the comparison image and the parallax gradient range corresponding to the next pixel point;
matching a matching pixel point corresponding to the next pixel point in the comparison image according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image;
calculating a parallax value between a next pixel point in the reference image and a matching pixel point in the comparison image, repeating the pixel point matching step according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the comparison image, obtaining the matching pixel point of the next pixel point in the reference image in the comparison image and the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image, and further obtaining the matching pixel point of any one pixel point in the reference image in the comparison image;
a road surface detection step: and calculating a parallax value between any one pixel point in the reference image and a matching pixel point in the comparison image, and determining the road surface smoothness of the road surface area to be detected according to the parallax values corresponding to all the pixel points in the reference image.
Further, the step of determining the parallax gradient range corresponding to the next pixel point according to the judgment result includes:
when the next pixel point of the current pixel point in the reference image is located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 100002_DEST_PATH_IMAGE001
When the next pixel point of the current pixel point in the reference image is not located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 462355DEST_PATH_IMAGE002
Wherein the content of the first and second substances,
Figure 100002_DEST_PATH_IMAGE003
is a threshold value for the gradient of the parallax,
Figure 520310DEST_PATH_IMAGE004
the corresponding parallax gradient for the next pixel point.
Further, the step of determining the matching search range of the next pixel point in the comparison image includes:
when the next pixel point of the current pixel point in the reference image is located in any edge area, the matching search range of the next pixel point in the comparison image is
Figure 100002_DEST_PATH_IMAGE005
When the next pixel point of the current pixel point in the reference image is not positioned in any edge region, the matching search range of the next pixel point in the comparison image is
Figure 392451DEST_PATH_IMAGE006
Wherein the content of the first and second substances,
Figure 100002_DEST_PATH_IMAGE007
for the abscissa in the reference image of the current pixel,
Figure 385815DEST_PATH_IMAGE008
is the disparity value between the current pixel point in the reference image and the matching pixel point in the comparison image,
Figure 100002_DEST_PATH_IMAGE009
for the abscissa of the next pixel point in the comparison image,
Figure 243873DEST_PATH_IMAGE010
in order to set the maximum parallax,
Figure 100002_DEST_PATH_IMAGE011
to set the minimum parallax.
Further, the step of determining the size of the matching sliding window corresponding to the next pixel point according to the judgment result includes:
when the next pixel point of the current pixel point in the reference image is positioned in any edge region, reducing the size of an original matching sliding window corresponding to the next pixel point;
and when the next pixel point of the current pixel point in the reference image is not positioned in any edge region, increasing the size of the original matching sliding window corresponding to the next pixel point.
Further, according to the size of the matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image, the step of matching the matching pixel point corresponding to the next pixel point in the comparison image is as follows:
sliding the matching sliding window within the matching search range of the next pixel point in the comparison image according to the size of the matching sliding window corresponding to the next pixel point, and matching the next pixel point of the current pixel point in the reference image with each pixel point in the matching sliding window in the sliding process of the matching sliding window, so as to find each pixel point which is originally designed in the comparison image;
and respectively taking the next pixel point of the current pixel point in the reference image and the preliminarily-designed pixel point in the comparison image as the centers of the two matching sliding windows, carrying out one-to-one corresponding matching on other pixel points of the two matching sliding windows, screening out the best matching pixel point from all the preliminarily-designed pixel points according to the matching result, and taking the best matching pixel point as the matching pixel point of the next pixel point of the current pixel point in the reference image.
Further, the step of determining the road surface smoothness of the road surface area to be detected according to the parallax values corresponding to all the pixel points in the reference image comprises the following steps:
calculating the variance of the parallax values corresponding to all the pixel points in the reference image, carrying out normalization processing, and taking the variance obtained after the normalization processing as the final variance;
when the final variance is smaller than a first variance threshold value, judging that the road surface to be detected is of a first flatness grade;
when the final variance is more than or equal to the first variance threshold and less than or equal to the second variance threshold, judging that the road surface to be detected is of a second flatness grade;
and when the final variance is larger than a second variance threshold value, judging that the road surface to be detected is a third flatness grade, and sequentially reducing the road surface flatness corresponding to the first flatness grade, the second flatness grade and the third flatness grade.
Further, the image acquiring step further includes preprocessing the reference image and the contrast image, and the preprocessing includes:
denoising the reference image and the comparison image respectively, and sharpening the denoised reference image and the comparison image respectively;
and respectively carrying out filtering processing on the sharpened reference image and the sharpened comparison image by adopting a pyramid layer-by-layer filtering mode so as to reduce the resolution of the images.
The invention also provides a computer vision-based intelligent urban road surface detection system, which comprises:
an image acquisition module to: acquiring binocular images of a road surface area to be detected, taking one of the binocular images as a reference image, and taking the other image as a contrast image;
an image edge detection module to: carrying out edge detection on the reference image to obtain each edge area;
the pixel point matching module is used for acquiring a current pixel point in the reference image and a parallax value between the current pixel point and a matching point of the current pixel point in the comparison image;
judging whether a next pixel point of a current pixel point in the reference image is located in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point and a parallax gradient range according to a judgment result;
determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching points in the comparison image and the parallax gradient range corresponding to the next pixel point;
matching a matching pixel point corresponding to the next pixel point in the comparison image according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image;
calculating a parallax value between a next pixel point in the reference image and a matching pixel point in the comparison image, repeating the steps in the pixel point matching module according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the comparison image, obtaining the matching pixel point of the next pixel point in the reference image in the comparison image and the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image, and further obtaining the matching pixel point of any one pixel point in the reference image in the comparison image;
and the road surface detection module is used for calculating a parallax value between any one pixel point in the reference image and a matching pixel point in the comparison image, and determining the road surface leveling condition of the road surface area to be detected according to the parallax values corresponding to all the pixel points in the reference image.
Further, the step of determining the parallax gradient range corresponding to the next pixel point according to the judgment result includes:
when the next pixel point of the current pixel point in the reference image is located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 715306DEST_PATH_IMAGE001
When the next pixel point of the current pixel point in the reference image is not located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 973112DEST_PATH_IMAGE002
Wherein the content of the first and second substances,
Figure 262011DEST_PATH_IMAGE003
is a threshold value for the gradient of the parallax,
Figure 238057DEST_PATH_IMAGE004
in order to be a parallax gradient, the parallax gradient,
Figure 513181DEST_PATH_IMAGE012
is a parallax gradient range.
Further, the step of determining the matching search range of the next pixel point in the comparison image includes:
when the next pixel point of the current pixel point in the reference image is located in any edge area, the matching search range of the next pixel point in the comparison image is
Figure 891072DEST_PATH_IMAGE005
When in the reference imageWhen the next pixel point of the current pixel point is not located in any edge region, the matching search range of the next pixel point in the comparison image is
Figure 226239DEST_PATH_IMAGE006
Wherein the content of the first and second substances,
Figure 689581DEST_PATH_IMAGE007
for the abscissa in the reference image of the current pixel,
Figure 627450DEST_PATH_IMAGE008
is the disparity value between the current pixel point in the reference image and the matching pixel point in the comparison image,
Figure 125428DEST_PATH_IMAGE009
for the abscissa of the next pixel point in the comparison image,
Figure 365916DEST_PATH_IMAGE010
in order to set the maximum parallax,
Figure 316555DEST_PATH_IMAGE011
to set the minimum parallax.
The invention has the following beneficial effects: the method comprises the steps of judging whether a pixel point to be matched, namely a next pixel point, is located in an edge area of a reference image, determining the size of a proper matching sliding window and a parallax gradient range according to a judgment result, determining the matching search range of the next pixel point in a comparison image according to the parallax gradient range, and finally finding the matching pixel point of the pixel point to be matched in the comparison image according to the size of the matching sliding window and the matching search range of the next pixel point in the comparison image. According to the invention, the size of the matching sliding window with proper size and the reasonable matching search range can be determined according to the position condition of the pixel point to be matched, so that the matching calculation amount can be effectively reduced while the matching pixel point of the pixel point to be matched is found, the matching accuracy and rapidity of the pixel point are improved, the detection and analysis of irrelevant matching pixel points are avoided, the overall progress of system detection is improved, and the detection time is shortened.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method for detecting road surfaces of smart cities based on computer vision according to the present invention;
FIG. 2 is a epipolar line diagram of an embodiment of the intelligent urban road pavement detection method and system based on computer vision according to the invention;
FIG. 3 is a detailed matching procedure of a pixel point matching module of the intelligent urban road pavement detection method and system based on computer vision according to the present invention;
wherein 01 is a three-dimensional space point; 02 is one of binocular images; 03 is the matching search range of the pixel points to be matched; 04 is a camera on one side of the road to be detected; 05 is the other of the binocular images; 06 is the matching search range of the pixel points to be matched; 07 is a camera on the other side of the road to be detected; 08 is the epipolar plane.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the method for visually measuring the production information of a structural member according to the present invention, the specific implementation, structure, features and effects thereof will be given with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" refers to not necessarily the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the intelligent urban road pavement detection method based on computer vision in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a smart urban road surface detection method based on computer vision according to an embodiment of the present invention is shown, the method including the following steps:
step (ii) of
Figure 100002_DEST_PATH_IMAGE013
: and an image acquisition step, namely acquiring binocular images of the road surface area to be detected, taking one of the binocular images as a reference image, and taking the other image as a contrast image.
A plurality of cameras are arranged beside a road to acquire binocular images of the road surface to be detected, one of the binocular images is used as a reference image, and the other image is used as a contrast image. The arrangement of road camera is set for according to actual demand by oneself, but need guarantee that binocular image gathers the acquisition based on the camera under the different visual angles to the image of gathering is to same road surface region on the road, and the same road surface region that just so says that waits to detect needs to appear in binocular image simultaneously. In the embodiment, in order to acquire binocular images, a plurality of cameras are arranged on one side of a road at equal intervals, and one camera is correspondingly arranged on the right opposite side of the road of each camera. In order to obtain binocular images of all road surface areas of the whole road, the intervals of a plurality of cameras on one side of the road cannot be too large, and it is necessary that the shots of adjacent cameras at equal intervals have overlapping areas.
The image acquisition step further comprises preprocessing the reference image and the contrast image, wherein the preprocessing comprises the following steps: firstly, the image is denoised and sharpened, so that the influence of external noise and illumination in the image acquisition process is eliminated, and meanwhile, the texture information of the image edge can be enhanced, and the region with poor matching effect in the image matching process is conveniently eliminated in the follow-up process. In this embodiment, the image denoising processing method adopts a median filtering denoising algorithm, and the image sharpening processing method adopts a laplacian sharpening operator. Of course, as other embodiments, the image denoising algorithm may also adopt a gaussian filter algorithm, a mean value filter algorithm, and the like, and the image sharpening processing algorithm may also adopt a sobel operator, a gradient sharpening, a differential sharpening algorithm, and the like. It should be noted that, the processing methods for denoising and sharpening the image and the implementation process of the algorithm are all the prior art, and are not in the scope of the present invention, and no relevant description is provided here.
Then, in order to highlight the image texture information during image matching, the search range in the matching process is reduced, the retrieval speed is improved, and the low resolution processing is performed on the de-noised and sharpened original image. And (3) carrying out pyramid layer-by-layer filtering on the denoised and sharpened original image, wherein the original image needs to be filtered layer by layer, so that the purpose of reducing the resolution of the original image layer by layer is achieved, and finally, a multi-layer filtered low-resolution image is obtained. It is stated that how many filtering layers are selected can be set according to the quality of the image collected by the camera, and the filtering layers are set to be 3 layers in this embodiment. There are many image low-resolution processing methods, and when the pyramid layer-by-layer filtering method is adopted in this embodiment, the filtering algorithm therein adopts a gaussian filtering algorithm. Pyramid level-by-level filtering and gaussian filtering algorithms are prior art and are not within the scope of the present invention, and are not described here.
Step (ii) of
Figure 933481DEST_PATH_IMAGE014
: and an image edge detection step, wherein edge detection is carried out on the reference image to obtain each edge area. For the extraction of the edge region, the Sobel operator is adopted to extract the image edge, and all the edge regions in the reference image are obtained. The Sobel operator edge algorithm is a well-known technology, is not within the protection scope of the present invention, and is not further described herein.
Step (ii) of
Figure DEST_PATH_IMAGE015
: a pixel point matching step, namely acquiring a current pixel point in the reference image and a parallax value between the current pixel point and a matching pixel point in the comparison image; judging whether a next pixel point of a current pixel point in the reference image is located in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point and a parallax gradient range according to a judgment result; determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching pixel points in the comparison image and the parallax gradient range corresponding to the next pixel point; matching a matching pixel point corresponding to the next pixel point in the comparison image according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image; and calculating a parallax value between the next pixel point in the reference image and the matching pixel point in the contrast image, repeating the pixel point matching step according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the contrast image, obtaining the matching pixel point of the next pixel point in the reference image in the contrast image and the parallax value between the next pixel point in the reference image and the matching pixel point in the contrast image, and further obtaining the matching pixel point of any one pixel point in the reference image in the contrast image.
As shown in fig. 3, the specific steps of the pixel matching step are as follows:
step (ii) of
Figure 145019DEST_PATH_IMAGE016
: firstly, acquiring the positions of an initial pixel point in a reference image and a matching pixel point of the initial pixel point in a comparison image in an image marking mode, and obtaining the matching pixel point of the initial pixel point and the matching pixel point in the comparison image according to the positions of the initial pixel point in the reference image and the matching pixel point in the comparison imageAnd matching parallax values among the pixels. For convenience of the following description, the initial pixel point in the reference image is referred to as a current pixel point.
Step (ii) of
Figure DEST_PATH_IMAGE017
: and judging whether the next pixel point of the current pixel point in the reference image is positioned in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point of the current pixel point and the parallax gradient range according to the judgment result.
Step (ii) of
Figure 556409DEST_PATH_IMAGE018
: and judging whether the next pixel point of the current pixel point in the reference image is positioned in any edge area or not according to each edge area extracted in the image edge detection step.
Step (ii) of
Figure DEST_PATH_IMAGE019
: according to the steps
Figure 728764DEST_PATH_IMAGE018
Determining the size of a matching sliding window corresponding to the next pixel point of the current pixel point according to the judgment result, wherein the specific contents are as follows:
in the process of matching the road image pixel points, the matching of a single pixel point can be affected by illumination change and visual angle change, the robustness is poor, and in order to improve the matching precision of the road image pixel points, the embodiment adopts a sliding window mode for matching. When the fixed window is adopted for sliding window matching, a large number of wrong matching results can appear, and misjudgment of the pavement evenness is further caused, so that the matching of the pavement image pixel points adopts a sliding window matching mode.
The pixel points are located in different areas of the reference image, and the parallax variation of the pixel points is different. When the pixel points are in the edge area of the image, the parallax change is large, and the boundary is clear; when the pixel point is in the non-edge area of the image, the parallax change is relatively gentle. And dynamically adjusting the size of the sliding window according to the parallax change condition of the pixel points. For the edge area with large parallax change in the image, a sliding window with a small size is selected, so that the details of the edge area are richer; and selecting a sliding window with a larger size for an image area with smooth parallax change in the image, and increasing the texture features of the image contained in the window.
According to the steps
Figure 509901DEST_PATH_IMAGE018
The judging result determines whether the next pixel point is in the edge area, and then the size of the sliding matching window is dynamically adjusted in the next pixel point matching process, wherein the specific adjusting process is as follows:
when the next pixel point of the current pixel point in the reference image is located in the edge region, reducing the size of an original matching sliding window corresponding to the next pixel point;
and when the next pixel point of the current pixel point in the reference image is positioned in the non-edge region, increasing the size of the original matching sliding window corresponding to the next pixel point.
The specific implementation process is as follows: first, an original sliding window size is set to
Figure 982470DEST_PATH_IMAGE020
And the road surface image information can be set according to the road surface image information. Judging the region of the current pixel point in the image through an edge detection algorithm, and dynamically adjusting a sliding window on the basis:
Figure 564762DEST_PATH_IMAGE022
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE023
in order to be the original size of the sliding window,
Figure 224413DEST_PATH_IMAGE024
is a current pixel point in the reference image
Figure DEST_PATH_IMAGE025
The size of the window after the adjustment is carried out,
Figure 42196DEST_PATH_IMAGE026
to adjust the factor, the present embodiment will adjust the factor
Figure 369272DEST_PATH_IMAGE026
Set to 4.
Step (ii) of
Figure DEST_PATH_IMAGE027
: determining a parallax gradient range corresponding to a next pixel point of the current pixel point according to the judgment result of the edge area, and specifically comprising the following steps:
in this embodiment, a parallax gradient analysis model is constructed for detecting a parallax gradient of a pixel point next to a current pixel point in an image. Analyzing the search range of the matched pixel points according to the obtained parallax gradient range, and reducing the detection amount, wherein the determination process of the parallax gradient range is as follows:
firstly, two adjacent pixel points are selected from a three-dimensional scene
Figure DEST_PATH_IMAGE028
Figure DEST_PATH_IMAGE029
In the reference image and the contrast image respectively
Figure 856886DEST_PATH_IMAGE030
Figure DEST_PATH_IMAGE031
And
Figure DEST_PATH_IMAGE032
Figure 862888DEST_PATH_IMAGE033
the corresponding coordinate points are respectively
Figure DEST_PATH_IMAGE034
Figure 359728DEST_PATH_IMAGE035
And
Figure DEST_PATH_IMAGE036
Figure 134786DEST_PATH_IMAGE037
then, a parallax gradient calculation model is constructed as follows:
Figure 590038DEST_PATH_IMAGE039
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE040
is the parallax gradient of the next pixel point of the current pixel point, | | | | is a norm,
Figure 958703DEST_PATH_IMAGE041
is a pixel point
Figure 524813DEST_PATH_IMAGE030
The parallax between the reference image and the contrast image can be described as
Figure 295323DEST_PATH_IMAGE008
Figure DEST_PATH_IMAGE042
Is a pixel point
Figure 279066DEST_PATH_IMAGE031
The parallax between the reference image and the contrast image can be described as
Figure 400606DEST_PATH_IMAGE043
. The following are made for the parallax gradient modelProcessing:
Figure 239249DEST_PATH_IMAGE045
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE046
Figure 129845DEST_PATH_IMAGE047
Figure DEST_PATH_IMAGE048
because two adjacent pixel points on the reference image
Figure 520375DEST_PATH_IMAGE030
Figure 129211DEST_PATH_IMAGE031
The coordinate relationship is as follows:
Figure 37124DEST_PATH_IMAGE049
Figure DEST_PATH_IMAGE050
the final parallax gradient model is therefore:
Figure DEST_PATH_IMAGE052
wherein, it is known that two adjacent pixels on the reference image have sequence and are impossible to coincide with each other when matching pixels in the comparison image, so
Figure 641280DEST_PATH_IMAGE053
Is greater than zero. According to the final parallax gradient model, the following steps are carried out
Figure 343657DEST_PATH_IMAGE053
Increasing the function value and decreasing it, so that the parallax is observed in the model analysisThe range of the gradient is
Figure DEST_PATH_IMAGE054
Step (ii) of
Figure 174210DEST_PATH_IMAGE055
: according to the parallax gradient range
Figure 354656DEST_PATH_IMAGE054
Determining pixel points in a reference image
Figure 485423DEST_PATH_IMAGE031
The corresponding parallax gradient value range when the parallax gradient value range is located in the edge region or not located in the edge region includes the following specific contents:
when the pixel point in the reference image
Figure 483334DEST_PATH_IMAGE031
When located in the region of any one of the edges,
Figure 801183DEST_PATH_IMAGE031
the parallax gradient value range of
Figure DEST_PATH_IMAGE056
When the pixel point in the reference image
Figure 785320DEST_PATH_IMAGE031
When it is not located in any one of the edge regions,
Figure 505014DEST_PATH_IMAGE031
the parallax gradient value range of
Figure 814773DEST_PATH_IMAGE057
Wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE058
is a threshold value for the gradient of the parallax,
Figure 980437DEST_PATH_IMAGE040
the parallax gradient of the next pixel point of the current pixel point. For parallax gradient threshold
Figure 33844DEST_PATH_IMAGE058
The setting of (c) can be selected according to the common parallax gradient, in this embodiment, the parallax gradient threshold is set to
Figure 608045DEST_PATH_IMAGE059
Step (ii) of
Figure DEST_PATH_IMAGE060
: determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching pixel points in the comparison image and the parallax gradient range corresponding to the next pixel point, wherein the determining step comprises the following steps:
firstly, the position of the opposite polar line of the pixel point in the binocular image is determined. In the matching process of the image pixel points, the projection points of one pixel point in the three-dimensional space in the images collected by the two cameras are always on the epipolar line, so that a fixed maximum matching search range can be determined on the epipolar line based on the projection points. The epipolar line is an intersection line of an epipolar plane and a drawing plane, the epipolar plane is a plane containing a base line, and the base line is a connecting line of light spots of the two cameras.
To facilitate the understanding of epipolar lines, fig. 2 shows that 01 is a pixel point in a three-dimensional space, and projection points in binocular images 05 and 02 formed by 07 cameras and 04 cameras are necessarily located on the epipolar lines, so that the matching search range of the pixel point to be matched can be 06 and 03.
Then, based on the position area of the epipolar line, the search range of the next pixel point in the contrast image is determined. Because the parallax changes among the pixel points are correlated, the maximum matching search range does not need to be completely searched and point-by-point matching analysis is not needed in the matching search process. And determining the matching search range of the next pixel point of the current pixel point based on the parallax gradient range of the next pixel point of the current pixel point, the parallax value of the current pixel point and the current pixel point in the reference image. According to the steps
Figure 682180DEST_PATH_IMAGE027
The parallax gradient model of
Figure 974621DEST_PATH_IMAGE061
And
Figure DEST_PATH_IMAGE062
then, the search range prediction model is expressed as:
Figure DEST_PATH_IMAGE064
wherein the content of the first and second substances,
Figure 300560DEST_PATH_IMAGE065
for the next pixel point to be the matching pixel point on the comparison image,
Figure DEST_PATH_IMAGE066
is the current pixel point on the reference image,
Figure 588322DEST_PATH_IMAGE008
is a pixel point
Figure 239883DEST_PATH_IMAGE030
The disparity between the reference image and the contrast image,
Figure 19620DEST_PATH_IMAGE040
the parallax gradient of the next pixel point of the current pixel point.
According to the current pixel point in the reference image, the value range of the parallax gradient of the next pixel point of the current pixel point
Figure 149250DEST_PATH_IMAGE054
The judgment of the position area of the next pixel point of the current pixel point and the parallax gradient of the next pixel point of the current pixel point determine the matching search range of the next pixel point in the comparison image, and the specific content comprises the following steps:
when the next pixel point of the current pixel point in the reference image is located in any edge area, the matching search range of the next pixel point in the comparison image is
Figure 698043DEST_PATH_IMAGE067
When the next pixel point of the initial pixel points in the reference image is not positioned in any edge region, the matching search range of the next pixel point in the comparison image is
Figure DEST_PATH_IMAGE068
Wherein the content of the first and second substances,
Figure 379560DEST_PATH_IMAGE066
for the abscissa in the reference image of the current pixel,
Figure 381014DEST_PATH_IMAGE008
is the disparity value between the current pixel point in the reference image and the matching pixel point in the comparison image,
Figure 314335DEST_PATH_IMAGE065
for the abscissa of the next pixel point in the comparison image,
Figure 717635DEST_PATH_IMAGE069
in order to set the maximum parallax,
Figure DEST_PATH_IMAGE070
to set the minimum parallax.
Step 33: according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image, matching the matching pixel point corresponding to the next pixel point in the comparison image, and the specific steps are as follows:
firstly, according to the size of a matching sliding window of a next pixel point in a comparison image, sliding the matching sliding window in a matching search range of the comparison image of the next pixel point. And in the sliding process of the matching sliding window, matching the next pixel point of the current pixel point in the reference image with each pixel point on the matching sliding window.
And then, screening according to the matching similarity between the next pixel point and each pixel point of the matching sliding window, determining each pixel point with proper matching similarity, and taking each pixel point as a primary pixel point. In the matching process, the distance measurement matching method is used as a determination method for detecting the matching similarity, that is, the absolute value of the gray difference between the next pixel point and each pixel point on the matching sliding window is used as a distance to determine, and each pixel point with the distance not exceeding the set similarity threshold is a tentative pixel point.
Of course, other embodiments may also employ: a normalized correlation coefficient matching method, a distance measurement method, a correlation coefficient matching method, a square error matching method, and the like. The method for measuring the similarity is a well-known technology and is not within the protection scope of the present invention, and the relevant description is not provided herein.
And finally, respectively taking the next pixel point in the reference image and one pixel point of the initial pixel point in the comparison image as the centers of the two matching sliding windows, and correspondingly matching other pixel points which do not comprise the center pixel points of the two matching sliding windows one by one.
And continuously repeating the matching steps of other pixel points, and screening out the optimal pixel point from the preliminary pixel points, wherein the optimal pixel point refers to that when the optimal pixel point is taken as the center of a matching sliding window, all pixel points in the matching sliding window are in one-to-one correspondence with all pixel points in the matching sliding window taking the next pixel point in the reference image as the center, and the matching similarity of each pair of one-to-one corresponding pixel points is greater than the similarity threshold. The best pixel point is the best matching pixel point of the next pixel point in the reference image in the comparison image.
Step (ii) of
Figure 68588DEST_PATH_IMAGE073
: and obtaining the matching pixel point of any one pixel point in the reference image in the comparison image according to the best matching pixel point of the next pixel point in the reference image.
And calculating the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image according to the optimal matching pixel point of the next pixel point in the reference image. And then, repeating the steps of the pixel point matching module according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the comparison image, and obtaining the matching pixel point of the next pixel point in the reference image in the comparison image and the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image. And finally, determining that any pixel point in the reference image is matched with a pixel point in the comparison image and determining an integer parallax image of the road pavement to be detected.
The above steps
Figure DEST_PATH_IMAGE074
And 3, actually, repeating the steps of the pixel point matching module based on an iteration mode according to a matching pixel point of a certain pixel point in the reference image in the comparison image and a parallax value between the pixel point in the reference image and the comparison image, and obtaining a matching pixel point of a next pixel point of the certain pixel point in the reference image in the comparison image and a parallax value corresponding to a next pixel point of the certain pixel point in the reference image. Therefore, the matching pixel point and the parallax value of any pixel point in the reference image in the comparison image are determined.
Step (ii) of
Figure 26180DEST_PATH_IMAGE074
4: road surface inspection step, rootAnd determining the flatness condition of the road pavement to be detected according to the parallax value between any one pixel point in the reference image and the matching pixel point in the comparison image.
Determining the variance corresponding to the parallax value according to the calculated parallax values corresponding to all the pixel points in the reference image
Figure 28771DEST_PATH_IMAGE075
And carrying out normalization processing, taking the variance obtained after the normalization processing as the final variance, and setting a first variance threshold value as
Figure DEST_PATH_IMAGE076
The second variance threshold is
Figure 145632DEST_PATH_IMAGE077
At this time, there are:
when the final variance is less than the first variance threshold
Figure 309897DEST_PATH_IMAGE076
Judging that the road surface to be detected is of a first flatness grade, and considering that the road flatness of the area is high and flat;
when the final variance is greater than or equal to the first variance threshold
Figure 285943DEST_PATH_IMAGE076
And is less than or equal to the second variance threshold
Figure 561067DEST_PATH_IMAGE077
Judging that the road surface to be detected is of a second flatness grade, considering that the road flatness of the area is general, and slightly fluctuating on the road surface;
when the final variance is greater than the second variance threshold
Figure 938958DEST_PATH_IMAGE077
Judging that the road surface to be detected is of a third flatness grade, and considering that the road surface in the area has bulges or pits with larger amplitude;
the first flatness level, the second flatness level, and the third flatness level are threshold values set according to specific road flatness information, and the road flatness corresponding to each level is sequentially reduced. When the flatness grade of the road surface is detected to be higher, the system can timely give a corresponding alarm prompt to a driver or an urban road manager, avoid or overhaul the road as soon as possible, and prevent traffic accidents caused by the unevenness of the road surface.
This embodiment still provides a wisdom urban road surface detecting system based on computer vision, includes:
an image acquisition module to: acquiring binocular images of a road surface area to be detected, taking one of the binocular images as a reference image, and taking the other image as a contrast image;
an image edge detection module to: carrying out edge detection on the reference image to obtain each edge area;
the pixel point matching module is used for acquiring an initial pixel point in the reference image and a parallax value between the initial pixel point and a matching point of the initial pixel point in the comparison image;
judging whether a next pixel point of a current pixel point in the reference image is located in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point and a parallax gradient range according to a judgment result;
determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching points in the comparison image and the parallax gradient range corresponding to the next pixel point;
matching a matching pixel point corresponding to the next pixel point in the comparison image according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image;
calculating a parallax value between a next pixel point in the reference image and a matching pixel point in the comparison image, repeating the steps in the pixel point matching module according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the comparison image, obtaining the matching pixel point of the next pixel point in the reference image in the comparison image and the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image, and further obtaining the matching pixel point of any one pixel point in the reference image in the comparison image;
and the road surface detection module is used for calculating a parallax value between any one pixel point in the reference image and a matching pixel point in the comparison image, and determining the road surface leveling condition of the road surface area to be detected according to the parallax values corresponding to all the pixel points in the reference image.
All modules in the system are matched with each other, and the purpose is to realize a computer vision-based intelligent city road surface detection method.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A smart urban road pavement detection method based on computer vision is characterized by comprising the following specific steps:
an image acquisition step: acquiring binocular images of a road surface area to be detected, taking one of the binocular images as a reference image, and taking the other image as a contrast image;
image edge detection: carrying out edge detection on the reference image to obtain each edge area;
pixel point matching: acquiring a current pixel point in a reference image and a parallax value between the current pixel point and a matched pixel point in a comparison image;
judging whether a next pixel point of a current pixel point in the reference image is located in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point and a parallax gradient range according to a judgment result;
determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching pixel points in the comparison image and the parallax gradient range corresponding to the next pixel point;
matching a matching pixel point corresponding to the next pixel point in the comparison image according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image;
calculating a parallax value between a next pixel point in the reference image and a matching pixel point in the comparison image, repeating the pixel point matching step according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the comparison image, obtaining the matching pixel point of the next pixel point in the reference image in the comparison image and the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image, and further obtaining the matching pixel point of any one pixel point in the reference image in the comparison image;
a road surface detection step: and calculating a parallax value between any one pixel point in the reference image and a matching pixel point in the comparison image, and determining the road surface smoothness of the road surface area to be detected according to the parallax values corresponding to all the pixel points in the reference image.
2. The method of claim 1, wherein the step of determining a parallax gradient range corresponding to a pixel next to a current pixel according to the determination result comprises:
when the next pixel point of the current pixel point in the reference image is located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure DEST_PATH_IMAGE001
When the next pixel point of the current pixel point in the reference image is not located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 254167DEST_PATH_IMAGE002
Wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE003
is a threshold value for the gradient of the parallax,
Figure 812187DEST_PATH_IMAGE004
and the parallax gradient corresponding to the next pixel point of the current pixel point.
3. The method of claim 2, wherein the step of determining the matching search range of the next pixel point in the comparison image comprises:
when the next pixel point of the current pixel point in the reference image is positioned at any edgeWhen the area is in the region, the matching search range of the next pixel point in the comparison image is
Figure DEST_PATH_IMAGE005
When the next pixel point of the current pixel point in the reference image is not positioned in any edge region, the matching search range of the next pixel point in the comparison image is
Figure 262760DEST_PATH_IMAGE006
Wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE007
for the abscissa in the reference image of the current pixel,
Figure 160309DEST_PATH_IMAGE008
is the disparity value between the current pixel point in the reference image and the matching pixel point in the comparison image,
Figure DEST_PATH_IMAGE009
for the abscissa of the next pixel point in the comparison image,
Figure 156078DEST_PATH_IMAGE010
in order to set the maximum parallax,
Figure DEST_PATH_IMAGE011
to set the minimum parallax.
4. The computer vision-based intelligent urban road pavement detection method according to any one of claims 1-3, wherein the step of determining the size of the matching sliding window corresponding to the next pixel point according to the judgment result comprises:
when the next pixel point of the current pixel point in the reference image is positioned in any edge region, reducing the size of an original matching sliding window corresponding to the next pixel point;
and when the next pixel point of the current pixel point in the reference image is not positioned in any edge region, increasing the size of the original matching sliding window corresponding to the next pixel point.
5. The method of claim 4, wherein the step of matching the matching pixel point corresponding to the next pixel point in the comparison image according to the size of the matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image comprises:
according to the size of a matching sliding window corresponding to the next pixel point, sliding the matching sliding window within the matching search range in the comparison image of the next pixel point, and in the sliding process of the matching sliding window, matching the next pixel point of the current pixel point in the reference image with each pixel point in the matching sliding window, so as to find each pixel point which is originally designed in the comparison image;
and respectively taking the next pixel point of the current pixel point in the reference image and the preliminarily-designed pixel point in the comparison image as the centers of the two matching sliding windows, carrying out one-to-one corresponding matching on other pixel points of the two matching sliding windows, screening out the best matching pixel point from all the preliminarily-designed pixel points according to the matching result, and taking the best matching pixel point as the matching pixel point of the next pixel point of the current pixel point in the reference image.
6. The method for detecting the road surface of a smart city road based on computer vision as claimed in any one of claims 1-3, wherein the step of determining the road surface smoothness of the road surface area to be detected according to the parallax values corresponding to all pixel points in the reference image comprises:
calculating the variance of the parallax values corresponding to all the pixel points in the reference image, carrying out normalization processing, and taking the variance obtained after the normalization processing as the final variance;
when the final variance is smaller than a first variance threshold value, judging that the road surface to be detected is of a first flatness grade;
when the final variance is more than or equal to the first variance threshold and less than or equal to the second variance threshold, judging that the road surface to be detected is of a second flatness grade;
and when the final variance is larger than a second variance threshold value, judging that the road surface to be detected is a third flatness grade, and sequentially reducing the road surface flatness corresponding to the first flatness grade, the second flatness grade and the third flatness grade.
7. A method for intelligent city road pavement detection based on computer vision according to any one of claims 1-3, wherein the image acquisition step further comprises preprocessing the reference image and the comparison image, the preprocessing comprising:
denoising the reference image and the comparison image respectively, and sharpening the denoised reference image and the comparison image respectively;
and respectively carrying out filtering processing on the sharpened reference image and the sharpened comparison image by adopting a pyramid layer-by-layer filtering mode so as to reduce the resolution of the images.
8. The utility model provides a wisdom urban road surface detecting system based on computer vision which characterized in that includes:
an image acquisition module to: acquiring binocular images of a road surface area to be detected, taking one of the binocular images as a reference image, and taking the other image as a contrast image;
an image edge detection module to: carrying out edge detection on the reference image to obtain each edge area;
the pixel point matching module is used for acquiring an initial pixel point in the reference image and a parallax value between the initial pixel point and a matching point of the initial pixel point in the comparison image;
judging whether a next pixel point of a current pixel point in the reference image is located in any edge region, and determining the size of a matching sliding window corresponding to the next pixel point and a parallax gradient range according to a judgment result;
determining the matching search range of the next pixel point in the comparison image according to the current pixel point in the reference image, the parallax value of the current pixel point between the matching points in the comparison image and the parallax gradient range corresponding to the next pixel point;
matching a matching pixel point corresponding to the next pixel point in the comparison image according to the size of a matching sliding window corresponding to the next pixel point and the matching search range of the next pixel point in the comparison image;
calculating a parallax value between a next pixel point in the reference image and a matching pixel point in the comparison image, repeating the steps in the pixel point matching module according to the next pixel point in the reference image and the parallax value between the next pixel point and the matching pixel point in the comparison image, obtaining the matching pixel point of the next pixel point in the reference image in the comparison image and the parallax value between the next pixel point in the reference image and the matching pixel point in the comparison image, and further obtaining the matching pixel point of any one pixel point in the reference image in the comparison image;
and the road surface detection module is used for calculating a parallax value between any one pixel point in the reference image and a matching pixel point in the comparison image, and determining the road surface leveling condition of the road surface area to be detected according to the parallax values corresponding to all the pixel points in the reference image.
9. The system of claim 8, wherein the step of determining the corresponding parallax gradient range of the next pixel point according to the determination result comprises:
when the next pixel point of the current pixel point in the reference image is located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 466974DEST_PATH_IMAGE001
When the next pixel point of the current pixel point in the reference image is not located in any edge region, the value range of the parallax gradient corresponding to the next pixel point is
Figure 65445DEST_PATH_IMAGE002
Wherein the content of the first and second substances,
Figure 879817DEST_PATH_IMAGE003
is a threshold value for the gradient of the parallax,
Figure 295755DEST_PATH_IMAGE004
in order to be a parallax gradient, the parallax gradient,
Figure 562789DEST_PATH_IMAGE012
is a parallax gradient range.
10. The computer vision based intelligent urban road pavement detection system according to claim 9, wherein the step of determining the matching search range of the next pixel point in the comparison image comprises:
when the next pixel point of the current pixel point in the reference image is located in any edge area, the matching search range of the next pixel point in the comparison image is
Figure 27268DEST_PATH_IMAGE005
When the next pixel point of the current pixel point in the reference image is not positioned in any edge region, the matching search range of the next pixel point in the comparison image is
Figure 633830DEST_PATH_IMAGE006
Wherein the content of the first and second substances,
Figure 158352DEST_PATH_IMAGE007
for the abscissa in the reference image of the current pixel,
Figure 912681DEST_PATH_IMAGE008
is the disparity value between the current pixel point in the reference image and the matching pixel point in the comparison image,
Figure DEST_PATH_IMAGE013
the abscissa of the next pixel in the comparison image and,
Figure 993901DEST_PATH_IMAGE014
in order to set the maximum parallax,
Figure 720549DEST_PATH_IMAGE011
to set the minimum parallax.
CN202111093287.3A 2021-09-17 2021-09-17 Intelligent urban road pavement detection method and system based on computer vision Active CN113554646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111093287.3A CN113554646B (en) 2021-09-17 2021-09-17 Intelligent urban road pavement detection method and system based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111093287.3A CN113554646B (en) 2021-09-17 2021-09-17 Intelligent urban road pavement detection method and system based on computer vision

Publications (2)

Publication Number Publication Date
CN113554646A CN113554646A (en) 2021-10-26
CN113554646B true CN113554646B (en) 2021-12-10

Family

ID=78134648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111093287.3A Active CN113554646B (en) 2021-09-17 2021-09-17 Intelligent urban road pavement detection method and system based on computer vision

Country Status (1)

Country Link
CN (1) CN113554646B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913477A (en) * 2022-05-06 2022-08-16 广州市城市规划勘测设计研究院 Urban pipeline excavation prevention early warning method, device, equipment and medium
CN114821512B (en) * 2022-06-22 2022-09-06 托伦斯半导体设备启东有限公司 Working road surface abnormity detection and path optimization method based on computer vision
CN116010642B (en) * 2023-03-27 2023-06-20 北京滴普科技有限公司 Quick seal query method and system based on HOG characteristics

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228110A (en) * 2016-07-07 2016-12-14 浙江零跑科技有限公司 A kind of barrier based on vehicle-mounted binocular camera and drivable region detection method
CN108197590A (en) * 2018-01-22 2018-06-22 海信集团有限公司 A kind of pavement detection method, apparatus, terminal and storage medium
CN111243003A (en) * 2018-11-12 2020-06-05 海信集团有限公司 Vehicle-mounted binocular camera and method and device for detecting road height limiting rod

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228138A (en) * 2016-07-26 2016-12-14 国网重庆市电力公司电力科学研究院 A kind of Road Detection algorithm of integration region and marginal information
CN106446785A (en) * 2016-08-30 2017-02-22 电子科技大学 Passable road detection method based on binocular vision
CN109753858A (en) * 2017-11-07 2019-05-14 北京中科慧眼科技有限公司 A kind of road barricade object detecting method and device based on binocular vision
CN109522833A (en) * 2018-11-06 2019-03-26 深圳市爱培科技术股份有限公司 A kind of binocular vision solid matching method and system for Road Detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228110A (en) * 2016-07-07 2016-12-14 浙江零跑科技有限公司 A kind of barrier based on vehicle-mounted binocular camera and drivable region detection method
CN108197590A (en) * 2018-01-22 2018-06-22 海信集团有限公司 A kind of pavement detection method, apparatus, terminal and storage medium
CN111243003A (en) * 2018-11-12 2020-06-05 海信集团有限公司 Vehicle-mounted binocular camera and method and device for detecting road height limiting rod

Also Published As

Publication number Publication date
CN113554646A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN113554646B (en) Intelligent urban road pavement detection method and system based on computer vision
CN107341453B (en) Lane line extraction method and device
CN107330376B (en) Lane line identification method and system
CN105488454B (en) Front vehicles detection and ranging based on monocular vision
US11989951B2 (en) Parking detection method, system, processing device and storage medium
CN107462223B (en) Automatic measuring device and method for sight distance of vehicle before turning on highway
US20200041284A1 (en) Map road marking and road quality collecting apparatus and method based on adas system
CN110647850A (en) Automatic lane deviation measuring method based on inverse perspective principle
CN104916163B (en) Parking space detection method
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
Labayrade et al. In-vehicle obstacles detection and characterization by stereovision
GB2569751A (en) Static infrared thermal image processing-based underground pipe leakage detection method
CN106778551B (en) Method for identifying highway section and urban road lane line
CN104574393A (en) Three-dimensional pavement crack image generation system and method
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN112991369B (en) Method for detecting outline size of running vehicle based on binocular vision
CN107832674B (en) Lane line detection method
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN117094914B (en) Smart city road monitoring system based on computer vision
CN110733416B (en) Lane departure early warning method based on inverse perspective transformation
CN111598845A (en) Pavement crack detection and positioning method based on deep learning and NEO-6M positioning module
CN114842340A (en) Robot binocular stereoscopic vision obstacle sensing method and system
Dhiman et al. A multi-frame stereo vision-based road profiling technique for distress analysis
CN114724094A (en) System for measuring number of people in gateway vehicle based on three-dimensional image and radar technology
CN112598743B (en) Pose estimation method and related device for monocular vision image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 226000 room 104, building 19, chengjiayuan, Qinghe Road, high tech Zone, Nantong City, Jiangsu Province

Patentee after: Zhengjin Decoration Group Co.,Ltd.

Country or region after: China

Address before: 226000 room 104, building 19, chengjiayuan, Qinghe Road, high tech Zone, Nantong City, Jiangsu Province

Patentee before: Jiangsu Zhengjin Architectural Decoration Engineering Co.,Ltd.

Country or region before: China