CN112598066A - Lightweight road pavement detection method and system based on machine vision - Google Patents

Lightweight road pavement detection method and system based on machine vision

Info

Publication number
CN112598066A
CN112598066A
Authority
CN
China
Prior art keywords
road surface
image
road
frame
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011566151.5A
Other languages
Chinese (zh)
Inventor
胡增
钟生
彭鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Applied Technology Co Ltd
Original Assignee
China Applied Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Applied Technology Co Ltd filed Critical China Applied Technology Co Ltd
Priority to CN202011566151.5A priority Critical patent/CN112598066A/en
Publication of CN112598066A publication Critical patent/CN112598066A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30132 Masonry; Concrete

Abstract

The invention discloses a lightweight road surface detection method and system based on machine vision. A road surface image to be identified, containing a road surface damage state, is acquired; a target frame is marked at the road surface damage position in the road surface image to obtain the road surface damage target area in the image; the road surface image is input into a trained deep learning model comprising an SVM classifier and a regressor; the local image of the road surface damage is processed by the SVM classifier to obtain the damage type of the road surface; the prediction frame is corrected by the regressor, and boundary regression processing is performed on the anchor frame according to the target area to obtain the road surface damage position. This lightweight road surface detection removes the burden of manual identification, improves identification efficiency, and adapts to the detection of different roads.

Description

Lightweight road pavement detection method and system based on machine vision
Technical Field
The invention relates to the technical field of road pavement detection, in particular to a lightweight road pavement detection method and system based on machine vision.
Background
With the rapid development of cities, more and more automobiles bring increasing traffic volume, putting ever greater pressure on urban roads. Because of repeated excavation caused by long-term maintenance and the installation of additional municipal pipelines, parts of the roads are damaged to varying degrees, posing potential safety hazards to the normal running of vehicles. The existing road detection method is to photograph the road surface with a road detection vehicle and then manually analyze the photos to extract road surface damage data.
However, the above method has the following problems:
1. the photos are analyzed manually, and cracks wider than 1 mm in the pavement photos must be recognized by the naked eye, so the labor intensity is high and the recognition cycle is long;
2. manual identification of a large number of road photos easily leads to judgment errors, so that a correct detection result cannot be obtained, which hinders subsequent work.
Disclosure of Invention
Based on the technical problems in the background art, the invention provides a lightweight road pavement detection method and system based on machine vision, which avoid the trouble of manual identification, improve identification efficiency and can adapt to the detection of different roads.
The invention provides a machine vision-based lightweight road pavement detection method, which comprises the following steps:
acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
marking a target frame at a road surface damage position in the road surface image to obtain a road surface damage target area in the road surface image;
inputting a road pavement image into a trained deep learning model, wherein the deep learning model comprises an SVM classifier and a regressor;
carrying out image processing on the local image of the road surface damage according to the SVM classifier to obtain the damage type of the road surface;
and correcting the prediction frame according to the regressor, and performing boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position.
Further, after a target frame is marked at a road surface damage position in the road surface image to obtain a target area with a damaged road surface in the road surface image, the road surface image is preprocessed, wherein the preprocessing specifically comprises the following steps:
carrying out data enhancement processing on the road pavement image to obtain an enhanced road pavement image;
setting an initial anchor frame for the enhanced road pavement image data, calculating an anchor frame value of a pavement damage position, and taking an anchor frame position corresponding to the anchor frame value as a prediction frame;
and carrying out self-adaptive scaling on the enhanced road surface image to obtain the road surface image with the same standard.
Further, the method for inputting the road pavement image into the trained deep learning model comprises the following steps:
slicing the road pavement image to obtain a feature vector diagram;
performing convolution processing on the feature vector diagram for multiple times, and extracting a feature value of the road pavement image;
and sending the characteristic value of the road pavement image into an SVM classifier so as to output the damage type of the road pavement.
Further, in correcting the anchor frame according to the regressor and performing boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position, performing boundary regression processing on the anchor frame specifically includes:
mapping the road pavement image by using translation transformation and scale transformation to obtain a predicted value corresponding to the predicted frame;
obtaining a loss function according to the principle of minimum difference between the predicted value and the real value corresponding to the target frame;
optimizing the loss function according to the function optimization target to obtain a corrected prediction frame;
and calculating loss CIOU according to the prediction frame and the target frame to obtain the loss amount of the prediction frame deviating from the target frame, and finally obtaining the road surface damage position.
Furthermore, the road pavement image is mapped by using translation transformation and scale transformation to obtain predicted values d_*(P), where * stands for x, y, w, h, corresponding to the prediction frame, calculated by the following formula:
d_*(P) = w_*^T φ_5(P)
wherein the window is represented by the four-dimensional vector (x, y, w, h): x, y denote the coordinates of the center point of the window and w, h denote the width and height of the window; φ_5(P) is the feature vector of the prediction frame P, and w_*^T are the parameters to be learned.
Further, in obtaining the LOSS function according to the principle that the difference between the predicted value and the true value corresponding to the target frame is minimum, the LOSS function is calculated by the following formula:
Loss = Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )²
t_x = (G_x - P_x)/P_w
t_y = (G_y - P_y)/P_h
t_w = log(G_w/P_w)
t_h = log(G_h/P_h)
wherein t_* = (t_x, t_y, t_w, t_h) is the true feature vector of the translation transformation and scale transformation of the prediction frame, ŵ_* represents the learning parameters, t_x and t_y are the translation amounts of the prediction frame, t_w and t_h are the scaling amounts of the scale transformation, G_x, G_y, G_w, G_h denote the center point coordinates and the width and height of the target frame, and P_x, P_y, P_w, P_h denote the center point coordinates and the width and height of the prediction frame.
Further, the loss function is optimized according to the function optimization target to obtain a corrected prediction frame; the prediction frame is optimized by the following formula:
W_* = argmin_{ŵ_*} Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )² + λ‖ŵ_*‖²
wherein Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )² denotes the loss amount and W_* represents the objective optimization function.
Further, the loss CIOU is calculated from the prediction frame and the target frame by the following formula:
L_CIOU = 1 - IOU + ρ²(b, b^gt)/c² + α·υ
α = υ / ((1 - IOU) + υ)
υ = (4/π²) · (arctan(w^gt/h^gt) - arctan(w/h))²
where IOU represents the intersection-over-union ratio, ρ(b, b^gt) represents the Euclidean distance between the center point of the prediction frame and the center point of the target frame, α is a balance parameter, υ is a parameter measuring the consistency of the aspect ratio, c represents the diagonal distance of the minimum closure area containing both the prediction frame and the target frame, and the arctan terms describe the inclination angles of the diagonals of the target frame and the prediction frame.
Further, in acquiring the road pavement image to be identified, the method comprises the following steps:
mounting the vehicle-mounted host and the camera on a moving vehicle body;
and judging whether the GPS positioning device of the detected road section where the road surface image to be identified is located is normal, if so, acquiring the road surface image in the current detected road section to detect the road surface damage state.
A light-weight road detection system based on machine vision comprises an image acquisition module, a target frame marking module, an image processing module, a road surface damage type output module and a road surface damage position output module;
the image acquisition module is used for acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
the target frame marking module is used for marking a road surface damage position in the road surface image by a target frame to obtain a road surface damage target area in the road surface image;
the image processing module is used for inputting road pavement images into a trained deep learning model, and the deep learning model comprises an SVM classifier and a regressor;
the road surface damage type output module is used for carrying out image processing on the local image of the road surface damage according to the SVM classifier to obtain the damage type of the road surface;
and the road surface damage position output module is used for correcting the prediction frame according to the regressor and carrying out boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position.
The lightweight road pavement detection method and system based on machine vision provided by the invention have the following advantages: the trouble of manual identification is avoided, identification efficiency is improved, and the method and system can adapt to the detection of different roads. Meanwhile, because the road damage position and type in the road surface image are detected by a deep neural network, both the location of the damage within the currently detected road section and the specific damage position and damage category are obtained; the identification precision is high, and, combined with coordinate information, the condition of the road can be fully known.
Drawings
FIG. 1 is a schematic structural view of the present invention;
fig. 2 is a diagram showing a positional relationship between the prediction frame and the target frame.
Detailed Description
The present invention is described in detail below with reference to specific embodiments, and in the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather should be construed as broadly as the present invention is capable of modification in various respects, all without departing from the spirit and scope of the present invention.
As shown in fig. 1 to 2, the present invention provides a method for detecting a lightweight road surface based on machine vision, including:
s1: acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
the method comprises the steps that a vehicle-mounted host and a camera are installed on a front frame or a rear frame on a moving vehicle body and are firmly fixed, before road surface detection, whether a GPS positioning device of a road section where a road surface image to be identified is located is normal needs to be judged, if yes, the road surface image in the current detected road section is obtained to detect the road surface damage state, if not, the GPS positioning device on the vehicle body needs to be checked, and under the condition that the GPS positioning device can be effectively used, damage defect detection is carried out on the road surface.
Before road surface detection, the vehicle-mounted host and the camera are also powered on and tested for normal operation. After the check passes, the camera photographs the road surface continuously in a loop, and the captured images are uploaded to a remote analysis system, which outputs the position and type of the damaged road surface.
Therefore, each acquired road surface image carries a positioning position. By detecting the road surface damage position and type in the image, the position within the currently detected road section where the damage occurs is obtained, together with the specific damage location and damage category, which facilitates targeted repair by maintenance personnel.
S2: marking a target frame at a road surface damage position in the road surface image to obtain a road surface damage target area in the road surface image;
after the far-end analysis system acquires the road surface image uploaded by the camera, firstly, a target frame is marked on the damaged position of the road surface, and the marked target frame is used as a true value of boundary regression in the deep learning model so as to perform boundary regression processing on the prediction frame.
S3: inputting a road pavement image into a trained deep learning model, wherein the deep learning model comprises an SVM classifier and a regressor;
the deep learning model adopts an RCNN algorithm, the full name of R-CNN is Region-CNN, and the deep learning is applied to target detection. The R-CNN realizes a target detection technology based on algorithms such as a Convolutional Neural Network (CNN), linear regression, a Support Vector Machine (SVM) and the like.
S4: carrying out image processing on the local image of the road surface damage according to the SVM classifier to obtain the damage type of the road surface;
s5: and correcting the prediction frame according to the regressor, and performing boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position.
Road surface damage is detected through steps S1 to S5, which avoids the trouble of manual identification, improves identification efficiency and can adapt to the detection of different roads. Meanwhile, because detection is based on a deep neural network and matched with coordinate information, the condition of the road can be fully known, which facilitates maintenance.
As an example of how the present invention can be used:
s100: acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
s200: carrying out data enhancement processing on the road pavement image to obtain an enhanced road pavement image;
wherein the data enhancement comprises: flipping, rotating, zooming, cropping and shifting; this provides a larger image data set, which makes the final identification of the road damage state more accurate.
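By way of illustration only (not part of the patent disclosure), the listed enhancement operations could be realized with OpenCV and NumPy roughly as follows; the function names, angles and offsets are assumptions chosen for the sketch.

```python
import cv2
import numpy as np

def augment_road_image(img, angle=5, scale=1.1, shift=(10, 5)):
    """Illustrative data-enhancement sketch: flip, rotate, zoom, crop and shift."""
    h, w = img.shape[:2]
    flipped = cv2.flip(img, 1)                                   # horizontal flip
    m_rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    rotated = cv2.warpAffine(img, m_rot, (w, h))                 # rotate + zoom
    cropped = img[h // 10: h - h // 10, w // 10: w - w // 10]    # central crop
    m_shift = np.float32([[1, 0, shift[0]], [0, 1, shift[1]]])
    shifted = cv2.warpAffine(img, m_shift, (w, h))               # translation (shift)
    return [flipped, rotated, cropped, shifted]
```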
S300: setting an initial anchor frame for the enhanced road pavement image data, calculating an anchor frame value of a pavement damage position, and taking an anchor frame position corresponding to the anchor frame value as a prediction frame;
the anchor box is a box that generates a plurality of bounding boxes of different sizes and aspect ratios (aspect ratios) centered on each pixel, and these bounding boxes are called anchor boxes.
Each grid cell can detect an object, and if one grid cell wants to detect a plurality of targets, an anchor frame needs to be arranged to realize the detection of the plurality of targets. The initial anchor frame is set as the initial prediction frame, so that the subsequent correction and boundary regression operations on the prediction frame through a target detection algorithm of a deep learning model are facilitated, the prediction frame tends to the target frame, and the accuracy of the finally output road surface damage category and damage position is improved.
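As a hedged illustration of how such anchor frames can be generated around a center point, the sizes and aspect ratios below are assumed values, not values specified by the patent.

```python
import numpy as np

def make_anchors(cx, cy, sizes=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate (x1, y1, x2, y2) anchor boxes of several sizes and
    aspect ratios centered on the point (cx, cy)."""
    anchors = []
    for s in sizes:
        for r in ratios:
            w = s * np.sqrt(r)   # width grows with the aspect ratio
            h = s / np.sqrt(r)   # height shrinks accordingly
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)
```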
S400: the enhanced road surface image is subjected to self-adaptive scaling to obtain a road surface image with the same standard;
Because the sizes of the enhanced road surface images may be inconsistent, while the convolutional layers and other components of the deep learning model are configured for images of a fixed size, the enhanced road surface images are adaptively scaled to a uniform size before being input into the deep learning model, which improves the accuracy of the subsequently output damage types and positions.
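A common way to perform such adaptive scaling while keeping the aspect ratio is letterbox padding; the sketch below assumes a 3-channel image, a 640 × 640 network input and a gray padding value, all of which are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def letterbox(img, target=640, pad_value=114):
    """Adaptively scale a 3-channel image to target x target, padding the shorter side."""
    h, w = img.shape[:2]
    r = target / max(h, w)                         # uniform scale factor
    nh, nw = int(round(h * r)), int(round(w * r))
    resized = cv2.resize(img, (nw, nh))
    canvas = np.full((target, target, 3), pad_value, dtype=img.dtype)
    top, left = (target - nh) // 2, (target - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, r, (left, top)                  # scale and offset for mapping boxes back
```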
S500: inputting a road pavement image into a trained deep learning model, firstly carrying out slicing processing on the road pavement image to obtain a feature vector diagram, carrying out convolution processing on the feature vector diagram for multiple times, and extracting a feature value of the road pavement image;
for example, the 520 × 3 image is cut into 260 × 12 feature vector maps, and then a plurality of convolution operations are performed to extract features of the road image.
S600: sending the characteristic value of the road pavement image into an SVM classifier so as to output the damage type of the road pavement;
the basic model of a Support Vector Machine (SVM) is to define a linear classifier with maximum separation in feature space.
S700: and correcting the prediction frame according to the regressor, and performing boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position. As shown in fig. 2, step S700 specifically includes:
s701: mapping the road pavement image by using translation transformation and scale transformation to obtain a predicted value corresponding to the predicted frame;
the window is generally represented by a four-dimensional vector (x, y, w, h), where x, y respectively represent coordinates of a center point of the window, and w, h respectively represent a width and a height of the window. Box P represents the prediction box, box G represents the target box, and our goal is to find a relationship that allows the input prediction box P to be mapped to obtain a regression window that is closer to the target box G
Figure BDA0002861019220000081
The purpose of frame regression is not only: given ((P)x,Py,Pw,Ph) Find a mapping f such that
Figure BDA0002861019220000082
And is
Figure BDA0002861019220000083
The translation transformation is as follows:
Figure BDA0002861019220000084
Figure BDA0002861019220000085
the scale transformation is as follows:
Figure BDA0002861019220000086
Figure BDA0002861019220000087
it can be known that the frame regression learning is the four transformations dx (p), dy (p), dw (p), dh (p).
tx=(Gx-Px)/Pw
ty=(Gy-Py)/Ph
tw=log(Gw/Pw)
th=log(Gh/Ph)
Wherein, txAnd tyFor the amount of translation of the prediction frame, tw,thAmount of scaling for scale conversion, Gx、Gy、Gw、GhCoordinate value of center point and width and height value, P, representing target framex、Py、Pw、PhAnd a central point coordinate value and a width and height value representing the prediction frame.
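A minimal NumPy sketch of computing the regression targets t = (t_x, t_y, t_w, t_h) defined above, assuming both frames are given as center coordinates plus width and height:

```python
import numpy as np

def regression_targets(P, G):
    """Compute (t_x, t_y, t_w, t_h) from prediction frame P and target frame G,
    each given as (center_x, center_y, width, height)."""
    Px, Py, Pw, Ph = P
    Gx, Gy, Gw, Gh = G
    tx = (Gx - Px) / Pw      # normalized translation in x
    ty = (Gy - Py) / Ph      # normalized translation in y
    tw = np.log(Gw / Pw)     # log-space width scaling
    th = np.log(Gh / Ph)     # log-space height scaling
    return np.array([tx, ty, tw, th])
```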
The objective function may be expressed as
d_*(P) = w_*^T φ_5(P)
where φ_5(P) is the feature vector of the prediction frame P, w_*^T are the parameters to be learned (* stands for one of x, y, w, h, i.e. each transformation has its own objective function), and d_*(P) is the obtained predicted value.
S702: obtaining a loss function according to the principle of minimum difference between the predicted value and the real value corresponding to the target frame;
to make the predicted value correspond to the real value t of the target frame*=((tx,ty,tw,th) With the minimum gap, the resulting loss function is:
Figure BDA0002861019220000089
wherein, the real feature vector t of the frame translation transformation and the scale transformation is predicted*=(tx,ty,tw,th),
Figure BDA00028610192200000810
Representing the true learning parameters.
S703: optimizing the loss function according to the function optimization target to obtain a corrected prediction frame;
optimizing the prediction frame by the following formula:
W_* = argmin_{ŵ_*} Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )² + λ‖ŵ_*‖²
wherein Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )² denotes the loss amount and W_* represents the objective optimization function.
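The optimization above amounts to a regularized least-squares fit of the weights against the pooled features; as a hedged sketch under that assumption, one ridge regressor per transformation could be fitted as follows (the feature matrix, targets and regularization strength are placeholders, not values from the patent):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder data: phi5 holds one feature vector per prediction frame,
# targets holds the corresponding (t_x, t_y, t_w, t_h) regression targets.
phi5 = np.random.rand(500, 256)
targets = np.random.rand(500, 4)

# One ridge regressor per transformation d_x, d_y, d_w, d_h.
regressors = [Ridge(alpha=1000.0).fit(phi5, targets[:, k]) for k in range(4)]

# Predicted d_*(P) for a new prediction-frame feature vector.
d = np.array([reg.predict(phi5[:1])[0] for reg in regressors])
```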
S704: and calculating loss CIOU according to the prediction frame and the target frame to obtain the loss amount of the prediction frame deviating from the target frame, and finally obtaining the road surface damage position.
L_CIOU = 1 - IOU + ρ²(b, b^gt)/c² + α·υ
α = υ / ((1 - IOU) + υ)
υ = (4/π²) · (arctan(w^gt/h^gt) - arctan(w/h))²
where IOU represents the intersection-over-union ratio, ρ(b, b^gt) represents the Euclidean distance between the center point of the prediction frame and the center point of the target frame, α is a balance parameter, υ is a parameter measuring the consistency of the aspect ratio, c represents the diagonal distance of the minimum closure area containing both the prediction frame and the target frame, and the arctan terms describe the inclination angles of the diagonals of the target frame and the prediction frame.
In the IOU (intersection-over-union), the numerator is the area of overlap between the prediction frame and the target frame (for example, the hatched area in fig. 2), and the denominator is the total area jointly occupied by the prediction frame and the target frame (all of the occupied area in fig. 2).
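A hedged sketch of the CIOU computation described above, assuming frames given as (center_x, center_y, width, height); the example coordinates at the end are arbitrary.

```python
import math

def ciou_loss(pred, target):
    """CIOU loss between a prediction frame and a target frame,
    each given as (center_x, center_y, width, height)."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = target
    # Intersection and union areas for the IOU term.
    ix1, iy1 = max(px - pw / 2, gx - gw / 2), max(py - ph / 2, gy - gh / 2)
    ix2, iy2 = min(px + pw / 2, gx + gw / 2), min(py + ph / 2, gy + gh / 2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = pw * ph + gw * gh - inter
    iou = inter / union
    # Squared center distance over squared enclosing-box diagonal.
    cx1, cy1 = min(px - pw / 2, gx - gw / 2), min(py - ph / 2, gy - gh / 2)
    cx2, cy2 = max(px + pw / 2, gx + gw / 2), max(py + ph / 2, gy + gh / 2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    rho2 = (px - gx) ** 2 + (py - gy) ** 2
    # Aspect-ratio consistency term.
    v = 4 / math.pi ** 2 * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2
    alpha = v / ((1 - iou) + v)
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((50, 50, 40, 30), (55, 48, 38, 32)))
```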
A light-weight road detection system based on machine vision comprises an image acquisition module, a target frame marking module, an image processing module, a road surface damage type output module and a road surface damage position output module;
the image acquisition module is used for acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
the target frame marking module is used for marking a road surface damage position in the road surface image by a target frame to obtain a road surface damage target area in the road surface image;
the image processing module is used for inputting road pavement images into a trained deep learning model, and the deep learning model comprises an SVM classifier and a regressor;
the road surface damage type output module is used for carrying out image processing on the local image of the road surface damage according to the SVM classifier to obtain the damage type of the road surface;
and the road surface damage position output module is used for correcting the prediction frame according to the regressor and carrying out boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or modification of the technical solutions and inventive concepts of the present invention made by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A lightweight road pavement detection method based on machine vision is characterized by comprising the following steps:
acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
marking a target frame at a road surface damage position in the road surface image to obtain a road surface damage target area in the road surface image;
inputting a road pavement image into a trained deep learning model, wherein the deep learning model comprises an SVM classifier and a regressor;
carrying out image processing on the local image of the road surface damage according to the SVM classifier to obtain the damage type of the road surface;
and correcting the prediction frame according to the regressor, and performing boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position.
2. The machine-vision-based lightweight road surface detection method according to claim 1, wherein the preprocessing of the road surface image is performed after the target frame marking of the road surface damage position in the road surface image is performed to obtain the target region of the road surface damage in the road surface image, and the specific preprocessing includes:
carrying out data enhancement processing on the road pavement image to obtain an enhanced road pavement image;
setting an initial anchor frame for the enhanced road pavement image data, calculating an anchor frame value of a pavement damage position, and taking an anchor frame position corresponding to the anchor frame value as a prediction frame;
and carrying out self-adaptive scaling on the enhanced road surface image to obtain the road surface image with the same standard.
3. The machine-vision-based lightweight road surface detection method according to claim 1, wherein inputting road surface images into a trained deep learning model includes:
slicing the road pavement image to obtain a feature vector diagram;
performing convolution processing on the feature vector diagram for multiple times, and extracting a feature value of the road pavement image;
and sending the characteristic value of the road pavement image into an SVM classifier so as to output the damage type of the road pavement.
4. The method for detecting a lightweight road surface based on machine vision according to claim 2, wherein in correcting the anchor frame according to the regressor and performing boundary regression processing on the anchor frame according to the target region to obtain the road surface damage location, the boundary regression processing on the anchor frame specifically comprises:
mapping the road pavement image by using translation transformation and scale transformation to obtain a predicted value corresponding to the predicted frame;
obtaining a loss function according to the principle of minimum difference between the predicted value and the real value corresponding to the target frame;
optimizing the loss function according to the function optimization target to obtain a corrected prediction frame;
and calculating loss CIOU according to the prediction frame and the target frame to obtain the loss amount of the prediction frame deviating from the target frame, and finally obtaining the road surface damage position.
5. The method for detecting a lightweight road surface based on machine vision according to claim 4, wherein the road surface image is mapped by translation transformation and scale transformation to obtain predicted values d_*(P), where * stands for x, y, w, h, corresponding to the prediction frame, calculated by the following formula:
d_*(P) = w_*^T φ_5(P)
wherein the window is represented by the four-dimensional vector (x, y, w, h): x, y represent the coordinates of the center point of the window and w, h represent the width and height of the window; φ_5(P) is the feature vector of the prediction frame, and w_*^T are the parameters to be learned.
6. The machine-vision-based lightweight road pavement detection method according to claim 5, wherein in obtaining the LOSS function according to a principle of minimizing the difference between the predicted value and the true value corresponding to the target frame, the LOSS function is calculated by the following formula:
Loss = Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )²
t_x = (G_x - P_x)/P_w
t_y = (G_y - P_y)/P_h
t_w = log(G_w/P_w)
t_h = log(G_h/P_h)
wherein t_* = (t_x, t_y, t_w, t_h) is the true feature vector of the translation transformation and scale transformation of the prediction frame, ŵ_* represents the learning parameters, t_x and t_y are the translation amounts of the prediction frame, t_w and t_h are the scaling amounts of the scale transformation, G_x, G_y, G_w, G_h represent the center point coordinates and the width and height of the target frame, and P_x, P_y, P_w, P_h represent the center point coordinates and the width and height of the prediction frame.
7. The machine-vision-based lightweight road surface detection method according to claim 6, wherein the loss function is optimized according to the function optimization target to obtain a corrected prediction frame; the prediction frame is optimized by the following formula:
W_* = argmin_{ŵ_*} Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )² + λ‖ŵ_*‖²
wherein Σ_i ( t_*^i - ŵ_*^T φ_5(P^i) )² denotes the loss amount and W_* represents the objective optimization function.
8. The machine-vision-based lightweight road surface detection method according to claim 7, wherein the loss CIOU is calculated from the prediction frame and the target frame by the following formula:
L_CIOU = 1 - IOU + ρ²(b, b^gt)/c² + α·υ
α = υ / ((1 - IOU) + υ)
υ = (4/π²) · (arctan(w^gt/h^gt) - arctan(w/h))²
where IOU represents the intersection-over-union ratio, ρ(b, b^gt) represents the Euclidean distance between the center point of the prediction frame and the center point of the target frame, α is a balance parameter, υ is a parameter measuring the consistency of the aspect ratio, c represents the length of the diagonal of the minimum closure area containing both the prediction frame and the target frame, and the arctan terms describe the inclination angles of the diagonals of the target frame and the prediction frame.
9. The machine-vision-based lightweight road surface detection method according to any one of claims 1 to 8, wherein the acquisition of the road surface image to be identified includes:
mounting the vehicle-mounted host and the camera on a moving vehicle body;
and judging whether the GPS positioning device of the detected road section where the road surface image to be identified is located is normal, if so, acquiring the road surface image in the current detected road section to detect the road surface damage state.
10. A light-weight road detection system based on machine vision is characterized by comprising an image acquisition module, a target frame marking module, an image processing module, a road surface damage type output module and a road surface damage position output module;
the image acquisition module is used for acquiring a road pavement image to be identified, wherein the road pavement image comprises a pavement damage state;
the target frame marking module is used for marking a road surface damage position in the road surface image by a target frame to obtain a road surface damage target area in the road surface image;
the image processing module is used for inputting road pavement images into a trained deep learning model, and the deep learning model comprises an SVM classifier and a regressor;
the road surface damage type output module is used for carrying out image processing on the local image of the road surface damage according to the SVM classifier to obtain the damage type of the road surface;
and the road surface damage position output module is used for correcting the prediction frame according to the regressor and carrying out boundary regression processing on the anchor frame according to the target area to obtain the road surface damage position.
CN202011566151.5A 2020-12-25 2020-12-25 Lightweight road pavement detection method and system based on machine vision Pending CN112598066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011566151.5A CN112598066A (en) 2020-12-25 2020-12-25 Lightweight road pavement detection method and system based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011566151.5A CN112598066A (en) 2020-12-25 2020-12-25 Lightweight road pavement detection method and system based on machine vision

Publications (1)

Publication Number Publication Date
CN112598066A true CN112598066A (en) 2021-04-02

Family

ID=75202283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011566151.5A Pending CN112598066A (en) 2020-12-25 2020-12-25 Lightweight road pavement detection method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN112598066A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129341A (en) * 2021-04-20 2021-07-16 广东工业大学 Landing tracking control method and system based on light-weight twin network and unmanned aerial vehicle
CN113537016A (en) * 2021-07-06 2021-10-22 南昌市微轲联信息技术有限公司 Method for automatically detecting and early warning road damage in road patrol
CN115100207A (en) * 2022-08-26 2022-09-23 北京恒新天创科技有限公司 Detection system and detection method based on machine vision
WO2023097931A1 (en) * 2021-12-03 2023-06-08 江苏航天大为科技股份有限公司 Hough transform-based license plate tilt correction method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871102A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN110321815A (en) * 2019-06-18 2019-10-11 中国计量大学 A kind of crack on road recognition methods based on deep learning
CN111723798A (en) * 2020-05-27 2020-09-29 西安交通大学 Multi-instance natural scene text detection method based on relevance hierarchy residual errors

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871102A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN110321815A (en) * 2019-06-18 2019-10-11 中国计量大学 A kind of crack on road recognition methods based on deep learning
CN111723798A (en) * 2020-05-27 2020-09-29 西安交通大学 Multi-instance natural scene text detection method based on relevance hierarchy residual errors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
言有三: "《深度学习之人脸图像处理 核心算法与案例实战》" (Deep Learning for Face Image Processing: Core Algorithms and Practical Cases), 31 July 2020 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129341A (en) * 2021-04-20 2021-07-16 广东工业大学 Landing tracking control method and system based on light-weight twin network and unmanned aerial vehicle
US11634227B2 (en) 2021-04-20 2023-04-25 Guangdong University Of Technology Landing tracking control method and system based on lightweight twin network and unmanned aerial vehicle
CN113537016A (en) * 2021-07-06 2021-10-22 南昌市微轲联信息技术有限公司 Method for automatically detecting and early warning road damage in road patrol
CN113537016B (en) * 2021-07-06 2023-01-06 南昌市微轲联信息技术有限公司 Method for automatically detecting and early warning road damage in road patrol
WO2023097931A1 (en) * 2021-12-03 2023-06-08 江苏航天大为科技股份有限公司 Hough transform-based license plate tilt correction method
CN115100207A (en) * 2022-08-26 2022-09-23 北京恒新天创科技有限公司 Detection system and detection method based on machine vision

Similar Documents

Publication Publication Date Title
US11361428B1 (en) Technology for analyzing images depicting vehicles according to base image models
CN112598066A (en) Lightweight road pavement detection method and system based on machine vision
US10319094B1 (en) Technology for capturing, transmitting, and analyzing images of objects
US11551341B2 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
US10657647B1 (en) Image processing system to detect changes to target objects using base object models
US9886771B1 (en) Heat map of vehicle damage
US11288789B1 (en) Systems and methods for repairing a damaged vehicle using image processing
US10636148B1 (en) Image processing system to detect contours of an object in a target object image
US10706321B1 (en) Image processing system to align a target object in a target object image with an object model
Akagic et al. Pothole detection: An efficient vision based method using rgb color space image segmentation
CN110866430B (en) License plate recognition method and device
CN110490936B (en) Calibration method, device and equipment of vehicle camera and readable storage medium
CN111178236A (en) Parking space detection method based on deep learning
CN110033431B (en) Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge
CN110910350B (en) Nut loosening detection method for wind power tower cylinder
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
CN111259710B (en) Parking space structure detection model training method adopting parking space frame lines and end points
CN113781537B (en) Rail elastic strip fastener defect identification method and device and computer equipment
CN116824347A (en) Road crack detection method based on deep learning
CN113592839B (en) Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN114004858A (en) Method and device for identifying aviation cable surface code based on machine vision
CN116385477A (en) Tower image registration method based on image segmentation
CN114299247A (en) Rapid detection and problem troubleshooting method for road traffic sign lines
CN114332814A (en) Parking frame identification method and device, electronic equipment and storage medium
CN117649633B (en) Pavement pothole detection method for highway inspection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210402