CN117649633A - Pavement pothole detection method for highway inspection - Google Patents

Publication number: CN117649633A
Application number: CN202410121388.4A
Authority: CN (China)
Prior art keywords: feature, module, convolution, analysis model, pavement
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN117649633B
Inventors: 姜明华, 范帅宇, 余锋, 刘莉, 周昌龙, 宋坤芳
Assignee (current and original): Wuhan Textile University
Application filed by Wuhan Textile University
Priority to CN202410121388.4A
Publication of CN117649633A; application granted; publication of CN117649633B
Legal status: Active

Classification: Image Analysis (AREA)

Abstract

The invention discloses a pavement pothole detection method for highway inspection, comprising the following steps. S1: design a pavement image analysis model suitable for analyzing highway pavement images, the model comprising a global perception module, a feature enhancement module, a feature extraction module, an attention enhancement module, a feature supplementing module and a result module. S2: train the designed pavement image analysis model to obtain a trained pavement image analysis model. S3: analyze pavement pictures in real time with the trained model; after a pavement pothole is detected, generate a bounding box of the pothole according to the position information predicted by the model and display it on the original image. By designing a pavement image analysis model suited to analyzing expressway pavement images, the invention improves the efficiency of pavement pothole detection, saves manual statistics cost, and provides accurate information and reliable data support for expressway maintenance and planning.

Description

Pavement pothole detection method for highway inspection
Technical Field
The invention relates to the field of target detection, in particular to a pavement pothole detection method for highway inspection.
Background
Road potholes are among the main causes of vehicle jolting, unstable driving and vehicle damage. Pavement pothole detection analyzes the road surface area to identify and locate uneven regions on the road; its aim is to provide data for road maintenance and management, so that maintenance personnel can repair the road in time, improving road smoothness and driving safety and reducing the probability of traffic accidents.
An expressway, as a high-grade highway, supports higher motor-vehicle traffic capacity and speed. To ensure vehicle stability and traffic efficiency during high-speed driving, pavement damage such as expressway potholes must be found and repaired in time. Existing pothole detection methods generally fall into two types: traditional detection methods and mechanical detection methods.
Traditional detection relies on manual inspection; inspectors identify potholes based on personal experience, so detection quality is unstable and the process is time-consuming. Detection using vehicle-mounted imaging is a common mechanical detection technique: a camera mounted on the front or chassis of the vehicle continuously photographs the road surface, and technicians analyze the images to judge the pothole condition. Compared with the traditional approach this saves inspection time, but faced with the large number of photographs taken by the device, technicians still spend considerable time and effort processing the images.
Object detection in deep learning learns to identify targets from large amounts of data by building a neural network model. A trained object detection model can process the task automatically, with high detection speed and low cost, is not limited by time, environment or region, and avoids the problems of manual detection. Therefore, aiming at the shortcomings of existing pothole detection, a deep-learning-based pavement pothole detection method is proposed.
Chinese patent publication CN116311173B discloses a multi-sensor-fused pothole detection method for unmanned vehicles: a sonar sensor on the vehicle scans the road ahead, a camera shoots when a pothole is detected, the captured image is preprocessed according to a light sensor, and the distance from the pothole edge to the road edges on both sides is calculated from the image information. However, the different sensors have different environmental requirements, and in actual use not all of them can be guaranteed to be in an optimal working state, so the detection effect is easily affected by equipment condition. Moreover, the method has no self-learning capability, performs unstably in widely differing external environments, and can only be optimized by manually adjusting sensor parameters, which no longer meets current demands.
Therefore, there is a need to design a pavement pothole detection method for highway inspection that solves the problems in the prior art.
Disclosure of Invention
Aiming at the above defects or improvement demands of the prior art, the invention provides a pavement pothole detection method for highway inspection; by designing a pavement image analysis model suited to analyzing expressway pavement images, it improves pothole detection efficiency, saves manual statistics cost, and provides accurate information and reliable data support for highway maintenance and planning.
In order to achieve the above object, according to one aspect of the present invention, there is provided a pavement pothole detection method for highway inspection, comprising the following steps:
S1: designing a pavement image analysis model suitable for analyzing highway pavement images, wherein the model comprises a global perception module, a feature enhancement module, a feature extraction module, an attention enhancement module, a feature supplementing module and a result module; the steps of the pavement image analysis model specifically comprise:
S11: calculating the global spatial information of the input image through the global perception module to obtain the global perception module features;
S12: processing the input image with the feature enhancement module to strengthen the model's feature extraction weight on the pothole portion of the image, obtaining the feature enhancement module features;
S13: sending the feature enhancement module features into the feature extraction module to obtain the feature extraction module features;
S14: sending the feature extraction module features into the attention enhancement module to obtain the attention weight;
S15: multiplying the attention weight with the feature extraction module features and sending the product into the feature supplementing module to obtain the feature supplementing module features;
S16: fusing the global perception module features and the feature supplementing module features to obtain fusion features, and sending the fusion features into the result module to obtain and output the confidence, position information and category of the predicted target;
S2: training the designed pavement image analysis model to obtain a trained pavement image analysis model;
S3: analyzing pavement pictures in real time with the trained pavement image analysis model; after a pavement pothole is detected, generating a bounding box of the pothole according to the position information predicted by the model and displaying it on the original image.
As an embodiment of the present application, the steps of the global perception module specifically include:
S111: the input image first passes through a 3×3 grouped convolution layer, wherein the number of convolution kernels equals the number of channels of the input image and each kernel performs the convolution operation on the feature map of one channel, outputting a feature map whose depth equals the number of input channels;
S112: the feature map whose depth equals the number of input channels is sent into a 1×1 convolution layer, wherein N convolution kernels are used for feature extraction, followed by a Mish activation function, outputting a feature map of depth N;
S113: the feature map of depth N is sent into a pooling layer, wherein global average pooling performs global feature extraction on the input features, outputting a feature map of depth N and size 1×1;
S114: the feature map of depth N and size 1×1 is up-sampled to restore it to its size at the input of the pooling layer, obtaining and outputting the global perception module features.
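The four steps above can be sketched as a small PyTorch module. This is a minimal illustration, not the patented implementation: the padding, bias and nearest-neighbour upsampling mode are assumptions, and the name `GlobalPerception` is chosen here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalPerception(nn.Module):
    """Sketch of S111-S114: depthwise 3x3 conv, 1x1 conv with N kernels plus
    Mish, global average pooling, then upsampling back to the pre-pooling size."""
    def __init__(self, in_channels: int, n: int):
        super().__init__()
        # S111: grouped 3x3 conv, one kernel per input channel (depthwise)
        self.dw = nn.Conv2d(in_channels, in_channels, 3, padding=1, groups=in_channels)
        # S112: 1x1 conv with N kernels, Mish activation
        self.pw = nn.Conv2d(in_channels, n, 1)
        self.act = nn.Mish()
        # S113: global average pooling -> N x 1 x 1
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.dw(x)                      # depth = number of input channels
        x = self.act(self.pw(x))            # depth = N
        h, w = x.shape[-2:]
        x = self.gap(x)                     # depth N, size 1x1
        # S114: upsample back to the size at the pooling-layer input
        return F.interpolate(x, size=(h, w), mode="nearest")
```

Because the output is a pooled map broadcast back to full resolution, every spatial position carries the same global summary, which is what makes it suitable for fusion with deep local features.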
As an embodiment of the present application, the step of the feature enhancement module specifically includes:
S121: the input image first undergoes edge extraction through an edge detection layer using an edge detection algorithm to obtain a gray-scale edge map; the gray-scale edge map is used as a mask of the input image, and the input-image pixels at positions where the mask value is 1 are copied onto the gray-scale edge map to obtain a color edge map of the input image;
S122: the color edge map and the input image are subjected to weighted fusion to obtain a feature-enhanced image; the weighted fusion is calculated as:

F = α·I + β·E

wherein F is the weighted-fusion picture, I is the input picture, E is the color edge map, and the fusion factors α and β are 0.68 and 0.32 respectively;
S123: the feature-enhanced image is passed through a 3×3 convolution layer for feature extraction, then through a batch normalization layer, and finally through a SeLU activation function, obtaining and outputting the feature enhancement module features.
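A minimal NumPy sketch of the edge-based enhancement in S121-S122. The patent's detailed description uses the OpenCV Canny detector; to keep this sketch dependency-free, a simple gradient-magnitude threshold stands in for Canny, and the function name `edge_enhance` and the 0.1 threshold are assumptions. Only the fusion factors 0.68 and 0.32 come from the source.

```python
import numpy as np

def edge_enhance(img, alpha=0.68, beta=0.32, thresh=0.1):
    """Sketch of S121-S122 for an H x W x 3 float image in [0, 1].
    A gradient-magnitude threshold stands in for the Canny detector used in
    the patent; the binary mask selects input pixels to form the colour edge
    map E, which is fused with the input I as F = alpha*I + beta*E."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mask = (np.hypot(gx, gy) > thresh).astype(img.dtype)  # stand-in edge mask
    color_edge = img * mask[..., None]   # copy input pixels where mask == 1
    return alpha * img + beta * color_edge               # weighted fusion
```

Since α + β = 1, the fused image stays in the input value range while edge regions are brightened relative to flat regions.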
As an embodiment of the present application, the step of the feature extraction module specifically includes:
S131: the feature enhancement module features pass through a convolution layer of size 1×1 and stride 1, wherein N convolution kernels extract features and a feature map of depth N is output;
S132: the feature map of depth N passes through three convolution layers of size 3×3 and stride 2, each followed by a batch normalization layer and an activation function, to further extract features;
S133: the features then pass through a convolution layer of size 1×1 and stride 1, obtaining and outputting the feature extraction module features.
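The stack in S131-S133 can be sketched in PyTorch. The activation after batch normalization is unreadable in the source (Mish is assumed here, matching the global perception module), the padding is an assumption, and the final 1×1 convolution restores the input depth as stated in the detailed embodiment.

```python
import torch
import torch.nn as nn

def feature_extractor(in_channels, n):
    """Sketch of S131-S133: 1x1 conv expanding to depth N, three 3x3 stride-2
    convs each followed by batch norm and an activation (Mish assumed), and a
    final 1x1 conv restoring the input depth."""
    layers = [nn.Conv2d(in_channels, n, 1, 1)]                      # S131
    for _ in range(3):                                              # S132
        layers += [nn.Conv2d(n, n, 3, 2, 1), nn.BatchNorm2d(n), nn.Mish()]
    layers.append(nn.Conv2d(n, in_channels, 1, 1))                  # S133
    return nn.Sequential(*layers)
```

The three stride-2 convolutions downsample the spatial resolution by a factor of 8 while the 1×1 layers manage channel depth.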
As an embodiment of the present application, the steps of the attention enhancement module specifically include:
S141: the feature extraction module features pass through a channel-by-channel convolution layer, which convolves them separately per channel;
S142: the convolved feature extraction module features are duplicated into two copies and sent into a first branch and a second branch respectively; in the first branch, the convolved features pass sequentially through a group normalization layer and a global average pooling layer to obtain channel-concentrated features, and then through a Softmax layer to obtain a primary weight; the primary weight is multiplied element by element with the Mish-activated feature extraction module features in the second branch to obtain the preliminary reinforcement features;
S143: the preliminary reinforcement features undergo point-by-point convolution through a point-by-point convolution layer, feature information is extracted through a global maximum pooling layer, and finally the attention weight is obtained through a Sigmoid layer and output.
As an embodiment of the present application, the specific formula for generating the preliminary reinforcement feature is:

F_p = Softmax(GAP(GN(DWConv(X)))) ⊙ Mish(DWConv(X))

wherein F_p denotes the preliminary reinforcement feature, X denotes the feature extraction module feature, Softmax denotes the Softmax function, GAP denotes global average pooling, GN denotes group normalization, DWConv denotes the channel-by-channel convolution, Mish denotes the Mish activation function, and ⊙ denotes element-by-element multiplication;
the specific formula for generating the attention weight is:

W = Sigmoid(GMP(PWConv(F_p)))

wherein W denotes the attention weight, F_p denotes the preliminary reinforcement feature, Sigmoid denotes the Sigmoid function, GMP denotes global maximum pooling, and PWConv denotes the point-by-point convolution.
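The two-branch structure of S141-S143 can be sketched as a PyTorch module; the group count for group normalization and the depthwise padding are assumptions not stated in the source.

```python
import torch
import torch.nn as nn

class AttentionEnhancement(nn.Module):
    """Sketch of S141-S143: a channel-wise conv, a Softmax(GAP(GN(.)))
    weighting branch multiplied with a Mish-activated branch, then a
    point-wise conv, global max pooling and Sigmoid to get the weight."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.gn = nn.GroupNorm(groups, channels)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.mish = nn.Mish()
        self.pw = nn.Conv2d(channels, channels, 1)   # point-by-point conv
        self.gmp = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):
        y = self.dw(x)                               # S141: channel-wise conv
        # S142 branch 1: GN -> GAP -> Softmax over channels = primary weight
        w1 = torch.softmax(self.gap(self.gn(y)), dim=1)
        # S142 branch 2: Mish-activated features; element-wise product = F_p
        f_p = w1 * self.mish(y)
        # S143: PWConv -> global max pooling -> Sigmoid = attention weight W
        return torch.sigmoid(self.gmp(self.pw(f_p)))
```

The result is one weight per channel in (0, 1), which S15 multiplies back onto the feature extraction module features.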
As an embodiment of the present application, the steps of the feature supplementing module specifically include:
S151: the feature enhancement module features pass through a 3×3 dilated convolution layer with dilation rate 1 and then a batch normalization layer to obtain feature map Q1;
S152: the feature enhancement module features pass through a 3×3 dilated convolution layer with dilation rate 2 and then a batch normalization layer to obtain feature map Q2;
S153: the feature enhancement module features pass through a 3×3 dilated convolution layer with dilation rate 3 and then a batch normalization layer to obtain feature map Q3;
S154: after the feature maps Q1, Q2 and Q3 are spatially stacked, they are activated through a SeLU activation function and then passed through a convolution layer of size 1×1 and stride 1, obtaining and outputting the feature supplementing module features.
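A PyTorch sketch of the three-branch dilated-convolution structure in S151-S154. It assumes the "stacking" is channel-wise concatenation and that the final 1×1 convolution restores the input depth; both are assumptions, as is padding = dilation (which keeps the spatial size).

```python
import torch
import torch.nn as nn

class FeatureSupplement(nn.Module):
    """Sketch of S151-S154: three parallel 3x3 dilated convolutions with
    dilation rates 1, 2 and 3, each followed by batch normalisation; the
    branches Q1, Q2, Q3 are concatenated, passed through SELU, then fused
    by a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
            )
            for d in (1, 2, 3)
        ])
        self.selu = nn.SELU()
        self.fuse = nn.Conv2d(3 * channels, channels, 1, 1)

    def forward(self, x):
        q = torch.cat([branch(x) for branch in self.branches], dim=1)  # Q1,Q2,Q3
        return self.fuse(self.selu(q))
```

The increasing dilation rates sample the same 3×3 kernel over wider neighbourhoods, which is what enlarges the receptive field without downsampling.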
As an embodiment of the present application, the steps of the result module specifically include:
S161: the fusion features are convolved through a 1×1 convolution layer with 128 convolution kernels to obtain feature map P1;
S162: feature map P1 is convolved through a 3×3 convolution layer with 128 convolution kernels to obtain feature map P2; P2 is duplicated into three copies and sent into a first, a second and a third branch respectively;
S163: in the first branch, feature map P2 passes through a 1×1 convolution layer with 1 convolution kernel to obtain the target confidence and output the confidence prediction result; in the second branch, it passes through a 1×1 convolution layer with 4 convolution kernels to obtain the target position information (center position, width and height) and output the position prediction result; in the third branch, it passes through a 1×1 convolution layer whose kernel count equals the number of categories to obtain the target category information and output the category prediction result.
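The three prediction heads of S161-S163 can be sketched as follows, assuming PyTorch; the padding on the 3×3 layer is an assumption.

```python
import torch
import torch.nn as nn

class ResultModule(nn.Module):
    """Sketch of S161-S163: 1x1 conv to 128 channels (P1), 3x3 conv (P2),
    then three 1x1 heads for confidence (1 channel), position (4 channels:
    centre x, centre y, width, height) and class scores."""
    def __init__(self, in_channels, num_classes=1):
        super().__init__()
        self.p1 = nn.Conv2d(in_channels, 128, 1)        # S161
        self.p2 = nn.Conv2d(128, 128, 3, padding=1)     # S162 (padding assumed)
        self.conf = nn.Conv2d(128, 1, 1)                # S163, first branch
        self.box = nn.Conv2d(128, 4, 1)                 # S163, second branch
        self.cls = nn.Conv2d(128, num_classes, 1)       # S163, third branch

    def forward(self, x):
        p2 = self.p2(self.p1(x))
        return self.conf(p2), self.box(p2), self.cls(p2)
```

Sharing P2 across the three 1×1 heads means confidence, box and class predictions are computed from the same fused representation at every spatial position.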
As an embodiment of the application, the step S2 of training the pavement image analysis model specifically includes:
S21: capturing expressway pavement pictures, building a data set, and dividing it into a training set, a validation set and a test set in a 6:2:2 ratio;
S22: training the pavement image analysis model with the training set, calculating the deviation between the predicted result and the real result through a loss function, and back-propagating to optimize the weights of all network layers; evaluating with the validation set and the test set, and adjusting the parameters of the pavement image analysis model according to the training effect to obtain the optimal pavement image analysis model;
the loss functionIncluding the center point loss function->Classification loss function->Confidence loss function->And bounding box loss function->
The loss functionThe specific formula for controlling the balance of the loss function overall in the training process is as follows:
wherein,、/>、/>、/>is a weight coefficient;
the center point loss functionCalculating the error between a predicted central point and an actual central point of a pavement image analysis model by using the mean square error, wherein the specific formula is as follows:
wherein,for the total number of samples->For predicting center point coordinates +.>For the actual center point coordinates +.>Is the corresponding weight coefficient;
the classification loss functionThe method is used for calculating the classification error of the pavement image analysis model, and the specific formula is as follows:
wherein,is->Category label of individual samples->Is->Predictive probability of individual samples +.>Is a balance factor;
the confidence loss functionThe method is used for calculating the prediction loss of the sample on whether an object exists in the boundary frame, and the specific formula is as follows:
wherein,probability of target being present for the predicted point, +.>Is a penalty factor;
the bounding box loss functionThe specific formula for calculating the prediction error of the bounding box is:
wherein,diagonal length of minimum bounding rectangle for prediction bounding box and real bounding box, +.>Is the intersection ratio of the prediction bounding box and the real bounding box.
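The loss terms whose formulas survive in the source can be sketched in NumPy. The weighted sum and the MSE centre term follow the text directly; for the box term, the source names only the IoU and the diagonal c of the minimum enclosing rectangle, so a DIoU-style centre-distance penalty d²/c² is assumed here. The garbled classification and confidence formulas are omitted rather than guessed at.

```python
import numpy as np

def total_loss(l_center, l_cls, l_conf, l_box, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Overall loss: weighted sum of the four terms with coefficients λ1..λ4."""
    return sum(w * l for w, l in zip(lambdas, (l_center, l_cls, l_conf, l_box)))

def center_loss(pred_centers, true_centers, weight=1.0):
    """Mean-squared error between predicted and actual centre points."""
    p = np.asarray(pred_centers, dtype=float)
    t = np.asarray(true_centers, dtype=float)
    return weight * np.mean(np.sum((p - t) ** 2, axis=-1))

def iou(b1, b2):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter + 1e-9)

def box_loss(pred, true):
    """Box term: 1 - IoU plus an assumed DIoU-style penalty d^2 / c^2, where
    c is the diagonal of the minimum enclosing rectangle of both boxes."""
    ex1, ey1 = min(pred[0], true[0]), min(pred[1], true[1])
    ex2, ey2 = max(pred[2], true[2]), max(pred[3], true[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9   # enclosing diagonal^2
    pc = ((pred[0] + pred[2]) / 2.0, (pred[1] + pred[3]) / 2.0)
    tc = ((true[0] + true[2]) / 2.0, (true[1] + true[3]) / 2.0)
    d2 = (pc[0] - tc[0]) ** 2 + (pc[1] - tc[1]) ** 2  # centre distance^2
    return 1.0 - iou(pred, true) + d2 / c2
```

For a perfect prediction the box term goes to zero, since the IoU reaches 1 and the centre distance vanishes.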
As an embodiment of the present application, the step S3 specifically includes:
s31: inputting the obtained expressway road surface image into a trained road surface image analysis model;
S32: according to the center position (x, y), width w and height h of the pothole in the result module of the pavement image analysis model, determining the coordinate range of the minimal bounding box BBox of the pothole, wherein the coordinate range of BBox on the y axis is [y − h/2 − s, y + h/2 + s] and on the x axis is [x − w/2 − s, x + w/2 + s], wherein s is the minimum pixel of the picture;
s33: the minimal bounding box BBox of the pothole is displayed on the original image.
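S32 reduces to a centre-to-corner conversion with clamping; a plain-Python sketch follows. The garbled range expression in the source is reconstructed here as a symmetric half-extent padded by s, which is an assumption.

```python
def pothole_bbox(cx, cy, w, h, img_w, img_h, s=1):
    """Convert the predicted centre (cx, cy), width w and height h into the
    minimal bounding box BBox, padded by the minimum pixel size s and clamped
    to the image bounds (the symmetric +/- s interpretation is an assumption).
    Returns integer corners (x1, y1, x2, y2)."""
    x1 = max(0, int(cx - w / 2 - s))
    y1 = max(0, int(cy - h / 2 - s))
    x2 = min(img_w, int(cx + w / 2 + s))
    y2 = min(img_h, int(cy + h / 2 + s))
    return x1, y1, x2, y2
```

Displaying BBox on the original image (S33) would then be a rectangle-drawing call in whatever imaging library is used.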
The beneficial effects of the invention are as follows:
(1) The invention designs a pavement image analysis model suited to analyzing expressway pavement images; the trained model better fits pothole detection in the expressway inspection scenario. It analyzes expressway pavement pictures in real time and, once a pothole is detected, generates its bounding box from the position information predicted by the model and displays it on the original image, improving detection efficiency, saving labor cost, and providing timely and effective data for subsequent road repair and management.
(2) The designed global perception module condenses the global information of the input picture into global perception features, which are fused with the features extracted by the model to compensate for the global context information lacking in the deep layers of the network; by attending to global features, the model's robustness is enhanced.
(3) The designed feature enhancement module performs edge extraction on the input image to obtain its color edge map, fuses the color edge map with the input image by weighting, and then extracts features, so that the model focuses on the area around potholes during feature extraction, enhancing the feature extraction effect.
(4) The designed attention enhancement module strengthens the extraction of the features most important to the current task, improving the model's feature extraction capability.
(5) The designed feature supplementing module performs multi-scale feature extraction and fusion using dilated convolutions with different dilation rates, enriching the multi-scale information of the feature maps, enlarging their receptive field, and improving detection precision for small and irregular targets such as pavement potholes.
Drawings
FIG. 1 is a flow chart of the technical scheme of the pavement pothole detection method for highway inspection provided in an embodiment of the invention;
FIG. 2 is an overall model structure diagram of the pavement pothole detection method for highway inspection provided in an embodiment of the invention;
FIG. 3 is a schematic diagram of the global perception module of the method provided in an embodiment of the invention;
FIG. 4 is a schematic diagram of the feature enhancement module of the method provided in an embodiment of the invention;
FIG. 5 is a schematic diagram of the feature extraction module of the method provided in an embodiment of the invention;
FIG. 6 is a schematic diagram of the attention enhancement module of the method provided in an embodiment of the invention;
FIG. 7 is a schematic diagram of the feature supplementing module of the method provided in an embodiment of the invention;
FIG. 8 is a schematic diagram of the result module of the method provided in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front and rear) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indicator changes accordingly.
In the present invention, unless specifically stated and limited otherwise, the terms "connected," "affixed," and the like are to be construed broadly, and for example, "affixed" may be a fixed connection, a removable connection, or an integral body; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present invention, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the meaning of "and/or" as it appears throughout includes three parallel schemes, for example "A and/or B", including the A scheme, or the B scheme, or the scheme where A and B are satisfied simultaneously. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be considered to be absent and not within the scope of protection claimed in the present invention.
Referring to fig. 1 to 8, the present invention provides a pavement depression detection method for highway inspection, the method comprising the steps of:
S1: designing a pavement image analysis model suitable for analyzing highway pavement images, wherein the model comprises a global perception module, a feature enhancement module, a feature extraction module, an attention enhancement module, a feature supplementing module and a result module; the steps of the pavement image analysis model specifically comprise:
S11: calculating the global spatial information of the input image through the global perception module to obtain the global perception module features;
S12: processing the input image with the feature enhancement module to strengthen the model's feature extraction weight on the pothole portion of the image, obtaining the feature enhancement module features;
S13: sending the feature enhancement module features into the feature extraction module to obtain the feature extraction module features;
S14: sending the feature extraction module features into the attention enhancement module to obtain the attention weight;
S15: multiplying the attention weight with the feature extraction module features and sending the product into the feature supplementing module to obtain the feature supplementing module features;
S16: fusing the global perception module features and the feature supplementing module features to obtain fusion features, and sending the fusion features into the result module to obtain and output the confidence, position information and category of the predicted target;
S2: training the designed pavement image analysis model to obtain a trained pavement image analysis model;
S3: analyzing pavement pictures in real time with the trained pavement image analysis model; after a pavement pothole is detected, generating a bounding box of the pothole according to the position information predicted by the model and displaying it on the original image.
Specifically, the invention inputs the pavement image into the pavement image analysis model to generate the boundary frame of the pavement pothole to be displayed on the original image, thereby improving the efficiency of pavement pothole detection, saving the labor cost and providing timely and effective data for the subsequent road repair and management.
As an embodiment of the present application, the global perception module is configured to generate global spatial features of an image, and the steps specifically include:
S111: the input image first passes through a 3×3 grouped convolution layer, wherein the number of convolution kernels equals the number of channels of the input image and each kernel performs the convolution operation on the feature map of one channel, outputting a feature map whose depth equals the number of input channels;
S112: the feature map whose depth equals the number of input channels is sent into a 1×1 convolution layer, wherein N convolution kernels are used for feature extraction, followed by a Mish activation function, outputting a feature map of depth N;
S113: the feature map of depth N is sent into a pooling layer, wherein global average pooling performs global feature extraction on the input features, outputting a feature map of depth N and size 1×1;
S114: the feature map of depth N and size 1×1 is up-sampled to restore it to its size at the input of the pooling layer, obtaining and outputting the global perception module features.
Specifically, the global perception module is used for concentrating global information of an input picture to generate global perception features, the global perception features are fused with features extracted by the road surface image analysis model, global context information lacking in the deep layer of the network of the features is made up, and the road surface image analysis model is used for enhancing the robustness of the road surface image analysis model by focusing on the global features.
As an embodiment of the application, the feature enhancement module is configured to enhance feature extraction weights of a pavement image analysis model on a pothole portion in an image, and the steps specifically include:
S121: the input image first undergoes edge extraction through an edge detection layer using an edge detection algorithm (specifically, the Canny function in the OpenCV library performs edge detection on the pothole picture) to obtain a gray-scale edge map of the image; the gray-scale edge map is used as a mask of the input image, and the input-image pixels at positions where the mask value is 1 are copied onto the gray-scale edge map to obtain a color edge map of the input image;
S122: the color edge map and the input image are weight-fused to obtain a feature-enhanced image; the calculation formula of the weighted fusion is:

F = α·I + β·E

where F is the weight-fused picture, I is the input picture, E is the color edge map, and the fusion factors α and β are 0.68 and 0.32 respectively;
S123: features are extracted from the feature-enhanced image through a 3×3 convolution layer, then a batch normalization layer, and finally a SeLU activation function, obtaining and outputting the feature enhancement module features.
Specifically, the invention performs edge extraction on the input image through the feature enhancement module to obtain its color edge map, weight-fuses the color edge map with the input image, and then extracts features, so that the pavement image analysis model focuses on the area around the potholes during feature extraction, enhancing its feature extraction effect.
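The edge-guided enhancement of S121 and S122 can be sketched as follows. The patent uses OpenCV's Canny detector; to keep the sketch dependency-free, a simple gradient-magnitude threshold (`thresh`, an assumed parameter) stands in for it, while the mask-copy and the 0.68/0.32 weighted fusion follow the steps above:

```python
import numpy as np

def edge_enhance(img, alpha=0.68, beta=0.32, thresh=30):
    # img: (H, W, 3) uint8. A gradient-magnitude threshold stands in for
    # the Canny edge detector used in the patent (assumption).
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mask = np.hypot(gx, gy) > thresh          # binary edge mask (gray edge map)
    color_edge = np.zeros_like(img)
    color_edge[mask] = img[mask]              # copy input pixels at edge positions
    fused = alpha * img + beta * color_edge   # weighted fusion, factors 0.68 / 0.32
    return fused.astype(np.uint8), mask
```

Off-edge pixels keep 68% of their intensity while edge pixels are reinforced to full strength, which is how the fusion biases later convolutions toward pothole contours.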
As an embodiment of the present application, the step of the feature extraction module specifically includes:
S131: the feature enhancement module features pass through a convolution layer with size 1×1 and stride 1, where N convolution kernels perform feature extraction, outputting a feature map with depth N;
S132: the feature map with depth N passes through three convolution layers with size 3×3 and stride 2, each followed by a batch normalization layer and an activation function, further extracting features;
S133: a convolution layer with size 1×1 and stride 1 then restores the channel number of the feature map to the depth it had at the input of the feature extraction module, obtaining and outputting the feature extraction module features.
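A naive NumPy sketch of steps S131 to S133: a 1×1 expansion, three stride-2 3×3 convolutions with a normalization and activation, and a 1×1 depth restore. Random weights stand in for learned kernels, per-tensor standardization stands in for batch normalization, and ReLU stands in for the activation function whose name is elided in the source:

```python
import numpy as np

def conv2d(x, w, stride=1, pad=0):
    # x: (C_in, H, W), w: (C_out, C_in, k, k) -- naive convolution
    c_out, c_in, k, _ = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h = (xp.shape[1] - k) // stride + 1
    wd = (xp.shape[2] - k) // stride + 1
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            patch = xp[:, i*stride:i*stride+k, j*stride:j*stride+k]
            out[:, i, j] = (w * patch).sum(axis=(1, 2, 3))
    return out

def feature_extraction(x, n=8):
    c = x.shape[0]
    w1 = np.random.randn(n, c, 1, 1) * 0.1       # 1x1, N kernels
    x = conv2d(x, w1)
    for _ in range(3):                           # three 3x3, stride-2 convs
        w3 = np.random.randn(n, n, 3, 3) * 0.1
        x = conv2d(x, w3, stride=2, pad=1)
        x = (x - x.mean()) / (x.std() + 1e-5)    # stand-in for batch normalization
        x = np.maximum(x, 0)                     # activation (name elided in the source)
    w2 = np.random.randn(c, n, 1, 1) * 0.1       # 1x1 restores the channel depth
    return conv2d(x, w2)
```

Each stride-2 stage halves the spatial size, so a 16×16 input leaves the module at 2×2 with its original channel count.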
As an embodiment of the application, the attention enhancing module is configured to enhance the capability of the pavement image analysis model to extract important spatial information in the feature map, and the steps specifically include:
S141: the feature extraction module features pass through a channel-by-channel convolution layer whose number of groups equals the number of channels, so that the features are convolved channel by channel;
S142: the convolved feature extraction module features are duplicated into two copies and sent into a first branch and a second branch respectively. In the first branch they pass sequentially through a group normalization layer and a global average pooling layer to obtain channel-condensed features, and then through a Softmax layer to obtain preliminary weights; the preliminary weights are multiplied element by element with the Mish-activated feature extraction module features of the second branch to obtain preliminary reinforcement features;
S143: the preliminary reinforcement features undergo point-by-point convolution through a point-by-point convolution layer to supplement cross-channel information, important feature information is extracted through a global maximum pooling layer, and finally the attention weight is obtained through a Sigmoid layer and output.
As an embodiment of the present application, the specific formula for generating the preliminary reinforcement features is:

Fp = Softmax(GAP(GN(DWConv(F)))) ⊙ Mish(DWConv(F))

where Fp denotes the preliminary reinforcement features, F denotes the feature extraction module features, Softmax denotes the Softmax function, GAP denotes global average pooling, GN denotes group normalization, DWConv denotes the channel-by-channel convolution, and Mish denotes the Mish activation function;

the specific formula for generating the attention weight is:

W = Sigmoid(GMP(PWConv(Fp)))

where W denotes the attention weight, Fp denotes the preliminary reinforcement features, Sigmoid denotes the Sigmoid function, GMP denotes global maximum pooling, and PWConv denotes the point-by-point convolution.
Specifically, the attention enhancement module strengthens the extraction of the features most important to the current task, improving the feature extraction capability of the pavement image analysis model.
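The attention computation of S141 to S143 can be sketched in NumPy as below. Two simplifying assumptions: the 3×3 depthwise convolution is taken as already applied to `feat`, and group normalization is approximated with a single group; `pw` is a hypothetical (C, C) pointwise-convolution weight matrix:

```python
import numpy as np

def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weight(feat, pw):
    # feat: (C, H, W) depthwise-convolved feature map; pw: (C, C) pointwise weights
    gn = (feat - feat.mean()) / (feat.std() + 1e-5)    # group normalization (single group, assumption)
    chan = gn.mean(axis=(1, 2))                        # global average pooling -> (C,)
    prelim_w = softmax(chan)                           # preliminary weights
    prelim = prelim_w[:, None, None] * mish(feat)      # element-wise reinforcement
    cross = np.tensordot(pw, prelim, axes=([1], [0]))  # point-by-point (1x1) convolution
    gmp = cross.max(axis=(1, 2))                       # global max pooling
    return sigmoid(gmp)                                # attention weight in (0, 1)
```

The result is one scalar weight per channel, which is what S15 multiplies back onto the feature extraction module features.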
As an embodiment of the present application, the feature supplementing module is configured to perform multi-scale extraction and fusion on features of the feature extracting module, and improve multi-scale information content of the feature map, where the steps specifically include:
S151: the feature enhancement module features pass through a dilated convolution layer with size 3×3 and dilation rate 1, then through a batch normalization layer, obtaining feature map Q1;
S152: the feature enhancement module features pass through a dilated convolution layer with size 3×3 and dilation rate 2, then through a batch normalization layer, obtaining feature map Q2;
S153: the feature enhancement module features pass through a dilated convolution layer with size 3×3 and dilation rate 3, then through a batch normalization layer, obtaining feature map Q3;
S154: feature maps Q1, Q2 and Q3 are spatially stacked and activated through a SeLU activation function, and a convolution layer with size 1×1 and stride 1 restores the depth of the feature map to the depth it had at the input of the feature supplementing module, obtaining and outputting the feature supplementing module features.
Specifically, the feature supplementing module uses dilated convolutions with different dilation rates to extract and fuse multi-scale features of the feature map, enriching its multi-scale information and enlarging its receptive field, which improves detection precision for small and irregular targets such as pavement potholes.
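The multi-scale extraction of S151 to S154 can be illustrated with a single-channel dilated convolution: the same 3×3 kernel is applied with its taps spaced 1, 2 and 3 pixels apart, and the three maps are stacked and SeLU-activated. The kernel `w` is a stand-in for learned weights, and the final 1×1 depth-restoring convolution is omitted for brevity:

```python
import numpy as np

def dilated_conv(x, w, rate):
    # x: (H, W) single-channel map, w: (3, 3); 'same' padding for a 3x3 kernel
    pad = rate
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # 3x3 taps spaced by the dilation rate, centred on (i, j)
            patch = xp[i:i + 2*rate + 1:rate, j:j + 2*rate + 1:rate]
            out[i, j] = (w * patch).sum()
    return out

def selu(x, alpha=1.6733, scale=1.0507):
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1))

def feature_supplement(x, w):
    # Dilation rates 1, 2 and 3 widen the receptive field at constant resolution
    q1, q2, q3 = (dilated_conv(x, w, r) for r in (1, 2, 3))
    stacked = np.stack([q1, q2, q3])   # stack Q1, Q2, Q3 along a new channel axis
    return selu(stacked)               # the 1x1 depth-restoring conv is omitted
```

All three branches keep the input resolution, so stacking them is well defined; only the effective receptive field differs.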
As an embodiment of the present application, the result module is configured to decouple the feature map to obtain confidence level, location information and category of the predicted target, and the steps specifically include:
S161: the fusion features pass through a convolution layer with size 1×1 and 128 convolution kernels, obtaining feature map P1;
S162: feature map P1 passes through a convolution layer with size 3×3 and 128 convolution kernels, obtaining feature map P2; feature map P2 is duplicated into three copies and sent into a first branch, a second branch and a third branch respectively;
S163: in the first branch, feature map P2 passes through a convolution layer with size 1×1 and 1 convolution kernel to obtain the target confidence and output the confidence prediction result; in the second branch, it passes through a convolution layer with size 1×1 and 4 convolution kernels to obtain the target position information, comprising the center position, width and height, and output the position prediction result; in the third branch, it passes through a convolution layer with size 1×1 and as many convolution kernels as there are categories to obtain the target category information and output the category prediction result.
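The decoupled head of S161 to S163 is a shared stem followed by three 1×1 branches with 1, 4 and `num_classes` kernels. A shape-level NumPy sketch, in which the 3×3 convolution is approximated by another 1×1 and random weights stand in for learned ones:

```python
import numpy as np

def conv1x1(x, n_out):
    # x: (C, H, W) -> (n_out, H, W); random weights stand in for learned kernels
    w = np.random.randn(n_out, x.shape[0]) * 0.01
    return np.tensordot(w, x, axes=([1], [0]))

def result_head(fused, num_classes):
    p1 = conv1x1(fused, 128)        # 1x1, 128 kernels -> P1
    p2 = conv1x1(p1, 128)           # stand-in for the 3x3, 128-kernel conv -> P2
    conf = conv1x1(p2, 1)           # branch 1: objectness confidence
    box = conv1x1(p2, 4)            # branch 2: center x, center y, width, height
    cls = conv1x1(p2, num_classes)  # branch 3: class scores
    return conf, box, cls
```

Keeping the three predictions in separate branches ("decoupling") lets each 1×1 head specialize, instead of forcing one tensor to encode confidence, box and class jointly.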
As an embodiment of the application, the step S2 of training the pavement image analysis model specifically includes:
S21: expressway pictures are taken and made into a data set, which is divided into a training set, a validation set and a test set in the ratio 6:2:2. Specifically, an unmanned aerial vehicle carrying a camera, or a patrol vehicle equipped with one, moves along the expressway and takes expressway pictures during the movement;
S22: the pavement image analysis model is trained with the training set; the deviation between the predicted target result and the real result is calculated through a loss function and back-propagated to optimize the weights of all layers of the network. The validation set and test set are used for evaluation, and the parameters of the pavement image analysis model are adjusted according to the training effect to obtain the optimal pavement image analysis model;
the loss functionIncluding the center point loss function->Classification loss function->Confidence loss function->And bounding box loss function->
The loss functionThe method is used for controlling the overall balance of the loss function in the training process, optimizing the training effect of the pavement image analysis model, and comprises the following specific formulas:
wherein,、/>、/>、/>is a weight coefficient;
the center point loss functionCalculating the error between a predicted central point and an actual central point of a pavement image analysis model by using the mean square error, wherein the specific formula is as follows:
wherein,for the total number of samples->For predicting center point coordinates +.>For the actual center point coordinates +.>Is the corresponding weight coefficient;
the classification loss functionThe method is used for calculating the classification error of the pavement image analysis model, and the specific formula is as follows:
wherein,is->Category label of individual samples->Is->Predictive probability of individual samples +.>As a balance factor, when->Individual sample classification results and realismDifferent and +.>Above 70%, the sample is considered to be an error prone sample, and when the loss of classification of the error prone sample is calculated, the sample is taken to be +.>Is->Calculating the other samples as the reciprocal of the ratio of the number of error prone samples in the total samples to the other samples,/when calculating the other samples>Is->The ratio of the number of error-prone samples in the total samples to other samples;
the confidence loss functionThe method is used for calculating the prediction loss of the sample on whether an object exists in the boundary frame, and the specific formula is as follows:
wherein,probability of target being present for the predicted point, +.>For penalty factors, for samples with predicted outcome different from the real case +.>Is-1.5, the remaining samples->Is-1;
the bounding box loss functionThe specific formula for calculating the prediction error of the bounding box is:
wherein,diagonal length of minimum bounding rectangle for prediction bounding box and real bounding box, +.>The intersection ratio of the prediction boundary frame and the real boundary frame is set;
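The loss functions described above can be sketched as a weighted sum of a mean-squared center-point term and two cross-entropy terms. For brevity this sketch omits the balance factor, the penalty factor and the bounding box term, and the λ weight coefficients default to 1 (all assumptions):

```python
import numpy as np

def center_loss(pred, true, w):
    # mean-squared error between predicted and actual center points, weighted by w
    return np.mean(w * ((pred - true) ** 2).sum(axis=1))

def bce(p, y, eps=1e-7):
    # binary cross-entropy between predicted probabilities p and labels y
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def total_loss(pred_c, true_c, cw, cls_p, cls_y, conf_p, conf_y,
               lambdas=(1.0, 1.0, 1.0)):
    l1, l2, l3 = lambdas                 # per-term weight coefficients
    return (l1 * center_loss(pred_c, true_c, cw)
            + l2 * bce(cls_p, cls_y)     # classification loss (balance factor omitted)
            + l3 * bce(conf_p, conf_y))  # confidence loss (penalty factor omitted)
```

With perfect center predictions and near-perfect probabilities the total approaches zero, which is the behavior the weighted sum is meant to balance during training.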
as an embodiment of the present application, the step S3 specifically includes:
S31: the acquired real-time expressway pavement images are input into the trained pavement image analysis model. Specifically, personnel control an unmanned aerial vehicle to fly along the expressway at a certain height while its onboard camera shoots high-definition pavement images in real time, or a camera mounted on the front or chassis of the inspection vehicle shoots high-definition pavement images as the vehicle drives along the road; the acquired images are saved to a storage device, realizing real-time acquisition of expressway pavement images;
S32: according to the center position (x, y), width w and height h of the pothole output by the result module of the pavement image analysis model, the coordinate range of the minimum bounding box BBox of the pothole is determined: in the pixel coordinate system, the range of BBox on the y axis is [max(s, y − h/2), y + h/2] and its range on the x axis is [max(s, x − w/2), x + w/2], where s is the minimum pixel coordinate of the picture;
S33: the minimum bounding box BBox of the pothole is displayed on the original image according to the output of the result module, completing the positioning and marking of the pothole, and the pictures are saved in a device with a storage function. This improves the efficiency of pavement pothole detection, saves labor cost, and provides timely and effective data for subsequent road repair and management.
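Converting the predicted center, width and height into the minimum bounding box corners (S32) is a small amount of arithmetic. In this sketch, clamping to the minimum pixel coordinate `s` and to the image border is an assumption about how the predicted box is kept inside the picture:

```python
def bbox_from_center(x, y, w, h, img_w, img_h, s=0):
    # Convert predicted center (x, y), width w and height h into the minimum
    # bounding box corners, clamped to the picture; s is the minimum pixel
    # coordinate (assumed 0 here).
    x1 = max(s, int(x - w / 2))
    y1 = max(s, int(y - h / 2))
    x2 = min(img_w - 1, int(x + w / 2))
    y2 = min(img_h - 1, int(y + h / 2))
    return x1, y1, x2, y2
```

The returned corners can be handed directly to a drawing routine (for example OpenCV's `cv2.rectangle`) to display the box on the original image as described in S33.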
According to the invention, a pavement image analysis model suited to analyzing expressway pavement images is designed, and after training it better matches pavement pothole detection in the expressway inspection scenario. The trained model analyzes expressway pavement pictures in real time; once a pavement pothole is detected, a bounding box is generated from the position information predicted by the model and displayed on the original image. This improves the efficiency of pavement pothole detection, saves labor cost, and provides timely and effective data for subsequent road repair and management.
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example solutions in which the above features are replaced by (but not limited to) features with similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method for detecting pavement potholes for highway inspection, characterized by comprising the following steps:
S1: designing a pavement image analysis model suitable for analyzing expressway pavement images, wherein the pavement image analysis model comprises a global perception module, a feature enhancement module, a feature extraction module, an attention enhancement module, a feature supplementing module and a result module, and the steps of the pavement image analysis model specifically comprise:
s11: calculating global space information of an input image through the global perception module to obtain global perception module characteristics;
s12: the input image is processed by the feature enhancement module, the feature extraction weight of the pavement image analysis model on the pothole part in the image is enhanced, and the feature enhancement module feature is obtained;
s13: sending the features of the feature enhancement module into a feature extraction module to obtain features of the feature extraction module;
s14: the features of the feature extraction module are sent to an attention enhancement module to obtain attention weights;
s15: multiplying the attention weight by the feature of the feature extraction module and then sending the multiplied attention weight to the feature supplementing module to obtain features of the feature supplementing module;
s16: fusing the global perception module features and the feature supplementing module features to obtain fusion features, and sending the fusion features to a result module to obtain and output confidence level, position information and category of a predicted target;
s2: training the designed pavement image analysis model to obtain a trained pavement image analysis model;
s3: and analyzing the road surface picture in real time by using the trained road surface image analysis model, and generating a boundary frame of the road surface pit according to the position information predicted by the road surface image analysis model after the road surface pit is detected, and displaying the boundary frame on the original image.
2. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step of the global perception module specifically comprises:
S111: the input image first passes through a 3×3 grouped convolution layer, where the number of convolution kernels equals the number of channels of the input image and each convolution kernel convolves the feature map of one channel, outputting a feature map whose depth is the channel number of the input image;
S112: the feature map whose depth is the channel number of the input image is sent into a 1×1 convolution layer, where N convolution kernels perform feature extraction, followed by a Mish activation function, outputting a feature map with depth N;
S113: the feature map with depth N is sent into a pooling layer, where global average pooling performs global feature extraction on the input features, outputting a feature map with depth N and size 1×1;
S114: the feature map with depth N and size 1×1 is up-sampled, obtaining and outputting the global perception module features.
3. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step of the feature enhancement module specifically comprises:
S121: edge extraction is first performed on the input image through an edge detection layer to obtain a gray edge map; the gray edge map serves as a mask of the input image, and the input-image pixels at positions where the mask equals 1 are copied onto the gray edge map, obtaining a color edge map;
S122: the color edge map and the input image are weight-fused to obtain a feature-enhanced image; the calculation formula of the weighted fusion is:

F = α·I + β·E

where F is the weight-fused picture, I is the input picture, E is the color edge map, and the fusion factors α and β are 0.68 and 0.32 respectively;
S123: features are extracted from the feature-enhanced image through a 3×3 convolution layer, then a batch normalization layer, and finally a SeLU activation function, obtaining and outputting the feature enhancement module features.
4. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step of the feature extraction module specifically comprises:
S131: the feature enhancement module features pass through a convolution layer with size 1×1 and stride 1, where N convolution kernels perform feature extraction, outputting a feature map with depth N;
S132: the feature map with depth N passes through three convolution layers with size 3×3 and stride 2, each followed by a batch normalization layer and an activation function, further extracting features;
S133: the feature extraction module features are then obtained and output through a convolution layer with size 1×1 and stride 1.
5. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step of the attention enhancement module specifically comprises:
S141: the feature extraction module features pass through a channel-by-channel convolution layer and are convolved channel by channel to obtain a feature map;
S142: the convolved feature map is duplicated into two copies and sent into a first branch and a second branch respectively; in the first branch, the convolved feature extraction module features pass sequentially through a group normalization layer and a global average pooling layer to obtain channel-condensed features, and then through a Softmax layer to obtain preliminary weights; the preliminary weights are multiplied element by element with the Mish-activated feature extraction module features of the second branch to obtain preliminary reinforcement features;
S143: the preliminary reinforcement features undergo point-by-point convolution through a point-by-point convolution layer, feature information is extracted through a global maximum pooling layer, and finally the attention weight is obtained through a Sigmoid layer and output.
6. The method for detecting pavement potholes for highway inspection according to claim 5, wherein the specific formula for generating the preliminary reinforcement features is:

Fp = Softmax(GAP(GN(DWConv(F)))) ⊙ Mish(DWConv(F))

where Fp denotes the preliminary reinforcement features, F denotes the feature extraction module features, Softmax denotes the Softmax function, GAP denotes global average pooling, GN denotes group normalization, DWConv denotes the channel-by-channel convolution, and Mish denotes the Mish activation function;

the specific formula for generating the attention weight is:

W = Sigmoid(GMP(PWConv(Fp)))

where W denotes the attention weight, Fp denotes the preliminary reinforcement features, Sigmoid denotes the Sigmoid function, GMP denotes global maximum pooling, and PWConv denotes the point-by-point convolution.
7. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step of the feature supplementing module specifically comprises:
S151: the feature enhancement module features pass through a dilated convolution layer with size 3×3 and dilation rate 1, then through a batch normalization layer, obtaining feature map Q1;
S152: the feature enhancement module features pass through a dilated convolution layer with size 3×3 and dilation rate 2, then through a batch normalization layer, obtaining feature map Q2;
S153: the feature enhancement module features pass through a dilated convolution layer with size 3×3 and dilation rate 3, then through a batch normalization layer, obtaining feature map Q3;
S154: feature maps Q1, Q2 and Q3 are spatially stacked and activated through a SeLU activation function, and the feature supplementing module features are then obtained and output through a convolution layer with size 1×1 and stride 1.
8. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step of the result module specifically comprises:
S161: the fusion features pass through a convolution layer with size 1×1 and 128 convolution kernels, obtaining feature map P1;
S162: feature map P1 passes through a convolution layer with size 3×3 and 128 convolution kernels, obtaining feature map P2; feature map P2 is duplicated into three copies and sent into a first branch, a second branch and a third branch respectively;
S163: in the first branch, feature map P2 passes through a convolution layer with size 1×1 and 1 convolution kernel to obtain the target confidence and output the confidence prediction result; in the second branch, it passes through a convolution layer with size 1×1 and 4 convolution kernels to obtain the target position information, comprising the center position, width and height, and output the position prediction result; in the third branch, it passes through a convolution layer with size 1×1 and as many convolution kernels as there are categories to obtain the target category information and output the category prediction result.
9. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step S2 of training the pavement image analysis model specifically comprises:
S21: expressway pictures are taken and made into a data set, which is divided into a training set, a validation set and a test set in the ratio 6:2:2;
S22: the pavement image analysis model is trained with the training set, and the deviation between the predicted target result and the real result is calculated through a loss function; the validation set and test set are used for evaluation, and the parameters of the pavement image analysis model are adjusted according to the training effect to obtain the optimal pavement image analysis model;
the loss function L includes a center point loss function Lc, a classification loss function Lcls, a confidence loss function Lconf and a bounding box loss function Lbox;

the loss function L controls the overall balance of the losses during training; the specific formula is:

L = λ1·Lc + λ2·Lcls + λ3·Lconf + λ4·Lbox

where λ1, λ2, λ3 and λ4 are weight coefficients;
the center point loss function Lc uses the mean square error to calculate the error between the predicted and actual center points of the pavement image analysis model; the specific formula is:

Lc = (1/M) Σ_i w_i·‖p̂_i − p_i‖²

where M is the total number of samples, p̂_i is the predicted center point coordinate, p_i is the actual center point coordinate, and w_i is the corresponding weight coefficient;
the classification loss function Lcls calculates the classification error of the pavement image analysis model; the specific formula is:

Lcls = −(1/M) Σ_i α_i·[y_i·log(p_i) + (1−y_i)·log(1−p_i)]

where y_i is the category label of the i-th sample, p_i is the predicted probability of the i-th sample, and α_i is a balance factor;
the confidence loss function Lconf calculates the prediction loss on whether an object exists in the bounding box; the specific formula is:

Lconf = (1/M) Σ_i γ_i·[o_i·log(ô_i) + (1−o_i)·log(1−ô_i)]

where ô_i is the predicted probability that a target exists at the point, o_i indicates whether a target actually exists, and γ_i is a penalty factor;
the bounding box loss function Lbox calculates the prediction error of the bounding box, where c is the diagonal length of the minimum enclosing rectangle of the predicted and real bounding boxes and IoU is the intersection-over-union ratio of the predicted and real bounding boxes.
10. The method for detecting pavement potholes for highway inspection according to claim 1, wherein the step S3 specifically comprises:
S31: the acquired expressway pavement images are input into the trained pavement image analysis model;
S32: according to the center position (x, y), width w and height h of the pothole output by the result module of the pavement image analysis model, the coordinate range of the minimum bounding box BBox of the pothole is determined: the range of BBox on the y axis is [max(s, y − h/2), y + h/2] and its range on the x axis is [max(s, x − w/2), x + w/2], where s is the minimum pixel coordinate of the picture;
S33: the minimum bounding box BBox of the pothole is displayed on the original image.
CN202410121388.4A 2024-01-30 2024-01-30 Pavement pothole detection method for highway inspection Active CN117649633B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410121388.4A CN117649633B (en) 2024-01-30 2024-01-30 Pavement pothole detection method for highway inspection

Publications (2)

Publication Number Publication Date
CN117649633A true CN117649633A (en) 2024-03-05
CN117649633B CN117649633B (en) 2024-04-26

Family

ID=90045440


Country Status (1)

Country Link
CN (1) CN117649633B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991011783A1 (en) * 1990-01-23 1991-08-08 Massachusetts Institute Of Technology Recognition of patterns in images
US20170024645A1 (en) * 2015-06-01 2017-01-26 Salesforce.Com, Inc. Dynamic Memory Network
CN112733749A (en) * 2021-01-14 2021-04-30 青岛科技大学 Real-time pedestrian detection method integrating attention mechanism
CN113256601A (en) * 2021-06-10 2021-08-13 北方民族大学 Pavement disease detection method and system
US20210319561A1 (en) * 2020-11-02 2021-10-14 BeSTDR Infrastructure Hospital(Pingyu) Image segmentation method and system for pavement disease based on deep learning
CN113808103A (en) * 2021-09-16 2021-12-17 广州大学 Automatic road surface depression detection method and device based on image processing and storage medium
CN113902729A (en) * 2021-10-26 2022-01-07 重庆邮电大学 Road surface pothole detection method based on YOLO v5 model
US20220203996A1 (en) * 2020-12-31 2022-06-30 Cipia Vision Ltd. Systems and methods to limit operating a mobile phone while driving
CN114898403A (en) * 2022-05-16 2022-08-12 北京联合大学 Pedestrian multi-target tracking method based on Attention-JDE network
CN115352454A (en) * 2022-09-29 2022-11-18 哈尔滨工程大学 Interactive auxiliary safe driving system
CN116824461A (en) * 2023-08-30 2023-09-29 山东建筑大学 Question understanding guiding video question answering method and system
CN116823852A (en) * 2023-06-09 2023-09-29 苏州大学 Strip-shaped skin scar image segmentation method and system based on convolutional neural network
US20230317066A1 (en) * 2022-03-09 2023-10-05 Amazon Technologies, Inc. Shared encoder for natural language understanding processing
US20230331235A1 (en) * 2022-04-18 2023-10-19 Qualcomm Incorporated Systems and methods of collaborative enhanced sensing
CN117095368A (en) * 2023-09-04 2023-11-21 中科领航智能科技(苏州)有限公司 Traffic small target detection method based on YOLOV5 fusion multi-target feature enhanced network and attention mechanism
CN117095702A (en) * 2023-07-24 2023-11-21 南京邮电大学 Multi-mode emotion recognition method based on gating multi-level feature coding network
US20230394823A1 (en) * 2022-06-03 2023-12-07 Nvidia Corporation Techniques to perform trajectory predictions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. D. ISAH et al.: "Development of asphalt paved road pothole detection system using modified colour space approach", The Journal of Computer Science and its Applications, 31 December 2018 (2018-12-31), pages 1-14 *
Zhou Yefan: "Research on deep-learning-based gesture recognition technology in driving scenarios", China Master's Theses Full-text Database, Information Science and Technology series, no. 03, 15 March 2022 (2022-03-15), pages 138-1773 *

Also Published As

Publication number Publication date
CN117649633B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN110070008B (en) Bridge disease identification method adopting unmanned aerial vehicle image
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN114663346A (en) Strip steel surface defect detection method based on improved YOLOv5 network
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN114743119B (en) High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle
CN112330593A (en) Building surface crack detection method based on deep learning network
CN109961013A (en) Recognition methods, device, equipment and the computer readable storage medium of lane line
CN112598066A (en) Lightweight road pavement detection method and system based on machine vision
CN113962960A (en) Pavement disease detection method based on deep learning
CN113449632B (en) Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN114926984B (en) Real-time traffic conflict collection and road safety evaluation method
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN116109986A (en) Vehicle track extraction method based on laser radar and video technology complementation
CN116824399A (en) Pavement crack identification method based on improved YOLOv5 neural network
CN115171045A (en) YOLO-based power grid operation field violation identification method and terminal
CN113312987B (en) Recognition method based on unmanned aerial vehicle road surface crack image
CN117215316B (en) Method and system for driving environment perception based on cooperative control and deep learning
CN117369479B (en) Unmanned aerial vehicle obstacle early warning method and system based on oblique photogrammetry technology
CN112699748B (en) Human-vehicle distance estimation method based on YOLO and RGB image
CN116901089B (en) Multi-angle vision distance robot control method and system
CN117649633B (en) Pavement pothole detection method for highway inspection
CN115330726B (en) Quick evaluation system for quality of reinforcement protection layer and quality of wall body
CN116363426A (en) Automatic detection method for health state of highway antiglare shield
CN113239962A (en) Traffic participant identification method based on single fixed camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant