CN116777865A - Underwater crack identification method, system, device and storage medium - Google Patents
- Publication number
- CN116777865A CN116777865A CN202310724328.7A CN202310724328A CN116777865A CN 116777865 A CN116777865 A CN 116777865A CN 202310724328 A CN202310724328 A CN 202310724328A CN 116777865 A CN116777865 A CN 116777865A
- Authority
- CN
- China
- Prior art keywords
- data
- crack
- result
- underwater
- detection model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/05—Underwater scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The application discloses an underwater crack identification method, system, device and storage medium. The method comprises the following steps: collecting image data of an object to be detected under water, the object to be detected being the object to be checked for cracks; splitting the image data to obtain a plurality of pieces of sub-image data; inputting the plurality of pieces of sub-image data into a crack detection model to obtain a first result, the crack detection model being used for predicting whether a crack exists in each sub-image; obtaining an environmental complexity from the plurality of pieces of sub-image data; and calibrating the first result according to the environmental complexity to obtain the identification result of the object to be detected. According to the embodiments of the application, the collected underwater image data are processed and crack identification is performed by the crack detection model, which solves the problem of low efficiency of manual identification; meanwhile, the identification result is calibrated by the environmental complexity, which improves the identification accuracy. The method can be widely applied in the field of computer technology.
Description
Technical Field
The application relates to the technical field of computers, in particular to a method, a system, a device and a storage medium for identifying underwater cracks.
Background
Bridges are an essential component of modern traffic infrastructure and play an important role in connecting different areas. However, due to various factors, bridge structures are susceptible to various defects, of which cracks are the most common. If these defects are not found and handled in time, they pose a serious threat to the safety and stability of the bridge.
Conventional underwater structure inspection methods generally rely on divers or underwater robots, but such methods have many problems: diver-based inspection is expensive and consumes a great deal of time and manpower. Moreover, because of the complexity of the underwater environment, traditional inspection methods cannot cover all structural parts, so some hidden defects may be missed. With the development of robotics, using underwater robots for underwater structure inspection greatly extends the duration and range of underwater image acquisition. However, the video data collected by underwater robots still require engineers to spend a long time on analysis and processing, which results in low efficiency and low accuracy of structure inspection.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the prior art to a certain extent.
Therefore, the application aims to provide an efficient and accurate underwater crack identification method, system and device and a storage medium.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the application comprises the following steps:
in one aspect, the embodiment of the application provides a method for identifying underwater cracks, which comprises the following steps:
the method for identifying the underwater crack comprises the following steps: collecting image data of an object to be measured under water; the object to be detected is used for representing an object to be identified whether a crack exists or not; dividing the image data to obtain a plurality of divided image data; inputting the plurality of map data into a crack detection model to obtain a first result; the crack detection model is used for predicting whether cracks exist in each map; obtaining environmental complexity according to the plurality of sub-graph data; and calibrating the first result according to the environment complexity to obtain the identification result of the object to be detected. According to the embodiment of the application, the acquired underwater image data is processed, and crack identification is performed through the crack detection model, so that the problem of low efficiency of manual identification is solved; meanwhile, the recognition result is calibrated through the environment complexity, and the recognition accuracy is improved.
In addition, the method for identifying underwater cracks according to the embodiment of the application may further have the following additional technical features:
further, in the method for identifying an underwater crack according to the embodiment of the present application, the plurality of map data are input into a crack detection model to obtain a first result, including:
extracting the characteristics of the plurality of sub-graph data to obtain first characteristic data;
carrying out recombination and expansion processing on the first characteristic data to obtain a first boundary frame;
pooling the first boundary box to obtain second characteristic data;
and carrying out convolution processing on the second characteristic data to obtain a first result.
Further, in one embodiment of the application, the method further trains the crack detection model by:
acquiring an image sample;
splitting the image sample to obtain a plurality of sub-image samples;
inputting the plurality of sub-image samples into the crack detection model to obtain a prediction result, and updating the crack detection model according to the degree of match between the prediction result and the ground-truth result to obtain a trained crack detection model.
Further, in one embodiment of the present application, the method further comprises the steps of:
acquiring the image sample from a building crack dataset, the building crack dataset being a publicly available dataset;
or acquiring the image sample through a data processing model, the data processing model being used to increase the number of image samples.
Further, in one embodiment of the present application, the data processing model is obtained by:
acquiring building crack image data and recording it as first data;
applying random-mask coverage processing to the first data to obtain mask data;
performing feature extraction on the mask data through an encoder to obtain a first feature vector;
and performing feature processing on the first feature vector through a decoder to obtain prediction data, and updating the parameters of the data processing model according to the degree of match between the prediction data and the real data to obtain a trained data processing model.
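The random-mask coverage step can be sketched as follows, in the spirit of a masked autoencoder; the patch size and mask ratio are illustrative assumptions, not values from the patent.

```python
import numpy as np

def random_mask(image, patch=16, mask_ratio=0.75, rng=None):
    """Cover a random subset of patches of `image` with zeros (mask value).

    Returns the masked image and a boolean grid marking which patches were hidden.
    Patch size and mask ratio are illustrative, not taken from the patent.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    gh, gw = h // patch, w // patch            # patch grid dimensions
    n = gh * gw
    hidden = np.zeros(n, dtype=bool)
    hidden[rng.choice(n, size=int(n * mask_ratio), replace=False)] = True
    out = image.copy()
    for idx in np.flatnonzero(hidden):
        r, c = divmod(idx, gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out, hidden.reshape(gh, gw)

# The encoder would see only the visible patches; the decoder predicts the
# hidden ones, and the reconstruction loss updates the model parameters.
masked, grid = random_mask(np.ones((64, 64)), patch=16, mask_ratio=0.75)
```

With a 64x64 input and 16-pixel patches this hides 12 of the 16 patches, forcing the decoder to reconstruct most of the image from a small visible fraction.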
Further, in an embodiment of the present application, calibrating the first result according to the environmental complexity comprises:
determining a confidence threshold according to the environmental complexity, and calibrating the first result according to the confidence threshold;
or determining an NMS (non-maximum suppression) threshold according to the environmental complexity, and calibrating the first result according to the NMS threshold.
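The confidence-threshold branch of this calibration can be sketched as follows; the mapping from complexity to threshold (the `base` and `gain` constants) is an illustrative assumption, not a value from the patent, and the NMS-threshold branch would analogously adjust the IoU threshold used during non-maximum suppression.

```python
def conf_threshold(complexity, base=0.25, gain=0.3):
    """Raise the confidence threshold in cluttered scenes to suppress false positives.

    `complexity` is assumed normalised to [0, 1]; `base` and `gain` are
    illustrative constants, not values from the patent.
    """
    return min(0.9, base + gain * complexity)

def calibrate(detections, complexity):
    """Keep only detections whose confidence clears the complexity-adjusted
    threshold. Each detection is a (box, confidence) pair."""
    t = conf_threshold(complexity)
    return [d for d in detections if d[1] >= t]

dets = [((0, 0, 10, 10), 0.9), ((5, 5, 15, 15), 0.3)]
kept = calibrate(dets, complexity=0.5)   # threshold 0.40: the 0.3 detection is dropped
```

The trade-off is deliberate: in a complex scene the higher threshold sacrifices some recall to avoid false positives caused by clutter, sediment or turbidity.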
Further, in an embodiment of the present application, obtaining the environmental complexity from the plurality of pieces of sub-image data and calibrating the first result according to the environmental complexity to obtain the identification result of the object to be detected comprises:
acquiring the environmental complexity of each piece of sub-image data and recording it as a first value;
determining the weight of each piece of sub-image data according to its environmental complexity;
and calibrating the first result according to the weights of the sub-image data to obtain the identification result of the object to be detected.
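A minimal sketch of this weighted calibration, with two loud assumptions: environmental complexity is approximated by the normalised gray-level standard deviation of a sub-image, and each weight is the normalised reciprocal of (1 + complexity). The patent does not specify either formula.

```python
import numpy as np

def environmental_complexity(tile):
    """Proxy for scene clutter: normalised gray-level standard deviation
    of a sub-image (an assumption for illustration)."""
    return float(tile.std() / 255.0)

def tile_weights(complexities):
    """Give low-complexity (clean) sub-images more weight; normalise to sum to 1."""
    raw = [1.0 / (1.0 + c) for c in complexities]
    s = sum(raw)
    return [r / s for r in raw]

def calibrate_scores(scores, weights):
    """Scale each sub-image's detection confidence by its weight, rescaled so the
    most trusted sub-image keeps its score unchanged."""
    m = max(weights)
    return [s * (w / m) for s, w in zip(scores, weights)]
```

Under these assumptions a detection from a cluttered sub-image is down-weighted relative to the same confidence from a clean sub-image, which is one plausible way of realising the calibration described above.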
On the other hand, the embodiment of the application provides an underwater crack identification system, which comprises:
the first module is used for collecting image data of the object to be detected under water, the object to be detected being the object to be checked for cracks;
the second module is used for splitting the image data to obtain a plurality of pieces of sub-image data;
the third module is used for inputting the plurality of pieces of sub-image data into a crack detection model to obtain a first result, the crack detection model being used for predicting whether a crack exists in each sub-image;
and the fourth module is used for obtaining an environmental complexity from the plurality of pieces of sub-image data, and calibrating the first result according to the environmental complexity to obtain the identification result of the object to be detected.
In another aspect, an embodiment of the present application provides an apparatus for identifying an underwater crack, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of identifying underwater fractures described above.
In another aspect, an embodiment of the present application provides a storage medium in which a processor-executable program is stored, which when executed by a processor is configured to implement the above-described underwater crack identification method.
According to the embodiment of the application, the collected underwater image data are processed and crack identification is performed through the crack detection model, which solves the problem of low efficiency of manual identification; meanwhile, the identification result is calibrated through the environmental complexity, which improves the identification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description refers to the accompanying drawings of the embodiments of the present application or of the related prior art. It should be understood that the drawings in the following description are only intended to describe some embodiments of the technical solutions of the present application conveniently and clearly, and that those skilled in the art may obtain other drawings from these drawings without inventive labor.
FIG. 1 is a schematic flow chart of an embodiment of a method for identifying underwater cracks provided by the application;
FIG. 2 is a schematic diagram of an embodiment of a crack detection model provided by the present application;
FIG. 3 is a schematic structural diagram of an embodiment of feature extraction on sub-image data according to the present application;
FIG. 4 is a schematic structural diagram of another embodiment of feature extraction on sub-image data according to the present application;
FIG. 5 is a schematic structural diagram of an embodiment of the reassembly and expansion processing of sub-image data according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the downsampling processing of sub-image data according to the present application;
FIG. 7 is a schematic structural diagram of an embodiment of feature concatenation of sub-image data provided by the present application;
FIG. 8 is a schematic structural diagram of an embodiment of the pooling processing of sub-image data according to the present application;
FIG. 9 is a schematic structural diagram of an embodiment of a data processing model according to the present application;
FIG. 10 is a schematic flow chart of another embodiment of a method for identifying underwater cracks according to the present application;
FIG. 11 is a schematic diagram illustrating an identification effect of an embodiment of the method for identifying underwater cracks according to the present application;
FIG. 12 is a schematic diagram illustrating an embodiment of an underwater crack identification system according to the present application;
fig. 13 is a schematic structural view of an embodiment of an underwater crack identification device provided by the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
Bridges are an essential component of modern traffic infrastructure and play an important role in connecting different areas. However, due to various factors, bridge structures are susceptible to various defects, the most common of which are cracks. If these defects are not found and handled in time, they pose a serious threat to the safety and stability of the bridge.
Conventional underwater structure inspection methods generally rely on divers or underwater robots, but such methods have many problems: diver-based inspection is expensive and consumes a great deal of time and manpower. Moreover, because of the complexity of the underwater environment, traditional inspection methods cannot cover all structural parts, so some hidden defects may be missed. With the development of robotics, using underwater robots for underwater structure inspection greatly extends the duration and range of underwater image acquisition. However, the video data collected by underwater robots still require engineers to spend a long time on analysis and processing. Therefore, the underwater crack identification schemes in the related art have the following disadvantages:
consuming a lot of time and costs: manual labeling requires a significant amount of time and labor costs, especially when the amount of data is large. The annotator needs to watch each video, identify and describe the disease and record it. Subject to subjective influence by personnel: the manual labeling process is often affected by subjective factors of labeling personnel. The background, experience and judgment of the labeling personnel can influence the identification and description of diseases, so that the labeling result is inaccurate or consistent. The training cost of personnel is increased: when new types of diseases need to be marked, marking personnel need to be trained again, and marking cost is increased. Moreover, the effect of manual labeling may be reduced for more complex or detailed diseases. Standard needs to be formulated: before manual labeling, data standards and disease classification rules need to be explicitly defined. If the standard is not clear enough or the classification is not accurate enough, inconsistency of labeling results can be caused, and reliability and usability of data are reduced. Processing large-scale data is difficult: when large-scale detection data needs to be marked, the process of manual marking becomes more difficult and complex. Labeling personnel may be tired, miss or have labeling loopholes, resulting in inaccurate or complete labeling results.
To address these problems, the embodiment of the application designs a neural-network-based crack detection model, trains the model by fine-tuning pre-trained weights, and finally obtains a neural network model that meets the requirements of the task.
It can be understood that application scenarios of the neural network include the inspection and evaluation of bridge underwater structures, the inspection of underwater pipelines and tunnels, and similar fields. In the inspection and evaluation of bridge underwater structures, the neural network can rapidly and accurately detect structural defects and provide support for related decisions; in the inspection of underwater pipelines and tunnels, the neural network can play an important role in helping ensure their safe operation. The neural network therefore has a wide range of application scenarios.
The method and system for identifying underwater cracks according to the embodiments of the present application will be described in detail below with reference to the accompanying drawings, beginning with the method for identifying underwater cracks.
Referring to FIG. 1, an embodiment of the present application provides a method for identifying underwater cracks, which may be applied to a terminal, a server, or software running in a terminal or server. The terminal may be, but is not limited to, a tablet computer, a notebook computer, or a desktop computer. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms. The method for identifying underwater cracks in the embodiment of the application mainly comprises the following steps:
S100: collecting image data of an object to be detected under water; the object to be detected is the object to be checked for cracks;
S200: splitting the image data to obtain a plurality of pieces of sub-image data;
S300: inputting the plurality of pieces of sub-image data into a crack detection model to obtain a first result; the crack detection model is used for predicting whether a crack exists in each sub-image;
S400: obtaining an environmental complexity from the plurality of pieces of sub-image data, and calibrating the first result according to the environmental complexity to obtain the identification result of the object to be detected.
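Step S200 can be sketched as follows; the 320-pixel tile size is an illustrative assumption, not a value from the patent.

```python
import numpy as np

def split_image(frame, tile=320):
    """Split an H x W frame into a list of tile x tile sub-images (step S200).

    The frame is zero-padded on the bottom/right so every tile is full-size.
    The tile size is an illustrative choice, not taken from the patent.
    """
    ph, pw = -frame.shape[0] % tile, -frame.shape[1] % tile   # padding per axis
    pad = [(0, ph), (0, pw)] + [(0, 0)] * (frame.ndim - 2)
    frame = np.pad(frame, pad)
    tiles = []
    for r in range(0, frame.shape[0], tile):
        for c in range(0, frame.shape[1], tile):
            tiles.append(frame[r:r + tile, c:c + tile])
    return tiles

# A 720 x 1280 frame pads to 960 x 1280 and yields a 3 x 4 grid of 320 x 320 tiles.
tiles = split_image(np.zeros((720, 1280)))
```

Each tile then goes through the crack detection model (S300) independently, which is what makes the per-sub-image complexity calibration of S400 possible.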
In some possible implementations, the embodiment of the application processes the underwater image data and performs crack identification through a crack detection model. Specifically, the crack detection model may be any neural network model; illustratively, a neural network model with the YOLOv7 network structure may be selected for crack detection. The detection result is then calibrated by the environmental complexity, thereby improving the identification accuracy.
In summary, the embodiment of the application adopts an advanced neural network structure as the detection model: based on this neural network model, an advanced crack identification and detection network is constructed, giving it more accurate detection capability, faster processing speed and smaller computing-resource requirements, so that it can be embedded in miniaturized hardware and applied in a variety of scenarios. Specifically, the embodiment of the application can achieve:
- Automation of crack identification: automatic detection by the neural network greatly reduces the workload and cost of manual labeling and improves detection efficiency.
- Consistency of crack identification: automatic detection avoids the inconsistency and subjectivity of manual labeling and improves the consistency and reliability of detection results.
- Robustness of crack identification: the robustness and generalization capability of the detection algorithm can be improved through data augmentation, model optimization and similar means, adapting it to different environments and scenes.
- Scalability of crack identification: the neural network can easily be extended to different hardware platforms and devices, and is suitable for various detection tasks and application scenarios.
- Real-time crack identification: automatic detection completes target detection in a short time, realizing real-time detection and meeting practical application requirements.
Optionally, in one embodiment of the present application, inputting the plurality of pieces of sub-image data into the crack detection model to obtain the first result comprises:
performing feature extraction on the plurality of pieces of sub-image data to obtain first feature data;
performing reassembly and expansion processing on the first feature data to obtain a first bounding box;
pooling the first bounding box to obtain second feature data;
and performing convolution processing on the second feature data to obtain the first result.
In some possible implementations, the processing procedure of the crack detection model in the embodiment of the present application is as follows:
S101: input. The input picture of YOLOv7 is divided into a number of uniform grid cells, each representing a region responsible for detecting the objects inside it. At the same time, each grid cell predicts a plurality of bounding boxes for locating the target object.
S102: feature extraction. For each input picture, YOLOv7 performs feature extraction through a convolutional neural network and extracts feature maps relevant to target detection. In this process, YOLOv7 uses several CBM (Convolutional Block Module) blocks, each comprising convolutional layers, BN layers, LeakyReLU activation functions, residual connections and the like, to improve the feature extraction capability and robustness of the network.
S103: prediction box generation. In each grid cell, YOLOv7 generates a plurality of bounding boxes by predicting parameters such as the center coordinates, width, height and confidence of each box, where the confidence indicates whether a target object exists in the bounding box. In this process, YOLOv7 introduces a REP (re-parameterized convolution, RepConv) module to reassemble and expand the feature maps, improving the network's ability to detect small targets.
S104: screening. To improve detection accuracy and speed, YOLOv7 uses an SPPCSPC (Spatial Pyramid Pooling with Cross Stage Partial Connections) module to screen the generated bounding boxes. Specifically, the SPPCSPC module applies spatial pyramid pooling to the feature map within each bounding box to obtain a fixed-length feature vector, and then feeds this feature vector into several convolutional layers for feature fusion and prediction to obtain the final detection result.
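The spatial pyramid pooling step of S104 can be illustrated as follows; the grid levels (1, 2, 4) are a common generic SPP choice, not the exact pooling configuration of YOLOv7's SPPCSPC module.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a single-channel H x W feature map over coarse-to-fine grids,
    producing a fixed-length vector (1 + 4 + 16 = 21 values for levels (1, 2, 4))
    regardless of the input size."""
    h, w = fmap.shape
    out = []
    for g in levels:
        # integer bin edges so any H, W is handled, even if not divisible by g
        rs = np.linspace(0, h, g + 1, dtype=int)
        cs = np.linspace(0, w, g + 1, dtype=int)
        for i in range(g):
            for j in range(g):
                out.append(fmap[rs[i]:rs[i + 1], cs[j]:cs[j + 1]].max())
    return np.array(out)
```

The fixed-length output is exactly what lets the subsequent convolutional layers accept bounding-box regions of varying size.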
Optionally, in one embodiment of the application, the method further trains the crack detection model by:
acquiring an image sample;
splitting the image sample to obtain a plurality of sub-image samples;
inputting the plurality of sub-image samples into the crack detection model to obtain a prediction result, and updating the crack detection model according to the degree of match between the prediction result and the ground-truth result to obtain a trained crack detection model.
In some possible implementations, the detection network of the embodiment of the present application uses the structure of the neural network model as its base structure, and the training and data processing strategies are formulated according to the requirements of the crack detection task. Referring to FIG. 2, the neural network model in the present application mainly comprises two parts: a feature extraction part and a target detection part. Specifically, as shown in FIG. 3, the CBS module consists of a convolutional layer, a BN (batch normalization) layer and a SiLU activation function layer. CBS modules with different parameters are combined in the model. Viewed from left to right, the first CBS module is a 1x1 convolution with a stride of 1; the second is a 3x3 convolution with a stride of 1; and the third is a 3x3 convolution with a stride of 2. The 1x1 convolution is mainly used to change the number of channels; the 3x3 convolution with stride 1 is mainly used for feature extraction; and the 3x3 convolution with stride 2 is mainly used for downsampling. Referring to FIG. 4, the CBM module may also be used for feature extraction; specifically, this module consists of a Conv (convolutional) layer, a BN (batch normalization) layer and a sigmoid activation function layer, where the convolutional layer uses a 1x1 kernel with a stride of 1.
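The spatial behaviour of the three CBS variants follows from the standard convolution output-size formula; the helper below is purely illustrative and assumes conventional "same" padding of k // 2.

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# The three CBS variants described above, applied to a 320-pixel-wide input
# (padding of k // 2 is an assumption, as is conventional):
a = conv_out(320, kernel=1, stride=1, padding=0)  # 1x1, s=1: 320 (channels change, size kept)
b = conv_out(320, kernel=3, stride=1, padding=1)  # 3x3, s=1: 320 (feature extraction)
c = conv_out(320, kernel=3, stride=2, padding=1)  # 3x3, s=2: 160 (downsampling)
```

This confirms the division of labour stated above: only the stride-2 variant shrinks the feature map, while the two stride-1 variants preserve its spatial size.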
As shown in fig. 5, the REP module is used for the recombination and expansion processing that generates the first bounding box. The REP module has two forms: a train (training) structure and a deploy (inference) structure. The training structure has three branches: the top branch is a 3x3 convolution for feature extraction; the middle branch is a 1x1 convolution for smoothing features; and the bottom branch contains only a BN layer, with no convolution operation. The outputs of the three branches are summed. The inference structure consists of a single 3x3 convolution layer with stride 1 followed by a BN layer.
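The reason a single 3x3 convolution can replace the three training branches at inference time is that the branches are algebraically equivalent to one merged kernel, as in RepVGG-style re-parameterization. A NumPy sketch under simplifying assumptions (single channel, BN scale/shift folding omitted):

```python
import numpy as np

def conv2d(x, k):
    """Naive single-channel 3x3 cross-correlation with padding 1
    (the 'convolution' used in deep learning)."""
    xp = np.pad(x, 1)
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = (xp[i:i+3, j:j+3] * k).sum()
    return out

def merge_rep_branches(k3, k1):
    """Fold the 1x1 branch and the identity (BN-only) branch into one 3x3 kernel."""
    merged = k3.copy()
    merged[1, 1] += k1   # a 1x1 conv is a 3x3 kernel nonzero only at the center
    merged[1, 1] += 1.0  # the identity branch is a center tap of 1
    return merged

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
k3 = rng.standard_normal((3, 3))
k1 = rng.standard_normal()

# Training-time: three parallel branches, summed.
train_out = conv2d(x, k3) + conv2d(x, np.array([[0, 0, 0], [0, k1, 0], [0, 0, 0]])) + x
# Deploy-time: one merged 3x3 convolution gives the same output.
deploy_out = conv2d(x, merge_rep_branches(k3, k1))
assert np.allclose(train_out, deploy_out)
```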
As shown in fig. 6, the MP-1 module has two branches, both of which downsample. The first branch passes through a maxpool (max pooling) layer, which performs the downsampling, and then changes the number of channels with a 1x1 convolution. The second branch first changes the number of channels with a 1x1 convolution, and then downsamples with a 3x3 convolution of stride 2. Finally, the results of the two branches are combined to obtain the downsampled output.
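A small NumPy sketch of the shape bookkeeping: both branches halve the spatial size, so their outputs line up for the final combination (tensor sizes are illustrative):

```python
import numpy as np

def maxpool2x2(x):
    """2x2 max pooling with stride 2 on a (C, H, W) array (H, W even)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def conv_out(h, k, s, p):
    """Spatial output size of a convolution."""
    return (h + 2 * p - k) // s + 1

x = np.random.rand(16, 8, 8)
branch1 = maxpool2x2(x)              # maxpool halves 8x8 -> 4x4
assert branch1.shape == (16, 4, 4)
# Branch 2: a 1x1 stride-1 conv keeps 8x8, then a 3x3 stride-2 conv
# (padding 1) also yields 4x4 -- the two branches can be combined.
assert conv_out(8, k=3, s=2, p=1) == 4
```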
As shown in fig. 7, the ELAN module is an efficient network architecture that controls the shortest and longest gradient paths, allowing the network to learn more features and become more robust. ELAN has two branches. The first branch is a 1x1 convolution that changes the number of channels. The second branch is more complex: it first passes through a 1x1 convolution module to change the number of channels, and then performs feature extraction through four 3x3 convolution modules. The ELAN-W module is structurally very similar to the ELAN module, except that it selects a different number of outputs in the second branch: the ELAN module selects three outputs to combine at the end, while the ELAN-W module selects five.
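The final combination step of ELAN/ELAN-W can be sketched as channel-wise concatenation of the selected branch outputs (the published YOLOv7 code concatenates rather than sums the selected outputs; the channel counts here are illustrative assumptions):

```python
import numpy as np

def elan_concat(outputs):
    """Concatenate selected branch outputs along the channel axis,
    as in the final step of ELAN / ELAN-W."""
    return np.concatenate(outputs, axis=0)   # each output is (C, H, W)

h = w = 20
# ELAN: three selected outputs of 32 channels each -> 96 channels.
elan = elan_concat([np.zeros((32, h, w)) for _ in range(3)])
assert elan.shape == (96, 20, 20)
# ELAN-W: five selected outputs -> 160 channels, spatial size unchanged.
elan_w = elan_concat([np.zeros((32, h, w)) for _ in range(5)])
assert elan_w.shape == (160, 20, 20)
```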
As shown in fig. 8, the SPPCSPC module is used for feature screening. Its function is to enlarge the receptive field so that the algorithm adapts to images of different resolutions, which it achieves through max pooling at several scales. In the first branch there are four maxpool kernel sizes, 5, 9, 13, and 1, so the module can handle objects of different sizes: the four receptive fields obtained from max pooling at four different scales allow it to distinguish between large and small targets.
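A minimal NumPy sketch of the four-receptive-field pooling: kernel size 1 is the identity, and the 5/9/13 max pools preserve the spatial size under "same" padding, so the four results can be concatenated along the channel axis:

```python
import numpy as np

def maxpool_same(x, k):
    """k x k max pooling, stride 1, 'same' padding, on a (C, H, W) array."""
    p = k // 2
    c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = xp[:, i:i+k, j:j+k].max(axis=(1, 2))
    return out

def sppcspc_pool(x):
    """Four parallel receptive fields: identity (k=1) plus k=5, 9, 13."""
    return np.concatenate([x] + [maxpool_same(x, k) for k in (5, 9, 13)], axis=0)

x = np.random.rand(4, 16, 16)
y = sppcspc_pool(x)
# Spatial size is preserved; channels grow 4x (one per receptive field).
assert y.shape == (16, 16, 16)
```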
Optionally, in one embodiment of the present application, the method further comprises:
acquiring an image sample from a building crack dataset; the building crack dataset being a publicly available dataset;
or, acquiring an image sample through a data processing model; the data processing model is used to increase the number of image samples.
In some possible embodiments, sample data for model training may be obtained from a related public dataset; the number of samples can also be expanded through the data processing model, thereby improving the training effect of the model.
Optionally, in one embodiment of the application, the data processing model is obtained by:
acquiring building crack image data and recording it as first data;
performing random-mask coverage processing on the first data to obtain mask data;
extracting features of the mask data through an encoder to obtain a first feature vector;
and performing feature processing on the first feature vector through a decoder to obtain prediction data, and updating the parameters of the data processing model according to the degree of matching between the prediction data and the ground-truth data, so as to obtain a trained data processing model.
In some possible embodiments, collecting image samples is difficult, so the crack detection model cannot be trained directly on collected data alone. The embodiment of the application therefore uses the data processing model to obtain image samples. Specifically, referring to the structure of the data processing model shown in fig. 9, the data processing network module provided by the embodiment of the application includes a random mask generator and a data predictor. 1500 ordinary building crack images are randomly divided into a training set of 1000 images and a test set of 500 images, and a dataset of 300 underwater structure images is used for transfer learning of the encoder-decoder structure. The workflow of the network is as follows:
S91: the random mask generator randomly generates a square mask of 2 x 2 pixels and randomly overlays it on the ordinary building crack data.
S92: the encoder extracts the features of the mask data layer by layer through the convolutional layer and encodes the feature vectors.
S93: the decoder predicts the original (unmasked) image data from the feature vector, and the prediction result is compared with the unmasked image.
S94: and calculating a decoding loss value of the decoder according to the comparison condition, and optimizing parameters of the encoder and the decoder.
S95: resulting in an encoder-decoder structure with certain pre-training weights.
S96: the bridge underwater structure data, whose quantity is insufficient, is input into the random mask generator and covered with random masks.
S97: the weights of the encoder-decoder are fine-tuned using the 300-image bridge underwater structure dataset to obtain the data prediction encoder.
S98: the masked bridge underwater structure data is input into the data prediction encoder to predict the complete image.
The data processing network preserves the overall characteristics of the input data while predicting its detail characteristics; by diversifying these detail characteristics it increases the amount of data, ultimately expanding the dataset.
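Steps S91-S94 can be sketched as follows: a randomly placed 2 x 2 mask and a reconstruction loss against the unmasked image. The actual encoder/decoder layers are not specified here, so they are omitted; the zero-fill mask value is an assumption made for illustration:

```python
import numpy as np

def apply_random_mask(img, mask_size=2, rng=None):
    """S91: overlay a randomly placed mask_size x mask_size square of zeros."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape
    top = rng.integers(0, h - mask_size + 1)
    left = rng.integers(0, w - mask_size + 1)
    masked = img.copy()
    masked[top:top+mask_size, left:left+mask_size] = 0.0
    return masked, (top, left)

def reconstruction_loss(pred, target):
    """S93/S94: mean squared error between the prediction and the unmasked
    image; minimizing it drives the encoder/decoder parameter updates."""
    return float(((pred - target) ** 2).mean())

rng = np.random.default_rng(42)
img = rng.random((8, 8)) + 0.1          # strictly positive "crack image"
masked, _ = apply_random_mask(img, rng=rng)
assert (masked == 0).sum() == 4          # exactly the 2x2 square is zeroed
assert reconstruction_loss(img, img) == 0.0
assert reconstruction_loss(masked, img) > 0.0
```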
Optionally, in an embodiment of the present application, calibrating the first result according to the environmental complexity includes:
determining a confidence threshold according to the environmental complexity, and calibrating the first result according to the confidence threshold;
or, determining an NMS threshold according to the environmental complexity, and calibrating the first result according to the NMS threshold.
In some possible implementations, the first result is calibrated using the environmental complexity. Specifically, an appropriate confidence threshold is set according to the environmental complexity, and the values in the first result are screened to obtain a more accurate recognition result. A more accurate recognition result can also be obtained through an NMS (non-maximum suppression) threshold.
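A minimal stdlib sketch of both calibration options together: a confidence threshold drops weak detections, and greedy non-maximum suppression with an IoU threshold removes duplicate boxes on the same crack (the detection format and threshold values are illustrative):

```python
def calibrate(detections, conf_thresh, nms_thresh):
    """Filter (box, score) detections by a confidence threshold, then apply
    greedy non-maximum suppression with the given IoU threshold."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        ar_a = (a[2] - a[0]) * (a[3] - a[1])
        ar_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (ar_a + ar_b - inter)

    kept = []
    survivors = sorted((d for d in detections if d[1] >= conf_thresh),
                       key=lambda d: d[1], reverse=True)
    while survivors:
        best = survivors.pop(0)
        kept.append(best)
        survivors = [d for d in survivors if iou(best[0], d[0]) < nms_thresh]
    return kept

dets = [((0, 0, 10, 10), 0.9),    # strong crack box
        ((1, 1, 11, 11), 0.8),    # heavy overlap -> suppressed by NMS
        ((50, 50, 60, 60), 0.6),  # distinct crack -> kept
        ((70, 70, 80, 80), 0.2)]  # below confidence threshold -> dropped
result = calibrate(dets, conf_thresh=0.5, nms_thresh=0.5)
assert [d[1] for d in result] == [0.9, 0.6]
```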
Optionally, in one embodiment of the present application, the environmental complexity is derived from the plurality of sub-image data; and calibrating the first result according to the environmental complexity to obtain the identification result of the object to be detected includes:
acquiring the environmental complexity of each sub-image data and recording it as a first numerical value;
determining the weight of each sub-image data according to its environmental complexity;
and calibrating the first result according to the weights of the sub-image data to obtain the identification result of the object to be detected.
In some possible embodiments, the weight of each sub-image in the recognition result is determined according to the environmental complexity of that sub-image. It can be appreciated that if the environmental complexity is high, the sub-image may be given a smaller weight to mitigate the influence of the environment on the recognition result; conversely, if the environmental complexity is low, the sub-image may be given a larger weight, improving the accuracy of the recognition result.
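One way to realize this weighting, sketched under the assumption of a simple inverse relationship (the text specifies only that higher complexity should mean lower weight; the 1 / (1 + complexity) form is illustrative):

```python
def weighted_scores(sub_image_results, complexities):
    """Down-weight detections from visually complex sub-images.

    Each sub-image gets weight 1 / (1 + complexity), so a busy, murky tile
    influences the final recognition result less than a clean one.
    """
    calibrated = []
    for scores, c in zip(sub_image_results, complexities):
        w = 1.0 / (1.0 + c)
        calibrated.append([s * w for s in scores])
    return calibrated

# Tile 0 is simple (complexity 0), tile 1 is cluttered (complexity 3).
out = weighted_scores([[0.8, 0.6], [0.8]], [0.0, 3.0])
assert out[0] == [0.8, 0.6]          # simple tile: weight 1, unchanged
assert out[1] == [0.2]               # complex tile: weight 0.25
```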
With reference to fig. 10, a description is given of a method for identifying underwater cracks according to an embodiment of the present application:
S11: simulating the environment of an underwater building structure with a damaged simply supported beam and a water pool, and acquiring image data with an underwater robot;
S12: inviting a detection engineer to identify and mark crack positions in the acquired data, and converting the color images into grayscale images;
S13: constructing a YOLOv7 network structure with a deep learning package, the network structure including CBS, ELAN-W, REP, and MP modules, among others;
S14: pre-training the model with the public building crack dataset Crack500, and adjusting model parameters according to the test results;
S15: evaluating the performance of the model, and saving a well-performing weight file after parameter adjustment as a pre-training weight file;
S16: loading the pre-training weight file and fine-tuning the model weights with the acquired underwater building structure dataset;
S17: predicting with the final model, and selecting appropriate confidence and NMS thresholds according to the calculated environmental complexity to filter the recognition results.
As shown in fig. 11, the method for identifying underwater cracks provided by the application can identify and mark cracks in the underwater structure of a bridge. The crack detection model achieves high marking precision in a simple single-crack scene, and still maintains good marking precision in a complex multi-crack scene. In summary, the proposed neural network is practical and efficient.
The beneficial effects of this embodiment are as follows. Improved efficiency: compared with the traditional manual detection method, the neural network can automatically detect and label cracks, greatly improving detection speed and efficiency. Improved accuracy: the neural network adopts the advanced YOLOv7 network and uses a self-encoding mask data prediction method to obtain more plentiful data; training with sufficient data allows cracks to be detected more accurately, avoiding the subjectivity and error of manual detection. Suitability for various application scenes: the neural network can be applied to different types of bridges and different building material surfaces, can automatically identify and detect cracks, and has strong adaptability. Support for large detection tasks: the neural network can be used for large-scale detection tasks, and its automatic, efficient labeling effectively improves working efficiency and detection accuracy.
Next, a system for identifying underwater cracks according to an embodiment of the present application will be described with reference to fig. 12.
FIG. 12 is a schematic structural diagram of an underwater crack identification system according to an embodiment of the present application, the system specifically includes:
a first module 210, configured to collect image data of an object to be detected under water; the object to be detected being an object to be checked for the presence of cracks;
a second module 220, configured to perform segmentation processing on the image data to obtain a plurality of sub-image data;
a third module 230, configured to input the plurality of sub-image data into the crack detection model to obtain a first result; the crack detection model being used to predict whether a crack exists in each sub-image;
a fourth module 240, configured to obtain the environmental complexity from the plurality of sub-image data, and calibrate the first result according to the environmental complexity to obtain the identification result of the object to be detected.
It can be seen that the content of the above method embodiment is applicable to this system embodiment; the functions specifically implemented by this system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are the same as those of the method embodiment.
Referring to fig. 13, an embodiment of the present application provides an apparatus for identifying an underwater crack, including:
at least one processor 310;
at least one memory 320 for storing at least one program;
the at least one program, when executed by the at least one processor 310, causes the at least one processor 310 to implement the method of identifying underwater fractures.
Similarly, the content of the above method embodiment is applicable to this apparatus embodiment; the functions specifically implemented by this apparatus embodiment are the same as those of the method embodiment, and the beneficial effects achieved are the same as those of the method embodiment.
The embodiment of the application also provides a computer-readable storage medium storing a processor-executable program which, when executed by a processor, performs the above-mentioned underwater crack identification method.
Similarly, the content of the above method embodiment is applicable to this storage medium embodiment; the specific functions of this storage medium embodiment are the same as those of the method embodiment, and the beneficial effects achieved are the same as those of the method embodiment.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium, including several programs for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable programs for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with a program execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the programs from the program execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the program execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable program execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the foregoing description of the present specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the embodiments described above, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.
Claims (10)
1. A method for identifying an underwater crack, characterized by comprising the following steps:
collecting image data of an object to be detected under water; the object to be detected being an object to be checked for the presence of a crack;
segmenting the image data to obtain a plurality of sub-image data;
inputting the plurality of sub-image data into a crack detection model to obtain a first result; the crack detection model being used to predict whether a crack exists in each sub-image;
obtaining an environmental complexity according to the plurality of sub-image data; and calibrating the first result according to the environmental complexity to obtain an identification result of the object to be detected.
2. The method of claim 1, wherein inputting the plurality of sub-image data into a crack detection model to obtain a first result comprises:
extracting features of the plurality of sub-image data to obtain first feature data;
performing recombination and expansion processing on the first feature data to obtain a first bounding box;
pooling the first bounding box to obtain second feature data;
and performing convolution processing on the second feature data to obtain the first result.
3. The method of claim 1, further comprising training the crack detection model by:
acquiring an image sample;
segmenting the image sample to obtain a plurality of sub-image samples;
inputting the plurality of sub-image samples into the crack detection model to obtain a prediction result, and updating the crack detection model according to the degree of matching between the prediction result and a ground-truth result to obtain a trained crack detection model.
4. A method of identifying underwater cracks as claimed in claim 3, further comprising the steps of:
acquiring the image sample from a building crack dataset; the building crack dataset being a publicly available dataset;
or, acquiring the image sample through a data processing model; the data processing model being used to increase the number of image samples.
5. The method for identifying underwater cracks according to claim 4, wherein the data processing model is obtained by:
acquiring building crack image data, and recording the building crack image data as first data;
performing random-mask coverage processing on the first data to obtain mask data;
extracting features of the mask data through an encoder to obtain a first feature vector;
and performing feature processing on the first feature vector through a decoder to obtain prediction data, and updating parameters of the data processing model according to the degree of matching between the prediction data and ground-truth data to obtain a trained data processing model.
6. The method of claim 1, wherein calibrating the first result based on the environmental complexity comprises:
determining a confidence threshold according to the environmental complexity, and calibrating the first result according to the confidence threshold;
or, determining an NMS threshold according to the environmental complexity, and calibrating the first result according to the NMS threshold.
7. The method for identifying an underwater crack according to claim 1, wherein the environmental complexity is obtained from the plurality of sub-image data; and calibrating the first result according to the environmental complexity to obtain the identification result of the object to be detected comprises:
acquiring the environmental complexity of each sub-image data and recording it as a first numerical value;
determining the weight of each sub-image data according to its environmental complexity;
and calibrating the first result according to the weights of the sub-image data to obtain the identification result of the object to be detected.
8. An underwater crack identification system, comprising:
a first module, configured to collect image data of an object to be detected under water; the object to be detected being an object to be checked for the presence of a crack;
a second module, configured to segment the image data to obtain a plurality of sub-image data;
a third module, configured to input the plurality of sub-image data into a crack detection model to obtain a first result; the crack detection model being used to predict whether a crack exists in each sub-image;
a fourth module, configured to obtain an environmental complexity according to the plurality of sub-image data, and calibrate the first result according to the environmental complexity to obtain an identification result of the object to be detected.
9. An apparatus for identifying underwater cracks, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the method of identifying underwater fractures as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program, when executed by a processor, is for implementing the method of identifying underwater cracks as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310724328.7A CN116777865B (en) | 2023-06-16 | Underwater crack identification method, system, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116777865A true CN116777865A (en) | 2023-09-19 |
CN116777865B CN116777865B (en) | 2024-09-06 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117557499A (en) * | 2023-10-20 | 2024-02-13 | 中水珠江规划勘测设计有限公司 | Submarine pipeline leakage identification method and device, electronic equipment and medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020046213A1 (en) * | 2018-08-31 | 2020-03-05 | Agency For Science, Technology And Research | A method and apparatus for training a neural network to identify cracks |
CN111257341A (en) * | 2020-03-30 | 2020-06-09 | 河海大学常州校区 | Underwater building crack detection method based on multi-scale features and stacked full convolution network |
CN112001411A (en) * | 2020-07-10 | 2020-11-27 | 河海大学 | Dam crack detection algorithm based on FPN structure |
CN112529901A (en) * | 2020-12-31 | 2021-03-19 | 江西飞尚科技有限公司 | Crack identification method in complex environment |
CN112926584A (en) * | 2021-05-11 | 2021-06-08 | 武汉珈鹰智能科技有限公司 | Crack detection method and device, computer equipment and storage medium |
CN112950570A (en) * | 2021-02-25 | 2021-06-11 | 昆明理工大学 | Crack detection method combining deep learning and dense continuous central point |
CN113822880A (en) * | 2021-11-22 | 2021-12-21 | 中南大学 | Crack identification method based on deep learning |
US20220051403A1 (en) * | 2020-08-13 | 2022-02-17 | PAIGE.AI, Inc. | Systems and methods to process electronic images for continuous biomarker prediction |
US20220092856A1 (en) * | 2020-09-22 | 2022-03-24 | Bentley Systems, Incorporated | Crack detection, assessment and visualization using deep learning with 3d mesh model |
CN114266892A (en) * | 2021-12-20 | 2022-04-01 | 江苏燕宁工程科技集团有限公司 | Pavement disease identification method and system for multi-source data deep learning |
CN114332473A (en) * | 2021-09-29 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Object detection method, object detection device, computer equipment, storage medium and program product |
CN115661031A (en) * | 2022-09-21 | 2023-01-31 | 广州大学 | Pixel-level concrete crack detection method of convolutional neural network |
Non-Patent Citations (4)
Title |
---|
KEQUAN CHEN: "Safety Helmet Detection Based on YOLOv7", CSAE '22: Proceedings of the 6th International Conference on Computer Science and Application Engineering, 13 December 2022 (2022-12-13) *
KOU, DALEI; QUAN, JICHUAN; ZHANG, ZHONGWEI: "Research progress of object detection frameworks based on deep learning", Computer Engineering and Applications, no. 11, 26 March 2019 (2019-03-26) *
CHAI, XUESONG; ZHU, XINGYONG; LI, JIANCHAO; XUE, FENG; XIN, XUESHI: "Tunnel lining crack recognition algorithm based on deep convolutional neural network", Railway Engineering, no. 06, 20 June 2018 (2018-06-20) *
CHEN, BO; ZHANG, HUA; WANG, SHUANG; WANG, HAORAN; LIU, ZHAOWEI; LI, YONGLONG; XIE, HUI: "Research on crack detection method for dam surface based on fully convolutional neural network", Journal of Hydroelectric Engineering, no. 07, 31 July 2020 (2020-07-31) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |