CN108346144B - Automatic bridge crack monitoring and identifying method based on computer vision - Google Patents


Info

Publication number
CN108346144B
CN108346144B · CN201810089404.0A
Authority
CN
China
Prior art keywords
crack
subunit
samples
unit
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810089404.0A
Other languages
Chinese (zh)
Other versions
CN108346144A (en)
Inventor
李惠
徐阳
鲍跃全
李顺龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201810089404.0A priority Critical patent/CN108346144B/en
Publication of CN108346144A publication Critical patent/CN108346144A/en
Application granted granted Critical
Publication of CN108346144B publication Critical patent/CN108346144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M5/00Investigating the elasticity of structures, e.g. deflection of bridges or air-craft wings
    • G01M5/0008Investigating the elasticity of structures, e.g. deflection of bridges or air-craft wings of bridges
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M5/00Investigating the elasticity of structures, e.g. deflection of bridges or air-craft wings
    • G01M5/0033Investigating the elasticity of structures, e.g. deflection of bridges or air-craft wings by determining damage, crack or wear
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30132Masonry; Concrete

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses an automatic bridge crack monitoring and identification method based on computer vision. Addressing the problem of automatic monitoring and identification of bridge cracks, the invention automates the entire pipeline of model training, crack identification and result display for real steel box girder crack images containing complex background interference. The method is convenient and accurate, and improves the efficiency of bridge crack detection as well as the accuracy and stability of the detection results.

Description

Automatic bridge crack monitoring and identifying method based on computer vision
Technical Field
The invention relates to the field of civil engineering monitoring, in particular to a bridge crack automatic monitoring and identifying method based on computer vision.
Background
With the rapid development of national economic construction, large-scale infrastructure plays an increasingly important role, in particular large steel box girder sea-crossing bridges. Because such bridges carry complex vehicle loads over long periods, initial defects at the welds of the steel box girder often lead to accumulated fatigue damage of varying degrees and, in turn, to fatigue cracks. Under the coupled action of long-term, fatigue and sudden load effects, a fatigue crack can propagate along the weld or into components such as the top plate and the diaphragm, degrading the resistance of the bridge structure and, in extreme cases, causing catastrophic failure. Bridge management departments therefore invest large amounts of manpower, material and financial resources every year to manually inspect the interior of steel box girders. At present, steel box girder cracks are mainly detected by visual inspection or with professional equipment, after which the cracks are located and marked. Such inspection is inefficient and inaccurate, requires long detection cycles, and depends heavily on the subjective judgment of the inspector.
With the wide application of image processing in civil engineering, crack identification methods based on traditional algorithms such as threshold segmentation and morphological operations have appeared. However, these methods are often ineffective inside actual bridges, because the internal environment of a steel box girder is very complicated: structural member boundaries, complex structural surface conditions (anticorrosive paint, magnetic powder, local corrosion, etc.) and uneven lighting all appear in the photographed images and make fatigue crack recognition difficult. The largest source of interference is that, after discovering a crack, inspectors typically draw a marking line along its path with a marker pen and record next to it the section position and a preliminary size measurement; in conventional image processing, these manual marks and handwriting severely interfere with the identification of the real crack. Moreover, fatigue cracks are relatively small, with widths on the order of 0.1 mm (10^-1 mm), and are therefore easily treated as noise by conventional image processing. In addition, some recognition methods require the camera's internal and external imaging parameters (object distance, image distance, shooting angle, etc.) or additional specialized measurement equipment. Overall, conventional crack identification methods require excessive manual intervention and are costly.
Disclosure of Invention
To overcome these defects, the invention provides an automatic bridge crack monitoring and identification method based on computer vision, which can be used both for offline identification and evaluation of crack images and for real-time crack monitoring.
The technical solution adopted by the invention is as follows: an automatic bridge crack monitoring and identification method based on computer vision, comprising the following steps:
step one, training set preparation: the original input image is cut into a set of 64 × 64 × 3 subunits, and a certain proportion of samples are randomly extracted from this set, the number of samples being determined as needed. The image content of each subunit is inspected and labeled: the number 1 denotes a crack unit, 2 a handwriting unit, and 3 a background unit. The newly labeled subunits are then merged into the original training set, each subunit with its corresponding label. To counter the imbalance among the three classes of subunit samples, the current counts of the three classes are displayed, and, taking the smallest class as the reference, the same number of samples is randomly extracted from each of the other two classes. Each subunit sample is then rotated counterclockwise by 90°, 180° and 270° to generate three new samples, completing the data augmentation; every augmented sample keeps the label it had before rotation. The training set is then complete;
step two, training a crack recognizer based on a deep network model: a deep convolutional neural network fusing multi-level features is built and initialized, with the size and function of each layer as shown in Table 1. The 64 × 64 × 3 subunits in the training set are used as input and the corresponding labels as output, and the network parameters are trained. The loss function in the training process is the softmax loss, the optimization algorithm is stochastic gradient descent with momentum (SGDM), and initial values of the learning rate, momentum parameter and weight-decay parameter are used. The trained deep network is the crack recognizer;
TABLE 1 Size and function of each layer in the deep network (the table is reproduced in the Detailed Description)
Step three, crack unit identification: the image is divided into 64 × 64 × 3 subunits, each subunit is input to the crack recognizer, and the output layer gives the corresponding label value: a subunit with label value 1 is a crack unit, label value 2 a handwriting unit, and label value 3 a background unit. The recognition results of each class are displayed separately;
step four, post-processing and output: each crack subunit is segmented by the optimal entropy threshold method, the binarized crack pixel identification result is output, and the length and width of the crack are obtained from the binarized crack pixels.
The invention also has the following technical characteristics:
1. In the second step, the loss function in the training process is the softmax loss function, with the formula:

L = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{C}1\{y^{(i)}=j\}\log\frac{e^{W_j^{T}x^{(i)}+b_j}}{\sum_{l=1}^{C}e^{W_l^{T}x^{(i)}+b_l}}\right]+\frac{\lambda}{2}\lVert W\rVert^{2}

where L is the loss function, m is the number of samples, and C is the number of classes; 1{y^{(i)} = j} is the indicator function, equal to 1 when the i-th sample is classified into the j-th class and 0 otherwise; W_j and b_j are the weights and biases to be updated; x^{(i)} is the input; and λ is the weight-decay parameter.
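For illustration, the softmax loss above, including the L2 weight-decay term, can be evaluated numerically. The following NumPy sketch is generic and is not the patent's MATLAB implementation:

```python
import numpy as np

def softmax_loss(W, b, X, y, lam):
    """Softmax (cross-entropy) loss with L2 weight decay.
    W: (C, d) weights, b: (C,) biases, X: (m, d) inputs,
    y: (m,) integer labels in 0..C-1, lam: weight-decay parameter."""
    scores = X @ W.T + b                             # (m, C) class scores
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    m = X.shape[0]
    data_loss = -log_probs[np.arange(m), y].mean()   # mean negative log-likelihood
    return data_loss + 0.5 * lam * np.sum(W * W)     # plus L2 penalty
```

With all-zero parameters every class is equally likely, so the loss equals log C, which gives a quick sanity check.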
2. In the second step, the optimization algorithm is stochastic gradient descent with momentum, with the update formulas:

v_W \leftarrow \eta_W v_W - \alpha_W \frac{\partial L}{\partial W}, \qquad W \leftarrow W + v_W
v_b \leftarrow \eta_b v_b - \alpha_b \frac{\partial L}{\partial b}, \qquad b \leftarrow b + v_b

where v_W is the weight update velocity, α_W the weight learning rate, η_W the weight momentum parameter, and ∂L/∂W the partial derivative of the loss function with respect to the weights; v_b is the bias update velocity, α_b the bias learning rate, η_b the bias momentum parameter, and ∂L/∂b the partial derivative of the loss function with respect to the biases.
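A single SGDM update of this form can be sketched as follows (generic NumPy, not the patent's MATLAB code); the same rule is applied separately to the weights and the biases, each with its own learning rate and momentum parameter:

```python
import numpy as np

def sgdm_step(param, velocity, grad, lr, momentum):
    """One stochastic-gradient-descent-with-momentum update:
    v <- momentum * v - lr * grad;  param <- param + v."""
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity
```

Repeated calls accumulate velocity, so successive steps in a consistent gradient direction grow larger, which is the usual acceleration effect of momentum.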
3. In step four, each crack subunit is segmented by the optimal entropy threshold method, with the formulas:

p_i = \frac{n_i}{n}, \qquad P_t = \sum_{i=0}^{t} p_i
H_F(t) = -\sum_{i=0}^{t}\frac{p_i}{P_t}\ln\frac{p_i}{P_t}, \qquad H_B(t) = -\sum_{i=t+1}^{255}\frac{p_i}{1-P_t}\ln\frac{p_i}{1-P_t}
H(t) = H_F(t) + H_B(t), \qquad T = \arg\max_{t} H(t)

where p_i is the proportion of pixels at the i-th gray level, n_i the number of pixels at the i-th gray level, n the total number of pixels, P_t the cumulative probability up to gray level t, H_F(t) the foreground entropy, H_B(t) the background entropy, H(t) the total image entropy, and T the gray-level threshold at which the total entropy is maximal.
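A minimal NumPy sketch of this maximum-entropy thresholding, assuming an 8-bit grayscale input (the patent's own implementation is in MATLAB):

```python
import numpy as np

def max_entropy_threshold(gray):
    """Return the gray level T that maximizes foreground entropy
    plus background entropy, for 8-bit grayscale pixel values."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()          # p_i: proportion of each gray level
    P = np.cumsum(p)               # P_t: cumulative probability
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        if P[t] <= 1e-12 or 1.0 - P[t] <= 1e-12:
            continue               # skip degenerate splits
        fg = p[:t + 1] / P[t]
        bg = p[t + 1:] / (1.0 - P[t])
        h = -np.sum(fg[fg > 0] * np.log(fg[fg > 0])) \
            - np.sum(bg[bg > 0] * np.log(bg[bg > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

On a bimodal image (e.g. dark crack pixels against a bright background) the returned threshold falls between the two modes, which is the behavior the binarization step relies on.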
4. In step four, after threshold segmentation is completed, the pixel resolution is entered at the user interface to obtain the real length and width of the crack.
Addressing the problem of automatic bridge crack monitoring and identification, the invention automates the entire pipeline of model training, crack identification and result display for real steel box girder crack images containing complex background interference. The method is convenient and accurate, and improves the efficiency of bridge crack detection and the accuracy and stability of the detection results. The whole crack identification process is automated, markedly reducing the degree of manual participation. The method meets the real-time data processing requirement of online crack monitoring and early warning: without updating the training set, newly acquired images are identified directly, with output delays as low as a few seconds. The method improves the automation, intelligence, accuracy and robustness of bridge crack identification, and provides a solution for automatic monitoring and identification of cracks in civil engineering bridges.
Drawings
FIG. 1 is a flow chart of automatic monitoring and identification of bridge cracks based on computer vision and deep learning;
FIG. 2 is a deep convolutional neural network graph incorporating multilevel features;
FIG. 3 is a comparison graph of the recognition results of a long crack unit;
FIG. 4 is a comparison graph of the recognition results of a plurality of crack units;
FIG. 5 is a comparison graph of crack magnified image recognition results;
FIG. 6 is a diagram of a binarization identification result of a long crack;
FIG. 7 is a diagram of a result of binarization identification of a plurality of cracks;
fig. 8 is a graph of the binarization identification result of the crack enlarged image.
Detailed Description
The invention is further illustrated by the following example with reference to the accompanying drawings:
example 1:
as shown in fig. 1, the method for automatically monitoring and identifying bridge cracks based on computer vision is implemented in a MATLAB environment:
firstly, training set preparation: the original input image is cut into a set of 64 × 64 × 3 subunits, and a certain proportion of samples are randomly extracted from this set, the number of samples being determined as needed. The image content of each subunit is inspected and labeled: the number 1 denotes a crack unit, 2 a handwriting unit, and 3 a background unit. The newly labeled subunits are then merged into the original training set, each with its corresponding label. To counter the imbalance among the three classes of subunit samples, the current counts of the three classes are displayed, and, taking the smallest class as the reference, the same number of samples is randomly extracted from each of the other two classes. Each subunit sample is then rotated counterclockwise by 90°, 180° and 270° to generate three new samples, completing the data augmentation; every augmented sample keeps the label it had before rotation. The training set is thus complete.
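The preparation above (tiling, labeling, class balancing, rotation augmentation) is implemented in MATLAB in the patent; a minimal NumPy sketch of the same logic might look as follows. The callable `labels_fn` stands in for the manual labeling step and is purely illustrative, and the proportional pre-sampling of subunits is omitted for brevity:

```python
import numpy as np

def make_training_set(image, labels_fn, patch=64):
    """Cut an image into non-overlapping patch x patch x 3 subunits,
    label them, balance the three classes, and augment by rotation."""
    h, w, _ = image.shape
    units, labels = [], []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            sub = image[r:r + patch, c:c + patch, :]
            units.append(sub)
            labels.append(labels_fn(sub))  # 1=crack, 2=handwriting, 3=background
    units, labels = np.array(units), np.array(labels)

    # Balance: downsample each class to the size of the rarest class.
    counts = {k: np.sum(labels == k) for k in (1, 2, 3)}
    n_min = min(counts.values())
    keep = np.concatenate([
        np.random.choice(np.where(labels == k)[0], n_min, replace=False)
        for k in (1, 2, 3)])
    units, labels = units[keep], labels[keep]

    # Augment: rotate each sample by 90/180/270 degrees, keeping its label.
    aug_u = [units] + [np.rot90(units, k, axes=(1, 2)) for k in (1, 2, 3)]
    aug_l = [labels] * 4
    return np.concatenate(aug_u), np.concatenate(aug_l)
```

Each balanced sample yields four training samples in total (the original plus three rotations), so a balanced set of N subunits per class becomes 4N per class.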
Secondly, the crack recognizer is trained. The deep convolutional neural network fusing multi-level features shown in fig. 2 is built and initialized; the size and function of each layer are shown in Table 1. The 64 × 64 × 3 subunits in the training set are used as input and the corresponding labels as output to train the network parameters. The loss function in the training process is the softmax loss function (formula 1), and the optimization algorithm is stochastic gradient descent with momentum (SGDM, formula 2). Initial values of the learning rate, momentum parameter and weight-decay parameter are used, and the resulting deep network is the crack recognizer.
Formula 1 (softmax loss):

L = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{C}1\{y^{(i)}=j\}\log\frac{e^{W_j^{T}x^{(i)}+b_j}}{\sum_{l=1}^{C}e^{W_l^{T}x^{(i)}+b_l}}\right]+\frac{\lambda}{2}\lVert W\rVert^{2}

where L is the loss function, m is the number of samples, and C is the number of classes; 1{y^{(i)} = j} is the indicator function, equal to 1 when the i-th sample is classified into the j-th class and 0 otherwise; W_j and b_j are the weights and biases to be updated; x^{(i)} is the input; and λ is the weight-decay parameter.

Formula 2 (SGDM updates):

v_W \leftarrow \eta_W v_W - \alpha_W \frac{\partial L}{\partial W}, \qquad W \leftarrow W + v_W
v_b \leftarrow \eta_b v_b - \alpha_b \frac{\partial L}{\partial b}, \qquad b \leftarrow b + v_b

where v_W is the weight update velocity, α_W the weight learning rate, η_W the weight momentum parameter, and ∂L/∂W the partial derivative of the loss function with respect to the weights; v_b is the bias update velocity, α_b the bias learning rate, η_b the bias momentum parameter, and ∂L/∂b the partial derivative of the loss function with respect to the biases.
TABLE 1 Size and function of each layer in the deep network

Layer  Input (H x W x D)   Operation                 Kernel (H x W x D)  Number  Stride
L0     64 x 64 x 3         Convolution 1-1           10 x 10 x 3         16      2
L1     28 x 28 x 16        Normalization 1-1         -                   -       -
L2     28 x 28 x 16        Activation 1-1            -                   -       -
L3     28 x 28 x 16        Pooling 1-1               2 x 2               -       2
L4     14 x 14 x 16        Convolution 1-2           5 x 5 x 16          25      1
L5     10 x 10 x 25        Normalization 1-2         -                   -       -
L6     10 x 10 x 25        Activation 1-2            -                   -       -
L7     10 x 10 x 25        Pooling 1-2               2 x 2               -       2
L8     5 x 5 x 25          Fully connected 1         5 x 5 x 25          3       1
L9     14 x 14 x 16        Convolution 2-1           7 x 7 x 16          25      1
L10    8 x 8 x 25          Normalization 2-1         -                   -       -
L11    8 x 8 x 25          Activation 2-1            -                   -       -
L12    8 x 8 x 25          Pooling 2-1               2 x 2               -       2
L13    4 x 4 x 25          Convolution 2-2           4 x 4 x 25          36      1
L14    1 x 1 x 36          Normalization 2-2         -                   -       -
L15    1 x 1 x 36          Activation 2-2            -                   -       -
L16    1 x 1 x 36          Fully connected 2         1 x 1 x 36          3       1
L17    4 x 4 x 25          Fully connected 3-1       4 x 4 x 25          36      1
L18    1 x 1 x 36          Activation 3-1            -                   -       -
L19    1 x 1 x 36          Dropout                   -                   -       -
L20    1 x 1 x 36          Fully connected 3-2       1 x 1 x 36          3       1
L21    1 x 1 x 36          Fully connected 4         1 x 1 x 36          3       1
L22    1 x 1 x 3           Fusion                    -                   -       -
L23    1 x 1 x 3           Classification (softmax)  -                   -       -
L24    1 x 1 x 1           Loss                      -                   -       -
Thirdly, crack unit identification: the image is divided into 64 × 64 × 3 subunits, each subunit is input to the crack recognizer, and the output layer gives the corresponding label value: a subunit with label value 1 is a crack unit, label value 2 a handwriting unit, and label value 3 a background unit. The recognition results of each class are displayed separately, as shown in figs. 3-5.
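The tiling-and-classification loop of this step can be sketched as follows; `recognizer` is a placeholder for the trained deep network, here any callable mapping a subunit to a label in {1, 2, 3}:

```python
import numpy as np

def classify_image(image, recognizer, patch=64):
    """Tile an image into patch x patch x 3 subunits and label each tile.
    Returns a grid of labels: 1 = crack, 2 = handwriting, 3 = background."""
    h, w, _ = image.shape
    rows, cols = h // patch, w // patch
    label_map = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            sub = image[r * patch:(r + 1) * patch,
                        c * patch:(c + 1) * patch, :]
            label_map[r, c] = recognizer(sub)
    return label_map
```

The resulting label map makes it straightforward to display each class separately: masking the image by `label_map == 1` shows only crack units, and so on.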
Fourthly, post-processing and output: each crack subunit is segmented by the optimal entropy threshold method (formula 3), the binarized crack pixel identification result is output, and the length and width of the crack are obtained from the binarized crack pixels, as shown in figs. 6-8. After threshold segmentation is completed, the real length and width of the crack are obtained by entering the pixel resolution (mm/pixel).
Formula 3 (optimal entropy thresholding):

p_i = \frac{n_i}{n}, \qquad P_t = \sum_{i=0}^{t} p_i
H_F(t) = -\sum_{i=0}^{t}\frac{p_i}{P_t}\ln\frac{p_i}{P_t}, \qquad H_B(t) = -\sum_{i=t+1}^{255}\frac{p_i}{1-P_t}\ln\frac{p_i}{1-P_t}
H(t) = H_F(t) + H_B(t), \qquad T = \arg\max_{t} H(t)

where p_i is the proportion of pixels at the i-th gray level, n_i the number of pixels at the i-th gray level, n the total number of pixels, P_t the cumulative probability up to gray level t, H_F(t) the foreground entropy, H_B(t) the background entropy, H(t) the total image entropy, and T the gray-level threshold at which the total entropy is maximal.
The method is implemented in a MATLAB environment and can be applied directly to crack images taken with an ordinary consumer-grade camera, without special shooting or detection equipment. It offers high identification accuracy, high speed and low cost, can be used both for offline identification and evaluation and for real-time monitoring, and improves the automation, intelligence, accuracy and robustness of steel box girder fatigue crack identification.

Claims (5)

1. An automatic bridge crack monitoring and identification method based on computer vision, characterized by comprising the following steps:
step one, training set preparation: cutting an original input image into a set of 64 × 64 × 3 subunits, and randomly extracting a certain proportion of samples from the set, the number of samples being determined as needed; inspecting the image content of each subunit and labeling it, the number 1 denoting a crack unit, 2 a handwriting unit, and 3 a background unit; merging the newly labeled subunits into the original training set, each subunit with its corresponding label; displaying the current counts of the three subunit classes and, taking the smallest class as the reference, randomly extracting the same number of samples from each of the other two classes; then rotating each subunit sample counterclockwise by 90°, 180° and 270° to generate three new samples, completing the data augmentation, each augmented sample keeping the label it had before rotation, whereby the training set is complete;
step two, training a crack recognizer based on a deep network model: building a deep convolutional neural network fusing multi-level features and completing initialization, the size and function of each layer being shown in Table 1; using the 64 × 64 × 3 subunits in the training set as input and the corresponding labels as output to train the network parameters, the loss function in the training process being the softmax loss function, the optimization algorithm being stochastic gradient descent with momentum, and initial values of the learning rate, momentum parameter and weight-decay parameter being used; the resulting deep network is the crack recognizer;
TABLE 1 Size and function of each layer in the deep network (as given in the description)
step three, crack unit identification: dividing the image into 64 × 64 × 3 subunits, inputting each subunit into the crack recognizer, and outputting the corresponding label value on the output layer, a subunit with label value 1 being a crack unit, label value 2 a handwriting unit, and label value 3 a background unit, the recognition results of each class being displayed separately;
step four, post-processing and output: performing image segmentation on each crack subunit by the optimal entropy threshold method, outputting the binarized crack pixel identification result, and obtaining the length and width of the crack from the binarized crack pixels.
2. The automatic bridge crack monitoring and identification method based on computer vision according to claim 1, wherein in step two the loss function in the training process is the softmax loss function:

L = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{C}1\{y^{(i)}=j\}\log\frac{e^{W_j^{T}x^{(i)}+b_j}}{\sum_{l=1}^{C}e^{W_l^{T}x^{(i)}+b_l}}\right]+\frac{\lambda}{2}\lVert W\rVert^{2}

where L is the loss function, m is the number of samples, and C is the number of classes; 1{y^{(i)} = j} is the indicator function, equal to 1 when the i-th sample is classified into the j-th class and 0 otherwise; W_j and b_j are the weights and biases to be updated; x^{(i)} is the input; and λ is the weight-decay parameter.
3. The automatic bridge crack monitoring and identification method based on computer vision according to claim 1, wherein in step two the optimization algorithm is stochastic gradient descent with momentum:

v_W \leftarrow \eta_W v_W - \alpha_W \frac{\partial L}{\partial W}, \qquad W \leftarrow W + v_W
v_b \leftarrow \eta_b v_b - \alpha_b \frac{\partial L}{\partial b}, \qquad b \leftarrow b + v_b

where v_W is the weight update velocity, α_W the weight learning rate, η_W the weight momentum parameter, and ∂L/∂W the partial derivative of the loss function with respect to the weights; v_b is the bias update velocity, α_b the bias learning rate, η_b the bias momentum parameter, and ∂L/∂b the partial derivative of the loss function with respect to the biases.
4. The automatic bridge crack monitoring and identification method based on computer vision according to claim 1, wherein in step four each crack subunit is segmented by the optimal entropy threshold method:

p_i = \frac{n_i}{n}, \qquad P_t = \sum_{i=0}^{t} p_i
H_F(t) = -\sum_{i=0}^{t}\frac{p_i}{P_t}\ln\frac{p_i}{P_t}, \qquad H_B(t) = -\sum_{i=t+1}^{255}\frac{p_i}{1-P_t}\ln\frac{p_i}{1-P_t}
H(t) = H_F(t) + H_B(t), \qquad T = \arg\max_{t} H(t)

where p_i is the proportion of pixels at the i-th gray level, n_i the number of pixels at the i-th gray level, n the total number of pixels, P_t the cumulative probability up to gray level t, H_F(t) the foreground entropy, H_B(t) the background entropy, H(t) the total image entropy, and T the gray-level threshold at which the total entropy is maximal.
5. The automatic bridge crack monitoring and identification method based on computer vision according to claim 1, wherein in step four, after threshold segmentation is completed, the real length and width of the crack are obtained by entering the pixel resolution.
CN201810089404.0A 2018-01-30 2018-01-30 Automatic bridge crack monitoring and identifying method based on computer vision Active CN108346144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810089404.0A CN108346144B (en) 2018-01-30 2018-01-30 Automatic bridge crack monitoring and identifying method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810089404.0A CN108346144B (en) 2018-01-30 2018-01-30 Automatic bridge crack monitoring and identifying method based on computer vision

Publications (2)

Publication Number Publication Date
CN108346144A CN108346144A (en) 2018-07-31
CN108346144B true CN108346144B (en) 2021-03-16

Family

ID=62960701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810089404.0A Active CN108346144B (en) 2018-01-30 2018-01-30 Automatic bridge crack monitoring and identifying method based on computer vision

Country Status (1)

Country Link
CN (1) CN108346144B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109029381B (en) * 2018-10-19 2021-04-06 石家庄铁道大学 Tunnel crack detection method and system and terminal equipment
CN109376676A (en) * 2018-11-01 2019-02-22 哈尔滨工业大学 Highway engineering site operation personnel safety method for early warning based on unmanned aerial vehicle platform
CN109408985A (en) * 2018-11-01 2019-03-01 哈尔滨工业大学 The accurate recognition methods in bridge steel structure crack based on computer vision
CN109753954A (en) * 2018-11-14 2019-05-14 安徽艾睿思智能科技有限公司 The real-time positioning identifying method of text based on deep learning attention mechanism
CN110020652A (en) * 2019-01-07 2019-07-16 新而锐电子科技(上海)有限公司 The dividing method of Tunnel Lining Cracks image
CN109919942B (en) * 2019-04-04 2020-01-14 哈尔滨工业大学 Bridge crack intelligent detection method based on high-precision noise reduction theory
CN111091554B (en) * 2019-12-12 2020-08-28 哈尔滨市科佳通用机电股份有限公司 Railway wagon swing bolster fracture fault image identification method
CN111369526B (en) * 2020-03-03 2023-04-18 中建二局土木工程集团有限公司 Multi-type old bridge crack identification method based on semi-supervised deep learning
CN111563888A (en) * 2020-05-06 2020-08-21 清华大学 Quantitative crack growth monitoring method
CN111832617B (en) * 2020-06-05 2022-11-08 上海交通大学 Engine cold state test fault diagnosis method
CN113406088A (en) * 2021-05-10 2021-09-17 同济大学 Fixed point type steel box girder crack development observation device
CN113935086B (en) * 2021-09-17 2022-08-02 哈尔滨工业大学 Intelligent structure design method based on computer vision and deep learning
CN118379237A (en) * 2024-03-14 2024-07-23 哈尔滨工业大学 Bridge apparent crack pixel level identification method based on visual large model SAM

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1236173A2 (en) * 1999-10-27 2002-09-04 Biowulf Technologies, LLC Methods and devices for identifying patterns in biological systems
CN105975968A (en) * 2016-05-06 2016-09-28 西安理工大学 Caffe architecture based deep learning license plate character recognition method
CN106384080A (en) * 2016-08-31 2017-02-08 广州精点计算机科技有限公司 Apparent age estimating method and device based on convolutional neural network
CN107133943A (en) * 2017-04-26 2017-09-05 贵州电网有限责任公司输电运行检修分公司 A kind of visible detection method of stockbridge damper defects detection
CN107301383A (en) * 2017-06-07 2017-10-27 华南理工大学 A kind of pavement marking recognition methods based on Fast R CNN
CN107403197A (en) * 2017-07-31 2017-11-28 武汉大学 A kind of crack identification method based on deep learning


Also Published As

Publication number Publication date
CN108346144A (en) 2018-07-31

Similar Documents

Publication Publication Date Title
CN108346144B (en) Automatic bridge crack monitoring and identifying method based on computer vision
CN107316064B (en) Asphalt pavement crack classification and identification method based on convolutional neural network
CN107423760A (en) Based on pre-segmentation and the deep learning object detection method returned
CN108257114A (en) A kind of transmission facility defect inspection method based on deep learning
CN114092389A (en) Glass panel surface defect detection method based on small sample learning
CN111553303A (en) Remote sensing ortho image dense building extraction method based on convolutional neural network
CN110927171A (en) Bearing roller chamfer surface defect detection method based on machine vision
CN110232379A (en) A kind of vehicle attitude detection method and system
CN109284779A (en) Object detection method based on deep full convolution network
CN111369526B (en) Multi-type old bridge crack identification method based on semi-supervised deep learning
CN113962951B (en) Training method and device for detecting segmentation model, and target detection method and device
CN101140216A (en) Gas-liquid two-phase flow type recognition method based on digital graphic processing technique
CN111242899B (en) Image-based flaw detection method and computer-readable storage medium
CN115082444B (en) Copper pipe weld defect detection method and system based on image processing
CN112489026B (en) Asphalt pavement disease detection method based on multi-branch parallel convolution neural network
CN111103307A (en) Pcb defect detection method based on deep learning
CN109584206B (en) Method for synthesizing training sample of neural network in part surface flaw detection
CN115546223A (en) Method and system for detecting loss of fastening bolt of equipment under train
CN115019294A (en) Pointer instrument reading identification method and system
CN111091534A (en) Target detection-based pcb defect detection and positioning method
Ashraf et al. Machine learning-based pavement crack detection, classification, and characterization: a review
CN113516652A (en) Battery surface defect and adhesive detection method, device, medium and electronic equipment
CN113870326A (en) Structural damage mapping, quantifying and visualizing method based on image and three-dimensional point cloud registration
CN108765391A (en) A kind of plate glass foreign matter image analysis methods based on deep learning
CN117197085A (en) Road rapid-inspection image pavement disease detection method based on improved YOLOv8 network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant