CN111985499B - High-precision bridge apparent disease identification method based on computer vision - Google Patents

High-precision bridge apparent disease identification method based on computer vision

Info

Publication number
CN111985499B
Authority
CN
China
Prior art keywords
disease
image
training
training set
yolo
Prior art date
Legal status
Active
Application number
CN202010717371.7A
Other languages
Chinese (zh)
Other versions
CN111985499A (en)
Inventor
茅建校
万亚华
倪有豪
温学华
赵恺雍
庞振浩
谢以顺
王飞球
王浩
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202010717371.7A priority Critical patent/CN111985499B/en
Publication of CN111985499A publication Critical patent/CN111985499A/en
Application granted granted Critical
Publication of CN111985499B publication Critical patent/CN111985499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Abstract

The invention discloses a high-precision bridge apparent disease identification method based on computer vision, which comprises an image preprocessing stage, a simulated image generation stage and an apparent disease identification stage. In the image preprocessing stage, Gaussian filtering and image pixel equalization are applied to a training set containing disease images. In the simulated image generation stage, a generative adversarial network learns the data distribution of the training set and generates simulated images accordingly, thereby enlarging the training set. In the disease identification stage, the enlarged disease training set is used to train a YOLO model, and the trained model is used for bridge apparent disease identification. When the training data set is small, the generative adversarial network enlarges the neural network training set and thus preserves the apparent disease identification accuracy of the YOLO model.

Description

High-precision bridge apparent disease identification method based on computer vision
Technical Field
The invention relates to a high-precision bridge apparent disease identification method based on computer vision, which is applicable to the field of health monitoring of civil and transportation structures. The method optimally extracts the disease target boundaries in the training images, enlarges the training set of the neural network model, and improves the identification accuracy of the recognition model for disease targets.
Background Art
With the rapid development of the civil transportation industry, a large number of infrastructure facilities have entered the operation and maintenance stage. As service time increases, the appearance of an engineering structure changes continuously, for example through steel corrosion, concrete cracking, or the loss of bolts at steel structure nodes. These appearance defects directly or indirectly affect the mechanical properties of the structure and may reduce its durability and even its safety, so monitoring of structural appearance defects is increasingly important and has become a key part of structural health monitoring. For appearance defect monitoring, the traditional approach based mainly on manual inspection is limited by structure height and span, suffers from poor accessibility, and involves a huge workload. In recent years, the development of computer vision and deep learning has made image acquisition, processing and recognition increasingly automated and intelligent, with strong accessibility and high accuracy; classic methods such as RCNN, Fast RCNN, SSD and YOLO have appeared, among which YOLO is one of the most advanced object detection schemes to date. Structural health monitoring methods based on computer vision and deep learning have therefore been developed.
At the present stage, apparent disease identification based on computer vision and deep learning still faces several development bottlenecks and difficulties: (1) because of the acquisition equipment, working environment, shooting angle and other factors, the acquired images are often insufficiently clear, which severely restricts the accuracy of image target identification; (2) image recognition with deep learning relies on large training data sets; to ensure accurate recognition of targets in different environments and states, a large number of original samples must be provided as the training set and the balance of the training set must be ensured, so as to improve target recognition accuracy; (3) the actual engineering environment is complex and changeable with numerous influencing factors, and how to eliminate interfering components in the acquired images is a key problem in image target identification; (4) for the disease problems of greatest interest in engineering structures, the boundaries of common diseases must be marked and samples of the highest possible precision provided for deep learning, so as to improve the disease identification accuracy of the trained model.
The generative adversarial network, a deep learning method that has emerged in recent years, enables semi-supervised learning from a small number of labeled training samples through the mutual game between a generator and a discriminator, so that high-precision simulated images are generated and the problem of acquiring a large training data set can be solved. Meanwhile, with the help of computer vision image processing techniques, the existing engineering structure disease sample images are screened and processed to generate a training set with high precision, accurate labels and wide coverage. Training a high-precision, fast YOLO neural network model on this set yields a model that accurately identifies surface diseases, which has important application value for ensuring the safe service of infrastructure and reducing irreversible structural damage caused by missed detection of appearance diseases.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects in the prior art, a high-precision bridge apparent disease identification method based on computer vision is provided, which improves the clarity of the disease image training set, optimally extracts the disease target boundaries of the training set, increases the size of the training set, and improves the disease identification accuracy of the deep learning model.
The technical scheme is as follows: in order to achieve the above purpose, the invention adopts the following technical scheme:
a high-precision bridge apparent disease identification method based on computer vision comprises the following steps:
The first step: image preprocessing stage.
Gaussian filtering and image pixel equalization are performed on the disease images in the disease image training set: the Gaussian filtering removes local abnormal noise points, and the image pixel equalization merges pixels with small proportions, effectively increasing the spread of the pixel gray-level distribution and improving the clarity of the input training images.
The second step: simulated image generation stage
The established generative adversarial network is trained with the preprocessed disease image training set. The generative adversarial network consists of a generator and a discriminator: the discriminator is trained to judge whether an image is real or fake, and the generator is trained to generate simulated images. First the generator is fixed and the discriminator is trained; then the discriminator is fixed and the generator is trained. When the discriminator judges that the probability distribution and the frequency-domain distribution of the images produced by the generator are consistent with those of the training set, the generator is determined and a simulated disease image set is generated; otherwise, the generator continues to be trained.
The third step: apparent disease identification stage
(1) The disease boundary labels of the preprocessed disease image training set and the simulated disease image set are optimized: the disease target position in each disease image is box-selected by manual visual inspection as the initial label; within the selected box, a Canny operator and a Sobel operator are jointly used to automatically optimize and identify the disease boundary, which serves as the training label. The preprocessed disease image training set and the simulated disease images with training labels form a YOLO sample set, which is used as the YOLO training set and the YOLO test set respectively.
(2) The YOLO training set is input into the YOLO neural network model for training. After training, the YOLO test set is used to test the identification accuracy of the model for disease targets; if the accuracy meets the correct-recognition-rate threshold, the model is considered qualified and the currently trained YOLO neural network model is the disease identification model; otherwise, additional disease images are supplemented and the first to third steps are repeated until the correct recognition rate meets the threshold requirement.
Further, the Gaussian filtering step is as follows (a code sketch is given after these steps):
The first step: the disease images in the disease image training set are converted into the HSV color model, where H is hue, S is saturation and V is lightness;
The second step: convolution is performed on the H, S and V channels with Gaussian convolution kernels respectively;
The third step: the corresponding weight coefficients are adjusted according to the actual saturation and lightness requirements of the original disease image, and the disease image is converted back into the RGB color model through the conversion formula.
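For illustration, the following is a minimal sketch of this HSV-domain Gaussian filtering using OpenCV; the kernel size, per-channel standard deviations and the S/V weight coefficients are illustrative assumptions rather than values prescribed by the invention.

```python
# Minimal sketch of the HSV-domain Gaussian filtering step.
# Assumed parameters: kernel size, per-channel sigmas, S/V weights.
import cv2
import numpy as np

def hsv_gaussian_filter(bgr_image, ksize=(5, 5), sigmas=(1.0, 1.0, 1.0),
                        s_weight=1.0, v_weight=1.0):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = [c.astype(np.float32) for c in cv2.split(hsv)]
    # Convolve each channel with its own Gaussian kernel
    h = cv2.GaussianBlur(h, ksize, sigmas[0])
    s = cv2.GaussianBlur(s, ksize, sigmas[1]) * s_weight
    v = cv2.GaussianBlur(v, ksize, sigmas[2]) * v_weight
    # Clip to valid HSV ranges and convert back to the RGB/BGR color model
    hsv_filtered = cv2.merge([np.clip(h, 0, 179),
                              np.clip(s, 0, 255),
                              np.clip(v, 0, 255)])
    return cv2.cvtColor(hsv_filtered.astype(np.uint8), cv2.COLOR_HSV2BGR)
```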
Further, the image equalization step is as follows (a code sketch is given after these steps):
The first step: the Gaussian-filtered RGB disease image is converted into a gray image;
The second step: the gray image histogram is computed, and gray values are grouped and merged according to their probability density function;
The third step: the ratio of the gray value after merging to the gray value before merging is taken as the equalization coefficient, which is multiplied by the R, G and B channel values at the corresponding pixel positions to reconstruct the disease image.
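For illustration, the following is one possible reading of this equalization step. The exact grouping and merging rule is not specified here; in this sketch, gray levels whose probability falls below a threshold are merged into the nearest frequent gray level (an assumption), and the ratio of merged to original gray value is applied to the R, G and B channels.

```python
# Minimal sketch of the pixel-equalization step. The merging rule (quantizing
# low-probability gray levels to the nearest frequent level) is an assumption.
import cv2
import numpy as np

def equalize_by_merging(bgr_image, min_prob=0.001):
    # Gray value per pixel: Gray = 0.299*R + 0.587*G + 0.114*B (OpenCV stores BGR)
    gray = (0.299 * bgr_image[..., 2] + 0.587 * bgr_image[..., 1]
            + 0.114 * bgr_image[..., 0]).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    frequent = np.flatnonzero(prob >= min_prob)  # gray levels kept as-is
    if frequent.size == 0:
        return bgr_image.copy()
    # Map every gray level to its nearest frequent level (the "merging")
    mapping = frequent[np.abs(np.arange(256)[:, None] - frequent[None, :]).argmin(axis=1)]
    merged = mapping[gray]
    # Equalization coefficient = merged gray value / original gray value
    coeff = np.where(gray > 0, merged / np.maximum(gray, 1), 1.0)
    out = np.clip(bgr_image.astype(np.float64) * coeff[..., None], 0, 255)
    return out.astype(np.uint8)
```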
Furthermore, on the basis of the disease boundary identified by the Canny operator, the Sobel operator uses a difference method to obtain the optimal disease boundary, which is used as the training label.
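For illustration, a minimal sketch of one way to combine the two operators inside a manually selected box is given below: the Canny operator proposes candidate edge pixels, and the Sobel gradient magnitude (the difference method) retains only the pixels with strong gradients. The thresholds and the retention rule are assumptions; the invention does not fix them.

```python
# Minimal sketch of the joint Canny + Sobel boundary extraction inside a selected box.
# gray_roi: 8-bit grayscale ROI cropped from the manually selected bounding box.
import cv2
import numpy as np

def refine_defect_boundary(gray_roi, canny_lo=50, canny_hi=150, grad_ratio=0.5):
    edges = cv2.Canny(gray_roi, canny_lo, canny_hi)          # candidate edge pixels
    gx = cv2.Sobel(gray_roi, cv2.CV_64F, 1, 0, ksize=3)      # horizontal difference
    gy = cv2.Sobel(gray_roi, cv2.CV_64F, 0, 1, ksize=3)      # vertical difference
    mag = np.hypot(gx, gy)
    strong = mag >= grad_ratio * mag.max()                   # keep strong-gradient pixels
    return np.where((edges > 0) & strong, 255, 0).astype(np.uint8)
```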
The invention has the following beneficial effects:
(1) The disease image training set is preprocessed with Gaussian filtering and image pixel equalization, which effectively removes local abnormal noise points, merges pixels, and greatly improves the clarity of the training set images;
(2) The multistage edge detection of the Canny operator and the discrete difference algorithm of the Sobel operator are used jointly to optimally extract the image disease boundary, improving the accuracy of the weight coefficients during training of the YOLO neural network model;
(3) The invention exploits the image simulation capability of the generative adversarial network by training the generator to produce a large number of simulated disease images, solving the problem of an insufficient disease image data set during training and thereby effectively improving the disease identification accuracy of the trained neural network model; the method therefore has good application prospects.
Drawings
FIG. 1 is a flow chart of a high-precision bridge apparent disease identification method based on computer vision;
FIG. 2 is a flow chart of Gaussian filtering during image pre-processing;
FIG. 3 is a flow chart of image pixel equalization during an image pre-processing phase;
FIG. 4 is a diagram of generator training termination conditions;
Detailed Description
The technical solution of the present invention will be further described in detail with reference to the accompanying drawings.
As shown in fig. 1, the implementation of the method of the present invention is described in detail below, taking the identification of high-strength bolt falling-off at the nodes of a long-span steel truss bridge as an example. The method mainly comprises the following steps:
(1) Preprocess the node bolt falling-off image training set. A number of steel truss bridge node images containing high-strength bolt falling-off diseases are collected, taking into account influencing factors such as shooting distance, lighting conditions and viewing angle. Gaussian filtering and image pixel equalization are performed on the color model of the bolt falling-off images: the Gaussian filtering removes local abnormal noise points, and the image equalization merges pixels with small proportions, effectively increasing the spread of the pixel gray-level distribution and improving the clarity of the input training images;
(2) Train the discriminator with the node bolt falling-off image training set to discriminate whether generated images are real. The discriminator is essentially a binary classifier: for the "new samples" formed by merging the real high-strength bolt disease sample library P_data(x) and the generated sample library P_z(z), the discriminator uses the difference in probability scores to preliminarily screen out unqualified generated samples. The training objective can be expressed as:

min_G max_D V(D, G) = E_{x~P_data(x)}[log D(x)] + E_{z~P_z(z)}[log(1 - D(G(z)))]

where V(D, G) is the objective value optimized by the discriminator D and the generator G, E_{x~P_data(x)}[·] denotes the expectation over the real sample library P_data(x), E_{z~P_z(z)}[·] denotes the expectation over the generated sample library P_z(z), D(x) is the output of the discriminator D for a sample x from the real library, and D(G(z)) is the output of the discriminator D for the sample generated by the generator G from input z.
(3) Train the generator with the node bolt falling-off image training set. During training the generator continuously evolves and produces images ever closer to the real samples, reducing the difference between the generated bolt falling-off images and the real bolt images. When the discriminator judges that the probability density distribution and the frequency-domain distribution of the images produced by the generator are consistent with those of the training set, the generator is determined, as shown in FIG. 4, and a certain number of simulated bolt falling-off images are generated; otherwise, the generator continues to be trained.
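For illustration, a minimal PyTorch-style sketch of this alternating training (fix the generator and train the discriminator, then fix the discriminator and train the generator) is given below. The network architectures, latent dimension, learning rate and the assumption that the discriminator ends in a sigmoid producing a (batch, 1) probability are illustrative choices, not part of the invention.

```python
# Minimal sketch of alternating GAN training: fix G, update D; then fix D, update G.
# Assumptions: real_loader yields batches of image tensors; discriminator outputs
# a (batch, 1) probability via a final sigmoid.
import torch
import torch.nn as nn

def train_gan(generator, discriminator, real_loader, latent_dim=100,
              epochs=100, lr=2e-4, device="cpu"):
    bce = nn.BCELoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
    ones = lambda n: torch.ones(n, 1, device=device)
    zeros = lambda n: torch.zeros(n, 1, device=device)
    for _ in range(epochs):
        for real in real_loader:            # batches of real bolt falling-off images
            real = real.to(device)
            n = real.size(0)
            # Fix G, train D: push D(x) toward 1 and D(G(z)) toward 0
            fake = generator(torch.randn(n, latent_dim, device=device)).detach()
            loss_d = bce(discriminator(real), ones(n)) + bce(discriminator(fake), zeros(n))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()
            # Fix D, train G: push D(G(z)) toward 1 (non-saturating form of the objective)
            fake = generator(torch.randn(n, latent_dim, device=device))
            loss_g = bce(discriminator(fake), ones(n))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
    return generator
```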
(4) Optimize the boundary labels of the training set formed by the node bolt falling-off images and the simulated bolt falling-off images. First, the bounding-box coordinates and categories of the bolt falling-off holes at the high-strength bolt nodes are marked manually on the images of the training set as initial labels. Then, within the selected box, a Canny operator and a Sobel operator are jointly used to automatically identify the boundary of the hole left by the fallen bolt, which serves as the training label for bolt falling-off;
(5) Input the bolt falling-off training set with training labels into the YOLO neural network model for training. After training, a certain number of verification data sets are selected to test the recognition accuracy of the model for the disease target, with the correct-recognition-rate threshold set to 95%. If the correct recognition rate exceeds 95%, the trained recognition model is considered qualified; otherwise, additional node bolt falling-off images are photographed and steps (1) to (5) are repeated until the correct recognition rate meets the threshold requirement, at which point the training of the recognition model is complete and the accuracy requirement is satisfied.
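For illustration, the control flow of this train-test-supplement loop can be sketched as follows; train_fn, accuracy_fn and collect_more_fn are hypothetical callables standing in for the actual YOLO training, evaluation and re-photographing procedures, which are not prescribed in code here.

```python
# Minimal control-flow sketch of step (5): train, test against the 95% threshold,
# and supplement the image set if the threshold is not met.
# train_fn, accuracy_fn and collect_more_fn are hypothetical placeholders.
ACCURACY_THRESHOLD = 0.95

def build_recognition_model(train_set, test_set, train_fn, accuracy_fn, collect_more_fn):
    while True:
        model = train_fn(train_set)
        if accuracy_fn(model, test_set) >= ACCURACY_THRESHOLD:
            return model  # qualified disease recognition model
        # Otherwise photograph additional node bolt falling-off images and repeat (1)-(5)
        train_set, test_set = collect_more_fn(train_set, test_set)
```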
As shown in fig. 2, the principle of Gaussian filtering on the bolt falling-off image training set is to convert the images in the training set into the HSV color model, in which the H channel represents hue, the S channel represents saturation and the V channel represents lightness. Convolution is performed on the H, S and V channels with Gaussian convolution kernels whose standard deviations are chosen as required. According to the characteristics of bolt falling-off and the actual identification requirements, the weights of the S and V channels are adjusted, the three adjusted channels are converted back into the RGB color model, and the filtered high-strength bolt falling-off training set images are generated.
As shown in fig. 3, the bolt falling-off training set images after Gaussian filtering need to be converted into gray images for image pixel equalization; the gray value of each pixel is calculated as:
Gray = R*0.299 + G*0.587 + B*0.114 (1)
where the RGB image is a true-color image, R, G and B represent the three basic colors red, green and blue respectively, and Gray is the gray value.
The gray image histogram is computed, and gray values are grouped and merged according to their probability density function; the ratio of the gray value of a pixel after merging to its gray value before merging is taken as the equalization coefficient, which is multiplied by the R, G and B channel values at the corresponding pixel position to reconstruct the image.
As shown in fig. 4, the generator training termination condition is as follows. The simulated disease images produced by the generator and the images of the training set are converted into gray images, their gray-level histograms are computed, and probability density curves are fitted; meanwhile, both sets of images are transformed into the frequency domain by Fourier transform and their frequency-domain distributions are compared. If the probability density distributions and the frequency-domain distributions are consistent, the generator is determined and can generate simulated disease images to be used as a training set for training the YOLO neural network model.
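For illustration, the termination check of FIG. 4 can be sketched as follows: the averaged gray-level probability density and the averaged Fourier-magnitude distribution of the generated images are compared with those of the training set. The L1 distance and the tolerance values are assumptions; the invention only requires the two distributions to be consistent.

```python
# Minimal sketch of the generator termination check: compare gray-level probability
# densities and Fourier-magnitude distributions of generated vs. training images.
# The L1-distance measure and tolerances are illustrative assumptions.
import cv2
import numpy as np

def gray_pdf(images):
    hist = np.zeros(256, dtype=np.float64)
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        hist += np.bincount(gray.ravel(), minlength=256)
    return hist / hist.sum()

def mean_spectrum(images, size=(256, 256)):
    spec = np.zeros(size, dtype=np.float64)
    for img in images:
        gray = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), size)
        spec += np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    return spec / spec.sum()

def distributions_consistent(generated, training, pdf_tol=0.05, spec_tol=0.05):
    pdf_gap = np.abs(gray_pdf(generated) - gray_pdf(training)).sum()
    spec_gap = np.abs(mean_spectrum(generated) - mean_spectrum(training)).sum()
    return pdf_gap < pdf_tol and spec_gap < spec_tol
```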
The above description is only a preferred embodiment of the present invention. It should be noted that various modifications can be made by those skilled in the art without departing from the principles of the invention, and such modifications are also to be considered within the scope of the invention.

Claims (4)

1. A high-precision bridge apparent disease identification method based on computer vision, characterized by comprising the following steps:
The first step: image preprocessing stage
Gaussian filtering and image pixel equalization are carried out on the disease images in the disease image training set to complete the preprocessing of the disease image training set;
The second step: simulated image generation stage
The established generative adversarial network, which consists of a generator and a discriminator, is trained with the preprocessed disease image training set; during training, the generator is first fixed and the discriminator is trained, and then the discriminator is fixed and the generator is trained; when the discriminator judges that the probability distribution and the frequency-domain distribution of the images produced by the generator are consistent with those of the preprocessed disease image training set, the generator is determined and a simulated disease image set is generated, otherwise the generator continues to be trained;
The third step: apparent disease identification stage
(1) The disease target positions in the disease images of the preprocessed disease image training set and of the simulated disease image set are box-selected by manual visual inspection as initial labels; within the selected boxes, a Canny operator and a Sobel operator are jointly used to automatically optimize and identify the disease boundaries, which serve as training labels; the preprocessed disease image training set and the simulated disease images with training labels form a YOLO sample set, which is used as the YOLO training set and the YOLO test set respectively;
(2) A YOLO neural network model is trained with the YOLO training set and tested with the YOLO test set; if the correct recognition rate of the trained YOLO neural network model for disease targets is greater than the set threshold, the currently trained YOLO neural network model is the disease recognition model; otherwise, the disease images in the disease image training set are supplemented and the first to third steps are repeated until the correct recognition rate is greater than the set threshold.
2. The high-precision bridge apparent disease identification method based on computer vision of claim 1 is characterized in that in the first step, the Gaussian filtering step is as follows:
the first step is as follows: converting the disease images in the disease image training set from an RGB color model into an HSV color model, wherein H is hue, S is saturation and V is lightness;
the second step: performing convolution calculation on the H channel, the S channel and the V channel by adopting Gaussian convolution kernels respectively;
the third step: adjusting the weighting coefficients of an S channel and a V channel according to the actual needs of the saturation and lightness of the original disease image; and converting the disease image of the HSV color model into an RGB color model through a conversion formula.
3. The high-precision bridge apparent disease identification method based on computer vision, characterized in that in the first step, the image equalization step is as follows:
the first step is as follows: converting the disease image after Gaussian filtering into a gray image;
the second step: counting a grey image histogram, and grouping and merging grey values according to a probability density function;
the third step: and (4) taking the gray value ratio of the pixel points before and after merging as an equalization coefficient, multiplying the equalization coefficient by the RGB three-channel colors at the corresponding pixel point positions respectively, and reconstructing a disease image.
4. The method for identifying the apparent diseases of the bridge with high precision based on the computer vision of claim 1, wherein in the third step, the Canny operator and the Sobel operator are jointly applied to automatically optimize and identify the disease boundaries specifically as follows: the Sobel operator obtains the optimal disease boundary by using a difference method on the basis that the Canny operator identifies the disease boundary, and the optimal disease boundary is used as a training label.
CN202010717371.7A 2020-07-23 2020-07-23 High-precision bridge apparent disease identification method based on computer vision Active CN111985499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010717371.7A CN111985499B (en) 2020-07-23 2020-07-23 High-precision bridge apparent disease identification method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010717371.7A CN111985499B (en) 2020-07-23 2020-07-23 High-precision bridge apparent disease identification method based on computer vision

Publications (2)

Publication Number Publication Date
CN111985499A CN111985499A (en) 2020-11-24
CN111985499B true CN111985499B (en) 2022-11-04

Family

ID=73438856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010717371.7A Active CN111985499B (en) 2020-07-23 2020-07-23 High-precision bridge apparent disease identification method based on computer vision

Country Status (1)

Country Link
CN (1) CN111985499B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613097A (en) * 2020-12-15 2021-04-06 中铁二十四局集团江苏工程有限公司 BIM rapid modeling method based on computer vision
CN113313107B (en) * 2021-04-25 2023-08-15 湖南桥康智能科技有限公司 Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN113112498B (en) * 2021-05-06 2024-01-19 东北农业大学 Grape leaf spot identification method based on fine-grained countermeasure generation network
CN113627299B (en) * 2021-07-30 2024-04-09 广东电网有限责任公司 Wire floater intelligent recognition method and device based on deep learning
CN116682072B (en) * 2023-08-04 2023-10-20 四川公路工程咨询监理有限公司 Bridge disease monitoring system
CN117067226A (en) * 2023-08-17 2023-11-17 兰州交通大学 Steel bridge rust detection robot and rust detection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382601A (en) * 2018-12-28 2020-07-07 河南中原大数据研究院有限公司 Illumination face image recognition preprocessing system and method for generating confrontation network model
CN110188824B (en) * 2019-05-31 2021-05-14 重庆大学 Small sample plant disease identification method and system
CN111191714A (en) * 2019-12-28 2020-05-22 浙江大学 Intelligent identification method for bridge appearance damage diseases

Also Published As

Publication number Publication date
CN111985499A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111985499B (en) High-precision bridge apparent disease identification method based on computer vision
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN110543878B (en) Pointer instrument reading identification method based on neural network
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
CN108021938A (en) A kind of Cold-strip Steel Surface defect online detection method and detecting system
CN109840471A (en) A kind of connecting way dividing method based on improvement Unet network model
CN105975913B (en) Road network extraction method based on adaptive cluster learning
CN109741328A (en) A kind of automobile apparent mass detection method based on production confrontation network
CN110148162A (en) A kind of heterologous image matching method based on composition operators
CN109214298A (en) A kind of Asia women face value Rating Model method based on depth convolutional network
CN110400293B (en) No-reference image quality evaluation method based on deep forest classification
CN113409314A (en) Unmanned aerial vehicle visual detection and evaluation method and system for corrosion of high-altitude steel structure
CN108681689B (en) Frame rate enhanced gait recognition method and device based on generation of confrontation network
CN106951863B (en) Method for detecting change of infrared image of substation equipment based on random forest
CN110909657A (en) Method for identifying apparent tunnel disease image
CN112597798A (en) Method for identifying authenticity of commodity by using neural network
CN111260645B (en) Tampered image detection method and system based on block classification deep learning
CN114972216A (en) Construction method and application of texture surface defect detection model
CN109598681A (en) The reference-free quality evaluation method of image after a kind of symmetrical Tangka repairs
CN116029979A (en) Cloth flaw visual detection method based on improved Yolov4
CN115457551A (en) Leaf damage identification method suitable for small sample condition
CN109472790A (en) A kind of machine components defect inspection method and system
CN113421223B (en) Industrial product surface defect detection method based on deep learning and Gaussian mixture
CN114155226A (en) Micro defect edge calculation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant