CN109801292A - Asphalt highway crack image segmentation method based on a generative adversarial network
- Publication number: CN109801292A (application number CN201811508604.1A)
- Authority: CN (China)
- Prior art keywords: net, model, image, crack, gan
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Analysis (AREA)
Abstract
The invention discloses an asphalt highway crack image segmentation method based on a generative adversarial network (GAN). The U-Net, CU-Net, and FU-Net networks serve as the generator model G of the GAN; each of the three generator networks (U-Net, CU-Net, and FU-Net) is combined with an identical binary-classification discriminator model D (Discriminative) to form the U-GAN, CU-GAN, and FU-GAN models. The generator and discriminator are trained through mutual iterative adversarial optimization, and the trained generator models U-Net, CU-Net, and FU-Net are finally used as crack image segmenters, realizing segmentation of complex asphalt highway crack images. Compared with the prior art, the method requires a smaller training data set, achieves higher crack segmentation accuracy, and attains higher precision and recall.
Description
Technical field
The present invention relates to the technical field of asphalt highway crack image segmentation, and in particular to a method for segmenting cracks in asphalt highway images using a deep-learning generative adversarial network (GAN).
Background technique
With economic development, highway infrastructure plays an increasingly important role in national economic growth. Regular highway maintenance and management reduce maintenance costs and traffic accidents, and pavement crack detection is an important component of highway maintenance and management. Highway crack image segmentation is a key technology for automated pavement defect detection in highway monitoring and management, and many researchers have studied crack segmentation using digital image processing techniques.
Because highway crack images suffer from uneven illumination, random noise, blurred gray levels, and random interference such as road markings and oil stains, crack segmentation and extraction are difficult. Early crack detection relied mainly on traditional image processing methods. Methods such as InMM, GaMM, and Morph have been applied to highway crack segmentation, but their parameter settings differ for images with different gray-level characteristics, so these algorithms generalize poorly. Methods based on gray-level threshold segmentation struggle in practice with images that have blurred gray levels or uneven illumination. The image pixel space projection method first applies a grayscale transform and image smoothing to the image, then processes the crack image with mathematical morphology (erosion and dilation, morphological opening and closing), and finally separates the crack from the background by image projection. This method mainly suits cases where the crack region and the background noise are clearly distinguishable; when the crack region resembles the background noise, the projected crack map contains much noise. The region growing algorithm clusters pixels with different gray levels into different belief regions (ROB), assigns each belief region a confidence factor, takes one belief region as the seed and starting point, determines the search range according to specially designed confidence-factor rules, and searches for and merges regions according to a similarity criterion; it can segment the cracks in images with sparse background noise, but performs poorly on images with uneven illumination or blurred crack edges. The minimal path selection algorithm searches for crack paths by reducing loops and irrelevant shadows in crack detection, while also estimating crack pixel width; it easily mistakes noise for crack paths when noise and cracks are closely similar, so it applies only when noise is weak.
In recent years, deep learning has shown strong generalization ability and robustness in extracting global and salient image features. A common approach divides the complete crack image into equal-sized sub-images and uses a deep convolutional neural network (CNN) to classify each sub-image as crack or non-crack. CNN-based classification models extract image features well, but they require a large amount of labeled data and demand high labeling accuracy. In practice, owing to annotator variability, a blurred crack may be labeled as a crack region in one image but as a normal region in another; with ambiguous or inconsistent labels, the model's precision can be high while its recall is low, and when the labels are poor the model may misclassify some normal regions as crack regions.
Convolutional neural networks are widely used in machine vision owing to their powerful image feature extraction. The U-Net network is one of the most widely used semantic segmentation models in deep learning: it concatenates corresponding layers of a compression path and an expansion path and finally classifies each pixel, achieving high-precision semantic segmentation even when trained on very small data sets; U-Net has been widely applied to medical image segmentation and natural scene semantic segmentation. A generative adversarial network (GAN) consists of two mutually competing networks, a generator (Generative) and a discriminator (Discriminator): the discriminator's goal is to distinguish generator outputs from real images, while the generator's goal is to produce images so similar to real ones that they fool the discriminator. The adversarial game idea of GANs is also widely used in image segmentation, achieving high segmentation accuracy in wood surface flaw inspection, retinal vessel segmentation, and natural scene image segmentation.
Combining U-Net's ability to achieve high-precision segmentation on small data sets with a GAN's ability to generate images that follow the real data distribution, the present invention improves the U-Net network, producing the CU-Net and FU-Net networks with different cross-layer connection schemes between convolutional layers. The U-Net network and the improved CU-Net and FU-Net networks serve as GAN generators, while a common binary-classification network serves as the discriminator, forming the U-GAN, CU-GAN, and FU-GAN networks. After adversarial training of the generator against the discriminator, the resulting generator network can be used as a segmenter for highway crack images.
Summary of the invention
In view of the problems of existing asphalt highway crack image segmentation methods, the object of the present invention is to provide a method that uses a deep-learning generative adversarial network (GAN) to segment cracks in asphalt highway images. This object is realized by the following means.
An asphalt highway crack image segmentation method based on a generative adversarial network uses the image generation models of a deep-learning framework, with the U-Net, CU-Net, and FU-Net networks as the generator model G (Generative) of the GAN; each of the three generator networks (U-Net, CU-Net, and FU-Net) comprised in the generator model G is combined with an identical binary-classification discriminator model D (Discriminative) to form the U-GAN, CU-GAN, and FU-GAN models. Through mutual iterative adversarial optimization of the generator and discriminator, the trained generator models U-Net, CU-Net, and FU-Net finally serve as crack image segmenters, realizing segmentation of complex asphalt highway crack images. The method comprises the following main steps:
Step 1, data set preprocessing
Step 1.1, build the model data set: collect asphalt highway crack images and randomly select several images to form the original data set; manually annotate the crack regions on the original data set, marking each crack region black according to its actual size in the image and labeling the remaining background white; the re-annotated data derived from the original data set form the target data set.
Step 1.2, image data augmentation (image data generation): apply identical random rotation, random flipping, and random translation transforms to the original data set and the target data set of step 1.1, obtaining the model training data set and the model target data set respectively.
Step 2, train the U-GAN, FU-GAN, and CU-GAN models
Step 2.1, model parameter configuration: let S be the total size of the model training data set obtained in step 1.2. Set N epochs for training, i.e. the model is trained iteratively N times, and choose a batch size K per epoch, so S/K training passes are needed in each iteration with K images participating each time. Model training uses the Adam optimizer for loss optimization.
Step 2.2, train the three discriminator models D: feed the model training data set of step 1.2 to the U-Net, CU-Net, and FU-Net models respectively; untrained, the U-Net, CU-Net, and FU-Net models each generate an easily recognizable fake image Fake image I. Use Fake image I together with the corresponding real images Real image from the model target data set of step 1.2 as the input of discriminator model D, and train discriminator model D to distinguish the fake images Fake image I from the corresponding real images Real image in the model target data set.
Step 2.3, train the generator model G, comprising the three models U-Net, CU-Net, and FU-Net: with the parameters of the three discriminator models D of step 2.2 fixed, feed the model training data set of step 1.2 to the U-Net, CU-Net, and FU-Net networks respectively, and train the generators U-Net, CU-Net, and FU-Net to generate fake images, denoted Fake image II, similar to the model target data set of step 1.2; the fake images Fake image II are binary images, and the generator models U-Net, CU-Net, and FU-Net optimize their loss with a binary cross-entropy loss function.
Step 2.4, iterative training: repeat steps 2.2 and 2.3; after all epochs are completed, save the U-Net, CU-Net, and FU-Net models and their parameter information.
Step 3, test process: asphalt highway crack image segmentation
Step 3.1: load the U-Net, CU-Net, and FU-Net models and parameter information saved in step 2.4.
Step 3.2: input the asphalt highway crack image to be inspected into the generator model G (Generative), comprising the three models U-Net, CU-Net, and FU-Net; the generator model G generates the corresponding crack map Crack image.
Step 3.3: apply image erosion to the crack map Crack image with a 5 × 5 kernel, then remove crack regions whose area is below a preset value by connected-component analysis, finally obtaining the completed segmented crack map.
The positive effects of the present invention are:
1. The improved FU-Net model realizes cross-layer connections between convolutional layers of different output sizes, avoiding information loss when image information passes between network layers of different sizes. The improved CU-Net model uses the same cross-layer connection scheme as the U-Net model, but adds one extra cross-layer connection path between convolutional layers of identical output size, compensating for information loss when image information passes between network layers of the same size.
2. The generator models G (U-Net, CU-Net, and FU-Net) of the U-GAN, CU-GAN, and FU-GAN models classify every pixel of the image; compared with classifying sub-images with a convolutional neural network, they require a smaller training data set and achieve higher crack segmentation accuracy.
3. The U-GAN, FU-GAN, and CU-GAN models proposed by the present invention can be extended to fields such as medical image segmentation, natural scene image segmentation, and image generation; the models have good generalization ability and robustness.
Detailed description of the invention
Fig. 1 is the crack segmentation workflow. Panel (a) shows the data preprocessing and model training process; panel (b) shows the model test process.
Fig. 2 is the U-Net generator network structure. The U-Net network is a fully convolutional neural network (CNN) composed mainly of convolution layers (Convolution layer) and pooling layers (Pool layer). The network consists of a feature compression path and a feature expansion path; cross-layer connections (Cross-layer connection) between the two paths combine into feature fusion layers (Merged layer). In both the feature compression path and the feature expansion path, image information is extracted by convolution operations (Convolution operation), and pooling operations (Pool operation) perform feature dimensionality reduction.
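As a rough illustration of the compression/expansion bookkeeping described for Fig. 2 (a sketch with an assumed depth and input size, not the patent's exact layer configuration), the spatial sizes at which U-Net-style cross-layer connections merge can be traced as follows:

```python
# Sketch: trace feature-map sizes through a U-Net-style network with
# same-size convolutions and 2x2 pooling, and check that each merged
# (cross-layer) pair has matching spatial size. Depth and input size
# are illustrative assumptions, not values from the patent.

def unet_merge_sizes(input_size, depth):
    # Feature compression path: record the size before each 2x2 pooling.
    sizes = []
    s = input_size
    for _ in range(depth):
        sizes.append(s)      # output size of the conv block at this level
        s //= 2              # 2x2 pooling halves the spatial size
    # Feature expansion path: upsample and merge with the stored size.
    merges = []
    for stored in reversed(sizes):
        s *= 2               # upsampling doubles the spatial size
        merges.append((stored, s))  # cross-layer connection pair
    return merges

merges = unet_merge_sizes(256, depth=4)
for compressed, expanded in merges:
    assert compressed == expanded  # merged layers must match in size
```

The symmetry of pooling (halving) and upsampling (doubling) is what allows each expansion layer to be concatenated with a compression layer of identical spatial size.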
Fig. 3 is the CU-Net generator network structure. CU-Net differs from U-Net in having one additional cross-layer connection (Cross-layer connection) path.
Fig. 4 is the FU-Net generator network structure. FU-Net downsamples the output of the last convolution layer (Convolution layer) of each cross-layer path in the feature compression path along multiple routes, and fuses each downsampled layer with the first convolution layer of the corresponding lower level of the feature expansion path, forming feature fusion layers (Merged layer); the FU-Net network thus realizes fusion between convolution layers of different sizes.
Fig. 5 shows the FU-Net method for connecting convolution layers of different sizes. The fusion of convolution layers uses different kernel sizes (f) and strides (s). For example, if the source convolution layer has size n × n and the target convolution layer has size m × m, then f = n/m and s = n/m.
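The size relation in Fig. 5 can be checked arithmetically. The sketch below assumes a valid (unpadded) convolution with the standard output-size formula ⌊(n − f)/s⌋ + 1, and verifies that kernel size f = n/m and stride s = n/m map an n × n layer to m × m:

```python
# Sketch: verify that a convolution with kernel f = n/m and stride
# s = n/m (no padding assumed) maps an n x n feature map to m x m,
# matching the fusion rule of Fig. 5.

def conv_out_size(n, f, s):
    # Standard output-size formula for a valid (unpadded) convolution.
    return (n - f) // s + 1

def fusion_params(n, m):
    # Kernel size and stride used to fuse an n x n source layer into
    # an m x m target layer (requires m to divide n).
    assert n % m == 0
    f = n // m
    s = n // m
    return f, s

n, m = 64, 16
f, s = fusion_params(n, m)
assert conv_out_size(n, f, s) == m  # 64 x 64 -> 16 x 16 with f = s = 4
```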
Fig. 6 shows the segmentation results of different crack image segmentation methods on the test set of the AigleRN data set. Panel (a) is the original image; (b) is the crack reference (ground-truth) map; (c) is the segmentation result of the NGU method; (d) of the MPS method; (e) of the CrackForest method; (f) of the U-Net method; (g) of the FU-Net method; (h) of the CU-Net method; (i) of the U-GAN method; (j) of the FU-GAN method; and (k) of the CU-GAN method.
Fig. 7 shows the segmentation results of different crack image segmentation methods on the test set of the CFD data set. Panel (a) is the original image; (b) is the crack reference map; (c) is the segmentation result of the Canny method; (d) of the Threshold method; (e) of the CrackForest method; (f) of the U-Net method; (g) of the FU-Net method; (h) of the CU-Net method; (i) of the U-GAN method; (j) of the FU-GAN method; and (k) of the CU-GAN method.
Fig. 8 shows the segmentation results of different crack image segmentation methods on the test set of the HTR data set. Panel (a) is the original image; (b) is the crack reference map; (c) is the segmentation result of the U-Net method; (d) of the FU-Net method; (e) of the U-GAN method; and (f) of the FU-GAN method.
Specific embodiment
The implementation steps are as follows:
Step 1, data set pretreatment
Step 1.1: build the model data set. The collected asphalt highway crack images have a size of 3040 pixels × 2048 pixels; 100 images are randomly selected to form the original data set. Crack regions are manually annotated on the original data set: each crack region is marked black according to its actual size in the image, and the remaining background is labeled white; the re-annotated data derived from the original data set form the target data set.
Step 1.2: image data augmentation (image data generation). Because the original data set of step 1.1 contains little data, image data augmentation is required before model training. Before training, identical random rotation, random flipping, and random translation transforms are applied to the original data set and the target data set of step 1.1, yielding the model training data set and the model target data set respectively.
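A minimal sketch of the paired augmentation in step 1.2 (using numpy; the particular transform parameters below, such as 90-degree rotations and a ±10-pixel translation, are illustrative assumptions, not the patent's exact settings): the key point is that the image and its target mask must be transformed with the same randomly sampled parameters so that they stay aligned.

```python
import numpy as np

# Sketch: apply the SAME randomly sampled rotation / flip / translation
# to a road image and its annotated crack mask, as in step 1.2.
# Rotations are restricted to 90-degree multiples here for simplicity;
# this is an illustrative assumption, not the patent's transform set.

def augment_pair(image, mask, rng):
    k = rng.integers(0, 4)             # random rotation: k * 90 degrees
    flip = rng.integers(0, 2)          # random horizontal flip
    dy, dx = rng.integers(-10, 11, 2)  # random translation offsets
    def t(a):
        a = np.rot90(a, k)
        if flip:
            a = a[:, ::-1]
        return np.roll(a, (dy, dx), axis=(0, 1))
    return t(image), t(mask)

rng = np.random.default_rng(0)
image = np.arange(64 * 64, dtype=float).reshape(64, 64)
mask = (image % 7 == 0).astype(np.uint8)
aug_img, aug_mask = augment_pair(image, mask, rng)
# Identical parameters keep image and mask aligned pixel-for-pixel:
assert np.array_equal(aug_mask, (aug_img % 7 == 0).astype(np.uint8))
```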
Step 2, train the U-GAN, FU-GAN, and CU-GAN models.
Step 2.1: model parameter configuration. The U-GAN, FU-GAN, and CU-GAN models each contain a generator model G (Generative) and a discriminator model D (Discriminative); the generator model G (Generative) adopts the U-Net, CU-Net, and FU-Net network models respectively, and the three discriminator models D (Discriminative) all use a binary classification neural network. Training is set to 150 epochs (i.e. 150 training iterations), with a batch size of 1 per epoch (100 training passes per iteration, one image participating in each pass); model training uses the Adam optimizer (learning rate 10⁻⁴) for loss optimization.
Step 2.2: train the three discriminator models D (Discriminative). The model training data set of step 1.2 is fed to the U-Net, CU-Net, and FU-Net models respectively; untrained, each model generates an easily recognizable fake image (denoted Fake image I). Fake image I and the corresponding real images (Real image) from the model target data set of step 1.2 are used together as the input of discriminator model D (Discriminative), which is trained to distinguish the fake images (Fake image I) from the corresponding real images (Real image) in the model target data set. The discriminator model D (Discriminative) aims to minimize the loss function

L_D = -(1/N) Σ_{n=1}^{N} [log a(x_n, y_n) + log(1 - a(x_n, G(x_n)))]   (1)

where a(x_n, y_n) denotes D's (Discriminative) predicted probability that a picture is a real picture (Real image) rather than a fake picture (Fake image I), G(x_n) denotes the picture generated by the generator G (Generative), N is the total number of samples, and n is the sample index.
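Under the assumption that the discriminator loss of step 2.2 has the standard conditional-GAN binary cross-entropy form consistent with the definitions above, a minimal numpy sketch of L_D is:

```python
import numpy as np

# Sketch: the discriminator objective of step 2.2 as a binary
# cross-entropy over real pairs (x_n, y_n) and fake pairs (x_n, G(x_n)).
# a_real / a_fake stand for the probabilities a(x_n, y_n) and
# a(x_n, G(x_n)) output by discriminator D; the values are illustrative.

def discriminator_loss(a_real, a_fake, eps=1e-12):
    a_real = np.clip(a_real, eps, 1 - eps)
    a_fake = np.clip(a_fake, eps, 1 - eps)
    # D should push a_real toward 1 and a_fake toward 0.
    return -np.mean(np.log(a_real) + np.log(1 - a_fake))

# A confident, correct discriminator yields a small loss:
good = discriminator_loss(np.array([0.9, 0.95]), np.array([0.1, 0.05]))
# A confused discriminator (everything ~0.5) yields a larger loss:
bad = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
assert good < bad
```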
Step 2.3: train the generator model G (Generative), comprising the three models U-Net, CU-Net, and FU-Net. With the parameters of the three discriminator models D (Discriminative) of step 2.2 fixed (no parameter update by backpropagation), the model training data set of step 1.2 is fed to the U-Net, CU-Net, and FU-Net networks respectively, and the generators U-Net, CU-Net, and FU-Net are trained to generate fake images (denoted Fake image II) similar to the model target data set of step 1.2. The generated fake images (Fake image II) are binary images, and the generator models U-Net, CU-Net, and FU-Net optimize their loss with the binary cross-entropy loss function

L_BCE(G) = -(1/N) Σ_{n=1}^{N} [y_n log G(x_n) + (1 - y_n) log(1 - G(x_n))]   (2)

The generator model G (Generative) aims to minimize the following target. The preliminary loss function of the U-GAN, FU-GAN, and CU-GAN models is

l_pre(G, D) = λ L_GAN(G, D) + μ L_BCE(G)   (3)

where λ and μ are the loss weights, and L_GAN(G, D) = -(1/N) Σ_{n=1}^{N} log a(x_n, G(x_n)) is the adversarial term that rewards the generator for fooling the discriminator. To make the fake images (Fake image II) generated by the generator G (Generative) closer to the real images (Real image) and to speed up model convergence, the preliminary loss function of the U-GAN, FU-GAN, and CU-GAN models is combined with the L1 norm:

L1(G) = ||y_n - G(x_n)||   (4)

The final loss function of the U-GAN, FU-GAN, and CU-GAN models is

l(G, D) = λ L_GAN(G, D) + μ L_BCE(G) + υ L1(G)   (5)

where υ is the loss weight assigned to the L1 norm. The overall objective of the generator model is to minimize the loss function l of the U-GAN, FU-GAN, and CU-GAN models; the λ, μ, and υ parameters are set to 100, 1, and 1 respectively.
Step 2.4: iterative training (Iterative training). Steps 2.2 and 2.3 are repeated; after all epochs are completed, the U-Net, CU-Net, and FU-Net models and their parameter information are saved.
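The alternation of steps 2.2-2.4 can be summarized as a training-loop skeleton. The sketch below uses stand-in state dictionaries and stub update functions (illustrative placeholders, not the patent's networks); only the control flow reflects the described procedure: D is updated on real/fake pairs, then G is updated with D's parameters held fixed, repeated for every batch of every epoch.

```python
# Sketch: control flow of the alternating training in steps 2.2-2.4.
# The "models" are stand-in dictionaries and the update functions are
# stubs; only the loop structure reflects the patent's procedure.

EPOCHS = 150   # N = 150 epochs, as configured in step 2.1
BATCHES = 100  # S / K = 100 training passes per epoch (S = 100, K = 1)

def update_discriminator(d_state, batch):
    d_state["steps"] += 1  # stand-in for a real D gradient step
    return d_state

def update_generator(g_state, d_state, batch):
    # D's parameters are read but not modified (fixed, as in step 2.3).
    g_state["steps"] += 1  # stand-in for a real G gradient step
    return g_state

g_state = {"steps": 0}
d_state = {"steps": 0}
for epoch in range(EPOCHS):
    for batch in range(BATCHES):
        d_state = update_discriminator(d_state, batch)       # step 2.2
        g_state = update_generator(g_state, d_state, batch)  # step 2.3
# After all epochs, the generator models would be saved (step 2.4).
assert g_state["steps"] == d_state["steps"] == EPOCHS * BATCHES
```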
Step 3, test process: asphalt highway crack image segmentation.
Step 3.1: load the U-Net, CU-Net, and FU-Net models and parameter information saved in step 2.4.
Step 3.2: input the asphalt highway crack image to be inspected into the generator model G (Generative), comprising the three models U-Net, CU-Net, and FU-Net; the generator model G (Generative) generates the corresponding crack map (Crack image).
Step 3.3: the crack map (Crack image) generated in step 3.2 may contain some noise that is difficult to observe. To remove fine noise, the crack map (Crack image) is subjected to image erosion with a 5 × 5 kernel (all values 1); then crack regions whose area is less than 600 are removed by connected-component analysis, finally obtaining the completed segmented crack map.
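The post-processing of step 3.3 can be sketched without any imaging library (pure numpy/Python for self-containment; a real pipeline would more likely use OpenCV or scipy). The 5 × 5 all-ones kernel follows the text, while the tiny test image and its area threshold are illustrative stand-ins for the stated threshold of 600.

```python
import numpy as np

# Sketch: step 3.3 post-processing on a binary crack map --
# erosion with a 5x5 all-ones kernel, then removal of connected
# components below an area threshold.

def erode(mask, k=5):
    r = k // 2
    padded = np.pad(mask, r, constant_values=0)
    out = np.zeros_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            # A pixel survives only if the whole k x k window is 1.
            out[y, x] = padded[y:y + k, x:x + k].all()
    return out

def remove_small_components(mask, min_area):
    seen = np.zeros_like(mask, dtype=bool)
    out = mask.copy()
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        # Flood-fill one 4-connected component.
        comp, stack = [], [(y, x)]
        seen[y, x] = True
        while stack:
            cy, cx = stack.pop()
            comp.append((cy, cx))
            for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        if len(comp) < min_area:
            for cy, cx in comp:
                out[cy, cx] = 0   # drop too-small "crack" regions
    return out

crack_map = np.zeros((20, 20), dtype=np.uint8)
crack_map[2:9, 2:9] = 1  # a large blob: its core survives erosion
crack_map[15, 15] = 1    # isolated noise pixel: eroded away
cleaned = remove_small_components(erode(crack_map), min_area=4)
assert cleaned[15, 15] == 0 and cleaned[5, 5] == 1
```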
Fig. 6 shows one image randomly drawn from the 21 test images of the AigleRN data set after processing by the different methods. The AigleRN data set comes from asphalt pavement images of French highways; the result images in Fig. 6 show that the asphalt pavement crack images segmented by the present methods U-GAN, FU-GAN, and CU-GAN are more accurate and closer to the crack reference maps. Fig. 7 shows one image randomly drawn from the 28 test images of the CFD data set after processing by the different methods. The CFD data set comes from cement pavement images of Beijing, China; the result images in Fig. 7 show that the cement pavement crack images segmented by the present methods U-GAN, FU-GAN, and CU-GAN have richer detail and are closer to the crack reference maps. Fig. 8 shows one image randomly drawn from the 34 test images of the HTR data set after processing by the different methods. The HTR data set comes from asphalt pavement images of Chengdu, Sichuan Province, China; the result images in Fig. 8 show that the asphalt pavement crack images segmented by the present methods U-GAN and FU-GAN have better clarity.
To verify the effectiveness of the invention, the present invention was evaluated on three different data sets. The evaluation indexes are the precision P_pixel (Precision), the recall R_pixel (Recall), and the F1-score; all three take values in the range [0, 1] and are calculated by formulas (6)-(8):

P_pixel = TP / (TP + FP)   (6)
R_pixel = TP / (TP + FN)   (7)
F1-score = 2 · P_pixel · R_pixel / (P_pixel + R_pixel)   (8)

where TP, FN, and FP respectively denote the pixels correctly detected as target, the target pixels erroneously detected as background, and the background pixels erroneously detected as target. Details of the three data sets are given in Table 1 below.
Table 1: details of the three pavement image data sets
A. AigleRN data set
On the AigleRN data set, the present methods were compared with three methods: NGU, MPS, and CrackForest. The experimental results are shown in Table 2.
Table 2: average P_pixel, R_pixel, and F1-score over the 21 AigleRN test images under the different methods
Table 2 gives the averages of the three evaluation indexes over the 21 test images of the AigleRN data set. U-Net, FU-Net, and CU-Net obtain higher precision P_pixel and recall R_pixel than NGU, MPS, and CrackForest; the U-GAN, FU-GAN, and CU-GAN networks obtain still higher precision P_pixel than U-Net, FU-Net, and CU-Net; and the CU-Net network achieves the best result on the comprehensive index F1-score.
B. CFD data set
On the CFD data set, the present methods were compared with three methods: Canny, Threshold, and CrackForest. The experimental results are shown in Table 3.
Table 3: average P_pixel, R_pixel, and F1-score over the 28 CFD test images under the different methods
Table 3 gives the averages of the three evaluation indexes over the 28 test images of the CFD data set. U-Net, FU-Net, and CU-Net obtain higher precision P_pixel than Canny, Threshold, and CrackForest; the U-GAN, FU-GAN, and CU-GAN networks obtain higher precision P_pixel than U-Net, FU-Net, and CU-Net; and the FU-GAN network achieves the best result on the comprehensive index F1-score.
C. HTR data set
On the HTR data set, tests were conducted mainly for U-Net, FU-Net, U-GAN, and FU-GAN. The experimental results are shown in Table 4.
Table 4: average P_pixel, R_pixel, and F1-score over the 34 HTR test images under the different methods
Table 4 gives the averages of the three evaluation indexes over the 34 test images of the HTR data set. FU-Net obtains good results in precision P_pixel, and the FU-GAN network achieves the best results in recall R_pixel and on the comprehensive index F1-score.
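The pixel-level precision, recall, and F1 metrics used in Tables 2-4 can be computed from binary masks as follows (a minimal numpy sketch per formulas (6)-(8); the example masks are illustrative):

```python
import numpy as np

# Sketch: pixel-level precision, recall, and F1 between a predicted
# binary crack mask and a ground-truth mask, per formulas (6)-(8).

def crack_metrics(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))  # crack detected as crack
    fp = np.sum((pred == 1) & (truth == 0))  # background detected as crack
    fn = np.sum((pred == 0) & (truth == 1))  # crack detected as background
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0]])
pred = np.array([[1, 0, 0, 0],
                 [1, 0, 1, 0]])
p, r, f1 = crack_metrics(pred, truth)
assert abs(p - 2 / 3) < 1e-9   # 2 of 3 predicted crack pixels are correct
assert abs(r - 2 / 3) < 1e-9   # 2 of 3 true crack pixels are found
assert abs(f1 - 2 / 3) < 1e-9
```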
Claims (2)
1. An asphalt highway crack image segmentation method based on a generative adversarial network, using the image generation models of a deep-learning framework, with the U-Net, CU-Net, and FU-Net networks as the generator model G of the generative adversarial GAN model; the U-Net, CU-Net, and FU-Net networks comprised in the generator model being respectively combined with three identical binary-classification discriminator models D (Discriminative) into the U-GAN, CU-GAN, and FU-GAN models; through mutual iterative adversarial optimization training of the generator and the discriminator, the trained generator models U-Net, CU-Net, and FU-Net finally serving as crack image segmenters; segmentation of complex asphalt highway crack images being realized by the U-Net, CU-Net, and FU-Net models; comprising the following main steps:
Step 1, data set preprocessing
Step 1.1, build the model data set: collect asphalt highway crack images and randomly select several images to form the original data set; manually annotate the crack regions on the original data set, marking each crack region black according to its actual size in the image and labeling the remaining background white; the re-annotated data derived from the original data set forming the target data set;
Step 1.2, image data augmentation (image data generation): apply identical random rotation, random flipping, and random translation transforms to the original data set and the target data set of step 1.1, obtaining the model training data set and the model target data set respectively;
Step 2, train the U-GAN, FU-GAN, and CU-GAN models
Step 2.1, model parameter configuration: let S be the total size of the model training data set obtained in step 1.2; set N epochs, i.e. the model is trained iteratively N times, and choose a batch size K per epoch, so that S/K training passes are needed in each iteration with K images participating each time; model training uses the Adam optimizer for loss optimization;
Step 2.2, train the three discriminator models D: feed the model training data set of step 1.2 to the U-Net, CU-Net, and FU-Net models respectively; untrained, the U-Net, CU-Net, and FU-Net models each generate an easily recognizable fake image Fake image I; use Fake image I together with the corresponding real images Real image from the model target data set of step 1.2 as the input of discriminator model D, and train discriminator model D to distinguish the fake images Fake image I from the corresponding real images Real image in the model target data set;
Step 2.3, train the generator model G, comprising the three models U-Net, CU-Net, and FU-Net: with the parameters of the three discriminator models D of step 2.2 fixed, feed the model training data set of step 1.2 to the U-Net, CU-Net, and FU-Net networks respectively, and train the generators U-Net, CU-Net, and FU-Net to generate fake images, denoted Fake image II, similar to the model target data set of step 1.2; the fake images Fake image II being binary images, the generator models U-Net, CU-Net, and FU-Net optimize their loss with a binary cross-entropy loss function;
Step 2.4, iterative training (Iterative training): repeat steps 2.2 and 2.3; after all epochs are completed, save the U-Net, CU-Net, and FU-Net models and their parameter information;
Step 3, test process: asphalt highway crack image segmentation
Step 3.1: load the U-Net, CU-Net, and FU-Net models and parameter information saved in step 2.4;
Step 3.2: input the asphalt highway crack image to be inspected into the generator model G (Generative), comprising the three models U-Net, CU-Net, and FU-Net; the generator model G generates the corresponding crack map Crack image;
Step 3.3: apply image erosion to the crack map Crack image with a 5 × 5 kernel, then remove crack regions whose area is below a preset value by connected-component analysis, finally obtaining the completed segmented crack map.
2. The asphalt highway crack image segmentation method based on a generative adversarial network according to claim 1, characterized in that, in step 2.1, N is a natural number greater than or equal to 1; the number of training passes S/K required in each iteration is greater than or equal to 1; and the learning rate of the Adam optimizer used is 10⁻⁴.
Priority application (1): CN201811508604.1A, priority date 2018-12-11, filing date 2018-12-11, title: Asphalt highway crack image segmentation method based on a generative adversarial network.
Publication (1): CN109801292A (en), published 2019-05-24. Family ID: 66556588. Country: CN. Status: Pending.
- 2018-12-11: CN application CN201811508604.1A, publication CN109801292A (en), status: active, Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108596915A (en) * | 2018-04-13 | 2018-09-28 | 深圳市未来媒体技术研究院 | A medical image segmentation method based on unlabeled data |
CN108921851A (en) * | 2018-06-06 | 2018-11-30 | 深圳市未来媒体技术研究院 | A medical CT image segmentation method based on a 3D adversarial network |
CN108921119A (en) * | 2018-07-12 | 2018-11-30 | 电子科技大学 | A real-time obstacle detection and classification method |
Non-Patent Citations (2)
Title |
---|
Boah Kim et al.: "Cycle-consistent adversarial network with polyphase U-Nets for liver lesion segmentation", Conference on Medical Imaging with Deep Learning * |
Zhiqiang Tang et al.: "CU-Net: Coupled U-Nets", arXiv * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110502972A (en) * | 2019-07-05 | 2019-11-26 | 广东工业大学 | A pavement crack segmentation and recognition method based on deep learning |
CN111033532A (en) * | 2019-11-26 | 2020-04-17 | 驭势(上海)汽车科技有限公司 | Training method and system for a generative adversarial network, electronic device, and storage medium |
CN111033532B (en) * | 2019-11-26 | 2024-04-02 | 驭势(上海)汽车科技有限公司 | Training method and system for a generative adversarial network, electronic device and storage medium |
CN111161272B (en) * | 2019-12-31 | 2022-02-08 | 北京理工大学 | Embryo tissue segmentation method based on a generative adversarial network |
CN111161272A (en) * | 2019-12-31 | 2020-05-15 | 北京理工大学 | Embryo tissue segmentation method based on a generative adversarial network |
CN111368633A (en) * | 2020-01-18 | 2020-07-03 | 中国海洋大学 | AUV-based side-scan sonar image identification method |
CN111445446A (en) * | 2020-03-16 | 2020-07-24 | 重庆邮电大学 | Concrete surface crack detection method based on improved U-net |
CN111445446B (en) * | 2020-03-16 | 2022-05-10 | 重庆邮电大学 | Concrete surface crack detection method based on improved U-net |
CN112185486A (en) * | 2020-09-24 | 2021-01-05 | 长安大学 | Deep learning-based cement emulsified asphalt mixture shrinkage behavior prediction method |
CN112185486B (en) * | 2020-09-24 | 2024-02-09 | 长安大学 | Cement-emulsified asphalt mixture shrinkage behavior prediction method based on deep learning |
CN112232391B (en) * | 2020-09-29 | 2022-04-08 | 河海大学 | Dam crack detection method based on U-net network and SC-SAM attention mechanism |
CN112232391A (en) * | 2020-09-29 | 2021-01-15 | 河海大学 | Dam crack detection method based on U-net network and SC-SAM attention mechanism |
CN112706764A (en) * | 2020-12-30 | 2021-04-27 | 潍柴动力股份有限公司 | Active anti-collision early warning method, device, equipment and storage medium |
CN112862706A (en) * | 2021-01-26 | 2021-05-28 | 北京邮电大学 | Pavement crack image preprocessing method and device, electronic equipment and storage medium |
CN113096126A (en) * | 2021-06-03 | 2021-07-09 | 四川九通智路科技有限公司 | Road disease detection system and method based on image recognition deep learning |
CN113436169A (en) * | 2021-06-25 | 2021-09-24 | 东北大学 | Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation |
CN113436169B (en) * | 2021-06-25 | 2023-12-19 | 东北大学 | Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation |
CN114049356A (en) * | 2022-01-17 | 2022-02-15 | 湖南大学 | Method, device and system for detecting structure apparent crack |
CN114049356B (en) * | 2022-01-17 | 2022-04-12 | 湖南大学 | Method, device and system for detecting structure apparent crack |
CN114708190A (en) * | 2022-03-03 | 2022-07-05 | 郑州大学 | Road crack detection and evaluation algorithm based on deep learning |
CN115983352A (en) * | 2023-02-14 | 2023-04-18 | 北京科技大学 | Data generation method and device based on radiation fields and a generative adversarial network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109801292A (en) | A kind of bituminous highway crack image partition method based on generation confrontation network | |
CN106339998B (en) | Multi-focus image fusion method based on contrast pyramid transformation | |
CN108319972A (en) | An end-to-end difference-based online learning method for image semantic segmentation | |
Mechrez et al. | Photorealistic style transfer with screened Poisson equation | |
CN109145939A (en) | A small-object-sensitive dual-channel convolutional neural network semantic segmentation method | |
CN109376603A (en) | A video recognition method, apparatus, computer device and storage medium | |
CN108460403A (en) | An object detection method and system based on multi-scale feature fusion in images | |
CN106228528B (en) | A multi-focus image fusion method based on decision maps and sparse representation | |
CN110348319A (en) | A face anti-spoofing method based on fusing face depth information and edge images | |
CN108334847A (en) | A face recognition method based on deep learning in real scenes | |
CN108198207A (en) | Multiple moving object tracking based on an improved ViBe model and a BP neural network | |
CN107437092A (en) | A classification algorithm for retinal OCT images based on three-dimensional convolutional neural networks | |
CN107423678A (en) | A training method for feature-extracting convolutional neural networks and a face recognition method | |
CN111833273B (en) | A semantic boundary enhancement method based on long-range dependencies | |
CN103714181B (en) | A hierarchical specific-person retrieval method | |
CN111968088B (en) | A building detection method based on fusion of pixel- and region-segmentation decisions | |
CN108573222A (en) | A pedestrian image occlusion detection method based on a cycle generative adversarial network | |
CN111242837A (en) | A face anonymization privacy protection method based on generative adversarial networks | |
CN108596211A (en) | An occluded pedestrian re-identification method based on focused learning and deep network learning | |
CN110163213A (en) | A remote sensing image segmentation method based on disparity maps and multi-scale deep network models | |
CN109961434A (en) | A no-reference image quality assessment method for hierarchical semantic attenuation | |
CN109446982A (en) | A method and system for recognizing the state of pressing plates in power screen cabinets based on AR glasses | |
CN109492528A (en) | A pedestrian re-identification method based on Gaussian and depth features | |
CN106295501A (en) | A deep-learning-based person identification method using lip movement | |
CN105913407A (en) | A method for fusion optimization of multi-focus images based on difference images | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190524 ||