CN107945161B - Road surface defect detection method based on textural feature extraction - Google Patents

Road surface defect detection method based on textural feature extraction

Info

Publication number
CN107945161B
CN107945161B (granted from application CN201711167478.3A)
Authority
CN
China
Prior art keywords
road surface
gray level
extracting
image
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711167478.3A
Other languages
Chinese (zh)
Other versions
CN107945161A (en)
Inventor
陈里里
任君兰
曹浩
司吉兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Jiaotong University
Original Assignee
Chongqing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Jiaotong University filed Critical Chongqing Jiaotong University
Priority to CN201711167478.3A
Publication of CN107945161A
Application granted
Publication of CN107945161B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24143 Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention provides a road surface defect detection method based on texture feature extraction, which comprises the following steps: acquiring an image containing a road surface defect and carrying out gray level processing to form a road surface defect gray level image; extracting texture features of the road surface defect gray level image and extracting feature values to form texture feature vectors, then equally dividing the road surface defect gray level images represented by the texture feature vectors of the same defect type to form a training set and a test set; extracting high-dimensional abstract features of the training set with a stacked self-encoder; and stacking a softmax logistic classification layer on the stacked self-encoder to form a deep neural network, training the high-dimensional abstract features through the deep neural network, and completing classification and recognition of the road surface gray level images in the test set. The method can accurately detect road surface defects and improves the accuracy of the results.

Description

Road surface defect detection method based on textural feature extraction
Technical Field
The invention relates to a road surface defect detection method, in particular to a road surface defect detection method based on texture feature extraction.
Background
Detecting road surface defects is an important guarantee for the normal operation of traffic. In the prior art, road surface defect detection methods include ultrasonic detection, radar detection, laser triangulation, manual inspection, machine vision, and the like. These methods have the following drawbacks. First, they require considerable manual intervention, which wastes manpower, lowers efficiency and imposes a heavy workload; because traffic can hardly be isolated during detection, they also create serious safety hazards for the operators. Second, the accuracy of the detection results is difficult to guarantee: road conditions are affected by the environment, so the measured parameters suffer severe interference under natural conditions, and removing this interference makes the calculation complex and difficult.
Therefore, a new road surface defect detection method is needed that can accurately detect road surface defects and improve the accuracy of the results, thereby facilitating the formulation of subsequent treatment measures, effectively reducing manual intervention, saving labor cost and improving detection efficiency, while in particular ensuring the safety of workers and avoiding disruption to traffic operation during detection.
Disclosure of Invention
In view of the above, the present invention provides a road surface defect detection method based on texture feature extraction, which can accurately detect road surface defects and improve the accuracy of the results, thereby facilitating the formulation of subsequent treatment measures, effectively reducing manual intervention, saving labor cost and improving detection efficiency, while in particular ensuring the safety of workers and avoiding disruption to traffic operation during detection.
The invention provides a road surface defect detection method based on textural feature extraction, which comprises the following steps:
s1, obtaining an image with a road surface defect and carrying out gray level processing to form a road surface defect gray level image;
s2, extracting texture features of the pavement defect gray level image, extracting feature values to form texture feature vectors, and representing the pavement defect gray level image by the texture feature vectors;
s4, equally dividing the pavement defect gray level images represented by the texture feature vectors of the same defect category to form a training set and a testing set;
s5, forming a stacked self-encoder from two identical self-encoders, and extracting high-dimensional abstract features from the training set through the stacked self-encoder;
and S6, stacking a softmax logistic classification layer on the stacked self-encoder to form a deep neural network, training the high-dimensional abstract features through the deep neural network, and, after training is finished, completing classification and recognition of the road surface gray level images in the test set.
Further, the method also includes step S3: the texture feature vector is normalized using the following formula:
M = (a - c) / (b - c);
wherein M is the normalized feature value, a is the feature value of the texture feature vector to be normalized, b is the maximum feature value in the texture feature vector, and c is the minimum feature value in the texture feature vector; applying the formula to every feature value yields the normalized texture feature vector.
Further, pavement defects include potholes, cracks, fissures, and loosening.
Further, step S2 includes the following steps:
s21, extracting features of the road surface defect gray level image by adopting a gray level difference statistical method, and extracting three feature values: differential mean, differential contrast, and differential entropy;
s22, extracting features of the road surface defect gray level image by adopting a Gabor algorithm, and extracting three feature values: gabor mean, Gabor contrast, and Gabor entropy;
s23, extracting characteristic values of the road surface defect gray level image by adopting a gray level gradient method, and extracting four characteristic values: gradient mean, gradient variance, gradient skewness and gradient kurtosis;
s24, extracting the characteristics of the pavement defect gray level image by adopting a gray level co-occurrence matrix method, and extracting five characteristic values: energy, correlation, symbiotic contrast, homogeneity and symbiotic entropy;
s25, extracting the characteristics of the pavement defect gray image by adopting a gray histogram method, and extracting four characteristic values: histogram mean, histogram variance, histogram skewness and histogram kurtosis;
s26, extracting features of the road surface defect gray level image by adopting a Tamura algorithm, and extracting six feature values: roughness, regularity, contrast, directionality, linearity, and coarseness;
and S27, arranging the feature values extracted in steps S21-S26 in the extraction order to form a 1 × 25 texture feature vector representing the road surface defect gray level image.
Further, in step S27, the feature values are arranged in the following order to form the 1 × 25 texture feature vector representing the road surface defect gray level image:
histogram mean, differential mean, gradient mean, Gabor mean, energy, roughness, histogram variance, gradient variance, correlation, regularity, histogram skewness, gradient skewness, homogeneity, directionality, histogram kurtosis, gradient kurtosis, linearity, differential contrast, Gabor contrast, symbiotic contrast, Tamura contrast, coarseness, differential entropy, Gabor entropy, symbiotic entropy.
Further, in step S5, the self-encoder is composed of three layers of neural networks, i.e., an input layer, a hidden layer, and an output layer, and the high-dimensional abstract feature extraction training process of the self-encoder is as follows:
encoding process from input layer to hidden layer:
z = sigmoid(W^(1)·x + v^(1)); wherein z is the new abstract feature vector obtained by the calculation; sigmoid is the S-shaped (sigmoid) transfer function of the encoding process, W^(1) is the weight matrix of the encoding process, v^(1) is the bias vector of the encoding process, and x is the input training set;
decoding process from hidden layer to output layer:
x̂ = linear(W^(2)·z + v^(2)); wherein x̂ is the reconstruction of the input data obtained after decoding, linear is the linear transfer function of the decoding process, W^(2) is the weight matrix of the decoding process, and v^(2) is the bias vector of the decoding process;
error adjustment between the input data and the reconstructed data of the self-encoder:
E = (1/K) Σ_{j=1}^{n} Σ_{i=1}^{k} (x_ji - x̂_ji)² + λ Σ_{l=1}^{L} Σ_{j=1}^{n} Σ_{i=1}^{k} (w_ji^(l))² + β Σ_{i=1}^{D^(1)} KL(ρ || ρ̂_i);
where K is the number of input data; the first term is the sum of the mean square deviations between the input data and the output data; λ is the regularization coefficient; L is the number of hidden layers; n is the number of training samples; k is the number of variables in the training set; w_ji^(l) is the weight of variable i in sample j of the training set during encoding; β is the sparsity term coefficient; ρ̂_i is the activation value of neuron i; KL(ρ || ρ̂_i) is the K-L divergence (cross entropy) between ρ̂_i and the desired value ρ; and D^(1) is the range of the encoding.
Further, the calculation process of the softmax layer logical classification layer is as follows:
calculating the probability of the road surface defect type corresponding to each feature vector in the training set:
p(y^(i) = j | x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i)), for j = 1, …, k;
wherein x^(i) is the i-th characterization image (texture feature vector) of a road surface defect in the training set; y^(i) is the binary digital label of the defect class to which the characterization image belongs; θ is the parameter matrix formed by all logistic classification units of the softmax layer; p(y^(i) = k | x^(i); θ) is the probability that the characterization image corresponds to road surface defect type k; θ_1^T, …, θ_k^T are the transposes of the parameter vectors of logistic classification units 1 through k applied to characterization image i; and exp(θ_l^T x^(i)) denotes the exponential function applied to the product with the transposed parameter vector;
fine-tuning the deep neural network by minimizing the following cost function:
J(θ) = -(1/m) Σ_{i=1}^{m} Σ_{j=1}^{k} 1{y^(i) = j} · log( exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i)) );
wherein m is the number of characterization images, i.e. feature vectors, being estimated; k is the number of road surface defect types; 1{y^(i) = j} is an indicator (judgment) function expressing whether characterization image i corresponds to defect type j; θ_j^T x^(i) is the product of the transposed parameter vector of class j and characterization image i; and exp(θ_j^T x^(i)) is the exponential function applied to that product.
The invention has the beneficial effects that: according to the invention, the road surface defects can be accurately detected, the accuracy of the result is improved, so that subsequent treatment measures can be formulated conveniently, the manual intervention can be effectively reduced, the labor cost is saved, the detection efficiency is improved, especially the safety of workers can be ensured, and the influence on traffic operation in the detection process can be effectively avoided.
Drawings
The invention is further described below with reference to the following figures and examples:
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic structural diagram of a stacked self-encoder according to the present invention.
Fig. 3 is a schematic flowchart of the working process of the stacked self-encoder of the present invention.
Detailed Description
The invention is explained in further detail below with reference to the drawings, in which:
the invention provides a road surface defect detection method based on textural feature extraction, which comprises the following steps:
s1, obtaining an image containing a road surface defect and carrying out gray level processing to form a road surface defect gray level image. During image acquisition, a German Mantag-419 industrial high-definition camera is used to shoot video of road surface defects; the initial object distance of the camera is set to 3 m, and manual focusing is carried out as needed until the defect texture appears clearly. To obtain road surface defect images, AE software is used to convert the acquired video into frame-by-frame images at a rate of 24 frames per second, the image storage format is set to "PNG", and the road surface defect images are obtained once saving is finished. All collected images highlighting road surface defects are classified according to defect type (the road surface defect types include potholes, cracks, fissures and loosening), and at least 100 images are screened out for each type of defect. The screened images are adjusted to a resolution of 200 dpi and cropped to 900 × 900 pixels, so that every road surface defect image measures 2.25 × 2.25 inches; after this processing, all the images form an image database of road surface defects. The images in the image database are then converted to gray level: each image is read with the imread function in MATLAB, which returns an array of pixel values representing the original image, and the original image is converted into a gray level image with the rgb2gray function in MATLAB. After this gray level conversion, the outlines and textures of the different objects in the resulting road surface gray level image are clearly expressed, which facilitates the subsequent texture feature extraction;
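As a rough, illustrative sketch only (the embodiment itself describes MATLAB's imread and rgb2gray), the following Python/OpenCV snippet performs an equivalent frame-to-gray-level conversion; the directory names and the helper convert_to_gray are hypothetical examples, not part of the patent:

# Illustrative sketch: Python/OpenCV equivalent of the MATLAB imread + rgb2gray step.
# Directory names and the helper function are hypothetical.
import glob
import os
import cv2

def convert_to_gray(frames_dir: str, out_dir: str) -> None:
    """Read each PNG frame, convert it to a gray level image, and save it."""
    os.makedirs(out_dir, exist_ok=True)
    for path in glob.glob(frames_dir + "/*.png"):
        bgr = cv2.imread(path)                        # pixel array of the original frame
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # gray level conversion
        cv2.imwrite(path.replace(frames_dir, out_dir, 1), gray)

convert_to_gray("frames", "gray_frames")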
s2, extracting texture features of the pavement defect gray level image, extracting feature values to form texture feature vectors, and representing the pavement defect gray level image by the texture feature vectors;
s4, equally dividing the road surface defect gray level images represented by the texture feature vectors of the same defect category to form a training set and a test set: 100 gray level images are taken for each type of road surface defect, and each class is divided into two equal parts of 50 images, one part forming the training set and the other the test set;
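A minimal sketch of this per-class 50/50 split, assuming the 100 texture feature vectors of one defect class are already available as rows of an array (the function name split_per_class is illustrative):

# Illustrative sketch: split the 100 feature vectors of one defect class 50/50.
import numpy as np

def split_per_class(class_vectors, seed=0):
    """Shuffle one class and return 50 training and 50 test feature vectors."""
    rng = np.random.default_rng(seed)
    vectors = np.asarray(class_vectors)
    idx = rng.permutation(len(vectors))
    half = len(vectors) // 2
    return vectors[idx[:half]], vectors[idx[half:]]

# toy data: 100 feature vectors of length 25 for one defect class
train, test = split_per_class(np.random.rand(100, 25))
print(train.shape, test.shape)   # (50, 25) (50, 25)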
s5, forming a stacked self-encoder from two identical self-encoders, and extracting high-dimensional abstract features from the training set through the stacked self-encoder;
s6, stacking a softmax logistic classification layer on the stacked self-encoder to form a deep neural network, training the high-dimensional abstract features through the deep neural network, and, after training is completed, completing classification and recognition of the road surface gray level images in the test set, i.e. determining which of the four road surface defect types the current image belongs to. By this method, road surface defects can be detected accurately and the accuracy of the results is improved, which facilitates the formulation of subsequent treatment measures, effectively reduces manual intervention, saves labor cost, improves detection efficiency, ensures the safety of workers and avoids disruption to traffic operation during detection. The high-dimensional abstract features are vectors, extracted from the feature vectors (i.e. from the represented images), composed of feature values that comprehensively represent all features of the images while having a lower dimension: they not only reflect the essential characteristics of the image texture but also highlight properties of the texture such as shape, color change, size and edges.
In this embodiment, the method further includes step S3: the texture feature vector is normalized using the following formula:
M = (a - c) / (b - c);
wherein M is the normalized feature value, a is the feature value of the texture feature vector to be normalized, b is the maximum feature value in the texture feature vector, and c is the minimum feature value in the texture feature vector; applying the formula to every feature value yields the normalized texture feature vector.
In this embodiment, step S2 includes the following steps:
s21, extracting features of the road surface defect gray level image by adopting a gray level difference statistical method, and extracting three feature values: differential mean, differential contrast, and differential entropy;
s22, extracting features of the road surface defect gray level image by adopting a Gabor algorithm, and extracting three feature values: gabor mean, Gabor contrast, and Gabor entropy;
s23, extracting characteristic values of the road surface defect gray level image by adopting a gray level gradient method, and extracting four characteristic values: gradient mean, gradient variance, gradient skewness and gradient kurtosis;
s24, extracting the characteristics of the pavement defect gray level image by adopting a gray level co-occurrence matrix method, and extracting five characteristic values: energy, correlation, symbiotic contrast, homogeneity and symbiotic entropy;
s25, extracting the characteristics of the pavement defect gray image by adopting a gray histogram method, and extracting four characteristic values: histogram mean, histogram variance, histogram skewness and histogram kurtosis;
s26, extracting features of the road surface defect gray level image by adopting a Tamura algorithm, and extracting six feature values: roughness, regularity, contrast, directionality, linearity, and coarseness;
s27, arranging the feature values extracted in steps S21-S26 in the extraction order to form a 1 × 25 texture feature vector representing the road surface defect gray level image. In this way, different feature values characterizing the image are extracted by different methods and then combined into a feature vector representing the road surface defect gray level image, which facilitates the subsequent accurate identification of road surface defects and ensures the final detection accuracy.
In the acquired road surface images, color features, shape features and the like become unstable as the external environment changes during acquisition, for example with changes in lighting, so accurately representing the information of the original image is a technical difficulty. In this embodiment, 25 texture features are therefore extracted with 6 feature extraction methods, and the 25 texture feature values are arranged in the following order to form the final 1 × 25 texture feature vector representing the road surface defect gray level image:
histogram mean, differential mean, gradient mean, Gabor mean, energy, roughness, histogram variance, gradient variance, correlation, regularity, histogram skewness, gradient skewness, homogeneity, directionality, histogram kurtosis, gradient kurtosis, linearity, differential contrast, Gabor contrast, symbiotic contrast, Tamura contrast, coarseness, differential entropy, Gabor entropy, symbiotic entropy. In this way, the texture features of the image are extracted from different angles, so that the original image is fully reflected: the 25 texture feature values accurately capture the texture of the processed road surface image in the horizontal and vertical directions, at 45° and at other specific angles, which guarantees the integrity of the image information. With this arrangement, the image represented by the texture feature vector retains the properties of the original road surface image, is easy to recognize, and the accuracy of the final detection result is ensured.
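As a rough illustration of how some of these statistics can be computed and placed at their stated positions in the 1 × 25 vector, the numpy/scipy sketch below derives the four gray histogram features and the three gray level difference features from a gray level image; the pixel displacement used for the difference statistics, the helper names, and the zero placeholders for the remaining features are assumptions for illustration, not the patent's exact implementation:

# Illustrative sketch: two of the six extraction methods and the 1 x 25 vector layout.
import numpy as np
from scipy.stats import skew, kurtosis

def histogram_features(gray: np.ndarray):
    """Gray histogram method: histogram mean, variance, skewness and kurtosis."""
    p = gray.astype(np.float64).ravel()
    return p.mean(), p.var(), skew(p), kurtosis(p)

def difference_features(gray: np.ndarray, dx: int = 1):
    """Gray level difference statistics (horizontal displacement dx): mean, contrast, entropy."""
    g = gray.astype(np.int32)
    diff = np.abs(g[:, :-dx] - g[:, dx:])
    hist = np.bincount(diff.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    levels = np.arange(256)
    mean = (levels * p).sum()
    contrast = (levels ** 2 * p).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return mean, contrast, entropy

def build_vector(gray: np.ndarray) -> np.ndarray:
    """Place computed values at their stated positions in the 1 x 25 texture feature vector."""
    v = np.zeros(25)
    v[0], v[6], v[10], v[14] = histogram_features(gray)   # histogram mean/variance/skewness/kurtosis
    v[1], v[17], v[22] = difference_features(gray)        # differential mean/contrast/entropy
    # remaining positions: gradient, Gabor, co-occurrence and Tamura features (omitted in this sketch)
    return v

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)    # stand-in for a road surface gray image
print(build_vector(gray)[:5])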
In this embodiment, in step S5, the self-encoder is composed of three layers of neural networks, which are an input layer, a hidden layer, and an output layer, and the high-dimensional abstract feature extraction training process of the self-encoder is as follows:
encoding process from input layer to hidden layer:
z = sigmoid(W^(1)·x + v^(1)); wherein z is the new abstract feature vector obtained by the calculation; sigmoid is the S-shaped (sigmoid) transfer function of the encoding process, W^(1) is the weight matrix of the encoding process, v^(1) is the bias vector of the encoding process, and x is the input training set;
decoding process from hidden layer to output layer:
x̂ = linear(W^(2)·z + v^(2)); wherein x̂ is the reconstruction of the input data obtained after decoding, linear is the linear transfer function of the decoding process, W^(2) is the weight matrix of the decoding process, and v^(2) is the bias vector of the decoding process;
error adjustment between the input data and the reconstructed data of the self-encoder:
E = (1/K) Σ_{j=1}^{n} Σ_{i=1}^{k} (x_ji - x̂_ji)² + λ Σ_{l=1}^{L} Σ_{j=1}^{n} Σ_{i=1}^{k} (w_ji^(l))² + β Σ_{i=1}^{D^(1)} KL(ρ || ρ̂_i);
where K is the number of input data; the first term is the sum of the mean square deviations between the input data and the output data; λ is the regularization coefficient; L is the number of hidden layers; n is the number of training samples; k is the number of variables in the training set; w_ji^(l) is the weight of variable i in sample j of the training set during encoding; β is the sparsity term coefficient; ρ̂_i is the activation value of neuron i; KL(ρ || ρ̂_i) is the K-L divergence (cross entropy) between ρ̂_i and the desired value ρ; and D^(1) is the range of the encoding. Before high-dimensional abstract feature extraction with the stacked self-encoder, the parameters of the self-encoders need to be set. Parameter settings of self-encoder I: the hidden layer size is 10, i.e. the number of neurons in the hidden layer of self-encoder I; the coefficient of the quadratic (L2) weight regularization is 0.01, i.e. the classification weight coefficient given to each input feature when the high-dimensional abstract features are extracted; the sparsity regularization influence coefficient is 4, which controls the sparsity of the high-dimensional feature vector while reproducing the features of the input data as faithfully as possible; the sparsity proportion reflected by the neurons is 0.05, i.e. the proportion of the feature vectors to be processed that a single neuron responds to; and the transfer function of the decoder is a linear transfer function.
Parameter settings of self-encoder II: the hidden layer size is 9, i.e. the number of neurons in the hidden layer of self-encoder II; the coefficient of the quadratic (L2) weight regularization is 0.01, i.e. the classification weight coefficient given to each input feature when the high-dimensional abstract features are extracted; the sparsity regularization influence coefficient is 4, which controls the sparsity of the high-dimensional feature vector while reproducing the features of the input data as faithfully as possible; the sparsity proportion reflected by the neurons is 0.05, i.e. the proportion of the feature vectors to be processed that a single neuron responds to; the transfer function of the decoder is a linear transfer function; and the self-encoding range is set so that the input data are scaled automatically.
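For orientation only, the numpy sketch below performs one self-encoder pass and evaluates a cost of the form described above (reconstruction error plus weight regularization with λ = 0.01 plus a K-L sparsity penalty with β = 4 and desired sparsity ρ = 0.05); it is a simplified reading of the standard sparse self-encoder cost under the stated parameter settings, not the patent's training code, and all function and variable names are illustrative:

# Illustrative sketch: one sparse self-encoder pass and its cost terms.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def autoencoder_cost(X, W1, v1, W2, v2, lam=0.01, beta=4.0, rho=0.05):
    """Encode, decode with a linear transfer function, and evaluate the sparse cost."""
    Z = sigmoid(W1 @ X + v1)            # encoding: hidden-layer abstract features
    X_hat = W2 @ Z + v2                 # linear decoding (reconstruction)
    K = X.shape[1]                      # number of input samples
    mse = np.sum((X - X_hat) ** 2) / K                  # reconstruction error term
    l2 = lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))      # weight regularization term
    rho_hat = Z.mean(axis=1)                            # mean activation of each hidden neuron
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return mse + l2 + beta * kl, Z

# toy dimensions: 25 input features (the texture vector), hidden layer of size 10
rng = np.random.default_rng(0)
X = rng.random((25, 8))                          # 8 training vectors as columns
W1, v1 = rng.normal(0, 0.1, (10, 25)), np.zeros((10, 1))
W2, v2 = rng.normal(0, 0.1, (25, 10)), np.zeros((25, 1))
cost, hidden = autoencoder_cost(X, W1, v1, W2, v2)
print(cost, hidden.shape)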
In this embodiment, the calculation process of the softmax layer logical classification layer is as follows:
calculating the probability of the road surface defect type corresponding to each feature vector in the training set:
p(y^(i) = j | x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i)), for j = 1, …, k;
wherein x^(i) is the i-th characterized road surface defect gray level image (texture feature vector) in the training set; y^(i) is the binary digital label of the defect class to which the characterization image belongs; θ is the parameter matrix formed by all logistic classification units of the softmax layer; p(y^(i) = k | x^(i); θ) is the probability that the characterization image corresponds to road surface defect type k; θ_1^T, …, θ_k^T are the transposes of the parameter vectors of logistic classification units 1 through k applied to characterization image i; and exp(θ_l^T x^(i)) denotes the exponential function applied to the product with the transposed parameter vector;
fine-tuning the deep neural network by minimizing the following cost function:
J(θ) = -(1/m) Σ_{i=1}^{m} Σ_{j=1}^{k} 1{y^(i) = j} · log( exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i)) );
wherein m is the number of characterization images, i.e. feature vectors, being estimated; k is the number of road surface defect types; 1{y^(i) = j} is an indicator (judgment) function expressing whether characterization image i corresponds to defect type j; θ_j^T x^(i) is the product of the transposed parameter vector of class j and characterization image i; and exp(θ_j^T x^(i)) is the exponential function applied to that product. The softmax logistic classification layer calculates, through this hypothesis function, the probability that the input data corresponds to each type of defect, and then selects the type with the largest probability value as the defect type of the data, thereby classifying the data; the classes are strictly mutually exclusive, i.e. one data item does not belong to two classes at the same time.
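A small numpy sketch of the softmax probability and the fine-tuning cost in the form given above, assuming the class labels are integers 0 to k-1 (softmax_proba and finetune_cost are illustrative names):

# Illustrative sketch: softmax probabilities and the fine-tuning cost J(theta).
import numpy as np

def softmax_proba(theta, x):
    """p(y = j | x; theta) for each class j; theta holds one parameter row per class."""
    scores = theta @ x                       # theta_j^T x for every class j
    e = np.exp(scores - scores.max())        # subtract max for numerical stability
    return e / e.sum()

def finetune_cost(theta, X, y):
    """J(theta) = -(1/m) sum_i sum_j 1{y_i = j} log p(y_i = j | x_i; theta)."""
    m = X.shape[1]
    cost = 0.0
    for i in range(m):
        p = softmax_proba(theta, X[:, i])
        cost -= np.log(p[y[i]])              # only the true class contributes
    return cost / m

k, d, m = 4, 9, 6                            # 4 defect types, 9 abstract features, 6 samples
rng = np.random.default_rng(1)
theta = rng.normal(0, 0.1, (k, d))
X = rng.random((d, m))
y = rng.integers(0, k, m)
print(finetune_cost(theta, X, y))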
Finally, the above embodiments are intended only to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of these technical solutions, and all such modifications should be covered by the claims of the present invention.

Claims (4)

1. A road surface defect detection method based on texture feature extraction, characterized by comprising the following steps:
s1, obtaining an image with a road surface defect and carrying out gray level processing to form a road surface defect gray level image;
s2, extracting texture features of the pavement defect gray level image, extracting feature values to form texture feature vectors, and representing the pavement defect gray level image by the texture feature vectors;
s4, equally dividing the pavement defect gray level images represented by the texture feature vectors of the same defect category to form a training set and a testing set;
s5, forming a stacked self-encoder from two identical self-encoders, and extracting high-dimensional abstract features from the training set through the stacked self-encoder;
s6, stacking a softmax logistic classification layer on the stacked self-encoder to form a deep neural network, training the high-dimensional abstract features through the deep neural network, and, after training is completed, completing classification and recognition of the road surface gray level images in the test set; step S2 includes the following steps:
s21, extracting features of the road surface defect gray level image by adopting a gray level difference statistical method, and extracting three feature values: differential mean, differential contrast, and differential entropy;
s22, extracting features of the road surface defect gray level image by adopting a Gabor algorithm, and extracting three feature values: gabor mean, Gabor contrast, and Gabor entropy;
s23, extracting characteristic values of the road surface defect gray level image by adopting a gray level gradient method, and extracting four characteristic values: gradient mean, gradient variance, gradient skewness and gradient kurtosis;
s24, extracting the characteristics of the pavement defect gray level image by adopting a gray level co-occurrence matrix method, and extracting five characteristic values: energy, correlation, symbiotic contrast, homogeneity and symbiotic entropy;
s25, extracting the characteristics of the pavement defect gray image by adopting a gray histogram method, and extracting four characteristic values: histogram mean, histogram variance, histogram skewness and histogram kurtosis;
s26, extracting features of the road surface defect gray level image by adopting a Tamura algorithm, and extracting six feature values: roughness, regularity, contrast, directionality, linearity, and coarseness;
s27, arranging the feature values extracted in steps S21-S26 to form a 1 × 25 texture feature vector representing the road surface defect gray level image;
in step S27, the feature values are arranged in the following order to form the 1 × 25 texture feature vector representing the road surface defect gray level image:
histogram mean, differential mean, gradient mean, Gabor mean, energy, roughness, histogram variance, gradient variance, correlation, regularity, histogram skewness, gradient skewness, homogeneity, directionality, histogram kurtosis, gradient kurtosis, linearity, differential contrast, Gabor contrast, symbiotic contrast, Tamura contrast, coarseness, differential entropy, Gabor entropy, symbiotic entropy;
in step S5, the self-encoder is composed of three layers of neural networks, namely an input layer, a hidden layer, and an output layer, and the high-dimensional abstract feature extraction training process of the self-encoder is as follows:
encoding process from input layer to hidden layer:
z = sigmoid(W^(1)·x + v^(1)); wherein z is the new abstract feature vector obtained by the calculation; sigmoid is the S-shaped (sigmoid) transfer function of the encoding process, W^(1) is the weight matrix of the encoding process, v^(1) is the bias vector of the encoding process, and x is the input training set;
decoding process from hidden layer to output layer:
x̂ = linear(W^(2)·z + v^(2)); wherein x̂ is the reconstruction of the input data obtained after decoding, linear is the linear transfer function of the decoding process, W^(2) is the weight matrix of the decoding process, and v^(2) is the bias vector of the decoding process;
error adjustment between the input data and the reconstructed data of the self-encoder:
E = (1/K) Σ_{j=1}^{n} Σ_{i=1}^{k} (x_ji - x̂_ji)² + λ Σ_{l=1}^{L} Σ_{j=1}^{n} Σ_{i=1}^{k} (w_ji^(l))² + β Σ_{i=1}^{D^(1)} KL(ρ || ρ̂_i);
where K is the number of input data; the first term is the sum of the mean square deviations between the input data and the output data; λ is the regularization coefficient; L is the number of hidden layers; n is the number of training samples; k is the number of variables in the training set; w_ji^(l) is the weight of variable i in sample j of the training set during encoding; β is the sparsity term coefficient; ρ̂_i is the activation value of neuron i; KL(ρ || ρ̂_i) is the K-L divergence (cross entropy) between ρ̂_i and the desired value ρ; and D^(1) is the range of the encoding.
2. The method for detecting the road surface defect based on the texture feature extraction as claimed in claim 1, wherein: further comprising step S3: the texture feature vector is normalized and is processed by the following formula:
M = (a - c) / (b - c);
wherein M is the normalized feature value, a is the feature value of the texture feature vector to be normalized, b is the maximum feature value in the texture feature vector, and c is the minimum feature value in the texture feature vector.
3. The method for detecting the road surface defect based on the texture feature extraction as claimed in claim 1, wherein: the road surface defects include potholes, cracks, fissures, and loosening.
4. The method for detecting the defects of the road surface based on the extraction of the textural features of claim 1, wherein: the calculation process of the softmax layer logic classification layer is as follows:
calculating the probability of the road surface defect type corresponding to each feature vector in the training set:
p(y^(i) = j | x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i)), for j = 1, …, k;
wherein x^(i) is the i-th characterized road surface defect gray level image (texture feature vector) in the training set; y^(i) is the binary digital label of the defect class to which the characterization image belongs; θ is the parameter matrix formed by all logistic classification units of the softmax layer; p(y^(i) = k | x^(i); θ) is the probability that the characterization image corresponds to road surface defect type k; θ_1^T, …, θ_k^T are the transposes of the parameter vectors of logistic classification units 1 through k applied to characterization image i; and exp(θ_l^T x^(i)) denotes the exponential function applied to the product with the transposed parameter vector;
fine-tuning the deep neural network by the following formula:
J(θ) = -(1/m) Σ_{i=1}^{m} Σ_{j=1}^{k} 1{y^(i) = j} · log( exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i)) );
wherein m is the number of characterization images, i.e. feature vectors, being estimated; k is the number of road surface defect types; 1{y^(i) = j} is an indicator (judgment) function expressing whether characterization image i corresponds to defect type j; θ_j^T x^(i) is the product of the transposed parameter vector of class j and characterization image i; and exp(θ_j^T x^(i)) is the exponential function applied to that product.
CN201711167478.3A 2017-11-21 2017-11-21 Road surface defect detection method based on textural feature extraction Expired - Fee Related CN107945161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711167478.3A CN107945161B (en) 2017-11-21 2017-11-21 Road surface defect detection method based on textural feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711167478.3A CN107945161B (en) 2017-11-21 2017-11-21 Road surface defect detection method based on textural feature extraction

Publications (2)

Publication Number Publication Date
CN107945161A CN107945161A (en) 2018-04-20
CN107945161B (en) 2020-10-23

Family

ID=61930548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711167478.3A Expired - Fee Related CN107945161B (en) 2017-11-21 2017-11-21 Road surface defect detection method based on textural feature extraction

Country Status (1)

Country Link
CN (1) CN107945161B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765404B (en) * 2018-05-31 2019-10-18 南京行者易智能交通科技有限公司 A kind of road damage testing method and device based on deep learning image classification
CN109255792B (en) * 2018-08-02 2021-12-21 广州市鑫广飞信息科技有限公司 Video image segmentation method and device, terminal equipment and storage medium
CN109145993B (en) * 2018-08-27 2021-05-07 大连理工大学 SAR image classification method based on multi-feature and non-negative automatic encoder
CN109670392A (en) * 2018-09-04 2019-04-23 中国人民解放军陆军工程大学 Based on mixing autocoder road image semantic segmentation method
CN109584286B (en) * 2019-01-22 2023-03-21 东南大学 Asphalt pavement structure depth calculation method based on generalized regression neural network
CN110119687A (en) * 2019-04-17 2019-08-13 浙江工业大学 Detection method based on the road surface slight crack defect that image procossing and convolutional neural networks combine
CN110176000B (en) * 2019-06-03 2022-04-05 斑马网络技术有限公司 Road quality detection method and device, storage medium and electronic equipment
CN111028210B (en) * 2019-11-25 2023-07-18 北京航天控制仪器研究所 Glass tube end face defect detection method based on deep neural network
CN110992336A (en) * 2019-12-02 2020-04-10 东莞西尼自动化科技有限公司 Small sample defect detection method based on image processing and artificial intelligence
CN111127424A (en) * 2019-12-23 2020-05-08 交通运输部科学研究院 Road construction safety risk monitoring method and system
CN111179263B (en) * 2020-01-06 2023-10-13 广东宜通联云智能信息有限公司 Industrial image surface defect detection model, method, system and device
CN111797687A (en) * 2020-06-02 2020-10-20 上海市城市建设设计研究总院(集团)有限公司 Road damage condition extraction method based on unmanned aerial vehicle aerial photography
CN111767874B (en) * 2020-07-06 2024-02-13 中兴飞流信息科技有限公司 Pavement disease detection method based on deep learning
CN112102254A (en) * 2020-08-21 2020-12-18 佛山职业技术学院 Wood surface defect detection method and system based on machine vision
CN112098417B (en) * 2020-09-07 2022-09-20 中国工程物理研究院激光聚变研究中心 Device and method for online monitoring of surface passivation state of asphalt polishing disc in annular polishing
CN112418198B (en) * 2021-01-25 2021-04-13 城云科技(中国)有限公司 Method for detecting fluctuation defects of floor tiles of pedestrian walkways based on gray scale map energy values
CN113112458A (en) * 2021-03-27 2021-07-13 上海工程技术大学 Metal surface defect detection method based on support vector machine
CN113502721A (en) * 2021-08-10 2021-10-15 重庆大学 Pavement performance determination method and system based on pavement texture
CN113505865B (en) * 2021-09-10 2021-12-07 浙江双元科技股份有限公司 Sheet surface defect image recognition processing method based on convolutional neural network
CN113780259B (en) * 2021-11-15 2022-03-15 中移(上海)信息通信科技有限公司 Road surface defect detection method and device, electronic equipment and readable storage medium
CN114370844B (en) * 2021-12-20 2024-03-22 包头钢铁(集团)有限责任公司 Statistical method for uniformity of characteristic values of surface of plate
CN114663767A (en) * 2022-04-03 2022-06-24 国交空间信息技术(北京)有限公司 Remote sensing image sand-buried road section identification method
CN115937595A (en) * 2022-12-20 2023-04-07 中交公路长大桥建设国家工程研究中心有限公司 Bridge apparent anomaly identification method and system based on intelligent data processing
CN117474909B (en) * 2023-12-27 2024-04-05 深圳市信来誉包装有限公司 Machine vision-based flaw detection method for packaging paper box
CN118015002B (en) * 2024-04-10 2024-06-18 盈客通天下科技(大连)有限公司 Traffic engineering road condition visual detection method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508110A (en) * 2011-10-10 2012-06-20 上海大学 Texture-based insulator fault diagnostic method
CN104866862A (en) * 2015-04-27 2015-08-26 中南大学 Strip steel surface area type defect identification and classification method
CN105719259A (en) * 2016-02-19 2016-06-29 上海理工大学 Pavement crack image detection method
CN105957092A (en) * 2016-05-31 2016-09-21 福州大学 Mammary gland molybdenum target image feature self-learning extraction method for computer-aided diagnosis
CN106599810A (en) * 2016-12-05 2017-04-26 电子科技大学 Head pose estimation method based on stacked auto-encoding
CN106910186A (en) * 2017-01-13 2017-06-30 陕西师范大学 A kind of Bridge Crack detection localization method based on CNN deep learnings
CN107133960A (en) * 2017-04-21 2017-09-05 武汉大学 Image crack dividing method based on depth convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10579923B2 (en) * 2015-09-15 2020-03-03 International Business Machines Corporation Learning of classification model

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508110A (en) * 2011-10-10 2012-06-20 上海大学 Texture-based insulator fault diagnostic method
CN104866862A (en) * 2015-04-27 2015-08-26 中南大学 Strip steel surface area type defect identification and classification method
CN105719259A (en) * 2016-02-19 2016-06-29 上海理工大学 Pavement crack image detection method
CN105957092A (en) * 2016-05-31 2016-09-21 福州大学 Mammary gland molybdenum target image feature self-learning extraction method for computer-aided diagnosis
CN106599810A (en) * 2016-12-05 2017-04-26 电子科技大学 Head pose estimation method based on stacked auto-encoding
CN106910186A (en) * 2017-01-13 2017-06-30 陕西师范大学 A kind of Bridge Crack detection localization method based on CNN deep learnings
CN107133960A (en) * 2017-04-21 2017-09-05 武汉大学 Image crack dividing method based on depth convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Adversarial Autoencoders; Alireza Makhzani et al.; arXiv preprint arXiv:1511.05644; 2016-05-25; pp. 1-17 *
Auto-Encoding Variational Bayes; Diederik P. Kingma; arXiv:1312.6114v10; 2014-05-01; pp. 1-14 *
Development and application of feature extraction algorithms for medical image retrieval; 王斌; China Master's Theses Full-text Database, Information Science and Technology; 2011-12-15 (No. S1); p. I138-1606, abstract and chapter 3 *
Research on a machine-vision road surface damage detection system based on an improved neural network; 徐婷 et al.; Highway (《公路》); 2012-09 (No. 9); pp. 210-213, sections 2-3 *
Research on autoencoders for deep networks; 鲁亚平; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15 (No. 2); p. I14-185, chapters 2-3 *

Also Published As

Publication number Publication date
CN107945161A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107945161B (en) Road surface defect detection method based on textural feature extraction
CN111815601B (en) Texture image surface defect detection method based on depth convolution self-encoder
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN111383209B (en) Unsupervised flaw detection method based on full convolution self-encoder network
CN108388896B (en) License plate identification method based on dynamic time sequence convolution neural network
CN103593670B (en) A kind of copper plate/strip detection method of surface flaw based on online limit of sequence learning machine
CN108428231B (en) Multi-parameter part surface roughness learning method based on random forest
CN110188774B (en) Eddy current scanning image classification and identification method based on deep learning
CN110610475A (en) Visual defect detection method of deep convolutional neural network
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN108829711B (en) Image retrieval method based on multi-feature fusion
CN116363127B (en) Image processing-based quality detection method for fully-degradable plastic product
CN115861226A (en) Method for intelligently identifying surface defects by using deep neural network based on characteristic value gradient change
CN117197682B (en) Method for blind pixel detection and removal by long-wave infrared remote sensing image
CN117036756B (en) Remote sensing image matching method and system based on variation automatic encoder
CN113313678A (en) Automatic sperm morphology analysis method based on multi-scale feature fusion
CN117011280A (en) 3D printed concrete wall quality monitoring method and system based on point cloud segmentation
CN114821098A (en) High-speed pavement damage detection algorithm based on gray gradient fusion characteristics and CNN
CN111854617B (en) Micro drill bit size detection method based on machine vision
Magdalena et al. Identification of beef and pork using gray level co-occurrence matrix and probabilistic neural network
CN111696070A (en) Multispectral image fusion power internet of things fault point detection method based on deep learning
Ren et al. Sar image data enhancement method based on emd
CN118097310B (en) Method for digitally detecting concrete surface defects
Asha et al. Automatic detection of defects on periodically patterned textures
Barmpoutis et al. Detection of various characteristics on wooden surfaces, using scanner and image processing techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201023