CN111681223B - Method for detecting mine well wall under low illumination condition based on convolutional neural network - Google Patents


Info

Publication number
CN111681223B
CN111681223B (application CN202010517286.6A)
Authority
CN
China
Prior art keywords
image
detection
network
decomposition
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010517286.6A
Other languages
Chinese (zh)
Other versions
CN111681223A (en)
Inventor
黄友锐
韩涛
徐善永
许家昌
鲍士水
凌六一
唐超礼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202010517286.6A priority Critical patent/CN111681223B/en
Publication of CN111681223A publication Critical patent/CN111681223A/en
Application granted granted Critical
Publication of CN111681223B publication Critical patent/CN111681223B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/97 — Determining parameters from multiple pictures
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06T 5/90


Abstract

The invention discloses a method, based on a convolutional neural network, for detecting a mine well wall under low-illumination conditions, comprising a training stage and an online detection stage. In the training stage, an image decomposition network and an image detection network are constructed, and each is trained on its own acquired data set (an image decomposition data set and an image detection data set). The online detection stage comprises image decomposition, image enhancement, and image detection: the trained image decomposition network decomposes the on-site well wall image into a reflection image and an illumination image, image enhancement then brightens the well wall image, and finally the trained image detection network detects the well wall image, realizing detection of the mine well wall state under low illumination. The invention improves the accuracy of mine well wall detection, reduces its operating cost, and improves safety.

Description

Method for detecting mine well wall under low illumination condition based on convolutional neural network
Technical Field
The invention relates to the field of mine well wall detection, and in particular to a convolutional-neural-network-based method for detecting a mine well wall under low-illumination conditions.
Background
The terrain and geological environment of most mining areas in China is relatively harsh: the strength of the ore-seam rock mass is insufficient, and adverse effects such as stratum consolidation and rising groundwater levels mean that the mine well wall often develops large internal stresses. When the stress exceeds the ultimate strength of the well wall structure, well wall failure and mine collapse accidents occur. To discover damage to the mine well wall promptly and accurately and to reduce the safety hazards of mine production, the well wall must be inspected accurately and efficiently.
At present, detection of mine well walls in China generally relies on manual screening: inspectors ride the cage and inspect at close range with the naked eye, or personnel review video images shot by a camera mounted on the cage, which makes manual detection inaccurate and ineffective. Moreover, because the underground mine environment is special and has no fixed lighting equipment, illumination can only be carried on the cage or by personnel; whether observing directly or through camera video, the images are dark and hard to examine (such dark images are called low-illumination images in image processing). As a result, mine well wall detection is costly, inefficient, slow, time-consuming, and inaccurate, and carries high safety risks.
Disclosure of Invention
The invention aims to provide a convolutional-neural-network-based method for detecting a mine well wall under low illumination, so as to solve the poor detection effect, low accuracy, and long duration of manual inspection of the mine well wall condition in the prior art.
To achieve this aim, the invention adopts the following technical scheme:
A method for detecting a mine well wall under low illumination based on a convolutional neural network, characterized by comprising the following steps:
(1) Constructing an image decomposition network, and training the image decomposition network by using the acquired image data, wherein the process is as follows:
(1.1) constructing an image decomposition training data set and an image decomposition testing data set:
Shoot and collect images at multiple positions of the mine well wall with a camera, taking for each position one image under normal exposure conditions and one image under low exposure conditions; the normal-exposure image and the low-exposure image of each position form the illumination image pair of that position, giving illumination image pairs for the multiple positions. Select a part of these illumination image pairs as the image decomposition training data set and use the remainder as the image decomposition test data set;
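The split of illumination image pairs into a larger training set and a smaller test set can be sketched as follows; the 80/20 fraction, the pair count of 500, and the file names are illustrative assumptions, not values fixed by the claim.

```python
import random

def split_pairs(pairs, train_fraction=0.8, seed=0):
    """Shuffle illumination image pairs and split them into an
    image-decomposition training set and a (smaller) test set."""
    rng = random.Random(seed)
    shuffled = pairs[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Each pair holds (normal_exposure_path, low_exposure_path) for one position;
# the names below are hypothetical placeholders.
pairs = [(f"pos{i}_normal.png", f"pos{i}_low.png") for i in range(500)]
train_set, test_set = split_pairs(pairs)
```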
(1.2) constructing an image decomposition network:
Construct a convolutional neural network consisting of convolutional layer 1 and convolution activation layers 2 through 8 as the image decomposition network. For an input image a^[0]:
convolutional layer 1 outputs a^[1] = w^[1]·a^[0] + b^[1];
convolution activation layers 2 to 6 output a^[i] = ReLU(w^[i]·a^[i-1] + b^[i]), i ∈ {2, 3, 4, 5, 6};
convolution activation layer 7 outputs a^[7] = sigmoid(w^[7]·a^[6] + b^[7]), where a^[7] is the reflection image R obtained by decomposing the input image;
convolution activation layer 8 outputs a^[8] = sigmoid(w^[8]·a^[6] + b^[8]), where a^[8] is the illumination image I obtained by decomposing the input image;
where w^[ly] and b^[ly] are unknown quantities determined by training, ly ∈ {1, 2, 3, 4, 5, 6, 7, 8}; the ReLU function is ReLU(x) = Max(0, x), with Max(x) taking the maximum of its arguments; the sigmoid function is sigmoid(x) = 1/(1 + e^(-x)); and x stands for the argument substituted into the function in the corresponding convolution activation layer;
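A minimal numeric sketch of the decomposition network's forward pass; scalar weights w[ly], b[ly] stand in for trained 2-D convolution kernels purely for illustration (the patent's layers are convolutions).

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decompose(a0, w, b):
    """Forward pass of the 8-layer decomposition network:
    layer 1 is a plain convolution, layers 2-6 add ReLU, and the two
    sigmoid heads (layers 7 and 8) emit the reflection image R and the
    illumination image I from the shared feature a^[6]."""
    a = w[1] * a0 + b[1]                  # layer 1: convolution, no activation
    for ly in range(2, 7):                # layers 2-6: convolution + ReLU
        a = relu(w[ly] * a + b[ly])
    R = sigmoid(w[7] * a + b[7])          # layer 7 head: reflection image R
    I = sigmoid(w[8] * a + b[8])          # layer 8 head: illumination image I
    return R, I

# Identity-like toy parameters (illustrative, not trained values).
w = {ly: 1.0 for ly in range(1, 9)}
b = {ly: 0.0 for ly in range(1, 9)}
image = np.full((4, 4), 0.5)              # toy 4x4 "well wall image"
R, I = decompose(image, w, b)
```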
(1.3) inputting the image decomposition training data set and the image decomposition testing data set obtained in the step (1.1) into the image decomposition network constructed in the step (1.2) to carry out repeated iterative training on the image decomposition network to obtain a trained image decomposition network;
(2) Constructing an image detection network, and training the image detection network by using the acquired image data, wherein the process is as follows:
(2.1) constructing an image detection training data set and an image detection testing data set:
Firstly, acquire images of multiple positions of the mine well wall, search them for images showing several well wall conditions, including the no-anomaly condition, and select a number of images of each condition to form a well wall image sample set;
secondly, establish a label value table of the well wall conditions, in which images of the same well wall condition in the sample set are given the same state label value and images of different well wall conditions are given different state label values;
then, select a number of images of each well wall condition from the sample set, arrange the selected images in random order, and store each image's name and corresponding state label value in one training file in that order; the images selected and arranged from the sample set and the training file together form the image detection training data set;
finally, in the same way, select a number of images of each condition from the images remaining in the sample set, arrange them in random order, and store each image's name and corresponding state label value in one test file in that order; these images and the test file together form the image detection test data set;
(2.2) constructing an image detection network:
Construct a convolutional neural network consisting of convolution activation layer 1, pooling layer 1, convolution activation layer 2, pooling layer 2, convolution activation layer 3, pooling layer 3, convolution activation layer 4, pooling layer 4, convolution activation layer 5, pooling layer 5, fully-connected layer 6, fully-connected layer 7, and a Softmax layer. For an input image a^[0]:
each convolution activation layer outputs z^[j] = ReLU(w^[j]·a^[j-1] + b^[j]), j ∈ {1, 2, 3, 4, 5};
each pooling layer outputs a^[j] = Max(z^[j]), j ∈ {1, 2, 3, 4, 5};
each fully-connected layer outputs a^[k] = w^[k]·a^[k-1] + b^[k], k ∈ {6, 7};
the Softmax layer outputs ŷ_t = e^(a_t^[7]) / Σ_t e^(a_t^[7]), the probability that the detection result is each state label value t; the state label value with the highest probability is selected as the final detection result, i.e. result = argmax_t ŷ_t;
where: w^[lay] and b^[lay] are unknown and determined by training, lay ∈ {1, 2, 3, 4, 5, 6, 7}; the Max(x) function takes the maximum of its arguments; ReLU(x) = Max(0, x), with x standing for the argument substituted into the function in the corresponding convolution activation layer;
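The Softmax-and-argmax step that turns the final fully-connected output into a state label can be sketched as follows; the logit values are made up for illustration, and the 1-based label numbering is an assumption matching the label value table given later (no anomaly = 1, crack = 2, pothole = 3, water seepage = 4).

```python
import numpy as np

def softmax(logits):
    """Softmax over the final fully-connected output a^[7]; each entry is
    the probability that the image carries the corresponding state label."""
    e = np.exp(logits - logits.max())   # shift by the max for numerical stability
    return e / e.sum()

# Hypothetical logits for the four state labels 1..4.
logits = np.array([0.2, 2.1, 0.5, 0.3])
probs = softmax(logits)
label = int(np.argmax(probs)) + 1       # label values start at 1
```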
(2.3) inputting the image detection training data set and the image detection test data set obtained in the step (2.1) into the image detection network constructed in the step (2.2) so as to carry out iterative training on the image detection network for multiple times to obtain the trained image detection network;
(3) Carry out online detection of the mine well wall with the trained image decomposition network obtained in step (1) and the trained image detection network obtained in step (2), as follows:
(3.1) Acquire a mine well wall image;
(3.2) Input the well wall image into the trained image decomposition network and decompose it into a reflection image R and an illumination image I;
(3.3) Using the formula L(x, y) = [I(x, y)]^γ, perform brightness gamma correction on the illumination image I to obtain the illumination brightness correction image L, where I(x, y) is the pixel value at position (x, y) in the illumination image I, L(x, y) is the pixel value at position (x, y) in the corrected image L, and γ is a constant with γ < 1; multiply each pixel value of the reflection image R by the pixel value at the corresponding position of the illumination brightness correction image L to obtain the brightness-enhanced image S, computed as S(x, y) = L(x, y)·R(x, y);
(3.4) Input the brightness-enhanced image S into the trained image detection network; the output computed by the image detection network is the detection result of the well wall condition, one of the multiple well wall conditions.
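The enhancement in step (3.3) can be sketched numerically; γ = 0.5 and the toy pixel values are illustrative (the patent only requires γ < 1).

```python
import numpy as np

GAMMA = 0.5   # illustrative; step (3.3) only requires gamma < 1

def enhance(R, I, gamma=GAMMA):
    """Brightness enhancement of step (3.3): L(x, y) = I(x, y)**gamma,
    then S(x, y) = L(x, y) * R(x, y), with pixel values in [0, 1]."""
    L = np.power(I, gamma)   # gamma < 1 lifts dark illumination values
    return L * R

R = np.full((2, 2), 0.8)     # toy reflection image
I = np.full((2, 2), 0.04)    # toy (dark) illumination image
S = enhance(R, I)            # 0.04**0.5 = 0.2, so S = 0.2 * 0.8 = 0.16
```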
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network as described above, characterized in that: in step (1.1), the image decomposition training data set contains more images than the image decomposition test data set.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network as described above, characterized in that the specific process of step (1.3) is as follows:
(1.3a) Parameter initialization: set w^[ly] and b^[ly] to random values, ly ∈ {1, 2, 3, 4, 5, 6, 7, 8}; set the number of iterations S_p, the learning rate L_p, and the decomposition accuracy threshold T_p; denote the e-th illumination image pair of the image decomposition training data set as S_pair[e], and let e = 1;
(1.3b) Take the normal-illumination image S_normal in S_pair[e], input it into the image decomposition network, and compute the reflection image R_normal and the illumination image I_normal output by the network;
(1.3c) Take the low-illumination image S_low in S_pair[e], input it into the image decomposition network, and compute the reflection image R_low and the illumination image I_low output by the network;
(1.3d) From R_low and R_normal, compute the reflection image consistency loss function L(R_low, R_normal) = ||R_low - R_normal||_1;
(1.3e) Compute the change values Δw^[ly] = ∂L/∂w^[ly] and Δb^[ly] = ∂L/∂b^[ly] of the parameters w^[ly] and b^[ly], where ∂ denotes the partial derivative;
(1.3f) Update w^[ly] and b^[ly] according to w^[ly] = w^[ly] - L_p·Δw^[ly] and b^[ly] = b^[ly] - L_p·Δb^[ly], ly ∈ {1, 2, 3, 4, 5, 6, 7, 8};
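Steps (1.3d) through (1.3f) amount to an L1 consistency loss followed by a plain gradient-descent update; a sketch with toy values (the learning rate L_p and all numbers are illustrative).

```python
import numpy as np

LEARNING_RATE = 0.1   # L_p in the patent; value here is illustrative

def l1_consistency_loss(R_low, R_normal):
    """Step (1.3d): L(R_low, R_normal) = ||R_low - R_normal||_1,
    the sum of absolute pixel differences of the two reflection images."""
    return np.abs(R_low - R_normal).sum()

def sgd_step(w, dw, lr=LEARNING_RATE):
    """Step (1.3f): parameter update w <- w - L_p * dw."""
    return w - lr * dw

loss = l1_consistency_loss(np.array([0.2, 0.4]), np.array([0.1, 0.6]))
w_new = sgd_step(1.0, 0.5)   # 1.0 - 0.1 * 0.5 = 0.95
```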
(1.3 g), judging whether the image is the last illumination image pair, if not, inputting the next illumination image pair, and returning to the step (1.3 b); if yes, turning to the step (1.3 h) to calculate the decomposition accuracy rate;
(1.3h) Input all illumination image pairs of the image decomposition test data set; for each pair, compute the reflection image R_normal-test decomposed from the normal-illumination image and the reflection image R_low-test decomposed from the low-illumination image, compare them, and then compute the decomposition accuracy Ac_p = Σ(R_low-test == R_normal-test) / ΣNum(S_pair-test), where Σ(R_low-test == R_normal-test) is the number of pairs whose two decomposed reflection images coincide and ΣNum(S_pair-test) is the total number of image pairs in the test data set;
(1.3k) Judge whether the decomposition accuracy meets the requirement: if Ac_p ≥ T_p, go to step (1.3l) and finish training; if Ac_p < T_p, judge whether the iterations are exhausted: if S_p ≠ 0, go to step (1.3b) and run a new round of training on the training data set until the iterations are exhausted; if S_p = 0, training ends;
(1.3l) Store all parameters w^[ly] and b^[ly], ly ∈ {1, 2, 3, 4, 5, 6, 7, 8}; training of the image decomposition network is complete.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network as described above, characterized in that: in step (2.1), the well wall conditions searched for in the images, in addition to the no-anomaly condition, at least include crack, pothole, and water seepage conditions.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network as described above, characterized in that: in step (2.1), in the label value table, the state label value set for no-anomaly images in the well wall image sample set is 1, for crack images 2, for pothole images 3, and for water seepage images 4.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network as described above, characterized in that the specific process of step (2.3) is as follows:
(2.3a) Parameter initialization: set w^[lay] and b^[lay] to random values, lay ∈ {1, 2, 3, 4, 5, 6, 7}; set the number of iterations S_d, the learning rate L_d, and the image detection accuracy threshold T_d; denote the f-th image of the image detection training data set as IM[f], and let f = 1;
(2.3b) Input image IM[f] into the image detection network and compute its detection estimate ŷ;
(2.3c) From the state label value y corresponding to the input image IM[f] and the computed detection estimate ŷ, compute the cross-entropy loss function L(ŷ, y) = -Σ_t y_t·log ŷ_t;
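Step (2.3c)'s cross-entropy between the 1-based state label and the softmax estimate ŷ can be sketched as follows; the four-label count and the probability values are illustrative.

```python
import numpy as np

def cross_entropy(y_true_label, y_prob, num_labels=4):
    """Cross-entropy between the one-hot true state label and the
    softmax probabilities: L = -sum_t y_t * log(y_hat_t)."""
    y_onehot = np.zeros(num_labels)
    y_onehot[y_true_label - 1] = 1.0   # labels are 1-based in the patent
    return float(-np.sum(y_onehot * np.log(y_prob)))

# True label "crack" (2) with a softmax estimate that puts 0.7 on it.
loss = cross_entropy(2, np.array([0.1, 0.7, 0.1, 0.1]))
```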
(2.3d) Compute the change values Δw^[lay] = ∂L/∂w^[lay] and Δb^[lay] = ∂L/∂b^[lay] of each parameter w^[lay] and b^[lay] in each layer of the image detection network, where lay ∈ {1, 2, 3, 4, 5, 6, 7};
(2.3e) Update w^[lay] and b^[lay] according to w^[lay] = w^[lay] - L_d·Δw^[lay] and b^[lay] = b^[lay] - L_d·Δb^[lay], where lay ∈ {1, 2, 3, 4, 5, 6, 7};
(2.3 f), judging whether the image is the last image, if not, inputting the next image, and returning to the step (2.3 b); if yes, the step is carried out (2.3 g) to calculate the detection accuracy;
(2.3g) Input the images of the image detection test data set, compute the detection estimate ŷ of each image, compare it with the corresponding label state value y stored in the test file, and compute the detection accuracy Ac_d = ΣNum(ŷ == y) / ΣNum(y), where ΣNum(ŷ == y) is the number of images whose detection estimate equals the label state value and ΣNum(y) is the total number of images in the image detection test data set;
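Step (2.3g)'s detection accuracy Ac_d reduces to a hit-rate over the test images; a sketch with made-up predicted and stored label values.

```python
def detection_accuracy(predicted, expected):
    """Ac_d = (images whose detection estimate equals the stored label
    state value) / (total images in the image detection test set)."""
    hits = sum(1 for p, e in zip(predicted, expected) if p == e)
    return hits / len(expected)

# Hypothetical labels: 1 no anomaly, 2 crack, 3 pothole, 4 water seepage.
acc = detection_accuracy([1, 2, 2, 4, 3], [1, 2, 3, 4, 3])   # 4 of 5 correct
```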
(2.3h) Judge whether the detection accuracy meets the requirement: if Ac_d ≥ T_d, go to step (2.3k) and finish training; if Ac_d < T_d, judge whether the iterations are exhausted: if S_d ≠ 0, go to step (2.3b) and run a new round of training on the image detection training data set until the iterations are exhausted; if S_d = 0, training ends;
(2.3k) Store all parameters w^[lay] and b^[lay], lay ∈ {1, 2, 3, 4, 5, 6, 7}; training of the image detection network is complete.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network as described above, characterized in that: in step (3.4), the final output detection result is one of the no-anomaly, crack, pothole, and water seepage conditions.
The invention provides a method for detecting a mine well wall under low illumination based on a convolutional neural network. An image decomposition network built from a convolutional neural network, together with brightness gamma correction, enhances the brightness of mine well wall images captured under low illumination; an image detection network, also built from a convolutional neural network, then detects the well wall image automatically and accurately identifies abnormal states of the mine well wall.
The invention has the beneficial effects that:
the invention is used for automatically detecting the wall of the mine well under the condition of low illumination. An image decomposition network is constructed through a convolutional neural network, the brightness of the mine well wall image under the low illumination condition is enhanced by using a brightness gamma calibration method, and the automatic detection of the mine well wall image is realized through the image detection network constructed through the convolutional neural network. The method of the invention uses an image processing mode for detection, avoids manual participation, realizes the automation of detection, reduces the operation cost, increases the detection efficiency and reduces the potential safety hazard in the detection. The image enhancement and the well wall state detection of the image under the low illumination condition are realized through the convolutional neural network, the image brightness enhancement effect is good, the detection speed is high, the efficiency is high, and the accuracy and the reliability of the detection are greatly improved.
Drawings
FIG. 1 is an overall block diagram of the process of the present invention.
Fig. 2 is a block diagram of an image decomposition network of the method of the present invention.
FIG. 3 is a flow chart of a training image decomposition network of the method of the present invention.
FIG. 4 is a schematic diagram of constructing the image detection training data set and the image detection test data set in the method of the present invention.
FIG. 5 is a block diagram of an image detection network according to the method of the present invention.
FIG. 6 is a flow chart of a training image detection network of the method of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in FIG. 1, the method comprises a training stage and an online detection stage. In the training stage, an image decomposition network and an image detection network are constructed with convolutional neural networks; the image decomposition network is trained with captured normal-illumination and low-illumination images, and the image detection network is trained with acquired images of the mine well wall showing no anomaly, cracks, potholes, and water seepage. After training, in the online detection stage, a captured image of the actual mine well wall is decomposed by the trained image decomposition network into an illumination image and a reflection image; the illumination image undergoes brightness gamma correction and is then fused again with the reflection image to enhance the brightness of the low-illumination image; the brightness-enhanced well wall image is input into the image detection network, and finally the detection result of the mine well wall state is obtained. The specific process is as follows:
(1) Constructing an image decomposition network, and training the image decomposition network by using the acquired image data, wherein the process is as follows:
(1.1) the process of constructing the image decomposition training data set and the image decomposition testing data set comprises the following steps:
First, images are taken with a camera: an image taken under normal exposure conditions is called the normal-light image S_normal; an image of the same position taken under low exposure conditions is called the low-light image S_low; together, the normal-light image and the low-light image of the same position are called an illumination image pair S_pair. Then, different positions of the mine well wall are selected and M illumination image pairs are taken, M ≥ 500; M_train image pairs are used as the image decomposition training data set and the remaining M_test image pairs as the image decomposition test data set, where M_train > M_test.
(1.2) as shown in fig. 2, constructing an image decomposition network:
Construct a convolutional neural network consisting of convolutional layer 1 and convolution activation layers 2 through 8 as the image decomposition network. For an input image a^[0]:
convolutional layer 1 outputs a^[1] = w^[1]·a^[0] + b^[1];
convolution activation layers 2 to 6 output a^[i] = ReLU(w^[i]·a^[i-1] + b^[i]), i ∈ {2, 3, 4, 5, 6};
convolution activation layer 7 outputs a^[7] = sigmoid(w^[7]·a^[6] + b^[7]), where a^[7] is the reflection image R obtained by decomposing the input image;
convolution activation layer 8 outputs a^[8] = sigmoid(w^[8]·a^[6] + b^[8]), where a^[8] is the illumination image I obtained by decomposing the input image;
where w^[ly] and b^[ly] are unknown quantities determined by training, ly ∈ {1, 2, 3, 4, 5, 6, 7, 8}; the ReLU function is ReLU(x) = Max(0, x), with Max(x) taking the maximum of its arguments; the sigmoid function is sigmoid(x) = 1/(1 + e^(-x)); and x stands for the argument substituted into the function in the corresponding convolution activation layer;
(1.3) as shown in FIG. 3, the process of training the image decomposition network of the method of the present invention is:
(1.3a) Parameter initialization: set w^[ly] and b^[ly] to random values, ly ∈ {1, 2, 3, 4, 5, 6, 7, 8}; set the number of iterations S_p, the learning rate L_p, and the decomposition accuracy threshold T_p; denote the e-th illumination image pair of the image decomposition training data set as S_pair[e], and let e = 1;
(1.3b) Take the normal-illumination image S_normal in S_pair[e], input it into the image decomposition network, and compute the reflection image R_normal and the illumination image I_normal output by the network;
(1.3c) Take the low-illumination image S_low in S_pair[e], input it into the image decomposition network, and compute the reflection image R_low and the illumination image I_low output by the network;
(1.3d) From R_low and R_normal, compute the reflection image consistency loss function L(R_low, R_normal) = ||R_low - R_normal||_1;
(1.3e) Compute the change values Δw^[ly] = ∂L/∂w^[ly] and Δb^[ly] = ∂L/∂b^[ly] of the parameters w^[ly] and b^[ly], where ∂ denotes the partial derivative;
(1.3f) Update w^[ly] and b^[ly] according to w^[ly] = w^[ly] - L_p·Δw^[ly] and b^[ly] = b^[ly] - L_p·Δb^[ly], ly ∈ {1, 2, 3, 4, 5, 6, 7, 8};
(1.3 g), judging whether the image is the last illumination image pair, if not, inputting the next illumination image pair, and returning to the step (1.3 b); if yes, turning to the step (1.3 h) to calculate the decomposition accuracy rate;
(1.3h) Input all illumination image pairs of the image decomposition test data set; for each pair, compute the reflection image R_normal-test decomposed from the normal-illumination image and the reflection image R_low-test decomposed from the low-illumination image, compare them, and then compute the decomposition accuracy Ac_p = Σ(R_low-test == R_normal-test) / ΣNum(S_pair-test), where Σ(R_low-test == R_normal-test) is the number of pairs whose two decomposed reflection images coincide and ΣNum(S_pair-test) is the total number of image pairs in the test data set;
(1.3k) Judge whether the decomposition accuracy meets the requirement: if Ac_p ≥ T_p, go to step (1.3l) and finish training; if Ac_p < T_p, judge whether the iterations are exhausted: if S_p ≠ 0, go to step (1.3b) and run a new round of training on the training data set until the iterations are exhausted; if S_p = 0, training ends;
(1.3l) Store all parameters w^[ly] and b^[ly], ly ∈ {1, 2, 3, 4, 5, 6, 7, 8}; training of the image decomposition network is complete.
(2) Constructing an image detection network, and training the image detection network by using the acquired image data, wherein the process is as follows:
(2.1) as shown in fig. 4, constructing an image detection training data set and an image detection test data set:
firstly, shooting well wall images with a camera and selecting N images for each of the four conditions "no abnormity", "crack", "pothole" and "water seepage", N ≥ 800, to form a well wall image sample set; secondly, establishing a label value table of the four well wall states: "no abnormity" = 1, "crack" = 2, "pothole" = 3, "water seepage" = 4; thirdly, setting the corresponding state label value for each image in the well wall image sample set; finally, selecting 500 well wall images of each of "no abnormity", "crack", "pothole" and "water seepage" from the sample set, arranging the 2000 images in random order, and storing the image name of each image together with its well wall state label value in a training file in that order, the 2000 well wall images and the training file forming the image detection training data set; in the same way, randomly selecting 200 images from the remaining images of the sample set, placing them in random order, and storing the image names of the 200 images and the corresponding well wall state label values in a test_label.txt file, namely a test file, in the placing order, the 200 images and the test_label.txt file serving as the image detection test data set;
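The pairing of shuffled image names with state label values in step (2.1) can be sketched as follows. The condition keys, image file names and the output file name below are illustrative assumptions; the patent only fixes the label values 1–4 and the name of the test file (test_label.txt):

```python
import os
import random
import tempfile

# state label value table from step (2.1)
LABELS = {"no_abnormity": 1, "crack": 2, "pothole": 3, "water_seepage": 4}

def build_label_file(samples, path):
    """samples: list of (image_name, condition) pairs; shuffles them and writes
    one 'image_name state_label_value' line per image in the random placing order."""
    rows = [(name, LABELS[cond]) for name, cond in samples]
    random.shuffle(rows)  # random placing order, as in the patent
    with open(path, "w") as f:
        for name, label in rows:
            f.write(f"{name} {label}\n")
    return rows

# 3 hypothetical images per well wall condition
samples = [(f"wall_{cond}_{i}.png", cond) for cond in LABELS for i in range(3)]
path = os.path.join(tempfile.gettempdir(), "train_label_demo.txt")
rows = build_label_file(samples, path)
```

The same routine would serve for both the training file and the test file, only with different sample subsets.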
(2.2) as shown in fig. 5, constructing an image detection network:
constructing a convolutional neural network composed of convolution activation layer 1, pooling layer 1, convolution activation layer 2, pooling layer 2, convolution activation layer 3, pooling layer 3, convolution activation layer 4, pooling layer 4, convolution activation layer 5, pooling layer 5, fully-connected layer 6, fully-connected layer 7 and a Softmax layer; the input image a[0] passes through the convolution activation layers with output z[j] = ReLU(w[j]a[j-1] + b[j]), j ∈ (1,2,3,4,5), through the pooling layers with output a[j] = Max(z[j]), j ∈ (1,2,3,4,5), through the fully-connected layers with output a[k] = w[k]a[k-1] + b[k], k ∈ (6,7), and through the Softmax layer with output ŷ = Softmax(a[7]); the Softmax layer calculates the probability that the detection result is each possible state label value, ŷ_i = e^(a_i[7]) / ∑_j e^(a_j[7]), and the state label value with the highest probability is selected as the final detection result, i.e. y_pred = argmax(ŷ);
Wherein:
w [lay] , b [lay] are unknown and need to be determined by training, lay ∈ (1,2,3,4,5,6,7); the Max(x) function represents taking the maximum value of its argument; the ReLU(x) function is calculated as ReLU(x) = Max(0, x); x represents the corresponding parameter substituted into the function at the corresponding convolution activation layer;
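The Softmax selection described above can be sketched in isolation. The logits below are hypothetical stand-ins for the fully-connected output a[7], one score per state label value 1–4:

```python
import math

def softmax(x):
    """Numerically stable Softmax: turns scores into probabilities summing to 1."""
    m = max(x)  # subtracting the max avoids overflow in exp
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical fully-connected outputs a[7] for the four well wall states
logits = [0.5, 2.0, 0.1, -1.0]
probs = softmax(logits)
# final detection result: the state label value with the highest probability
result = probs.index(max(probs)) + 1  # label values start at 1, not 0
```

With these scores the second state ("crack" = 2 in the patent's label table) wins, since its logit dominates the others.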
(2.3) as shown in fig. 6, the process of training the image detection network is as follows:
(2.3a), parameter initialization: set w[lay] and b[lay] to random values, lay ∈ (1,2,3,4,5,6,7); set the number of iterations to S_d, the learning rate to L_d and the image detection accuracy threshold to T_d; denote the f-th image of the image detection training data set as IM[f] and let f = 1;
(2.3b), inputting the image IM[f] into the image detection network and calculating its detection estimation value ŷ;
(2.3c), according to the state label value y corresponding to the input image IM[f] and the calculated estimation value ŷ, calculating the cross entropy loss function L(ŷ, y) = -∑_i y_i · log(ŷ_i);
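The cross entropy loss of step (2.3c) can be sketched for a single image, assuming the standard multi-class form with a one-hot label (the probability values are hypothetical):

```python
import math

def cross_entropy(y_hat, y):
    """L(y_hat, y) = -sum_i y_i * log(y_hat_i) for a one-hot label vector y."""
    eps = 1e-12  # guard against log(0) when a probability collapses to zero
    return -sum(yi * math.log(max(phi, eps)) for phi, yi in zip(y_hat, y))

y = [0, 1, 0, 0]              # one-hot label: true state label value is 2
y_hat = [0.1, 0.7, 0.1, 0.1]  # Softmax output of the detection network
loss = cross_entropy(y_hat, y)
```

Only the probability assigned to the true state contributes, so the loss shrinks toward zero as that probability approaches 1.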
(2.3d), calculating the change values Δw[lay] and Δb[lay] of each parameter w[lay] and b[lay] in each layer of the image detection network: Δw[lay] = ∂L/∂w[lay] , Δb[lay] = ∂L/∂b[lay] , wherein lay ∈ (1,2,3,4,5,6,7);
(2.3e), according to the formula w[lay] = w[lay] - L_d*Δw[lay] , b[lay] = b[lay] - L_d*Δb[lay] , update w[lay] and b[lay] , wherein lay ∈ (1,2,3,4,5,6,7);
(2.3 f), judging whether the image is the last image, if not, inputting the next image, and returning to the step (2.3 b); if yes, the step is carried out (2.3 g) to calculate the detection accuracy;
(2.3g), inputting the images of the image detection test data set, calculating the detection estimation value ŷ of each image, comparing it with the corresponding label state value y stored in the test file, and calculating the detection accuracy rate Ac_d = ∑(ŷ == y) / ∑Num(y), wherein ∑(ŷ == y) represents the number of images whose detection estimation value equals the label state value, and ∑Num(y) represents the total number of images in the image detection test data set;
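The accuracy rate of step (2.3g) is a simple ratio of agreements. A minimal sketch on a hypothetical 10-image test set (the prediction and label lists are illustrative):

```python
def detection_accuracy(predictions, labels):
    """Ac_d = (number of images whose prediction equals the label) / total images."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

# hypothetical results on a 10-image test set (state label values 1..4)
preds  = [1, 2, 2, 3, 4, 1, 1, 3, 4, 2]
labels = [1, 2, 2, 3, 4, 1, 2, 3, 4, 4]
acc = detection_accuracy(preds, labels)
```

Here 8 of 10 predictions match, so Ac_d = 0.8; training stops once this ratio reaches the threshold T_d.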
(2.3h), judging whether the detection accuracy rate meets the requirement: if Ac_d ≥ T_d, turning to step (2.3k) to finish training; if Ac_d < T_d, judging whether the iterations are finished: if S_d ≠ 0, turning to step (2.3b) and reusing the image detection training data set for a new round of training until the iterations are finished; if S_d = 0, the training is finished;
(2.3k), storing all parameters w[lay] and b[lay], lay ∈ (1,2,3,4,5,6,7), to finish the training of the image detection network.
(3) The process of online detection of the wall of the mine well comprises the following steps:
(3.1) acquiring and obtaining a mine well wall image;
(3.2) inputting the borehole wall image into a trained image decomposition network, and decomposing a reflection image R and an illumination image I;
(3.3), using the formula L(x, y) = [I(x, y)]^γ to perform brightness gamma correction on the illumination image I to obtain the illumination brightness correction image L, wherein I(x, y) represents the pixel value at position (x, y) in the illumination image I, L(x, y) represents the pixel value at position (x, y) in the corrected image L, and γ is a constant with γ < 1; multiplying each pixel value of the reflection image R by the pixel value at the corresponding position of the illumination brightness correction image L to obtain the pixel values of the brightness enhancement image S, the calculation formula being S(x, y) = L(x, y) · R(x, y);
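The gamma correction and recombination of step (3.3) can be sketched directly. The 2×2 images and the value γ = 0.5 below are hypothetical; the patent only requires γ < 1, which brightens dark regions when pixel values are normalised to [0, 1]:

```python
def enhance(reflection, illumination, gamma=0.5):
    """Gamma-correct the illumination image, L(x,y) = I(x,y)**gamma, then
    recombine with the reflection image: S(x,y) = L(x,y) * R(x,y).
    Images are lists of rows with pixel values normalised to [0, 1]."""
    h, w = len(illumination), len(illumination[0])
    return [[(illumination[y][x] ** gamma) * reflection[y][x]
             for x in range(w)] for y in range(h)]

R = [[0.8, 0.5], [0.9, 0.4]]     # reflection image (scene content)
I = [[0.04, 0.25], [0.5, 0.09]]  # dark illumination image
S = enhance(R, I, gamma=0.5)
```

Because t**γ > t for 0 < t < 1 and γ < 1, every illumination value is raised before the product is taken, so S is brighter than the plain product I·R while staying in [0, 1].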
and (3.4), inputting the brightness enhancement image S into the trained image detection network, wherein the output result calculated by the image detection network is the detection result of the well wall state and is one of the four states "no abnormity", "crack", "pothole" and "water seepage".
The above description covers only preferred embodiments of the present invention and does not limit its concept and scope; various modifications and improvements made to the technical solution of the present invention by those skilled in the art without departing from its design concept shall fall within the protection scope of the present invention, the claimed technical content being fully set forth in the claims.

Claims (7)

1. A mine well wall detection method under low illumination conditions based on a convolutional neural network, characterized in that the method comprises the following steps:
(1) Constructing an image decomposition network, and training the image decomposition network by using the acquired image data, wherein the process is as follows:
(1.1) constructing an image decomposition training data set and an image decomposition testing data set:
shooting and collecting images at a plurality of positions of a mine shaft wall through a camera, respectively shooting and collecting images under a normal exposure condition and images under a low exposure condition for each position of the mine shaft wall, forming an illumination image pair of the position by the images under the normal exposure condition and the images under the low exposure condition of each position, thereby obtaining illumination image pairs of the plurality of positions, selecting a part from the plurality of illumination image pairs as an image decomposition training data set, and using the rest parts as an image decomposition testing data set;
(1.2) constructing an image decomposition network:
constructing a convolutional neural network composed of convolution layer 1, convolution activation layer 2, convolution activation layer 3, convolution activation layer 4, convolution activation layer 5, convolution activation layer 6, convolution activation layer 7 and convolution activation layer 8 as the image decomposition network; the input image a[0] passes through convolution layer 1 with output a[1] = w[1]a[0] + b[1], through convolution activation layers 2 to 6 with output a[i] = ReLU(w[i]a[i-1] + b[i]), i ∈ (2,3,4,5,6), through convolution activation layer 7 with output a[7] = sigmoid(w[7]a[6] + b[7]), a[7] being the reflection image R obtained after decomposition of the input image, and through convolution activation layer 8 with output a[8] = sigmoid(w[8]a[6] + b[8]), a[8] being the illumination image I obtained after decomposition of the input image; wherein:
w[ly], b[ly] are all unknown quantities to be determined by training, ly ∈ (1,2,3,4,5,6,7,8); the ReLU function is calculated as ReLU(x) = Max(0, x), the Max(x) function represents taking the maximum value of its argument, and the sigmoid function is calculated as sigmoid(x) = 1/(1 + e^(-x)); x represents the corresponding parameter substituted into the function at the corresponding convolution activation layer;
(1.3) inputting the image decomposition training data set and the image decomposition testing data set obtained in the step (1.1) into the image decomposition network constructed in the step (1.2) so as to carry out iterative training on the image decomposition network for multiple times to obtain the trained image decomposition network;
(2) Constructing an image detection network, and training the image detection network by using the acquired image data, wherein the process is as follows:
(2.1) constructing an image detection training data set and an image detection testing data set:
firstly, acquiring images of a plurality of positions of a mine well wall, searching images with various well wall conditions including abnormal conditions, and selecting a plurality of images from each well wall condition to form a well wall image sample set;
secondly, establishing a label value table of the various well wall conditions, in which images of the same well wall condition in the well wall image sample set are set with the same state label value and images of different well wall conditions are set with different state label values;
then, selecting a plurality of images of each well wall condition from the well wall image sample set, placing the selected images of the various well wall conditions in random order, and storing the image name and the corresponding state label value of each image in the same training file in the placing order; the images thus selected and placed from the well wall image sample set and the training file form the image detection training data set;
in the same way, selecting a plurality of images from the images remaining in the well wall image sample set after the above selection, placing the selected images of the various well wall conditions in random order, and storing the image name and the corresponding state label value of each image in the same test file in the placing order; the images selected from the remaining images and placed and the test file form the image detection test data set;
(2.2) constructing an image detection network:
constructing a convolutional neural network composed of convolution activation layer 1, pooling layer 1, convolution activation layer 2, pooling layer 2, convolution activation layer 3, pooling layer 3, convolution activation layer 4, pooling layer 4, convolution activation layer 5, pooling layer 5, fully-connected layer 6, fully-connected layer 7 and a Softmax layer; the input image a[0] passes through the convolution activation layers with output z[j] = ReLU(w[j]a[j-1] + b[j]), j ∈ (1,2,3,4,5), through the pooling layers with output a[j] = Max(z[j]), j ∈ (1,2,3,4,5), through the fully-connected layers with output a[k] = w[k]a[k-1] + b[k], k ∈ (6,7), and through the Softmax layer with output ŷ = Softmax(a[7]); the Softmax layer calculates the probability that the detection result is each possible state label value, ŷ_i = e^(a_i[7]) / ∑_j e^(a_j[7]), and the state label value with the highest probability is selected as the final detection result, i.e. y_pred = argmax(ŷ);
wherein:
w[lay], b[lay] are unknown and need to be determined by training, lay ∈ (1,2,3,4,5,6,7); the Max(x) function represents taking the maximum value of its argument; the ReLU(x) function is calculated as ReLU(x) = Max(0, x); x represents the corresponding parameter substituted into the function at the corresponding convolution activation layer;
(2.3) inputting the image detection training data set and the image detection test data set obtained in the step (2.1) into the image detection network constructed in the step (2.2) so as to carry out iterative training on the image detection network for multiple times to obtain the trained image detection network;
(3) And (3) carrying out online detection on the wall of the mine well by using the trained image decomposition network obtained in the step (1) and the trained image detection network obtained in the step (2), wherein the process is as follows:
(3.1) acquiring a mine well wall image;
(3.2) inputting the borehole wall image into a trained image decomposition network, and decomposing a reflection image R and an illumination image I;
(3.3), using the formula L(x, y) = [I(x, y)]^γ to perform brightness gamma correction on the illumination image I to obtain the illumination brightness correction image L, wherein I(x, y) represents the pixel value at position (x, y) in the illumination image I, L(x, y) represents the pixel value at position (x, y) in the corrected image L, and γ is a constant with γ < 1; multiplying each pixel value of the reflection image R by the pixel value at the corresponding position of the illumination brightness correction image L to obtain the pixel values of the brightness enhancement image S, the calculation formula being S(x, y) = L(x, y) · R(x, y);
and (3.4) inputting the brightness enhancement image S into the trained image detection network, wherein the output result calculated by the image detection network is the detection result of the well wall condition, and the result is one of the multiple well wall conditions.
2. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, characterized in that: in step (1.1), the number of images in the image decomposition training data set is greater than the number of images in the image decomposition test data set.
3. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: the specific process of step (1.3) is as follows:
(1.3a), parameter initialization: set w[ly] and b[ly] to random values, ly ∈ (1,2,3,4,5,6,7,8); set the number of iterations to S_p, the learning rate to L_p and the decomposition accuracy threshold to T_p; denote the e-th illumination image pair of the image decomposition training data set as S_pair[e] and let e = 1;
(1.3b), taking the normal-illumination image S_normal in S_pair[e], inputting it into the image decomposition network, and calculating the reflection image R_normal and the illumination image I_normal output by the network;
(1.3c), taking the low-illumination image S_low in S_pair[e], inputting it into the image decomposition network, and calculating the reflection image R_low and the illumination image I_low output by the network;
(1.3d), according to R_low and R_normal, calculating the reflection image consistency loss function L(R_low, R_normal) = ||R_low - R_normal||_1;
(1.3e), calculating the change values Δw[ly] and Δb[ly] of the parameters w[ly] and b[ly]: Δw[ly] = ∂L/∂w[ly] , Δb[ly] = ∂L/∂b[ly] , wherein ∂ represents calculating partial derivatives;
(1.3f), according to the formula w[ly] = w[ly] - L_p*Δw[ly] , b[ly] = b[ly] - L_p*Δb[ly] , update w[ly] and b[ly] , ly ∈ (1,2,3,4,5,6,7,8);
(1.3 g), judging whether the image is the last illumination image pair, if not, inputting the next illumination image pair, and returning to the step (1.3 b); if yes, turning to the step (1.3 h) to calculate the decomposition accuracy rate;
(1.3h), inputting all illumination image pairs of the image decomposition test data set, calculating for each pair the reflection image R_normal-test decomposed from the normal-illumination image and the reflection image R_low-test decomposed from the low-illumination image, comparing them, and then calculating the decomposition accuracy rate Ac_p = ∑(R_low-test == R_normal-test) / ∑Num(S_pair-test), wherein ∑(R_low-test == R_normal-test) represents the number of pairs in which the reflection image decomposed from the normal-illumination image coincides with the reflection image decomposed from the low-illumination image, and ∑Num(S_pair-test) represents the total number of image pairs in the test data set;
(1.3k), judging whether the decomposition accuracy rate meets the requirement: if Ac_p ≥ T_p, turning to step (1.3l) to finish training; if Ac_p < T_p, judging whether the iterations are finished: if S_p ≠ 0, turning to step (1.3b) and reusing the training data set for a new round of training until the iterations are finished; if S_p = 0, the training is finished;
(1.3l), storing all parameters w[ly] and b[ly], ly ∈ (1,2,3,4,5,6,7,8), to finish the training of the image decomposition network.
4. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: in step (2.1), the various well wall conditions including abnormal conditions searched for among the images at least include the crack, pothole and water seepage conditions.
5. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1 or 4, wherein: in step (2.1), in the label value table, the state label value set for images of the no-abnormity condition in the well wall image sample set is 1, for images of the crack condition is 2, for images of the pothole condition is 3, and for images of the water seepage condition is 4.
6. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: the specific process of step (2.3) is as follows:
(2.3a), parameter initialization: set w[lay] and b[lay] to random values, lay ∈ (1,2,3,4,5,6,7); set the number of iterations to S_d, the learning rate to L_d and the image detection accuracy threshold to T_d; denote the f-th image of the image detection training data set as IM[f] and let f = 1;
(2.3b), inputting the image IM[f] into the image detection network and calculating its detection estimation value ŷ;
(2.3c), according to the state label value y corresponding to the input image IM[f] and the calculated estimation value ŷ, calculating the cross entropy loss function L(ŷ, y) = -∑_i y_i · log(ŷ_i);
(2.3d), calculating the change values Δw[lay] and Δb[lay] of each parameter w[lay] and b[lay] in each layer of the image detection network: Δw[lay] = ∂L/∂w[lay] , Δb[lay] = ∂L/∂b[lay] , wherein lay ∈ (1,2,3,4,5,6,7);
(2.3e), according to the formula w[lay] = w[lay] - L_d*Δw[lay] , b[lay] = b[lay] - L_d*Δb[lay] , update w[lay] and b[lay] , wherein lay ∈ (1,2,3,4,5,6,7);
(2.3 f), judging whether the image is the last image, if not, inputting the next image, and returning to the step (2.3 b); if yes, the step is carried out (2.3 g) to calculate the detection accuracy;
(2.3g), inputting the images of the image detection test data set, calculating the detection estimation value ŷ of each image, comparing it with the corresponding label state value y stored in the test file, and calculating the detection accuracy rate Ac_d = ∑(ŷ == y) / ∑Num(y), wherein ∑(ŷ == y) represents the number of images whose detection estimation value equals the label state value, and ∑Num(y) represents the total number of images in the image detection test data set;
(2.3h), judging whether the detection accuracy rate meets the requirement: if Ac_d ≥ T_d, turning to step (2.3k) to finish training; if Ac_d < T_d, judging whether the iterations are finished: if S_d ≠ 0, turning to step (2.3b) and reusing the image detection training data set for a new round of training until the iterations are finished; if S_d = 0, the training is finished;
(2.3k), storing all parameters w[lay] and b[lay], lay ∈ (1,2,3,4,5,6,7), to finish the training of the image detection network.
7. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1 or 4, wherein: in step (3.4), the finally output detection result is one of the no-abnormity, crack, pothole and water seepage conditions.
CN202010517286.6A 2020-06-09 2020-06-09 Method for detecting mine well wall under low illumination condition based on convolutional neural network Active CN111681223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010517286.6A CN111681223B (en) 2020-06-09 2020-06-09 Method for detecting mine well wall under low illumination condition based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN111681223A CN111681223A (en) 2020-09-18
CN111681223B true CN111681223B (en) 2023-04-18

Family

ID=72435659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010517286.6A Active CN111681223B (en) 2020-06-09 2020-06-09 Method for detecting mine well wall under low illumination condition based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111681223B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108510488A (en) * 2018-03-30 2018-09-07 安徽理工大学 Four kinds of damage detecting methods of conveyer belt based on residual error network
CN109305534A (en) * 2018-10-25 2019-02-05 安徽理工大学 Coal wharf's belt conveyor self-adaptation control method based on computer vision
CN110378845A (en) * 2019-06-17 2019-10-25 杭州电子科技大学 A kind of image repair method under extreme condition based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on road manhole cover defect detection based on an improved convolutional neural network; Yao Minghai et al.; Computer Measurement & Control; 2020-01-25 (No. 01); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant