CN111681223A - Method for detecting mine well wall under low illumination condition based on convolutional neural network - Google Patents


Info

Publication number
CN111681223A
CN111681223A (application CN202010517286.6A)
Authority
CN
China
Prior art keywords
image
detection
network
decomposition
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010517286.6A
Other languages
Chinese (zh)
Other versions
CN111681223B (en)
Inventor
黄友锐
韩涛
徐善永
许家昌
鲍士水
凌六一
唐超礼
Current Assignee
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202010517286.6A priority Critical patent/CN111681223B/en
Publication of CN111681223A publication Critical patent/CN111681223A/en
Application granted granted Critical
Publication of CN111681223B publication Critical patent/CN111681223B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06T 5/90

Abstract

The invention discloses a method for detecting a mine well wall under low-illumination conditions based on convolutional neural networks, comprising a training stage and an online detection stage. In the training stage, an image decomposition network and an image detection network are constructed and trained separately on an acquired image decomposition data set and an acquired image detection data set. The online detection stage comprises image decomposition, image enhancement and image detection: the trained image decomposition network decomposes an on-site mine well wall image into a reflection image and an illumination image, image enhancement then brightens the well wall image, and finally the trained image detection network detects the well wall image, realizing detection of the mine well wall state under low illumination. The invention improves the accuracy of mine well wall detection, reduces its operating cost, and improves safety.

Description

Method for detecting mine well wall under low illumination condition based on convolutional neural network
Technical Field
The invention relates to the field of mine well wall detection methods, in particular to a method for detecting a mine well wall under a low-illumination condition based on a convolutional neural network.
Background
The terrain of most mining areas in China is relatively harsh and the rock mass of the ore bed lacks strength; together with adverse effects such as stratum consolidation and a rising underground water level, the mine well wall often develops large internal stress. When this stress exceeds the ultimate strength of the well wall structure, well wall damage and mine collapse accidents can occur. To find damage to the mine well wall in a timely and accurate manner, and to reduce the potential safety hazards of mine production, the mine well wall needs to be detected accurately and efficiently.
At present, detection of mine well walls in China generally relies on manual screening: inspectors ride a cage and examine the wall at close range with the naked eye, or personnel review video images shot by a camera mounted on the cage. Both approaches suffer from low accuracy and poor detection results. Moreover, because the underground mine environment is special, there is no fixed lighting equipment, and illumination can only be provided by lamps carried on the cage or by personnel; whether personnel observe directly or shoot video with a camera, the resulting images are dark and difficult to interpret. In image processing, such dark images are called low-illumination images. As a result, detection of the mine well wall is costly, inefficient, slow, time-consuming and inaccurate, and carries a high safety risk.
Disclosure of Invention
The invention aims to provide a method for detecting a mine well wall under a low-illumination condition based on a convolutional neural network, in order to solve the prior-art problems of poor detection results, low accuracy and long detection time when the mine well wall condition is inspected manually.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the mine wall detection method under the low illumination condition based on the convolutional neural network is characterized by comprising the following steps: the method comprises the following steps:
(1) constructing an image decomposition network, and training the image decomposition network by using the acquired image data, wherein the process is as follows:
(1.1) constructing an image decomposition training data set and an image decomposition testing data set:
Images are shot and collected by a camera at a plurality of positions on the mine well wall. For each position, one image is shot under a normal exposure condition and another under a low exposure condition; the normal-exposure image and the low-exposure image of a position form the illumination image pair of that position, yielding illumination image pairs for the plurality of positions. Part of these illumination image pairs is selected as the image decomposition training data set, and the remainder is used as the image decomposition test data set;
(1.2) constructing an image decomposition network:
A convolutional neural network consisting of convolutional layer 1 and convolution-activation layers 2, 3, 4, 5, 6, 7 and 8 is constructed as the image decomposition network. For an input image a^[0], convolutional layer 1 outputs a^[1] = w^[1] a^[0] + b^[1]; convolution-activation layers 2 to 6 output a^[i] = ReLU(w^[i] a^[i-1] + b^[i]), i ∈ {2,3,4,5,6}; convolution-activation layer 7 outputs a^[7] = sigmoid(w^[7] a^[6] + b^[7]), where a^[7] is the reflection image R obtained by decomposing the input image; and convolution-activation layer 8 outputs a^[8] = sigmoid(w^[8] a^[6] + b^[8]), where a^[8] is the illumination image I obtained by decomposing the input image. Wherein:
w^[ly], b^[ly] are all unknown quantities that must be determined by training, ly ∈ {1,2,3,4,5,6,7,8}; the ReLU function is computed as ReLU(x) = Max(0, x), where the Max function takes the maximum of its arguments; the sigmoid function is computed as sigmoid(x) = 1/(1 + e^(−x)); and x denotes the corresponding argument substituted into the function in each convolution-activation layer;
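To make the layer formulas above concrete, the following is a minimal scalar sketch, not the patent's implementation: in the real network each w^[ly] is a convolution kernel rather than a scalar, and the layer widths are unspecified here.

```python
import math

def relu(x):
    # ReLU(x) = Max(0, x)
    return max(0.0, x)

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def decompose(a0, w, b):
    """Scalar sketch of the 8-layer decomposition network.

    w and b are dicts keyed by layer index ly in 1..8 (scalar stand-ins
    for the convolution kernels and biases).
    Returns the reflection image R and the illumination image I.
    """
    a = {0: a0}
    a[1] = w[1] * a[0] + b[1]                # convolutional layer 1 (linear)
    for i in range(2, 7):                    # convolution-activation layers 2..6
        a[i] = relu(w[i] * a[i - 1] + b[i])
    R = sigmoid(w[7] * a[6] + b[7])          # layer 7 head: reflection image R
    I = sigmoid(w[8] * a[6] + b[8])          # layer 8 head: illumination image I
    return R, I
```

In this toy setting the two heads both read a^[6] and differ only through their own weights and biases, mirroring the parallel R and I outputs of the network.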
(1.3) inputting the image decomposition training data set and the image decomposition testing data set obtained in the step (1.1) into the image decomposition network constructed in the step (1.2) so as to carry out iterative training on the image decomposition network for multiple times to obtain the trained image decomposition network;
(2) constructing an image detection network, and training the image detection network by using the acquired image data, wherein the process is as follows:
(2.1) constructing an image detection training data set and an image detection testing data set:
First, images of a plurality of positions of the mine well wall are acquired; among these images, images showing a plurality of well wall conditions, including the abnormality-free condition, are found, and a plurality of images of each well wall condition are selected to form the well wall image sample set.
Second, a label value table of the various well wall conditions is established, in which images of the same well wall condition in the well wall image sample set are assigned the same state label value and images of different well wall conditions are assigned different state label values.
Then, a plurality of images of each well wall condition are selected from the well wall image sample set, the selected images of the various well wall conditions are arranged in random order, and the image name of each image and its corresponding state label value are stored in the same training file according to the arrangement order; the images thus selected and arranged from the well wall image sample set, together with the training file, form the image detection training data set.
By the same method, a plurality of images are selected from the images of each well wall condition remaining in the well wall image sample set after the previous selection, the selected images of the various well wall conditions are arranged in random order, and the image name and corresponding state label value of each image are stored in the same test file according to the arrangement order; the images selected and arranged from the remaining images of the well wall image sample set, together with the test file, form the image detection test data set;
(2.2) constructing an image detection network:
A convolutional neural network consisting of convolution-activation layer 1, pooling layer 1, convolution-activation layer 2, pooling layer 2, convolution-activation layer 3, pooling layer 3, convolution-activation layer 4, pooling layer 4, convolution-activation layer 5, pooling layer 5, fully-connected layer 6, fully-connected layer 7 and a Softmax layer is constructed as the image detection network. For an input image a^[0], the convolution-activation layers output z^[j] = ReLU(w^[j] a^[j-1] + b^[j]), j ∈ {1,2,3,4,5}; the pooling layers output a^[j] = Max(z^[j]), j ∈ {1,2,3,4,5}; the fully-connected layers output a^[k] = w^[k] a^[k-1] + b^[k], k ∈ {6,7}; and the Softmax layer outputs ŷ = softmax(a^[7]), i.e. ŷ_k = e^(a^[7]_k) / Σ_j e^(a^[7]_j). The Softmax layer thus computes the probability that the detection result is each possible state label value; ŷ gives the probabilities of the various state label values, and the state label value with the highest probability is selected as the final detection result, i.e. result = argmax(ŷ). Wherein:
w^[lay], b^[lay] are unknown quantities that must be determined by training, lay ∈ {1,2,3,4,5,6,7}; the Max(x) function takes the maximum of its arguments; the ReLU function is computed as ReLU(x) = Max(0, x); and x denotes the corresponding argument substituted into the function in each convolution-activation layer;
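The Softmax and argmax steps above can be sketched as follows; the four-label tuple (1, 2, 3, 4) anticipates the label value table of the later steps and is an illustrative assumption here.

```python
import math

def softmax(z):
    # Numerically stable softmax: p_k = e^(z_k) / sum_j e^(z_j).
    # Subtracting max(z) does not change the result but avoids overflow.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]

def predict_label(z, labels=(1, 2, 3, 4)):
    # Select the state label value with the highest probability
    # (the argmax of the Softmax output).
    p = softmax(z)
    return labels[p.index(max(p))]
```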
(2.3) inputting the image detection training data set and the image detection test data set obtained in the step (2.1) into the image detection network constructed in the step (2.2) so as to carry out iterative training on the image detection network for multiple times to obtain the trained image detection network;
(3) Carry out online detection of the mine well wall using the trained image decomposition network obtained in step (1) and the trained image detection network obtained in step (2); the process is as follows:
(3.1) acquiring and obtaining a mine well wall image;
(3.2) Input the well wall image into the trained image decomposition network to decompose it into a reflection image R and an illumination image I;
(3.3) Using the formula L(x, y) = [I(x, y)]^γ, perform brightness gamma correction on the illumination image I to obtain the illumination-brightness-corrected image L, where I(x, y) denotes the pixel value at position (x, y) in the illumination image I, L(x, y) denotes the pixel value at position (x, y) in the corrected image L, and γ is a constant with γ < 1. Multiply each pixel value of the reflection image R by the pixel value at the corresponding position of the illumination-brightness-corrected image L to obtain the pixel values of the brightness-enhanced image S, computed as S(x, y) = L(x, y) · R(x, y);
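A minimal sketch of step (3.3), operating on 2-D lists of pixel values normalized to [0, 1]; the value γ = 0.6 is an assumed example, since the patent only requires γ < 1.

```python
def enhance(R, I, gamma=0.6):
    """Brightness enhancement sketch.

    L(x, y) = I(x, y) ** gamma with gamma < 1 raises dark illumination
    values, then S(x, y) = L(x, y) * R(x, y) recombines the corrected
    illumination with the reflection image. R and I are same-sized 2-D
    lists of floats in [0, 1]; gamma=0.6 is an assumed example value.
    """
    H, W = len(I), len(I[0])
    L = [[I[y][x] ** gamma for x in range(W)] for y in range(H)]
    return [[L[y][x] * R[y][x] for x in range(W)] for y in range(H)]
```

Because γ < 1, dark illumination values are raised (for example 0.25^0.5 = 0.5), which brightens the reconstructed image S.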
(3.4) Input the brightness-enhanced image S into the trained image detection network; the output computed by the image detection network is the detection result of the well wall condition, which is one of the multiple well wall conditions.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network is characterized in that: in step (1.1), the number of images in the image decomposition training data set is greater than the number of images in the image decomposition test data set.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network is characterized in that the specific process of step (1.3) is as follows:
(1.3a) Parameter initialization: set w^[ly], b^[ly] to random values, ly ∈ {1,...,8}; set the number of iterations to S_p, the learning rate to L_p, and the decomposition accuracy threshold to T_p; denote the e-th illumination image pair of the image decomposition training data set as S_pair[e]; let e = 1;
(1.3b) Take the normal-illumination image S_normal in S_pair[e], input it into the image decomposition network, and compute the reflection image R_normal and illumination image I_normal output by the network;
(1.3c) Take the low-illumination image S_low in S_pair[e], input it into the image decomposition network, and compute the reflection image R_low and illumination image I_low output by the network;
(1.3d) From R_low and R_normal, compute the reflection-image consistency loss function L(R_low, R_normal) = ||R_low − R_normal||_1;
(1.3e) Compute the change values Δw^[ly] and Δb^[ly] of the parameters w^[ly] and b^[ly]: Δw^[ly] = ∂L/∂w^[ly], Δb^[ly] = ∂L/∂b^[ly], where ∂ denotes taking the partial derivative;
(1.3f) Update w^[ly] and b^[ly] according to w^[ly] = w^[ly] − L_p · Δw^[ly], b^[ly] = b^[ly] − L_p · Δb^[ly], ly ∈ {1,...,8};
(1.3g) Judge whether this is the last illumination image pair; if not, input the next illumination image pair and return to step (1.3b); if so, go to step (1.3h) to compute the decomposition accuracy;
(1.3h) Input all illumination image pairs of the image decomposition test data set; for each pair compute the reflection image R_normal-test decomposed from the normal-illumination image and the reflection image R_low-test decomposed from the low-illumination image, compare them, and compute the decomposition accuracy Ac_p = Σ(R_low-test == R_normal-test) / Num(S_pair-test), where Σ(R_low-test == R_normal-test) denotes the number of pairs whose two decomposed reflection images agree and Num(S_pair-test) denotes the total number of image pairs in the test data set;
(1.3k) Judge whether the decomposition accuracy meets the requirement. If Ac_p ≥ T_p, go to step (1.3l) and finish training. If Ac_p < T_p, judge whether the iterations are exhausted: if S_p ≠ 0, go to step (1.3b) and reuse the training data set for a new round of training until the iterations finish; if S_p = 0, training ends;
(1.3l) Save all parameters w^[ly] and b^[ly], ly ∈ {1,...,8}; the image decomposition network training ends.
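The L1 loss of step (1.3d) and the outer control flow of steps (1.3b) to (1.3k) can be sketched as follows; step_fn and accuracy_fn are hypothetical stand-ins for the per-pair gradient update and the test-set accuracy computation of step (1.3h).

```python
def l1_loss(R_low, R_normal):
    # Reflection-image consistency loss: ||R_low - R_normal||_1,
    # summed over all (flattened) pixel values.
    return sum(abs(a - b) for a, b in zip(R_low, R_normal))

def train_decomposition(pairs, step_fn, accuracy_fn, Sp, Tp):
    """Outer training loop sketch.

    For up to Sp rounds, run the per-pair update on every illumination
    image pair, then check the decomposition accuracy against the
    threshold Tp. Returns True if the accuracy requirement was met,
    False if the iterations were exhausted first.
    """
    for _ in range(Sp):
        for pair in pairs:
            step_fn(pair)          # stands in for steps (1.3b)-(1.3f)
        if accuracy_fn() >= Tp:    # stands in for steps (1.3h)-(1.3k)
            return True
    return False
```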
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network is characterized in that: in step (2.1), among the images, images showing a plurality of well wall conditions including the abnormality-free condition are searched for, the conditions at least including cracks, potholes and water seepage.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network is characterized in that: in step (2.1), in the label value table, the state label value set for images of the abnormality-free condition in the well wall image sample set is 1, the state label value for images of the crack condition is 2, the state label value for images of the pothole condition is 3, and the state label value for images of the water seepage condition is 4.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network is characterized in that the specific process of step (2.3) is as follows:
(2.3a) Parameter initialization: set w^[lay], b^[lay] to random values, lay ∈ {1,...,7}; set the number of iterations to S_d, the learning rate to L_d, and the image detection accuracy threshold to T_d; denote the f-th image of the image detection training data set as IM[f]; let f = 1;
(2.3b) Input the image IM[f] into the image detection network and compute its detection estimate ŷ;
(2.3c) From the state label value y corresponding to the input image IM[f] and the computed estimate ŷ, compute the cross-entropy loss function L(ŷ, y) = −Σ_k y_k · log(ŷ_k);
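The cross-entropy loss of step (2.3c) can be sketched as follows, with y one-hot over the state label values and ŷ the Softmax output; the eps guard against log(0) is an implementation detail added here.

```python
import math

def cross_entropy(y_true, y_pred):
    """Cross-entropy loss between a one-hot label vector y_true and the
    Softmax output y_pred: L = -sum_k y_k * log(y_hat_k)."""
    eps = 1e-12  # guard against log(0); an added implementation detail
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))
```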
(2.3d) Compute the change values Δw^[lay] and Δb^[lay] of each parameter w^[lay] and b^[lay] in each layer of the image detection network: Δw^[lay] = ∂L/∂w^[lay], Δb^[lay] = ∂L/∂b^[lay], where lay ∈ {1,...,7};
(2.3e) Update w^[lay] and b^[lay] according to w^[lay] = w^[lay] − L_d · Δw^[lay], b^[lay] = b^[lay] − L_d · Δb^[lay], where lay ∈ {1,...,7};
(2.3f) Judge whether this is the last image; if not, input the next image and return to step (2.3b); if so, go to step (2.3g) to compute the detection accuracy;
(2.3g) Input the images of the image detection test data set, compute the detection estimate ŷ of each image, compare it with the corresponding label state value y stored in the test file, and compute the detection accuracy Ac_d = Σ(ŷ == y) / Num(y), where Σ(ŷ == y) denotes the number of images whose detection estimate equals the label state value and Num(y) denotes the total number of images in the image detection test data set;
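The accuracy Ac_d of step (2.3g) is a simple ratio of matching predictions to test images, sketched as:

```python
def detection_accuracy(predictions, labels):
    # Ac_d = (# images whose predicted label equals the stored label)
    #        / (total number of images in the test data set)
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)
```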
(2.3h) Judge whether the detection accuracy meets the requirement. If Ac_d ≥ T_d, go to step (2.3k) and finish training. If Ac_d < T_d, judge whether the iterations are exhausted: if S_d ≠ 0, go to step (2.3b) and reuse the image detection training data set for a new round of training until the iterations finish; if S_d = 0, training ends;
(2.3k) Save all parameters w^[lay] and b^[lay], lay ∈ {1,...,7}; the image detection network training ends.
The method for detecting the mine well wall under the low-illumination condition based on the convolutional neural network is characterized in that: in step (3.4), the finally output detection result is one of the abnormality-free, crack, pothole and water seepage conditions.
The invention provides a well wall detection method for mines under low-illumination conditions based on convolutional neural networks. An image decomposition network constructed from a convolutional neural network, combined with brightness gamma correction, enhances the brightness of the mine well wall image under low illumination; automatic detection of the mine well wall image is realized through an image detection network likewise constructed from a convolutional neural network, so that abnormal states of the mine well wall are detected accurately.
The invention has the beneficial effects that:
the invention is used for automatically detecting the wall of the mine well under the condition of low illumination. An image decomposition network is constructed through a convolutional neural network, the brightness of the mine well wall image under the low illumination condition is enhanced by using a brightness gamma calibration method, and the automatic detection of the mine well wall image is realized through the image detection network constructed through the convolutional neural network. The method of the invention uses an image processing mode for detection, avoids manual participation, realizes the automation of detection, reduces the operation cost, increases the detection efficiency and reduces the potential safety hazard in the detection. The image enhancement and the well wall state detection of the image under the low illumination condition are realized through the convolutional neural network, the image brightness enhancement effect is good, the detection speed is high, the efficiency is high, and the accuracy and the reliability of the detection are greatly improved.
Drawings
FIG. 1 is an overall block diagram of the process of the present invention.
Fig. 2 is a block diagram of an image decomposition network of the method of the present invention.
FIG. 3 is a flow chart of a training image decomposition network of the method of the present invention.
FIG. 4 is a schematic diagram of constructing the image detection training data set and the image detection test data set in the method of the present invention.
Fig. 5 is a block diagram of an image detection network according to the method of the present invention.
FIG. 6 is a flow chart of a training image detection network of the method of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in FIG. 1, the method comprises a training stage and an online detection stage. In the training stage, an image decomposition network and an image detection network are constructed with convolutional neural networks; the image decomposition network is trained with shot normal-illumination and low-illumination images, and the image detection network is trained with acquired images of the mine well wall showing the abnormality-free, crack, pothole and water seepage conditions. After training is completed, in the online detection stage, a shot image of the actual mine well wall is decomposed into an illumination image and a reflection image by the trained image decomposition network; the illumination image undergoes brightness gamma correction and is then fused with the reflection image again to enhance the brightness of the low-illumination image; the brightness-enhanced well wall image is input into the image detection network, and the detection result of the mine well wall state is finally obtained. The specific process is as follows:
(1) constructing an image decomposition network, and training the image decomposition network by using the acquired image data, wherein the process is as follows:
(1.1) the process of constructing the image decomposition training data set and the image decomposition testing data set comprises the following steps:
First, images are shot with a camera. An image shot by the camera under normal exposure conditions is called a normal-illumination image S_normal; an image of the same location shot under low exposure conditions is called a low-illumination image S_low; the normal-illumination image and the low-illumination image of the same position together form an illumination image pair S_pair. Then, different positions of the mine well wall are selected and M illumination image pairs are shot, with M ≥ 500; M_train image pairs are selected as the image decomposition training data set, and the remaining M_test image pairs are used as the image decomposition test data set, where M_train > M_test.
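The M_train/M_test split can be sketched as follows; the 0.8 training fraction and the shuffling are illustrative assumptions, since the patent only requires M ≥ 500 and M_train > M_test.

```python
import random

def split_pairs(pairs, train_fraction=0.8, seed=0):
    """Split the M illumination image pairs into a training set and a
    test set with M_train > M_test.

    train_fraction=0.8 and the fixed seed are assumed example values;
    any fraction above 0.5 satisfies M_train > M_test.
    """
    assert 0.5 < train_fraction < 1.0
    rng = random.Random(seed)
    shuffled = pairs[:]        # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```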
(1.2), as shown in fig. 2, constructing an image decomposition network:
A convolutional neural network consisting of convolutional layer 1 and convolution-activation layers 2, 3, 4, 5, 6, 7 and 8 is constructed as the image decomposition network. For an input image a^[0], convolutional layer 1 outputs a^[1] = w^[1] a^[0] + b^[1]; convolution-activation layers 2 to 6 output a^[i] = ReLU(w^[i] a^[i-1] + b^[i]), i ∈ {2,3,4,5,6}; convolution-activation layer 7 outputs a^[7] = sigmoid(w^[7] a^[6] + b^[7]), where a^[7] is the reflection image R obtained by decomposing the input image; and convolution-activation layer 8 outputs a^[8] = sigmoid(w^[8] a^[6] + b^[8]), where a^[8] is the illumination image I obtained by decomposing the input image. Wherein:
w^[ly], b^[ly] are all unknown quantities that must be determined by training, ly ∈ {1,2,3,4,5,6,7,8}; the ReLU function is computed as ReLU(x) = Max(0, x), where the Max function takes the maximum of its arguments; the sigmoid function is computed as sigmoid(x) = 1/(1 + e^(−x)); and x denotes the corresponding argument substituted into the function in each convolution-activation layer;
(1.3) as shown in FIG. 3, the process of training the image decomposition network of the method of the present invention is:
(1.3a) Parameter initialization: set w^[ly], b^[ly] to random values, ly ∈ {1,...,8}; set the number of iterations to S_p, the learning rate to L_p, and the decomposition accuracy threshold to T_p; denote the e-th illumination image pair of the image decomposition training data set as S_pair[e]; let e = 1;
(1.3b) Take the normal-illumination image S_normal in S_pair[e], input it into the image decomposition network, and compute the reflection image R_normal and illumination image I_normal output by the network;
(1.3c) Take the low-illumination image S_low in S_pair[e], input it into the image decomposition network, and compute the reflection image R_low and illumination image I_low output by the network;
(1.3d) From R_low and R_normal, compute the reflection-image consistency loss function L(R_low, R_normal) = ||R_low − R_normal||_1;
(1.3e) Compute the change values Δw^[ly] and Δb^[ly] of the parameters w^[ly] and b^[ly]: Δw^[ly] = ∂L/∂w^[ly], Δb^[ly] = ∂L/∂b^[ly], where ∂ denotes taking the partial derivative;
(1.3f) Update w^[ly] and b^[ly] according to w^[ly] = w^[ly] − L_p · Δw^[ly], b^[ly] = b^[ly] − L_p · Δb^[ly], ly ∈ {1,...,8};
(1.3g) Judge whether this is the last illumination image pair; if not, input the next illumination image pair and return to step (1.3b); if so, go to step (1.3h) to compute the decomposition accuracy;
(1.3h) Input all illumination image pairs of the image decomposition test data set; for each pair compute the reflection image R_normal-test decomposed from the normal-illumination image and the reflection image R_low-test decomposed from the low-illumination image, compare them, and compute the decomposition accuracy Ac_p = Σ(R_low-test == R_normal-test) / Num(S_pair-test), where Σ(R_low-test == R_normal-test) denotes the number of pairs whose two decomposed reflection images agree and Num(S_pair-test) denotes the total number of image pairs in the test data set;
(1.3k) Judge whether the decomposition accuracy meets the requirement. If Ac_p ≥ T_p, go to step (1.3l) and finish training. If Ac_p < T_p, judge whether the iterations are exhausted: if S_p ≠ 0, go to step (1.3b) and reuse the training data set for a new round of training until the iterations finish; if S_p = 0, training ends;
(1.3l) Save all parameters w^[ly] and b^[ly], ly ∈ {1,...,8}; the image decomposition network training ends.
(2) Constructing an image detection network, and training the image detection network by using the acquired image data, wherein the process is as follows:
(2.1) as shown in fig. 4, constructing an image detection training data set and an image detection test data set:
First, well wall images are shot with a camera, and N images of each of the four conditions "no abnormality", "crack", "pothole" and "water seepage" are selected, with N ≥ 800, forming the well wall image sample set. Second, a label (Label) value table of the four well wall states is established: no abnormality = 1, crack = 2, pothole = 3, water seepage = 4. Third, the corresponding state label value is set for each image in the well wall image sample set. Finally, 500 well wall images of each of "no abnormality", "crack", "pothole" and "water seepage" are selected from the well wall image sample set, the 2000 images are arranged in random order, the image name of each image and its corresponding well wall state label value are stored in a training file according to the arrangement order, and the 2000 well wall images together with the training file form the image detection training data set. By the same method, 200 images are randomly selected from the remaining images in the well wall image sample set and arranged in random order; the image names of the 200 images and their corresponding well wall state label values are stored in a test_label.txt file, i.e. the test file, according to the arrangement order, and the 200 images together with the test_label.txt file form the image detection test data set;
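The random arrangement and label-file contents of step (2.1) can be sketched as follows; the condition names and the line format "<image name> <label value>" are illustrative assumptions about the file layout.

```python
import random

# Assumed label value table from step (2.1); the English condition
# names are illustrative stand-ins.
LABELS = {"no_abnormality": 1, "crack": 2, "pothole": 3, "seepage": 4}

def build_label_lines(samples, seed=0):
    """Shuffle (image_name, condition) samples into a random order and
    produce the '<image name> <label value>' lines that would be stored
    in the training or test label file (format is an assumption)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    return [f"{name} {LABELS[cond]}" for name, cond in shuffled]
```

Writing the returned lines to train_label.txt or test_label.txt (one line per image, in the shuffled order) then yields the training or test file described above.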
(2.2), as shown in fig. 5, an image detection network is constructed:
constructing a convolutional neural network consisting of a convolution activation layer 1, a pooling layer 1, a convolution activation layer 2, a pooling layer 2, a convolution activation layer 3, a pooling layer 3, a convolution activation layer 4, a pooling layer 4, a convolution activation layer 5, a pooling layer 5, a fully-connected layer 6, a fully-connected layer 7 and a Softmax layer; for an input image a[0], the output of each convolution activation layer is z[j] = ReLU(w[j]a[j-1] + b[j]), j ∈ (1,2,3,4,5), the output of each pooling layer is a[j] = Max(z[j]), j ∈ (1,2,3,4,5), and the output of each fully-connected layer is a[k] = w[k]a[k-1] + b[k], k ∈ (6,7); the Softmax layer outputs ŷi = e^(ai[7]) / Σc e^(ac[7]), namely the probability that the detection result is the i-th state label value, and the state label value with the highest probability is selected as the final detection result, i.e. ŷ = argmax_i ŷi;
wherein:
w[lay] and b[lay] are unknown quantities to be determined by training, lay ∈ (1,2,3,4,5,6,7); the Max(x) function returns the maximum value of its argument, the ReLU(x) function is calculated as ReLU(x) = Max(0, x), and x represents the corresponding argument substituted into the function in each convolution activation layer;
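The Softmax decision at the end of the detection network — turn the last-layer activations into per-state probabilities, then take the most probable state label — can be sketched in numpy; the logit values here are made up for illustration:

```python
import numpy as np

def softmax(z):
    """Numerically stable Softmax over the state-label logits."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def detect_state(logits, labels=(1, 2, 3, 4)):
    """Return the state label value with the highest Softmax probability."""
    probs = softmax(np.asarray(logits, dtype=float))
    return labels[int(np.argmax(probs))], probs

# toy logits for (no abnormity, crack, hole, water seepage)
label, probs = detect_state([0.2, 2.5, 0.1, -1.0])  # picks label 2 ("crack")
```

Subtracting the maximum logit before exponentiating does not change the probabilities but avoids overflow for large activations.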
(2.3) as shown in fig. 6, the process of training the image detection network is as follows:
(2.3a), parameter initialization: set w[lay] and b[lay] to random values, lay ∈ (1,2,3,4,5,6,7); set the number of iterations to Sd, the learning rate to Ld and the image detection accuracy threshold to Td; denote the f-th image of the image detection training data set as IM[f], and let f = 1;
(2.3b), input the image IM[f] into the image detection network and calculate its detection estimate ŷ;
(2.3c), from the state label value y corresponding to the input image IM[f] and the calculated estimate ŷ, compute the cross-entropy loss function L(y, ŷ) = −Σi yi log ŷi, where yi is the one-hot encoding of the state label value and ŷi is the corresponding predicted probability;
(2.3d), calculate the change values Δw[lay] and Δb[lay] of each parameter w[lay] and b[lay] in each layer of the image detection network: Δw[lay] = ∂L(y, ŷ)/∂w[lay] and Δb[lay] = ∂L(y, ŷ)/∂b[lay], wherein lay ∈ (1,2,3,4,5,6,7);
(2.3e), update w[lay] and b[lay] according to the formulas w[lay] = w[lay] − Ld*Δw[lay] and b[lay] = b[lay] − Ld*Δb[lay], where lay ∈ (1,2,3,4,5,6,7);
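The update rule of step (2.3e) is plain gradient descent, w[lay] ← w[lay] − Ld·Δw[lay]; a minimal sketch with made-up parameter and gradient values:

```python
import numpy as np

def sgd_step(params, grads, lr):
    """Gradient-descent update p <- p - lr * dp for every stored parameter."""
    return {name: p - lr * grads[name] for name, p in params.items()}

# toy parameters and gradients for one layer (illustrative values only)
params = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
grads  = {"w": np.array([0.1, -0.2]), "b": np.array([0.05])}
params = sgd_step(params, grads, lr=0.1)   # Ld = 0.1
# w -> [0.99, 2.02], b -> [0.495]
```

The same rule, with Lp in place of Ld, covers the decomposition-network update of step (1.3f).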
(2.3f), judging whether the image is the last image: if not, inputting the next image and returning to the step (2.3b); if yes, turning to the step (2.3g) to calculate the detection accuracy;
(2.3g), input the images of the image detection test data set, calculate the detection estimate ŷ of each image, compare it with the corresponding label state value y stored in the test file, and calculate the detection accuracy Acd = Σ num(ŷ == y) / Σ num(y), wherein Σ num(ŷ == y) denotes the number of images whose detection estimate equals the label state value, and Σ num(y) denotes the total number of images in the image detection test data set;
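The accuracy Acd of step (2.3g) is simply the fraction of test images whose estimated label matches the stored label; a sketch with invented label lists:

```python
def detection_accuracy(estimates, labels):
    """Acd = num(y_hat == y) / num(y) over the test set."""
    matches = sum(1 for est, lab in zip(estimates, labels) if est == lab)
    return matches / len(labels)

# hypothetical estimates vs. stored test-file labels
acc = detection_accuracy([1, 2, 3, 4, 2], [1, 2, 3, 1, 2])  # 4 of 5 -> 0.8
```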
(2.3h), judge whether the detection accuracy meets the requirement: if Acd ≥ Td, turn to the step (2.3k) and finish training; if Acd < Td, judge whether the iteration times are finished: if Sd ≠ 0, turn to the step (2.3b) and reuse the image detection training data set to perform a new round of training until the iterations are finished; if Sd = 0, finish the training;
(2.3k), all parameters w[lay] and b[lay] are saved, lay ∈ (1,2,3,4,5,6,7), and the image detection network training is complete.
(3) The process of online detection of the wall of the mine well comprises the following steps:
(3.1) acquiring and obtaining a mine well wall image;
(3.2) inputting the borehole wall image into a trained image decomposition network to decompose a reflection image R and an illumination image I;
(3.3), using the formula L(x, y) = [I(x, y)]^γ, perform brightness gamma correction on the illumination image I to obtain an illumination brightness corrected image L, wherein I(x, y) represents the pixel value at position (x, y) in the illumination image I, L(x, y) represents the pixel value at position (x, y) in the corrected image L, and γ is a constant with γ < 1; multiply each pixel value of the reflection image R by the pixel value at the corresponding position of the illumination brightness corrected image L to obtain the brightness enhanced image S, the calculation formula being S(x, y) = L(x, y) · R(x, y);
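Step (3.3) can be sketched directly in numpy; the gamma value and pixel values are illustrative, with the illumination image assumed normalized to [0, 1] so that γ < 1 brightens dark regions:

```python
import numpy as np

def enhance(reflection, illumination, gamma=0.6):
    """L(x, y) = [I(x, y)]**gamma, then S(x, y) = L(x, y) * R(x, y)."""
    corrected = np.power(illumination, gamma)  # brightness gamma correction
    return corrected * reflection              # pixel-wise product with R

# toy 2x2 decomposition outputs in [0, 1] (illustrative values)
R = np.array([[0.8, 0.4], [0.6, 0.9]])
I = np.array([[0.10, 0.20], [0.05, 0.30]])
S = enhance(R, I)   # brighter than the plain product I * R since gamma < 1
```

Because I(x, y) ≤ 1 and γ < 1, I^γ ≥ I everywhere, so S is element-wise at least I·R and at most R.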
(3.4), input the brightness enhanced image S into the trained image detection network; the output calculated by the image detection network is the detection result of the well wall state, namely one of the four states 'no abnormity', 'crack', 'hole' and 'water seepage'.
The embodiments described above are only preferred embodiments of the present invention and do not limit its concept and scope. Various modifications and improvements made to the technical solution of the present invention by those skilled in the art without departing from its design concept shall fall within the protection scope of the present invention; the claimed technical content is fully set forth in the claims.

Claims (7)

1. A method for detecting a mine well wall under a low illumination condition based on a convolutional neural network, characterized by comprising the following steps:
(1) constructing an image decomposition network, and training the image decomposition network by using the acquired image data, wherein the process is as follows:
(1.1) constructing an image decomposition training data set and an image decomposition testing data set:
shooting and collecting images at a plurality of positions of a mine shaft wall through a camera, respectively shooting and collecting images under a normal exposure condition and images under a low exposure condition for each position of the mine shaft wall, forming an illumination image pair of the position by the images under the normal exposure condition and the images under the low exposure condition of each position, thereby obtaining illumination image pairs of the plurality of positions, selecting a part from the plurality of illumination image pairs as an image decomposition training data set, and using the rest parts as an image decomposition testing data set;
(1.2) constructing an image decomposition network:
constructing a convolutional neural network consisting of a convolutional layer 1, a convolution activation layer 2, a convolution activation layer 3, a convolution activation layer 4, a convolution activation layer 5, a convolution activation layer 6, a convolution activation layer 7 and a convolution activation layer 8 as the image decomposition network; for an input image a[0], the output of convolutional layer 1 is a[1] = w[1]a[0] + b[1]; the output of convolution activation layers 2 to 6 is a[i] = ReLU(w[i]a[i-1] + b[i]), i ∈ (2,3,4,5,6); the output of convolution activation layer 7 is a[7] = sigmoid(w[7]a[6] + b[7]), a[7] being the reflection image R obtained after decomposition of the input image; the output of convolution activation layer 8 is a[8] = sigmoid(w[8]a[6] + b[8]), a[8] being the illumination image I obtained after decomposition of the input image, wherein:
w[ly] and b[ly] are all unknown quantities to be determined by training, ly ∈ (1,2,3,4,5,6,7,8); the ReLU function is calculated as ReLU(x) = Max(0, x), the Max(x) function returns the maximum value of its argument, and the sigmoid function is calculated as sigmoid(x) = 1/(1 + e^(−x)), where x represents the corresponding argument substituted into the function in the corresponding convolution activation layer;
(1.3) inputting the image decomposition training data set and the image decomposition testing data set obtained in the step (1.1) into the image decomposition network constructed in the step (1.2) so as to carry out iterative training on the image decomposition network for multiple times to obtain the trained image decomposition network;
(2) constructing an image detection network, and training the image detection network by using the acquired image data, wherein the process is as follows:
(2.1) constructing an image detection training data set and an image detection testing data set:
firstly, acquiring images of a plurality of positions of the mine well wall, searching the acquired images for a plurality of well wall conditions including the no-abnormity condition, and selecting a plurality of images of each well wall condition to form a well wall image sample set;
secondly, establishing a label value table of the various well wall conditions, wherein in the label value table images of the same well wall condition in the well wall image sample set are set with the same state label value, and images of different well wall conditions are set with different state label values;
then, selecting a plurality of images of each well wall condition from the well wall image sample set, placing the selected images of the various well wall conditions in a random order, and storing the image name and the corresponding state label value of each image into the same training file according to the placing order; the images selected and placed from the well wall image sample set and the training file form the image detection training data set;
by the same method, selecting a plurality of images from the images of each well wall condition remaining in the well wall image sample set, placing the selected images of the various well wall conditions in a random order, and storing the image name and the corresponding state label value of each image into the same test file according to the placing order; the images selected and placed from the remaining images in the well wall image sample set and the test file form the image detection test data set;
(2.2) constructing an image detection network:
constructing a convolutional neural network consisting of a convolution activation layer 1, a pooling layer 1, a convolution activation layer 2, a pooling layer 2, a convolution activation layer 3, a pooling layer 3, a convolution activation layer 4, a pooling layer 4, a convolution activation layer 5, a pooling layer 5, a fully-connected layer 6, a fully-connected layer 7 and a Softmax layer as the image detection network; for an input image a[0], the output of each convolution activation layer is z[j] = ReLU(w[j]a[j-1] + b[j]), j ∈ (1,2,3,4,5), the output of each pooling layer is a[j] = Max(z[j]), j ∈ (1,2,3,4,5), and the output of each fully-connected layer is a[k] = w[k]a[k-1] + b[k], k ∈ (6,7); the Softmax layer outputs ŷi = e^(ai[7]) / Σc e^(ac[7]), namely the probability that the detection result is the i-th state label value, and the state label value with the highest probability is selected as the final detection result, i.e. ŷ = argmax_i ŷi,
wherein:
w[lay] and b[lay] are unknown quantities to be determined by training, lay ∈ (1,2,3,4,5,6,7); the Max(x) function returns the maximum value of its argument, the ReLU(x) function is calculated as ReLU(x) = Max(0, x), and x represents the corresponding argument substituted into the function in each convolution activation layer;
(2.3) inputting the image detection training data set and the image detection test data set obtained in the step (2.1) into the image detection network constructed in the step (2.2) so as to carry out iterative training on the image detection network for multiple times to obtain the trained image detection network;
(3) and (3) carrying out online detection on the wall of the mine well by using the trained image decomposition network obtained in the step (1) and the trained image detection network obtained in the step (2), wherein the process is as follows:
(3.1) acquiring and obtaining a mine well wall image;
(3.2) inputting the borehole wall image into a trained image decomposition network to decompose a reflection image R and an illumination image I;
(3.3), using the formula L(x, y) = [I(x, y)]^γ, perform brightness gamma correction on the illumination image I to obtain an illumination brightness corrected image L, wherein I(x, y) represents the pixel value at position (x, y) in the illumination image I, L(x, y) represents the pixel value at position (x, y) in the corrected image L, and γ is a constant with γ < 1; multiply each pixel value of the reflection image R by the pixel value at the corresponding position of the illumination brightness corrected image L to obtain the brightness enhanced image S, the calculation formula being S(x, y) = L(x, y) · R(x, y);
and (3.4) inputting the brightness enhancement image S into the trained image detection network, wherein the output result calculated by the image detection network is the detection result of the well wall condition, and the result is one of the multiple well wall conditions.
2. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: in step (1.1), the number of images in the image decomposition training data set is greater than the number of images in the image decomposition test data set.
3. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: the specific process of step (1.3) is as follows:
(1.3a), parameter initialization: set w[ly] and b[ly] to random values, ly ∈ (1,2,3,4,5,6,7,8); set the number of iterations to Sp, the learning rate to Lp and the decomposition accuracy threshold to Tp; denote the e-th illumination image pair of the image decomposition training data set as Spair[e], and let e = 1;
(1.3b), take the normal illumination image Snormal in Spair[e], input it into the image decomposition network, and calculate the reflection image Rnormal and the illumination image Inormal output by the network;
(1.3c), take the low illumination image Slow in Spair[e], input it into the image decomposition network, and calculate the reflection image Rlow and the illumination image Ilow output by the network;
(1.3d), according to Rlow and Rnormal, calculate the reflection image consistency loss function L(Rlow, Rnormal) = ||Rlow − Rnormal||1;
(1.3e), calculate the change values Δw[ly] and Δb[ly] of the parameters w[ly] and b[ly]: Δw[ly] = ∂L(Rlow, Rnormal)/∂w[ly] and Δb[ly] = ∂L(Rlow, Rnormal)/∂b[ly], where ∂ represents calculating partial derivatives;
(1.3f), update w[ly] and b[ly] according to the formulas w[ly] = w[ly] − Lp*Δw[ly] and b[ly] = b[ly] − Lp*Δb[ly], ly ∈ (1,2,3,4,5,6,7,8);
(1.3g), judging whether the image pair is the last illumination image pair: if not, inputting the next illumination image pair and returning to the step (1.3b); if yes, turning to the step (1.3h) to calculate the decomposition accuracy rate;
(1.3h), input all illumination image pairs in the image decomposition test data set, calculate for each pair the reflection image Rnormal-test decomposed from the normal illumination image and the reflection image Rlow-test decomposed from the low illumination image, compare them, and then calculate the decomposition accuracy Acp = Σ(Rlow-test == Rnormal-test) / Σ Num(Spair-test), wherein Σ(Rlow-test == Rnormal-test) represents the number of pairs for which the reflection image decomposed from the normal illumination image coincides with the reflection image decomposed from the low illumination image, and Σ Num(Spair-test) represents the total number of image pairs in the test data set;
(1.3k), judge whether the decomposition accuracy meets the requirement: if Acp ≥ Tp, turn to the step (1.3l) and finish training; if Acp < Tp, judge whether the iteration times are finished: if Sp ≠ 0, turn to the step (1.3b) and reuse the training data set to perform a new round of training until the iterations are finished; if Sp = 0, finish the training;
(1.3l), all parameters w[ly] and b[ly] are saved, ly ∈ (1,2,3,4,5,6,7,8), and the image decomposition network training ends.
4. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: in the step (2.1), the plurality of well wall conditions searched for in the images, besides the no-abnormity condition, at least include the crack, pothole and water seepage conditions.
5. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1 or 4, wherein: in the step (2.1), in the label value table, the state label value set for the images of the no-abnormity condition in the well wall image sample set is 1, the state label value set for the images of the crack condition is 2, the state label value set for the images of the pothole condition is 3, and the state label value set for the images of the water seepage condition is 4.
6. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1, wherein: the specific process of step (2.3) is as follows:
(2.3a), parameter initialization: set w[lay] and b[lay] to random values, lay ∈ (1,2,3,4,5,6,7); set the number of iterations to Sd, the learning rate to Ld and the image detection accuracy threshold to Td; denote the f-th image of the image detection training data set as IM[f], and let f = 1;
(2.3b), input the image IM[f] into the image detection network and calculate its detection estimate ŷ;
(2.3c), from the state label value y corresponding to the input image IM[f] and the calculated estimate ŷ, compute the cross-entropy loss function L(y, ŷ) = −Σi yi log ŷi, where yi is the one-hot encoding of the state label value and ŷi is the corresponding predicted probability;
(2.3d), calculate the change values Δw[lay] and Δb[lay] of each parameter w[lay] and b[lay] in each layer of the image detection network: Δw[lay] = ∂L(y, ŷ)/∂w[lay] and Δb[lay] = ∂L(y, ŷ)/∂b[lay], wherein lay ∈ (1,2,3,4,5,6,7);
(2.3e), update w[lay] and b[lay] according to the formulas w[lay] = w[lay] − Ld*Δw[lay] and b[lay] = b[lay] − Ld*Δb[lay], where lay ∈ (1,2,3,4,5,6,7);
(2.3f), judging whether the image is the last image: if not, inputting the next image and returning to the step (2.3b); if yes, turning to the step (2.3g) to calculate the detection accuracy;
(2.3g), input the images of the image detection test data set, calculate the detection estimate ŷ of each image, compare it with the corresponding label state value y stored in the test file, and calculate the detection accuracy Acd = Σ num(ŷ == y) / Σ num(y), wherein Σ num(ŷ == y) denotes the number of images whose detection estimate equals the label state value, and Σ num(y) denotes the total number of images in the image detection test data set;
(2.3h), judge whether the detection accuracy meets the requirement: if Acd ≥ Td, turn to the step (2.3k) and finish training; if Acd < Td, judge whether the iteration times are finished: if Sd ≠ 0, turn to the step (2.3b) and reuse the image detection training data set to perform a new round of training until the iterations are finished; if Sd = 0, finish the training;
(2.3k), all parameters w[lay] and b[lay] are saved, lay ∈ (1,2,3,4,5,6,7), and the image detection network training is complete.
7. The method for detecting the wall of the mine well under the low-illumination condition based on the convolutional neural network as claimed in claim 1 or 4, wherein: in the step (3.4), the finally output detection result is one of the no-abnormity, crack, pothole and water seepage conditions.
CN202010517286.6A 2020-06-09 2020-06-09 Method for detecting mine well wall under low illumination condition based on convolutional neural network Active CN111681223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010517286.6A CN111681223B (en) 2020-06-09 2020-06-09 Method for detecting mine well wall under low illumination condition based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN111681223A true CN111681223A (en) 2020-09-18
CN111681223B CN111681223B (en) 2023-04-18

Family

ID=72435659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010517286.6A Active CN111681223B (en) 2020-06-09 2020-06-09 Method for detecting mine well wall under low illumination condition based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111681223B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108510488A (en) * 2018-03-30 2018-09-07 安徽理工大学 Four kinds of damage detecting methods of conveyer belt based on residual error network
CN109305534A (en) * 2018-10-25 2019-02-05 安徽理工大学 Coal wharf's belt conveyor self-adaptation control method based on computer vision
CN110378845A (en) * 2019-06-17 2019-10-25 杭州电子科技大学 A kind of image repair method under extreme condition based on convolutional neural networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姚明海 (Yao Minghai) et al.: "Research on road manhole cover defect detection based on an improved convolutional neural network", 《计算机测量与控制》 (Computer Measurement & Control) *

Also Published As

Publication number Publication date
CN111681223B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN109509187B (en) Efficient inspection algorithm for small defects in large-resolution cloth images
CN105678332B (en) Converter steelmaking end point judgment method and system based on flame image CNN recognition modeling
CN108346144B (en) Automatic bridge crack monitoring and identifying method based on computer vision
CN111507990A (en) Tunnel surface defect segmentation method based on deep learning
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN112581463A (en) Image defect detection method and device, electronic equipment, storage medium and product
CN112884747B (en) Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network
CN108021938A (en) A kind of Cold-strip Steel Surface defect online detection method and detecting system
CN110363337B (en) Oil measuring method and system of oil pumping unit based on data driving
CN109816002B (en) Single sparse self-encoder weak and small target detection method based on feature self-migration
CN111161224A (en) Casting internal defect grading evaluation system and method based on deep learning
CN101140216A (en) Gas-liquid two-phase flow type recognition method based on digital graphic processing technique
CN112465124B (en) Twin depth space-time neural network model acquisition/fault diagnosis method and device
CN109685793A (en) A kind of pipe shaft defect inspection method and system based on three dimensional point cloud
CN113989257A (en) Electric power comprehensive pipe gallery settlement crack identification method based on artificial intelligence technology
CN110826624A (en) Time series classification method based on deep reinforcement learning
CN114091606A (en) Tunnel blasting blast hole half-hole mark identification and damage flatness evaluation classification method
CN111626358B (en) Tunnel surrounding rock grading method based on BIM picture identification
CN116843650A (en) SMT welding defect detection method and system integrating AOI detection and deep learning
CN111914902A (en) Traditional Chinese medicine identification and surface defect detection method based on deep neural network
CN112258495A (en) Building wood crack identification method based on convolutional neural network
CN110633739B (en) Polarizer defect image real-time classification method based on parallel module deep learning
CN113255690B (en) Composite insulator hydrophobicity detection method based on lightweight convolutional neural network
CN116662920B (en) Abnormal data identification method, system, equipment and medium for drilling and blasting method construction equipment
CN111681223B (en) Method for detecting mine well wall under low illumination condition based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant