CN113744237A - Deep learning-based automatic detection method and system for muck fluidity - Google Patents

Deep learning-based automatic detection method and system for muck fluidity

Info

Publication number
CN113744237A
CN113744237A (application CN202111008790.4A)
Authority
CN
China
Prior art keywords
dimensional array
muck
model
fluidity
plasticity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111008790.4A
Other languages
Chinese (zh)
Inventor
骆汉宾
刘文黎
柳洋
李琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202111008790.4A priority Critical patent/CN113744237A/en
Publication of CN113744237A publication Critical patent/CN113744237A/en
Pending legal-status Critical Current

Classifications

    • G06T7/0004 Industrial image inspection
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning-based method and system for automatically detecting the flow plasticity of muck. The method comprises: classifying muck by flow plasticity to establish training sets of different types; constructing a deep convolutional neural network model with a ResNet network as the base network; inputting the training set into the input layer of the model to obtain the three-dimensional original feature map S ∈ R^{c×h×w}; inputting the original feature map S into three convolutional layers with identical filters to generate three-dimensional arrays A, B and C; converting A into a two-dimensional array and transposing it to form a two-dimensional array D, and converting B and C into two-dimensional arrays E and F respectively; performing a logical (softmax) operation on D and E to form a two-dimensional array G, and a further logical operation on G and F, weighted by a learnable parameter θ, to form a two-dimensional array H; converting H into a three-dimensional array and summing it with the original feature map S to update the model; and training the model for muck flow-plasticity classification to obtain an automatic detection model that identifies the flow-plasticity category of the muck.

Description

Deep learning-based automatic detection method and system for muck fluidity
Technical Field
The invention belongs to the technical field of tunnel construction safety, and in particular relates to a deep learning-based method and system for automatically detecting the flow plasticity of muck.
Background
With the development of rail transit, tunnels have become a common form of traffic infrastructure. The earth pressure balance (EPB) shield has become the main method of urban subway tunnel construction because of its fast construction speed, small impact on the surrounding environment and high degree of automation. The key to EPB shield tunneling is to balance the pressure inside and outside the soil chamber and to balance the soil entering and leaving the chamber; the pressure balance is achieved mainly by controlling the volume of discharged soil. During construction, the strata traversed by the shield machine are complex and changeable, and the hardware of the shield machine is difficult to modify once tunneling has started, so the muck in the soil chamber must be conditioned in real time so that it maintains good fluidity. Good fluidity reduces the variation of total thrust and total torque per ring during tunneling and improves shield stability, whereas improper conditioning can cause mud caking, gushing and similar phenomena that seriously affect construction efficiency. The conditioned muck therefore needs to be monitored to judge whether the current conditioning control is appropriate, and the conditioning mode must be adjusted promptly once the conditioned muck is found not to meet requirements. At present the slag outlet is monitored manually; however, manual monitoring is inefficient, strongly affected by subjective impressions and prone to misjudgment, leaving hidden dangers in shield construction safety.
Disclosure of Invention
In view of the above defects or improvement requirements of the prior art, the invention provides a deep learning-based method and system for automatically detecting the flow plasticity of muck. The aim is to provide an intelligent muck identification technique that improves the efficiency and accuracy of monitoring conditioned muck while the shield machine passes through the stratum, thereby improving the safety and stability of shield construction.
In order to achieve the above object, according to one aspect of the present invention, a deep learning-based method for automatically detecting the flow plasticity of muck is provided, comprising:
acquiring muck images, extracting characteristic information of the muck images and classifying the muck by flow plasticity to establish training sets of different types;
constructing a deep convolutional neural network model with a ResNet network as the base network, the model comprising an input layer, a hidden layer and an output layer;
inputting the training set into the input layer of the model and obtaining the three-dimensional original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer, where w is the array width, h is the array height and c is the number of channels; inputting the original feature map S into three convolutional layers with identical filters to generate a three-dimensional array A, a three-dimensional array B and a three-dimensional array C; converting the three-dimensional array A into a two-dimensional array and transposing it to form a two-dimensional array D, and converting the three-dimensional arrays B and C into a two-dimensional array E and a two-dimensional array F respectively, with D ∈ R^{n×c}, E, F ∈ R^{c×n} and n = h × w; performing a logical operation on the two-dimensional arrays D and E to form a two-dimensional array G whose value in the i-th row and j-th column is
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k),
where D_i is the i-th row of D and E_j is the j-th column of E; performing a logical operation on the two-dimensional arrays G and F to form a two-dimensional array H whose i-th row is
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
where F_j is the j-th column of F and θ is a weight learned starting from 0; converting the two-dimensional array H into a three-dimensional array and summing it with the original feature map S to update the feature map of the model;
and training the updated model for muck flow-plasticity classification until the loss function reaches a preset degree of convergence, obtaining an automatic detection model that identifies the flow-plasticity category of the muck.
Preferably, the method further comprises acquiring a muck image, inputting it into the automatic detection model, and identifying and classifying the flow plasticity of the muck with the automatic detection model.
Preferably, the muck is classified into dry soil, wet soil and suitable soil according to its flow plasticity.
Preferably, the deep convolutional neural network model further comprises a softmax layer, and the logical operation on the two-dimensional arrays D and E comprises performing the operation through the softmax layer.
Preferably, each filter is a 3 × 3 filter.
Preferably, the method further comprises: extracting a plurality of feature maps S, performing region-of-interest pooling and L2 normalization on them, connecting and rescaling the resulting features to match the original proportion of the features, matching the number of layers of the original network with a 1 × 1 convolution filter, and extracting the final region-of-interest features.
Preferably, for a muck image captured while the muck is moving, the method further comprises, before extracting the characteristic information: restoring the muck image with a Wiener filtering technique.
Preferably, for a muck image whose contrast does not meet the requirement, the method further comprises, before extracting the characteristic information: using a histogram equalization technique to change the gray level of each pixel by modifying the histogram of the image, thereby enhancing the contrast of the image.
Preferably, training the updated model until the loss function reaches a preset degree of convergence comprises:
setting a plurality of groups of different learning rates, training the model under each learning rate for a preset number of times, and comparing the convergence of the loss function under each group of learning rates so as to use the learning rate with the best convergence as the initial learning parameter of the model.
According to another aspect of the invention, a deep learning-based automatic detection system for muck flow plasticity is provided, comprising:
a training set establishing unit, used for acquiring muck images, extracting characteristic information of the muck images and establishing training sets of different flow-plasticity types;
an initial model establishing unit, used for constructing a deep convolutional neural network model with a ResNet network as the base network, the model comprising an input layer, a hidden layer and an output layer;
a mapping update unit, used for inputting the training set into the input layer of the model and obtaining the three-dimensional original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer, where w is the array width, h is the array height and c is the number of channels; inputting the original feature map S into three convolutional layers with identical filters to generate three-dimensional arrays A, B and C; converting A into a two-dimensional array and transposing it to form a two-dimensional array D, and converting B and C into two-dimensional arrays E and F respectively, with D ∈ R^{n×c}, E, F ∈ R^{c×n} and n = h × w; performing a logical operation on D and E to form a two-dimensional array G whose value in the i-th row and j-th column is
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k);
performing a logical operation on G and F to form a two-dimensional array H whose i-th row is
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
where θ is a weight learned starting from 0; and converting H into a three-dimensional array and summing it with the original feature map S to update the feature map of the model;
and a model training unit, used for training the updated model for muck flow-plasticity classification until the loss function reaches the preset degree of convergence, obtaining an automatic detection model that identifies the flow-plasticity category of the muck.
In general, according to the technical scheme of the invention, a deep convolutional neural network model is established and trained with muck images of different flow plasticity as the training set, yielding a detection model that automatically identifies the flow-plasticity category of the muck. In addition, after the initial convolutional neural network model is established, the feature map is updated through the logical operations described above, which improves the accuracy and reliability of feature extraction during deep learning and hence of model identification. The resulting model can automatically detect and identify the category of the muck, improving the efficiency and accuracy of monitoring conditioned muck while the shield machine passes through the stratum and thereby improving the safety and stability of shield construction.
Drawings
Fig. 1 is a flow chart of the steps of the deep learning-based method for automatically detecting muck flow plasticity according to an embodiment of the present invention;
FIG. 2 shows the preprocessing of three types of conditioned muck with different flow plasticity according to an embodiment of the invention;
FIG. 3 illustrates the logical operations used to update the feature map according to an embodiment of the present invention;
Fig. 4 shows the process of extracting region-of-interest features according to an embodiment of the present invention;
FIG. 5 shows Loss-Epoch curves for different initial learning rates according to an embodiment of the present invention;
Fig. 6 shows the detection results for three types of conditioned muck with different flow plasticity according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flow chart of the steps of the deep learning-based method for automatically detecting muck flow plasticity in an embodiment of the present invention. The method includes:
step S100: and acquiring a muck image, extracting characteristic information of the muck image and classifying muck fluidity plasticity to establish training sets of different types.
In one embodiment, images of the conditioned muck are captured by video, and the original images are cropped, restored and enhanced by preprocessing to obtain feature-enhanced images, creating a data set that includes a training set. The training set contains image features of different types of muck; the model is trained with the training sets of the different types, and the trained model can then recognize and classify the features of muck images. In one embodiment, the conditioned muck is divided into wet soil, dry soil and suitable soil.
In one embodiment, the initial image is cropped by a screenshot method so that only the main body part is retained, eliminating interference from non-main-body parts.
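For illustration, a minimal sketch of this frame-extraction and cropping step is given below, assuming OpenCV is available; the video file names, class folder names and crop coordinates are hypothetical placeholders, not values from the patent.

```python
import cv2
from pathlib import Path

# Hypothetical video files for the three muck classes (placeholders only).
CLASS_VIDEOS = {"dry_soil": "dry.mp4", "wet_soil": "wet.mp4", "suitable_soil": "suitable.mp4"}

def extract_frames(video_path, out_dir, every_n=30, crop=None):
    """Save every n-th frame of a muck video, optionally cropped to the main body region."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            if crop is not None:                 # crop = (y0, y1, x0, x1): keep only the main body part
                y0, y1, x0, x1 = crop
                frame = frame[y0:y1, x0:x1]
            cv2.imwrite(str(out_dir / f"{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()

for label, video in CLASS_VIDEOS.items():
    extract_frames(video, Path("dataset/train") / label, crop=(100, 600, 200, 900))
```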
In one embodiment, for a muck image captured while the muck is moving, the muck image is restored with a Wiener filtering technique before the characteristic information is extracted. Specifically, based on the minimum mean square error criterion, the mean square error between the original image and the restored image is minimized:
e^2 = E{ [f(x, y) − f̂(x, y)]^2 }
F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|^2 / ( |H(u, v)|^2 + S_n(u, v) / S_f(u, v) ) ] · G(u, v)
where e^2 is the mean square error (MSE); E is the expectation operator; f(x, y) and f̂(x, y) are the original image and the restored image respectively; H(u, v) is the degradation function; G(u, v) is the spectrum of the degraded image; S_n(u, v) is the noise power spectrum; S_f(u, v) is the power spectrum of the undegraded image; and F̂(u, v) is the frequency-domain estimate of the restored image. As shown in fig. 2, the images of the three different types of muck are restored with the Wiener filtering technique.
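As a concrete illustration of the restoration formula above, the sketch below applies the frequency-domain Wiener filter with NumPy. The degradation function H(u, v) and the noise-to-signal ratio are assumed to be supplied by the user, since the patent does not specify how they are estimated, and the conjugate form H*/( |H|^2 + S_n/S_f ) is used, which is algebraically equivalent to the expression above.

```python
import numpy as np

def wiener_restore(degraded, h_freq, nsr):
    """Frequency-domain Wiener restoration of a grayscale image.

    degraded : 2-D degraded image g(x, y)
    h_freq   : degradation function H(u, v), same shape as the image spectrum
    nsr      : estimate of S_n(u, v) / S_f(u, v), a scalar or an array
    """
    g_freq = np.fft.fft2(degraded)
    h_mag2 = np.abs(h_freq) ** 2
    # F_hat(u, v) = [H*(u, v) / (|H(u, v)|^2 + S_n/S_f)] * G(u, v)
    f_hat = np.conj(h_freq) / (h_mag2 + nsr) * g_freq
    return np.real(np.fft.ifft2(f_hat))
```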
In one embodiment, for a muck image whose contrast is unsatisfactory, for example one that is biased or has low contrast because of illumination, vibration and similar influences, a histogram equalization technique is additionally applied before the characteristic information is extracted: the gray level of each pixel is changed by modifying the histogram of the image, thereby enhancing the contrast. The calculation is as follows:
s = T(r) = ∫_0^r p_r(w) dw
p_s(s) = p_r(r) · |dr/ds|
where r and s are the normalized gray level of the original image and the gray level after histogram equalization respectively; T(r) is the transformation function from r to s; p_r(r) is the probability density of the random variable r; p_s(s) is the probability density of the random variable s; and F_s(s) is the distribution function of the random variable s, with p_s(s) = dF_s(s)/ds.
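A minimal discrete implementation of this equalization for an 8-bit grayscale image is sketched below; OpenCV's cv2.equalizeHist performs the same operation.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of an 8-bit grayscale image (values 0-255)."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    pdf = hist / hist.sum()                  # p_r(r): probability density of gray level r
    cdf = np.cumsum(pdf)                     # discrete form of T(r), the cumulative transformation
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]                          # map each pixel through s = T(r)
```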
Step S200: a deep convolutional neural network (CNN) model is constructed with a ResNet network as the base network; the model comprises an input layer, a hidden layer and an output layer. The role of the hidden layer is to abstract the features of the input data into another dimensional space. Each layer of the network model contains a number of neurons; between two adjacent layers, each neuron of the later layer is connected to the neurons of the earlier layer, and in an image recognition problem each neuron of the input layer may represent the gray value of one pixel. The neurons of the input layer are connected to the neurons of the hidden layer, and this mapping from the input layer to the hidden layer is called the feature map.
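As a sketch only: a three-class classifier built on a ResNet backbone with torchvision might look as follows. The patent does not specify the ResNet depth or the exact head, so resnet50 and a single fully connected output layer are assumptions.

```python
import torch.nn as nn
import torchvision

def build_model(num_classes: int = 3) -> nn.Module:
    """ResNet base network with an output layer for dry / wet / suitable soil."""
    model = torchvision.models.resnet50()                     # ResNet as the base network
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # replace the output layer
    return model
```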
Step S300: the training set is input into the input layer of the model, and the three-dimensional original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer is obtained, where w is the array width, h is the array height and c is the number of channels. The original feature map S is input into three convolutional layers with identical filters to generate a three-dimensional array A, a three-dimensional array B and a three-dimensional array C. The three-dimensional array A is converted into a two-dimensional array and transposed to form a two-dimensional array D, and the three-dimensional arrays B and C are converted into a two-dimensional array E and a two-dimensional array F respectively, with D ∈ R^{n×c}, E, F ∈ R^{c×n} and n = h × w. A logical operation on D and E forms a two-dimensional array G whose value in the i-th row and j-th column is
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k),
where D_i is the i-th row of D and E_j is the j-th column of E. A logical operation on G and F forms a two-dimensional array H whose i-th row is
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
where F_j is the j-th column of F and θ is a weight learned starting from 0. The two-dimensional array H is converted into a three-dimensional array and summed with the original feature map S to update the feature map of the model.
As shown in fig. 3, the training set is input into the input layer of the deep convolutional neural network model, and the original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer is obtained; the original feature map S is a three-dimensional array with array width w, array height h and c channels. The feature map S is input into three convolutional layers with filters of the same size, generating three new feature maps, namely the three-dimensional arrays A, B and C (A, B, C ∈ R^{c×h×w}). In one embodiment, each filter is a 3 × 3 filter.
Because logical operations cannot be performed directly on a three-dimensional array, the three-dimensional arrays A, B and C need to be reshaped: a three-dimensional array of size h × w × c is converted into a c × n two-dimensional array with n = h × w. The converted two-dimensional array still retains the characteristic information of the original three-dimensional array; the data are merely converted from three-dimensional to two-dimensional form to make the logical operations convenient. Taking c = 3 as an example, the three-dimensional array has 3 channels, each of size h × w; the h × w two-dimensional array of each channel is converted into a 1 × n one-dimensional array with n = h × w, and the three 1 × n one-dimensional arrays are then stacked into a 3 × n two-dimensional array. Likewise, converting a c × n two-dimensional array back into an h × w × c three-dimensional array is simply the reverse of the above process.
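The reshaping described above can be checked with a few lines of NumPy (a plain illustration, not code from the patent):

```python
import numpy as np

c, h, w = 3, 4, 5
a3d = np.arange(c * h * w).reshape(c, h, w)   # three-dimensional array of size c x h x w
a2d = a3d.reshape(c, h * w)                   # two-dimensional array of size c x n, n = h x w
back = a2d.reshape(c, h, w)                   # the reverse process restores the original array
assert (back == a3d).all()                    # the characteristic information is preserved
```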
As shown in fig. 3, the three-dimensional array A is reshaped into a two-dimensional array and then transposed to form an n × c two-dimensional array D. The three-dimensional array B is converted into a c × n two-dimensional array E and the three-dimensional array C into a c × n two-dimensional array F, with n = h × w. A logical operation on the two-dimensional arrays D and E forms a two-dimensional array G whose value in the i-th row and j-th column is
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k).
A logical operation on the two-dimensional arrays G and F forms a two-dimensional array H whose i-th row is
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
where θ is a weight learned starting from 0. The weight θ is a learnable scalar initialized to 0; introducing a learnable θ allows the network to rely first on cues in the local neighborhood and then gradually learn to assign more weight to non-local evidence. The two-dimensional array H is converted into a three-dimensional array and summed with the original feature map S to update the feature map of the model; the updated feature map is
S′ = reshape(H) + S.
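The whole feature-map update can be summarized by the following PyTorch sketch. It follows the description above (three convolutions with identical filters, reshaping, a softmax-normalized array G, a θ-weighted array H and a residual sum with S); the 3 × 3 kernel size and the exact matrix orientation of H are assumptions where the text leaves room for interpretation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMapUpdate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # three convolutional layers with identical filters produce the arrays A, B and C
        self.conv_a = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv_b = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv_c = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.theta = nn.Parameter(torch.zeros(1))   # learnable scalar weight, initialized to 0

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        b, c, h, w = s.shape                                 # original feature map S, size c x h x w
        n = h * w
        d = self.conv_a(s).view(b, c, n).permute(0, 2, 1)    # array D: reshape A and transpose (n x c)
        e = self.conv_b(s).view(b, c, n)                     # array E (c x n)
        f = self.conv_c(s).view(b, c, n)                     # array F (c x n)
        g = F.softmax(torch.bmm(d, e), dim=-1)               # array G (n x n), softmax over each row
        h_out = torch.bmm(f, g.permute(0, 2, 1))             # array H (c x n): F aggregated with weights G
        h_out = h_out.view(b, c, h, w)                       # convert H back to three dimensions
        return self.theta * h_out + s                        # theta-weighted sum with the original map S

# usage: update = FeatureMapUpdate(256); out = update(torch.randn(1, 256, 14, 14))
```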
In this embodiment, because ResNet is used as the base network of Faster R-CNN, translation invariance and the purely local nature of its features can easily cause misclassification of objects and content. By performing the logical operations above, location-aware features are introduced and wider context information is encoded into the local features, which improves the accuracy with which the model classifies and identifies the features.
In one embodiment, as shown in fig. 4, to obtain feature details of regions of interest at different granularities, the invention further improves the region-of-interest pooling layer by combining shallow and deep features from multiple convolutional layers. Specifically, several convolutional feature maps S are extracted, region-of-interest pooling and L2 normalization are applied to them, the resulting features are then connected and rescaled to match the original proportion of the features, and finally a 1 × 1 convolution filter is used to match the number of layers of the original network. In effect, intermediate results and the final feature map associated with the region proposal network (RPN) are combined to produce the final region-of-interest pooled features used for object detection.
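A hedged sketch of this multi-scale region-of-interest fusion is shown below, using torchvision's roi_pool. The patent does not give the pooled output size, the rescaling scheme or the channel counts, so those values and the learnable scale factor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import roi_pool

class MultiScaleRoIFusion(nn.Module):
    """Combine shallow and deep feature maps into one region-of-interest feature."""
    def __init__(self, in_channels_list, out_channels, output_size=7):
        super().__init__()
        self.output_size = output_size
        self.scale = nn.Parameter(torch.tensor(10.0))                    # rescaling after L2 normalization
        self.reduce = nn.Conv2d(sum(in_channels_list), out_channels, 1)  # 1 x 1 conv matches the original channel count

    def forward(self, feature_maps, rois, spatial_scales):
        pooled = []
        for fmap, sc in zip(feature_maps, spatial_scales):
            p = roi_pool(fmap, rois, output_size=self.output_size, spatial_scale=sc)
            pooled.append(F.normalize(p, p=2, dim=1))   # L2 normalization of each pooled feature
        fused = torch.cat(pooled, dim=1) * self.scale   # connect and rescale the result features
        return self.reduce(fused)                       # final region-of-interest features
```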
Step S400: the updated model is trained for muck flow-plasticity classification until the loss function reaches a preset degree of convergence, yielding an automatic detection model that identifies the flow-plasticity category of the muck.
In one embodiment, after the feature map is updated, the model is trained for muck flow-plasticity classification. Specifically, the number of training epochs can be fixed, for example by setting the maximum training parameter MaxEpochs to 30 in every case; the convergence of the loss function is then tested under several different learning rates, the convergence under each learning rate is compared, Loss-Epoch curves are plotted, and the learning rate with the best convergence is used as the initial learning parameter of the model. For example, four initial learning rates of 5 × 10^-3, 10^-3, 10^-4 and 10^-5 were tested; the convergence results are shown in fig. 5. The tests showed that with initial learning rates of 10^-4 and 10^-5 the training gradient oscillated around the minimum and ultimately failed to converge, indicating that these learning rates are unsuitable for this task. With initial learning rates of 5 × 10^-3 and 10^-3 the network converged: at 5 × 10^-3 the iteration converged after about 21000 iterations, while at 10^-3 it converged after about 11000 iterations, so the learning rate of 10^-3 converges roughly twice as fast as 5 × 10^-3. In summary, 10^-3 is selected as the initial learning rate, and the initial learning parameters of the network model are obtained.
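The learning-rate comparison can be reproduced with a simple sweep such as the sketch below, where model_fn, train_loader and loss_fn stand in for the user's own model constructor, data loader and loss; the SGD optimizer and momentum value are assumptions, as the patent does not name the optimizer.

```python
import torch

def sweep_learning_rates(model_fn, train_loader, loss_fn,
                         rates=(5e-3, 1e-3, 1e-4, 1e-5), max_epochs=30):
    """Train one fresh model per candidate learning rate and record the loss history."""
    histories = {}
    for lr in rates:
        model = model_fn()
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        losses = []
        for _ in range(max_epochs):                      # MaxEpochs = 30, as in the embodiment
            for images, labels in train_loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
                losses.append(loss.item())
        histories[lr] = losses                           # plot loss vs. iteration to compare convergence
    return histories
```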
In one embodiment, the data set includes a training set and a validation set; the validation set is fed into the network to test the accuracy of the model, and the detection results for the three types of conditioned muck shown in fig. 6 are obtained, indicating that the model can accurately identify the type of muck.
In one embodiment, after the trained model is obtained it can be put into practical use: during tunneling with an earth pressure balance shield, muck images are acquired and input into the automatic detection model, the flow plasticity of the muck is identified and classified by the model, and when the muck is identified as unsuitable soil the monitoring system automatically feeds back to the control system to adjust the muck conditioning mode.
The application also relates to a deep learning-based automatic detection system for muck flow plasticity, comprising:
a training set establishing unit, used for acquiring muck images, extracting characteristic information of the muck images and establishing training sets of different flow-plasticity types;
an initial model establishing unit, used for constructing a deep convolutional neural network model with a ResNet network as the base network, the model comprising an input layer, a hidden layer and an output layer;
a mapping update unit, used for inputting the training set into the input layer of the model and obtaining the three-dimensional original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer, where w is the array width, h is the array height and c is the number of channels; inputting the original feature map S into three convolutional layers with identical filters to generate three-dimensional arrays A, B and C; converting A into a two-dimensional array and transposing it to form a two-dimensional array D, and converting B and C into two-dimensional arrays E and F respectively, with D ∈ R^{n×c}, E, F ∈ R^{c×n} and n = h × w; performing a logical operation on D and E to form a two-dimensional array G whose value in the i-th row and j-th column is
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k);
performing a logical operation on G and F to form a two-dimensional array H whose i-th row is
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
where θ is a weight learned starting from 0; and converting H into a three-dimensional array and summing it with the original feature map S to update the feature map of the model;
and a model training unit, used for training the updated model for muck flow-plasticity classification until the loss function reaches the preset degree of convergence, obtaining an automatic detection model that identifies the flow-plasticity category of the muck.
The deep learning-based automatic detection system for muck flow plasticity is used to execute the deep learning-based automatic detection method for muck flow plasticity described above; the functions of the units of the system correspond to the steps of the method and are therefore not described again.
A deep convolutional neural network model is established and trained with muck images of different flow plasticity as the training set, yielding a detection model that automatically identifies the flow-plasticity category of the muck. In addition, after the initial convolutional neural network model is established, the feature map is updated through logical operations, which improves the accuracy and reliability of feature extraction during deep learning and hence of model identification. The resulting model can automatically detect and identify the category of the muck, improving the efficiency and accuracy of monitoring conditioned muck while the shield machine passes through the stratum and thereby improving the safety and stability of shield construction.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A deep learning-based automatic detection method for muck flow plasticity, characterized by comprising the following steps:
acquiring muck images, extracting characteristic information of the muck images and classifying the muck by flow plasticity to establish training sets of different types;
constructing a deep convolutional neural network model with a ResNet network as the base network, the model comprising an input layer, a hidden layer and an output layer;
inputting the training set into the input layer of the model and obtaining the three-dimensional original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer, wherein w is the array width, h is the array height and c is the number of channels; inputting the original feature map S into three convolutional layers with identical filters to generate a three-dimensional array A, a three-dimensional array B and a three-dimensional array C; converting the three-dimensional array A into a two-dimensional array and transposing it to form a two-dimensional array D, and converting the three-dimensional arrays B and C into a two-dimensional array E and a two-dimensional array F respectively, wherein D ∈ R^{n×c}, E, F ∈ R^{c×n} and n = h × w; performing a logical operation on the two-dimensional arrays D and E to form a two-dimensional array G, the value in the i-th row and j-th column of G being
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k),
wherein D_i is the i-th row of D and E_j is the j-th column of E; performing a logical operation on the two-dimensional arrays G and F to form a two-dimensional array H, the i-th row of H being
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
wherein F_j is the j-th column of F and θ is a weight learned starting from 0; and converting the two-dimensional array H into a three-dimensional array and summing it with the original feature map S to update the feature map of the model;
and training the updated model for muck flow-plasticity classification until the loss function reaches a preset degree of convergence, obtaining an automatic detection model that identifies the flow-plasticity category of the muck.
2. The method for automatically detecting muck flow plasticity according to claim 1, further comprising acquiring a muck image, inputting it into the automatic detection model, and identifying and classifying the flow plasticity of the muck with the automatic detection model.
3. The method for automatically detecting muck flow plasticity according to claim 1, wherein the muck is divided into dry soil, wet soil and suitable soil according to its flow plasticity.
4. The method for automatically detecting muck flow plasticity according to claim 1, wherein the deep convolutional neural network model further comprises a softmax layer, and the logical operation on the two-dimensional arrays D and E comprises performing the operation through the softmax layer.
5. The method for automatically detecting muck flow plasticity according to claim 1, wherein each filter is a 3 × 3 filter.
6. The method for automatically detecting muck flow plasticity according to claim 1, further comprising: extracting a plurality of feature maps S, performing region-of-interest pooling and L2 normalization on them, connecting and rescaling the resulting features to match the original proportion of the features, matching the number of layers of the original network with a 1 × 1 convolution filter, and extracting the final region-of-interest features.
7. The method for automatically detecting muck flow plasticity according to claim 1, wherein, for a muck image captured while the muck is moving, the method further comprises, before extracting the characteristic information: restoring the muck image with a Wiener filtering technique.
8. The method for automatically detecting muck flow plasticity according to claim 1, wherein, for a muck image whose contrast does not meet the requirement, the method further comprises, before extracting the characteristic information: using a histogram equalization technique to change the gray level of each pixel by modifying the histogram of the image, thereby enhancing the contrast of the image.
9. The method for automatically detecting muck flow plasticity according to claim 1, wherein training the updated model until the loss function reaches a preset degree of convergence comprises:
setting a plurality of groups of different learning rates, training the model under each learning rate for a preset number of times, and comparing the convergence of the loss function under each group of learning rates so as to use the learning rate with the best convergence as the initial learning parameter of the model.
10. A deep learning-based automatic detection system for muck flow plasticity, characterized by comprising:
a training set establishing unit, used for acquiring muck images, extracting characteristic information of the muck images and establishing training sets of different flow-plasticity types;
an initial model establishing unit, used for constructing a deep convolutional neural network model with a ResNet network as the base network, the model comprising an input layer, a hidden layer and an output layer;
a mapping update unit, used for inputting the training set into the input layer of the model and obtaining the three-dimensional original feature map S ∈ R^{c×h×w} from the input layer to the hidden layer, wherein w is the array width, h is the array height and c is the number of channels; inputting the original feature map S into three convolutional layers with identical filters to generate a three-dimensional array A, a three-dimensional array B and a three-dimensional array C; converting the three-dimensional array A into a two-dimensional array and transposing it to form a two-dimensional array D, and converting the three-dimensional arrays B and C into a two-dimensional array E and a two-dimensional array F respectively, wherein D ∈ R^{n×c}, E, F ∈ R^{c×n} and n = h × w; performing a logical operation on the two-dimensional arrays D and E to form a two-dimensional array G, the value in the i-th row and j-th column of G being
G_ij = exp(D_i · E_j) / Σ_{k=1}^{n} exp(D_i · E_k);
performing a logical operation on the two-dimensional arrays G and F to form a two-dimensional array H, the i-th row of H being
H_i = θ · Σ_{j=1}^{n} G_ij · F_j,
wherein θ is a weight learned starting from 0; and converting the two-dimensional array H into a three-dimensional array and summing it with the original feature map S to update the feature map of the model;
and a model training unit, used for training the updated model for muck flow-plasticity classification until the loss function reaches the preset degree of convergence, obtaining an automatic detection model that identifies the flow-plasticity category of the muck.
CN202111008790.4A 2021-08-31 2021-08-31 Deep learning-based automatic detection method and system for muck fluidity Pending CN113744237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111008790.4A CN113744237A (en) 2021-08-31 2021-08-31 Deep learning-based automatic detection method and system for muck fluidity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111008790.4A CN113744237A (en) 2021-08-31 2021-08-31 Deep learning-based automatic detection method and system for muck fluidity

Publications (1)

Publication Number Publication Date
CN113744237A true CN113744237A (en) 2021-12-03

Family

ID=78734139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111008790.4A Pending CN113744237A (en) 2021-08-31 2021-08-31 Deep learning-based automatic detection method and system for muck fluidity

Country Status (1)

Country Link
CN (1) CN113744237A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092859A (en) * 2017-03-14 2017-08-25 佛山科学技术学院 A kind of depth characteristic extracting method of threedimensional model
CN110001608A (en) * 2019-03-06 2019-07-12 江苏大学 A kind of automobile smart brake system and its control method based on road surface vision-based detection
AU2020102100A4 (en) * 2020-09-02 2020-10-22 Khan, Mohd. Arsh MR Disease detection using iot and machine learning in rice crops
CN112906300A (en) * 2021-02-09 2021-06-04 北京化工大学 Polarized SAR (synthetic Aperture Radar) soil humidity inversion method based on two-channel convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUE PAN et al.: "A spatial-channel hierarchical deep learning network for pixel-level automated crack detection", Automation in Construction, pages 1-16 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758184A (en) * 2022-06-14 2022-07-15 福建南方路面机械股份有限公司 Deep learning-based muck classification processing guide method and device and readable medium
CN114758184B (en) * 2022-06-14 2022-09-06 福建南方路面机械股份有限公司 Deep learning-based muck classification processing guide method and device and readable medium
WO2023240776A1 (en) * 2022-06-14 2023-12-21 福建南方路面机械股份有限公司 Deep learning-based muck classification processing guidance method and apparatus, and readable medium

Similar Documents

Publication Publication Date Title
CN109949317B (en) Semi-supervised image example segmentation method based on gradual confrontation learning
CN109657584B (en) Improved LeNet-5 fusion network traffic sign identification method for assisting driving
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
WO2021134871A1 (en) Forensics method for synthesized face image based on local binary pattern and deep learning
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
EP3620980B1 (en) Learning method, learning device for detecting lane by using cnn and testing method, testing device using the same
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN113744237A (en) Deep learning-based automatic detection method and system for muck fluidity
CN111008639B (en) License plate character recognition method based on attention mechanism
CN109741340B (en) Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network
CN110599459A (en) Underground pipe network risk assessment cloud system based on deep learning
CN109977968A (en) A kind of SAR change detecting method of deep learning classification and predicting
CN113052215A (en) Sonar image automatic target identification method based on neural network visualization
CN116342942A (en) Cross-domain target detection method based on multistage domain adaptation weak supervision learning
CN113592894A (en) Image segmentation method based on bounding box and co-occurrence feature prediction
CN116543168A (en) Garbage image denoising method based on multidimensional image information fusion
CN110349119B (en) Pavement disease detection method and device based on edge detection neural network
CN114168782B (en) Deep hash image retrieval method based on triplet network
CN115482463A (en) Method and system for identifying land cover of mine area of generated confrontation network
CN116012702A (en) Remote sensing image scene level change detection method
CN115965968A (en) Small sample target detection and identification method based on knowledge guidance
CN114724245A (en) CSI-based incremental learning human body action identification method
CN114119382A (en) Image raindrop removing method based on attention generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination