CN116934852A - Lattice type slope protection monitoring system and method based on deep learning
- Publication number: CN116934852A
- Application number: CN202310255224.6A
- Authority: CN (China)
- Prior art keywords: lattice, image, slope protection, deep learning, module
- Prior art date: 2023-03-16
- Legal status: Pending (assumed status, not a legal conclusion)
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06N3/045 — Combinations of networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- G06V10/141 — Control of illumination
- G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06T2207/10024 — Color image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20112 — Image segmentation details
- G06T2207/20132 — Image cropping
- G06T2207/20172 — Image enhancement details
- G06T2207/20212 — Image combination
- G06T2207/20224 — Image subtraction
Abstract
The application discloses a lattice type slope protection monitoring system and method based on deep learning, belongs to the technical field of slope management, and addresses the problem of accurately calculating the displacement or deformation of lattice beams. The system comprises an imaging module, an auxiliary lighting module, a movement control module and a computer; the modules are matched and connected with one another, and the imaging module, the auxiliary lighting module and the movement control module are all electrically connected with the computer. The method comprises the following steps: S1, constructing a prediction model based on a sample image dataset, a deep learning network and a semantic segmentation model; S2, acquiring an image of the lattice type slope protection with the lattice type slope protection monitoring system; S3, preprocessing the image to obtain a preprocessed image; S4, carrying out prediction analysis on the preprocessed image of S3 with the prediction model of S1; and S5, calculating the displacement or deformation of the lattice beam. The application is beneficial to improving the stability of slope engineering and the reliability of slope monitoring.
Description
Technical Field
The application relates to the technical field of side slope treatment, in particular to a lattice type slope protection monitoring system and method based on deep learning.
Background
Lattice type slope protection is a widely used modern slope support method with the advantages of low cost, high strength and no height limitation. Its main structure comprises two parts: lattice beams and anchor cables. The anchor cables are anchored into the rock mass and their outer ends are connected by the lattice beams, so that the body prone to sliding is tied to the rock mass, improving the integrity and stability of the slope and achieving the purpose of slope protection. Although this slope protection method is widely and maturely used in China, the service life of the anchoring structure is difficult to evaluate comprehensively and accurately under the action of various environmental factors. As the service time increases, the anchor cables may be sheared at the sliding surface, the anchored section may slip, and the lattice beams may crack under stress, so that the structure deforms under rock and soil pressure and landslide accidents occur. The slope therefore needs to be monitored in real time to ensure the stability of the slope body.
At present, the conventional approach to lattice type slope protection monitoring is to monitor the stress of the anchor cables, for example with fiber Bragg grating pressure sensors, force measuring rings or resistance strain gauges. The fiber Bragg grating has high precision and good stability but is fragile; the force measuring ring is convenient to install and use but has low precision; the resistance strain gauge has low manufacturing cost, is easy to install and has high precision, but poor durability.
Disclosure of Invention
Aiming at the problems in the prior art, the application provides a lattice type slope protection monitoring system and method based on deep learning, with the aim of improving the stability of slope engineering and the reliability of slope monitoring.
The technical scheme adopted by the application is as follows:
A lattice type slope protection monitoring system based on deep learning includes an imaging module, an auxiliary lighting module, a movement control module and a computer; these are matched and connected with one another, and the imaging module, the auxiliary lighting module and the movement control module are all electrically connected with the computer; the imaging module is used for acquiring image data, the auxiliary lighting module is used for illumination, the movement control module is used for controlling the movement of the imaging module and the auxiliary lighting module, and the computer is used for receiving the image data acquired by the imaging module and for feedback, adjustment and control of the movement control module.
Preferably, the imaging module includes a zoom lens, an area-array CMOS camera and an image acquisition card, which are fitted together and electrically connected with one another.
Preferably, the auxiliary lighting module includes a light source and a light source controller, which are electrically connected with each other.
Preferably, the movement control module includes a four-way pan-tilt head and a pan-tilt controller, which are electrically connected with each other.
A lattice type slope protection monitoring method based on deep learning comprises the following steps:
s1, constructing a prediction model based on a sample image dataset, a deep learning network and a semantic segmentation model;
s2, acquiring an image of the lattice type slope protection by using a lattice type slope protection monitoring system;
s3, preprocessing the image acquired in the S2 to obtain a preprocessed image;
s4, carrying out predictive analysis on the preprocessed image in the S3 through the predictive model in the S1 to obtain a predictive analysis result;
and S5, calculating the displacement or deformation of the lattice beam according to the prediction analysis result in the step S4.
Preferably, the specific process of S1 is as follows:
s101, acquiring an initial sample image dataset containing a lattice;
s102, labeling all lattices in an initial sample image dataset by using deep learning image label labeling software to obtain a first sample image dataset;
s103, performing synchronous repeated cutting on the initial sample image data set and the first sample image data set to obtain a second sample image data set;
s104, dividing the second sample image data set into a training set, a testing set and a verification set, wherein the training set is used for training a prediction model, the verification set is used for checking convergence conditions of the prediction model and selecting parameters of the prediction model, and the testing set is used for testing generalization errors of the prediction model;
s105, after S104 is completed, training with the semantic segmentation model, setting the learning rate and the number of iterations, and using momentum optimization and a cross entropy loss function.
Preferably, the specific process of S2 is as follows:
the computer of the lattice type slope protection monitoring system controls the pan-tilt head to change the orientation of the imaging module, the imaging module shoots lattice type slope protection images, and the shot lattice type slope protection images are returned.
Preferably, the preprocessing in S3 specifically includes: randomly rotating, mirroring, cropping, changing the brightness of and adding noise to the acquired image.
Preferably, the specific process of S4 is as follows:
cutting the preprocessed image of S3 into a plurality of sub-images whose sizes correspond to the sub-images of the sample images in the prediction model, and importing the sub-images of the preprocessed image into the prediction model for prediction analysis to obtain a prediction analysis result.
Preferably, the specific process of S5 is as follows:
calculating the displacement or deformation of the lattice beam by an image difference method: after the prediction analysis result is obtained in S4, if a lattice beam has been displaced or deformed, a strip-shaped deviation appears when the image predicted by the prediction model is differenced with the image without displacement or deformation; the maximum width of the strip-shaped deviation is taken as the pixel displacement or deformation value of the lattice beam and converted into an actual distance; to this end, the imaging module of the lattice slope protection monitoring system is calibrated to determine the ratio of pixels to actual distance, the ratio of the pixel length of each lattice beam on the image to its actual length is calculated to obtain the scale at different positions of the image, and the scale at the corresponding position is found with the pixel coordinates of the difference image, so that the actual distance is obtained;
because deformation of the lattice beams is difficult to observe in practice, two lattice beams in the image are deformed manually using computer software to simulate a photographed image of displaced or deformed lattice beams.
In summary, the application has the following beneficial effects:
(1) The application improves the stability of slope engineering and the reliability of slope monitoring;
(2) The application uses the image processing technology to macroscopically monitor the change condition of the lattice slope protection;
(3) According to the application, the lightweight bilateral segmentation network object segmentation model in deep learning is applied to lattice type slope protection monitoring; the model is trained on a self-built dataset, all lattice beams are then detected with the trained model, and the displacement change is obtained by comparing lattice pictures taken at different times, thereby realizing lattice type slope protection monitoring.
Drawings
The application will now be described by way of example and with reference to the accompanying drawings in which:
FIG. 1 is a schematic block diagram of a lattice type slope protection monitoring system of the application;
FIG. 2 is a schematic view of dataset cropping according to the present application;
FIG. 3 is a diagram of a BiSeNet-V2 network structure according to the present application;
FIG. 4 is a block diagram of a backbone block and context insert block according to the present application;
FIG. 5 is a schematic view of the structure of the gather-and-expansion layer according to the present application;
FIG. 6 is a schematic diagram of the bilateral guided aggregation layer structure according to the present application;
FIG. 7 is a schematic representation of the annotation of a dataset using Labelme software in accordance with the present application;
FIG. 8 is a diagram of a model training loss curve (a) and an accuracy curve (b) according to the present application;
fig. 9 is a schematic diagram of semantic segmentation effect of a predicted picture according to the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
In describing embodiments of the present application, it should be noted that the terms "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. refer to an azimuth or a positional relationship based on that shown in the drawings, or that the inventive product is conventionally put in place when used, merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific azimuth, be configured and operated in a specific azimuth, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
The present application will be described in detail with reference to fig. 1 to 9.
Example 1
The lattice type slope protection monitoring system based on deep learning comprises an imaging module, an auxiliary lighting module, a movement control module and a computer; these are matched and connected with one another, and the imaging module, the auxiliary lighting module and the movement control module are all electrically connected with the computer. The imaging module is used for acquiring image data, the auxiliary lighting module is used for illumination, the movement control module is used for controlling the movement of the imaging module and the auxiliary lighting module, and the computer is used for receiving the image data acquired by the imaging module and for feedback, adjustment and control of the movement control module.
The imaging module includes a zoom lens, an area-array CMOS camera and an image acquisition card, which are fitted together and electrically connected with one another.
The auxiliary lighting module includes a light source and a light source controller, which are electrically connected with each other.
The movement control module includes a four-way pan-tilt head and a pan-tilt controller, which are electrically connected with each other.
The working principle of the lattice type slope protection monitoring system is as follows:
and shooting an image of the lattice type slope protection by using an area array CMOS camera, preprocessing the image, dividing the lattice image by using a trained picture dividing model, and comparing the image with the previous divided image to obtain a displacement change condition. When the lattices in different places need to be shot, the cradle head is controlled by software, so that the camera can rotate up, down, left and right in four directions. Because of the time variation, the brightness of the pictures shot at different moments is different, the auxiliary light source may be used for illumination during shooting, and the software automatically controls the starting of the auxiliary illumination according to the sensing information of the light sensor during shooting.
A lattice type slope protection monitoring method based on deep learning comprises the following steps:
s1, constructing a prediction model based on a sample image dataset, a deep learning network and a semantic segmentation model;
s2, acquiring an image of the lattice type slope protection by using a lattice type slope protection monitoring system;
s3, preprocessing the image acquired in the S2 to obtain a preprocessed image;
s4, carrying out predictive analysis on the preprocessed image in the S3 through the predictive model in the S1 to obtain a predictive analysis result;
and S5, calculating the displacement or deformation of the lattice beam according to the prediction analysis result in the step S4.
The specific process of S1 is as follows:
s101, acquiring an initial sample image dataset containing a lattice;
s102, labeling all lattices in an initial sample image dataset by using deep learning image label labeling software to obtain a first sample image dataset;
s103, performing synchronous repeated cutting on the initial sample image data set and the first sample image data set to obtain a second sample image data set;
s104, dividing the second sample image data set into a training set, a testing set and a verification set, wherein the training set is used for training a prediction model, the verification set is used for checking convergence conditions of the prediction model and selecting parameters of the prediction model, and the testing set is used for testing generalization errors of the prediction model;
s105, after S104 is completed, training with the semantic segmentation model, setting the learning rate and the number of iterations, and using momentum optimization and a cross entropy loss function.
The specific process of S2 is as follows:
the computer of the lattice type slope protection monitoring system controls the pan-tilt head to change the orientation of the imaging module, the imaging module shoots lattice type slope protection images, and the shot lattice type slope protection images are returned.
The preprocessing in S3 specifically includes: randomly rotating, mirroring, cropping, changing the brightness of and adding noise to the acquired image.
The specific process of S4 is as follows:
cutting the preprocessed image of S3 into a plurality of sub-images whose sizes correspond to the sub-images of the sample images in the prediction model, and importing the sub-images of the preprocessed image into the prediction model for prediction analysis to obtain a prediction analysis result.
S5, the specific process is as follows:
calculating the displacement or deformation of the lattice beam by an image difference method: after the prediction analysis result is obtained in S4, if a lattice beam has been displaced or deformed, a strip-shaped deviation appears when the image predicted by the prediction model is differenced with the image without displacement or deformation; the maximum width of the strip-shaped deviation is taken as the pixel displacement or deformation value of the lattice beam and converted into an actual distance; to this end, the imaging module of the lattice slope protection monitoring system is calibrated to determine the ratio of pixels to actual distance, the ratio of the pixel length of each lattice beam on the image to its actual length is calculated to obtain the scale at different positions of the image, and the scale at the corresponding position is found with the pixel coordinates of the difference image, so that the actual distance is obtained;
because deformation of the lattice beams is difficult to observe in practice, two lattice beams in the image are deformed manually using computer software to simulate a photographed image of displaced or deformed lattice beams.
Example two
In practice we are often faced with small data volumes. The training effect of a network model depends heavily on the number and quality of training samples; if the samples are insufficient, the network tends to fall into a local optimum and overfit: although it performs well on the training set during training, its generalization ability is poor. To reduce the impact of this problem on the trained model, data enhancement techniques are required. For a computer, a picture whose brightness is changed or that is rotated or mirrored counts as a different picture, even though the elements in it are unchanged. Therefore, to improve the robustness of the algorithm, the dataset is commonly subjected to random rotation, mirroring, cropping, brightness changes, added noise and similar operations, so that the algorithm can cope with more situations.
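A minimal augmentation sketch in Python/OpenCV is shown below. The rotation, crop, brightness and noise ranges are assumptions rather than values fixed by the application; for segmentation data, the geometric operations (rotation, flip, crop) would also have to be applied identically to the label image, while the photometric ones apply only to the input image.

```python
import random

import cv2
import numpy as np

def augment(img):
    """Apply one random combination of rotation, flip, crop, brightness change and noise."""
    h, w = img.shape[:2]
    # random rotation about the image centre (angle range is an assumed choice)
    angle = random.uniform(-15, 15)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(img, M, (w, h))
    # random horizontal flip (mirror symmetry)
    if random.random() < 0.5:
        img = cv2.flip(img, 1)
    # random crop back to 90 % of the original size
    ch, cw = int(h * 0.9), int(w * 0.9)
    y, x = random.randint(0, h - ch), random.randint(0, w - cw)
    img = img[y:y + ch, x:x + cw]
    # random brightness shift
    img = np.clip(img.astype(np.float32) + random.uniform(-30, 30), 0, 255).astype(np.uint8)
    # additive Gaussian noise
    img = np.clip(img.astype(np.float32) + np.random.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    return img
```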
Because the lattice elements occupy a small proportion of each picture, training on whole pictures takes longer and the network may have difficulty extracting the lattice details. Each picture of the dataset is therefore cropped randomly several times, splitting it into a large number of sub-pictures of different sizes, as shown in fig. 2. Compared with the original image, the lattice occupies a larger area in each sub-image, which makes it easier for the network to extract details and semantic information.
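The synchronized random cropping of an image and its annotation mask can be sketched as follows; the number and size range of crops are assumptions, and the source image is assumed to be at least `min_size` pixels in both dimensions.

```python
import random

def random_sub_images(image, label, n_crops=10, min_size=256, max_size=512):
    """Cut n_crops square sub-images at random positions and sizes, cropping the image
    and its annotation mask at identical coordinates so they stay aligned."""
    h, w = image.shape[:2]
    crops = []
    for _ in range(n_crops):
        size = random.randint(min_size, min(max_size, h, w))
        y = random.randint(0, h - size)
        x = random.randint(0, w - size)
        crops.append((image[y:y + size, x:x + size],
                      label[y:y + size, x:x + size]))
    return crops
```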
The BiSeNet V2 target segmentation model is a lightweight semantic segmentation network. It processes low-level detail and high-level semantic information in separate branches; whereas other current segmentation networks sacrifice low-level detail extraction to improve efficiency, BiSeNet V2 achieves both high precision and high efficiency. As shown in fig. 3, the main structure of the network consists of a two-branch backbone (purple box: the upper half is the detail branch, the lower half the semantic branch), an aggregation layer (orange box) and an enhancement part (yellow box).
The detail branch uses a relatively simple, wide-channel 9-layer convolutional structure designed with VGG-style blocks; the picture is downsampled by a factor of 8 in total and its detail information is extracted. To reduce memory consumption, this part does not use residual connections to improve the representation capability of the network. The semantic branch uses a lightweight network structure and runs in parallel with the detail branch. It has three key components, as shown in figs. 4 and 5. First, the stem block shown in fig. 4(a) downsamples the picture with two branches and concatenates the two downsampled features as output. It is followed by the context embedding block shown in fig. 4(b), which uses global average pooling and a residual connection to embed context information obtained through a large receptive field. Finally, there is the gather-and-expansion layer shown in fig. 5. The layer comprises: (1) a 3×3 convolution layer that aggregates features and expands them into a higher-dimensional space; (2) a 3×3 depthwise convolution performed on each output channel of the expansion layer; (3) a 1×1 convolution layer used as a projection layer to project the output back to the low-channel space. When the stride equals 2, two 3×3 convolution layers are used instead of a 5×5 convolution, enlarging the receptive field while reducing computation.
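Under the assumption that the standard BiSeNet V2 expansion ratio of 6 is used, the stride-1 gather-and-expansion layer described above can be sketched in PaddlePaddle as follows (the stride-2 variant would replace the residual shortcut with a strided depthwise convolution).

```python
import paddle.nn as nn

class GatherExpansion(nn.Layer):
    """Stride-1 gather-and-expansion layer: 3x3 conv to expand, 3x3 depthwise conv,
    1x1 projection back to the input width, plus a residual connection."""
    def __init__(self, channels, expand=6):          # expansion ratio 6 is an assumption
        super().__init__()
        mid = channels * expand
        self.conv = nn.Sequential(
            nn.Conv2D(channels, mid, 3, padding=1, bias_attr=False),          # (1) aggregate and expand
            nn.BatchNorm2D(mid), nn.ReLU(),
            nn.Conv2D(mid, mid, 3, padding=1, groups=mid, bias_attr=False),   # (2) 3x3 depthwise conv
            nn.BatchNorm2D(mid),
            nn.Conv2D(mid, channels, 1, bias_attr=False),                     # (3) 1x1 projection layer
            nn.BatchNorm2D(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv(x))            # residual add, then activation
```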
Because the levels of feature information extracted by the detail branch and the semantic branch differ, the features of the two branches are merged with a bilateral guided aggregation layer, as shown in fig. 6. This part downsamples the features extracted by the detail branch so that the feature matrix has the same size as that of the semantic branch and the two feature matrices can be multiplied. At the same time, the output feature matrix of the semantic branch is upsampled and multiplied with the detail branch output, and the two fused matrices are then added to realize bilateral guided aggregation.
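A simplified PaddlePaddle sketch of this aggregation, following the description above rather than reproducing every convolution of the original BiSeNet V2 layer, might look like this; the 4× resolution gap between the branches and the sigmoid gating are assumptions taken from the published network.

```python
import paddle.nn as nn
import paddle.nn.functional as F

class BilateralGuidedAggregation(nn.Layer):
    """Simplified fusion of detail (high-resolution) and semantic (low-resolution) features."""
    def __init__(self, channels):
        super().__init__()
        self.out_conv = nn.Sequential(
            nn.Conv2D(channels, channels, 3, padding=1, bias_attr=False),
            nn.BatchNorm2D(channels), nn.ReLU())

    def forward(self, detail, semantic):
        d_down = F.avg_pool2d(detail, kernel_size=4, stride=4)                 # match semantic size
        s_up = F.interpolate(semantic, size=detail.shape[2:], mode='bilinear')
        low = d_down * F.sigmoid(semantic)             # multiply at the semantic resolution
        high = detail * F.sigmoid(s_up)                # multiply at the detail resolution
        fused = high + F.interpolate(low, size=detail.shape[2:], mode='bilinear')
        return self.out_conv(fused)
```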
The enhancement part is used to improve segmentation accuracy: it strengthens feature expression during the training stage and is discarded in the inference stage, so the computational complexity of inference is reduced. Such segmentation heads can be inserted into any part of the semantic branch and, similarly to ResNet, optimize the network through auxiliary loss functions, so the enhancement part is a training strategy.
The deep learning framework used for the network model in this application is PaddlePaddle. The hardware environment is the AI Studio platform provided by Baidu PaddlePaddle: an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40 GHz, 16 GB RAM, and an NVIDIA Tesla V100 GPU with 16 GB video memory. The software environment is CUDA Toolkit 10.1.243, cuDNN 7.6, Ubuntu 7.5.0, Python 3.7.4 and PaddlePaddle 2.3.1.
In fields such as deep learning and big data processing, the Python programming language is often chosen for research. It has the advantages of readability, ease of maintenance and a wide range of applications, and many powerful libraries can be downloaded and called, such as OpenCV and Scikit-Image in image processing and Scikit-learn and PyTorch in deep learning, so researchers can reduce programming time and concentrate on the problem itself.
First the dataset is processed. The original lattice type slope protection images acquired on the slope have a size of 837 × 765. All lattices in the original images shot on site are annotated with the deep learning image annotation software Labelme, as shown in fig. 7, where the red part marks the annotated lattice pixels. The original images and the annotation images are then synchronously and repeatedly cropped into 350 sub-images of different sizes, forming the dataset. The dataset is divided into a training set, a test set and a validation set in the ratio 6:2:2. The training set is used to train the model: each time the network iterates over a training picture, the model parameters are updated once, so that the network fits the data more reasonably. The validation set is used to verify model convergence and to select network model parameters. Convergence is checked by running the validation set through the network after each pass over the training data and observing the result. In general, the training accuracy of the model should gradually stabilize; if the training accuracy is stable while the validation accuracy decreases as the iterations continue, overfitting has to be considered and the number of iterations reduced. Network model parameters are selected by comparing the performance on the validation set of network models with different hyperparameters and choosing the best-performing ones. The test set is used to estimate the generalization error of the network: since it is impossible to collect pictures of all practical scenes, the error of the network on the test set is used to approximate the generalization error.
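The 6:2:2 split of the 350 cropped sub-images can be done with a few lines of Python; the shuffling seed is an assumption.

```python
import random

def split_dataset(sample_paths, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle the cropped sub-image paths and split them 6:2:2 into
    training, test and validation sets (the seed value is an assumption)."""
    paths = list(sample_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * ratios[0])
    n_test = int(len(paths) * ratios[1])
    train = paths[:n_train]
    test = paths[n_train:n_train + n_test]
    val = paths[n_train + n_test:]
    return train, test, val

# With the 350 sub-images this gives 210 for training, 70 for testing and 70 for validation.
```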
After the dataset processing is completed, training is performed with the BiSeNet V2 model; the learning rate is set to 0.01 and the number of iterations to 1000, with Momentum optimization and the cross entropy loss function (CrossEntropyLoss). The loss and accuracy curves obtained from model training are shown in fig. 8.
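A hedged sketch of this optimisation setup in PaddlePaddle is given below; `model` and `train_loader` are placeholders for the BiSeNet V2 network and the cropped-dataset loader, and the momentum coefficient of 0.9 is an assumption not stated in the text.

```python
import paddle
import paddle.nn as nn

def train(model, train_loader, iterations=1000, lr=0.01):
    """Run 1000 training iterations with Momentum optimization and per-pixel cross entropy."""
    optimizer = paddle.optimizer.Momentum(learning_rate=lr, momentum=0.9,
                                          parameters=model.parameters())
    loss_fn = nn.CrossEntropyLoss(axis=1)        # class axis of (N, C, H, W) logits
    model.train()
    step = 0
    while step < iterations:
        for images, labels in train_loader:      # labels: (N, H, W) integer class map
            logits = model(images)
            loss = loss_fn(logits, labels)
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            step += 1
            if step >= iterations:
                break
```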
As can be seen from the loss and accuracy curves, the loss of the model on the dataset decreases steadily as the number of iterations increases and the accuracy keeps improving; the model trains well on this dataset and can segment the lattices in the pictures well.
For a picture to be predicted, the project performs several predictions in a manner similar to a sliding window. The lattice elements in the picture are small targets, so the picture to be predicted is cut into many small pictures whose size matches the sub-pictures cropped from the dataset; compared with predicting the whole large picture directly, this gives more accurate results. Using the trained model to segment the lattices in pictures of the project site gives the result shown in fig. 9: the trained model identifies almost all lattices in the picture and meets the requirements for lattice image segmentation.
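Tile-by-tile prediction over a large image can be sketched as follows; the tile size is an assumption chosen to match the cropped training sub-images, and `model.predict` stands for whatever inference call returns a per-pixel lattice/background map for a single tile.

```python
import numpy as np

def predict_tiled(model, image, tile=256):
    """Segment a large image tile by tile, writing each tile's label map into a full-size mask."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            mask[y:y1, x:x1] = model.predict(image[y:y1, x:x1])
    return mask
```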
According to the application, the displacement of the lattice beam is calculated with an image difference method. When a lattice beam deforms, a strip-shaped deviation appears after the picture predicted by the network model is differenced with the undeformed picture, and the maximum width of this strip is taken as the pixel displacement value. To convert it into an actual distance, the camera is calibrated and the ratio of pixels to actual distance is determined. The ratio of the pixel length of each lattice beam edge on the picture to its actual length is calculated to obtain the scale at different positions of the picture, and the scale at the corresponding position is looked up with the pixel coordinates of the difference image, so that the actual distance is obtained.
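A minimal sketch of the differencing step is shown below. Measuring the strip width column by column and using a single millimetre-per-pixel scale are simplifications of the position-dependent scale lookup described above.

```python
import cv2
import numpy as np

def displacement_mm(pred_mask, ref_mask, scale_mm_per_px):
    """Difference the current and reference lattice masks, take the widest
    strip-shaped deviation in pixels and convert it with a calibration scale."""
    diff = cv2.absdiff(pred_mask, ref_mask) > 0     # binary map of deviating pixels
    if not diff.any():
        return 0.0
    widths = diff.sum(axis=0)                       # deviation thickness per image column
    return int(widths.max()) * scale_mm_per_px

# e.g. displacement_mm(pred, baseline, scale_mm_per_px=2.5)  # 2.5 mm/px is a hypothetical scale
```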
Because deformation of the lattice beams is difficult to observe in practice, two lattice beams in the picture are deformed manually with computer software to simulate a photographed deformed picture. Predicted pictures of the original picture and of the deformed picture are obtained with the model, and the two are then differenced. The widest part of the difference image is 3 pixels, which agrees with the manually changed pixels, demonstrating that the scheme is feasible.
The above examples merely illustrate specific embodiments of the application, which are described in more detail and are not to be construed as limiting the scope of the application. It should be noted that it is possible for a person skilled in the art to make several variants and modifications without departing from the technical idea of the application, which fall within the scope of protection of the application.
Claims (10)
1. A lattice type slope protection monitoring system based on deep learning, characterized by comprising: an imaging module, an auxiliary lighting module, a movement control module and a computer; the modules are matched and connected with one another, and the imaging module, the auxiliary lighting module and the movement control module are all electrically connected with the computer; the imaging module is used for acquiring image data, the auxiliary lighting module is used for illumination, the movement control module is used for controlling the movement of the imaging module and the auxiliary lighting module, and the computer is used for receiving the image data acquired by the imaging module and for feedback, adjustment and control of the movement control module.
2. The deep learning based lattice slope monitoring system of claim 1, wherein the imaging module comprises a zoom lens, an area-array CMOS camera and an image acquisition card, which are fitted together and electrically connected with one another.
3. The deep learning based lattice slope monitoring system of claim 1, wherein the auxiliary lighting module comprises a light source and a light source controller, which are electrically connected with each other.
4. The deep learning based lattice slope monitoring system of claim 1, wherein the movement control module comprises a four-way pan-tilt head and a pan-tilt controller, which are electrically connected with each other.
5. The lattice type slope protection monitoring method based on deep learning is characterized by comprising the following steps of:
s1, constructing a prediction model based on a sample image dataset, a deep learning network and a semantic segmentation model;
s2, acquiring an image of the lattice type slope protection by using a lattice type slope protection monitoring system;
s3, preprocessing the image acquired in the S2 to obtain a preprocessed image;
s4, carrying out predictive analysis on the preprocessed image in the S3 through the predictive model in the S1 to obtain a predictive analysis result;
and S5, calculating the displacement or deformation of the lattice beam according to the prediction analysis result in the step S4.
6. The lattice slope protection monitoring method based on deep learning of claim 5, wherein the specific process of S1 is as follows:
s101, acquiring an initial sample image dataset containing a lattice;
s102, labeling all lattices in an initial sample image dataset by using deep learning image label labeling software to obtain a first sample image dataset;
s103, performing synchronous repeated cutting on the initial sample image data set and the first sample image data set to obtain a second sample image data set;
s104, dividing the second sample image data set into a training set, a testing set and a verification set, wherein the training set is used for training a prediction model, the verification set is used for checking convergence conditions of the prediction model and selecting parameters of the prediction model, and the testing set is used for testing generalization errors of the prediction model;
s105, training by using a semantic segmentation model after the S104 is completed, setting a learning rate and iteration times, and using a momentum optimization and cross entropy loss function.
7. The lattice slope protection monitoring method based on deep learning of claim 5, wherein the specific process of S2 is as follows:
the orientation of the imaging module is changed by controlling the pan-tilt head of the lattice type slope protection monitoring system, the imaging module shoots a lattice type slope protection image, and the shot lattice type slope protection image is returned.
8. The lattice slope protection monitoring method based on deep learning of claim 5, wherein the preprocessing in S3 specifically comprises: randomly rotating, mirroring, cropping, changing the brightness of and adding noise to the acquired image.
9. The lattice slope protection monitoring method based on deep learning of claim 5, wherein the specific process of S4 is as follows:
cutting the preprocessed image of S3 into a plurality of sub-images whose sizes correspond to the sub-images of the sample images in the prediction model, and importing the sub-images of the preprocessed image into the prediction model for prediction analysis to obtain a prediction analysis result.
10. The lattice slope protection monitoring method based on deep learning of claim 5, wherein the specific process of S5 is as follows:
calculating the displacement or deformation of the lattice beam by an image difference method: after the prediction analysis result is obtained in S4, if a lattice beam has been displaced or deformed, a strip-shaped deviation appears when the image predicted by the prediction model is differenced with the image without displacement or deformation; the maximum width of the strip-shaped deviation is taken as the pixel displacement or deformation value of the lattice beam and converted into an actual distance; the imaging module of the lattice slope protection monitoring system is calibrated to determine the ratio of pixels to actual distance, the ratio of the pixel length of each lattice beam on the image to its actual length is calculated to obtain the scale at different positions of the image, and the scale at the corresponding position is found with the pixel coordinates of the difference image, so that the actual distance is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310255224.6A CN116934852A (en) | 2023-03-16 | 2023-03-16 | Lattice type slope protection monitoring system and method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116934852A (en) | 2023-10-24
Family
ID=88381627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310255224.6A Pending CN116934852A (en) | 2023-03-16 | 2023-03-16 | Lattice type slope protection monitoring system and method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116934852A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117685881A (en) * | 2024-01-31 | 2024-03-12 | 成都建工第七建筑工程有限公司 | Sensing and detecting system for concrete structure entity position and size deviation |
CN117685881B (en) * | 2024-01-31 | 2024-06-04 | 成都建工第七建筑工程有限公司 | Sensing and detecting method for concrete structure entity position and size deviation |
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination