CN109444845B - Device and method for identifying coal-rock interface based on solid-state laser radar - Google Patents
- Publication number: CN109444845B
- Application number: CN201811138102.4A
- Authority: CN (China)
- Prior art keywords: feature map, convolution, layer, coal, convolution layer
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/30—Assessment of water resources
Abstract
The invention relates to a device and a method for identifying a coal-rock interface based on solid-state laser radar, belonging to the technical field of coal-rock identification. The device comprises a plurality of laser radar modules, a signal transmission module, a data storage module, a radar imaging module, an image fusion module and an image recognition module. The laser radar modules transmit radar signals to the same area of the coal-rock mine and obtain a plurality of groups of coal-rock mine data information for that area. The signal transmission module transmits the data information to the data storage module, which stores it. The radar imaging module images each group of data separately to obtain a plurality of coal-rock texture images. The image fusion module fuses these images into a single fused coal-rock texture image. The image recognition module normalizes the fused coal-rock texture image and recognizes it to obtain a coal-rock interface recognition result.
Description
Technical Field
The invention relates to the technical field of coal rock identification, in particular to a device and a method for identifying a coal rock interface based on solid-state laser radar.
Background
Coal-rock identification is the task of determining whether the detected object is coal or rock. In the coal production process, coal-rock identification technology can be widely applied to production links such as drum coal mining, tunneling, top-coal caving, and raw coal dressing and grinding. It is of great significance for reducing the number of workers on the mining face, lowering labor intensity, improving the working environment, and achieving safe, efficient and comprehensively mechanized coal mining.
Various coal-rock identification methods already exist, such as natural radiation detection, cutting-pick stress measurement, infrared detection, active power monitoring, vibration detection, acoustic detection and dust detection. However, because the geological conditions of coal seams are complex and variable, none of these methods is universally applicable; moreover, the harsh working-face environment and the need for real-time identification have prevented them from being widely applied to coal-rock identification.
Disclosure of Invention
In order to solve the problems of poor applicability and poor real-time performance of existing coal-rock identification methods, in one aspect the invention provides a device for identifying a coal-rock interface based on solid-state laser radar, which comprises: a plurality of laser radar modules, a signal transmission module, a data storage module, a radar imaging module, an image fusion module and an image recognition module;
the plurality of laser radar modules are used for transmitting radar signals to the same area of the coal-rock mine and obtaining a plurality of groups of coal-rock mine data information for that area, and each laser radar module can transmit radar signals to the coal-rock mine and obtain coal-rock mine data information from the signals reflected back by the coal-rock mine;
the signal transmission module is used for transmitting the multiple groups of coal rock mine data information to the data storage module;
the data storage module is used for storing the multiple groups of coal rock mine data information transmitted by the signal transmission module;
the radar imaging module is used for calling a plurality of groups of coal rock ore data information stored by the data storage module, and respectively imaging each group of coal rock ore data information to obtain a coal rock texture image corresponding to each group of coal rock ore data information, namely a plurality of coal rock texture images of the same area of the coal rock ore;
the image fusion module is used for fusing the plurality of coal rock texture images to obtain a fused coal rock texture image; and the image recognition module is used for carrying out normalization processing on the fused coal rock texture image, recognizing the normalized image and obtaining a coal rock interface recognition result.
Each laser radar module comprises a radar signal transmitting unit, a radar reflection signal receiving unit and a radar signal A/D conversion unit;
the radar signal transmitting unit is used for transmitting radar signals to the coal rock mine;
the radar reflected signal receiving unit is used for receiving reflected signals reflected by the coal rock mine;
and the radar signal A/D conversion unit is used for carrying out data conversion on the reflected signals to obtain coal rock mine data information.
The laser radar module is a solid-state laser radar.
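The patent does not describe the internal processing of the radar signal A/D conversion unit; purely as an illustration, the range a lidar recovers from a digitised echo follows the standard time-of-flight relation, sketched below in Python (the function name and sample delay values are hypothetical and not taken from the patent).
```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def echo_delays_to_ranges(delays_s: np.ndarray) -> np.ndarray:
    """Convert digitised round-trip echo delays (seconds) into one-way ranges (metres)
    using the time-of-flight relation: range = c * delay / 2."""
    return C * delays_s / 2.0

# Example: delays measured by a hypothetical A/D conversion unit for three echoes
delays = np.array([13.3e-9, 13.5e-9, 14.1e-9])   # seconds
ranges = echo_delays_to_ranges(delays)           # approx. 1.99 m, 2.02 m, 2.11 m
```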
Carrying out normalization processing on the existing coal-rock texture image, constructing a full-convolution neural network model, and training and testing the full-convolution neural network model by utilizing the normalized existing coal-rock texture image to obtain a trained full-convolution neural network model; loading the trained full convolution neural network model into the image recognition module;
and the image recognition module performs normalization processing on the fused coal rock texture image, inputs the normalized image into a trained full-convolution neural network model, and outputs a coal rock interface recognition result.
The trained full convolution neural network model has a depth of five layers, namely a first layer, a second layer, a third layer, a fourth layer and a fifth layer;
the first layer consists of a convolution layer C1, a convolution layer C2 and a pooling layer P1, wherein the convolution layer C1 and the convolution layer C2 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C1 takes the normalized image, whose pixel size is 320×320×1, as input and, after the normalized image is processed by all convolution kernels and the ReLU activation function of the convolution layer C1, outputs a feature map A1 with a pixel size of 318×318×64; the convolution layer C2 takes the feature map A1 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C2, outputs a feature map A2 with a pixel size of 316×316×64; the pooling layer P1 takes the feature map A2 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A3 with a pixel size of 158×158×64;
the second layer consists of a convolution layer C3, a convolution layer C4 and a pooling layer P2, wherein the convolution layer C3 and the convolution layer C4 each comprise 128 convolution kernels of size 2×2 and a ReLU activation function; the convolution layer C3 takes the feature map A3 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C3, outputs a feature map A4 with a pixel size of 156×156×128; the convolution layer C4 takes the feature map A4 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C4, outputs a feature map A5 with a pixel size of 154×154×128; the pooling layer P2 takes the feature map A5 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A6 with a pixel size of 77×77×128;
the third layer consists of a convolution layer C5 and a convolution layer C6, each comprising 256 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C5 takes the feature map A6 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C5, outputs a feature map A7 with a pixel size of 75×75×256; the convolution layer C6 takes the feature map A7 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C6, outputs a feature map A8 with a pixel size of 73×73×256;
the fourth layer consists of an up-sampling layer U1, a convolution layer C7 and a convolution layer C8, wherein the up-sampling layer U1 comprises 256 convolution kernels of size 2×2, and the convolution layer C7 and the convolution layer C8 each comprise 128 convolution kernels of size 3×3 and a ReLU activation function; the up-sampling layer U1 takes the feature map A8 as input and, after deconvolution by all convolution kernels of the up-sampling layer U1, outputs a feature map A9 with a pixel size of 146×146×256; the convolution layer C7 takes the feature map A9 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C7, outputs a feature map A10 with a pixel size of 144×144×128; the convolution layer C8 takes the feature map A10 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C8, outputs a feature map A11 with a pixel size of 142×142×128;
the fifth layer consists of an up-sampling layer U2, a convolution layer C9, a convolution layer C10 and a convolution layer C11, wherein the up-sampling layer U2 comprises 128 convolution kernels of size 2×2, the convolution layer C9 and the convolution layer C10 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function, and the convolution layer C11 comprises 2 convolution kernels of size 1×1 and a ReLU activation function; the up-sampling layer U2 takes the feature map A11 as input and, after deconvolution by all convolution kernels of the up-sampling layer U2, outputs a feature map A12 with a pixel size of 284×284×128; the convolution layer C9 takes the feature map A12 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C9, outputs a feature map A13 with a pixel size of 282×282×64; the convolution layer C10 takes the feature map A13 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C10, outputs a feature map A14 with a pixel size of 280×280×64; the convolution layer C11 takes the feature map A14 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C11, outputs a feature map A15 with a pixel size of 280×280×2; the output features of the feature map A15 comprise the coal interfaces and rock interfaces of the coal-rock mine.
The device also comprises a power supply module, which is used for supplying power to the laser radar modules, the signal transmission module, the data storage module, the radar imaging module, the image fusion module and the image recognition module.
In another aspect, the invention provides a method for identifying a coal-rock interface based on solid-state laser radar, the method comprising:
transmitting radar signals to the same area of the coal rock mine by adopting a plurality of laser radar modules to obtain a plurality of groups of coal rock mine data information, wherein each laser radar module can transmit radar signals to the coal rock mine and obtain the coal rock mine data information according to the reflected signals reflected by the coal rock mine;
storing the multiple groups of coal rock ore data information;
retrieving stored multiple groups of coal rock ore data information, and respectively imaging each group of coal rock ore data information to obtain a coal rock texture image corresponding to each group of coal rock ore data information, namely multiple coal rock texture images of the same area of the coal rock ore;
fusing the plurality of coal rock texture images to obtain a fused coal rock texture image;
and carrying out normalization processing on the fused coal rock texture image, and identifying the normalized image to obtain a coal rock interface identification result.
Each laser radar module can transmit radar signals to the coal-rock mine and obtain coal-rock mine data information according to the signals reflected back by the coal-rock mine, which comprises:
transmitting radar signals to the coal rock ore through a radar signal transmitting unit of the laser radar module;
receiving a reflected signal reflected by the coal rock ore through a radar reflected signal receiving unit of the laser radar module; and performing data conversion on the reflected signals through a radar signal A/D conversion unit of the laser radar module to obtain coal rock data information.
The normalization of the fused coal-rock texture image and the identification of the normalized image comprise the following steps:
carrying out normalization processing on the existing coal-rock texture image, constructing a full-convolution neural network model, and training and testing the full-convolution neural network model by utilizing the normalized existing coal-rock texture image to obtain a trained full-convolution neural network model;
and carrying out normalization processing on the fused coal-rock texture image, inputting the normalized image into a trained full-convolution neural network model, and outputting a coal-rock interface recognition result by the trained full-convolution neural network model.
The trained full convolution neural network model has a depth of five layers, namely a first layer, a second layer, a third layer, a fourth layer and a fifth layer;
the first layer consists of a convolution layer C1, a convolution layer C2 and a pooling layer P1, wherein the convolution layer C1 and the convolution layer C2 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C1 takes the normalized image, whose pixel size is 320×320×1, as input and, after the normalized image is processed by all convolution kernels and the ReLU activation function of the convolution layer C1, outputs a feature map A1 with a pixel size of 318×318×64; the convolution layer C2 takes the feature map A1 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C2, outputs a feature map A2 with a pixel size of 316×316×64; the pooling layer P1 takes the feature map A2 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A3 with a pixel size of 158×158×64;
the second layer consists of a convolution layer C3, a convolution layer C4 and a pooling layer P2, wherein the convolution layer C3 and the convolution layer C4 each comprise 128 convolution kernels of size 2×2 and a ReLU activation function; the convolution layer C3 takes the feature map A3 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C3, outputs a feature map A4 with a pixel size of 156×156×128; the convolution layer C4 takes the feature map A4 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C4, outputs a feature map A5 with a pixel size of 154×154×128; the pooling layer P2 takes the feature map A5 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A6 with a pixel size of 77×77×128;
the third layer consists of a convolution layer C5 and a convolution layer C6, each comprising 256 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C5 takes the feature map A6 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C5, outputs a feature map A7 with a pixel size of 75×75×256; the convolution layer C6 takes the feature map A7 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C6, outputs a feature map A8 with a pixel size of 73×73×256;
the fourth layer consists of an up-sampling layer U1, a convolution layer C7 and a convolution layer C8, wherein the up-sampling layer U1 comprises 256 convolution kernels of size 2×2, and the convolution layer C7 and the convolution layer C8 each comprise 128 convolution kernels of size 3×3 and a ReLU activation function; the up-sampling layer U1 takes the feature map A8 as input and, after deconvolution by all convolution kernels of the up-sampling layer U1, outputs a feature map A9 with a pixel size of 146×146×256; the convolution layer C7 takes the feature map A9 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C7, outputs a feature map A10 with a pixel size of 144×144×128; the convolution layer C8 takes the feature map A10 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C8, outputs a feature map A11 with a pixel size of 142×142×128;
the fifth layer consists of an up-sampling layer U2, a convolution layer C9, a convolution layer C10 and a convolution layer C11, wherein the up-sampling layer U2 comprises 128 convolution kernels of size 2×2, the convolution layer C9 and the convolution layer C10 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function, and the convolution layer C11 comprises 2 convolution kernels of size 1×1 and a ReLU activation function; the up-sampling layer U2 takes the feature map A11 as input and, after deconvolution by all convolution kernels of the up-sampling layer U2, outputs a feature map A12 with a pixel size of 284×284×128; the convolution layer C9 takes the feature map A12 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C9, outputs a feature map A13 with a pixel size of 282×282×64; the convolution layer C10 takes the feature map A13 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C10, outputs a feature map A14 with a pixel size of 280×280×64; the convolution layer C11 takes the feature map A14 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C11, outputs a feature map A15 with a pixel size of 280×280×2; the output features of the feature map A15 comprise the coal interfaces and rock interfaces of the coal-rock mine.
Through the technical scheme, compared with the prior art, the invention has the following beneficial effects:
According to the invention, radar signals are used to detect the coal-rock interface: the detection precision can reach the millimeter level, the relative depth of the uneven coal-rock surface can be detected, the detection process does not depend on environmental radiation, and the anti-interference capability is strong. A plurality of laser radar modules transmit radar signals to the same area of the coal-rock mine to form a plurality of coal-rock texture images of that area; fusing these images improves the imaging accuracy of the coal-rock mine, and recognizing the coal interface and rock interface in the fused coal-rock texture image with a full convolution neural network model makes the recognition result more accurate. The invention has strong anti-interference capability in mines with complex environments, can accurately identify coal and rock, is simple to operate, has good applicability, and can identify the distribution of coal and rock in real time.
Drawings
The invention will be further described with reference to the drawings and examples.
FIG. 1 is a schematic structural diagram of an apparatus for identifying a coal-rock interface based on solid-state laser radar;
FIG. 2 is a schematic diagram of the arrangement of a lidar module of the present invention;
FIG. 3 is a block diagram of a full convolutional neural network of the present invention;
fig. 4 is a flow chart of a method of identifying a coal-rock interface based on solid-state laser radar of the present invention.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic representations which merely illustrate the basic structure of the invention and therefore show only the structures which are relevant to the invention.
Example 1
In order to solve the problems of poor applicability and poor real-time performance of existing coal-rock identification methods, as shown in fig. 1, the invention provides a device for identifying a coal-rock interface based on solid-state laser radar, comprising: a plurality of laser radar modules, a signal transmission module, a data storage module, a radar imaging module, an image fusion module and an image recognition module; the plurality of laser radar modules are used for transmitting radar signals to the same area of the coal-rock mine and obtaining a plurality of groups of coal-rock mine data information for that area, and each laser radar module can transmit radar signals to the coal-rock mine and obtain coal-rock mine data information from the signals reflected back by the coal-rock mine;
In the invention, each laser radar module comprises a radar signal transmitting unit, a radar reflected signal receiving unit and a radar signal A/D conversion unit;
the radar signal transmitting unit is used for transmitting radar signals to the coal rock mine;
the radar reflected signal receiving unit is used for receiving reflected signals reflected by the coal rock mine;
the radar signal A/D conversion unit is used for carrying out data conversion on the reflected signals to obtain coal rock mine data information.
The radar signal transmitting unit of each laser radar module transmits radar signals to the same area on the surface of the coal-rock mine. The transmitted radar signals are reflected both at the surface of the coal-rock mine and after penetrating into it. The radar reflected-signal receiving unit of each laser radar module receives the reflections of the radar signals transmitted by that module's transmitting unit and performs data conversion to obtain a group of coal-rock mine data information, so that a plurality of laser radar modules obtain a plurality of groups of coal-rock mine data information for the same area of the coal-rock mine.
As shown in fig. 2, the laser radar module in the invention may be a solid-state laser radar, for example a model CE-30 solid-state laser radar. The solid-state laser radars are arranged in a row in front of the surface of the coal-rock mine and sweep radar signals across that surface by moving linearly. For example, the surface may be divided into a number of adjoining areas, namely area 1, area 2 and area 3 up to area N, and each laser radar module transmits radar signals to the surface in the order area 1, area 2, area 3, ..., area N, so that several groups of coal-rock mine data information are obtained for each area. Only 3 areas are shown in fig. 2; in the situation shown there, area 2 is being scanned by the 3 laser radar modules.
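As an illustration of this acquisition scheme, the sketch below (Python; the function `acquire_region`, the scan resolution and the region count are hypothetical, since the patent does not specify a data format) shows several laser radar modules each scanning the same sequence of regions, so that every region ends up with several groups of data.
```python
import numpy as np

N_MODULES = 3        # lidar modules aimed at the same region (as in fig. 2)
N_REGIONS = 8        # hypothetical number of regions on the coal-rock surface
H, W = 320, 320      # hypothetical per-region scan resolution

def acquire_region(module_id: int, region_id: int) -> np.ndarray:
    """Placeholder for one solid-state laser radar module scanning one region.

    A real implementation would drive the sensor (e.g. a CE-30) and return a
    depth raster; random data stands in for it here.
    """
    rng = np.random.default_rng(module_id * 1000 + region_id)
    return rng.normal(loc=2.0, scale=0.01, size=(H, W))   # ranges in metres

# Every region is scanned by every module in the order region 1, 2, ..., N,
# giving several groups of data for the same area to be imaged and fused.
scans = {
    region: [acquire_region(m, region) for m in range(N_MODULES)]
    for region in range(N_REGIONS)
}
```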
The signal transmission module is used for transmitting the data information of the plurality of groups of coal and rock ores to the data storage module;
the signal transmission module can transmit data through Ethernet.
The data storage module is used for storing the multiple groups of coal rock mine data information transmitted by the signal transmission module;
the radar imaging module is used for calling a plurality of groups of coal rock ore data information stored by the data storage module, and respectively imaging each group of coal rock ore data information to obtain a coal rock texture image corresponding to each group of coal rock ore data information, namely a plurality of coal rock texture images of the same area of the coal rock ore;
the image fusion module is used for fusing the plurality of coal rock texture images to obtain a fused coal rock texture image;
The image recognition module is used for normalizing the fused coal-rock texture image and recognizing the normalized image to obtain a coal-rock interface recognition result, i.e. the coal interface and the rock interface are recognized so as to obtain the distribution of coal and rock in the coal-rock mine; the several groups of coal-rock texture images of each area of the coal-rock mine are imaged, fused and recognized in this way to obtain the coal-rock recognition result for every area.
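The patent does not disclose a particular fusion algorithm or normalization formula; the sketch below assumes a simple pixel-wise mean for fusion and min-max scaling to [0, 1] for normalization, producing the 320×320×1 input expected by the recognition model.
```python
import numpy as np

def fuse_texture_images(images: list[np.ndarray]) -> np.ndarray:
    """Fuse several coal-rock texture images of the same region.

    The patent does not fix a fusion rule; a pixel-wise mean is used here
    purely as a stand-in for the image fusion module.
    """
    return np.stack(images, axis=0).mean(axis=0)

def normalize(image: np.ndarray) -> np.ndarray:
    """Min-max scale to [0, 1] and add a channel axis -> shape (320, 320, 1)."""
    lo, hi = image.min(), image.max()
    scaled = (image - lo) / (hi - lo + 1e-8)
    return scaled[..., np.newaxis].astype("float32")

# Using the hypothetical scans from the acquisition sketch above:
# fused = fuse_texture_images(scans[2])
# x = normalize(fused)   # ready for the full convolution neural network
```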
In the invention, the data storage module, the radar imaging module, the image fusion module and the image recognition module can be integrated in an upper computer, which may be a PC, while the solid-state laser radars serve as lower computers.
In the invention, the coal-rock texture image can be identified by utilizing the full convolution neural network model, specifically:
carrying out normalization processing on the existing coal-rock texture image, constructing a full convolution neural network model, and training and testing the full convolution neural network model by utilizing the normalized existing coal-rock texture image to obtain a trained full convolution neural network model;
Specifically, existing coal-rock texture images are acquired and normalized, then divided into two parts, one part used as training data and the other as test data. The full convolution neural network model is trained with the training data and tested with the test data. If the error between the test result and the known coal interface and rock interface is small, the test result meets the requirement; if it does not, additional training data are used to further train the full convolution neural network model until the test result meets the requirement, yielding the trained full convolution neural network model.
The trained full convolution neural network model is loaded into the image recognition module. The image recognition module normalizes the coal-rock texture image fused by the image fusion module and inputs the normalized image into the trained full convolution neural network model, which outputs the coal-rock interface recognition result, comprising the coal interface and the rock interface.
Specifically, in the invention, the trained full convolution neural network model has a depth of five layers, namely a first layer, a second layer, a third layer, a fourth layer and a fifth layer, as shown in fig. 3; the first layer consists of a convolution layer C1, a convolution layer C2 and a pooling layer P1, wherein the convolution layer C1 and the convolution layer C2 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C1 takes the normalized image, whose pixel size is 320×320×1, as input and, after the normalized image is processed by all convolution kernels and the ReLU activation function of the convolution layer C1, outputs a feature map A1 with a pixel size of 318×318×64; the convolution layer C2 takes the feature map A1 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C2, outputs a feature map A2 with a pixel size of 316×316×64; the pooling layer P1 takes the feature map A2 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A3 with a pixel size of 158×158×64;
the second layer consists of a convolution layer C3, a convolution layer C4 and a pooling layer P2, wherein the convolution layer C3 and the convolution layer C4 each comprise 128 convolution kernels of size 2×2 and a ReLU activation function; the convolution layer C3 takes the feature map A3 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C3, outputs a feature map A4 with a pixel size of 156×156×128; the convolution layer C4 takes the feature map A4 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C4, outputs a feature map A5 with a pixel size of 154×154×128; the pooling layer P2 takes the feature map A5 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A6 with a pixel size of 77×77×128;
the third layer consists of a convolution layer C5 and a convolution layer C6, each comprising 256 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C5 takes the feature map A6 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C5, outputs a feature map A7 with a pixel size of 75×75×256; the convolution layer C6 takes the feature map A7 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C6, outputs a feature map A8 with a pixel size of 73×73×256;
the fourth layer consists of an up-sampling layer U1, a convolution layer C7 and a convolution layer C8, wherein the up-sampling layer U1 comprises 256 convolution kernels of size 2×2, and the convolution layer C7 and the convolution layer C8 each comprise 128 convolution kernels of size 3×3 and a ReLU activation function; the up-sampling layer U1 takes the feature map A8 as input and, after deconvolution by all convolution kernels of the up-sampling layer U1, outputs a feature map A9 with a pixel size of 146×146×256; the convolution layer C7 takes the feature map A9 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C7, outputs a feature map A10 with a pixel size of 144×144×128; the convolution layer C8 takes the feature map A10 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C8, outputs a feature map A11 with a pixel size of 142×142×128;
the fifth layer consists of an up-sampling layer U2, a convolution layer C9, a convolution layer C10 and a convolution layer C11, wherein the up-sampling layer U2 comprises 128 convolution kernels of size 2×2, the convolution layer C9 and the convolution layer C10 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function, and the convolution layer C11 comprises 2 convolution kernels of size 1×1 and a ReLU activation function; the up-sampling layer U2 takes the feature map A11 as input and, after deconvolution by all convolution kernels of the up-sampling layer U2, outputs a feature map A12 with a pixel size of 284×284×128; the convolution layer C9 takes the feature map A12 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C9, outputs a feature map A13 with a pixel size of 282×282×64; the convolution layer C10 takes the feature map A13 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C10, outputs a feature map A14 with a pixel size of 280×280×64; the convolution layer C11 takes the feature map A14 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C11, outputs a feature map A15 with a pixel size of 280×280×2; the output features of the feature map A15 comprise the coal interfaces and rock interfaces of the coal-rock mine.
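As a sketch only, the five-layer network described above can be written in tf.keras as follows. All padding is assumed to be 'valid' and the up-sampling layers are implemented as stride-2 transposed convolutions, which reproduces every stated feature-map size; note that the stated sizes of A4 and A5 (156×156 and 154×154) correspond to 3×3 kernels in the second layer, so 3×3 is used here even though the text lists 2×2. None of these implementation choices is specified as such by the patent.
```python
import tensorflow as tf
from tensorflow.keras import layers

def build_fcn(input_shape=(320, 320, 1)) -> tf.keras.Model:
    """Sketch of the five-layer full convolution network described above."""
    inp = layers.Input(shape=input_shape)

    # First layer: C1, C2, P1 (valid padding assumed throughout)
    x = layers.Conv2D(64, 3, activation="relu")(inp)      # 318x318x64
    x = layers.Conv2D(64, 3, activation="relu")(x)        # 316x316x64
    x = layers.MaxPooling2D(2)(x)                         # 158x158x64

    # Second layer: C3, C4, P2 (3x3 kernels reproduce the stated sizes)
    x = layers.Conv2D(128, 3, activation="relu")(x)       # 156x156x128
    x = layers.Conv2D(128, 3, activation="relu")(x)       # 154x154x128
    x = layers.MaxPooling2D(2)(x)                         # 77x77x128

    # Third layer: C5, C6
    x = layers.Conv2D(256, 3, activation="relu")(x)       # 75x75x256
    x = layers.Conv2D(256, 3, activation="relu")(x)       # 73x73x256

    # Fourth layer: U1 (deconvolution), C7, C8
    x = layers.Conv2DTranspose(256, 2, strides=2)(x)      # 146x146x256
    x = layers.Conv2D(128, 3, activation="relu")(x)       # 144x144x128
    x = layers.Conv2D(128, 3, activation="relu")(x)       # 142x142x128

    # Fifth layer: U2 (deconvolution), C9, C10, C11
    x = layers.Conv2DTranspose(128, 2, strides=2)(x)      # 284x284x128
    x = layers.Conv2D(64, 3, activation="relu")(x)        # 282x282x64
    x = layers.Conv2D(64, 3, activation="relu")(x)        # 280x280x64
    out = layers.Conv2D(2, 1, activation="relu")(x)       # 280x280x2

    return tf.keras.Model(inp, out, name="coal_rock_fcn")

# build_fcn().summary() prints the layer chain and the feature-map sizes above.
```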
The device of the invention can also comprise a power supply module for supplying power to the laser radar modules, the signal transmission module, the data storage module, the radar imaging module, the image fusion module and the image recognition module.
According to the device, radar signals are used to detect the coal-rock interface: the detection precision can reach the millimeter level, the relative depth of the uneven coal-rock surface can be detected, the detection process does not depend on environmental radiation, and the anti-interference capability is strong. A plurality of laser radar modules transmit radar signals to the same area of the coal-rock mine to form a plurality of coal-rock texture images of that area; fusing these images improves the imaging accuracy of the coal-rock mine, and recognizing the coal interface and rock interface with a full convolution neural network model makes the recognition result more accurate. The device has strong anti-interference capability in mines with complex environments, can accurately identify coal and rock, is simple to operate, has good applicability, and can identify the distribution of coal and rock in real time.
Example 2
The invention provides a method for identifying a coal-rock interface based on solid-state laser radar, which comprises the following steps of:
101. transmitting radar signals to the same area of the coal rock mine by adopting a plurality of laser radar modules to obtain a plurality of groups of coal rock mine data information, wherein each laser radar module can transmit radar signals to the coal rock mine and obtain the coal rock mine data information according to the reflected signals reflected by the coal rock mine;
Each laser radar module includes a radar signal transmitting unit, a radar reflected-signal receiving unit and a radar signal A/D conversion unit; each laser radar module can transmit radar signals to the coal-rock mine and obtain coal-rock mine data information from the signals reflected back by the coal-rock mine, which comprises:
transmitting radar signals to the coal rock ore through a radar signal transmitting unit of the laser radar module;
receiving a reflected signal reflected by the coal rock mine through a radar reflected signal receiving unit of the laser radar module;
and carrying out data conversion on the reflected signals through a radar signal A/D conversion unit of the laser radar module to obtain coal rock data information.
The radar signal transmitting unit of each laser radar module transmits radar signals to the same area on the surface of the coal-rock mine. The transmitted radar signals are reflected both at the surface of the coal-rock mine and after penetrating into it. The radar reflected-signal receiving unit of each laser radar module receives the reflections of the radar signals transmitted by that module's transmitting unit and performs data conversion to obtain a group of coal-rock mine data information, so that a plurality of laser radar modules obtain a plurality of groups of coal-rock mine data information for the same area of the coal-rock mine.
The laser radar module in the invention may be a solid-state laser radar. A number of solid-state laser radars are arranged in a row in front of the surface of the coal-rock mine and sweep radar signals across that surface by moving linearly. For example, as shown in fig. 2, the surface may be divided into a number of adjoining areas, namely area 1, area 2 and area 3 up to area N, and each laser radar module transmits radar signals to the surface in the order area 1, area 2, area 3, ..., area N, so that several groups of coal-rock mine data information are obtained for each area.
102. Storing a plurality of groups of coal rock ore data information;
103. retrieving stored multiple groups of coal rock ore data information, and respectively imaging each group of coal rock ore data information to obtain a coal rock texture image corresponding to each group of coal rock ore data information, namely multiple coal rock texture images of the same area of the coal rock ore;
104. fusing the plurality of coal rock texture images to obtain a fused coal rock texture image;
105. carrying out normalization processing on the fused coal-rock texture image and identifying the normalized image to obtain a coal-rock interface identification result, i.e. identifying the coal interface and the rock interface so as to obtain the distribution of coal and rock in the coal-rock mine; imaging, fusing and identifying the multiple groups of coal-rock texture images of each area of the coal-rock mine yields the coal-rock identification result for every area.
The normalization of the fused coal-rock texture image and the identification of the normalized image can be carried out with a full convolution neural network model, specifically:
carrying out normalization processing on the existing coal-rock texture image, constructing a full convolution neural network model, and training and testing the full convolution neural network model by utilizing the normalized existing coal-rock texture image to obtain a trained full convolution neural network model;
Specifically, existing coal-rock texture images are acquired and normalized, then divided into two parts, one part used as training data and the other as test data. The full convolution neural network model is trained with the training data and tested with the test data. If the error between the test result and the known coal interface and rock interface is small, the test result meets the requirement; if it does not, additional training data are used to further train the full convolution neural network model until the test result meets the requirement, yielding the trained full convolution neural network model.
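The patent does not fix a loss function, optimizer, label format or split ratio; the sketch below assumes pixel-wise coal/rock labels, an 80/20 train/test split, the Adam optimizer and a sparse categorical cross-entropy loss on the network's 2-channel output, and reuses the hypothetical `build_fcn` from the architecture sketch in Example 1.
```python
import numpy as np
import tensorflow as tf

# Hypothetical dataset: normalized texture images and pixel-wise labels
# (0 = coal, 1 = rock); the label size 280x280 matches the network output.
images = np.random.rand(40, 320, 320, 1).astype("float32")
labels = np.random.randint(0, 2, size=(40, 280, 280)).astype("int32")

# Divide into training data and test data (an 80/20 split is assumed here).
split = int(0.8 * len(images))
x_train, x_test = images[:split], images[split:]
y_train, y_test = labels[:split], labels[split:]

model = build_fcn()   # hypothetical builder from the architecture sketch in Example 1
model.compile(
    optimizer="adam",
    # The raw 2-channel output is treated as logits here (an assumption).
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=10, batch_size=4,
          validation_data=(x_test, y_test))
```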
The fused coal-rock texture image is then normalized and the normalized image is input into the trained full convolution neural network model, which outputs the coal-rock interface recognition result, comprising the coal interface and the rock interface.
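A minimal inference sketch follows, assuming the `normalize` helper from the fusion sketch and an arbitrary 0 = coal, 1 = rock label convention; the per-pixel argmax over the 2-channel output gives the coal/rock distribution of the scanned area.
```python
import numpy as np

def recognize_interface(model, fused_image: np.ndarray) -> np.ndarray:
    """Normalize a fused coal-rock texture image and predict a coal/rock mask.

    Returns a 280x280 array of class indices (0 = coal, 1 = rock, an assumed
    label convention); `normalize` is the helper from the fusion sketch.
    """
    x = normalize(fused_image)[np.newaxis, ...]   # shape (1, 320, 320, 1)
    scores = model.predict(x, verbose=0)          # shape (1, 280, 280, 2)
    return np.argmax(scores[0], axis=-1)

# Example (hypothetical): mask = recognize_interface(model, fuse_texture_images(scans[2]))
```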
Specifically, in the invention, the trained full convolution neural network model has a depth of five layers, namely a first layer, a second layer, a third layer, a fourth layer and a fifth layer, as shown in fig. 3, the structure diagram of the full convolution neural network of the invention; the first layer consists of a convolution layer C1, a convolution layer C2 and a pooling layer P1, wherein the convolution layer C1 and the convolution layer C2 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C1 takes the normalized image, whose pixel size is 320×320×1, as input and, after the normalized image is processed by all convolution kernels and the ReLU activation function of the convolution layer C1, outputs a feature map A1 with a pixel size of 318×318×64; the convolution layer C2 takes the feature map A1 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C2, outputs a feature map A2 with a pixel size of 316×316×64; the pooling layer P1 takes the feature map A2 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A3 with a pixel size of 158×158×64;
the second layer consists of a convolution layer C3, a convolution layer C4 and a pooling layer P2, wherein the convolution layer C3 and the convolution layer C4 each comprise 128 convolution kernels of size 2×2 and a ReLU activation function; the convolution layer C3 takes the feature map A3 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C3, outputs a feature map A4 with a pixel size of 156×156×128; the convolution layer C4 takes the feature map A4 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C4, outputs a feature map A5 with a pixel size of 154×154×128; the pooling layer P2 takes the feature map A5 as input, divides it into 2×2 blocks, takes the maximum value of each block and outputs a feature map A6 with a pixel size of 77×77×128;
the third layer consists of a convolution layer C5 and a convolution layer C6, each comprising 256 convolution kernels of size 3×3 and a ReLU activation function; the convolution layer C5 takes the feature map A6 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C5, outputs a feature map A7 with a pixel size of 75×75×256; the convolution layer C6 takes the feature map A7 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C6, outputs a feature map A8 with a pixel size of 73×73×256;
the fourth layer consists of an up-sampling layer U1, a convolution layer C7 and a convolution layer C8, wherein the up-sampling layer U1 comprises 256 convolution kernels of size 2×2, and the convolution layer C7 and the convolution layer C8 each comprise 128 convolution kernels of size 3×3 and a ReLU activation function; the up-sampling layer U1 takes the feature map A8 as input and, after deconvolution by all convolution kernels of the up-sampling layer U1, outputs a feature map A9 with a pixel size of 146×146×256; the convolution layer C7 takes the feature map A9 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C7, outputs a feature map A10 with a pixel size of 144×144×128; the convolution layer C8 takes the feature map A10 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C8, outputs a feature map A11 with a pixel size of 142×142×128;
the fifth layer consists of an up-sampling layer U2, a convolution layer C9, a convolution layer C10 and a convolution layer C11, wherein the up-sampling layer U2 comprises 128 convolution kernels of size 2×2, the convolution layer C9 and the convolution layer C10 each comprise 64 convolution kernels of size 3×3 and a ReLU activation function, and the convolution layer C11 comprises 2 convolution kernels of size 1×1 and a ReLU activation function; the up-sampling layer U2 takes the feature map A11 as input and, after deconvolution by all convolution kernels of the up-sampling layer U2, outputs a feature map A12 with a pixel size of 284×284×128; the convolution layer C9 takes the feature map A12 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C9, outputs a feature map A13 with a pixel size of 282×282×64; the convolution layer C10 takes the feature map A13 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C10, outputs a feature map A14 with a pixel size of 280×280×64; the convolution layer C11 takes the feature map A14 as input and, after processing by all convolution kernels and the ReLU activation function of the convolution layer C11, outputs a feature map A15 with a pixel size of 280×280×2; the output features of the feature map A15 comprise the coal interfaces and rock interfaces of the coal-rock mine.
According to the method, radar signals are used to detect the coal-rock interface: the detection precision can reach the millimeter level, the relative depth of the uneven coal-rock surface can be detected, the detection process does not depend on environmental radiation, and the anti-interference capability is strong. A plurality of laser radar modules transmit radar signals to the same area of the coal-rock mine to form a plurality of coal-rock texture images of that area; fusing these images improves the imaging accuracy of the coal-rock mine, and recognizing the coal interface and rock interface with a full convolution neural network model makes the recognition result more accurate. The method has strong anti-interference capability in mines with complex environments, can accurately identify coal and rock, is simple to operate, has good applicability, and can identify the distribution of coal and rock in real time.
Taking the above preferred embodiments of the present invention as an illustration, persons skilled in the relevant art can make various changes and modifications without departing from the technical idea of the present invention. The technical scope of the present invention is therefore not limited to the above description but must be determined according to the scope of the claims.
Claims (6)
1. A device for identifying a coal-rock interface based on solid-state laser radar, characterized in that: the device comprises a plurality of laser radar modules, a signal transmission module, a data storage module, a radar imaging module, an image fusion module and an image recognition module;
the plurality of laser radar modules are used for transmitting radar signals to the same area of the coal-rock mine and obtaining a plurality of groups of coal-rock mine data information for that area, and each laser radar module can transmit radar signals to the coal-rock mine and obtain coal-rock mine data information from the signals reflected back by the coal-rock mine;
the signal transmission module is used for transmitting the multiple groups of coal rock mine data information to the data storage module;
the data storage module is used for storing the multiple groups of coal rock mine data information transmitted by the signal transmission module;
the radar imaging module is used for calling a plurality of groups of coal rock ore data information stored by the data storage module, and respectively imaging each group of coal rock ore data information to obtain a coal rock texture image corresponding to each group of coal rock ore data information, namely a plurality of coal rock texture images of the same area of the coal rock ore;
the image fusion module is used for fusing the plurality of coal rock texture images to obtain a fused coal rock texture image;
The image recognition module is used for carrying out normalization processing on the fused coal rock texture image, recognizing the normalized image and obtaining a coal rock interface recognition result;
carrying out normalization processing on the existing coal-rock texture image, constructing a full-convolution neural network model, and training and testing the full-convolution neural network model by utilizing the normalized existing coal-rock texture image to obtain a trained full-convolution neural network model; loading the trained full convolution neural network model into the image recognition module;
the image recognition module normalizes the fused coal-rock texture image, inputs the normalized image into the trained fully convolutional neural network model, and the model outputs the coal-rock interface recognition result;
the depth of the trained fully convolutional neural network model is five layers, namely a first layer, a second layer, a third layer, a fourth layer and a fifth layer;
the first layer consists of a convolution layer C1, a convolution layer C2 and a pooling layer P1, wherein the convolution layer C1 and the convolution layer C2 each comprise 64 convolution kernels with a size of 3×3 and a ReLU activation function; the convolution layer C1 is used for inputting the normalized image, the pixel size of which is 320×320×1; after the normalized image is processed by all convolution kernels and the ReLU activation function of the convolution layer C1, a feature map A1 is output, and the pixel size of the feature map A1 is 318×318×64; the convolution layer C2 is used for inputting the feature map A1; after the feature map A1 is processed by all convolution kernels and the ReLU activation function of the convolution layer C2, a feature map A2 is output, and the pixel size of the feature map A2 is 316×316×64; the pooling layer P1 is used for inputting the feature map A2, dividing the feature map A2 into a plurality of 2×2 blocks and taking the maximum value of each block, after which a feature map A3 is output, and the pixel size of the feature map A3 is 158×158×64;
the second layer consists of a convolution layer C3, a convolution layer C4 and a pooling layer P2, wherein the convolution layer C3 and the convolution layer C4 each comprise 128 convolution kernels with a size of 3×3 and a ReLU activation function; the convolution layer C3 is used for inputting the feature map A3; after the feature map A3 is processed by all convolution kernels and the ReLU activation function of the convolution layer C3, a feature map A4 is output, and the pixel size of the feature map A4 is 156×156×128; the convolution layer C4 is used for inputting the feature map A4; after the feature map A4 is processed by all convolution kernels and the ReLU activation function of the convolution layer C4, a feature map A5 is output, and the pixel size of the feature map A5 is 154×154×128; the pooling layer P2 is used for inputting the feature map A5, dividing the feature map A5 into a plurality of 2×2 blocks and taking the maximum value of each block, after which a feature map A6 is output, and the pixel size of the feature map A6 is 77×77×128;
the third layer consists of a convolution layer C5 and a convolution layer C6, wherein the convolution layer C5 and the convolution layer C6 each comprise 256 convolution kernels with a size of 3×3 and a ReLU activation function; the convolution layer C5 is used for inputting the feature map A6; after the feature map A6 is processed by all convolution kernels and the ReLU activation function of the convolution layer C5, a feature map A7 is output, and the pixel size of the feature map A7 is 75×75×256; the convolution layer C6 is used for inputting the feature map A7; after the feature map A7 is processed by all convolution kernels and the ReLU activation function of the convolution layer C6, a feature map A8 is output, and the pixel size of the feature map A8 is 73×73×256;
the fourth layer consists of an up-sampling layer U1, a convolution layer C7 and a convolution layer C8, wherein the up-sampling layer U1 comprises 256 convolution kernels with a size of 2×2, and the convolution layer C7 and the convolution layer C8 each comprise 128 convolution kernels with a size of 3×3 and a ReLU activation function; the up-sampling layer U1 is used for inputting the feature map A8; after the feature map A8 is deconvolved by all convolution kernels of the up-sampling layer U1, a feature map A9 is output, and the pixel size of the feature map A9 is 146×146×256; the convolution layer C7 is used for inputting the feature map A9; after the feature map A9 is processed by all convolution kernels and the ReLU activation function of the convolution layer C7, a feature map A10 is output, and the pixel size of the feature map A10 is 144×144×128; the convolution layer C8 is used for inputting the feature map A10; after the feature map A10 is processed by all convolution kernels and the ReLU activation function of the convolution layer C8, a feature map A11 is output, and the pixel size of the feature map A11 is 142×142×128;
the fifth layer consists of an up-sampling layer U2, a convolution layer C9, a convolution layer C10 and a convolution layer C11, wherein the up-sampling layer U2 comprises 128 convolution kernels with a size of 2×2, the convolution layer C9 and the convolution layer C10 each comprise 64 convolution kernels with a size of 3×3 and a ReLU activation function, and the convolution layer C11 comprises 2 convolution kernels with a size of 1×1 and a ReLU activation function; the up-sampling layer U2 is used for inputting the feature map A11; after the feature map A11 is deconvolved by all convolution kernels of the up-sampling layer U2, a feature map A12 is output, and the pixel size of the feature map A12 is 284×284×128; the convolution layer C9 is used for inputting the feature map A12; after the feature map A12 is processed by all convolution kernels and the ReLU activation function of the convolution layer C9, a feature map A13 is output, and the pixel size of the feature map A13 is 282×282×64; the convolution layer C10 is used for inputting the feature map A13; after the feature map A13 is processed by all convolution kernels and the ReLU activation function of the convolution layer C10, a feature map A14 is output, and the pixel size of the feature map A14 is 280×280×64; the convolution layer C11 is used for inputting the feature map A14; after the feature map A14 is processed by all convolution kernels and the ReLU activation function of the convolution layer C11, a feature map A15 is output, the pixel size of the feature map A15 is 280×280×2, and the output features of the feature map A15 comprise the coal interface and the rock interface of the coal-rock ore.
2. The device for identifying a coal-rock interface based on a solid-state laser radar according to claim 1, wherein: each laser radar module comprises a radar signal transmitting unit, a radar reflected-signal receiving unit and a radar signal A/D conversion unit;
the radar signal transmitting unit is used for transmitting radar signals to the coal-rock ore;
the radar reflected-signal receiving unit is used for receiving the signals reflected by the coal-rock ore;
and the radar signal A/D conversion unit is used for performing data conversion on the reflected signals to obtain the coal-rock ore data information.
3. The device for identifying a coal-rock interface based on a solid-state laser radar according to claim 1, wherein: each laser radar module is a solid-state laser radar.
4. The device for identifying a coal-rock interface based on a solid-state laser radar according to claim 1, wherein: the device further comprises a power supply module for supplying power to the plurality of laser radar modules, the signal transmission module, the data storage module, the radar imaging module, the image fusion module and the image recognition module.
5. A method for identifying a coal-rock interface based on a solid-state laser radar, characterized in that the method comprises the following steps:
transmitting radar signals to the same area of the coal-rock ore by a plurality of laser radar modules to obtain a plurality of groups of coal-rock ore data information, wherein each laser radar module can transmit radar signals to the coal-rock ore and obtain coal-rock ore data information from the signals reflected by the coal-rock ore;
storing the plurality of groups of coal-rock ore data information;
retrieving the stored plurality of groups of coal-rock ore data information and imaging each group of coal-rock ore data information separately to obtain a coal-rock texture image corresponding to each group, namely a plurality of coal-rock texture images of the same area of the coal-rock ore;
fusing the plurality of coal-rock texture images to obtain a fused coal-rock texture image;
normalizing the fused coal-rock texture image and identifying the normalized image to obtain a coal-rock interface identification result; wherein normalizing the fused coal-rock texture image and identifying the normalized image comprises the following steps:
carrying out normalization processing on existing coal-rock texture images, constructing a fully convolutional neural network model, and training and testing the fully convolutional neural network model with the normalized existing coal-rock texture images to obtain a trained fully convolutional neural network model;
normalizing the fused coal-rock texture image, inputting the normalized image into the trained fully convolutional neural network model, and outputting a coal-rock interface identification result from the trained fully convolutional neural network model;
the depth of the trained fully convolutional neural network model is five layers, namely a first layer, a second layer, a third layer, a fourth layer and a fifth layer;
the first layer consists of a convolution layer C1, a convolution layer C2 and a pooling layer P1, wherein the convolution layer C1 and the convolution layer C2 each comprise 64 convolution kernels with a size of 3×3 and a ReLU activation function; the convolution layer C1 is used for inputting the normalized image, the pixel size of which is 320×320×1; after the normalized image is processed by all convolution kernels and the ReLU activation function of the convolution layer C1, a feature map A1 is output, and the pixel size of the feature map A1 is 318×318×64; the convolution layer C2 is used for inputting the feature map A1; after the feature map A1 is processed by all convolution kernels and the ReLU activation function of the convolution layer C2, a feature map A2 is output, and the pixel size of the feature map A2 is 316×316×64; the pooling layer P1 is used for inputting the feature map A2, dividing the feature map A2 into a plurality of 2×2 blocks and taking the maximum value of each block, after which a feature map A3 is output, and the pixel size of the feature map A3 is 158×158×64;
the second layer consists of a convolution layer C3, a convolution layer C4 and a pooling layer P2, wherein the convolution layer C3 and the convolution layer C4 each comprise 128 convolution kernels with a size of 3×3 and a ReLU activation function; the convolution layer C3 is used for inputting the feature map A3; after the feature map A3 is processed by all convolution kernels and the ReLU activation function of the convolution layer C3, a feature map A4 is output, and the pixel size of the feature map A4 is 156×156×128; the convolution layer C4 is used for inputting the feature map A4; after the feature map A4 is processed by all convolution kernels and the ReLU activation function of the convolution layer C4, a feature map A5 is output, and the pixel size of the feature map A5 is 154×154×128; the pooling layer P2 is used for inputting the feature map A5, dividing the feature map A5 into a plurality of 2×2 blocks and taking the maximum value of each block, after which a feature map A6 is output, and the pixel size of the feature map A6 is 77×77×128;
the third layer consists of a convolution layer C5 and a convolution layer C6, wherein the convolution layer C5 and the convolution layer C6 each comprise 256 convolution kernels with a size of 3×3 and a ReLU activation function; the convolution layer C5 is used for inputting the feature map A6; after the feature map A6 is processed by all convolution kernels and the ReLU activation function of the convolution layer C5, a feature map A7 is output, and the pixel size of the feature map A7 is 75×75×256; the convolution layer C6 is used for inputting the feature map A7; after the feature map A7 is processed by all convolution kernels and the ReLU activation function of the convolution layer C6, a feature map A8 is output, and the pixel size of the feature map A8 is 73×73×256;
the fourth layer consists of an up-sampling layer U1, a convolution layer C7 and a convolution layer C8, wherein the up-sampling layer U1 comprises 256 convolution kernels with a size of 2×2, and the convolution layer C7 and the convolution layer C8 each comprise 128 convolution kernels with a size of 3×3 and a ReLU activation function; the up-sampling layer U1 is used for inputting the feature map A8; after the feature map A8 is deconvolved by all convolution kernels of the up-sampling layer U1, a feature map A9 is output, and the pixel size of the feature map A9 is 146×146×256; the convolution layer C7 is used for inputting the feature map A9; after the feature map A9 is processed by all convolution kernels and the ReLU activation function of the convolution layer C7, a feature map A10 is output, and the pixel size of the feature map A10 is 144×144×128; the convolution layer C8 is used for inputting the feature map A10; after the feature map A10 is processed by all convolution kernels and the ReLU activation function of the convolution layer C8, a feature map A11 is output, and the pixel size of the feature map A11 is 142×142×128;
the fifth layer consists of an up-sampling layer U2, a convolution layer C9, a convolution layer C10 and a convolution layer C11, wherein the up-sampling layer U2 comprises 128 convolution kernels with a size of 2×2, the convolution layer C9 and the convolution layer C10 each comprise 64 convolution kernels with a size of 3×3 and a ReLU activation function, and the convolution layer C11 comprises 2 convolution kernels with a size of 1×1 and a ReLU activation function; the up-sampling layer U2 is used for inputting the feature map A11; after the feature map A11 is deconvolved by all convolution kernels of the up-sampling layer U2, a feature map A12 is output, and the pixel size of the feature map A12 is 284×284×128; the convolution layer C9 is used for inputting the feature map A12; after the feature map A12 is processed by all convolution kernels and the ReLU activation function of the convolution layer C9, a feature map A13 is output, and the pixel size of the feature map A13 is 282×282×64; the convolution layer C10 is used for inputting the feature map A13; after the feature map A13 is processed by all convolution kernels and the ReLU activation function of the convolution layer C10, a feature map A14 is output, and the pixel size of the feature map A14 is 280×280×64; the convolution layer C11 is used for inputting the feature map A14; after the feature map A14 is processed by all convolution kernels and the ReLU activation function of the convolution layer C11, a feature map A15 is output, the pixel size of the feature map A15 is 280×280×2, and the output features of the feature map A15 comprise the coal interface and the rock interface of the coal-rock ore.
6. The method for identifying a coal-rock interface based on a solid-state laser radar according to claim 5, wherein each laser radar module transmitting radar signals to the coal-rock ore and obtaining coal-rock ore data information from the signals reflected by the coal-rock ore comprises:
transmitting radar signals to the coal-rock ore through a radar signal transmitting unit of the laser radar module;
receiving the signals reflected by the coal-rock ore through a radar reflected-signal receiving unit of the laser radar module;
and performing data conversion on the reflected signals through a radar signal A/D conversion unit of the laser radar module to obtain the coal-rock ore data information.
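For reference, the five-layer network recited in claims 1 and 5 can be written out as the following sketch. The use of PyTorch is an assumption (the patent names no software framework), but the layer structure follows the claim text directly: un-padded ("valid") convolutions, 2×2 max pooling and 2×2 stride-2 deconvolutions reproduce every stated feature-map size, from the 320×320×1 normalized input image to the 280×280×2 output feature map A15.

```python
# Minimal, non-normative sketch of the claimed five-layer fully convolutional network.
import torch
import torch.nn as nn


class CoalRockFCN(nn.Module):
    def __init__(self):
        super().__init__()
        relu = nn.ReLU(inplace=True)
        # Layer 1: C1, C2 (64 kernels, 3x3) and max pooling P1 over 2x2 blocks
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 64, 3), relu,                   # 320x320x1 -> 318x318x64 (A1)
            nn.Conv2d(64, 64, 3), relu,                  # -> 316x316x64 (A2)
            nn.MaxPool2d(2),                             # -> 158x158x64 (A3)
        )
        # Layer 2: C3, C4 (128 kernels, 3x3) and max pooling P2
        self.layer2 = nn.Sequential(
            nn.Conv2d(64, 128, 3), relu,                 # -> 156x156x128 (A4)
            nn.Conv2d(128, 128, 3), relu,                # -> 154x154x128 (A5)
            nn.MaxPool2d(2),                             # -> 77x77x128 (A6)
        )
        # Layer 3: C5, C6 (256 kernels, 3x3)
        self.layer3 = nn.Sequential(
            nn.Conv2d(128, 256, 3), relu,                # -> 75x75x256 (A7)
            nn.Conv2d(256, 256, 3), relu,                # -> 73x73x256 (A8)
        )
        # Layer 4: up-sampling U1 (2x2 deconvolution, stride 2), then C7, C8 (128 kernels, 3x3)
        self.layer4 = nn.Sequential(
            nn.ConvTranspose2d(256, 256, 2, stride=2),   # -> 146x146x256 (A9)
            nn.Conv2d(256, 128, 3), relu,                # -> 144x144x128 (A10)
            nn.Conv2d(128, 128, 3), relu,                # -> 142x142x128 (A11)
        )
        # Layer 5: up-sampling U2, then C9, C10 (64 kernels, 3x3) and C11 (2 kernels, 1x1)
        self.layer5 = nn.Sequential(
            nn.ConvTranspose2d(128, 128, 2, stride=2),   # -> 284x284x128 (A12)
            nn.Conv2d(128, 64, 3), relu,                 # -> 282x282x64 (A13)
            nn.Conv2d(64, 64, 3), relu,                  # -> 280x280x64 (A14)
            nn.Conv2d(64, 2, 1), relu,                   # -> 280x280x2 (A15): coal/rock channels
        )

    def forward(self, x):
        # x: batch of normalized single-channel 320x320 images, shape (N, 1, 320, 320)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        return self.layer5(x)


if __name__ == "__main__":
    model = CoalRockFCN()
    dummy = torch.zeros(1, 1, 320, 320)   # one normalized 320x320x1 image
    print(model(dummy).shape)             # torch.Size([1, 2, 280, 280])
```

Unlike a U-Net, the claimed network has no skip connections between its down-sampling and up-sampling paths, and because no padding is used the 280×280 output map is smaller than the 320×320 input.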
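The claims state only that the fully convolutional neural network model is trained and tested on normalized existing coal-rock texture images. The loop below is one plausible way to do that, under assumed choices that do not come from the patent: per-pixel cross-entropy loss, the Adam optimizer, and 280×280 integer label maps in which coal pixels are 0 and rock pixels are 1.

```python
# Assumed training sketch for the CoalRockFCN defined in the previous sketch.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def train(model, images, labels, epochs=10, lr=1e-3):
    # images: float tensor of shape (N, 1, 320, 320); labels: long tensor of shape (N, 280, 280)
    loader = DataLoader(TensorDataset(images, labels), batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()   # per-pixel two-class loss (assumption)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)   # logits (B, 2, 280, 280) vs labels (B, 280, 280)
            loss.backward()
            optimizer.step()
    return model
```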
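The recognition steps of claim 5 (fusing the coal-rock texture images of one area, normalizing the fused image, feeding it to the trained model and reading the two-channel output) can be sketched as follows. Pixel-wise mean fusion, min-max normalization and the coal/rock channel order are all assumptions, since the claims do not fix them; `CoalRockFCN` is the class defined in the first sketch.

```python
import numpy as np
import torch


def fuse_texture_images(images):
    """Fuse several coal-rock texture images of the same area (assumed: pixel-wise mean)."""
    stack = np.stack([np.asarray(img, dtype=np.float32) for img in images], axis=0)
    return stack.mean(axis=0)


def normalize(image):
    """Scale a fused 320x320 texture image to [0, 1] (assumed normalization scheme)."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)


def recognize_interface(model, images):
    """Return a 280x280 class map; the channel order (0 = coal, 1 = rock) is an assumption."""
    fused = normalize(fuse_texture_images(images))    # 320x320 float image
    x = torch.from_numpy(fused).float()[None, None]   # shape (1, 1, 320, 320)
    with torch.no_grad():
        logits = model(x)                             # shape (1, 2, 280, 280)
    return logits.argmax(dim=1)[0].numpy()


if __name__ == "__main__":
    model = CoalRockFCN()          # in practice, load trained weights before inference
    model.eval()
    views = [np.random.rand(320, 320) for _ in range(3)]   # stand-in radar texture images
    print(recognize_interface(model, views).shape)          # (280, 280)
```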
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811138102.4A CN109444845B (en) | 2018-09-28 | 2018-09-28 | Device and method for identifying coal-rock interface based on solid-state laser radar |
PCT/CN2018/115023 WO2020062470A1 (en) | 2018-09-28 | 2018-11-12 | Apparatus and method for recognizing coal-rock interface based on solid-state laser radar imaging |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811138102.4A CN109444845B (en) | 2018-09-28 | 2018-09-28 | Device and method for identifying coal-rock interface based on solid-state laser radar |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109444845A CN109444845A (en) | 2019-03-08 |
CN109444845B (en) | 2023-05-23
Family
ID=65546183
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811138102.4A Active CN109444845B (en) | 2018-09-28 | 2018-09-28 | Device and method for identifying coal-rock interface based on solid-state laser radar |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109444845B (en) |
WO (1) | WO2020062470A1 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259095A (en) * | 2020-01-08 | 2020-06-09 | 京工博创(北京)科技有限公司 | Method, device and equipment for calculating boundary of ore rock |
CN111337883B (en) * | 2020-04-17 | 2022-02-08 | 中国矿业大学(北京) | Intelligent detection and identification system and method for mine coal rock interface |
CN111812671A (en) * | 2020-06-24 | 2020-10-23 | 北京佳力诚义科技有限公司 | Artificial intelligence ore recognition device and method based on laser imaging |
CN111931824B (en) * | 2020-07-15 | 2024-05-28 | 中煤科工集团重庆研究院有限公司 | Coal rock identification method based on drilling slag return image |
CN112001253B (en) * | 2020-07-23 | 2021-11-30 | 西安科技大学 | Coal dust particle image identification method based on improved Fast R-CNN |
CN111968136A (en) * | 2020-08-18 | 2020-11-20 | 华院数据技术(上海)有限公司 | Coal rock microscopic image analysis method and analysis system |
CN114689625B (en) * | 2020-12-29 | 2024-09-17 | 中冶长天国际工程有限责任公司 | Ore grade acquisition system and method based on multi-source data |
CN112818952B (en) * | 2021-03-11 | 2024-07-26 | 中国科学院武汉岩土力学研究所 | Coal rock boundary recognition method and device and electronic equipment |
CN113137230B (en) * | 2021-05-20 | 2023-08-22 | 太原理工大学 | Coal-rock interface recognition system |
CN113421222B (en) * | 2021-05-21 | 2023-06-23 | 西安科技大学 | Lightweight coal gangue target detection method |
CN113267124B (en) * | 2021-05-26 | 2024-10-15 | 济南玛恩机械电子科技有限公司 | Laser radar-based caving coal caving amount measuring system and caving control method |
CN113406296A (en) * | 2021-06-24 | 2021-09-17 | 辽宁工程技术大学 | Coal petrography intelligent recognition system based on degree of depth learning |
CN113777108B (en) * | 2021-11-10 | 2022-01-18 | 河北工业大学 | Method, device and medium for identifying boundary of double-substance interface |
CN114322743B (en) * | 2022-01-05 | 2024-04-12 | 瞬联软件科技(北京)有限公司 | Tunnel deformation real-time monitoring system and monitoring method |
CN116297544A (en) * | 2023-03-16 | 2023-06-23 | 南京京烁雷达科技有限公司 | Method and device for extracting target object of coal rock identification ground penetrating radar |
CN116539643B (en) * | 2023-03-16 | 2023-11-17 | 南京京烁雷达科技有限公司 | Method and system for identifying coal rock data by using radar |
CN116310843B (en) * | 2023-05-16 | 2023-07-21 | 三一重型装备有限公司 | Coal rock identification method, device, readable storage medium and heading machine |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SU891914A1 (en) * | 1980-03-26 | 1981-12-23 | Научно-Производственное Объединение "Автоматгормаш Союзуглеавтоматика" | Method of monitoring the coal-rock interface |
US4981327A (en) * | 1989-06-09 | 1991-01-01 | Consolidation Coal Company | Method and apparatus for sensing coal-rock interface |
US8884806B2 (en) * | 2011-10-26 | 2014-11-11 | Raytheon Company | Subterranean radar system and method |
CN102496004B (en) * | 2011-11-24 | 2013-11-06 | 中国矿业大学(北京) | Coal-rock interface identifying method and system based on image |
CN103927514B (en) * | 2014-04-09 | 2017-07-25 | 中国矿业大学(北京) | A kind of Coal-rock identification method based on random local image characteristics |
CN104134074B (en) * | 2014-07-31 | 2017-06-23 | 中国矿业大学 | A kind of Coal-rock identification method based on laser scanning |
CN106845509A (en) * | 2016-10-19 | 2017-06-13 | 中国矿业大学(北京) | A kind of Coal-rock identification method based on bent wave zone compressive features |
CN107676095B (en) * | 2017-11-01 | 2019-07-26 | 天地科技股份有限公司 | High seam top coal caving device and method |
2018
- 2018-09-28: CN application CN201811138102.4A, patent CN109444845B (status: active)
- 2018-11-12: WO application PCT/CN2018/115023, publication WO2020062470A1 (status: application filing)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202383714U (en) * | 2011-11-24 | 2012-08-15 | 中国矿业大学(北京) | Coal petrography interface identification system based on image |
CN103207999A (en) * | 2012-11-07 | 2013-07-17 | 中国矿业大学(北京) | Method and system for coal and rock boundary dividing based on coal and rock image feature extraction and classification and recognition |
CN103472447A (en) * | 2013-09-13 | 2013-12-25 | 北京科技大学 | Multipoint-radar collaborative imaging device based on chute position judgment and method thereof |
CN107272017A (en) * | 2017-06-29 | 2017-10-20 | 深圳市速腾聚创科技有限公司 | Multilasered optical radar system and its control method |
CN107728143A (en) * | 2017-09-18 | 2018-02-23 | 西安电子科技大学 | Radar High Range Resolution target identification method based on one-dimensional convolutional neural networks |
CN107886121A (en) * | 2017-11-03 | 2018-04-06 | 北京清瑞维航技术发展有限公司 | Target identification method, apparatus and system based on multiband radar |
CN108519812A (en) * | 2018-03-21 | 2018-09-11 | 电子科技大学 | A kind of three-dimensional micro-doppler gesture identification method based on convolutional neural networks |
CN108564108A (en) * | 2018-03-21 | 2018-09-21 | 天津市协力自动化工程有限公司 | The recognition methods of coal and device |
Non-Patent Citations (1)
Title |
---|
Experimental analysis of the coal-rock interface based on geological radar detection; Xu Xudong et al.; Journal of North China Institute of Science and Technology (华北科技学院学报); 2016-12-31; Vol. 13, No. 6; pp. 78-81 *
Also Published As
Publication number | Publication date |
---|---|
WO2020062470A1 (en) | 2020-04-02 |
CN109444845A (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109444845B (en) | Device and method for identifying coal-rock interface based on solid-state laser radar | |
CN101557452B (en) | Scanning system for 3d mineralogy modelling | |
CN110135468A (en) | A kind of recognition methods of gangue | |
CN111126238A (en) | X-ray security inspection system and method based on convolutional neural network | |
CN107423734B (en) | SAR image ocean target rapid detection method and device | |
CN112801061A (en) | Posture recognition method and system | |
Jiang et al. | A deep neural networks approach for pixel-level runway pavement crack segmentation using drone-captured images | |
US9613328B2 (en) | Workflow monitoring and analysis system and method thereof | |
CN107891015A (en) | Bastard coal infrared identification device in a kind of bastard coal sorting | |
Harraden et al. | Automated core logging technology for geotechnical assessment: A study on core from the Cadia East porphyry deposit | |
CN105003301B (en) | A kind of fully-mechanized mining working staff danger Attitute detecting device and detecting system | |
CN205330670U (en) | Coal petrography recognition device based on study of dependence tolerance | |
EP4253905A1 (en) | Information processing device, information processing method, and program | |
CN117689960B (en) | Lithology scene classification model construction method and classification method | |
CN102496032A (en) | Electrical equipment X ray digital image processing algorithm support system | |
Wang et al. | Real-time detection and location of reserved anchor hole in coal mine roadway support steel belt | |
CN109211607A (en) | A kind of method of sampling, device, equipment, system and readable storage medium storing program for executing | |
Valencia et al. | Blasthole Location Detection Using Support Vector Machine and Convolutional Neural Networks on UAV Images and Photogrammetry Models | |
CN116796643A (en) | Surface subsidence monitoring method and device, electronic equipment and storage medium | |
CN115576025A (en) | Detection and analysis system for overburden rock gap after coal seam mining of underground mining coal mine | |
Kurniawan et al. | Videogrammetry: A new approach of 3-dimensional reconstruction from video using SfM algorithm: Case studi: Coal mining area | |
Peng et al. | A new method for recognizing discontinuities from 3D point clouds in tunnel construction environments | |
CN110174694A (en) | A kind of acquisition of advance geologic prediction data and analysis method | |
CN118584537B (en) | Wireless microseismic data perception processing method and system based on edge calculation | |
Zhuang et al. | Surrounding rock classification from onsite images with deep transfer learning based on EfficientNet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||