CN112686889B - Hydraulic support pushing progress detection method and system based on monocular vision automatic label


Info

Publication number
CN112686889B
CN112686889B
Authority
CN
China
Prior art keywords
camera
hydraulic support
neural network
information
pushing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110116865.4A
Other languages
Chinese (zh)
Other versions
CN112686889A (en)
Inventor
王永强
张幸福
常亚军
杨文明
杨艺
杜洪生
连东辉
崔科飞
李春鹏
冯敬培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Hengda Intelligent Control Technology Co ltd
Original Assignee
Zhengzhou Coal Mining Machinery Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Coal Mining Machinery Group Co Ltd
Priority to CN202110116865.4A
Publication of CN112686889A
Application granted
Publication of CN112686889B
Legal status: Active
Anticipated expiration



Abstract

The invention provides a method and a system for detecting the pushing progress of a hydraulic support based on a monocular vision automatic label. The method comprises the following steps: a camera is installed on a hydraulic support and looks down on the pushing rod of the support, and mark 1 and mark 2 are arranged on the hydraulic support base and on the scraper conveyor crosshead at the two ends of the pushing rod, respectively; a ten-layer deep residual network model composed of one-dimensional convolutional layers Conv1D(M, N) is constructed; the constructed deep neural network model is placed in the GPU unit of a controller, which periodically reads the image information of the camera, the position information of the camera, and the pushing distance of the pushing rod, feeds them into the deep neural network model, and trains it; the controller then reads the image information and position information of the camera, feeds them into the trained deep neural network model as input signals, and the model outputs the pushing distance of the pushing rod. The method requires neither calibration nor manually supplied labels, and can be trained online.

Description

Hydraulic support pushing progress detection method and system based on monocular vision automatic label
Technical Field
The invention relates to a detection method based on machine vision, in particular to a method and a system for detecting the pushing progress of a hydraulic support based on a monocular vision automatic label.
Background
The hydraulic support is the core equipment for safely supporting an underground coal mine working face. On the working face, after the shearer cuts the coal wall, the hydraulic support pushes the scraper conveyor forward, thereby advancing the working face in the mining direction.
During working face extraction, the straightness of the scraper conveyor directly determines the thickness of the coal wall cut by the shearer drum, and hence the amount of coal mined in a single cut. Because the straightness of the scraper conveyor is set by the pushing cylinders of the hydraulic supports, it also determines the spatial distribution of the supports. Poor straightness of the working face can destabilize its safety support and cause accidents. Straightness detection of the working face is therefore a key problem to be solved in underground coal mines.
The main methods for detecting working face straightness are inertial navigation devices and displacement sensors. Inertial navigation is costly, and the straightness it reports is only the travel track of the shearer in the previous cut rather than the real-time straightness of the face, so the capability of the device is not fully exploited. Displacement sensing measures the relative displacement between the hydraulic support and the scraper conveyor with a laser or sonar sensor. This approach requires a displacement sensor at the front end of every hydraulic support to establish the straightness of the whole face, and under falling and floating coal these sensors are easily damaged or produce erroneous data.
An alternative is to detect the pushing progress of the pushing cylinder of the hydraulic support with a camera. In machine vision, the position detected by a monocular camera is usually recovered by calibrating the camera's intrinsic parameters against a world coordinate system using the pixels of the captured video image. However, the monocular camera here rotates about two axes, horizontal and pitch, so imaging distortion under dynamically changing angles cannot be determined by the usual monocular calibration method. Moreover, as the hydraulic support advances, the position of the camera relative to the ground changes, so the shooting conditions vary greatly. Accurate detection of the pushing stroke of the hydraulic support therefore cannot be achieved with a monocular camera by calibration.
Disclosure of Invention
The invention aims to provide a hydraulic support pushing progress detection method and system based on a monocular vision automatic label, addressing the defects of the prior art.
To achieve this aim, the invention adopts the following technical scheme:
The invention provides a hydraulic support pushing progress detection method based on a monocular vision automatic label, which comprises the following steps:
deployment detection system
Mount a camera on a hydraulic support, wherein the 0-degree line of the camera's horizontal plane is the center line of the hydraulic support top beam, the pitching plane of the camera is perpendicular to the top beam, and the 0-degree line of the pitching plane is the center line of the top beam;
The camera looks down on the pushing rod of the hydraulic support; mark 1 and mark 2 are arranged on the hydraulic support base and on the scraper conveyor crosshead at the two ends of the pushing rod, respectively;
constructing a deep neural network model
Construct a ten-layer deep residual network model composed of one-dimensional convolutional layers Conv1D(M, N), where M is the number of input channels and N the number of output channels; the first convolutional layer is Conv1D(3, 64), the second Conv1D(64, 128), the third Conv1D(128, 128), the fourth Conv1D(128, 64), the fifth Conv1D(64, 64), the sixth Conv1D(64, 32), the seventh Conv1D(32, 32), the eighth Conv1D(32, 4), the ninth Conv1D(4, 2), and the tenth Conv1D(2, 4);
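For illustration, this ten-layer stack might be realized as in the following sketch (PyTorch assumed). The patent fixes only the channel counts of each Conv1D layer, so the kernel size, padding, ReLU activation, the residual shortcuts (applied wherever input and output shapes match), and the reshaping of the six scalar inputs into a 3-channel, length-2 sequence are all assumptions.

```python
# Minimal sketch of the ten-layer 1-D convolutional residual network
# (PyTorch assumed); only the channel counts come from the patent.
import torch
import torch.nn as nn

class PushStrokeNet(nn.Module):
    def __init__(self):
        super().__init__()
        channels = [(3, 64), (64, 128), (128, 128), (128, 64), (64, 64),
                    (64, 32), (32, 32), (32, 4), (4, 2), (2, 4)]
        self.convs = nn.ModuleList(
            nn.Conv1d(m, n, kernel_size=3, padding=1) for m, n in channels)
        self.act = nn.ReLU()

    def forward(self, x):
        for conv in self.convs:
            y = self.act(conv(x))
            # residual shortcut wherever the shapes allow it (assumption)
            x = x + y if y.shape == x.shape else y
        return x

net = PushStrokeNet()
out = net(torch.randn(1, 3, 2))   # 6 inputs laid out as (3 channels, length 2) -> (1, 4, 2)
```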
neural network training based on automatic labeling
Place the constructed deep neural network model in the GPU unit of a controller; the controller periodically reads the image information of the camera, the position information of the camera, and the pushing distance of the pushing rod;
Extract the position information of the markers in the image from the camera image information, namely the coordinates of the centers of the marker pixels on the coordinate plane x0y, 4 data in total; extract the horizontal angle and pitch angle of the camera from the camera angle information, 2 data in total;
Feed these 6 data into the deep neural network model as the input signal; the pushing distance of the pushing rod serves as the label of the model and is fed in as the output signal, and the deep neural network model is trained;
during training, the loss function: log loss function
Figure 100002_DEST_PATH_IMAGE002
In the formula, X is a neural network input signal, and Y is a corresponding neural network label; the training method comprises the following steps: a random gradient descent method; end of training marker: the loss function value is less than 0.001;
push progress detection
The controller reads the image information and the position information of the camera, extracts the coordinates of the centers of the marker pixels on the coordinate plane x0y from the image information (4 data in total), and extracts the horizontal angle and pitch angle of the camera from the angle information (2 data); these 6 data are fed into the trained deep neural network model as the input signal, and the model outputs the pushing distance of the pushing rod.
Based on the above, mark 1 and mark 2 are 5 cm × 5 cm red-and-white reflective stickers.
Based on the above, the camera is mounted on a pan-tilt head and can record and output the rotation angles of its horizontal axis and pitch axis.
Based on the above, during neural network training based on the automatic label, the camera image information read by the controller first undergoes enhancement preprocessing of histogram equalization and contrast adjustment, with the contrast enhancement adjustment ranging from +0.0 to +1.0; and the image information of the pushing rods of the current hydraulic support and of the left and right adjacent hydraulic supports is obtained from the read camera image.
Based on the above, during neural network training based on the automatic label, the controller reads only the pushing distances of the pushing rods of the current hydraulic support and the left and right adjacent hydraulic supports, the pushing distances being obtained by sensor detection.
Based on the above, the method for extracting the position information of the markers in the image from the camera image information is as follows:
In the image captured by the camera, the end information of the current hydraulic support and of the left and right adjacent hydraulic supports must be determined; three cases arise: camera deviated to the left, camera centered, and camera deviated to the right;
Segment each marker in the enhanced image and number them from top to bottom and from left to right;
According to the positions of the markers in the image, first divide them into an upper row and a lower row, then pair the markers from left to right;
Confirm the center point via the intersection of the horizontal and vertical lines in each segmented marker, obtaining the coordinates of the centers of the marker pixels on the coordinate plane x0y.
The invention further provides a hydraulic support pushing progress detection system based on a monocular vision automatic label, which comprises a hydraulic support, a camera, and a controller provided with a GPU unit; the hydraulic support, the camera, and the controller are used to carry out the monocular vision automatic label based hydraulic support pushing progress detection method of any one of claims 1 to 6.
Based on the above, the camera is a monocular servo camera providing RGB three-primary-color images at a resolution of 1920 × 1080.
Based on the above, the data precision of the pushing distance of the pushing rod is 0.01 m.
Based on the above, the GPU of the controller adopts an NVIDIA T4 GPU graphics card.
Compared with the prior art, the invention has prominent substantive characteristics and remarkable progress, and particularly has the following beneficial effects:
(1) Only one monocular servo camera is employed: the camera is mounted on the hydraulic support and looks down on the pushing rod; after the image captured by the camera is segmented and the markers are paired, an image containing n pairs of markers provides n training samples, so a single monocular servo camera supplies multiple training samples;
(2) No calibration is required: a deep neural network model is trained on the image information of the camera, the position information of the camera, and the pushing distance of the pushing rod, so the pushing stroke of the hydraulic support is detected accurately without calibrating the camera's detection position;
(3) No manually provided labels: the label of the deep neural network model is the pushing distance of the pushing rod; the controller automatically reads the pushing distance detected by the sensor as the label, so labels need not be provided manually;
(4) Online training is possible: the deep neural network model is trained on the image information of the camera, the position information of the camera, and the pushing distance of the pushing rod; the camera and the sensor are in communication connection with the controller, so the controller can read them periodically and train on the GPU online.
Drawings
Fig. 1 is a schematic diagram of the installation position of the camera on the hydraulic support in the system of the invention.
FIG. 2 is a spatial distribution diagram of the hydraulic support and the scraper conveyor in the system of the present invention.
FIG. 3 is a diagram of spatial locations identified in the system of the present invention.
FIG. 4 is a schematic diagram of imaging of three hydraulic supports by a camera according to the method of the present invention.
FIG. 5 is a schematic diagram of the identification extraction and numbering coordinates in the method of the present invention.
FIG. 6 is a diagram of a deep neural network model architecture in the method of the present invention.
Fig. 7 is a block diagram of the system framework of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail by the following embodiments.
Example 1
As shown in fig. 1 to 7, this embodiment provides a hydraulic support pushing progress detection system based on a monocular vision automatic label; the detection system comprises a hydraulic support, a camera, and a controller provided with a GPU unit; the hydraulic support, the camera, and the controller carry out the monocular vision automatic label based hydraulic support pushing progress detection method, which comprises the following steps:
deployment detection system
Mount a camera on a hydraulic support, wherein the 0-degree line of the camera's horizontal plane is the center line of the hydraulic support top beam, the pitching plane of the camera is perpendicular to the top beam, and the 0-degree line of the pitching plane is the center line of the top beam;
The camera looks down on the pushing rod of the hydraulic support; mark 1 and mark 2 are arranged on the hydraulic support base and on the scraper conveyor crosshead at the two ends of the pushing rod, respectively. Specifically, marks 1 and 2 are 5 cm × 5 cm red-and-white reflective stickers; in other embodiments, other markers suitable for marking the hydraulic support base and the scraper conveyor crosshead may be used directly.
Constructing a deep neural network model
A ten-layer deep residual network model composed of one-dimensional convolutional layers Conv1D(M, N) was constructed, where M is the number of input channels and N the number of output channels; the first convolutional layer is Conv1D(3, 64), the second Conv1D(64, 128), the third Conv1D(128, 128), the fourth Conv1D(128, 64), the fifth Conv1D(64, 64), the sixth Conv1D(64, 32), the seventh Conv1D(32, 32), the eighth Conv1D(32, 4), the ninth Conv1D(4, 2), and the tenth Conv1D(2, 4).
Neural network training based on automatic labeling
(1) Image acquisition: the camera is mounted on a pan-tilt head and can record and output the rotation angles of its horizontal axis and pitch axis; it is a commercially available monocular servo camera;
(2) information acquisition and preprocessing:
to determine the length of the push rod in the video image, the detection system provides three types of information: image information of the camera; the length of the pushing rod is detected by a sensor; the pitch angle and the horizontal angle of rotation of the camera;
the camera provides three primary color images of RGB, and the resolution is 1920 x 1080. The image information of the camera read by the controller is subjected to enhancement preprocessing of histogram equalization and contrast adjustment, wherein the contrast enhancement adjustment range is +0.0 to + 1.0; and image information of the current hydraulic frame and the push rods of the left and right adjacent hydraulic frames is obtained from the read image information of the camera, and the other image information is not considered. The controller only reads the pushing distance of the current hydraulic frame and the pushing rods of the left and right adjacent hydraulic frames, the pushing distance of the pushing rods is obtained through detection of the sensors, and the data precision of the pushing distance of the pushing rods is 0.01 m;
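A possible implementation of this preprocessing is sketched below (OpenCV assumed); equalizing only the luminance channel in YCrCb space and mapping the +0.0 to +1.0 adjustment onto a linear gain are assumptions, since the patent does not fix the exact operators.

```python
# Sketch of the enhancement preprocessing: histogram equalization plus a
# contrast adjustment whose gain comes from the +0.0..+1.0 range above.
import cv2
import numpy as np

def enhance(bgr: np.ndarray, contrast_gain: float = 0.5) -> np.ndarray:
    # equalize only the luminance channel so the marker colours survive
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    eq = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # contrast adjustment: stretch pixel values about mid-grey
    alpha = 1.0 + contrast_gain          # gain 0.0..1.0 -> alpha 1.0..2.0
    return cv2.convertScaleAbs(eq, alpha=alpha, beta=128.0 * (1.0 - alpha))
```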
(3) Determining the end information of the pushing rods of the three hydraulic supports:
In the image captured by the camera, the end information of the current hydraulic support and of the left and right adjacent hydraulic supports must be determined; three cases arise: camera deviated to the left, camera centered, and camera deviated to the right;
Segment each marker in the enhanced image and number them from top to bottom and from left to right;
According to the positions of the markers in the image, first divide them into an upper row and a lower row, then pair the markers from left to right; this pairs the camera image information with the pushing rod length and with the camera's pitch and horizontal rotation angles;
Confirm the center point via the intersection of the horizontal and vertical lines in each segmented marker, obtaining the coordinates of the centers of the marker pixels on the coordinate plane x0y, i.e., the coordinates of the two marker center points, 4 data in total;
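These steps might be realized as in the sketch below (OpenCV assumed); thresholding the red areas of the reflective stickers in HSV space and taking contour centroids in place of the horizontal/vertical line intersection are simplifying assumptions.

```python
# Sketch of marker segmentation, row splitting and left-to-right pairing.
import cv2
import numpy as np

def marker_center_pairs(bgr: np.ndarray):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0, so combine both ends of the hue range
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:                 # centroid of each segmented marker
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    centers.sort(key=lambda p: p[1])     # number top to bottom
    half = len(centers) // 2
    upper = sorted(centers[:half])       # upper row, left to right
    lower = sorted(centers[half:])       # lower row, left to right
    return list(zip(upper, lower))       # one (mark 1, mark 2) pair per rod
```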
(4) deep neural network training
Place the constructed deep neural network model in the GPU unit of the controller; the controller's GPU adopts an NVIDIA T4 graphics card. The controller periodically reads the image information of the camera, the position information of the camera, and the pushing distance of the pushing rod;
Extract the position information of the markers in the image from the camera image information, namely the coordinates of the centers of the marker pixels on the coordinate plane x0y; extract the horizontal angle and pitch angle of the camera from the camera angle information, 2 data in total;
Feed these 6 data into the deep neural network model as the input signal; the pushing distance of the pushing rod serves as the label of the model and is fed in as the output signal, and the deep neural network model is trained;
the main technical parameters of neural network training include:
Loss function: the log loss

L(Y, P(Y|X)) = −log P(Y|X)

where X is the neural network input signal and Y is the corresponding neural network label;
Training method: stochastic gradient descent;
Training termination criterion: the loss function value falls below 0.001.
Push progress detection
The controller reads the image information and the position information of the camera, extracts the coordinates of the centers of the marker pixels on the coordinate plane x0y from the image information (4 data in total), and extracts the horizontal angle and pitch angle of the camera from the angle information (2 data); these 6 data are fed into the trained deep neural network model as the input signal, and the model outputs the pushing distance of the pushing rod.
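As a sketch of this detection step (same assumptions about the input layout and output reduction as in the training sketch):

```python
# Sketch of the detection step: pack the 4 marker coordinates and the
# 2 camera angles into the network input and read out the pushing distance.
import torch

def detect_push_distance(net, mark1_xy, mark2_xy, pan_deg, tilt_deg) -> float:
    data = [*mark1_xy, *mark2_xy, pan_deg, tilt_deg]   # the 6 input data
    x = torch.tensor(data, dtype=torch.float32).reshape(1, 3, 2)
    net.eval()
    with torch.no_grad():
        return net(x).mean().item()                    # pushing distance of the pushing rod
```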
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that specific embodiments of the invention may be modified, and some technical features may be replaced by equivalents, without departing from the spirit of the technical solution of the present invention; such modifications and substitutions are intended to fall within the scope of the claims.

Claims (10)

1. A hydraulic support pushing progress detection method based on a monocular vision automatic label, characterized by comprising the following steps:
deployment detection system
Mount a camera on a hydraulic support, wherein the 0-degree line of the camera's horizontal plane is the center line of the hydraulic support top beam, the pitching plane of the camera is perpendicular to the top beam, and the 0-degree line of the pitching plane is the center line of the top beam;
The camera looks down on the pushing rod of the hydraulic support; mark 1 and mark 2 are arranged on the hydraulic support base and on the scraper conveyor crosshead at the two ends of the pushing rod, respectively;
constructing a deep neural network model
Construct a ten-layer deep residual network model composed of one-dimensional convolutional layers Conv1D(M, N), where M is the number of input channels and N the number of output channels; the first convolutional layer is Conv1D(3, 64), the second Conv1D(64, 128), the third Conv1D(128, 128), the fourth Conv1D(128, 64), the fifth Conv1D(64, 64), the sixth Conv1D(64, 32), the seventh Conv1D(32, 32), the eighth Conv1D(32, 4), the ninth Conv1D(4, 2), and the tenth Conv1D(2, 4);
neural network training based on automatic labeling
Place the constructed deep neural network model in the GPU unit of a controller; the controller periodically reads the image information of the camera, the position information of the camera, and the pushing distance of the pushing rod;
Extract the position information of the markers in the image from the camera image information, namely the coordinates of the centers of the marker pixels on the coordinate plane x0y, 4 data in total; extract the horizontal angle and pitch angle of the camera from the camera angle information, 2 data in total;
Feed these 6 data into the deep neural network model as the input signal; the pushing distance of the pushing rod serves as the label of the model and is fed in as the output signal, and the deep neural network model is trained;
during training, the loss function: log loss function
Figure DEST_PATH_IMAGE002
In the formula, X is a neural network input signal, and Y is a corresponding neural network label; the training method comprises the following steps: a random gradient descent method; end of training marker: the loss function value is less than 0.001;
push progress detection
The controller reads the image information and the position information of the camera, extracts the coordinates of the centers of the marker pixels on the coordinate plane x0y from the image information (4 data in total), and extracts the horizontal angle and pitch angle of the camera from the angle information (2 data); these 6 data are fed into the trained deep neural network model as the input signal, and the model outputs the pushing distance of the pushing rod.
2. The hydraulic support pushing progress detection method based on the monocular vision automatic label according to claim 1, characterized in that: mark 1 and mark 2 are 5 cm × 5 cm red-and-white reflective stickers.
3. The hydraulic support pushing progress detection method based on the monocular vision automatic label according to claim 1, characterized in that: the camera is mounted on a pan-tilt head and can record and output the rotation angles of its horizontal axis and pitch axis.
4. The hydraulic support pushing progress detection method based on the monocular vision automatic label according to claim 1, characterized in that, during neural network training based on the automatic label, the camera image information read by the controller first undergoes enhancement preprocessing of histogram equalization and contrast adjustment, with the contrast enhancement adjustment ranging from +0.0 to +1.0; and the image information of the pushing rods of the current hydraulic support and of the left and right adjacent hydraulic supports is obtained from the read camera image.
5. The hydraulic support pushing progress detection method based on the monocular vision automatic label according to claim 4, characterized in that, during neural network training based on the automatic label, the controller reads only the pushing distances of the pushing rods of the current hydraulic support and the left and right adjacent hydraulic supports, the pushing distances being obtained by sensor detection.
6. The hydraulic support pushing progress detection method based on the monocular vision automatic label according to claim 5, wherein the method for extracting the position information of the markers in the image from the camera image information is as follows:
In the image captured by the camera, the end information of the current hydraulic support and of the left and right adjacent hydraulic supports must be determined; three cases arise: camera deviated to the left, camera centered, and camera deviated to the right;
Segment each marker in the enhanced image and number them from top to bottom and from left to right;
According to the positions of the markers in the image, first divide them into an upper row and a lower row, then pair the markers from left to right;
Confirm the center point via the intersection of the horizontal and vertical lines in each segmented marker, obtaining the coordinates of the centers of the marker pixels on the coordinate plane x0y.
7. A hydraulic support pushing progress detection system based on a monocular vision automatic label, characterized in that: the detection system comprises a hydraulic support, a camera, and a controller provided with a GPU unit; the hydraulic support, the camera, and the controller are used to carry out the monocular vision automatic label based hydraulic support pushing progress detection method of any one of claims 1 to 6.
8. The monocular vision automatic label based hydraulic support pushing progress detection system of claim 7, characterized in that: the camera is a monocular servo camera providing RGB three-primary-color images at a resolution of 1920 × 1080.
9. The monocular vision automatic label based hydraulic support pushing progress detection system of claim 7, characterized in that: the data precision of the pushing distance of the pushing rod is 0.01 m.
10. The monocular vision automatic label based hydraulic support pushing progress detection system of claim 7, characterized in that: the GPU of the controller adopts an NVIDIA T4 GPU graphics card.
CN202110116865.4A 2021-01-28 2021-01-28 Hydraulic support pushing progress detection method and system based on monocular vision automatic label Active CN112686889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110116865.4A CN112686889B (en) 2021-01-28 2021-01-28 Hydraulic support pushing progress detection method and system based on monocular vision automatic label

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110116865.4A CN112686889B (en) 2021-01-28 2021-01-28 Hydraulic support pushing progress detection method and system based on monocular vision automatic label

Publications (2)

Publication Number Publication Date
CN112686889A CN112686889A (en) 2021-04-20
CN112686889B (en) 2022-03-25

Family

ID=75459434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110116865.4A Active CN112686889B (en) 2021-01-28 2021-01-28 Hydraulic support pushing progress detection method and system based on monocular vision automatic label

Country Status (1)

Country Link
CN (1) CN112686889B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838133A (en) * 2021-09-23 2021-12-24 上海商汤科技开发有限公司 State detection method and device, computer equipment and storage medium
CN114922667A (en) * 2022-05-31 2022-08-19 中煤科工开采研究院有限公司 Vision-based automatic straightening control system and method for hydraulic support of fully mechanized coal mining face
CN115075857B (en) * 2022-08-18 2022-11-04 中煤科工开采研究院有限公司 Quantitative pushing method and system for hydraulic support
CN115371560B (en) * 2022-09-13 2023-08-29 山东科技大学 Working face hydraulic support base group state sensing description method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012230092A (en) * 2011-04-27 2012-11-22 Zenrin Datacom Co Ltd Navigation system
CN102797489A (en) * 2012-08-08 2012-11-28 北京天地玛珂电液控制系统有限公司 Propping plate pressure graphical displaying and analyzing method based on thrusting degree of coal face
CN104747221A (en) * 2015-01-16 2015-07-01 北京煤科天玛自动化科技有限公司 Automatic support moving intelligent control method for hydraulic support for mining
CN105182820A (en) * 2015-08-25 2015-12-23 太原理工大学 Realization method of centralized control platform of large-scale mining fully-mechanized working-face equipment
CN106869929A (en) * 2017-01-20 2017-06-20 中国矿业大学 Hydraulic support based on image interferes protection system and method with winning machine cutting part
CN107905846A (en) * 2017-10-24 2018-04-13 北京天地玛珂电液控制系统有限公司 A kind of fully-mechanized mining working advance rate detecting system and method
CN108416428A (en) * 2018-02-28 2018-08-17 中国计量大学 A kind of robot visual orientation method based on convolutional neural networks
CN109272007A (en) * 2018-07-07 2019-01-25 河南理工大学 Setting load, last resistance recognition methods, storage medium based on deep neural network
CN111723448A (en) * 2020-06-29 2020-09-29 中国矿业大学(北京) Digital twin intelligent fully mechanized coal mining face hydraulic support straightness monitoring method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9700219B2 (en) * 2013-10-17 2017-07-11 Siemens Healthcare Gmbh Method and system for machine learning based assessment of fractional flow reserve
US11106941B2 (en) * 2018-07-16 2021-08-31 Accel Robotics Corporation System having a bar of relocatable distance sensors that detect stock changes in a storage area
CN109284686A (en) * 2018-08-23 2019-01-29 国网山西省电力公司计量中心 A kind of label identification method that camera automatic pitching is taken pictures

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012230092A (en) * 2011-04-27 2012-11-22 Zenrin Datacom Co Ltd Navigation system
CN102797489A (en) * 2012-08-08 2012-11-28 北京天地玛珂电液控制系统有限公司 Propping plate pressure graphical displaying and analyzing method based on thrusting degree of coal face
CN104747221A (en) * 2015-01-16 2015-07-01 北京煤科天玛自动化科技有限公司 Automatic support moving intelligent control method for hydraulic support for mining
CN105182820A (en) * 2015-08-25 2015-12-23 太原理工大学 Realization method of centralized control platform of large-scale mining fully-mechanized working-face equipment
CN106869929A (en) * 2017-01-20 2017-06-20 中国矿业大学 Hydraulic support based on image interferes protection system and method with winning machine cutting part
CN107905846A (en) * 2017-10-24 2018-04-13 北京天地玛珂电液控制系统有限公司 A kind of fully-mechanized mining working advance rate detecting system and method
CN108416428A (en) * 2018-02-28 2018-08-17 中国计量大学 A kind of robot visual orientation method based on convolutional neural networks
CN109272007A (en) * 2018-07-07 2019-01-25 河南理工大学 Setting load, last resistance recognition methods, storage medium based on deep neural network
CN111723448A (en) * 2020-06-29 2020-09-29 中国矿业大学(北京) Digital twin intelligent fully mechanized coal mining face hydraulic support straightness monitoring method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Optimization and Practice of Support Working Resistance in Fully-Mechanized Top Coal Caving in Shallow Thick Seam; Peng Huang et al.; Energies; 2017-09-14; vol. 2017; pp. 1-12 *
Kinematic analysis of hydraulic support pose based on a BP neural network; Li Haifeng; Coal Engineering (《煤炭工程》); 2018-09-20; vol. 50, no. 9; pp. 117-120 *
Research on a straightness monitoring method for the scraper conveyor of an intelligent fully mechanized working face; Zhang Fan et al.; Coal Science and Technology (《煤炭科学技术》); 2020-10-12; vol. 2020; pp. 1-11 *

Also Published As

Publication number Publication date
CN112686889A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN112686889B (en) Hydraulic support pushing progress detection method and system based on monocular vision automatic label
CN110726726A (en) Quantitative detection method and system for tunnel forming quality and defects thereof
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
CN106978774B (en) A kind of road surface pit slot automatic testing method
CN100458359C (en) Small-displacement measuring system in long-distance plane
CN104142145B (en) Big cross section rectangular top pipe method for automatic measurement and device
CN102721365A (en) Method and device for high-speed and accurate measurement of tunnel section
CN110736446B (en) Pose identification system and method for cantilever type heading machine
CN113280798A (en) Geometric correction method for vehicle-mounted scanning point cloud under tunnel GNSS rejection environment
CN106918312B (en) Pavement marking peeling area detection device and method based on mechanical vision
CN109577382A (en) Pile crown analysis system and method, the storage medium for being stored with pile crown analysis program
JP6746250B2 (en) Drilling system
US20230105991A1 (en) Method of imaging a wind turbine rotor blade
CN112348952A (en) Three-dimensional scene construction method for multi-source geographic information data fusion in hard mountainous area
CN105627931A (en) Pantograph offset detection method and system
CN110702343B (en) Deflection measurement system and method based on stereoscopic vision
CN114803386A (en) Conveying belt longitudinal tearing detection system and method based on binocular line laser camera
CN109931889B (en) Deviation detection system and method based on image recognition technology
CN105303564A (en) Tower type crane load stereo pendulum angle vision detection method
CN112161571B (en) Low-data-volume binocular vision coal mining machine positioning and pose detection system and method
CN111273314A (en) Point cloud data processing method and device and storage medium
CN107621229A (en) Real-time railroad track width measure system and method based on face battle array black and white camera
CN112964192A (en) Engineering measurement online calibration method and system based on image video
CN112033293B (en) Method for automatically tracking effective boundary characteristic points of duct piece in machine vision shield tail clearance detection
CN111550273A (en) Hydraulic support leveling and straightening method based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220729

Address after: 450016 No. 167, 9th Street, Zhengzhou area (Economic Development Zone), Henan pilot Free Trade Zone, Zhengzhou City, Henan Province

Patentee after: HYDRAULIC & ELECTRIC CONTROL EQUIPMENT CO LTD ZHENGZHOU COAL MINING MACHINERY GROUP Co.,Ltd.

Address before: No.167, 9th Street, Zhengzhou Economic and Technological Development Zone, Henan Province, 450000

Patentee before: ZHENGZHOU COAL MINING MACHINERY GROUP Co.,Ltd.

CP03 Change of name, title or address

Address after: No.167, 9th Street, Zhengzhou Economic and Technological Development Zone, Henan Province, 450016

Patentee after: Zhengzhou Hengda Intelligent Control Technology Co.,Ltd.

Address before: 450016 No. 167, 9th Street, Zhengzhou area (Economic Development Zone), Henan pilot Free Trade Zone, Zhengzhou City, Henan Province

Patentee before: HYDRAULIC & ELECTRIC CONTROL EQUIPMENT CO LTD ZHENGZHOU COAL MINING MACHINERY GROUP Co.,Ltd.
