CN110727223A - Ring-type track intelligent inspection robot based on underground coal face and application thereof

Info

Publication number
CN110727223A
CN110727223A
Authority
CN
China
Prior art keywords
image
module
camera
inspection robot
humidity
Prior art date
Legal status
Pending
Application number
CN201911008206.8A
Other languages
Chinese (zh)
Inventor
李洪安
谢谦
李茹
刘航舵
彭静
Current Assignee
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Science and Technology filed Critical Xian University of Science and Technology
Priority to CN201911008206.8A
Publication of CN110727223A
Status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/04Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0428Safety, monitoring
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/24Pc safety
    • G05B2219/24215Scada supervisory control and data acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details

Abstract

The invention discloses a ring-type track intelligent inspection robot based on an underground coal face and its application. The intelligent inspection robot comprises an image acquisition module, a temperature and humidity detection module, a harmful gas detection module, a wireless communication module, an alarm module, an LED display module, a power module, a control module and an automatic control module. The robot has functions such as autonomous movement, positioning, image acquisition, gas sensing, temperature sensing and data transmission, and can detect hydraulic props, harmful gases, temperature and humidity, and the like. Abnormal points are found by analyzing the images, while the sensors sense the harmful gas concentration and the underground temperature and humidity; when an index reaches its early-warning state, the early-warning information is transmitted in time to the dispatching room through the remote transmission device, so that dispatching and control personnel can take timely measures to avoid disasters.

Description

Ring-type track intelligent inspection robot based on underground coal face and application thereof
Technical Field
The invention relates to the technical field of mine intelligent robots, in particular to an underground coal face-based ring-type track intelligent inspection robot and application thereof.
Background
With the rapid development of artificial intelligence, every industry is being drawn in. Applying artificial intelligence technology to mine construction gives the mine human-like thinking, reaction and action capabilities, realizes comprehensive information integration and response among objects and between objects and people, and enables the mine system to actively sense, analyze and quickly make correct decisions, reducing human factors to a minimum. The maturity and fusion of technologies such as the new-generation internet, cloud computing, intelligent sensing, communication, remote sensing, satellite positioning and geographic information systems realize digital and intelligent management and feedback mechanisms, and provide a technical basis for the development of smart mines.
As the primary production site of a mine, the coal face is characterized by a narrow working space, a large amount of mechanical equipment, poor visibility and high temperature, and it is where mine accidents occur most frequently. Roof accidents, water inrush accidents, coal spontaneous combustion, gas explosions, coal dust explosions and occupational diseases all account for a considerable proportion of accidents on the coal mining face. Risk evaluation of the coal face has therefore become a problem that coal mine safety management urgently needs to solve.
At present, during coal mining the fully mechanized face vibrates severely under the influence of many factors such as the coal seam environment, geological conditions and rib spalling. The coal mine safety regulations require the supports to be arranged in a straight line, with a straightness deviation of no more than ±50 mm over a 50 m measuring line, yet the hydraulic supports constantly tilt or even topple, and straightness is difficult to guarantee. As a result the fully mechanized face has to be stopped for realignment every few cutting cycles, which seriously affects its working efficiency and, in particular, creates potential hazards for safe production. To investigate these potential hazards, an analysis of hydraulic support operation in the harsh underground environment shows that the equipment is affected by many factors and its position drifts after long-term operation; the deviations fall into two types, front-to-back misalignment and loss of perpendicularity to the scraper conveyor, and in general they can be corrected through detection of the pose and straightness of the hydraulic support bases. Most traditional methods for detecting hydraulic support pose and straightness either measure the relative poses between the components of a support with angle and displacement sensors, or judge the relative distances between supports with laser, infrared or electromagnetic sensors to achieve local pose detection. These traditional methods are costly to implement and can hardly achieve full pose detection between two objects; in the coal industry, identifying mechanical equipment and locating faulty equipment are of great significance to safe coal production.
Coal industry monitoring video images have the following characteristics: ① low illumination — although lighting is installed underground, illumination is clearly insufficient compared with imaging under natural light; ② uneven illumination — within the same monitored scene the light is strong near the light source, even producing specular reflections that whiten the image, while far from the light source illumination is insufficient and objects are hard to distinguish; ③ almost no color — apart from equipment with obvious colors, the images consist mainly of black, gray and white, so no color information can be exploited during processing. Because of this special and complex underground environment, the video quality is poor and the images have low discriminability.
Disclosure of Invention
Aiming at the existing problems, the invention aims to provide the ring-type track intelligent inspection robot based on the underground coal face and the application thereof.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the intelligent ring-type track inspection robot based on the underground coal face comprises an image acquisition module, a temperature and humidity detection module, a harmful gas detection module, a wireless communication module, an alarm module, an LED display module, a power module, a control module and an automatic control module; wherein,
the image acquisition module acquires a detected target through a camera, the acquired image is subjected to sharpening processing through a system platform, and finally a fault target is verified and positioned;
the temperature and humidity detection module comprises a temperature sensor and a humidity sensor; information from the temperature sensor and the humidity sensor is collected by the control module, the data are sent to the system platform through the wireless communication module and are calculated and processed by the program set in the control module to obtain the underground temperature and humidity values, which are then displayed in real time on the LED display module or, when necessary, signalled promptly by the alarm module;
the harmful gas detection module comprises a gas sensor, after the methane concentration and the carbon monoxide concentration are collected by the gas sensor, data are sent to the system platform through the wireless communication module, and are compared through a program set by the control module, and if the methane concentration and the carbon monoxide concentration exceed an alarm limit, an alarm module gives an alarm;
the power module provides power output for the intelligent inspection robot;
the power module comprises an electric storage device and a charging hole, and the electric storage device is electrically connected with the charging pile through the charging hole to realize autonomous charging;
the automatic control module comprises a machine body steering device, and the machine body always faces the direction of the hydraulic support through the machine body steering device.
Preferably, the temperature and humidity detection module further comprises a temperature and humidity acquisition circuit, a communication circuit, a power circuit, a display circuit and an alarm circuit, underground temperature and humidity parameters are acquired through the temperature and humidity acquisition circuit respectively, acquired information is processed by the control module and then transmitted to the wireless communication module through the communication circuit, transmitted to the LED display module through the display circuit, transmitted to the alarm module through the alarm circuit and connected with the power module through the power circuit.
Preferably, the image acquisition and sharpening process comprises an image preprocessing unit, an image feature extraction unit, a pose detection unit and a straightness determination unit; wherein,
the image preprocessing unit is used for eliminating irrelevant information in the acquired image and carrying out noise reduction and enhancement processing on the image;
the image feature extraction unit is used for accurately extracting the feature points of the target object in the image through the coordinates of n feature points on the target object in a camera coordinate system;
the pose detection unit is used for determining the coordinates of n characteristic points on the target object under a camera coordinate system, and finally calculating the motion position and pose parameters of the target according to calibrated internal and external parameters of the camera and coordinate values of the characteristic points under a world coordinate system;
and the straightness determining unit is used for performing coordinate fitting in the three-axis direction by using the same characteristic point on each hydraulic support to obtain the straightness of the hydraulic support in multiple directions.
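As an illustrative, non-limiting sketch of how these four units could be chained in software, a Python outline is given below; the module split follows the description above, but every function name, library call and parameter value is an assumption rather than part of the invention.

```python
# Hypothetical outline of the four image-processing units; OpenCV/NumPy are
# assumed, and all names and parameter values are illustrative only.
import cv2
import numpy as np

def preprocess(img):
    """Image preprocessing unit: noise reduction (bilateral filter) on the frame."""
    return cv2.bilateralFilter(img, 9, 75, 75)

def extract_features(img):
    """Image feature extraction unit: SIFT keypoints/descriptors of the target."""
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(img, None)

def detect_pose(object_pts, image_pts, camera_matrix, dist_coeffs):
    """Pose detection unit: solve the target pose from n feature-point correspondences."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, dist_coeffs)
    return ok, rvec, tvec

def straightness(same_point_per_support):
    """Straightness determination unit: deviation of each support from a fitted line."""
    coords = np.asarray(same_point_per_support, dtype=float)  # one coordinate per support
    idx = np.arange(len(coords))
    slope, intercept = np.polyfit(idx, coords, 1)
    return coords - (slope * idx + intercept)
```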
The application of the ring-type track intelligent inspection robot based on the underground coal face comprises the following steps:
s1: image acquisition and processing
The intelligent inspection robot is placed underground, a power switch is turned on, inspection operation is started, an underground fully mechanized mining face is subjected to image acquisition through a camera, acquired image information is transmitted to a system platform to be processed, stored and displayed, and image processing comprises image preprocessing, image feature extraction, pose detection calculation and straightness calculation;
a. the image preprocessing comprises image noise reduction and image enhancement, wherein the image noise reduction processing is firstly carried out, and then the image enhancement processing is carried out;
image denoising: performing noise reduction processing on the image by adopting a bilateral filter;
image enhancement: enhancing the fully mechanized coal mining face image through an MSR algorithm;
b. feature extraction of images
Coordinates of n feature points on the target object are determined in the camera coordinate system, for which the feature points of the target object must be accurately extracted from the image; from analysis of the characteristics of underground images and the underground real-time requirements, SIFT features are used to characterize the target object;
c. pose detection calculation
Determining coordinates of n feature points on the target object in a camera coordinate system, and finally calculating motion positions and pose parameters of the target according to calibrated internal and external parameters of the camera and coordinate values of the feature points in a world coordinate system;
d. straightness calculation: any one feature point is selected from the spatial position coordinates of the hydraulic supports obtained above to calculate the transverse and longitudinal straightness of the hydraulic supports; coordinate fitting in the three axis directions is carried out using the same feature point on each hydraulic support to obtain the straightness of the supports in multiple directions, and finally experimental simulation is carried out on the values;
s2: harmful gas detection
The intelligent inspection robot is provided with a harmful gas detection module, underground harmful gas data are collected, various parameter thresholds stored in a control module are compared, if the parameters exceed alarm limits, an audible and visual alarm is given out, and meanwhile, the wireless communication module sends data to a system platform;
the harmful gas in the mine is harmful gas to human bodies in the mine, and the harmful gas which is gushed out from coal rocks in the underground mining process is generally called gas. Gas can be burnt or exploded, and the gas explosion is one of the main disasters of coal mines. The main components of the gas are CO and H2S、CH4Isohydrocarbon compound of which CH4Is the main component of gas, and accounts for more than 90 percent of the mixed gas. In the process of underground operation, gas is often sprayed from coal rock cracks, and CH4 belongs to flammable and explosive gas, so that the gas can be rapidly combusted when meeting a fire source in a complex underground environment of a coal mine, even gas explosion can occur, and the life safety and property safety of underground workers are threatened to a certain degree. Tables 4-4 list the explosive range of harmful gases downhole.
Table 4-4: Explosive ranges of the gases tested [table reproduced only as an image]
When a flammable and explosive gas in a mine lies between its LEL (lower explosive limit) and UEL (upper explosive limit), contact with an open flame can cause an explosion and a serious disaster. Methane is colorless and odorless and not easily noticed; as its concentration rises it readily causes suffocation accidents, and a concentration above 40% can kill a person almost immediately. The explosive range of methane is about 5.3%-14%: below the lower limit or above the upper limit it may burn when it meets fire but will not explode, whereas within this range it will explode in air on contact with an ignition source.
S3: temperature and humidity detection
The temperature and humidity detection module is arranged on the intelligent inspection robot, information of the temperature and humidity detection module is collected, the collected data is operated and processed through the control module, underground temperature values and humidity values are obtained, data are displayed in real time through the driving display circuit, and meanwhile, the data are sent to the system platform through the wireless communication module.
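A minimal sketch of the handling loop described for the temperature and humidity module is given below; the threshold values and the display format are illustrative assumptions, not values specified in this application.

```python
# Illustrative only: alarm thresholds and display format are assumptions.
TEMP_ALARM_C = 30.0        # assumed early-warning temperature, deg C
HUMIDITY_ALARM_PCT = 95.0  # assumed early-warning relative humidity, %

def handle_sample(temp_c: float, humidity_pct: float) -> dict:
    """Build the LED display text, the alarm flag and the payload that would be
    forwarded to the system platform over the wireless communication module."""
    alarm = temp_c >= TEMP_ALARM_C or humidity_pct >= HUMIDITY_ALARM_PCT
    return {
        "display": f"T={temp_c:.1f}C RH={humidity_pct:.1f}%",
        "alarm": alarm,
        "payload": {"temperature": temp_c, "humidity": humidity_pct},
    }

print(handle_sample(26.4, 88.0))
```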
Preferably, the bilateral filtering adopted for image denoising in step S1 adds spatial information and gray-level similarity information between two pixels to the setting of the weight; the weighting coefficient is the product of two factors, one determined by the spatial distance between the pixels and the other by the difference between their brightness values. The bilateral filter is defined as in equations (4-5) and (4-6):

$$BF[I]_p=\frac{1}{W_p}\sum_{q\in S} G_{\sigma_d}\big(\lVert p-q\rVert\big)\,G_{\sigma_r}\big(|I(p)-I(q)|\big)\,I(q) \qquad \text{(4-5)}$$

$$W_p=\sum_{q\in S} G_{\sigma_d}\big(\lVert p-q\rVert\big)\,G_{\sigma_r}\big(|I(p)-I(q)|\big) \qquad \text{(4-6)}$$

where $W_p$ is the normalization factor; the parameters $\sigma_d$ and $\sigma_r$ control the amount of noise removed from image $I$, and equation (4-5) represents a normalized weighted average of the pixels in the neighborhood $S$. The spatial proximity function $G_{\sigma_d}$ decreases as the Euclidean distance between the pixel point and the center point increases, and the gray-level similarity (range) function $G_{\sigma_r}$ decreases as the difference between the brightness values of the two pixels increases. Both are Gaussian functions of the distance between the two pixels, defined as in equations (4-7) and (4-8):

$$G_{\sigma_d}\big(d(p,q)\big)=\exp\!\left(-\frac{d(p,q)^{2}}{2\sigma_d^{2}}\right) \qquad \text{(4-7)}$$

$$G_{\sigma_r}\big(\delta(I(p),I(q))\big)=\exp\!\left(-\frac{\delta(I(p),I(q))^{2}}{2\sigma_r^{2}}\right) \qquad \text{(4-8)}$$

where $d(p,q)$ and $\delta(I(p),I(q))$ are the spatial distance and the gray-level difference between the two pixel points, and $\sigma_d$ and $\sigma_r$ are the standard deviations of the Gaussian functions.
Preferably, the MSR algorithm used for image enhancement in step S1 is as follows:
according to Land's Retinex theory, the ideal image I(x, y) is written as:
I(x,y)=R(x,y)×L(x,y) (4-14)
i.e. the image I(x, y) can be expressed as the product of the ambient brightness function L(x, y) and the scene reflection function R(x, y); the MSR enhancement method is then described by:

$$R_i(x,y)=\sum_{k=1}^{K} W_k\left\{\log I_i(x,y)-\log\big[F_k(x,y)*I_i(x,y)\big]\right\} \qquad \text{(4-15)}$$

where i denotes the i-th spectral band and N the number of spectral bands (N = 1 for a grayscale image, N = 3 for a color image); $R_i(x,y)$ is the output image function, $I_i(x,y)$ the input image function, * the convolution operation and log the natural logarithm; $F_k(x,y)$ is the environment (surround) function, for which various choices are possible and a Gaussian function is selected here; k indexes the scales, and the standard deviation $\sigma_k$ of $F_k(x,y)$ controls the scale of the Gaussian; $W_k$ is the weight coefficient associated with $F_k$. The MSR algorithm selects three scales of different levels according to the needs of scene processing and then fuses them with different weight coefficients to realize image enhancement.
Preferably, the image enhancement further comprises negative pixel correction: the negative pixels in the image are corrected by a gain/offset method, and the corrected gray values are mapped into the gray range of the display according to equation (4-17):

$$R_0(x,y)=G\times R_i(x,y)+\mathrm{offset} \qquad \text{(4-16)}$$

$$R'(x,y)=255\times\frac{R_0(x,y)-\min R_0}{\max R_0-\min R_0} \qquad \text{(4-17)}$$

where $R_i(x,y)$ and $R_0(x,y)$ are the input and output gray values of the image, G is the gain coefficient, and offset is the offset.
Preferably, the MSR algorithm step in the image enhancement in step S1 is:
① read in the video frame image I and take the logarithm of its image function;
② calculate the filter coefficients of the Gaussian filter for the different standard deviations σk;
③ select 3 scales σk — 15, 80 and 250 are chosen in this implementation — and convolve the image with the three different Gaussian filter kernels;
④ compute the weighted average of the results obtained at the three scales according to equation (4-15), with all weights set to 1/3, and split the image into the illumination component L and the reflection component R;
⑤ adjust the reflection component R according to equations (4-16) and (4-17), then apply histogram equalization to R to obtain a new reflection component R′;
⑥ add the new reflection component R′ to the illumination component L to obtain a new image I′, and then take the exponential to obtain the enhanced image.
Preferably, in the pose detection calculation in step S1, $O_c\text{-}x_cy_cz_c$ is established as the camera coordinate system and O-xy as the image coordinate system; $P_1$–$P_4$ are 4 coplanar feature points whose coordinates in the image coordinate system are $C_1$–$C_4$. From the geometric relationship one can obtain:

[equation given only as an image in the original: relation among the areas $S_1$–$S_4$, the distance h and the feature-point distances]

where $S_1$ is the area of $\triangle P_1P_2P_3$, $S_2$ the area of $\triangle P_1P_2P_4$, $S_3$ the area of $\triangle P_1P_3P_4$, $S_4$ the area of $\triangle P_2P_3P_4$, and h is the distance from the camera optical center to the plane formed by the points $P_i$ (i = 1, 2, 3, 4). At the same time,

$$M_i=\sqrt{x_i^{2}+y_i^{2}+f^{2}},\qquad P_i=\frac{d_i}{M_i}\,(x_i,\;y_i,\;f)$$

where $(x_i,y_i)$ are the coordinates of $C_i$, $d_i$ is the distance from the camera optical center $O_c$ to the point $P_i$, $M_i$ is the distance from $O_c$ to $C_i$, and f is the effective focal length of the camera; $(x_i,y_i)$, f and $M_i$ are known quantities obtained through coordinate system conversion and calibration of the camera's internal parameters, and once $d_i$ is solved the coordinates of the feature point $P_i$ in the camera coordinate system are obtained. From the camera coordinates, the image coordinates and the target object coordinates:

$$\begin{bmatrix}X_c\\ Y_c\\ Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\ Y_w\\ Z_w\end{bmatrix}+T$$

where the feature point P has coordinates $(X_w,Y_w,Z_w)$ in the measured target object coordinate system $O_w\text{-}x_wy_wz_w$ and coordinates $(X_c,Y_c,Z_c)$ in the camera coordinate system $O_c\text{-}x_cy_cz_c$; R is the rotation matrix and T the translation vector;
the rotation matrix and the translation vector determine the orientation and position of the camera relative to the coordinate system of the measured target object; the rotation matrix is an orthogonal matrix represented by 3 rotation angles, namely the pitch angle α, the yaw angle β and the roll angle γ; solving these equations by three-dimensional coordinate space conversion with a non-iterative method yields the rotation matrix R and the translation vector T, i.e. the pose of the camera relative to the measured target object.
Preferably, from the spatial position coordinates $P_i=(X_i,Y_i,Z_i)$ of the hydraulic support obtained by the pose calculation, any one feature point is selected to calculate the transverse and longitudinal straightness of the hydraulic support; coordinate fitting in the three axis directions is carried out using the same feature point on each hydraulic support to obtain the straightness of the supports in multiple directions, and finally experimental simulation is carried out on the numerical data.
The invention has the beneficial effects that:
the robot has the functions of autonomous movement, positioning, image acquisition, gas sensing, temperature sensing, data transmission and the like, and can realize detection of hydraulic struts, harmful gases, temperature and humidity and the like. The abnormal point is found through analyzing the image, the sensor senses the concentration of the harmful gas and the underground temperature and humidity, when a certain index reaches an early warning state, the early warning information can be timely transmitted to a dispatching room through a remote transmission device, and dispatching control personnel can timely take measures to avoid disasters.
Drawings
FIG. 1 is a schematic view of the robot of the present invention;
FIG. 2 is a block diagram of the platform design of the system of the present invention;
FIG. 3a is a schematic view of a power storage device according to the present invention;
FIG. 3b is a flow chart of the automatic charging process of the inspection robot according to the present invention;
FIG. 4a is a schematic view of the inspection robot after steering of the body of the inspection robot according to the present invention;
FIG. 4b is a block diagram of the automatic body rotation process of the inspection robot of the present invention;
FIG. 5 is a block diagram of the process of image acquisition of the inspection robot of the present invention;
fig. 6a is an original image of a fully mechanized mining face of a certain mine in the implementation process of the inspection robot;
fig. 6b is a gray level histogram corresponding to an original image of a fully mechanized mining face of a certain mine in the implementation process of the inspection robot;
FIGS. 7a, 7b, 7c, and 7d are graphs of image denoising results using median filtering, mean filtering, Gaussian filtering, and bilateral filtering, respectively;
FIG. 8 is a MSR algorithm framework diagram;
FIG. 9 is a graph of a pose detection algorithm of the present invention;
fig. 10 is a functional diagram of a harmful gas detection module according to the present invention.
FIG. 11 is a diagram of MQ-5 structure;
FIG. 12 shows a gas concentration detection circuit;
FIG. 13 is a flow chart of gas detection;
FIG. 14 is a schematic diagram of the SHT11 circuit;
FIG. 15 is a flow chart of a temperature and humidity monitoring module;
fig. 16 is a Zigbee network topology;
fig. 17 is an alarm flow chart.
In the figures: image acquisition module 1, temperature and humidity detection module 2, harmful gas detection module 3, gas sensor 31, wireless communication module 4, alarm module 5, LED display module 6, power switch 7, power module 8, control module 9, electric storage device 10, charging hole 11, machine body 12, machine body and power supply combination parts 13 and 14.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following further describes the technical solution of the present invention with reference to the drawings and the embodiments.
Embodiment: referring to figures 1 and 10, a ring-type track intelligent inspection robot based on the underground coal face and its application are disclosed. The ring-type track intelligent inspection robot based on the underground coal face comprises an image acquisition module, a temperature and humidity detection module, a harmful gas detection module, a wireless communication module, an alarm module, an LED display module, a power module, a control module and an automatic control module; wherein,
in the data feedback part, a Zigbee wireless communication technology is adopted, and the Zigbee has the characteristics of low power consumption and low cost, and has strong networking capability and excellent safety. The Zigbee is composed of a plurality of communication nodes, each communication node of the system can automatically form a network in a certain communication range, and any two nodes in the ad hoc network can realize communication in forms of characters, pictures, voice and the like.
The acquisition node comprises various sensors, Zigbee chips such as CC2530, a power supply module and a microprocessor. The sensor module comprises a gas sensor, a temperature and humidity sensor and image acquisition equipment. Fig. 16 is a diagram illustrating a Zigbee network topology.
Firstly, a Zigbee wireless module configures parameters to establish a Zigbee network, becomes a coordinator of the whole system and is used as a centralized and distributed place for information exchange of the whole network. After the Zigbee network is built, parameter configuration of terminal nodes is carried out on each subsystem node, and when the terminal nodes detect the network built by the coordinator, the terminal nodes can actively join the network and send the current state of the terminal nodes and the acquired data information to the control terminal.
The control terminal's information processing chip is an STC89C52. The coordinator sends the information collected from each terminal node to the single-chip microcomputer over the serial port for centralized processing, and after processing it issues commands centrally to the lower nodes, sending the commands to be executed to the terminal nodes through the coordinator.
The communication protocol in the invention is applicable to all of the wireless communication modules produced and realizes data transmission between the modules. The Zigbee protocol stack is built on the IEEE 802.15.4 standard, which defines the MAC and PHY layers. The protocol standardizes the parameters and hardware resources in the wireless module, so that resources in the module can be accessed and controlled in a uniform way; the serial-port control protocol provides the user with a control and access channel to the module, allowing user equipment to control wireless communication through the serial port to complete transmission, parameter access and so on.
Designing an alarm module: when the collected pose, harmful gas concentration data and temperature and humidity data of the hydraulic support reach an early warning value, an alarm can be generated through a buzzer, and meanwhile alarm information (including a robot running position, a pose alarm picture of the hydraulic support, harmful gas alarm concentration and temperature and humidity alarm data) is transmitted to a console through Zigbee. The console staff can timely take effective measures to prevent disasters through the alarm information. The alarm flow chart is shown in fig. 17.
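The alarm decision itself can be sketched as a simple threshold comparison; the limits and field names below are illustrative assumptions and are not taken from this application.

```python
# Illustrative early-warning limits (assumed values, not from the patent).
WARN_LIMITS = {"ch4_pct": 1.0, "co_ppm": 24.0, "pose_deviation_mm": 50.0, "temp_c": 30.0}

def check_alarms(sample: dict) -> list:
    """Return one alarm record per index that has reached its early-warning value;
    each record carries the data that would be buzzed locally and forwarded to the
    console through the Zigbee coordinator."""
    alarms = []
    for index, limit in WARN_LIMITS.items():
        if index in sample and sample[index] >= limit:
            alarms.append({"index": index,
                           "value": sample[index],
                           "limit": limit,
                           "robot_position": sample.get("robot_position")})
    return alarms

sample = {"ch4_pct": 1.2, "co_ppm": 10.0, "pose_deviation_mm": 12.0,
          "temp_c": 27.5, "robot_position": "support #37"}
print(check_alarms(sample))
```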
The image acquisition module acquires a detected target through a camera, the acquired image is subjected to sharpening processing through a system platform, and finally a fault target is verified and positioned;
the temperature and humidity detection module comprises a temperature sensor and a humidity sensor; information from the temperature sensor and the humidity sensor is collected by the control module, the data are sent to the system platform through the wireless communication module and are calculated and processed by the program set in the control module to obtain the underground temperature and humidity values, which are then displayed in real time on the LED display module or, when necessary, signalled promptly by the alarm module;
the harmful gas detection module comprises a gas sensor, after the methane concentration and the carbon monoxide concentration are collected by the gas sensor, data are sent to the system platform through the wireless communication module, and are compared through a program set by the control module, and if the methane concentration and the carbon monoxide concentration exceed an alarm limit, an alarm module gives an alarm;
the power module provides power output for the intelligent inspection robot;
the power module comprises an electric storage device and a charging hole, and the electric storage device is electrically connected with the charging pile through the charging hole to realize autonomous charging;
To facilitate management of the coal face, the inspection robot is designed with an automatic charging function: each time the robot returns to the inspection starting point it judges whether its remaining charge is sufficient to continue the inspection, and if not it enters the charging pile for autonomous charging via its electric power storage device.
As shown in fig. 3a and fig. 3b, which show the electric power storage device and the automatic charging flow of the coal face intelligent inspection robot, the procedure comprises the following steps:
Step 1: initialize the path to the inspection starting point and judge whether the remaining charge is sufficient;
Step 2: if the charge is sufficient, continue the inspection; otherwise go to Step 3;
Step 3: the charge is insufficient, so prepare to charge;
Step 4: judge whether the charging position has been reached; if not, go to Step 5; if so, begin charging;
Step 5: adjust the robot's position to the charging position and return to Step 4;
Step 6: judge whether charging is finished; if so, return to Step 1; if not, continue charging until charging ends and then return to Step 1.
In this way the robot completes the automatic underground charging procedure, reducing the need for operators to recharge it when the battery runs low.
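A minimal control-loop sketch of Steps 1-6 follows; the battery threshold, the positioning check and the charging interface are placeholders introduced only for illustration.

```python
# Illustrative sketch of the self-charging flow in Fig. 3b; all thresholds and
# hooks are assumptions.
BATTERY_OK_PCT = 30.0   # assumed minimum charge needed to continue inspecting

def charging_cycle(battery_pct, at_charging_pile, move_to_pile, charge_step):
    """One pass of the flow: decide between inspecting and autonomous charging."""
    if battery_pct() >= BATTERY_OK_PCT:              # Steps 1-2
        return "continue inspection"
    while not at_charging_pile():                    # Steps 3-5
        move_to_pile()
    while battery_pct() < 100.0:                     # Step 6
        charge_step()
    return "charging finished, return to inspection start point"

# Tiny simulated run
level = {"pct": 12.0}
pos = {"at_pile": False}
print(charging_cycle(
    battery_pct=lambda: level["pct"],
    at_charging_pile=lambda: pos["at_pile"],
    move_to_pile=lambda: pos.update(at_pile=True),
    charge_step=lambda: level.update(pct=min(100.0, level["pct"] + 20.0)),
))
```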
Referring to fig. 4a and 4b, the automatic control module comprises a body steering device, and the body is always faced to the direction of the hydraulic support through the body steering device.
Further, referring to fig. 5, fig. 6a and fig. 6b, when the robot turns through a curve the LED panel inevitably faces a direction that is inconvenient for the operator to observe; for this reason the machine body is designed to always face the hydraulic support (the walkway side), which is convenient for the operator or inspection personnel to view;
step 1: initializing the orientation of the body of the inspection robot;
step 2: judging whether to carry out straight line driving, if so, continuing to inspect, and otherwise, adjusting the machine body;
step 3: judging whether the machine body faces the hydraulic support or not, if so, continuing to inspect, and otherwise, adjusting the machine body;
This function keeps the LED display screen facing a direction that is convenient for the operator to view, so that underground personnel can check the data collected by the inspection robot at any time.
Referring to FIG. 2, a block diagram of a system platform design is shown, comprising:
1) a presentation layer: various information of the system is directly presented to the control personnel;
2) and (4) a service layer: the modular design is adopted, and the module is expanded according to the requirement of inspection;
3) and (3) a data layer: the functions of data acquisition, data storage, data analysis and the like are realized;
4) monitoring layer: and the monitoring system is responsible for monitoring the machine condition, the temperature and the gas concentration of the coal face and sending early warning information to a console.
The inspection robot of the invention has the following operation steps:
firstly, image acquisition
Referring to fig. 5, for the image acquisition flow chart of the inspection robot,
the hydraulic support is a measured target, and attitude calculation is carried out by using a vision measurement algorithm based on a plurality of coplanar characteristic points. The target to be measured is collected by a camera, and the collected image is processed by a computer. The image processing comprises the main steps of image preprocessing, image feature extraction, pose resolving and straightness resolving.
In the technical scheme of the invention, before the pose detection of the hydraulic support of the machine vision is carried out, the collected image needs to be preprocessed to improve the quality of the image, eliminate irrelevant information, and enhance the detectability of a target area and the accuracy of judging a fault target. Because the underground environment of the mine is severe, the light is poor, the illumination is low, and meanwhile, the resolution of the image of the industrial monitoring image is unclear due to the interference of various noise information in signal transmission, the image is preprocessed before target detection. First, a general image preprocessing method including image noise reduction and image enhancement will be described. Secondly, image feature extraction is carried out, the camera model and extracted target feature points are used for resolving the relative poses of the target and the camera, then any one feature point is used for resolving the transverse and longitudinal straightness of the hydraulic support, and finally verification is carried out to locate the fault target.
(1) Image noise reduction
The purpose of image noise reduction is to reduce the effect of noise on target detection and thus reduce the probability of target false positives.
Preprocessing the image acquired by the robot by respectively adopting a neighborhood average method, median filtering, Gaussian filtering and bilateral filtering;
neighborhood averaging method
The neighborhood averaging method performs noise reduction in the spatial domain. Given an image $[f(i,j)]_{N\times N}$, consider a pixel at $(m,n)$ with neighborhood S containing M pixels; the average value over the neighborhood is:

$$\bar f(m,n)=\frac{1}{M}\sum_{(i,j)\in S} f(i,j) \qquad \text{(4-1)}$$

The neighborhood mean $\bar f(m,n)$ then replaces the original gray value of the pixel at (m, n). The neighborhood S is chosen according to the shape and size of the image features and is usually square. Neighborhood averaging includes the simple average method, the gray-difference threshold method, the weighted average method and so on, with the weighted average method smoothing better than the other two. The method is simple to implement and fast to compute, but it weakens the edges of the image and causes some blurring.
Median filtering
Median filtering is similar to neighborhood averaging, but it is a nonlinear noise-reduction method. Because noise has a large contrast relative to the surrounding pixels, noise pixels are either brighter or darker than their neighbors. If the pixels in a neighborhood are sorted by gray level, the noise points will lie at the two ends of the sequence, so taking the middle value of the sequence as the output removes the influence of the noise; this is the idea of median filtering. The concrete implementation is: determine a neighborhood A, sort the gray values of the pixels in the neighborhood, and take the median of the sequence as the gray value of the pixel. Let $[x(i,j)]_{M\times N}$ denote the image and $A_N$ the window; then the output at pixel (i, j) is

$$y(i,j)=\mathop{\mathrm{Med}}_{(r,s)\in A_N}\big\{x(i+r,\,j+s)\big\} \qquad \text{(4-2)}$$
The noise reduction effect of the median filtering depends on the neighborhood space range and the number of pixels in the neighborhood, the median filtering is good for the superposition of impulse noise in the flat region inside the image, and meanwhile, the image loss is more serious along with the increase of the neighborhood window of the median filtering.
Gaussian filtering
Gaussian filtering performs a convolution between a Gaussian function and the gray image to achieve noise reduction, and is defined as in equation (4-3):

$$g(p)=\sum_{q\in S} G_{\sigma}\big(\lVert p-q\rVert\big)\,I(q) \qquad \text{(4-3)}$$

where $G_{\sigma}(x)$ is the two-dimensional Gaussian kernel function:

$$G_{\sigma}(x)=\frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right) \qquad \text{(4-4)}$$

From these formulas, Gaussian filtering essentially computes a weighted average over the neighborhood of a pixel, with the weight of each pixel decreasing with its distance from the center point p; $\lVert p-q\rVert$ is the distance between any point q in the neighborhood and the center point p, and σ is a parameter defining the size of the neighborhood. According to equations (4-3) and (4-4), the optimal approximation of the Gaussian function depends on the coefficients of its binomial expansion, but computing the filter directly from the definition is too slow for image filtering, so a Gaussian template computed from the discrete Gaussian distribution can be used directly. Experiments show that Gaussian filtering is suitable for removing noise with a normal distribution. However, image edges are easily blurred, because the Gaussian filter only considers the distance between pixels and ignores the differences between their gray values.
Bilateral filtering
Bilateral filtering is a nonlinear filter proposed on the basis of the Gaussian filtering algorithm. The difference is that, on top of Gaussian filtering, spatial information and gray-level similarity information between two pixels are added to the setting of the weight: the weighting coefficient is the product of two factors, one determined by the spatial distance between the pixels and the other by the difference between their brightness values. The bilateral filter is defined as in equations (4-5) and (4-6):

$$BF[I]_p=\frac{1}{W_p}\sum_{q\in S} G_{\sigma_d}\big(\lVert p-q\rVert\big)\,G_{\sigma_r}\big(|I(p)-I(q)|\big)\,I(q) \qquad \text{(4-5)}$$

$$W_p=\sum_{q\in S} G_{\sigma_d}\big(\lVert p-q\rVert\big)\,G_{\sigma_r}\big(|I(p)-I(q)|\big) \qquad \text{(4-6)}$$

where $W_p$ is the normalization factor. The parameters $\sigma_d$ and $\sigma_r$ control the amount of noise removed from image I, and equation (4-5) represents a normalized weighted average of the pixels in the neighborhood S, in which the spatial function $G_{\sigma_d}$ decreases as the Euclidean distance between the pixel point and the center point increases, and the range function $G_{\sigma_r}$ decreases as the difference between the brightness values of the two pixels increases. Typically, the spatial proximity function $G_{\sigma_d}$ and the gray-level similarity function $G_{\sigma_r}$ of the bilateral filter are Gaussian functions of the distance between the two pixels, defined as in equations (4-7) and (4-8):

$$G_{\sigma_d}\big(d(p,q)\big)=\exp\!\left(-\frac{d(p,q)^{2}}{2\sigma_d^{2}}\right) \qquad \text{(4-7)}$$

$$G_{\sigma_r}\big(\delta(I(p),I(q))\big)=\exp\!\left(-\frac{\delta(I(p),I(q))^{2}}{2\sigma_r^{2}}\right) \qquad \text{(4-8)}$$

where $d(p,q)$ and $\delta(I(p),I(q))$ are the spatial distance and the gray-level difference between the two pixel points, and $\sigma_d$ and $\sigma_r$ are the standard deviations of the Gaussian functions.
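The four filters discussed above can be compared on a single frame with their standard OpenCV implementations; the kernel sizes, sigma values and file names in the sketch below are illustrative choices, not parameters from this application.

```python
# Illustrative comparison of the four denoising methods using OpenCV.
import cv2

img = cv2.imread("face_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame

results = {
    "mean":      cv2.blur(img, (5, 5)),                 # neighborhood averaging
    "median":    cv2.medianBlur(img, 5),                 # median filtering
    "gaussian":  cv2.GaussianBlur(img, (5, 5), 1.5),     # Gaussian filtering
    "bilateral": cv2.bilateralFilter(img, 9, 75, 75),    # bilateral filtering
}
for name, out in results.items():
    cv2.imwrite(f"denoised_{name}.png", out)
```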
Noise reduction evaluation
After the image has been denoised it can be evaluated against objective standards. Visual inspection by the human eye is an effective subjective criterion of image quality; at the same time, to describe the noise-reduction effect more definitively, the following objective quality criteria are used to check the effectiveness of the various noise-reduction algorithms.
Criterion one: root mean square error (RMSE)
Suppose the original image is f(x, y) and the denoised image is g(x, y), where 0 ≤ x ≤ M−1 and 0 ≤ y ≤ N−1. For any x and y, the root mean square error between f(x, y) and g(x, y) can be expressed as:

$$\mathrm{RMSE}=\sqrt{\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\big[f(x,y)-g(x,y)\big]^{2}}$$
Criterion two: mean square signal-to-noise ratio (SNR)
Similarly, f (x, y) and g (x, y) are defined as the original image and the denoised output image respectively, and the mean square signal-to-noise ratio between f (x, y) and g (x, y) can be expressed as:
$$\mathrm{SNR}=\frac{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} g(x,y)^{2}}{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\big[f(x,y)-g(x,y)\big]^{2}}$$

In practical applications the SNR is often normalized and expressed in decibels (dB); letting $\bar f$ be the mean value of f(x, y), it becomes:

$$\mathrm{SNR}=10\log_{10}\frac{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\big[f(x,y)-\bar f\big]^{2}}{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\big[f(x,y)-g(x,y)\big]^{2}}$$
Criterion three: peak signal-to-noise ratio (PSNR)
Let $f_{\max}=\max\{f(x,y):0\le x\le M-1,\ 0\le y\le N-1\}$; the peak signal-to-noise ratio PSNR is then:

$$\mathrm{PSNR}=10\log_{10}\frac{MN\,f_{\max}^{2}}{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\big[f(x,y)-g(x,y)\big]^{2}}$$
according to the above criteria, it can be seen that the smaller the root mean square error is, the larger the mean square signal-to-noise ratio and the peak signal-to-noise ratio are, the better the image noise reduction quality is.
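A small sketch of these three criteria, written directly from the formulas above (f is the original image and g the denoised image, as float arrays), is given below for reference.

```python
# RMSE, mean-square SNR (in dB, mean-removed) and PSNR as defined above.
import numpy as np

def rmse(f, g):
    return np.sqrt(np.mean((f - g) ** 2))

def snr_db(f, g):
    signal = np.sum((f - f.mean()) ** 2)
    noise = np.sum((f - g) ** 2)
    return 10.0 * np.log10(signal / noise)

def psnr(f, g):
    mse = np.mean((f - g) ** 2)
    return 10.0 * np.log10(f.max() ** 2 / mse)

# Synthetic check: smaller RMSE and larger SNR/PSNR mean better denoising quality.
rng = np.random.default_rng(0)
f = rng.uniform(0.0, 255.0, size=(64, 64))
g = f + rng.normal(0.0, 2.0, size=f.shape)
print(rmse(f, g), snr_db(f, g), psnr(f, g))
```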
The common noise-reduction methods above were tested and the denoised images evaluated against the objective criteria described; the results show that bilateral filtering reduces image noise best, so bilateral filtering is finally selected for denoising the images.
Referring to fig. 7a, 7b, 7c and 7d, which show the noise-reduction results of the four filters above: because the underground image is dark, judged subjectively the median and mean filters blur the image to different degrees while removing noise, whereas Gaussian filtering and bilateral filtering leave the image clearer and give better results.
Table 4-1: Comparison of the noise-reduction methods by RMSE, SNR and PSNR [table reproduced only as an image]
In addition to subjective evaluation, the technical scheme of the invention compares the Root Mean Square Error (RMSE), the mean square signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) of the above methods. The results are shown in Table 4-1. Experience shows that the smaller the RMSE, the larger the SNR and PSNR, the better the image noise reduction quality.
As the data compared in the table above show, the bilaterally filtered image has the lowest RMSE and the highest SNR and PSNR, so its denoised quality is the best; the technical scheme of the invention therefore adopts the bilateral filtering method to denoise the image.
(2) Image enhancement
The invention adopts Retinex theory to divide the image into a luminance image and a reflection image, wherein the luminance image is the low-frequency part of the image, the reflection image is the high-frequency information of the image, the enhancement based on the Retinex is realized by changing the proportion of the luminance image and the reflection image in the original image, and a multi-scale Retinex Method (MSR) can compress the dynamic range of the image, so that the invention still has better processing effect under the condition of uneven or insufficient illumination.
(1) MSR algorithm
According to Land's Retinex theory, the ideal image I(x, y) is written as:
I(x,y)=R(x,y)×L(x,y) (4-14)
i.e. the image I(x, y) can be expressed as the product of the ambient brightness function L(x, y) and the scene reflection function R(x, y). The MSR enhancement method is described as follows:

$$R_i(x,y)=\sum_{k=1}^{K} W_k\left\{\log I_i(x,y)-\log\big[F_k(x,y)*I_i(x,y)\big]\right\} \qquad \text{(4-15)}$$

where i denotes the i-th spectral band and N the number of spectral bands (N = 1 for a grayscale image, N = 3 for a color image). $R_i(x,y)$ is the output image function, $I_i(x,y)$ the input image function, * denotes the convolution operation and log the natural logarithm. $F_k(x,y)$ is the environment (surround) function; various choices are possible, and a Gaussian function is selected here. k indexes the scales (i.e. the Gaussian functions), and the standard deviation $\sigma_k$ of $F_k(x,y)$ controls the scale of the Gaussian. $W_k$ is the weight coefficient associated with $F_k$. The MSR algorithm selects the number of scales according to the needs of scene processing, generally three scales at different levels, and then fuses them with different weight coefficients to realize image enhancement.
The gray scale value obtained by the image after MSR enhancement may have a negative value, so that the negative pixel point in the image needs to be corrected by using a gain/offset method, and the corrected gray scale value is mapped into the gray scale range displayed by the display according to equation (4-17).
$$R_0(x,y)=G\times R_i(x,y)+\mathrm{offset} \qquad \text{(4-16)}$$

$$R'(x,y)=255\times\frac{R_0(x,y)-\min R_0}{\max R_0-\min R_0} \qquad \text{(4-17)}$$

where $R_i(x,y)$ and $R_0(x,y)$ denote the input and output gray values of the image, G is the gain coefficient, and offset is the offset.
(2) MSR algorithm and implementation steps
After the underground image is subjected to image enhancement by the MSR, the detail information of the image is improved, but the image after the image enhancement is too bright and is not suitable for subjective observation, so that histogram equalization is added in the MSR image enhancement algorithm process, the image over-bright phenomenon is reduced, the local detail information of the image is improved, and the contrast is enhanced. The principle of the improved method is as follows:
as shown in fig. 8, the MSR method algorithm steps:
① read in the video frame image I and take the logarithm of its image function;
② calculate the filter coefficients of the Gaussian filter for the different standard deviations σk;
③ select 3 scales σk — 15, 80 and 250 are chosen in this implementation — and convolve the image with the three different Gaussian filter kernels;
④ compute the weighted average of the results obtained at the three scales according to equation (4-15), with all weights set to 1/3, and split the image into the illumination component L and the reflection component R;
⑤ adjust the reflection component R according to equations (4-16) and (4-17), then apply histogram equalization to R to obtain a new reflection component R′;
⑥ add the new reflection component R′ to the illumination component L to obtain a new image I′, and then take the exponential to obtain the enhanced image.
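A compact Python/OpenCV sketch of steps ①-⑥ is shown below for a grayscale frame. The scales (15, 80, 250) and equal weights of 1/3 follow the text; the way the equalized reflection component is recombined with the illumination component is one plausible reading of step ⑥, not the exact implementation of the invention.

```python
# Illustrative MSR + histogram equalization sketch for a grayscale image.
import cv2
import numpy as np

def msr_enhance(img, sigmas=(15, 80, 250), weights=(1/3, 1/3, 1/3)):
    f = img.astype(np.float64) + 1.0                      # avoid log(0)
    log_f = np.log(f)                                     # step 1
    r = np.zeros_like(f)
    for sigma, w in zip(sigmas, weights):                 # steps 2-4
        surround = cv2.GaussianBlur(f, (0, 0), sigma)     # Gaussian surround F_k * I
        r += w * (log_f - np.log(surround))               # reflection component R
    l = log_f - r                                         # illumination component L
    # step 5: map R into [0, 255] (gain/offset-style correction) and equalize it
    r8 = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    r_eq = cv2.equalizeHist(r8)
    # step 6: put the equalized reflection back on R's original scale, add L, take exp
    r_new = r.min() + (r.max() - r.min()) * (r_eq.astype(np.float64) / 255.0)
    out = np.exp(r_new + l)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```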
Image enhancement result: after noise reduction, the fully mechanized coal mining face image is enhanced with the MSR method. MSR enhancement integrates the advantages of both methods: the overall gray level is raised without over-enhancement, over-bright pixel values are reduced, the image becomes clearer, low-brightness regions in the image are enhanced and high-brightness regions are suppressed. Experiments show that the method is suitable for enhancing low-illumination underground video and the algorithm is simple to implement; after image preprocessing, the support target on the belt conveyor is clearer, which facilitates detection and localization of fault targets.
Second, image feature extraction
The method determines the coordinates of n feature points on the target object in the camera coordinate system, so the feature points of the target object must be accurately extracted from the image. From the characteristics of underground images and the underground real-time requirements, SIFT features are chosen to characterize the target object. Their advantages are: ① they are based on local appearance interest points on the object and are independent of the size and rotation of the image; ② they have quite high tolerance to light changes, noise and small changes of viewing angle; owing to these properties they are highly distinctive and relatively easy to capture, and in a large feature database objects are easy to identify with few false matches; ③ the detection rate for partially occluded objects described with SIFT features is also quite high, and even as few as 3 object features are enough to compute position and orientation; with current computing hardware, the recognition speed can approach real time.
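For reference, SIFT extraction and matching of this kind is available directly in OpenCV; the file names and the ratio-test threshold in the sketch below are illustrative.

```python
# Illustrative SIFT feature extraction and matching for locating the target.
import cv2

template = cv2.imread("support_template.png", cv2.IMREAD_GRAYSCALE)   # reference view (hypothetical file)
frame = cv2.imread("face_frame_enhanced.png", cv2.IMREAD_GRAYSCALE)   # preprocessed frame (hypothetical file)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_f, des_f = sift.detectAndCompute(frame, None)

# Lowe's ratio test keeps only distinctive matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_t, des_f, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable SIFT matches")
```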
Third, pose detection and calculation
Referring to the coordinate systems shown in fig. 9, calculating the pose of the target object means calculating the transformation matrix between the target coordinate system and the camera coordinate system, i.e. the camera extrinsic matrix containing three rotation parameters and three translation parameters. PnP (Perspective-N-Point) is a localization method based on a single image; because correspondences between image points do not need to be established, it is widely used for target localization and attitude calculation. The method determines the coordinates of n feature points on the target object in the camera coordinate system and finally calculates the target's motion position and pose parameters from the calibrated internal and external parameters of the camera and the coordinates of the feature points in the world coordinate system. With the coordinates in the measured target object coordinate system known, the coordinates of the feature points in the camera coordinate system are calculated using an algorithm based on several coplanar feature points, and the pose of the camera relative to the measured target object then follows from the three-dimensional coordinate space transformation. The principle of the algorithm based on several coplanar feature points is shown in fig. 9, where $O_c\text{-}x_cy_cz_c$ is the camera coordinate system and O-xy is the image coordinate system; $P_1$–$P_4$ are 4 coplanar feature points forming a square, and their coordinates in the image coordinate system are $C_1$–$C_4$.
From the geometric relationship in fig. 9, it can be found that:
Figure BDA0002243402280000251
wherein the formula is as follows: s1Is DeltaP1P2P3The area of (d); s2Is DeltaP1P2P4The area of (d); s3Is DeltaP1P3P4The area of (d); s4Is DeltaP2P3P4The area of (d); h is the optical center of the camera to PiThe points (i ═ 1, 2, 3, 4) constitute the distances of the planes. At the same time, the system has the advantages that,
Figure BDA0002243402280000252
wherein (x)i,yi) Is CiCoordinates of (d)iIs the optical center O of the cameracPoint to PiDistance of points, MiIs the optical center O of the cameracPoint to CiThe distance of the points, f, is the effective focal length of the camera. (x)i,yi) F and MiCan be obtained by coordinate system conversion and camera internal parameter calibration, is a known quantity, and is obtained by solving diThen the characteristic point P under the camera coordinate system can be obtainediThe coordinates of (a). According to the camera coordinates, the image coordinates and the target object coordinates, the method comprises the following steps:
Figure BDA0002243402280000253
Figure BDA0002243402280000254
where the feature point P has coordinates (Xw, Yw, Zw) in the measured target object coordinate system Ow-xwywzw and coordinates (Xc, Yc, Zc) in the camera coordinate system Oc-xcyczc. R is the rotation matrix; T is the translation vector.
The rotation matrix and the translation vector determine the orientation and position of the camera relative to the coordinate system of the measured target object. The rotation matrix is an orthogonal matrix and can be represented by 3 rotation angles, namely the pitch angle α, the yaw angle β and the roll angle γ. Solving the above formulas by the three-dimensional coordinate space transformation and a non-iterative method yields the rotation matrix R and the translation vector T, i.e. the pose of the camera relative to the measured target object.
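As an illustration of this pose calculation, the sketch below solves for the camera pose from 4 coplanar feature points with OpenCV's cv::solvePnP and its non-iterative planar solver, assuming OpenCV 3.4 or later; the object-point square, the detected image points and the camera intrinsics are invented for the example and would in practice come from the calibration and feature-extraction steps described above.

```cpp
// Sketch only: pose of a hydraulic support from 4 coplanar feature points with
// OpenCV's solvePnP.  The object points (a square of assumed side 0.5 m), the
// detected image points and the camera intrinsics are illustrative assumptions.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<cv::Point3f> objectPts = {             // P1..P4 in the target coordinate system (Z = 0 plane)
        {0.0f, 0.0f, 0.0f}, {0.5f, 0.0f, 0.0f},
        {0.5f, 0.5f, 0.0f}, {0.0f, 0.5f, 0.0f}};
    std::vector<cv::Point2f> imagePts = {               // C1..C4 measured in the image (assumed values)
        {612.f, 418.f}, {884.f, 426.f}, {879.f, 690.f}, {609.f, 681.f}};

    double fx = 1200, fy = 1200, cx = 640, cy = 512;    // assumed calibrated intrinsics
    cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fy, cy, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);        // assume distortion already corrected

    cv::Mat rvec, tvec;
    // SOLVEPNP_IPPE is a non-iterative solver for coplanar points, in the spirit
    // of the non-iterative method mentioned in the text.
    cv::solvePnP(objectPts, imagePts, K, dist, rvec, tvec, false, cv::SOLVEPNP_IPPE);

    cv::Mat R;
    cv::Rodrigues(rvec, R);                             // rotation matrix R and translation vector T = pose
    std::cout << "R = " << R << "\nT = " << tvec.t() << std::endl;
    return 0;
}
```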
After the spatial position coordinates Pi = (Xi, Yi, Zi) of the hydraulic supports have been obtained, any one feature point can be selected to solve the straightness of the hydraulic supports in the transverse and longitudinal directions. Coordinate fitting in the three axis directions with the same feature point on each hydraulic support gives the straightness of the supports in multiple directions. Finally, experimental simulation is carried out on the numerical data.
TABLE 4-2 measurement of pose of hydraulic prop
Figure BDA0002243402280000261
TABLE 4-3 comparison of measured values of transverse straightness of hydraulic support with actual values
Figure BDA0002243402280000262
The coal mine safety regulations specifically stipulate that the hydraulic supports (props) shall be arranged in a straight line and that the deviation between two adjacent supports shall not exceed 50 mm. From table 4-3 it can be seen that support No. 2 is a fault support.
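A minimal sketch of how the straightness evaluation and the 50 mm check might be computed from the feature-point coordinates is given below; the support positions, the least-squares line fit in the transverse direction and the way faults are reported are all illustrative assumptions.

```cpp
// Sketch only: lateral straightness of the hydraulic supports.  Each support i
// contributes the position (Xi, Yi, Zi) of the same feature point; X is fitted
// as a linear function of Y (position along the face) by least squares, and the
// 50 mm limit from the regulations is checked.  The sample data are assumptions.
#include <cstdio>
#include <cmath>
#include <vector>

struct Pos { double x, y, z; };

int main() {
    std::vector<Pos> supports = {
        {0.012, 0.0, 2.1}, {0.081, 1.5, 2.1},   // support No. 2 is offset (assumed data)
        {0.015, 3.0, 2.1}, {0.010, 4.5, 2.1}, {0.013, 6.0, 2.1}};

    // Least-squares fit x = a*y + b over all supports (transverse straightness line).
    double sy = 0, sx = 0, syy = 0, syx = 0, n = (double)supports.size();
    for (const auto& p : supports) { sy += p.y; sx += p.x; syy += p.y * p.y; syx += p.y * p.x; }
    double a = (n * syx - sy * sx) / (n * syy - sy * sy);
    double b = (sx - a * sy) / n;

    const double limitMm = 50.0;                 // limit from the coal mine safety regulations
    for (size_t i = 0; i < supports.size(); ++i) {
        double devMm = std::fabs(supports[i].x - (a * supports[i].y + b)) * 1000.0;
        std::printf("support %zu: deviation from fitted line %.1f mm %s\n",
                    i + 1, devMm, devMm > limitMm ? "-> FAULT" : "");
    }

    // Regulation check: deviation between two adjacent supports must not exceed 50 mm.
    for (size_t i = 1; i < supports.size(); ++i) {
        double adjMm = std::fabs(supports[i].x - supports[i - 1].x) * 1000.0;
        if (adjMm > limitMm)
            std::printf("supports %zu-%zu: adjacent deviation %.1f mm -> FAULT\n", i, i + 1, adjMm);
    }
    return 0;
}
```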
Fourthly, detection of harmful gas
The control module uses an Arduino MEGA2560 main board, a core circuit board with a USB interface that provides 54 digital input/output ports, 16 analog inputs and 4 UART interfaces. It is suited to designs that need a large number of I/O interfaces and therefore meets the requirements of this design. The data acquisition unit consists of a sensor detection module, which uses an MQ-5 gas sensor. The alarm module and the LED display module consist of a liquid crystal display and a buzzer: the liquid crystal display shows the methane concentration and the carbon monoxide concentration, and the buzzer sounds to raise the alarm. The wireless communication unit uses a serial-port Wi-Fi module for communication and sends the generated alarms to the master console.
Gas detection circuit
In the harmful gas detection design, the quantity of primary concern is the concentration of gas (firedamp). Its explosive concentration range is 5%-16%, and the concentration range measured by the MQ-5 is 2%-50%, so the MQ-5 gas sensor can detect the gas concentration well. The gas-sensitive material used in the MQ-5 gas sensor is tin dioxide (SnO2), which has low conductivity in clean air; when combustible gas is present in the environment around the sensor, the conductivity of the sensor increases with the concentration of combustible gas in the air. A simple circuit converts the change in conductivity into an output signal corresponding to the gas concentration.
The MQ-5 gas sensor has high sensitivity to butane, propane and methane and is barely affected by ethanol vapour and smoke, so methane and propane can both be monitored well. The sensor can detect a variety of combustible gases, natural gas in particular, and has the advantages of fast recovery, long service life, reliable stability and a simple test circuit; it is a low-cost sensor suitable for a wide range of applications.
The structure and appearance of the MQ-5 gas sensor are shown in fig. 11. The sensor consists of a micro Al2O3 ceramic tube, an SnO2 sensitive layer, measuring electrodes and a heating element fixed in a housing made of plastic or stainless steel; the heater provides the working conditions necessary for the gas-sensitive element. The packaged gas sensor has 6 pins, 4 of which are used for signal extraction and 2 of which supply the heating current.
Because the MQ-5 sensor exhibits different resistance values for different kinds and concentrations of gas, adjusting its sensitivity is important when using this type of sensor, and 1000 ppm isobutane or hydrogen is chosen for its calibration. The designed circuit diagram is shown in fig. 12.
The software is written and compiled in the Arduino IDE. The main program of the system comprises an initialization routine, data acquisition and data processing routines, a Wi-Fi communication routine, a data display routine and an alarm routine. The program first performs serial port initialization, I2C initialization, Wi-Fi module initialization and so on; the data acquisition routine covers methane concentration acquisition and carbon monoxide acquisition; the acquired values are then compared with the parameter thresholds stored in the EEPROM, and an audible and visual alarm is raised if any alarm limit is exceeded. The main program flowchart is shown in fig. 13.
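A simplified Arduino-style sketch of this main loop is given below; the pin assignments, the EEPROM threshold location and the raw-to-concentration conversion are illustrative assumptions and not the actual firmware of the robot.

```cpp
// Sketch only (Arduino C++): simplified main loop of the gas detection unit.
// Pin assignments, EEPROM layout and the raw-to-concentration conversion are
// illustrative assumptions, not the actual firmware of the robot.
#include <Arduino.h>
#include <EEPROM.h>

const int MQ5_PIN    = A0;   // MQ-5 analog output (assumed wiring)
const int BUZZER_PIN = 8;    // buzzer drive pin (assumed wiring)

float readMethanePercent() {
  // Convert the 10-bit ADC reading to a rough concentration value.
  // A real design would use the Rs/R0 curve from the MQ-5 datasheet.
  int raw = analogRead(MQ5_PIN);
  return raw * (50.0f / 1023.0f);          // map 0..1023 to 0..50 % (assumed linearization)
}

void setup() {
  Serial.begin(9600);                       // serial port used by the Wi-Fi module
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  float methane = readMethanePercent();
  float limit   = EEPROM.read(0);           // alarm threshold stored in EEPROM (assumed address)

  Serial.print("CH4=");                     // forwarded to the master console over serial Wi-Fi
  Serial.println(methane);

  if (methane > limit) {
    tone(BUZZER_PIN, 2000, 500);            // audible alarm
  }
  delay(1000);
}
```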
Fifth, temperature and humidity measurement
In order to improve the measurement accuracy and reliability of the system, the SHT11, an SMD (surface-mount) packaged sensor manufactured by Sensirion of Switzerland, is used as the temperature and humidity sensor of the underground coal mine temperature and humidity monitoring system. The SHT11 integrates the sensing elements and the signal processing circuit on a miniature circuit board, and its output is a fully calibrated digital signal. The SHT11 uses Sensirion's patented CMOSens technology, which ensures good long-term stability and high reliability of the sensor. It comprises a band-gap temperature measuring element and a capacitive polymer humidity measuring element, seamlessly coupled to a 14-bit A/D converter and a serial interface circuit.
The underground temperature and humidity monitoring environment of a coal mine is harsh: damp, gas and coal dust create flammable and explosive conditions, which places high demands on the reliability and safety of the measuring equipment. The SHT11 has a simple structure, good sensitivity, fast response and strong anti-interference capability, and can meet the technical requirements for measuring temperature and humidity parameters in an underground coal mine. In the monitoring system the SHT11 is mounted on a circuit board and controlled by a single-chip microcomputer; the circuit schematic is shown in fig. 14. The SCK pin of the SHT11 synchronizes the clock signal during communication between the microcontroller and the SHT11, and the DATA pin is used to read the SHT11 measurement data.
The program flow of the coal mine underground temperature and humidity monitoring module software is shown in fig. 15. After the system is reset and the SHT11 is initialized, the SHT11 start-up subroutine is called to collect temperature and humidity data, and the collected data are compared with the set thresholds to judge whether a danger exists: the air temperature at the mining face of a production mine must not exceed 26 °C, and if the set threshold is exceeded an alarm is raised through the buzzer. If an error occurs anywhere in the temperature and humidity measurement process, the system measures the temperature and humidity again.
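The following Arduino-style sketch illustrates this monitoring loop; the SHT11 read-out is reduced to stub functions standing in for the clocked SCK/DATA protocol, and the pin assignment and retry behaviour are assumptions, while the 26 °C limit follows the text above.

```cpp
// Sketch only (Arduino C++): temperature/humidity monitoring loop around the SHT11.
// readSht11Temperature()/readSht11Humidity() are stand-in stubs for the bit-banged
// SCK/DATA read-out; pins and retry handling are assumptions.
#include <Arduino.h>

const int   BUZZER_PIN = 8;      // assumed wiring
const float TEMP_LIMIT = 26.0f;  // maximum face air temperature from the text (degrees C)

// Stand-in stubs for the SHT11 driver (the real read-out clocks SCK and samples DATA
// and should return false on a communication or CRC error so the read is retried).
bool readSht11Temperature(float &tC) { tC = 24.5f; return true; }   // dummy value
bool readSht11Humidity(float &rh)    { rh = 78.0f; return true; }   // dummy value

void setup() {
  Serial.begin(9600);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  float tC, rh;
  // Re-measure whenever the SHT11 read-out reports an error, as in the flow chart.
  if (!readSht11Temperature(tC) || !readSht11Humidity(rh)) {
    return;                        // try again on the next pass through loop()
  }

  Serial.print("T=");   Serial.print(tC);
  Serial.print(" RH="); Serial.println(rh);

  if (tC > TEMP_LIMIT) {
    tone(BUZZER_PIN, 2000, 500);   // over-temperature alarm
  }
  delay(2000);
}
```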
In conclusion, the design of the intelligent inspection robot for the coal face draws on method theories such as automatic control, machine vision and sensor principles. The intelligent inspection robot can be used effectively for inspection work on the coal face and reduces the errors of manual inspection to a certain extent. The design of the robot is summarized as follows:
1) using automatic control theory, the robot's automatic charging function and body posture adjustment function are realized;
2) using machine vision theory, the robot's functions of underground image acquisition and processing and of monitoring the hydraulic props are realized;
3) using sensor theory, the robot's function of detecting harmful gas, temperature and humidity on the coal mining face is realized;
4) using single-chip microcomputer theory, the robot's alarm function for over-limit values on the coal mining face is realized;
5) Zigbee is used for the remote transmission of over-limit data, so that control room operators can manage the coal face effectively.
Therefore, the research and development of the mine intelligent robot are an essential part in the establishment of an intelligent mine, wherein the design of the underground coal face intelligent inspection robot can replace the existing manual inspection, the defects existing in the manual inspection are avoided, the inspection quality can be greatly improved, and the safety quality of the mine can be improved. The intelligent inspection robot is suitable for underground coal faces, has the functions of autonomous movement, positioning, image acquisition, gas sensing, temperature sensing, data transmission and the like, and can realize detection of hydraulic pillars, harmful gases, temperature and humidity and the like. The abnormal point is found through analyzing the image, the sensor senses the concentration of the harmful gas and the underground temperature and humidity, when a certain index reaches an early warning state, the early warning information can be timely transmitted to a dispatching room through a remote transmission device, and dispatching control personnel can timely take measures to avoid disasters.
The foregoing shows and describes the general principles, essential features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A ring-type track intelligent inspection robot based on an underground coal face, characterized in that: the intelligent inspection robot comprises an image acquisition module, a temperature and humidity detection module, a harmful gas detection module, a wireless communication module, an alarm module, an LED display module, a power module, a control module and an automatic control module; wherein,
the image acquisition module acquires a detected target through a camera, the acquired image is subjected to sharpening processing through a system platform, and finally a fault target is verified and positioned;
the temperature and humidity detection module comprises a temperature sensor and a humidity sensor; the control module collects the information from the temperature sensor and the humidity sensor, the data are sent to the system platform through the wireless communication module, and the information is calculated and processed by the program set in the control module to obtain the underground coal mine temperature and humidity values, which are then displayed in real time through the LED display module or signalled promptly through the alarm module;
the harmful gas detection module comprises a gas sensor, after the methane concentration and the carbon monoxide concentration are collected by the gas sensor, data are sent to the system platform through the wireless communication module, and are compared through a program set by the control module, and if the methane concentration and the carbon monoxide concentration exceed an alarm limit, an alarm module gives an alarm;
the power module provides power output for the intelligent inspection robot;
the power module comprises an electric storage device and a charging hole, and the electric storage device is electrically connected with the charging pile through the charging hole to realize autonomous charging;
the automatic control module comprises a machine body steering device, and the machine body always faces the direction of the hydraulic support through the machine body steering device.
2. The underground coal face-based circular track intelligent inspection robot and the application thereof according to claim 1, wherein: the temperature and humidity detection module further comprises a temperature and humidity acquisition circuit, a communication circuit, a power circuit, a display circuit and an alarm circuit, underground temperature and humidity parameters are acquired through the temperature and humidity acquisition circuit respectively, acquired information is processed by the control module and then transmitted to the wireless communication module through the communication circuit, the display circuit transmits the information to the LED display module, the alarm circuit transmits signals to the alarm module, and the power circuit is connected with the power module.
3. The underground coal face-based ring-type track intelligent inspection robot according to claim 1, wherein: the image acquisition sharpening processing comprises an image preprocessing unit, an image feature extraction unit, a pose detection unit and a straightness determination unit; wherein,
the image preprocessing unit is used for eliminating irrelevant information in the acquired image and carrying out noise reduction and enhancement processing on the image;
the image feature extraction unit is used for accurately extracting the feature points of the target object in the image through the coordinates of n feature points on the target object in a camera coordinate system;
the pose detection unit is used for determining the coordinates of n characteristic points on the target object under a camera coordinate system, and finally calculating the motion position and pose parameters of the target according to calibrated internal and external parameters of the camera and coordinate values of the characteristic points under a world coordinate system;
and the straightness determining unit is used for performing coordinate fitting in the three-axis direction by using the same characteristic point on each hydraulic support to obtain the straightness of the hydraulic support in multiple directions.
4. Use of the ring-type track intelligent inspection robot based on an underground coal face according to any one of claims 1 to 3, comprising the steps of:
s1: image acquisition and processing
The intelligent inspection robot is placed underground, a power switch is turned on, inspection operation is started, an underground fully mechanized mining face is subjected to image acquisition through a camera, acquired image information is transmitted to a system platform to be processed, stored and displayed, and image processing comprises image preprocessing, image feature extraction, pose detection calculation and straightness calculation;
a. the image preprocessing comprises image noise reduction and image enhancement, wherein the image noise reduction processing is firstly carried out, and then the image enhancement processing is carried out;
image denoising: performing noise reduction processing on the image by adopting a bilateral filter;
image enhancement: enhancing the fully mechanized coal mining face image through an MSR algorithm;
b. feature extraction of images
determining the coordinates of n feature points of the target object in the camera coordinate system and accurately extracting the feature points of the target object in the image; based on analysis of the characteristics of the underground images and the underground real-time requirements, the target object is characterized by SIFT features;
c. pose detection calculation
Determining coordinates of n feature points on the target object in a camera coordinate system, and finally calculating motion positions and pose parameters of the target according to calibrated internal and external parameters of the camera and coordinate values of the feature points in a world coordinate system;
d. straightness calculation: from the spatial position coordinates of the hydraulic supports obtained in the preceding steps, any one feature point is selected to calculate the transverse and longitudinal straightness of the hydraulic supports; coordinate fitting in the three axis directions is carried out using the same feature point on each hydraulic support to obtain the straightness of the supports in multiple directions, and finally experimental simulation is carried out on the numerical values;
s2: harmful gas detection
The intelligent inspection robot is provided with a harmful gas detection module, underground harmful gas data are collected, various parameter thresholds stored in a control module are compared, if the parameters exceed alarm limits, an audible and visual alarm is given out, and meanwhile, the wireless communication module sends data to a system platform;
s3: temperature and humidity detection
The temperature and humidity detection module is arranged on the intelligent inspection robot, information of the temperature and humidity detection module is collected, the collected data is operated and processed through the control module, underground temperature values and humidity values are obtained, data are displayed in real time through the driving display circuit, and meanwhile, the data are sent to the system platform through the wireless communication module.
5. The application of the ring-type track intelligent inspection robot based on an underground coal face as claimed in claim 4, wherein the bilateral filtering adopted for the image noise reduction in step S1 adds spatial information and grey-level similarity information between two pixels to the setting of the weights; the weighting coefficient is the product of two factors, one determined by the spatial distance between the pixels and the other by the difference between the brightness values of the pixels; the bilateral filter is defined as shown in equation (4-5):
wherein: Wp is the normalization term.
Figure FDA0002243402270000042
The parameters σd and σr control the amount of noise removal for image I; equation (3.6) represents a normalized weighted average of the pixels in the neighbourhood, where
Figure FDA0002243402270000043
is a spatial function that decreases as the Euclidean distance between the pixel point and the centre point increases, and
Figure FDA0002243402270000044
is a range function that decreases as the difference between the luminance values of the two pixels increases; the spatial proximity function of the bilateral filter
Figure FDA0002243402270000045
and the grey-level similarity function
Figure FDA0002243402270000046
are Gaussian functions taking the distance between two pixels as a parameter, defined as shown in formulas (4-7) and (4-8):
Figure FDA0002243402270000047
Figure FDA0002243402270000051
where d(p, q) and δ(I(p), I(q)) are respectively the spatial distance between two pixel points of the image and the grey-level difference between the pixels, and σd and σr are the standard deviations of the Gaussian functions.
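As an illustration of the bilateral filtering defined in this claim, the sketch below applies OpenCV's cv::bilateralFilter, which weights neighbouring pixels by spatial distance and by grey-level difference in the same spirit as formulas (4-7) and (4-8); the file name, the filter diameter and the two sigma values are illustrative assumptions.

```cpp
// Sketch only: bilateral filtering of an acquired face image with OpenCV.
// cv::bilateralFilter weights each neighbour by spatial distance (sigmaSpace)
// and by grey-level difference (sigmaColor).  File name, diameter and sigma
// values are illustrative assumptions.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat noisy = cv::imread("face_frame.png", cv::IMREAD_GRAYSCALE);
    cv::Mat denoised;
    cv::bilateralFilter(noisy, denoised, /*d=*/9, /*sigmaColor=*/50.0, /*sigmaSpace=*/5.0);
    cv::imwrite("face_frame_denoised.png", denoised);
    return 0;
}
```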
6. The use of the underground coal face-based ring-type track intelligent inspection robot according to claim 5, wherein the MSR algorithm in the image enhancement in step S1 is as follows:
According to Land's Retinex theory, let the ideal image I(x, y) be:
I(x,y)=R(x,y)×L(x,y) (4-14)
i.e. the image I(x, y) can be expressed as the product of the ambient brightness function L(x, y) and the scene reflection function R(x, y); the MSR enhancement method is described as follows:
Figure FDA0002243402270000052
where i denotes the i-th spectral band, N denotes the number of spectral bands (N = 1 for a greyscale image, N = 3 for a colour image), Ri(x, y) is the output image function, Ii(x, y) is the distribution function of the input image, * denotes the convolution operation, log is the natural logarithm, and Fk(x, y) is the surround (environment) function; various surround functions can be chosen, and here a Gaussian function is selected, with k denoting the number of scales and the different standard deviations σk of the surround function Fk(x, y) controlling the scale of the Gaussian. Wk is the weight coefficient associated with Fk. The MSR algorithm selects three scales at different levels according to the scene processing requirements and fuses them with different weight coefficients to achieve image enhancement.
7. The application of the underground coal face-based ring-type track intelligent inspection robot according to claim 6, wherein the image enhancement further comprises negative pixel correction: the negative pixel points in the image are corrected by a gain/offset method, and the corrected grey values are mapped into the grey range displayed by the display according to equation (4-17);
R0(x,y)=G×Ri(x,y)+offset (4-16)
Figure FDA0002243402270000061
where Ri(x, y) and R0(x, y) respectively denote the input and output grey values of the image, G denotes the gain coefficient, and offset denotes the offset.
8. The use of the underground coal face-based ring-type track intelligent inspection robot, wherein the MSR algorithm in the image enhancement in step S1 comprises the following steps (as illustrated in the sketch after the list):
① read in the video frame image I and take the logarithm of its image function;
② calculate the filter coefficients of the Gaussian filter at different standard deviations σk;
③ select 3 scales σk, chosen in this implementation as 15, 80 and 250, and convolve the image with the three different Gaussian filter coefficients;
④ compute a weighted average of the results obtained at the three scales according to formula (4-15), with all weights taken as 1/3, so that the image is separated into an illumination component L and a reflection component R;
⑤ adjust the reflection component R according to formulas (4-16) and (4-17), and then perform histogram equalization on R to obtain a new reflection component R';
⑥ add the new reflection component R' to the illumination component L to obtain a new image I', and then take the exponential (exp) to obtain the enhanced image.
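A minimal sketch of this MSR procedure with OpenCV is given below, using the scales 15, 80 and 250 and equal weights of 1/3 as stated; it approximates steps ⑤-⑥ with a simple normalization and histogram equalization instead of the exact gain/offset and exponential mapping, and the file names are assumptions.

```cpp
// Sketch only: multi-scale Retinex (MSR) enhancement roughly following the steps
// above, with scales 15/80/250 and equal weights of 1/3.  The final normalization
// replaces the exact gain/offset and exponential mapping for brevity.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("face_frame.png", cv::IMREAD_GRAYSCALE);
    cv::Mat I;
    img.convertTo(I, CV_32F, 1.0 / 255.0, 1e-3);        // small offset to avoid log(0)
    cv::Mat logI;
    cv::log(I, logI);                                    // step 1: log of the image

    const double sigmas[3] = {15.0, 80.0, 250.0};        // step 3: three Gaussian scales
    cv::Mat R = cv::Mat::zeros(I.size(), CV_32F);        // reflection component
    for (double s : sigmas) {
        cv::Mat blurred, logL;
        cv::GaussianBlur(I, blurred, cv::Size(0, 0), s); // step 2: Gaussian surround F_k * I
        cv::log(blurred, logL);
        R += (logI - logL) / 3.0;                        // step 4: weighted average, weights 1/3
    }

    // Steps 5-6 (simplified): map R into the displayable grey range and equalize it.
    cv::Mat R8;
    cv::normalize(R, R8, 0, 255, cv::NORM_MINMAX);
    R8.convertTo(R8, CV_8U);
    cv::equalizeHist(R8, R8);

    cv::imwrite("face_frame_msr.png", R8);               // enhanced image (sketch output)
    return 0;
}
```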
9. The application of the ring-type track intelligent inspection robot based on the underground coal face according to any one of claims 3 to 7, wherein the pose detection calculation in step S1 establishes Oc-xcyczc as the camera coordinate system and O-xy as the image coordinate system; P1-P4 are 4 coplanar feature points whose coordinates in the image coordinate system are C1-C4, and from the geometric relationship one can obtain:
Figure FDA0002243402270000071
wherein: S1 is the area of ΔP1P2P3; S2 is the area of ΔP1P2P4; S3 is the area of ΔP1P3P4; S4 is the area of ΔP2P3P4; h is the distance from the camera optical centre to the plane formed by the points Pi (i = 1, 2, 3, 4). At the same time,
where (xi, yi) are the coordinates of Ci, di is the distance from the camera optical centre Oc to the point Pi, Mi is the distance from the camera optical centre Oc to the point Ci, and f is the effective focal length of the camera; (xi, yi), f and Mi can be obtained from the coordinate system conversion and the calibrated camera intrinsic parameters and are known quantities, and once di is solved the coordinates of the feature point Pi in the camera coordinate system are obtained; from the camera coordinates, the image coordinates and the target object coordinates:
Figure FDA0002243402270000073
Figure FDA0002243402270000074
where the feature point P has coordinates (Xw, Yw, Zw) in the measured target object coordinate system Ow-xwywzw and coordinates (Xc, Yc, Zc) in the camera coordinate system Oc-xcyczc; R is the rotation matrix; T is the translation vector;
the rotation matrix and the translation vector determine the orientation and position of the camera relative to the coordinate system of the measured target object; the rotation matrix is an orthogonal matrix and is represented by 3 rotation angles, namely the pitch angle α, the yaw angle β and the roll angle γ; solving the formulas by the three-dimensional coordinate space transformation and a non-iterative method yields the rotation matrix R and the translation vector T, i.e. the pose of the camera relative to the measured target object.
10. The use of the ring-type track intelligent inspection robot based on an underground coal face according to claim 9, wherein from the spatial position coordinates Pi = (Xi, Yi, Zi) of the hydraulic supports calculated from the pose, any one feature point is selected to calculate the transverse and longitudinal straightness of the hydraulic supports; coordinate fitting in the three axis directions is carried out using the same feature point on each hydraulic support to obtain the straightness of the supports in multiple directions, and finally experimental simulation is carried out on the numerical data.
CN201911008206.8A 2019-10-22 2019-10-22 Ring-type track intelligent inspection robot based on underground coal face and application thereof Pending CN110727223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911008206.8A CN110727223A (en) 2019-10-22 2019-10-22 Ring-type track intelligent inspection robot based on underground coal face and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911008206.8A CN110727223A (en) 2019-10-22 2019-10-22 Ring-type track intelligent inspection robot based on underground coal face and application thereof

Publications (1)

Publication Number Publication Date
CN110727223A true CN110727223A (en) 2020-01-24

Family

ID=69222772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911008206.8A Pending CN110727223A (en) 2019-10-22 2019-10-22 Ring-type track intelligent inspection robot based on underground coal face and application thereof

Country Status (1)

Country Link
CN (1) CN110727223A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947176B1 (en) * 1999-08-31 2005-09-20 Sharp Kabushiki Kaisha Method for correcting lightness of image
KR20160001897A (en) * 2014-06-27 2016-01-07 서강대학교산학협력단 Image Processing Method and Apparatus for Integrated Multi-scale Retinex Based on CIELAB Color Space for Preserving Color
CN107313801A (en) * 2017-08-23 2017-11-03 合肥中盈信息工程有限公司 Formula robot system is protected in a kind of self-regulation for underground inspection
CN207833574U (en) * 2017-12-29 2018-09-07 马钢集团设计研究院有限责任公司 A kind of intelligent inspection system
CN108267172A (en) * 2018-01-25 2018-07-10 神华宁夏煤业集团有限责任公司 Mining intelligent robot inspection system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
张旭辉 et al.: "Pose detection method for hydraulic supports based on vision measurement", 《工矿自动化》 (Industry and Mine Automation) *
梁广顺 et al.: "Research on image denoising based on bilateral filtering and non-local means", 《光电子•激光》 (Journal of Optoelectronics · Laser) *
肖晓: "Research and implementation of an image enhancement algorithm based on Retinex", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *
陈铁健: "Research on key technologies and applications of machine vision inspection and recognition for intelligent manufacturing equipment", 《中国博士学位论文全文数据库 信息科技辑》 (China Doctoral Dissertations Full-text Database, Information Science and Technology) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427062A (en) * 2020-04-23 2020-07-17 南京大学 Hydraulic support alignment method based on laser radar
CN111427062B (en) * 2020-04-23 2022-11-29 南京大学 Hydraulic support alignment method based on laser radar
CN111915759A (en) * 2020-08-03 2020-11-10 国网安徽省电力有限公司铜陵供电公司 Automatic inspection method for cable work well
CN112412536A (en) * 2020-11-12 2021-02-26 临沂矿业集团菏泽煤电有限公司 Stringing type mobile target inspection bearing system for fully mechanized mining face of mine and working method
CN112612002A (en) * 2020-12-01 2021-04-06 北京天地玛珂电液控制系统有限公司 Digital construction system and method for scene space of full working face under coal mine
CN113504737A (en) * 2021-05-11 2021-10-15 中钢集团马鞍山矿山研究总院股份有限公司 Multi-element pregnant disaster digital twin intelligent perception identification early warning system and method
CN115903592A (en) * 2022-11-10 2023-04-04 山东新普锐智能科技有限公司 Hydraulic car unloader control method and system based on pose detection
CN115922745A (en) * 2022-12-13 2023-04-07 北京龙德时代技术服务有限公司 Detection method and system of movable lifting gas inspection robot
CN117455802A (en) * 2023-12-25 2024-01-26 榆林金马巴巴网络科技有限公司 Noise reduction and enhancement method for image acquisition of intrinsic safety type miner lamp
CN117455802B (en) * 2023-12-25 2024-04-05 榆林金马巴巴网络科技有限公司 Noise reduction and enhancement method for image acquisition of intrinsic safety type miner lamp

Similar Documents

Publication Publication Date Title
CN110727223A (en) Ring-type track intelligent inspection robot based on underground coal face and application thereof
CN111091072A (en) YOLOv 3-based flame and dense smoke detection method
CN105488941B (en) Double spectrum fire monitoring method and devices based on Infrared-Visible image
CN106971152B (en) Method for detecting bird nest in power transmission line based on aerial images
US20160260306A1 (en) Method and device for automated early detection of forest fires by means of optical detection of smoke clouds
CN101393603B (en) Method for recognizing and detecting tunnel fire disaster flame
CN110689531A (en) Automatic power transmission line machine inspection image defect identification method based on yolo
CN109559310A (en) Power transmission and transformation inspection image quality evaluating method and system based on conspicuousness detection
CN106981063A (en) A kind of grid equipment state monitoring apparatus based on deep learning
CN106485868A (en) The monitoring server of the monitoring method of the condition of a fire, system and the condition of a fire
CN108389359A (en) A kind of Urban Fires alarm method based on deep learning
CN108209926A (en) Human Height measuring system based on depth image
CN113299035A (en) Fire identification method and system based on artificial intelligence and binocular vision
CN102456142A (en) Analysis method for smoke blackness based on computer vision
CN116738552B (en) Environment detection equipment management method and system based on Internet of things
CN108830840A (en) A kind of active intelligent detecting method of circuit board defect and its application
CN111488802A (en) Temperature curve synthesis algorithm using thermal imaging and fire early warning system
CN111931573A (en) Helmet detection and early warning method based on YOLO evolution deep learning model
CN113593170B (en) Intelligent early warning system based on remote smoke detection
CN114627461A (en) Method and system for high-precision identification of water gauge data based on artificial intelligence
CN108182679B (en) Haze detection method and device based on photos
CN110967285B (en) Smoke concentration quantization standard experimental box based on image recognition
CN116778269A (en) Method for constructing product surface defect detection model based on self-encoder reconstruction
CN111539264A (en) Ship flame detection positioning system and detection positioning method
CN111126230A (en) Smoke concentration quantitative evaluation method and electronic equipment applying same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200124

RJ01 Rejection of invention patent application after publication