CN117308900A - Underground transport vehicle movement measurement system and carrying traffic state simulation and monitoring method - Google Patents
Underground transport vehicle movement measurement system and carrying traffic state simulation and monitoring method
- Publication number
- CN117308900A (application CN202311624264.XA)
- Authority
- CN
- China
- Prior art keywords
- module
- carrying
- model
- roadway
- measurement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01C15/00 — Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
- G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/165 — Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
- G01C21/1652 — Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
- G01C7/06 — Tracing profiles of cavities, e.g. tunnels
- G06F18/25 — Pattern recognition; fusion techniques
- G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM]
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/006 — Mixed reality
- G06T7/70 — Determining position or orientation of objects or cameras
- G06V10/80 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82 — Image or video recognition or understanding using neural networks
- G08B31/00 — Predictive alarm systems characterised by extrapolation or other computation using updated historic data
- H04W4/30 — Services specially adapted for particular environments, situations or purposes
- H04W64/00 — Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W84/18 — Self-organising networks, e.g. ad-hoc networks or sensor networks
- Y02T10/40 — Engine management systems
Abstract
The invention relates to the technical field of mine transport vehicle measurement and discloses an underground transport vehicle movement measurement system and a carrying traffic state simulation and monitoring method. The measurement system comprises a perception measurement module; a calculation module for processing the data transmitted by the perception measurement module; an isolation module providing communication isolation between the perception measurement module and the calculation module; a communication module for wireless communication across different underground areas; an intrinsically safe power module that supplies power and converts it to the voltage levels of the corresponding devices; a power supply module; and a display screen for locally displaying the modeling and monitoring state. The perception measurement module comprises a line laser sensor, an intrinsically safe laser radar, an intrinsically safe camera, an inertial measurement unit and an ultra-wideband ranging unit. The invention enables dynamic simulation and intelligent monitoring of the carrying traffic state when a coal mine transport vehicle carries large equipment such as a hydraulic support, achieving efficient, safe and reliable transport of large equipment.
Description
Technical Field
The invention relates to the technical field of mine transport vehicle measurement, and in particular to an underground transport vehicle movement measurement system and a carrying traffic state simulation and monitoring method.
Background
Large electromechanical equipment at a coal mine underground working face, such as shearers, hydraulic supports and emulsion pumps, is transported on flat cars or monorail crane locomotives driven by operators. To strengthen mine transportation safety management and eliminate transportation accidents, signallers must comprehensively measure and inspect the roadway and track conditions by manual inspection and manual measurement before equipment is transported: whether the real-time condition of the roadway permits passage, whether the height and width of equipment such as supports meet transportation requirements, and whether the roof, floor and side-wall states along the route are strictly controlled. The workload is heavy, and manual measurement carries large errors, so accidents such as toppling and jamming of the carried equipment occur, and safety and reliability are difficult to guarantee. During transportation, the driver and the signaller must make contact in advance, the vehicle must be driven slowly according to the signals fed back, and it must be stopped whenever the line of sight is unclear, a whistle is not heard or an instruction is not understood, which severely limits transportation efficiency. The signaller must also clear personnel from the roadway and its vicinity in time by manual inspection to ensure that no one stays or works there, so potential safety hazards are difficult to avoid. Moreover, the operating conditions of flat cars and monorail cranes are complex and severe: undulations, curves, slopes and fixed obstacles such as belt conveyors exist along the route, transporting large equipment is highly dangerous, and the passability cannot be known in advance, which brings great hidden danger. There is therefore an urgent need to change the current situation in which roadway conditions must be checked and measured manually before and during transportation, and to improve the intelligence level of the transportation process.
Disclosure of Invention
The invention aims to provide an underground transport vehicle movement measurement system and a carrying traffic state simulation and monitoring method that solve the problems described in the background art.
In order to achieve the above object, the present invention provides an underground transport vehicle movement measurement system, comprising a perception measurement module;
a calculation module, used for processing the data transmitted by the perception measurement module;
the isolation module is used for providing communication isolation between the sensing measurement module and the calculation module;
the communication module is used for wireless communication in different areas in the pit;
the intrinsically safe power module is used for supplying power separately to the perception measurement module, the calculation module, the isolation module and the communication module, and for converting it to the voltage levels of the corresponding electrical devices;
the power supply module is used for outputting direct current to supply power for various electric equipment;
the display screen is used for locally displaying modeling and monitoring states;
wherein the calculation module, the isolation module, the intrinsically safe power module, the communication module and the power supply module are arranged in an explosion-proof control cabinet of the transport vehicle;
the perception measurement module comprises a line laser sensor and is used for scanning a roadway area;
The intrinsic safety type laser radar is used for collecting three-dimensional point cloud data;
the intrinsic safety camera is used for observing the section form of the tunnel and the line laser scanning area;
the inertial measurement unit is used for acquiring attitude information;
and the ultra-wideband ranging unit is used for realizing the positioning of the wireless sensor network.
Preferably, the number and installation angles of the intrinsically safe laser radars are determined according to the conditions of the transport vehicle and the carried equipment, and the radars collect three-dimensional point cloud data of the roadway section while the vehicle travels;
the intrinsically safe cameras are visual cameras arranged on a sensor bracket of the transport vehicle, their number is determined by actual requirements, and they are used for observing the overall shape of the roadway section ahead and the line laser scanning area;
the inertia measurement unit in the sensing measurement module is deployed on the explosion-proof control cabinet according to actual conditions;
the ultra-wideband ranging unit realizes wireless sensor network positioning through interactive ranging with the base station.
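The interactive ranging with base stations can be turned into a position fix by trilateration. Below is a minimal 2-D least-squares sketch; the function name, anchor layout and ranges are illustrative, not taken from the patent, and a fielded system would first apply the ultra-wideband ranging correction coefficients obtained during calibration:

```python
import math

def uwb_position(anchors, ranges):
    """Estimate a 2-D tag position from ranges to fixed UWB base stations.

    Linearises the circle equations |x - a_i|^2 = d_i^2 by subtracting
    the first one, then solves the 2x2 normal equations directly.
    Illustrative sketch only.
    """
    (x0, y0), d0 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(d0**2 - di**2 + xi**2 + yi**2 - x0**2 - y0**2)
    # Normal equations (A^T A) p = A^T b, solved with an explicit 2x2 inverse.
    s11 = sum(a * a for a, _ in rows)
    s12 = sum(a * b for a, b in rows)
    s22 = sum(b * b for _, b in rows)
    t1 = sum(a * r for (a, _), r in zip(rows, rhs))
    t2 = sum(b * r for (_, b), r in zip(rows, rhs))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Three base stations along a roadway, tag actually at (2, 1):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 5.0)]
ranges = [math.dist((2.0, 1.0), a) for a in anchors]
print(tuple(round(v, 3) for v in uwb_position(anchors, ranges)))  # → (2.0, 1.0)
```

With noise-free ranges the linear system recovers the tag position exactly; with real measurements the least-squares form averages out ranging error across all base stations.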
Preferably, the computing module comprises a CPU, a GPU and memory, and is used for executing the core functions of mobile measurement and data acquisition, multi-source information fusion modeling, graphical-user-interface man-machine interaction, carrying state simulation, key parameter generation, and generation of anti-collision obstacle avoidance strategies; the input/output interfaces of the computing module comprise a communication port and an SD card slot.
Preferably, the isolation module comprises a multipath signal isolation circuit; the intrinsic safety power supply module comprises multiple paths of intrinsic safety power supplies with different voltage levels.
Preferably, the communication module combines a network bridge, Wi-Fi 6 and wireless Mesh, with an external antenna that meets intrinsic safety requirements; the power supply module comprises several potted and sealed intrinsically safe power supply boxes, a power switch and an aviation plug.
A carrying traffic state simulation method comprises the following steps:
s101, calibrating external parameters of a perception measurement module: performing internal and external parameter calibration on a sensor of a sensing measurement module to obtain laser radar distortion correction parameters, vision camera internal parameters, zero offset of an inertial measurement unit and ultra-wideband ranging correction coefficients;
s102, installing a perception measurement module: according to the size of the loading equipment, the measuring equipment is installed on site, the positions and the directions of the intrinsically safe laser radar, the intrinsically safe camera and the line laser sensor are adjusted, and the lighting equipment is installed on the transport vehicle according to actual conditions;
s103, raw data acquisition and recording: driving or remotely controlling an empty transport vehicle along the roadway and back to acquire the roadway data once, including the three-dimensional point cloud, images of the roadway surroundings, camera images of the line laser, ultra-wideband measured distances, and raw acceleration and angular velocity information; the data are acquired and recorded with robot operating system (ROS) tools;
S104, multi-source information fusion SLAM (simultaneous localization and mapping) modeling: performing multi-source information fusion SLAM modeling using the radar, camera, inertial and UWB sensor information from sensors deployed at several different poses, to obtain a three-dimensional raw point cloud model and a three-dimensional color point cloud model;
s105, offline optimization of the model: post-processing is carried out on the multisource information fusion SLAM modeling result, noise reduction and filtering processing are carried out on the point cloud model, and cutting and simplifying are carried out on redundant data;
s106, extracting key information of a roadway model: comprehensively utilizing a vision and line laser triangulation method and a three-dimensional point cloud model parameter estimation method obtained by multi-source information fusion SLAM to jointly estimate the cross section, the outline size and the central axis of the roadway;
s107, carrying device loading and envelope surface size measurement: loading equipment on a transport vehicle, and measuring the critical dimension of an envelope surface of the loading equipment;
s108, generating a loading model: entering the envelope dimensions in the software interface to generate a simplified equipment model, producing cube, cylinder or sphere solid models according to the equipment type;
s109, generating a carrying model: selecting and loading a transport vehicle model in a software interface tab according to the basic parameters of the volume and the size of the transport vehicle carrying equipment, and combining the transport vehicle model with the carried equipment model to generate a final carrying model;
S110, running state simulation: on a framework platform based on the robot operating system, importing the carrying model, loading the running track obtained by multi-source information fusion SLAM, driving the carrying model to move, performing a dynamic running-state simulation demonstration, and observing the actual running conditions;
s111, data analysis and early warning prediction: based on the running state simulation process, outputting relevant measurement data and key parameters, analyzing and processing dangerous area alarm information, calculating and outputting parameters of an area which cannot be passed, and outputting a trafficability analysis report of the whole roadway; setting a threshold value, and grading, early warning and predicting the non-passing or risky areas.
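The noise reduction and filtering applied to the point cloud model in s105 is not specified further in the patent; one common choice is statistical outlier removal, sketched here brute-force (the neighbour count k, the standard-deviation ratio and the sample cloud are assumptions):

```python
import math

def remove_outliers(points, k=4, std_ratio=1.0):
    """Statistical outlier removal for a point cloud (brute-force sketch).

    For every point, compute the mean distance to its k nearest
    neighbours; drop points whose mean distance exceeds the global mean
    by more than std_ratio standard deviations. Real pipelines would
    use a k-d tree rather than this O(n^2) search.
    """
    mean_d = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_d.append(sum(dists[:k]) / k)
    mu = sum(mean_d) / len(mean_d)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_d) / len(mean_d))
    limit = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_d) if d <= limit]

# Dense run of wall returns plus one stray return far from the roadway:
cloud = [(x * 0.1, 0.0, 0.0) for x in range(20)] + [(50.0, 50.0, 0.0)]
print(len(remove_outliers(cloud)))  # → 20
```

The stray point's mean neighbour distance dwarfs the cluster's, so it alone exceeds the limit and is dropped.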
Preferably, in S104, in the multi-source information fusion SLAM modeling process, information fusion is performed by using a laser-vision-inertia-ultra-wideband tight coupling mode.
Preferably, in S106, during extraction of the key information of the roadway model, the vision and line laser triangulation method projects the beam laser so that its spots form linear stripes on the inner wall of the roadway; the projection height of each stripe point is calculated from the vision image, and the heights are accumulated by a Riemann integral to obtain the cross-sectional area of the projected contour, thereby measuring and estimating the key roadway parameters of cross section, contour dimensions and central axis. The three-dimensional point cloud model parameter estimation method takes the model obtained by multi-source information fusion SLAM modeling, further splices, denoises and simplifies the point cloud, calculates the central axis by a projection method, and fits the outer contour curve with a polynomial function, likewise measuring and estimating the cross section, contour dimensions and central axis. The key parameters obtained by the two methods are combined by weighted averaging to realize joint estimation.
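The Riemann-integral accumulation of stripe heights and the weighted-average joint estimation can be illustrated as follows; the sample heights, spacing, second estimate and equal weights are invented for the example, since the patent does not fix their values:

```python
def section_area(heights, dx):
    """Riemann-sum approximation of the roadway profile area swept by the
    line laser: heights are the per-point projection heights recovered by
    triangulation from the camera image, sampled every dx metres."""
    return sum(heights) * dx

def joint_estimate(area_laser, area_cloud, w_laser=0.5, w_cloud=0.5):
    """Weighted-average fusion of the two independent estimates (line-laser
    triangulation and the SLAM point cloud model); the weights here are
    placeholders."""
    return (w_laser * area_laser + w_cloud * area_cloud) / (w_laser + w_cloud)

# A 4 m wide arched profile sampled every 0.5 m:
heights = [2.0, 2.6, 3.0, 3.2, 3.2, 3.0, 2.6, 2.0]
a_laser = section_area(heights, 0.5)    # 10.8 m^2
a_cloud = 11.0                          # e.g. estimated from the point cloud model
print(round(joint_estimate(a_laser, a_cloud), 2))  # → 10.9
```

A finer sampling step dx makes the Riemann sum converge to the true contour area; the weights could be tuned to favour whichever sensor is more reliable in a given roadway section.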
Preferably, in S110, in the running state simulation, the loaded transport vehicle model selects multiple tabs according to the type of the flat car, and the size and shape parameters of the mounting device are set through the man-machine interaction interface to generate a corresponding mark, so as to realize synchronous driving dynamic demonstration with the transport vehicle.
Preferably, in S111, during data analysis and early warning prediction, the distance S between the transport vehicle carrying model and the roadway point cloud is calculated from the coordinates of the points in the generated three-dimensional point cloud model and the pose of the carrying model; when S is less than a set threshold S_th, an early warning is issued, the position, attitude and distance of the carrying model at the warning location are indicated, and the position expected to be impacted is predicted and highlighted.
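A brute-force sketch of the early-warning distance check against the threshold (the function name, sample points and threshold value are illustrative; a fielded system would query a spatial index over the full roadway cloud):

```python
import math

def clearance_check(cloud, model_points, s_th):
    """Minimum distance S between the carrying-model envelope points
    (already transformed into the roadway frame via the SLAM pose) and
    the roadway point cloud; warns when S < S_th and reports the pair
    of points predicted to collide first."""
    best = (float("inf"), None, None)
    for c in model_points:
        for p in cloud:
            d = math.dist(c, p)
            if d < best[0]:
                best = (d, c, p)
    s, corner, point = best
    return {"S": s, "warn": s < s_th, "model_point": corner, "wall_point": point}

# Roadway wall samples at height 2 m and two envelope corners; threshold 0.5 m:
wall = [(x * 0.5, 2.0, 0.0) for x in range(10)]
corners = [(1.0, 1.7, 0.0), (1.0, 0.0, 0.0)]
result = clearance_check(wall, corners, s_th=0.5)
print(result["warn"], round(result["S"], 2))  # → True 0.3
```

The returned model/wall point pair is what the simulation would highlight as the predicted impact location.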
A method of traffic state monitoring comprising the steps of:
s201, constructing a digital twin model of a roadway and carrying equipment: constructing a roadway digital twin model by utilizing a roadway point cloud model obtained by multi-source information fusion SLAM modeling in a carrying traffic state simulation method, constructing a digital twin model of carrying equipment by utilizing the carrying model generated in S109, and introducing the digital twin model into a Unity3D simulation engine to construct an initial visual simulation model;
s202, carrying state live-action updating and real-time pose driving based on multi-source information fusion SLAM: based on an initial visual digital twin model of a roadway in Unity3D, in the motion process of transport vehicle carrying equipment, carrying out live-action update on a basic model based on a high-precision point cloud model obtained by multi-source information fusion SLAM, and driving the digital twin simulation model to move based on a real-time pose obtained by multi-source information fusion SLAM;
S203, multi-type target real-time detection, identification, segmentation and positioning based on multi-sensor fusion: in the moving process of the carrying equipment, the laser point cloud and the visual image are utilized to carry out target detection, target identification and semantic segmentation based on a deep learning network, and the dynamic and static objects of personnel, vehicles and equipment are detected, identified, segmented and positioned;
s204, intelligent decision-making, early warning and monitoring of the carrying traffic process based on reinforcement learning: constructing a software development platform on an AirSim + Unity3D + ROS simulation engine; importing in real time the roadway and carrying-equipment digital twin models during operation together with the multi-type multi-target real-time detection, identification, segmentation and positioning results; expanding the twin simulation model scenes and autonomously deducing future states with a reinforcement-learning method; correcting the decision scheme by combining the simulation deduction results with field-feedback observation data; generating an anti-collision obstacle avoidance strategy; giving online early warnings of dangerous situations during vehicle operation; and realizing intelligent monitoring of the vehicle running state;
s205, carrying traffic state virtual-real interaction and man-machine visualization: and carrying out virtual-real superposition and fusion display on the roadway real-time point cloud model constructed based on the multi-source information fusion SLAM and the constructed roadway twin body, and realizing immersive multi-perception interactive man-machine visual interaction based on augmented reality.
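The real-time pose driving of the digital twin in s202 amounts to transforming the twin's local geometry by the pose estimated by multi-source information fusion SLAM. A planar sketch (a full system would apply a six-degree-of-freedom transform inside the simulation engine; the footprint and pose below are invented):

```python
import math

def drive_twin(model_points, pose):
    """Apply a real-time SLAM pose (x, y, yaw in radians) to the local
    2-D points of the carrying-equipment twin, yielding roadway-frame
    coordinates that an engine such as Unity3D could render."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in model_points]

# Unit-square footprint of the carried equipment, rotated 90 degrees and shifted:
footprint = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
moved = drive_twin(footprint, (5.0, 2.0, math.pi / 2))
print([(round(a, 2), round(b, 2)) for a, b in moved])
# → [(5.0, 2.0), (5.0, 3.0), (4.0, 3.0), (4.0, 2.0)]
```

Re-running this transform at each SLAM update keeps the virtual body synchronized with the physical vehicle, which is the "real-time pose driving" the method describes.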
Preferably, in S203, the multi-sensor-fusion-based real-time detection, identification, segmentation and positioning of multiple target types uses a multi-task multi-modal deep learning network for the detection, identification and segmentation tasks, and realizes six-degree-of-freedom pose estimation of multiple targets of multiple types based on the multi-sensor extrinsic calibration results and the real-time pose estimation from multi-source information fusion SLAM, thereby achieving positioning.
Preferably, in S205, during carrying traffic state virtual-real interaction and man-machine visualization, the virtual-real superposition and fusion display of the twin body uses the local color point cloud model constructed in real time in the roadway to realize digital twin functions of real-time mapping, live-scene mapping and high-frequency updating between the physical body and the virtual body; the augmented-reality immersive multi-perception interaction comprises an integrated interaction platform consisting of an explosion-proof AR helmet, glasses, a display screen and an audio interface, which gives text prompts and voice broadcasts of the carrying process, dangerous areas and the work plan, realizing man-machine interaction and visual early warning.
Therefore, the underground transport vehicle movement measurement system and the carrying traffic state simulation and monitoring method have the following beneficial effects:
(1) By constructing the underground transport vehicle movement measurement system, the transport vehicle can finely measure and model every part of the roadway as a whole before the carrying process, so that the key parameters of the carrying roadway are measured and recorded accurately, automatically and comprehensively, and the state of the carrying process can be judged in advance;
(2) By simulating the carrying traffic state with the constructed roadway model, the dynamic movement of the transport vehicle and the carried equipment within the roadway model can be demonstrated and simulated in all directions, the relative pose relationship between the carrying process and the roadway model is displayed intuitively, and parameters such as the minimum distance between the carried equipment and the roadway model are calculated, thereby realizing danger-grade classification, early warning and prediction of the safety state of the carrying process;
(3) Through the online monitoring of the carrying traffic state, virtual-real superposition and intelligent interaction between the constructed roadway model and the twin constructed in real time are realized on the basis of technologies such as digital twinning, reinforcement learning and augmented reality, and the state of the carrying process is monitored through an immersive, multi-perception, interactive man-machine visualization method. The system has a high degree of intelligence, the simulation and visualization of the carrying process are more realistic, and a true three-dimensional, high-precision working-condition simulation capability is provided for further intelligent control.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a schematic diagram of a system for measuring movement of a downhole transporter according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the installation of a device of a mobile measurement system for a downhole transporter according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for simulating a traffic state according to an embodiment of the present invention;
FIG. 4 is a flow chart of a method for monitoring traffic status in accordance with an embodiment of the present invention;
FIG. 5 is a flow chart of a multi-source fusion positioning method according to an embodiment of the invention;
FIG. 6 is a block diagram of a standard Kalman filter iteration according to an embodiment of the present invention.
Reference numerals
1. A transport vehicle; 2. an explosion-proof control cabinet; 3. an intrinsically safe lidar; 4. an intrinsically safe camera; 5. an inertial measurement unit; 6. an ultra-wideband ranging unit; 7. a line laser sensor.
Detailed Description
The technical scheme of the invention is further described below through the attached drawings and the embodiments.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by one of ordinary skill in the art to which this invention belongs. The terms "first," "second," and the like do not denote any order, quantity, or importance, but merely distinguish one element from another. The word "comprising" or "comprises" means that the elements or items preceding the word include those listed after the word and their equivalents, without excluding other elements or items. The terms "disposed," "mounted," and "connected" are to be construed broadly: a connection may, for example, be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal communication between two elements. "Upper," "lower," "left," "right," and the like indicate only relative positional relationships, which may change when the absolute position of the described object changes.
Examples
As shown in FIG. 1, the system for measuring the movement of an underground transport vehicle according to the present invention comprises a perception measurement module, a calculation module, an isolation module, a communication module, an intrinsically safe power module and a power supply module, and adopts a combined flameproof and intrinsically safe explosion protection design. The calculation module, the isolation module, the intrinsically safe power module, the communication module and the power supply module are arranged in the flameproof control cabinet 2 of the transport vehicle 1. The flameproof control cabinet 2 is fixedly connected to the transport vehicle 1 at a position that is not obstructed by large pieces of carried equipment such as hydraulic supports, and the extrinsic transformation of this position relative to the geometric center of the vehicle can be obtained quickly by manual measurement. The flameproof control cabinet 2 carries a sensor bracket. The measurement system further comprises a display screen for locally displaying the modeling and monitoring states.
As shown in FIG. 2, the perception measurement module is mounted on the sensor bracket, with the mounting position on the transport vehicle 1 chosen according to the characteristics of the transport vehicle 1 and the carried equipment. The perception measurement module comprises a line laser sensor 7, an intrinsically safe lidar 3, an intrinsically safe camera 4, an inertial measurement unit 5 and an ultra-wideband ranging unit 6; the mounting and extrinsic transformation relationships between the sensors can be obtained directly from the mechanical mounting relationships. The number and mounting angles of the intrinsically safe lidars 3 are chosen according to the actual conditions of the transport vehicle 1 and the carried equipment: the radar positions are adjusted in the horizontal direction to cover the target area effectively, and in the vertical direction to meet the perception requirements. The intrinsically safe lidars 3 acquire complete three-dimensional point cloud data near the roadway section while travelling. The intrinsically safe cameras 4 are vision cameras mounted on the transport vehicle 1 in numbers determined by actual requirements, and observe the overall shape of the roadway section ahead and the line laser scanning area. The inertial measurement unit 5 of the perception measurement module may be deployed on the flameproof control cabinet 2 as conditions allow, which makes its extrinsic transformation convenient to measure.
The ultra-wideband ranging unit 6 in the perception measurement module operates only when ultra-wideband base stations are deployed in the roadway, and realizes wireless-sensor-network positioning by ranging interactively with the base stations.
The computing module processes the data transmitted by the perception measurement module and comprises a CPU, a GPU and memory. It executes the core functions of mobile measurement and data acquisition, multi-source information fusion modeling, graphical-user-interface man-machine interaction, carrying state simulation, key parameter generation, and anti-collision obstacle-avoidance strategy generation. Its input/output interfaces include conventional communication ports, including but not limited to USB, RJ45, HDMI and VGA, as well as an SD card slot and a solid-state disk that can be plugged and unplugged quickly for rapid copying and transfer of data. The isolation module comprises multi-channel signal isolation circuits, including but not limited to RS232, RS485 and optoelectronic signal isolation (such as an optical transceiver), and provides communication isolation between the perception measurement module and the computing module. The intrinsically safe power module comprises multiple intrinsically safe power supplies of different voltage levels, which power the perception measurement module, computing module, isolation module and communication module in separate branches, converting to the voltage level of the corresponding device. The communication module combines a network bridge, WIFI6 and wireless Mesh with external antennas, meets the intrinsic safety requirements, and suits the wireless communication needs of different underground areas.
The power supply module comprises a plurality of potted, intrinsically safe power boxes that output direct current sufficient for the endurance requirements of the various electrical devices; it further comprises an intrinsically safe power switch and aviation plug for switching the output circuit of the power supply module on and off and for connecting the charging circuit.
As shown in FIG. 3, the carrying traffic state simulation method of the present invention performs the following steps on the basis of the movement measurement system of the underground transport vehicle 1:
S101, extrinsic calibration of the perception measurement module: performing intrinsic and extrinsic calibration of the sensors contained in the perception measurement module, such as the intrinsically safe lidar 3, the vision camera and the inertial measurement unit 5, to obtain the lidar distortion correction parameters, the vision camera intrinsics, the zero bias of the inertial measurement unit 5 and the ultra-wideband ranging correction coefficients;
S102, installing the perception measurement module: installing the measurement equipment on site according to the size of the carried equipment, adjusting the positions and orientations of the intrinsically safe lidar 3, the vision camera and the line laser sensor 7, and equipping the locomotive with illumination as conditions require;
S103, raw data acquisition and recording: driving the unloaded vehicle, or controlling it remotely, through one round trip while collecting roadway data including, but not limited to, raw information such as three-dimensional point clouds, images of the roadway surroundings, camera images observing the line laser, ultra-wideband measured distances, accelerations and angular velocities; in one implementation, the rosbag tool of the Robot Operating System is used for data acquisition and recording;
S104, multi-source information fusion SLAM modeling: performing multi-source information fusion SLAM modeling with the radar, camera, inertial and UWB sensor information deployed in several different poses, to obtain a three-dimensional raw point cloud model and a three-dimensional color point cloud model;
S105, offline optimization of the model: post-processing the multi-source information fusion SLAM modeling result, denoising and filtering the point cloud model, and cropping and simplifying redundant data;
S106, extracting key information of the roadway model: jointly estimating the cross section, maximum contour size, minimum contour size and central axis of the roadway by combining the vision and line laser triangulation method with the parameter estimation method for the three-dimensional point cloud model obtained by multi-source information fusion SLAM;
S107, loading the carried equipment and measuring the envelope size: loading the equipment on the transport vehicle 1 as close to a cuboid arrangement as possible, and measuring the critical dimensions of the envelope of the loaded equipment;
S108, generating the loading model: entering the envelope dimensions quickly in the software interface and generating a simplified equipment model, producing solid models such as cuboids, cylinders and spheres according to the equipment type;
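Step S108 can be sketched in a few lines. The following Python fragment is a minimal, hypothetical illustration of generating a simplified primitive solid from measured envelope dimensions; the names `EnvelopeModel` and `make_envelope_model` and the shape-selection rule are assumptions for illustration, not the patent's software interface.

```python
from dataclasses import dataclass

@dataclass
class EnvelopeModel:
    shape: str    # "box", "cylinder" or "sphere"
    dims: tuple   # characteristic dimensions in metres

def make_envelope_model(length, width, height, equipment_type="generic"):
    """Return a simplified solid model for the measured envelope size."""
    if equipment_type == "cable_reel":
        # a reel is better approximated by a cylinder lying on its side
        radius = max(width, height) / 2.0
        return EnvelopeModel("cylinder", (radius, length))
    # default: axis-aligned bounding box of the envelope
    return EnvelopeModel("box", (length, width, height))

model = make_envelope_model(4.2, 1.8, 1.6)
```

A model produced this way can then be combined with the vehicle model in step S109.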
S109, generating the carrying model: according to basic parameters such as the volume and size of the equipment carried by the transport vehicle 1, selecting and loading a transport model in the software interface tab, and combining the transport vehicle 1 model with the carried equipment model to produce the final carrying model;
S110, running state simulation: based on the Robot Operating System (ROS), using the Gazebo and Rviz framework platforms, importing the carrying model, loading the running trajectory obtained by multi-source information fusion SLAM to drive the carrying model in motion, conducting a dynamic demonstration of the running state simulation, and observing the actual running conditions;
S111, data analysis and early-warning prediction: based on the running state simulation process, outputting the relevant measurement data and key parameters, analyzing and processing dangerous-area alarm information, calculating and outputting the parameters of impassable areas, and outputting a trafficability analysis report for the whole roadway; setting thresholds, and grading, warning about and predicting the impassable or risky areas.
The extrinsic calibration of the perception measurement module in S101 and the multi-source information fusion SLAM modeling in S104 are performed synchronously, i.e. by offline calibration plus online calibration. The online calibration method mainly estimates the states of the radar and the camera with a mapping algorithm using graph optimization or Kalman filtering, thereby estimating their relative translation and rotation. The offline estimation method mainly uses the radar to provide 3D data and the camera to provide 2D data: the 3D points are described in the world coordinate system, the camera pose at each moment is obtained by a PnP method, and the relative transformation between the camera and the radar is solved by nonlinear optimization (e.g. the Gauss-Newton method). In the S104 multi-source information fusion SLAM modeling process, information fusion uses a laser-vision-inertia-ultra-wideband tightly coupled mode; in one implementation, the tightly coupled fusion is based on an iterated error-state extended Kalman filter (IESKF) and factor graph optimization (FGO).
In the S104 multi-source information fusion SLAM modeling process, information fusion uses a laser-vision-inertia-ultra-wideband tightly coupled mode; in one implementation, the tightly coupled fusion is based on an iterated error-state extended Kalman filter (IESKF) and factor graph optimization (FGO). Specifically, a vision-laser-inertia tightly coupled odometer based on UWB online anchors performs the tightly coupled fusion and is structured as follows. The ultra-wideband radio ranging (UWB) modules are deployed in two roles: some are deployed around the environment as anchors, whose positions are computed by an online anchor calibration unit; the others are placed on the flat car as tags. A tag actively transmits ranging requests to the anchors in a preset sequence, the distance between two modules is obtained according to the TW-ToF measurement principle, part of the noise is filtered out by an outlier detection unit, and the distance is fed as input into the error-state Kalman filter. The IMU (inertial measurement unit) measures the acceleration and angular velocity of the flat car at high frequency; the IMU measurements pass through a low-pass filter module and are likewise fed into the error-state Kalman filter. The measurements of the binocular camera, after processing by the visual odometer, are also fed in as input, and the lidar measurements are downsampled and fed in to construct point-to-plane residual constraints. The online anchor calibration unit, the outlier detection unit and the error-state Kalman filter are modules called by the embedded on-board processor, which processes the inter-UWB-module range measurements, the IMU measurements, the binocular camera measurements and the lidar measurements with the error-state Kalman filter and resolves the position and attitude of the flat car by tightly coupled optimization.
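The TW-ToF ranging step mentioned above can be illustrated with a short sketch. This is a generic two-way time-of-flight calculation plus a crude outlier gate; the function names and the 1 m jump limit are illustrative assumptions, not values from the patent.

```python
C = 299_792_458.0  # speed of light, m/s

def twtof_distance(t_round, t_reply):
    """Two-way time-of-flight range: the tag measures the round-trip
    time t_round; the anchor reports its internal reply delay t_reply.
    The one-way flight time is half the difference."""
    return C * (t_round - t_reply) / 2.0

def is_outlier(d, d_prev, max_jump=1.0):
    """Crude outlier gate (an assumption, standing in for the patent's
    outlier detection unit): reject ranges that jump more than
    max_jump metres between consecutive measurements."""
    return d_prev is not None and abs(d - d_prev) > max_jump
```

For example, a 0.1 microsecond excess of round-trip time over reply delay corresponds to roughly a 15 m range.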
In the S106 roadway model key information extraction process, the vision and line laser triangulation method projects a laser line that forms a linear stripe on the inner wall of the roadway; the projection height of each point of the stripe is computed from the vision images, and the heights of the points are accumulated by a Riemann sum to obtain the cross-sectional area of the contour projection, thereby measuring and estimating the key roadway parameters of cross section, maximum contour size, minimum contour size and central axis. The three-dimensional point cloud model parameter estimation method takes the three-dimensional point cloud model obtained by multi-source information fusion SLAM modeling, further stitches, denoises and simplifies the point cloud, computes the central axis by a projection method and fits the outer contour curve with a polynomial function, likewise measuring and estimating the key roadway parameters of cross section, maximum contour size, minimum contour size and central axis. The key parameters obtained by the two methods are combined by weighted averaging to realize the joint estimation.
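The Riemann-sum accumulation of stripe heights described above amounts to numerically integrating the laser profile. A minimal sketch, with the function name and uniform left-endpoint sampling assumed for illustration:

```python
def cross_section_area(xs, hs):
    """Approximate the projected cross-sectional area of a roadway
    profile by a Riemann sum: hs[i] is the projection height measured
    at lateral position xs[i] along the line-laser stripe."""
    area = 0.0
    for i in range(len(xs) - 1):
        dx = xs[i + 1] - xs[i]   # lateral spacing between stripe samples
        area += hs[i] * dx       # left Riemann sum of height * width
    return area
```

Finer sampling of the stripe reduces the discretization error of the sum in the usual way.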
In the S110 running state simulation, the loaded transport vehicle 1 model can be selected according to the type of a common flat car; parameters such as the size and shape of the mounted equipment are set through the man-machine interface, and the corresponding markers are generated quickly in Rviz, realizing a dynamic demonstration driven synchronously with the transport vehicle 1.
In the S111 data analysis and early-warning prediction process, the minimum distance S between the carrying model of the transport vehicle 1 and the roadway point cloud is calculated from the coordinates of the points in the generated three-dimensional point cloud model and the pose of the carrying model. If S is smaller than the set threshold S_th, i.e. S < S_th, an early warning is issued, the position and attitude of the carrying model at the warning location are indicated, and the location expected to be struck is predicted and highlighted. As one way of calculating the minimum distance, a brute-force search can compute, for each point coordinate on the transport vehicle 1 in the current pose, the minimum distance to the local point cloud coordinates of the surrounding roadway.
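The brute-force minimum-distance check described above can be sketched as follows; the function names and the 0.3 m threshold are illustrative assumptions, not the patent's values.

```python
import math

def min_clearance(vehicle_pts, cloud_pts):
    """Brute-force search: smallest Euclidean distance from any point
    of the carrying model, in its current pose, to the surrounding
    roadway point cloud."""
    best = math.inf
    for vp in vehicle_pts:
        for cp in cloud_pts:
            d = math.dist(vp, cp)
            if d < best:
                best = d
    return best

S_TH = 0.3  # warning threshold S_th in metres (illustrative value)

def collision_warning(vehicle_pts, cloud_pts, s_th=S_TH):
    """Return (warn, S): warn is True when S < S_th."""
    s = min_clearance(vehicle_pts, cloud_pts)
    return s < s_th, s
```

In practice a spatial index (k-d tree, voxel grid) would replace the double loop, but the brute-force form mirrors the search the text names.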
As shown in FIG. 4, the carrying traffic state monitoring method of the present invention performs the following steps on the basis of the movement measurement system of the underground transport vehicle 1:
S201, constructing digital twin models of the roadway and the carried equipment: constructing a roadway digital twin model from the roadway point cloud model obtained by multi-source information fusion SLAM modeling in the carrying traffic state simulation method, constructing a digital twin model of the carried equipment from the carrying model generated in S109, and importing them into the Unity3D simulation engine to build the initial visual simulation model;
S202, live-scene updating and real-time pose driving of the carrying state based on multi-source information fusion SLAM: starting from the initial visual digital twin model of the roadway in Unity3D, during the motion of the transport vehicle 1 and the carried equipment, updating the base model with the live scene from the high-precision point cloud model obtained by multi-source information fusion SLAM, and driving the digital twin simulation model in motion with the real-time pose obtained by multi-source information fusion SLAM;
S203, real-time detection, recognition, segmentation and positioning of multiple types of targets based on multi-sensor fusion: during the movement of the carried equipment, using the laser point cloud and the visual images for target detection, target recognition and semantic segmentation based on a deep learning network, thereby detecting, recognizing, segmenting and positioning dynamic and static objects such as personnel, vehicles and equipment near the carrying vehicle;
S204, intelligent decision, early warning and monitoring of the carrying traffic process based on reinforcement learning: building a software development platform on the AirSim + Unity3D + ROS simulation engine; importing in real time the digital twin models of the roadway and the carried equipment during operation, together with the real-time detection, recognition, segmentation and positioning results for multiple types of targets; expanding the scenes of the twin simulation model by a reinforcement learning method and autonomously deducing future states; correcting the decision scheme by combining the simulated deduction results with observation data fed back from the field; generating an anti-collision obstacle-avoidance strategy; warning online of dangerous situations during vehicle operation; and thereby realizing intelligent monitoring of the vehicle running state;
S205, virtual-real interaction and man-machine visualization of the carrying traffic state: performing virtual-real superposition and fusion display of the roadway real-time point cloud model constructed by multi-source information fusion SLAM and the constructed roadway twin, and realizing immersive, multi-perception, interactive man-machine visual interaction based on augmented reality.
In the S203 process of real-time detection, recognition, segmentation and positioning of multiple types of targets based on multi-sensor fusion, a multi-task, multi-modal deep learning network performs the detection, recognition and segmentation tasks, and six-degree-of-freedom pose estimation of multiple targets of multiple types is realized on the basis of the multi-sensor extrinsic calibration results from S101 and the real-time pose estimates of the multi-source information fusion SLAM in S202, thereby realizing the positioning.
In the S205 virtual-real interaction and man-machine visualization of the carrying traffic state, the virtual-real superposition and fusion display of the twin uses the local color point cloud model of the roadway constructed in real time to realize real-time mapping, live-scene mapping and high-frequency updating of the digital twin of the physical body and the virtual body; the immersive multi-perception interaction based on augmented reality comprises an integrated interaction platform consisting of an explosion-proof AR helmet, glasses, a display screen and audio interfaces, which gives text prompts and voice broadcasts of the carrying process, dangerous areas, the work plan and the like, realizing man-machine interaction and visual early warning.
The multi-source fusion positioning method is used in a multi-source fusion positioning system comprising UWB devices, an IMU, a wheel odometer and a radar. It can select the optimal observation underground and improve the positioning accuracy of multi-source fusion navigation; and an update-and-reinitialize mechanism applied to the wheel/visual odometer eliminates the accumulated error of the observations in time, enabling continuous navigation and positioning across different scenes while reducing the overall power consumption of the navigation module.
UWB (Ultra-Wide Band) is a wireless carrier communication technology that does not use a sinusoidal carrier but transmits data in nanosecond-scale non-sinusoidal narrow pulses, so it occupies a wide spectrum. It offers low system complexity, low power spectral density of the transmitted signal, insensitivity to channel fading, a low probability of interception and high positioning accuracy, and is particularly suitable for high-speed wireless access in indoor and other dense multipath environments.
IMU (Inertial measurement unit), which is a device for measuring the three-axis angular velocity and acceleration of an object. The general IMU includes a tri-axis gyroscope and a tri-axis accelerometer. The IMU information is measurement data of the IMU device.
As shown in fig. 5, the multi-source fusion positioning method includes:
S301, judging whether the UWB precision factor exceeds a preset first precision threshold; if not, executing step S305; if so, executing step S302.
The UWB precision factor is a dilution of precision (DOP), which may be, without limitation, a position DOP, a clock-error DOP, a horizontal-component DOP, a vertical-component DOP, or the like.
S302, judging whether the wheel type odometer slips, if not, executing a step S304; if so, step S303 is performed.
The wheel odometer is a device for estimating a change in the position of an object with time using data obtained from a movement sensor, and may be specifically a wheel odometer or the like, and the embodiment of the present application is not limited thereto. The odometer information is measurement data of the wheel type odometer.
S303, starting the radar, carrying out fusion positioning through the IMU and the radar to obtain a positioning result, and ending the flow.
The IMU/radar fusion mode serves as the lowest-priority fusion mode, further guaranteeing fallback positioning information in complex environments and avoiding excessive positioning error; the second precision threshold is determined directly from characteristics of the final positioning information such as its output noise and variation statistics.
S304, performing fusion positioning with the IMU and the wheel odometer to obtain the positioning result, and ending the flow. When the wheel odometer is judged not to be slipping, the IMU/wheel odometer fusion mode is entered, and fusion positioning with the IMU and the wheel odometer yields the positioning result.
S305, fusion positioning is carried out through the IMU and the UWB, a positioning result is obtained, and the process is ended.
And when the UWB precision factor is not larger than a preset first precision threshold value, entering an IMU/UWB fusion mode, and carrying out fusion positioning through the IMU and UWB to obtain a positioning result.
The criterion for switching to the IMU/UWB fusion mode is mainly the UWB positioning precision factor DOP, i.e. judging whether the UWB precision factor (DOP) exceeds the preset first precision threshold. The positioning DOP is an evaluation value defined comprehensively by the UWB module from the base-station distribution, ranging variation and the like, and the first precision threshold, which characterizes whether the horizontal positioning precision is good, is determined by experimental analysis.
The preset first and second precision thresholds are set in advance; specifically, the DOP thresholds may for example lie between 1 and 5, without limitation. Their values can be obtained through separate tests or experiments.
The execution subject of the method can be a computing device such as a computer, a server, a multi-source fusion positioning system and the like, which is not limited in any way.
The priority of the specified measurements, chosen according to the adaptability of each device to different environments, is: UWB > wheel odometer > radar, the corresponding fusion modes being IMU/UWB fusion mode > IMU/wheel odometer fusion mode > IMU/radar fusion mode. Only when a higher-priority sensing source fails (or its precision factor exceeds the preset precision threshold) is the next sensing source in the priority order selected as the observation in the Kalman filtering iteration. The preset precision thresholds used in judging whether a sensing source has failed comprise the aforementioned first precision threshold and the second precision threshold; values below them indicate sufficient positioning precision, and values above them indicate insufficient precision.
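The decision flow of S301–S305 can be condensed into a small selection function. This is a hypothetical sketch: the mode labels and argument names are illustrative, and the DOP threshold is whatever the experimental analysis fixes as the first precision threshold.

```python
def select_fusion_mode(uwb_dop, dop_threshold, wheel_slipping):
    """Mirror the flow of S301-S305: prefer IMU/UWB while the UWB
    dilution of precision is acceptable, fall back to the wheel
    odometer when it is not slipping, and use the radar last."""
    if uwb_dop <= dop_threshold:   # S301 "no" branch -> S305
        return "IMU/UWB"
    if not wheel_slipping:         # S302 "no" branch -> S304
        return "IMU/wheel_odometer"
    return "IMU/radar"             # S303
```

The selected mode names the observation fed to the Kalman filtering iteration of FIG. 6.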
The constructed mathematical models of the rotation, speed and position estimation algorithms are as follows:
1. The rotation estimation algorithm model comprises a state equation and a measurement equation.
The state equation is:
$$\begin{bmatrix} \theta_{x,k} \\ \theta_{y,k} \\ \theta_{z,k} \end{bmatrix} = \begin{bmatrix} \theta_{x,k-1} \\ \theta_{y,k-1} \\ \theta_{z,k-1} \end{bmatrix} + \begin{bmatrix} \omega_{x,k-1} \\ \omega_{y,k-1} \\ \omega_{z,k-1} \end{bmatrix} \Delta t$$
The measurement equation is:
$$z_k = \theta_{y,k}$$
where $\theta_x$ is the rotation angle around the x-axis; $\theta_y$ is the rotation angle around the y-axis, and is also the heading angle of the UWB; $\theta_z$ is the rotation angle around the z-axis; and $\omega_x$, $\omega_y$, $\omega_z$ are the angular velocities around the x-, y- and z-axes.
2. The speed estimation algorithm model comprises a state equation and a measurement equation, wherein the state equation is:
$$\begin{bmatrix} v_{x,k} \\ v_{y,k} \\ v_{z,k} \end{bmatrix} = \begin{bmatrix} v_{x,k-1} \\ v_{y,k-1} \\ v_{z,k-1} \end{bmatrix} + \begin{bmatrix} a_{x,k-1} \\ a_{y,k-1} \\ a_{z,k-1} \end{bmatrix} \Delta t$$
The measurement equation is:
$$\begin{bmatrix} v_{h,k} \\ v_{v,k} \end{bmatrix} = \begin{bmatrix} \cos\theta_{y,k} & \sin\theta_{y,k} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_{x,k} \\ v_{y,k} \\ v_{z,k} \end{bmatrix}$$
where $v_{x,k}$, $v_{y,k}$, $v_{z,k}$ are the speeds in the x, y and z directions at the k-th moment; $a_{x,k}$, $a_{y,k}$, $a_{z,k}$ are the linear accelerations in the x, y and z directions at the k-th moment; and $v_{h,k}$, $v_{v,k}$ are the horizontal and vertical speeds at the k-th moment.
3. The position estimation algorithm model comprises a state equation and a measurement equation, wherein the state equation dead-reckons the position from the estimated speed over the sampling interval Δt:
x(k) = x(k−1) + v_x(k−1)·Δt; y(k) = y(k−1) + v_y(k−1)·Δt; z(k) = z(k−1) + v_z(k−1)·Δt;
the measurement equation expresses the measured position through the displacement vector P:
x_m = x_0 + |P|·cos β·cos α; y_m = y_0 + |P|·cos β·sin α; z_m = z_0 + |P|·sin β;
wherein x(k), y(k), z(k) respectively represent the displacement along the x, y and z directions at the k-th moment; x_m, y_m, z_m respectively represent the measured positions along the x, y and z directions; x_0, y_0, z_0 respectively represent the initial positions along the x, y and z directions; α is the included angle between the projection of the displacement vector P on the xoy plane and the positive direction of the x-axis; β is the included angle between the displacement vector P and its projection on the xoy plane.
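The position measurement described above converts the displacement-vector magnitude |P| and the angles α and β into Cartesian coordinates. A minimal sketch of that conversion (function and variable names are illustrative, not from the patent):

```python
import math

# Sketch of the position measurement mapping: measured position equals the
# initial position plus the displacement vector P expressed through
# alpha (angle of P's xoy-plane projection from the +x axis) and
# beta (elevation of P above the xoy plane). Values are illustrative.
def measured_position(p0, p_norm, alpha, beta):
    x0, y0, z0 = p0
    xm = x0 + p_norm * math.cos(beta) * math.cos(alpha)
    ym = y0 + p_norm * math.cos(beta) * math.sin(alpha)
    zm = z0 + p_norm * math.sin(beta)
    return xm, ym, zm

# A unit displacement along +x (alpha = beta = 0) from the origin:
print(measured_position((0.0, 0.0, 0.0), 1.0, 0.0, 0.0))  # -> (1.0, 0.0, 0.0)
```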
The measurement equations are linear, and the speed- and position-level state equations are also linear, so the standard five-step Kalman filtering iteration can be applied directly.
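The standard five-step Kalman filtering iteration (state prediction, covariance prediction, gain computation, state update, covariance update) can be sketched for a linear model such as the speed model above. All matrices, noise covariances and observation values below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Minimal sketch of one standard five-step Kalman filter iteration
# for a linear system x_k = F x_{k-1} + B u_{k-1}, z_k = H x_k.
def kf_step(x, P, u, z, F, B, H, Q, R):
    x_pred = F @ x + B @ u                           # 1) state prediction
    P_pred = F @ P @ F.T + Q                         # 2) covariance prediction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # 3) Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)            # 4) state update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred        # 5) covariance update
    return x_new, P_new

# 1-D speed example: v_k = v_{k-1} + a_{k-1}*dt, observed by an odometer speed.
dt = 0.1
F = np.array([[1.0]]); B = np.array([[dt]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[1e-2]])
x = np.array([0.0]); P = np.array([[1.0]])
x, P = kf_step(x, P, u=np.array([2.0]), z=np.array([0.25]), F=F, B=B, H=H, Q=Q, R=R)
```

With the large prior covariance, the update pulls the predicted speed (0.2) most of the way toward the observation (0.25), and the covariance shrinks accordingly.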
The hardware components of the system implementing the multi-source fusion positioning method comprise sensing sources, a VIO module, a satellite module, an IMU module, a GNSS RTK module, a data transmission module, a UWB module, a wheel odometer and a data transmission antenna. The method addresses the problem that, in multi-source fusion positioning, the state estimate is affected by the signal characteristics of every sensing source in any given scene, so the accuracy cannot reach its optimum; it also addresses the problem that, when each sensing source (the VIO module, the IMU module, the UWB module and the wheel odometer) works independently from the start, the wheel odometer cannot eliminate its accumulated error in time, so continuous seamless navigation across different scenes cannot be achieved.
Fig. 6 shows a block diagram of the standard Kalman filter iteration. The rotation estimation algorithm remains as described above, with an EKF (extended Kalman filter) applied to linearize the process.
The execution subject of the multi-source fusion positioning method may be a computing device such as a computer, a server, a multi-source fusion positioning system, and the like, which is not limited in this embodiment.
According to the carrying traffic state simulation and monitoring methods, the multi-source information fusion SLAM runs only its mapping function when another positioning mode is available in the roadway. As one implementation of such a positioning mode, the ultra-wideband ranging and inertial measurement unit 5 fuses the inertial navigation state estimate with the ultra-wideband ranging observations through error-state Kalman filtering, realizing UWB/IMU-based positioning, and the mapping module of the multi-source information fusion SLAM is run on that positioning result.
The underground transport vehicle movement measurement system and the carrying traffic state simulation and monitoring methods are applicable to objects including, but not limited to, underground rail-type electric locomotives, flat plate transport vehicles 1, monorail crane locomotives, trackless rubber-tyred transport vehicles 1, and the like.
Therefore, the underground transport vehicle movement measurement system and the carrying traffic state simulation and monitoring methods enable dynamic simulation and intelligent monitoring of the carrying traffic state when a coal mine transport vehicle carries large equipment such as a hydraulic support, realizing efficient, safe and reliable carrying of large equipment.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution of the invention may be modified or equivalently replaced without departing from its spirit and scope.
Claims (10)
1. An underground transport vehicle movement measurement system, characterized in that it comprises:
a perception measurement module, used for perception and measurement;
the computing module is used for processing the data transmitted by the perception measuring module;
the isolation module is used for providing communication isolation between the sensing measurement module and the calculation module;
the communication module is used for wireless communication in different areas in the pit;
the intrinsic safety power supply module is used for supplying power separately to the perception measurement module, the calculation module, the isolation module and the communication module, and for converting the supply into the voltage levels of the corresponding electrical devices;
the power supply module is used for outputting direct current to supply power for various electric equipment;
the display screen is used for locally displaying modeling and monitoring states;
wherein the calculation module, the isolation module, the intrinsic safety power supply module, the communication module and the power supply module are arranged in an explosion-proof control cabinet of the transport vehicle;
the perception measurement module comprises a line laser sensor and is used for scanning a roadway area;
the intrinsic safety type laser radar is used for collecting three-dimensional point cloud data;
the intrinsic safety camera is used for observing the section form of the tunnel and the line laser scanning area;
The inertial measurement unit is used for acquiring attitude information;
the ultra-wideband ranging unit is used for realizing the positioning of the wireless sensor network;
the number and the installation angle of the intrinsic safety type laser radars are deployed according to the conditions of the transport vehicle and the carrying equipment, and the intrinsic safety type laser radars collect three-dimensional point cloud data of the tunnel section in the advancing process; the intrinsic safety type cameras are arranged on a sensor bracket of the transport vehicle, the intrinsic safety type cameras are visual cameras, the number of the intrinsic safety type cameras is deployed according to actual requirements, and the intrinsic safety type cameras are used for observing the whole shape of the section of the roadway in front and the line laser scanning area; the inertia measurement unit in the sensing measurement module is deployed on the explosion-proof control cabinet according to actual conditions; the ultra-wideband ranging unit realizes wireless sensor network positioning through interactive ranging with the base station.
2. The downhole transporter movement measurement system of claim 1, wherein: the computing module comprises a CPU, a GPU processor and a memory, and is used for executing core function operations of mobile measurement and data acquisition, multi-source information fusion modeling, graphical user interface man-machine interaction, carrying state simulation, key parameter generation and anti-collision obstacle avoidance strategy generation; the computing module input/output interface comprises a communication port and an SD card slot.
3. The downhole transporter movement measurement system of claim 1, wherein: the isolation module comprises a multipath signal isolation circuit; the intrinsic safety power supply module comprises multiple paths of intrinsic safety power supplies with different voltage levels.
4. The downhole transporter movement measurement system of claim 1, wherein: the communication module adopts a combination mode of a network bridge, WIFI6 and wireless Mesh, and an antenna is externally arranged and meets the requirement of intrinsic safety; the power supply module comprises a plurality of pouring and sealing intrinsic safety power supply boxes, a power switch and an aviation plug.
5. A carrying traffic state simulation method is characterized in that: the method comprises the following steps:
s101, calibrating external parameters of a perception measurement module: performing internal and external parameter calibration on a sensor of a sensing measurement module to obtain laser radar distortion correction parameters, vision camera internal parameters, zero offset of an inertial measurement unit and ultra-wideband ranging correction coefficients;
s102, installing a perception measurement module: according to the size of the loading equipment, the measuring equipment is installed on site, the positions and the directions of the intrinsically safe laser radar, the intrinsically safe camera and the line laser sensor are adjusted, and the lighting equipment is installed on the transport vehicle according to actual conditions;
s103, original data acquisition and recording: driving or remotely controlling an empty transport vehicle, and acquiring roadway data in one reciprocating pass, wherein the roadway data comprise the three-dimensional point cloud, images of the roadway surroundings, camera images of the observed line laser, ultra-wideband measured distances, and raw acceleration and angular velocity information; the data are acquired and recorded using a robot operating system tool;
S104, modeling the multi-source information fusion SLAM: SLAM is synchronous positioning and map construction, and multi-source information fusion SLAM modeling is executed by utilizing radar, cameras, inertial data and UWB sensor information deployed by a plurality of different poses to obtain a three-dimensional original point cloud model and a three-dimensional color point cloud model; in the multi-source information fusion SLAM modeling process, information fusion is carried out by utilizing a laser-vision-inertia-ultra-wideband tight coupling mode;
s105, offline optimization of the model: post-processing is carried out on the multisource information fusion SLAM modeling result, noise reduction and filtering processing are carried out on the point cloud model, and cutting and simplifying are carried out on redundant data;
s106, extracting key information of a roadway model: comprehensively utilizing a vision and line laser triangulation method and a three-dimensional point cloud model parameter estimation method obtained by multi-source information fusion SLAM to jointly estimate the cross section, the outline size and the central axis of the roadway;
s107, carrying device loading and envelope surface size measurement: loading equipment on a transport vehicle, and measuring the critical dimension of an envelope surface of the loading equipment;
s108, generating a loading model: inputting a software interface and generating a simplified equipment model by utilizing the size of the envelope surface, and generating solid models of cubes, cylinders and spheres according to the equipment type;
S109, generating a carrying model: selecting and loading a transport vehicle model in a software interface tab according to the basic parameters of the volume and the size of the transport vehicle carrying equipment, and combining the transport vehicle model with the carried equipment model to generate a final carrying model;
s110, operation state simulation: based on a frame platform used by a robot operating system, a carrying model is imported, a running track obtained by multi-source information fusion SLAM is loaded, the carrying model is driven to move, running state simulation dynamic demonstration is carried out, and actual running conditions are observed; in the running state simulation, the loaded transport vehicle model selects multiple tabs according to the type of the flat car, and the size and shape parameters of the mounting equipment are set through a man-machine interaction interface to generate corresponding marks so as to realize synchronous driving dynamic demonstration with the transport vehicle;
s111, data analysis and early warning prediction: based on the running state simulation process, outputting relevant measurement data and key parameters, analyzing and processing dangerous area alarm information, calculating and outputting parameters of an area which cannot be passed, and outputting a trafficability analysis report of the whole roadway; setting a threshold value, and grading, early warning and predicting the non-passing or risky areas.
6. A method of traffic state simulation according to claim 5, wherein: in S106, in the process of extracting the key information of the roadway model, the vision and line laser triangulation method is used for projecting the laser beam, linear stripes are formed on the inner wall of the roadway by utilizing projection light spots, the projection height of each point of the stripes is calculated based on a vision image, and the projection sectional area of the contour projection is obtained by accumulating the heights of each point by using Rieman integral, so that the measurement and estimation of the roadway cross section, the contour dimension and the central axis roadway key parameters are realized; the three-dimensional point cloud model parameter estimation method utilizes a three-dimensional point cloud model obtained by multi-source information fusion SLAM modeling to further carry out point cloud splicing, noise reduction and simplification, calculates a central axis based on a projection method, utilizes a polynomial function to fit an outer contour curve, and realizes measurement and estimation of the cross section, contour dimension and central axis roadway key parameters of the roadway; the key parameters obtained by the two methods are calculated through weighted average to realize joint estimation.
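The Riemann-integral accumulation of stripe-point heights described in claim 6 amounts to a Riemann sum of the height profile over the stripe. A minimal sketch, where the sample spacing and height values are illustrative assumptions:

```python
# Illustrative Riemann-sum accumulation of stripe-point projection heights
# into a projected cross-sectional area. Spacing/heights are assumed values.
def riemann_area(heights, dx):
    """Approximate the integral of the height profile h(x) with step dx."""
    return sum(h * dx for h in heights)

# Four stripe points, each with a 2.0 m projection height, sampled every 0.5 m:
heights = [2.0, 2.0, 2.0, 2.0]
print(riemann_area(heights, 0.5))  # -> 4.0 (square metres)
```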
7. A method of traffic state simulation according to claim 5, wherein: in S111, in the data analysis and early warning prediction process, the distance S between the transport vehicle carrying model and the roadway point cloud is calculated according to the coordinates of the points in the generated three-dimensional point cloud model and the pose of the carrying model; when S is less than a set threshold S_th, early warning is carried out, the position, attitude and distance of the carrying model at the early-warning location are prompted, and the position to be impacted is predicted and highlighted.
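The distance check in claim 7 (warn when S falls below the threshold S_th) can be sketched as a nearest-point query against the roadway point cloud. The point coordinates, pose and threshold below are illustrative assumptions:

```python
import math

# Sketch of the claim-7 early-warning check: compute the minimum distance S
# from the carrying-model pose to the roadway point cloud and flag S < S_th.
def min_distance(pose, cloud):
    px, py, pz = pose
    return min(math.dist((px, py, pz), p) for p in cloud)

def collision_warning(pose, cloud, s_th):
    s = min_distance(pose, cloud)
    return s < s_th, s  # (warning flag, closest distance S)

cloud = [(2.0, 0.0, 0.0), (5.0, 0.0, 0.0)]  # assumed roadway points
print(collision_warning((0.0, 0.0, 0.0), cloud, 2.5))  # -> (True, 2.0)
```

A production implementation would use a spatial index (e.g. a k-d tree) over the point cloud and the full carrying-model envelope rather than a single pose point; the sketch only shows the thresholding logic.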
8. A method for monitoring traffic state of vehicles, which is characterized in that: the method comprises the following steps:
s201, constructing a digital twin model of a roadway and carrying equipment: constructing a roadway digital twin model by utilizing a roadway point cloud model obtained by multi-source information fusion SLAM modeling in a carrying traffic state simulation method, constructing a digital twin model of carrying equipment by utilizing the generated carrying model, and introducing the digital twin model into a Unity3D simulation engine to construct an initial visual simulation model;
s202, carrying state live-action updating and real-time pose driving based on multi-source information fusion SLAM: based on an initial visual digital twin model of a roadway in Unity3D, in the motion process of transport vehicle carrying equipment, carrying out live-action update on a basic model based on a high-precision point cloud model obtained by multi-source information fusion SLAM, and driving the digital twin simulation model to move based on a real-time pose obtained by multi-source information fusion SLAM;
s203, multi-type target real-time detection, identification, segmentation and positioning based on multi-sensor fusion: in the moving process of the carrying equipment, the laser point cloud and the visual image are utilized to carry out target detection, target identification and semantic segmentation based on a deep learning network, and the dynamic and static objects of personnel, vehicles and equipment are detected, identified, segmented and positioned;
S204, carrying traffic process intelligent decision, early warning and monitoring based on reinforcement learning: constructing a software development platform based on the AirSim + Unity3D + ROS simulation engines; importing, in real time, the digital twin models of the roadway and the carrying equipment during operation together with the multi-type multi-target real-time detection, identification, segmentation and positioning results; expanding the scenes of the twin simulation model based on a reinforcement learning method and autonomously deducing future states; correcting the decision scheme by synthesizing the simulation deduction results with field-feedback observation data; generating an anti-collision obstacle avoidance strategy; performing online early warning of dangerous situations during vehicle operation; and realizing intelligent monitoring of the vehicle running state;
s205, carrying traffic state virtual-real interaction and man-machine visualization: and carrying out virtual-real superposition and fusion display on the roadway real-time point cloud model constructed based on the multi-source information fusion SLAM and the constructed roadway twin body, and realizing immersive multi-perception interactive man-machine visual interaction based on augmented reality.
9. A method of traffic state monitoring according to claim 8, characterized in that: in S203, the multi-type target real-time detection, identification, segmentation and positioning process based on multi-sensor fusion is performed by utilizing the multi-task multi-mode deep learning network to perform detection, identification and segmentation tasks, and the multi-type multi-target six-degree-of-freedom pose estimation is realized based on the multi-sensor external parameter calibration result and the multi-source information fusion SLAM real-time pose estimation, so that positioning is realized.
10. A method of traffic state monitoring according to claim 8, characterized in that: in S205, in carrying traffic state virtual-real interaction and man-machine visualization, in the process of virtual-real superposition and fusion display of a twin body, a local color point cloud model constructed in real time by a roadway is used for realizing the functions of real-time mapping, live-action mapping and high-frequency updating of digital twin bodies and virtual bodies; immersive multi-perception interaction based on augmented reality comprises an integrated interaction platform consisting of an anti-explosion AR helmet, glasses, a display screen and an audio interface, and is used for carrying out text prompt and voice broadcast on a carrying process, a dangerous area and a work plan, so that man-machine interaction and visual early warning are realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311624264.XA CN117308900B (en) | 2023-11-30 | 2023-11-30 | Underground transport vehicle movement measurement system and carrying traffic state simulation and monitoring method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117308900A true CN117308900A (en) | 2023-12-29 |
CN117308900B CN117308900B (en) | 2024-02-09 |
Family
ID=89285290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311624264.XA Active CN117308900B (en) | 2023-11-30 | 2023-11-30 | Underground transport vehicle movement measurement system and carrying traffic state simulation and monitoring method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117308900B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107843208A (en) * | 2017-10-27 | 2018-03-27 | 北京矿冶研究总院 | Mine roadway contour sensing method and system |
CN108345305A (en) * | 2018-01-31 | 2018-07-31 | 中国矿业大学 | Railless free-wheeled vehicle intelligent vehicle-mounted system, underground vehicle scheduling system and control method |
CN111735445A (en) * | 2020-06-23 | 2020-10-02 | 煤炭科学研究总院 | Monocular vision and IMU (inertial measurement Unit) integrated coal mine tunnel inspection robot system and navigation method |
CN115454057A (en) * | 2022-08-24 | 2022-12-09 | 中国矿业大学 | Digital twin intelligent control modeling system and method for coal mine robot group |
US20230066441A1 (en) * | 2020-01-20 | 2023-03-02 | Shenzhen Pudu Technology Co., Ltd. | Multi-sensor fusion slam system, multi-sensor fusion method, robot, and medium |
CN115981337A (en) * | 2023-01-18 | 2023-04-18 | 中国矿业大学 | Underground unmanned vehicle decision making system and method based on multi-source information |
CN116352722A (en) * | 2023-05-10 | 2023-06-30 | 天地科技股份有限公司北京技术研究分公司 | Multi-sensor fused mine inspection rescue robot and control method thereof |
CN116518984A (en) * | 2023-07-05 | 2023-08-01 | 中国矿业大学 | Vehicle road co-location system and method for underground coal mine auxiliary transportation robot |
Non-Patent Citations (1)
Title |
---|
李猛钢 (Li Menggang) et al.: "LiDAR/IMU tightly-coupled SLAM method for coal mine mobile robots", Industry and Mine Automation (《工矿自动化》), vol. 48, no. 12, pages 68-77 *
Also Published As
Publication number | Publication date |
---|---|
CN117308900B (en) | 2024-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7125214B2 (en) | Programs and computing devices | |
US20230244233A1 (en) | Determining a three-dimensional model of a scan target | |
CN111693050B (en) | Indoor medium and large robot navigation method based on building information model | |
CN109541535A (en) | A method of AGV indoor positioning and navigation based on UWB and vision SLAM | |
CN111813130A (en) | Autonomous navigation obstacle avoidance system of intelligent patrol robot of power transmission and transformation station | |
EP3799618B1 (en) | Method of navigating a vehicle and system thereof | |
CN112461227A (en) | Intelligent autonomous navigation method for polling wheel type chassis robot | |
CN115256414B (en) | Mining drilling robot and coupling operation method thereof with geological and roadway model | |
Al-Darraji et al. | A technical framework for selection of autonomous uav navigation technologies and sensors | |
KR102396675B1 (en) | Position estimation and 3d tunnel mapping system of underground mine autonomous robot using lidar sensor, and its method | |
US20220282967A1 (en) | Method and mobile detection unit for detecting elements of infrastructure of an underground line network | |
CN109491383A (en) | Multirobot positions and builds drawing system and method | |
EP4148385A1 (en) | Vehicle navigation positioning method and apparatus, and base station, system and readable storage medium | |
Cui et al. | Navigation and positioning technology in underground coal mines and tunnels: A review | |
CN212515475U (en) | Autonomous navigation obstacle avoidance system of intelligent patrol robot of power transmission and transformation station | |
CN117308900B (en) | Underground transport vehicle movement measurement system and carrying traffic state simulation and monitoring method | |
CN116352722A (en) | Multi-sensor fused mine inspection rescue robot and control method thereof | |
CN113002540B (en) | Mining dump truck control method and device | |
CN210072405U (en) | Unmanned aerial vehicle cooperative control verification platform | |
Samarakoon et al. | Impact of the Trajectory on the Performance of RGB-D SLAM Executed by a UAV in a Subterranean Environment | |
CN113808419A (en) | Method for determining an object of an environment, object sensing device and storage medium | |
US20230288224A1 (en) | Ultrasonic wave-based indoor inertial navigation mapping method and system | |
US20230237793A1 (en) | False track mitigation in object detection systems | |
CN115685989A (en) | Unmanned system and control method for mining electric locomotive | |
Bai et al. | Subway Obstacle Detection System Based on Multi-sensor Data Fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||