CN117201567A - System and method for controlling cage entering and exiting through underground mining scene perception fusion technology - Google Patents


Info

Publication number
CN117201567A
Authority
CN
China
Prior art keywords
vehicle
unit
information
cage
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311460434.5A
Other languages
Chinese (zh)
Other versions
CN117201567B (en)
Inventor
黄琰
周欣
王琦
田瑞丰
夏宇
卫晓滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Leike Zhitu Tai'an Automobile Technology Co ltd
Polytechnic Leike Zhitu Beijing Technology Co ltd
Original Assignee
Leike Zhitu Tai'an Automobile Technology Co ltd
Polytechnic Leike Zhitu Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leike Zhitu Tai'an Automobile Technology Co ltd, Polytechnic Leike Zhitu Beijing Technology Co ltd filed Critical Leike Zhitu Tai'an Automobile Technology Co ltd
Priority to CN202311460434.5A priority Critical patent/CN117201567B/en
Publication of CN117201567A publication Critical patent/CN117201567A/en
Application granted granted Critical
Publication of CN117201567B publication Critical patent/CN117201567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application discloses a cage access control system and method based on mining scene perception fusion technology, relating to the technical field of unmanned driving. The method comprises the following steps: acquiring multi-source heterogeneous data comprising traveling-vehicle information and environment information; analyzing the acquired vehicle information and environment information through a preset neural network model to generate a preliminary control instruction for the cage; analyzing the vehicle information and environment information through a preset deep-learning-based model and generating a control instruction for the cage in combination with obstacle-avoidance trajectory planning; merging the control instruction generated in the cloud computing step with the control instruction generated in the road-end computing step according to a cloud-instruction-first rule to form a final control instruction; and driving the cage to open or close according to the final control instruction. Aiming at the low efficiency of vehicle cage-access control in the prior art, the application improves the cooperative control efficiency of underground cage access.

Description

System and method for controlling cage entering and exiting through underground mining scene perception fusion technology
Technical Field
The application relates to the technical field of unmanned driving, and in particular to a cage access control system and method based on underground mining scene perception fusion technology.
Background
With the rapid development of the Chinese economy, the exploitation of mineral resources keeps increasing, and mine engineering technology is widely applied to the extraction of coal and other resources. To ensure the ventilation quality of underground working areas, ventilation doors or cages must be arranged at mine entrances and exits, and vehicles entering and leaving must be strictly controlled. However, conventional manual control methods are inefficient and cannot meet the ever-increasing demands of downhole ventilation and transportation. Developing intelligent, accurate cage access control technology for mining scenes, and thereby automating the control of vehicles entering and leaving, is therefore of great significance for improving mining efficiency and the level of safe production.
With the rapid development of intelligent networked vehicles, various intelligent networked operation vehicles are widely used in underground operations and have markedly improved underground operating efficiency. However, in existing underground mining scenes the cage structure is relatively simple and vehicle access is controlled only manually, which cannot meet the transportation requirements of intelligent networked operation vehicles.
In the related art, Chinese patent document CN116343501A, for example, provides a management and control system and method for intersections between mining-area operation paths and public roads. That application focuses on environment perception and vehicle behavior prediction at intersections, does not address cage control in underground mining scenes, and is difficult to apply directly with accurate cage-control results. Another related application concentrates on highway intelligent traffic systems and does not consider the influence of the special underground environment on perception and control; applied directly to an underground mining scene, it cannot obtain sufficiently accurate environmental information, so subsequent control efficiency is low. A further related application provides a roadside perception method and system for vehicle-road cooperation, but relying on roadside perception alone cannot obtain the global information of a vehicle, and without cloud computing and analysis capability the perception of the complex underground environment is insufficient and control efficiency is low.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the problem of low control efficiency of vehicles entering and exiting the cage in the prior art, the invention provides a cage access control system and method based on underground mining scene perception fusion technology, which generates control instructions through collaborative analysis at the road end and the cloud, combines trajectory planning and obstacle-avoidance algorithms, and improves the cooperative control efficiency of underground cage access.
2. Technical solution
The aim of the invention is achieved by the following technical solution.
One aspect of the embodiments of the present disclosure provides an access cage control system for a mining scene-aware fusion technique, comprising: the road side sensing module is used for detecting running vehicle information and environment information; the computing module is connected with the road side sensing module, receives the vehicle information and the environment information sent by the road side sensing module and generates a control instruction; the cage control module is connected with the calculation module, receives the control instruction sent by the calculation module, and executes opening or closing operation of the cage; the communication module is respectively connected with the road side sensing module, the calculation module and the cage control module and is used for communicating through V2I or V2N.
Further, the road side perception module includes: an image acquisition unit that acquires vehicle information and environmental information using visual techniques; a radar unit that detects the position and speed of a vehicle using radars of different frequency bands, with a CFAR interference-suppression algorithm and a Kalman-filtering target-tracking algorithm; an inertial measurement unit that acquires measurement data of the vehicle using an IMU and processes the measurement data with an extended Kalman filtering algorithm to obtain the motion state of the vehicle, the motion state including speed and acceleration; a sensor unit that uses an extended Kalman filter to fuse multi-source heterogeneous data from the IMU, the radar unit, an infrared sensor and a meteorological sensor to obtain vehicle attitude information, the attitude information including position, direction and angle; an obstacle detection unit that generates three-dimensional point cloud data through a visual algorithm from the image data acquired by the image acquisition unit and the distance data acquired by the radar unit, and obtains the obstacle category from the generated three-dimensional point cloud data based on a convolutional neural network; and a positioning unit that obtains positioning parameters of the vehicle and the cage, namely coordinate positions, from the data of the inertial measurement unit and the sensor unit using an extended Kalman filtering or particle filtering algorithm.
Further, the obstacle detection unit includes: an image feature extraction subunit that receives the image data acquired by the image acquisition unit and obtains multi-scale feature points of the image data using the scale-invariant feature transform (SIFT) algorithm; a distance information acquisition subunit that receives the distance data acquired by the radar unit and obtains sampled distance data through multipath suppression; a data matching subunit that matches the multi-scale feature points and the sampled data using the iterative closest point (ICP) algorithm to generate matched three-dimensional point cloud data; a three-dimensional reconstruction subunit that generates a three-dimensional point cloud model from the matched three-dimensional point cloud data based on a Poisson surface reconstruction algorithm; and a three-dimensional detection subunit that analyzes the three-dimensional point cloud model based on a Point Net network, outputs confidences of different categories through the network's classification layer, and judges the obstacle category according to a confidence threshold.
Further, the computing module includes: the road end computing sub-module receives the collected vehicle information and environment information through a wired network, computes the received information through a preset neural network and outputs a control instruction; the cloud computing sub-module receives the acquired vehicle information and environment information through a wireless network, computes the received information through a preset deep learning model and a preset access cage priority and outputs a control instruction; the control submodule controls the opening and closing of the cage according to the control instruction output by the road end calculation submodule or the cloud end calculation submodule; the intelligent network connection judging sub-module judges whether the vehicle is an intelligent network connection vehicle or not through comprehensive detection of the vehicle; for the intelligent network connection vehicle, a control instruction is sent to the vehicle and cage control module; and for the non-intelligent network-connected vehicle, sending a control instruction to a cage control module.
Further, the intelligent networking judgment submodule comprises: the vehicle-mounted terminal signaling analysis unit is used for judging whether the identification code accords with the identification specification of the intelligent network connection vehicle or not by analyzing the identification code in the terminal signaling sent by the vehicle so as to judge whether the vehicle supports the intelligent network connection function or not; the vehicle sensor data monitoring unit monitors the type, the number and the performance parameters of sensors installed on the vehicle and judges whether the sensor configuration meets the requirements of the intelligent network-connected vehicle on the sensors; the communication protocol verification unit is used for receiving a communication request of the vehicle, extracting protocol data adopted by the communication request, and verifying whether the protocol data accords with a standard protocol specified by an intelligent networking technology by comparing a preset intelligent networking standard protocol library; and the comprehensive judgment unit judges that the vehicle is an intelligent network vehicle when the judgment conditions of the vehicle-mounted terminal signaling analysis unit, the vehicle sensor data monitoring unit and the communication protocol verification unit are all met.
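As an illustration of how the comprehensive judgment unit might combine the three checks, the following minimal Python sketch assumes a hypothetical identification-code format, sensor list and protocol library (none of which are specified in the application) and declares a vehicle intelligent-networked only when all three conditions hold.

```python
import re
from dataclasses import dataclass, field

# Hypothetical thresholds and protocol names used only for illustration;
# the patent does not specify concrete identification rules.
ICV_ID_PATTERN = re.compile(r"^ICV-[0-9A-F]{8}$")      # assumed identification-code format
REQUIRED_SENSORS = {"camera", "radar", "imu"}           # assumed minimum sensor set
STANDARD_PROTOCOLS = {"LTE-V2X", "5G-NR-V2X"}           # assumed standard protocol library

@dataclass
class VehicleReport:
    terminal_id: str                 # identification code from terminal signaling
    sensors: set = field(default_factory=set)
    protocol: str = ""

def check_signaling(report: VehicleReport) -> bool:
    """Vehicle-mounted terminal signaling analysis unit."""
    return bool(ICV_ID_PATTERN.match(report.terminal_id))

def check_sensors(report: VehicleReport) -> bool:
    """Vehicle sensor data monitoring unit."""
    return REQUIRED_SENSORS.issubset(report.sensors)

def check_protocol(report: VehicleReport) -> bool:
    """Communication protocol verification unit."""
    return report.protocol in STANDARD_PROTOCOLS

def is_intelligent_networked(report: VehicleReport) -> bool:
    """Comprehensive judgment unit: all three conditions must hold."""
    return check_signaling(report) and check_sensors(report) and check_protocol(report)

if __name__ == "__main__":
    v = VehicleReport("ICV-1A2B3C4D", {"camera", "radar", "imu", "lidar"}, "5G-NR-V2X")
    target = "vehicle + cage controller" if is_intelligent_networked(v) else "cage controller only"
    print("send control instruction to:", target)
```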
Further, the road end calculation sub-module includes: a wired communication unit that exchanges information with the road side sensing module over industrial Ethernet; a wireless communication unit that exchanges information with the cloud computing unit over 5G; an information processing unit that performs format conversion, time synchronization and data calibration preprocessing on the data acquired by the road side sensing module; a road end calculation decision unit that stores a preset neural-network-based decision model, takes the preprocessed data as input and outputs an opening or closing control instruction for the cage according to the decision model; a road end priority merging unit that merges the output instruction of the cloud computing sub-module with the output instruction of the road end calculation sub-module by priority, wherein the instruction output by the cloud computing sub-module has higher priority than the instruction output by the road end calculation sub-module; a road end calculation control unit that sends the merged control instruction to the cage control module to control opening or closing of the cage; and a state self-checking unit that monitors state data of the road end calculation sub-module, including processor utilization, memory occupation and network communication quality, judges whether the processing state of the road end calculation sub-module is abnormal, and, if so, sends an abnormality report and the state data to the cloud computing unit.
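A minimal sketch of the road end priority merging rule described above; the Command structure and its field names are illustrative assumptions rather than part of the application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str        # "open" or "close"
    source: str        # "cloud" or "road_end"
    timestamp: float

def merge_by_priority(cloud_cmd: Optional[Command],
                      road_cmd: Optional[Command]) -> Optional[Command]:
    """Road-end priority merging unit: the cloud instruction, when present,
    overrides the road-end instruction; otherwise the road-end result is used."""
    if cloud_cmd is not None:
        return cloud_cmd
    return road_cmd

# Example: cloud says "close", road end says "open" -> merged result is "close".
merged = merge_by_priority(Command("close", "cloud", 12.0),
                           Command("open", "road_end", 12.1))
print(merged.action if merged else "no command")
```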
Further, the cloud computing sub-module includes: the receiving unit is used for acquiring the vehicle information and the environment information acquired by the road side sensing module through a 5G communication link; the analysis unit is used for analyzing the vehicle sensor data acquired by the road side sensing module by using a deep learning model based on a convolutional neural network and predicting the vehicle motion state of the vehicle, which comprises speed and acceleration; the obstacle analysis unit is used for receiving the three-dimensional point cloud data of the obstacle output by the road side sensing module, carrying out cluster analysis on the three-dimensional point cloud data and outputting an obstacle avoidance track; the cloud computing decision unit is used for carrying out track planning and outputting a vehicle suggested track by considering vehicle information, environment information, a predicted vehicle motion state and a predicted obstacle avoidance track based on the vehicle obstacle avoidance model; the command unit generates a control command for opening or closing the cage according to the vehicle proposal track; the priority merging unit is used for carrying out priority merging on the control instruction input by the user and the control instruction output by the instruction unit, wherein the priority of the control instruction input by the user is higher than that of the control instruction output by the cloud computing submodule; the output unit is used for sending the combined control instruction to the road end computing sub-module; a recording unit for storing vehicle information, environment information and control instructions by using a NoSQL database; and the abnormal unit is used for judging whether the cloud computing unit fails or not by detecting network delay and system response timeout indexes.
Further, the obstacle analysis unit includes: the point cloud acquisition unit is used for receiving the three-dimensional point cloud data of the obstacle output by the road side sensing module; the point cloud segmentation unit is used for segmenting the three-dimensional point cloud data by adopting a hierarchical clustering algorithm and extracting a point cloud set of each obstacle; a size acquisition unit that calculates a bounding box of each point cloud set as size information of the obstacle; a position acquisition unit for acquiring the mass center of each point cloud set by adopting a shape fitting technology as the position information of the obstacle; the track generation unit inputs the size information and the position information of the obstacle into an obstacle avoidance Trajectory Rollout algorithm, performs dynamic track planning through an RRT algorithm, simulates obstacle avoidance movement of the vehicle and outputs an obstacle avoidance track; wherein the shape fitting technique comprises: performing plane fitting on the point cloud set by adopting a least square method, and calculating a plane normal vector of the point cloud set; projecting along the plane normal vector direction to obtain projection distribution of the point cloud in the plane normal vector direction; the mathematical expectation of the projection distribution is calculated as the centroid coordinates of the point cloud.
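The bounding-box and centroid computations described for the size and position acquisition units can be sketched as follows; this is one possible NumPy interpretation of the least-squares plane fitting and normal-direction projection, not the application's exact procedure.

```python
import numpy as np

def bounding_box(points: np.ndarray):
    """Size acquisition unit: axis-aligned bounding box of one obstacle's point set."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    return mins, maxs, maxs - mins          # corners and (length, width, height)

def plane_normal_least_squares(points: np.ndarray) -> np.ndarray:
    """Least-squares plane fit via SVD; the singular vector with the smallest
    singular value is the plane normal."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def centroid_by_projection(points: np.ndarray) -> np.ndarray:
    """Position acquisition unit: project the cloud onto the fitted plane normal,
    take the expectation of the projection distribution along that axis, and
    combine it with the in-plane mean as the centroid estimate."""
    mean = points.mean(axis=0)
    n = plane_normal_least_squares(points)
    proj = (points - mean) @ n              # projection distribution along the normal
    return mean + proj.mean() * n           # expectation of the distribution

cluster = np.random.default_rng(0).normal(size=(200, 3))  # stand-in for one segmented obstacle
print(bounding_box(cluster)[2], centroid_by_projection(cluster))
```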
Further, the track generating unit inputs the size information and the position information of the obstacle into an obstacle avoidance Trajectory Rollout algorithm, performs dynamic track planning through an RRT algorithm, simulates obstacle avoidance movement of the vehicle, and outputs an obstacle avoidance track comprising: a receiving unit that receives the size information of the obstacle output by the size acquiring unit and the position information of the obstacle output by the position acquiring unit; the input unit is used for inputting the size information and the position information of the obstacle into a preset obstacle avoidance Trajectory Rollout algorithm; the modeling unit is used for constructing a motion space model containing barrier information by using a barrier avoidance Trajectory Rollout algorithm; the planning unit is used for searching tracks in the motion space model by using a motion planning algorithm based on a rapid-growth random tree RRT, and generating vehicle tracks meeting obstacle avoidance constraint; and the output unit is used for outputting the vehicle track meeting the obstacle avoidance constraint as the obstacle avoidance track.
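For the planning unit, a minimal two-dimensional rapidly-exploring random tree (RRT) sketch is shown below; the obstacle shapes, step size and sampling bounds are illustrative assumptions, and the actual scheme combines RRT with the Trajectory Rollout algorithm and vehicle kinematic constraints.

```python
import math
import random

def collision_free(p, obstacles):
    """Point lies outside every obstacle, given as (cx, cy, radius) discs."""
    return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

def rrt(start, goal, obstacles, step=0.5, iters=5000, goal_tol=0.5):
    """Minimal 2-D rapidly-exploring random tree: grow toward random samples,
    keep only collision-free extensions, stop when the goal is reached."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (random.uniform(0, 20), random.uniform(0, 20))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d) if d > 1e-9 else near
        if not collision_free(new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:          # reconstruct the obstacle-avoidance track
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None

# Obstacles come from the size/position units (here simplified to centre + radius discs).
print(rrt((0.0, 0.0), (18.0, 18.0), [(9.0, 9.0, 2.0), (14.0, 10.0, 1.5)]))
```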
Another aspect of the embodiments of the present disclosure further provides a method for controlling an access cage of a mining scene perception fusion technology, including: a road side sensing step of acquiring running vehicle information and environment information by using an image acquisition unit, a radar unit, an inertial measurement unit, a sensor unit and a positioning unit; a road end calculation step, namely analyzing the acquired vehicle information and environment information through a preset neural network model to generate a preliminary control instruction for the cage; cloud computing, namely analyzing vehicle information and environment information through a preset model based on deep learning, and generating a control instruction for the cage by combining with planning of obstacle avoidance tracks; the cloud computing step comprises the following steps: analyzing obstacle information to generate an obstacle avoidance track, wherein the size and position information of the obstacle are analyzed by using a clustering algorithm, bounding box detection and centroid fitting, and the obstacle avoidance track is generated based on the obstacle avoidance algorithm; judging whether the vehicle is an intelligent network-connected vehicle or not, judging according to signaling, sensor configuration and communication protocol of the vehicle, and generating a control instruction in a targeted manner; a step of merging the instruction priority, wherein the control instruction generated in the cloud computing step and the control instruction generated in the road end computing step are merged according to a rule of the priority of the cloud instruction to form a final control instruction; and a cage control step, namely driving the cage to execute opening or closing operation according to the finally formed control instruction.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
(1) The information of the environment and the vehicle is collected through various sensors and devices, the data fusion is carried out by utilizing a deep learning model, and the information is integrated together to form the comprehensive perception of the vehicle and the environment. The deep learning model may be used to identify and classify objects in the image, analyze radar data to detect obstacles, integrate inertial measurements and sensor information to obtain the motion state of the vehicle, and combine positioning information to accurately position the vehicle and the surrounding environment. Through the comprehensive perception, the system can establish a comprehensive environment model, and provide accurate input for subsequent decision and control; on the basis, by utilizing an obstacle avoidance algorithm, the system can generate an accurate obstacle avoidance track according to the perceived environment; this ensures that the obstacle can be intelligently avoided when the vehicle is operated, thereby improving the control efficiency of the vehicle entering and exiting the cage;
(2) Road end calculation is combined with cloud calculation, and instruction merging is the key to ensuring that the two computing modules work cooperatively, so that the control instruction sent to the cage is always based on the latest data and obstacle-avoidance track. This real-time decision-making and optimization capability makes the system more flexible and able to meet different scenes and requirements, thereby improving the control efficiency and reliability of cage operation;
(3) The intelligent networking judgment sub-module can identify intelligent networked vehicles. By identifying them, the system can communicate specifically with intelligent networked vehicles and apply corresponding control according to their states and behaviors, thereby improving the control efficiency and reliability of vehicles entering and exiting the cage.
Drawings
The present specification will be further described by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. The embodiments are not limiting, in which like numerals represent like structures, wherein:
FIG. 1 is an exemplary block diagram of an access cage control system of a mining scene aware fusion technique according to some embodiments of the present description;
FIG. 2 is an exemplary flow chart of an access cage control method for a mining scene aware fusion technique according to some embodiments of the present description;
FIG. 3 is a schematic illustration of an exemplary application scenario of an access cage control system for a mining scene aware fusion technique according to some embodiments of the present description;
FIG. 4 is a second exemplary application scenario diagram of an access cage control system for a mining scenario-aware fusion technique according to some embodiments of the present description;
FIG. 5 is a third exemplary scenario diagram illustrating an access cage control system for a mining scenario awareness fusion technique according to some embodiments of the present disclosure;
FIG. 6 is a schematic diagram of a fourth exemplary application scenario of an access cage control system of a mining scene aware fusion technique according to some embodiments of the present description.
Detailed Description
The method and system provided in the embodiments of the present specification are described in detail below with reference to the accompanying drawings.
FIG. 1 is an exemplary block diagram of an access cage control system of a mining scene aware fusion technique according to some embodiments of the present description. As shown in FIG. 1, the system comprises:
the system mainly comprises a road side sensing module, a calculating module, a cage control module and a communication module. The road side sensing module is used for detecting running vehicle information and environment information.
The road side perception module can comprise a plurality of units such as an image acquisition unit, a radar unit, an inertial measurement unit and the like, and acquires vehicle and environment information through multi-source heterogeneous data such as images, radars and the like. The road side sensing module adopts a multi-source heterogeneous data fusion technology, acquires vehicle and environment data through different types of sensors such as images, radars, IMUs and the like, and adopts an extended Kalman filtering algorithm to fuse the multi-source data after preprocessing such as data calibration, time synchronization and the like, so that richer and reliable vehicle and environment information is obtained, and the accuracy of subsequent decisions is improved.
The computing module is connected with the road side sensing module and is used for receiving the vehicle information and the environment information sent by the road side sensing module. The computing module comprises a road end computing sub-module and a cloud computing sub-module, and can process information at the local and the cloud respectively to generate control instructions for opening or closing the cage. And a road end and a cloud sub-module in the computing module respectively adopt a neural network and a deep learning model to process information, so that the road cloud cooperative computing is realized. The response speed of the road end is faster, and the cloud computing is stronger. Meanwhile, specific optimization can be performed on the intelligent network-connected vehicle, and the pertinence of decision making is improved. The cloud computing sub-module analyzes the obstacle information by utilizing algorithms such as point cloud processing, track planning and the like, plans the obstacle avoidance track of the vehicle, gives the obstacle avoidance track to the road terminal module to cooperatively avoid the obstacle with the vehicle, and ensures the driving safety. And through the instruction priority merging unit, the cooperative control of the road cloud instruction is realized, the higher decision capability of the cloud platform is exerted, and the global optimality of the control instruction is improved.
The cage control module is connected with the calculation module and is used for receiving the control instruction sent by the calculation module and driving and executing the opening or closing operation of the cage. And the cage control module is used for rapidly responding and executing the opening/closing operation after receiving the control instruction. The communication module realizes real-time data interaction among the system modules.
The communication module is used for connecting the road side sensing module, the calculation module and the cage control module, and adopts V2I or V2N for information transmission and interaction. V2I (Vehicle to Infrastructure) represents communication between the vehicle and the infrastructure, V2I in the present application primarily refers to communication interactions between the vehicle and the roadside units, including: the vehicle actively uploads the information of the state, the position and the like to the road side unit. The road side unit issues control instructions, planned paths and other information to the vehicle. Real-time data interaction between the vehicle and the road side unit is realized, and the vehicle and road cooperative control is realized in a closed loop. V2N (Vehicle to Network) represents communication between a vehicle and a network. In the application, V2N mainly refers to that a vehicle performs information interaction with a cloud server through networks such as 4G, 5G and the like, and comprises the following steps: vehicle status, location, environmental information, etc. are uploaded to the cloud server. And the cloud server issues control instructions such as path planning, obstacle avoidance decision and the like to the vehicle. And network data interaction among the vehicle, the road side unit and the cloud server is realized, so that perception fusion control is realized. The combined application of V2I and V2N ensures the close coordination among vehicles, roads and clouds, and improves the intelligent decision and control capability of the system.
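The sketch below only illustrates the kind of payloads exchanged over V2I and V2N; the message fields and JSON encoding are assumptions for illustration, and the application does not define a wire format.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative message payloads only; the patent does not specify a message format.
@dataclass
class VehicleStatus:
    vehicle_id: str
    position: tuple      # (x, y) in an assumed mine coordinate frame
    speed: float

def v2i_upload(status: VehicleStatus) -> bytes:
    """V2I: vehicle -> roadside unit, status and position upload."""
    return json.dumps({"channel": "V2I", **asdict(status)}).encode()

def v2n_upload(status: VehicleStatus, env: dict) -> bytes:
    """V2N: vehicle -> cloud server over 4G/5G, adding environment information."""
    return json.dumps({"channel": "V2N", "env": env, **asdict(status)}).encode()

def downlink_instruction(channel: str, action: str, path: list) -> bytes:
    """Roadside unit or cloud -> vehicle: control instruction and planned path."""
    return json.dumps({"channel": channel, "action": action, "path": path}).encode()

msg = v2i_upload(VehicleStatus("truck-07", (120.5, 33.2), 4.8))
print(msg)
print(downlink_instruction("V2N", "hold_for_cage", [(120.5, 33.2), (125.0, 33.2)]))
```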
In conclusion, the application fully exploits the technical advantages of road-cloud cooperation, data fusion and intelligent planning, so that underground vehicles can enter and exit the cage quickly and safely. Vehicle and environment information is obtained in real time through roadside intelligent perception and cloud-edge cooperative computing, multi-source data fusion and intelligent analysis are performed, and cage access control instructions are generated, thereby improving the safety of the underground environment and the intelligence of access control, and greatly improving control efficiency and safety.
Wherein, the road side perception module includes: the image acquisition unit adopts a visual technology, can acquire image information of a vehicle and an environment, and is one of key units for realizing environment perception. The image acquisition unit comprises hardware equipment such as a high-definition industrial camera, an image processing board card and the like, and can be arranged near a wellhead or a driving route to shoot images of a driving vehicle and surrounding environment. High frame rate imaging is employed to acquire a sequence of images. The image processing algorithm can analyze the image to extract vehicle information such as vehicle type, vehicle body number, running direction, speed and the like, and can detect environmental information such as pavement trace, broken stone, moisture and the like. Algorithms such as object detection, segmentation, etc. may also be applied to detect obstacles. The image data acquired by the image acquisition unit can be fused with data from sensors such as radar, IMU and the like, so that the system perceives the environment more three-dimensionally and comprehensively. Compared with the traditional detection equipment, the image acquisition unit is richer in data, so that the follow-up electronic control system is favorable for understanding the underground complex environment, the intelligent level of cage control can be improved, and the control efficiency is enhanced.
The radar unit uses radars with different frequency bands, can realize accurate detection of the position and the speed of the vehicle, and is an important component of environment perception. And by adopting radars with different frequency bands, such as millimeter wave radars and centimeter wave radars, vehicle information with different ranges and precision can be obtained. Millimeter wave radar has high accuracy to close range targets and the detection range of the millimeter wave radar is farther. The CFAR interference suppression algorithm can effectively filter radar noise and improve the reliability of target detection. The CFAR (Constant False Alarm Rate) interference suppression algorithm can effectively filter radar noise, and the CFAR algorithm estimates local noise power in an adaptive mode to obtain a local noise threshold. The signal power of each cell is compared to a local noise threshold. If the cell power is above the threshold, then a target is determined, otherwise a noise is determined. Since the noise threshold is adaptively estimated, noise interference in different environments can be filtered out. In a vehicle-mounted radar system, CFAR can effectively inhibit various noises and improve the probability of target detection. A low false alarm rate can be maintained even in severely noisy environments. The Kalman filtering target tracking algorithm can continuously and stably track the vehicle in a complex environment and output the motion parameters of the vehicle. Using a physical or mathematical model of the object motion. The future state of the target is predicted using the model. The state estimate is updated as new measurements of the target are taken. The prediction and updating are performed recursively, resulting in an optimized target state estimate. The Kalman filtering can effectively filter out process noise and output a smooth and stable target state. The method can be used for continuously tracking the vehicle and outputting the motion parameter information of the vehicle. And providing a contour state for a radar-based CFAR algorithm, and improving the inhibition effect. The CFAR and the Kalman filtering are combined, so that stable and reliable vehicle detection and tracking can be realized. The radar directly detects the position and speed information of the vehicle, and the output is stable and accurate and is not influenced by the environment. Mutual authentication with image information is needed, and the perception reliability is improved. The radar data sampling frequency is high, and the high dynamic change of the vehicle, such as rapid acceleration, sharp turning and the like, can be detected, so that the electronic control system can respond quickly. Compared with the traditional equipment, the radar unit is richer and more accurate in data acquisition, and the electronic control system is favorable for accurately judging the state of the vehicle, so that a more optimized and safer cage entering and exiting control strategy is formulated. In conclusion, the quality of environmental perception can be greatly improved, and the control of the access cage is more efficient and stable.
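A minimal cell-averaging CFAR detector on a one-dimensional range profile, as a sketch of the adaptive noise-threshold idea described above; the guard/training window sizes and the threshold scale are assumed values.

```python
import numpy as np

def ca_cfar(power: np.ndarray, guard: int = 2, train: int = 8, scale: float = 3.0) -> np.ndarray:
    """Cell-averaging CFAR: for each range cell, estimate local noise power from
    training cells on both sides (excluding guard cells) and declare a target
    when the cell power exceeds the adaptive threshold."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()   # local noise estimate
        detections[i] = power[i] > scale * noise       # adaptive threshold test
    return detections

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, 200)        # noise-only range profile
profile[60] += 25.0                        # injected vehicle echo
print(np.flatnonzero(ca_cfar(profile)))    # detected range cells
```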
The inertial measurement unit obtains measurement data of the vehicle using an IMU, can directly monitor highly dynamic motion changes of the vehicle, and improves the perception precision of the vehicle motion state. The IMU is a motion sensor that outputs high-precision kinematic parameters; it comprises a gyroscope and an accelerometer and can measure triaxial angular velocity and triaxial acceleration. Parameters such as the triaxial attitude angle and linear acceleration can be further calculated through a data fusion algorithm, and the kinematic parameters are output at a high rate with very high precision. Even when GPS signals are briefly lost, accurate position and speed information can still be output continuously through an integration algorithm, and the data can be combined with sensors such as a vehicle-mounted laser radar for high-precision autonomous positioning. The IMU is highly sensitive to the vehicle motion state and can effectively sense emergency acceleration, steering and similar manoeuvres, which helps to quickly detect avoidance actions when obstacles are encountered and improves the real-time performance of obstacle avoidance, while the accurate kinematic parameters enhance the robustness of obstacle-avoidance trajectory planning. The IMU is therefore a key component of the vehicle-mounted sensing system and provides a stable state input for obstacle-avoidance planning and control; its motion parameter output plays an important role in the real-time obstacle-avoidance performance of this scheme. In the present application: the IMU comprises a gyroscope and an accelerometer and can measure the angular velocity, angular acceleration and linear acceleration of the vehicle, parameters that directly reflect the vehicle's motion state. Processing the raw IMU measurements with an extended Kalman filtering algorithm effectively reduces the influence of noise errors and makes the speed and acceleration parameters of the vehicle more accurate. Accurate speed and acceleration parameters are basic information for generating the cage access control strategy; in particular, changes in vehicle acceleration directly affect driving safety. The IMU sampling frequency is very high, so instantaneous high-dynamic changes of the vehicle, such as sharp turns and sudden braking, can be detected, which facilitates a rapid response by the subsequent control module. Multi-source fusion of the IMU data with radar data and processed image data allows remote cross-verification and correction, and outputs a more reliable vehicle motion state. Compared with conventional equipment, the IMU reflects the motion state of the vehicle more truly and finely and improves the efficiency and safety of subsequent decisions. In conclusion, the inertial measurement unit greatly improves the perception of the vehicle motion state and makes cage access control more intelligent.
The sensor unit adopts an extended Kalman filter to perform multi-source heterogeneous data fusion, so that richer and more accurate vehicle attitude information can be obtained, and in the application: the multi-source heterogeneous data come from IMU, radar, infrared sensor, meteorological sensor and the like, and comprise motion parameters of the vehicle, surrounding environment information and the like. The Kalman filter can effectively reduce noise errors of the sensors, and data fusion is carried out to obtain an optimized state. The extended Kalman filter combines different sensor data through a state equation, iterative optimization is carried out, and more accurate attitude information such as vehicle position, direction, angle and the like can be obtained. Different sensors are mutually verified, so that the sensing accuracy and stability of the vehicle posture are improved, and particularly in a complex underground environment. Accurate and continuous attitude information of the vehicle is obtained, and the vehicle is a key input generated by an access cage control strategy. Compared with a single sensor, the multi-source heterogeneous fusion can comprehensively monitor the state change of the vehicle, output more abundant and reliable attitude information and improve the quality of subsequent decisions. The Kalman filtering realizes data fusion, has high calculation efficiency, and meets the low delay requirement of the underground environment on control response. In conclusion, the vehicle attitude monitoring system and method can greatly improve the vehicle attitude monitoring effect, and enable the access cage to be controlled more intelligently, rapidly and accurately.
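The multi-sensor fusion idea can be sketched with a constant-velocity Kalman filter that sequentially fuses radar and vision position measurements. A linear filter is shown for brevity, whereas the application uses the extended form with Jacobians of the nonlinear motion and measurement models; all noise covariances here are assumed.

```python
import numpy as np

# Constant-velocity state [x, y, vx, vy]; radar and vision both observe (x, y).
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.01                      # process noise (assumed)
R_radar = np.eye(2) * 0.25                # radar position noise (assumed)
R_vision = np.eye(2) * 1.0                # vision position noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z_radar, z_vision in [((1.0, 0.1), (1.1, 0.0)), ((1.5, 0.2), (1.4, 0.25))]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array(z_radar), R_radar)    # fuse radar measurement
    x, P = update(x, P, np.array(z_vision), R_vision)  # fuse vision measurement
print("fused position and velocity:", x)
```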
The obstacle detection unit generates three-dimensional point cloud through a visual algorithm by utilizing fusion of images and radar data, and then identifies the category of the obstacle based on a convolutional neural network, so that the underground complex environment obstacle can be accurately detected, and the method comprises the following steps: the image data provides visual features of the obstacle, the radar data provides range information, and the two in combination can generate a three-dimensional point cloud containing rich geometric structure information. Three-dimensional point clouds are generated based on technologies such as stereo matching and projection of visual algorithms, and more environment detail information is contained than single sensor data. The process of generating three-dimensional point cloud data by a visual algorithm is as follows: stereoscopic images are acquired using binocular cameras in an image acquisition unit. And calculating parallax information between binocular images through a parallax matching algorithm. And recovering three-dimensional space information of the scene according to the camera parameters, parallax information and the principle of triangulation. And calculating the three-dimensional coordinates of each pixel point in the image to form a three-dimensional point cloud. The three-dimensional point cloud retains depth information of a scene and can directly represent the three-dimensional structure of a space object. And the data of the laser radar are aligned and fused, so that richer and finer three-dimensional point clouds can be obtained. The three-dimensional Point cloud can realize three-dimensional semantic segmentation of the obstacle through a network such as a code Point Net and the like. The convolution network directly acts on the point cloud, extracts the three-dimensional characteristics of the scene, and realizes the identification of the obstacle category. The visual algorithm generates a three-dimensional point cloud which is an important means for obstacle detection and provides input for a subsequent planning algorithm. In summary, the visual algorithm can effectively acquire three-dimensional scene information, which is an important link for obstacle detection. In the present application, visual algorithms suitable for downhole vehicle control include: ORB feature matching algorithm: binary features of the images are extracted, feature matching of images with different visual angles is achieved, and the method can be used for autonomous positioning of vehicles. Optical flow method: a pixel motion vector field of the image sequence is calculated for analyzing motion information of the vehicle or obstacle. Deep learning target detection algorithm: such as YOLO, SSD, etc., vehicles and obstructions in the downhole environment may be detected. Semantic segmentation algorithm: like FCN, deep lab, etc., pixel level scene segmentation may be performed, identifying road areas. Three-dimensional reconstruction algorithm: and restoring the underground three-dimensional space information by using methods such as structured light or binocular vision. Data fusion algorithm: and fusing the camera visual information with radar and Lidar data to obtain more comprehensive environmental perception. SLAM algorithm: and the environment map required by vehicle positioning can be generated in real time by synchronous positioning and map construction. 
Image enhancement algorithm: aiming at the conditions of poor underground illumination conditions and the like, the visual effect is enhanced through image processing. Video compression algorithm: the image sequence is effectively compressed, and the transmission bandwidth requirement is reduced. The algorithms can effectively acquire underground key environment information and provide support for intelligent control and planning of vehicles.
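A sketch of the binocular triangulation step described above, turning a disparity map into a three-dimensional point cloud; the camera focal length, baseline and principal point are assumed example values.

```python
import numpy as np

def disparity_to_point_cloud(disparity: np.ndarray, f: float, baseline: float,
                             cx: float, cy: float) -> np.ndarray:
    """Recover 3-D points from a binocular disparity map by triangulation:
    Z = f * B / d, X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    v, u = np.indices(disparity.shape)
    valid = disparity > 0                       # ignore unmatched pixels
    z = np.where(valid, f * baseline / np.maximum(disparity, 1e-6), 0.0)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x[valid], y[valid], z[valid]], axis=1)   # N x 3 point cloud

# Toy disparity map with assumed camera parameters (f in pixels, baseline in metres).
disp = np.full((480, 640), 32.0)
cloud = disparity_to_point_cloud(disp, f=700.0, baseline=0.12, cx=320.0, cy=240.0)
print(cloud.shape, cloud[:2])
```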
Based on the convolutional neural network, semantic segmentation and target recognition are carried out on the point cloud, and obstacles of different categories can be detected and recognized. The network learns the three-dimensional shape characteristics of different obstacles through training. Compared with the traditional image processing algorithm, the convolutional neural network is more efficient and stable in point cloud identification, the accuracy of obstacle detection is improved, and the false alarm rate is reduced. The obstacle information is accurately acquired, and is key input for generating an obstacle avoidance planning strategy by the electronic control system, so that the vehicle can safely avoid the obstacle. The multi-source data fusion improves the richness, accuracy and robustness of the environment perception information, and further improves the safety and the intelligent level of the control of the access cage. In conclusion, the application can greatly improve the obstacle detection and recognition capability of complex underground environment, so that the control of the access cage is safer and more efficient.
More specifically, the image feature extraction subunit extracts image multi-scale feature points by using a SIFT algorithm, so as to improve understanding of environmental information, and SIFT (scale-invariant feature transform) is a visual algorithm, and in the present application: the SIFT algorithm can detect feature key points of the image under different scales and extract local feature descriptors. The multi-scale feature points contain detailed information such as textures, edges and the like of the image, which is beneficial to finer understanding of environmental details. SIFT features have scale invariance, can cope with image deformation caused by environmental factors, and improve the robustness of feature extraction. Combining SIFT features with a deep learning network can identify different classes of targets, such as vehicles, obstacles, etc. The SIFT algorithm is efficient in calculation, image information can be extracted rapidly, and the requirement of underground on the response speed of the system is met. Compared with the global features, the local features extracted by SIFT are more sensitive to environmental changes, and are favorable for detecting fine changes. The extracted multi-scale image features can provide richer sample data for training of a subsequent deep learning network. In downhole vehicle positioning, SIFT may match image scenes taken at different locations. The feature matching result is used to estimate the relative positional transformation of the vehicle. SIFT may also be used to detect specific markers of downhole environments, assisting in localization. Compared with the common characteristics, the SIFT characteristics are more suitable for environments with complex illumination changes in the pit. Overall, the SIFT algorithm can improve accuracy and robustness of visual localization of the downhole vehicle. In conclusion, SIFT image feature extraction can improve the perception and understanding ability of the environment, and richer and more robust decision support information is provided for cage access control.
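A short OpenCV-based sketch of SIFT feature extraction and ratio-test matching between two frames, assuming an OpenCV build in which cv2.SIFT_create is available; the synthetic test images and the 0.75 ratio threshold are illustrative choices.

```python
import cv2
import numpy as np

def sift_match(img_a: np.ndarray, img_b: np.ndarray):
    """Extract multi-scale SIFT keypoints/descriptors and match them with a
    ratio test, as one way to relate images taken at different positions."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None or len(des_a) < 2 or len(des_b) < 2:
        return kp_a, kp_b, []                        # not enough features to match
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe ratio test
    return kp_a, kp_b, good

# Synthetic frames with simple structure stand in for roadside camera images.
frame1 = np.zeros((240, 320), np.uint8)
cv2.circle(frame1, (120, 100), 30, 255, -1)
cv2.rectangle(frame1, (180, 140), (260, 200), 180, -1)
frame2 = np.roll(frame1, 5, axis=1)                  # simulated small viewpoint shift
_, _, good = sift_match(frame1, frame2)
print("good matches:", len(good))
```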
The distance information acquisition subunit processes the radar distance data through a multipath inhibition technology, and the multipath inhibition technology can filter multipath noise in the radar distance data and improve the data quality. In the present application: the presence of a large number of metal structures in the downhole environment can create severe radar multipath effects. Multipath signals can cause false objects or distance errors in the distance data. Multipath suppression uses information such as angle, distance, signal strength, etc. to make multipath decisions. One approach is to construct a three-dimensional model of the environment, predicting the likely multipath occurrence. Multipath signals can also be automatically identified and filtered by analyzing the statistical characteristics. Common suppression filtering algorithms include angle filtering, distance filtering, amplitude filtering, and the like, and fuzzy Hough transformation: and (5) suppressing by integrating the angle and distance information. Radio Frequency (RF) beam spatial analysis: the angular characteristics of the signal are resolved using the sensor array. Recursive successive echo suppression: multipath caused by multiple refractions is recursively eliminated. Through multipath inhibition, the quality of underground distance data can be effectively improved, and the accuracy of environment perception is enhanced. The obtained sampling distance data for removing the multipath effect is clearer and more reliable, and is favorable for accurately obtaining the position information of the target. The distance data sampling frequency is high, and the tiny movement change of the target can be truly reflected. And the distance data after image processing is subjected to data fusion, so that telemetry mutual authentication can be realized, single-sensor noise is eliminated, and more accurate distance information is output. Accurate and real-time distance information is the basis of path planning, obstacle avoidance, parking control and the like, and the accuracy of cage access control is improved. Compared with the traditional ranging mode, the multipath inhibition distance information acquisition is more efficient and reliable, and is beneficial to the perception of underground complex environments. In conclusion, the application can provide high-quality key distance information, so that the control of the cage entering and exiting is safer and more accurate.
The data matching subunit generates a matched three-dimensional point cloud by adopting an ICP algorithm, and the method comprises the following steps: the ICP (iterative closest point) algorithm is a three-dimensional point cloud matching algorithm. ICP can match two sets of three-dimensional point clouds, and the matching result is improved continuously through iteration. Firstly, establishing a data corresponding relation of two point clouds, and finding out the nearest point pair. And calculating the coordinate transformation relation of the matching point pairs. And (3) moving one of the point clouds to align through the transformation relation. And iterating the steps until the minimum matching error is obtained. ICP algorithm is often applied to three-dimensional reconstruction, point cloud fusion and other scenes. In a downhole vehicle, the visual point cloud and Lei Dadian cloud may be ICP matched. And matching the coordinate systems of the two data to obtain the accurate three-dimensional environment model. The iterative optimization process may exclude noise present in the visual point cloud. The multi-sensor fusion can promote the comprehensiveness of environment perception. The ICP iterative algorithm can also be extended into the matching of multi-frame point clouds. The ICP is matched with the multi-phase three-dimensional point cloud, so that the positioning result of the odometer of the vehicle can be optimized. In summary, ICP is an important algorithm for realizing multi-source three-dimensional point cloud matching, and can enhance the precision of the environment model. In conclusion, the data matching subunit generates a matched three-dimensional point cloud by adopting an ICP algorithm, so that the perceptibility of complex underground environment can be obviously improved, and the control of the access cage is safer and more accurate.
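A minimal point-to-point ICP implemented directly in NumPy, to make the iterate-match-align loop concrete; the brute-force nearest-neighbour search and the synthetic source/target clouds are simplifications for illustration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation/translation aligning matched points (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: find nearest neighbours, solve the rigid
    transform for the matched pairs, apply it, and repeat."""
    src = source.copy()
    for _ in range(iters):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]  # closest target point for each source point
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src

rng = np.random.default_rng(2)
target = rng.uniform(-1, 1, (150, 3))                      # e.g. radar point cloud
angle = np.radians(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.2, -0.1, 0.05])       # e.g. misaligned visual cloud
aligned = icp(source, target)
print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())
```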
The three-dimensional reconstruction subunit generates a three-dimensional point cloud model based on a poisson surface reconstruction algorithm, and the method comprises the following steps: the three-dimensional point cloud is considered as a point process sampled from a poisson distribution. The poisson distribution-related reconstruction objective is to find a poisson function that best fits the point distribution. By solving the poisson equation, an isosurface can be obtained, which represents the geometric shape of the surface of the three-dimensional object. The algorithm can reconstruct a high-quality three-dimensional model in the point cloud with noise. The method is suitable for underground point cloud modeling of vehicle Lidar or vision acquisition. The details of the three-dimensional structure can be preserved more than the linear interpolation method. The reconstructed three-dimensional model may represent a collision constraint of the downhole environment. And more accurate environment information is provided for a three-dimensional point cloud-based path planning algorithm. The three-dimensional model is easier to store and process than a direct point cloud representation. By compression and simplification, the computational load can be reduced. The poisson surface reconstruction can efficiently obtain a three-dimensional environment model, and the autonomous driving capability of the underground vehicle is improved. In summary, the three-dimensional reconstruction subunit adopts a poisson surface reconstruction algorithm, so that modeling and understanding capability of complex scenes can be greatly improved, and the cage access control is more intelligent and safer.
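A sketch of Poisson surface reconstruction assuming the Open3D library is available; the synthetic sphere point cloud, octree depth and decimation target are illustrative choices, not parameters specified by the application.

```python
import numpy as np
import open3d as o3d   # assumed available; any Poisson reconstruction tool could be substituted

# Synthetic points on a sphere stand in for a fused downhole point cloud.
rng = np.random.default_rng(3)
dirs = rng.normal(size=(2000, 3))
pts = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(20)      # Poisson needs oriented normals

# Solve the Poisson equation implied by the oriented points and extract the isosurface.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=7)
mesh = mesh.simplify_quadric_decimation(2000)        # simplify to reduce computational load
print(mesh)
```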
The three-dimensional detection subunit detects and identifies the obstacle based on the Point Net network, so that the accuracy of environment perception can be improved, the Point Net (Point Neural Network) is a neural network for directly processing Point cloud, and the three-dimensional detection subunit comprises the following steps: the Point Net directly contacts the original three-dimensional Point cloud as network input. Firstly, extracting the spatial characteristics of each point by using a multi-layer perceptron. Global features for all points are aggregated by a max pooling layer. The maximum pooling layer ensures that the network-to-point cloud order is not changed. The global features connect the classification network and the segmentation network. The classification network can judge the type of the whole point cloud, and the obstacle can be identified. The segmentation network can output semantic tags of each point to realize scene analysis. The Point Net can directly act on the three-dimensional reconstructed Point cloud model. The three-dimensional structure information is reserved without conversion into image representation. The end-to-end network structure is suitable for deployment and use on a vehicle-mounted platform. Various fixed or mobile obstructions downhole may be detected. Providing environmental awareness for the autonomous driving of a downhole vehicle. In summary, the Point Net can directly analyze the three-dimensional Point cloud. The Point Net directly processes the Point cloud, extracts the three-dimensional geometric characteristics of the Point cloud, and classifies different obstacles. Compared with a two-dimensional image, the three-dimensional point cloud reserves the three-dimensional structure information of the target, and is more beneficial to network extraction of features. After the network is trained, the classification confidence of different barriers can be output, and specific categories are judged according to confidence threshold values. The Point Net is efficient in calculation, three-dimensional data can be processed rapidly, and the requirement of a system on response speed is met. And the obstacle information is accurately acquired, so that the subsequent planning module can generate a safe and reasonable obstacle avoidance path. And the method is matched with the multi-source data fusion of the front end, so that the information richness of the input point cloud data is improved, and the accuracy of network judgment is enhanced. In cooperation with the path planning at the rear end, the vehicle is enabled to safely avoid obstacles according to the detection result, and collision is prevented. Different obstacles are identified, different avoidance strategies can be adopted, and control is more optimized. In conclusion, the three-dimensional detection subunit enhances the perception of complex environments, so that the overall access cage control is more intelligent and smoother.
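A simplified PointNet-style classifier in PyTorch showing the shared per-point MLP, the order-invariant max pooling and the confidence-threshold decision; the layer sizes, class list and 0.5 threshold are assumptions, and the real Point Net architecture additionally uses input/feature transform networks.

```python
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    """Simplified PointNet-style classifier: per-point MLP (shared weights),
    order-invariant max pooling into a global feature, then a classification
    head that outputs per-class confidences for obstacle categories."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3)
        per_point = self.point_mlp(points)          # per-point spatial features
        global_feat = per_point.max(dim=1).values   # symmetric max pooling
        return self.head(global_feat)               # class logits

model = MiniPointNet(num_classes=4)                 # e.g. person / vehicle / rock / equipment
cloud = torch.randn(1, 1024, 3)                     # one segmented obstacle point cloud
confidence = torch.softmax(model(cloud), dim=-1)
category = int(confidence.argmax()) if confidence.max() > 0.5 else -1   # threshold check
print(confidence, category)
```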
The positioning unit obtains the accurate positioning of the vehicle and the cage through a data fusion and filtering algorithm, and the control efficiency is improved in cooperation with other units: and acquiring data of the IMU and the multisource sensor, wherein the data comprises vehicle motion parameters and environmental characteristic information. The Kalman filtering or particle filtering algorithm can optimize data, eliminate noise and output accurate coordinate positions of the vehicle and the cage. The positioning parameters are basic information of cage access control, and accurate positioning is a precondition for ensuring control effect. And in cooperation with the path planning unit, an optimal path is planned according to the position of the vehicle, so that the vehicle safely enters the cage. And the control execution unit is used for controlling the vehicle to accurately stop according to the relative positions of the cage and the vehicle. The Kalman filtering algorithm and the particle filtering algorithm are efficient in calculation, and the requirement on the response speed of the system is met. The multi-source data fusion improves the positioning stability and robustness and improves the positioning effect under the complex environment. Accurate positioning is the basis that the vehicle realizes automatic access cage, has promoted the control efficiency of underground environment by a wide margin. In conclusion, the positioning unit is matched with other units to provide key positioning information, so that the overall control is more efficient and intelligent.
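As a concrete illustration of the Kalman-filter fusion described above, the following is a minimal constant-velocity filter in NumPy that fuses a noisy position fix with the predicted vehicle motion; the state model, noise covariances and sampling period are illustrative assumptions, not the tuned filter of the described system.

```python
# Minimal constant-velocity Kalman filter for vehicle/cage positioning (NumPy).
import numpy as np

class PositionKalmanFilter:
    def __init__(self, dt: float = 0.1):
        # State: [x, y, vx, vy]; only position (x, y) is measured.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01      # process noise (IMU drift etc.)
        self.R = np.eye(2) * 0.25      # measurement noise (position fix)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z: np.ndarray) -> np.ndarray:
        """One predict/update cycle; z is the measured (x, y) position."""
        self.x = self.F @ self.x                       # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y                        # update
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                              # filtered position
```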
Wherein, the calculation module includes: a road end sub-module that obtains real-time data through a wired network to achieve accurate control, and a cloud sub-module that obtains richer data through a wireless network to achieve intelligent decision making. The road end adopts a preset neural network, while the cloud adopts a more complex deep-learning model, making the calculation more intelligent. Cloud computing comprehensively judges the priority of vehicles entering and exiting the cage, achieving fleet coordination and sequential control. A control sub-module converts the calculated instructions into opening and closing actions of the cage. A judging sub-module distinguishes intelligent network-connected vehicles from non-intelligent vehicles and adopts different control strategies: for an intelligent vehicle the motion of the vehicle body is guided accurately, while for a non-intelligent vehicle only the cage is controlled. The sub-modules cooperate to exploit the advantages of both the wired and the wireless network, improving the real-time performance, intelligence and refinement of control. The functions of the sub-modules are reasonably divided, decoupling computation and decision making from execution control, so the system structure is better organized, and the combination of diverse networks with the computing modes gives the system stronger adaptive capability. In conclusion, the design of the calculation module fully integrates the technical features of each unit, significantly improving the overall control efficiency and intelligence of the system.
Specifically, the road end computing sub-module receives the collected vehicle information and environment information through a wired network, processes the received information with a preset neural network, and outputs a control instruction; it thus realizes intelligent control through wired networking and neural network computation. In the present application: the wired network provides real-time data of the vehicle and the environment, guaranteeing the timeliness of calculation and control. The preset neural network performs feature extraction and decision calculation, realizing more intelligent control; through training it learns to quickly identify the environment and plan paths, and it outputs accurate control instructions to guide the operation of the vehicle and the cage. It cooperates with the sensor module, which supplies the vehicle and environment data used as network input, and with the execution module, which converts the computed instructions into precise control actions. It also cooperates with the cloud module: the road end carries the real-time control while the cloud carries the intelligent decisions. Because the road end uses a wired connection, data interaction is more reliable and computation is more centralized; road-end computing also reduces the burden of on-board computation, which simplifies the system structure. Real-time, accurate road-end control combined with intelligent cloud decision-making raises the overall control level. In conclusion, the road end computing sub-module plays the key role of real-time accurate control in the system and, in cooperation with the other modules, improves the efficiency and intelligence of cage access control.
More specifically, the road end calculation submodule includes: the wired communication unit adopts an industrial Ethernet to interact information with the road side sensing module; the wireless communication unit is used for carrying out information interaction with the cloud computing unit by adopting 5G; the industrial Ethernet adopts a deterministic time sequence mechanism, can realize low-time delay and real-time interaction of vehicle information and environment information, and meets the real-time requirement of cage access control. The industrial Ethernet adopts a redundant link technology, so that the reliability of information interaction can be improved, and communication interruption caused by single-point faults is avoided. The 5G has the characteristics of high bandwidth and low time delay, and can realize the real-time performance of information interaction between the vehicle and the cloud computing unit. And 5G adopts a network slicing technology, so that special network resources can be reserved for controlling the vehicle to enter and exit the cage, and the communication quality is improved. The combination of wired and wireless meets the real-time certainty of wired communication, the flexibility of wireless communication and the adaptation to underground complex environments. The wired and wireless dual channels provide redundancy, and if one link is abnormal, the system can be quickly switched to the other link, so that the reliability of the system is improved. The technical advantages of the industrial Ethernet and 5G are fully utilized, the reliable real-time transmission of the vehicle information and the environment information is realized, and the method is a basis for realizing accurate and intelligent control.
The information processing unit is used for carrying out format conversion, time synchronization and data calibration pretreatment on the data acquired by the road side sensing module; the format conversion unifies the formats of the data from different sources, thereby facilitating the subsequent network training and calculation. Time synchronization stamps the data acquired at different times with accurate time stamps for time alignment of different modules of the system. The data calibration eliminates acquisition errors, improves data accuracy, such as zero offset removal, scale error removal and the like. The preprocessing improves the data quality, is beneficial to the extraction of effective characteristics of the neural network, and improves the accuracy of decision making. The environment perception effect can be improved by preprocessing the image through denoising, filtering and the like, and the recognition accuracy is higher. The processing time can be shortened by adopting parallel processing, pipeline processing and the like, and the real-time control requirement can be met. The preprocessing algorithm can realize parallelization on chips such as FPGA, DSP and the like, and improves the processing speed. Accords with the preprocessing principle of the information fusion system, and makes the decision more accurate by improving the data quality. The preprocessing is combined with the network decision, so that the environment sensing capability and decision control capability of the whole system are improved.
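A minimal sketch of this preprocessing chain (format unification, time synchronization to a common reference, and zero-offset/scale calibration) is given below; the field names and calibration constants are assumptions for illustration only.

```python
# Sketch of sensor-data preprocessing: format conversion, time sync, calibration.
import numpy as np

ZERO_OFFSET = np.array([0.02, -0.01, 0.0])   # hypothetical sensor bias
SCALE = 1.003                                 # hypothetical scale-error correction

def preprocess(samples, t_ref):
    """samples: list of dicts {"t": unix time, "xyz": raw measurement tuple}."""
    out = []
    for s in samples:
        xyz = np.asarray(s["xyz"], dtype=float)        # format conversion
        xyz = (xyz - ZERO_OFFSET) * SCALE              # data calibration
        out.append({"t": s["t"] - t_ref, "xyz": xyz})  # time synchronization to t_ref
    return sorted(out, key=lambda s: s["t"])
```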
The road end calculation decision unit pre-stores a decision model based on a neural network, takes the preprocessed data as input, and outputs an opening or closing control instruction for the cage according to the decision model. A neural network can fit complex nonlinear functions and is therefore suitable for complex control scenarios such as entering and exiting the cage. Through training, the neural network extracts control rules from large amounts of data without a manually built control model; a deep-learning network structure is more expressive, with stronger fitting capability and more accurate decisions. Using the preprocessed data as input improves the quality of the network's decisions. Network inference is fast, so real-time calculation is possible and the system's response-time requirement is met; the parallel computing structure of the network can be realized on GPUs and dedicated chips for even faster computation. The unit outputs accurate control instructions for opening and closing the cage, finally realizing automatic access control. Deeply fusing neural-network decision control with the environment perception module improves the intelligence of decision control, conforms to the data-driven concept of intelligent decision making, and shows stronger adaptability in complex scenes. In the present application, the downhole vehicle decision model includes: an end-to-end driving model based on a convolutional neural network, mapping directly from sensor input to control output; a reinforcement learning decision model, which trains a vehicle control strategy from environmental rewards; a planning network, which plans a motion trajectory and then outputs control instructions; a rule-based decision tree model, which integrates the environment model and the rules to decide actions; a transfer learning model, pre-trained on existing data sets and then fine-tuned to the specific scene; a meta-learning decision model, which learns how to adapt quickly to new environments; an imitation learning model, which learns from human driving behavior; a multi-agent cooperative model, in which different vehicles decide cooperatively; a deep deterministic policy gradient model, which reaches a target through a sequence of decisions; and a graph neural network decision model, which models vehicle interactions. These models use deep learning to extract environmental features and perform decision control, realizing the intellectualization of underground vehicles.
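As an illustration of the first model type in this list (an end-to-end convolutional decision model), a minimal sketch follows; the architecture, input resolution and the three-way open/close/hold action set are assumptions for illustration, not the model deployed by the system.

```python
# Sketch of an end-to-end convolutional decision model: camera frame in, cage action out.
import torch
import torch.nn as nn

class CageDecisionNet(nn.Module):
    def __init__(self, num_actions: int = 3):  # hypothetical actions: open, close, hold
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.decision = nn.Linear(64, num_actions)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, H, W) preprocessed road-side camera image.
        return self.decision(self.features(frame).flatten(1))

# Usage: action = CageDecisionNet()(torch.randn(1, 3, 224, 224)).argmax(dim=1)
```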
The priority merging design comprehensively utilizes the calculation results of the cloud and the road end, improving the global optimality of the decision. Cloud computing has stronger computing power and can perform complex planning operations, so its decisions are closer to the global optimum, while road-end calculation can make quick local decisions from real-time data. Adopting the cloud instruction preferentially avoids the limitations of a purely road-end decision; if the cloud instruction is wrong or delayed, the road-end instruction serves as a backup, ensuring the reliability of the system. The two levels of calculation cooperate, with the cloud instruction taking precedence, so the advantages of both levels are exploited. The merging unit's algorithm performs priority resolution and conflict detection and outputs an optimal instruction. The priority design increases the adaptivity and intelligence of the system, conforms to the design principles of networked cooperative control, and improves the cooperative efficiency of the system. Integrating the strengths of the instructions computed at each level makes the final decision higher in quality and more reliable. In summary, the priority merging design makes full use of cloud computing capability and improves the overall control efficiency of the system.
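A minimal sketch of this "cloud instruction first, road-end backup" rule is shown below; the command fields and the staleness threshold used to detect a delayed cloud instruction are assumptions for illustration.

```python
# Sketch of cloud/road-end instruction merging with cloud priority and road-end fallback.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str        # "OPEN" or "CLOSE"
    cage_id: int
    timestamp: float   # unix time when the command was produced

MAX_CLOUD_AGE_S = 0.5  # hypothetical staleness limit for a cloud command

def merge(cloud: Optional[Command], road: Command) -> Command:
    """Cloud command has priority; the road-end command is the reliable backup."""
    if cloud is not None and (time.time() - cloud.timestamp) <= MAX_CLOUD_AGE_S:
        return cloud
    return road
```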
The control execution side receives the final control instruction output by the priority merging unit, ensuring that the comprehensively judged optimal instruction is executed. The instruction content is specific: the control action is the opening or closing of a particular cage. Instruction transmission uses a deterministic industrial serial port or field bus, ensuring that instructions are reliably delivered to the control execution unit, which may adopt a field-programmable logic controller or similar device to accurately complete the mechanical control of the cage according to the instruction. Decoupling the calculation and control unit from the mechanical execution unit improves the expandability of the system, and sending concrete execution instructions rather than intermediate state information reduces the processing complexity at the execution side. Following the idea of a distributed control structure, decisions are concentrated in the instructions while the execution units carry them out in a decentralized manner, which conforms to the control/execution separation principle of automatic control systems. The respective specialities of the computing units and the execution units are thus fully exploited, improving system performance. In summary, this technical design of the control path helps to improve the accuracy and reliability of control execution.
The state self-checking unit monitors processor utilization to determine whether computing resources are over-occupied, preventing computing overload. Monitoring memory occupancy can reveal anomalies such as memory leaks or fragmentation, and monitoring network communication quality can determine whether information interaction is blocked. Comprehensive monitoring of the hardware state allows an overall judgment of the running condition of the system. Thresholds are set to judge abnormal states, realizing active precaution and early warning. Once an abnormality is determined, it is fed back to the cloud for analysis in time, and assistance or task take-over is requested; the anomaly data can be used in the cloud for cause analysis and problem diagnosis. Fault tolerance is achieved by monitoring the unit's own state and asking the cloud for help, and self-checking, feedback, analysis and correction are completed automatically in cooperation with the cloud, forming closed-loop control. This conforms to the intelligent monitoring and maintenance concept of industrial-Internet equipment. In conclusion, the technical design of the state self-checking unit improves the robustness and maintainability of the system.
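A minimal sketch of such a self-check using the psutil library is given below; the threshold values and the cloud-reporting hook are assumptions, not parameters of the described system.

```python
# Sketch of a state self-check: sample CPU, memory and network health against thresholds.
import psutil

CPU_LIMIT = 85.0      # percent
MEM_LIMIT = 90.0      # percent
PING_LIMIT_MS = 50.0  # network round-trip budget

def self_check(ping_ms: float) -> list[str]:
    anomalies = []
    if psutil.cpu_percent(interval=0.1) > CPU_LIMIT:
        anomalies.append("cpu_overload")
    if psutil.virtual_memory().percent > MEM_LIMIT:
        anomalies.append("memory_pressure")
    if ping_ms > PING_LIMIT_MS:
        anomalies.append("network_degraded")
    return anomalies

def report_to_cloud(anomalies: list[str]) -> None:
    # Placeholder hook: in the described system this feeds the cloud-side
    # analysis and task take-over mechanism.
    if anomalies:
        print("requesting cloud assistance:", anomalies)
```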
Specifically, the wireless network is adopted to receive data, so that the real-time state of the vehicle and the environment can be obtained. The deep learning model has strong feature extraction and pattern recognition capability, and can realize a finer and intelligent control strategy. The pre-training model can be rapidly subjected to fine optimization aiming at the current scene, so that high-precision decision prediction is realized. The model can realize the comprehensive calculation analysis of multiple variables, and the decision is more comprehensive. And integrating a plurality of control instructions according to the priority order, and outputting the optimized final control instructions. The cloud has strong storage and calculation capabilities, and can realize complex models and algorithms. Cloud computing has parallel acceleration capability, and real-time requirements of control are met. The cloud can continuously optimize the iterative model, and autonomous learning and adaptation to the environment are realized. The cloud computing realizes decentralization and task division and cooperation with road end control. In conclusion, the cloud computing sub-module plays a key role in intelligent planning and decision making in cage access control, and forms cooperative control with road ends so as to improve automation level.
More specifically, the cloud computing submodule includes: the receiving unit adopts 5G communication, and can realize low-delay and high-reliability transmission of vehicle and environment information. The 5G network has the advantages of high bandwidth and large connection, and can transmit multi-source heterogeneous data such as video, radar and the like. And a 5G network slicing technology is utilized to reserve special network resources for controlling the vehicle to enter and exit the cage. The high-speed moving scene of the vehicle is supported, and the stable and real-time and reliable transmission of the information of the channel is ensured. The 5G adopts the MIMO antenna technology, so that spatial multiplexing can be realized, and the anti-interference capability of information is improved. Communication reliability is improved through redundancy design, and single-point faults are avoided. Advanced scheduling and access technology is adopted, the method is suitable for complex environments such as underground, and QoS is guaranteed. High-speed stable wireless transmission is the basis for realizing cloud edge cooperative control. Receiving complete and accurate state information is a precondition for realizing fine control by the cloud. In conclusion, the 5G receiving unit plays a role in real-time and efficient data acquisition in the cloud control system, and provides key support for intelligent access control.
The analysis unit is based on a convolutional neural network, which can efficiently extract the spatial features of multidimensional sensor data. Through convolution and pooling operations it automatically learns high-level feature representations of the vehicle state. The deep network structure has a strong capability of fitting complex nonlinear functions and can accurately predict the vehicle state, and the end-to-end learning method learns the motion pattern of the vehicle directly from data without manual feature extraction. Exploiting the parallelism of convolutional networks reduces prediction time and meets the real-time control requirement. The predicted output includes important state information such as speed and acceleration, supporting subsequent fine control. The convolutional network can be implemented efficiently and in parallel on GPUs and similar chips, reducing computation latency, and a fully trained detection network realizes fast, accurate prediction on new data. Fusing deep-learning prediction with road-side sensing enables high-precision perception of vehicle motion. In conclusion, the analysis unit realizes intelligent perception and motion prediction based on a convolutional neural network, and is one of the key technologies for refined cloud control.
And the obstacle analysis unit is used for receiving the three-dimensional point cloud data and accurately reflecting the information such as the position, the shape and the like of the obstacle. By adopting a point cloud data clustering algorithm, each independent obstacle can be segmented. And calculating the geometric properties of each obstacle, such as position, volume and the like, and providing a basis for obstacle avoidance. Noise points can be removed by clustering the analysis point cloud, and the information quality is improved. And (3) utilizing the GPU to calculate three-dimensional point cloud data in parallel, and realizing rapid clustering and obstacle avoidance analysis. The point cloud data keeps three-dimensional space information, and obstacle avoidance path planning is more accurate. And outputting a detailed track avoiding the obstacle, and performing fine vehicle control. The point cloud analysis obstacle avoidance and the sensor are fused, so that more reliable environment perception can be realized. The point cloud data is richer than the image information, and is beneficial to more accurate collision prevention control decision. The method accords with the information fusion processing principle based on multi-source heterogeneous data. In summary, the three-dimensional point cloud analysis unit is a key technical means for realizing fine obstacle avoidance control.
More specifically, the obstacle analysis unit includes: the point cloud acquisition unit is used for directly acquiring three-dimensional point cloud data of the obstacle, and the three-dimensional point cloud data comprise complete information in a space form. The point cloud data keeps accurate three-dimensional coordinates, and is a basis for realizing obstacle avoidance control. And acquiring the point cloud from the road side sensing module to ensure reliable and consistent data sources. And a standardized output format is adopted, so that the subsequent processing module can quickly analyze and calculate. The amount of the point cloud data is large, and the acquisition unit needs to ensure a stable network transmission bandwidth. Data sampling or compression coding can be set to reduce network load without distorting details. The acquired point cloud needs to contain a time stamp, so that the data synchronism is ensured. The data integrity needs to be checked to avoid missing point information. And the real-time transmission of the point cloud is ensured by using a 5G and other high-speed network technology. The point cloud acquisition is a primary link for realizing the refined obstacle avoidance control, and lays a foundation for subsequent processing. In conclusion, the point cloud acquisition unit plays an important role in obstacle sensing and collision avoidance decision of the cloud platform.
The point cloud segmentation unit can effectively segment the mixed obstacle point cloud by using a hierarchical clustering algorithm to extract each independent object. Hierarchical clustering can automatically determine reasonable clustering numbers, and the need of manually designating category numbers is avoided. After each obstacle point cloud is separated, the geometrical characteristics of each obstacle point cloud can be analyzed independently. Segmentation improves the parallelism of point cloud processing, and each object can be analyzed in parallel. And extracting the complete point cloud of each obstacle, so that the shape and motion characteristic analysis of the obstacle are more accurate. The segmentation point cloud set is matched with the acquisition original point cloud to form two key links of point cloud processing. The point cloud segmentation is a basis for realizing accurate obstacle avoidance, and provides an analysis basis for subsequent track planning. And organically combining the point cloud segmentation with other perception and positioning modules to form an integral environment perception solution. The point cloud segmentation is realized by adopting an algorithm, so that manual processing is reduced, and the degree of automation is improved. The point cloud segmentation technology is fused with cloud computing, a high-speed network and the like, and an integral intelligent collision prevention scheme is facilitated. In summary, the point cloud segmentation unit is an important part and key link for constructing a vehicle obstacle avoidance cloud control scheme. In the application, a hierarchical clustering algorithm considers the application scene of underground vehicle control, and sets a smaller clustering distance threshold value to detect small obstacles. And setting a variance threshold according to the accuracy of the sensor, and avoiding over-segmentation. And reasonable scanning range is set by utilizing the sealing characteristics of the underground scene. And the motion information of the vehicle is integrated, and the scanning direction is optimized. Non-obstacle clusters are filtered according to the cluster type and the position characteristics. And clustering close to the tracks is given priority treatment, so that collision risk is reduced. And accelerating clustering operation by utilizing the parallel processing capability of the vehicle-mounted computing unit. And outputting the clustering result to a vehicle end control module to realize closed-loop obstacle avoidance. And the data is fused with other vehicle-mounted sensors, so that the environmental perception quality is improved. Incremental cluster updating is realized at the vehicle-mounted end, and the calculation requirement is reduced.
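The following is a minimal sketch of splitting a mixed obstacle point cloud with hierarchical clustering, using SciPy single-linkage clustering with a distance threshold; the threshold and the minimum cluster size used to filter noise are illustrative assumptions tuned for the small-obstacle scenario described above.

```python
# Sketch of hierarchical-clustering point cloud segmentation (SciPy).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def segment_obstacles(points: np.ndarray,
                      dist_thresh: float = 0.3,   # small threshold -> detect small obstacles
                      min_points: int = 20):
    """points: (N, 3) obstacle point cloud; returns a list of per-object clouds.
    Note: linkage is O(N^2); downsample the cloud before clustering if needed."""
    Z = linkage(points, method="single")                   # agglomerative clustering
    labels = fcluster(Z, t=dist_thresh, criterion="distance")
    clusters = []
    for lbl in np.unique(labels):
        cluster = points[labels == lbl]
        if len(cluster) >= min_points:                      # filter noise clusters
            clusters.append(cluster)
    return clusters
```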
The size acquisition unit calculates a three-dimensional bounding box of each obstacle point cloud set and directly obtains the extent of the object in the three directions. The bounding box contains the complete contour information of the object, so the size calculation is more accurate. The size information is critical for determining whether the vehicle can pass the obstruction or needs to wait: the passing clearance can be calculated by comparison with the vehicle body size, and considering the sizes in different directions avoids misjudgment based on the distance in a single direction. Combined with the vehicle motion state, the passing strategy and speed plan are optimized. The size data is fused with information from the vehicle-mounted laser radar and other sources for multi-source verification. The size acquisition unit works with the point cloud segmentation unit and performs its calculation as soon as a segmentation result is received; the size is computed rapidly, which preserves the real-time performance of the whole control system. Size acquisition is one of the key links in realizing active obstacle avoidance and, fused with cloud computing and other technologies, forms the overall obstacle avoidance decision solution of the vehicle. Therefore the size acquisition unit plays an important role in achieving fine obstacle avoidance control of underground vehicles.
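A minimal sketch of the bounding-box size computation and a rough passability check follows; the vehicle dimensions and clearance margin are hypothetical values for illustration.

```python
# Sketch of size acquisition: axis-aligned 3-D bounding box and a passability check.
import numpy as np

VEHICLE_SIZE = np.array([2.2, 1.8, 2.0])   # hypothetical width, height, length [m]
CLEARANCE = 0.3                             # hypothetical safety margin [m]

def obstacle_size(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) segmented obstacle cloud; returns extents along x, y, z."""
    return points.max(axis=0) - points.min(axis=0)

def can_pass(obstacle_points: np.ndarray, free_width: float) -> bool:
    """Rough passability test: does the remaining lane width fit the vehicle?"""
    size = obstacle_size(obstacle_points)
    return free_width - size[0] >= VEHICLE_SIZE[0] + CLEARANCE
```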
A position acquisition unit for acquiring the mass center of each point cloud set by adopting a shape fitting technology as the position information of the obstacle; the three-dimensional space position coordinates of the obstacle can be accurately obtained by utilizing a shape fitting technology. The centroid coordinates represent the overall position information of the obstacle to the greatest extent. The obstacle position is accurately obtained, and the vehicle is facilitated to plan the avoidance track. In combination with the vehicle's own location information, the risk of collision and the passage of headroom can be calculated. By using shape fitting rather than single point positioning, errors can be filtered and stability improved. Centroid position information reflects the movement trend of the obstacle, and active obstacle avoidance is achieved. Matching with units such as point cloud segmentation and size acquisition, and the like, and completing the acquisition of the position information. The calculation is light, and the real-time requirement of position tracking is met. Position tracking is a dynamic basis of obstacle avoidance decision, and directly influences control efficiency. And the method is combined with cloud computing and high-speed network technology to form an integral obstacle avoidance control solution. In conclusion, the position acquisition unit is a key module for realizing effective obstacle avoidance and efficient access of underground vehicles.
More specifically, the shape fitting technique includes: the least square method can effectively inhibit noise in the point cloud data and improve accuracy of normal vector calculation. And calculating the normal vector of the plane, and judging whether the surface of the point cloud is parallel to the moving direction of the vehicle. If the normal vector is perpendicular to the direction of motion, it indicates that the obstacle is bulky and needs to wait for passage. If the two are parallel, the vehicle can be judged to be an edge obstacle, and the vehicle can move in parallel to realize avoidance. The fitting calculated amount is relatively small, and the requirement of real-time collision prevention control can be met. The plane normal vector represents the overall orientation of the obstacle, which is beneficial to track planning. And the collision risk can be judged more accurately by combining the point cloud position. And the interference of noise points on normal vector calculation is restrained, and the stability of direction judgment is improved. The control device is one of key modules for realizing active obstacle avoidance, and can remarkably improve control efficiency. The intelligent obstacle avoidance system is deeply fused with technologies such as a cloud platform, a high-speed network and the like, so that an intelligent obstacle avoidance overall solution is formed. In conclusion, the plane normal vector calculation unit can greatly optimize the obstacle avoidance control effect of the underground vehicle. The projection along the direction of the plane normal vector can remove noise influence in other directions. Only the point cloud distribution information related to the vehicle movement direction is retained. The overall trend of the point cloud in that direction can be obtained by calculating the expectations of the projection distribution. Expected as centroid coordinates, the movement trend of the obstacle is reasonably reflected. The influence of noise can be reduced relative to calculating the geometric center of the entire point cloud. By desiring to filter errors, centroid coordinate stability is improved. The barycenter coordinates accurately reflect the translational movement of the obstacle, and active obstacle avoidance is realized. The calculation process is simple and efficient, and the real-time collision prevention requirement is met. Is an effective technical means for realizing quick and accurate centroid positioning. And the method is matched with units such as plane fitting and the like to form an integral obstacle avoidance solution. And by combining cloud computing and other technologies, the automation level of control can be further improved. In conclusion, the control efficiency of the underground vehicle in the obstacle avoidance process can be remarkably improved.
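The following is a minimal sketch of the shape-fitting steps just described: a least-squares plane fit (via SVD) gives the obstacle's normal vector, projecting the points onto that normal and taking the expectation gives a noise-suppressed centroid, and the normal/motion-direction relation is used for the wait-versus-bypass judgment stated above. The cosine thresholds are illustrative assumptions.

```python
# Sketch of least-squares plane fitting, projected centroid, and direction judgment.
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through (N, 3) points: returns (mean point, unit normal)."""
    mean = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mean)
    normal = vt[-1]                          # direction of smallest variance
    return mean, normal / np.linalg.norm(normal)

def projected_centroid(points: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Expectation of the projection along the normal, used as the obstacle position."""
    mean = points.mean(axis=0)
    offsets = (points - mean) @ normal       # scalar projection of each point
    return mean + offsets.mean() * normal

def must_wait(normal: np.ndarray, motion_dir: np.ndarray,
              perp_thresh: float = 0.2) -> bool:
    """Per the description above: a normal roughly perpendicular to the motion
    direction is treated as a bulky obstacle (wait); a roughly parallel normal
    as an edge obstacle the vehicle can move past."""
    return abs(float(normal @ motion_dir)) <= perp_thresh
```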
The trajectory generation unit inputs the size information and position information of the obstacles into an obstacle avoidance Trajectory Rollout algorithm, performs dynamic trajectory planning through an RRT algorithm, simulates the obstacle avoidance motion of the vehicle, and outputs an obstacle avoidance trajectory. The RRT (Rapidly-exploring Random Tree) algorithm starts from the initial state and keeps spreading the branches of a tree, creating paths by randomly sampling and connecting nodes. When an extension would hit an obstacle, the RRT attempts to grow around it, thereby avoiding collisions, and eventually finds a path connecting the initial state and the target state. This path is an obstacle avoidance trajectory that takes the obstacles in the environment into account, ensuring that the vehicle or robot can safely reach the target location. "Rapidly exploring" refers to the main goal of the algorithm: to quickly grow an exploration tree that finds feasible paths in the environment by randomly selecting sampling points and continuously expanding the tree structure. "Random" means that randomness is employed: sampling points generated randomly in space increase the diversity of the path search by trying different directions. "Tree" describes the data structure: the RRT starts from the initial state and gradually expands the branches of the tree until the target state is reached; the nodes of this tree represent the states of the vehicle or robot at different time steps, and the edges represent possible motion segments. The Trajectory Rollout algorithm is a path planning algorithm whose goal is to generate feasible trajectories while taking the obstacles in the environment into account. It generally comprises the following steps. Initialization: starting from the current position, define some possible directions and trajectories of movement. Trajectory rollout: generate a series of trajectory points for each possible direction to simulate the motion of the vehicle or robot. Obstacle detection: for each generated trajectory, check whether any obstacle intersects it. Trajectory selection: select a trajectory that does not intersect an obstacle, typically the shortest or safest one. Trajectory execution: use the selected trajectory to control the motion of the vehicle or robot. The Trajectory Rollout algorithm selects the best trajectory to avoid the obstacles by simulating multiple candidate trajectories, thereby realizing obstacle avoidance path planning. The algorithm can generate a dynamic, feasible obstacle avoidance trajectory from real-time obstacle information: the size and position information determine the space occupied by each obstacle and are the basis for avoidance, and the algorithm fully uses this information for prediction and outputs a safe trajectory. Real-time updates of the obstacle information drive the algorithm, realizing closed-loop obstacle avoidance, and each information input triggers the planning of several potential trajectories in parallel, ensuring real-time performance. The cloud computing platform is fully utilized for trajectory evaluation and selection, and a feasible trajectory satisfying the dynamic constraints is chosen, ensuring obstacle avoidance safety.
The Trajectory Rollout objective function considers factors such as obstacle distance and smoothness, so the obstacle information is used effectively to generate an optimized, smooth obstacle avoidance path. Deep coupling with the vehicle body control system allows obstacle avoidance instructions to be transmitted directly, improving control efficiency. In conclusion, the method fully exploits the advantages of the algorithm, plans a safe and feasible obstacle avoidance trajectory from rich obstacle information, and greatly improves the obstacle avoidance efficiency of underground vehicles.
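A compact 2-D RRT sketch matching the description above is given below: the tree grows from the start by random sampling, edges that hit obstacles are rejected, and the search stops when the goal is reached. Obstacles are modelled as circles built from the position and size information; the sampling region, step size and other numeric parameters are illustrative assumptions.

```python
# Compact 2-D RRT sketch with circular obstacles.
import math
import random

def rrt(start, goal, obstacles, step=0.5, max_iters=5000, goal_tol=0.5):
    """start/goal: (x, y); obstacles: list of (cx, cy, r). Returns a path or None."""
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    for _ in range(max_iters):
        # Bias 10% of the samples toward the goal; sample the rest in a fixed region.
        sample = goal if random.random() < 0.1 else (random.uniform(-50, 50),
                                                     random.uniform(-50, 50))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue                       # reject this edge and try another direction
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk back up the tree to recover the obstacle avoidance path.
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```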
More specifically, a receiving unit that receives the size information of the obstacle output by the size acquiring unit and the position information of the obstacle output by the position acquiring unit; the receiving unit is composed of an HID interface, a serial interface and the like and is used for receiving the sensor data. The size acquisition unit may obtain the obstacle size through image processing and point cloud analysis based on the camera and the lidar. The position acquisition unit can be based on modules such as a GPS, an encoder, an IMU and the like, and can be used for determining the position of the obstacle in combination with data fusion. The receiving unit needs to configure corresponding interfaces, and analyzes and caches the output information of different modules. The serial buffer can be arranged, so that the problem of data packet loss is avoided, and the receiving reliability is ensured. The parsing algorithm needs to distinguish the message formats of the different modules and extract the size and location data. A shared memory or queue may be established for caching and communicating the received information. It is necessary to ensure time synchronization and to add time stamp information to correlate multi-source data. The performance of the receiving unit directly affects the efficiency of the subsequent processing, and low delay and high throughput need to be guaranteed. The modularized parallel structure can be designed, and the parallelism of information receiving and processing is improved. In summary, the technical solution of the receiving unit needs to solve the problem of efficiently and stably receiving the sensor data, and provide a basic input for the subsequent processing.
The input unit performs data format conversion, organizing the obstacle information output by the receiving unit into a format that can be used directly by the obstacle avoidance planning algorithm. A data structure may be established to organize properties of each obstacle such as position, speed, shape and category. The data from the different sensors are aligned to a unified coordinate system using the time stamps. Any sensor noise present needs to be handled by filtering, curve fitting or similar optimization. A buffering mechanism can be adopted to smooth the information input and avoid jitter in the planning algorithm. The input unit must consider the real-time requirement the algorithm places on its input and keep the data preparation time suitably short. The sampling resolution can be chosen according to the requirements of the trajectory planning algorithm, and a local environment grid or graph representation is constructed. The algorithm's specific requirements on input format and coordinate system must be known so that customized adaptation can be performed; the input requirements of different planning algorithms may differ, and the input unit needs to adapt flexibly. Efficiently and stably feeding the environment information to the trajectory planning algorithm is the key objective of the input unit.
The modeling unit is used for constructing a motion space model containing barrier information by using a barrier avoidance Trajectory Rollout algorithm; and constructing a geometric model of the obstacle in space according to the position and size information of the obstacle. The physical dimensions and kinematic constraints of the vehicle body are taken into account to define its accessible space. Within the reachable space, discrete state points are densely sampled. And generating a plurality of potential obstacle avoidance tracks by using the track unfolding method by taking each sampling point as a starting point. And performing track evaluation, calculating indexes such as safety, smoothness and the like, and selecting the track with the highest score. And continuing to spread out the sampling and track generation along the selected track direction. And repeating the process to finally form the obstacle avoidance track tree covering the whole space. The track tree represents an obstacle avoidance path from any position to a target position, and a complete motion space model is formed. And updating obstacle information in real time, and dynamically adjusting the model. The vehicle searches the track tree according to the position of the vehicle and determines obstacle avoidance actions. The method fully utilizes the algorithm to carry out high-efficiency space modeling and provides support for active obstacle avoidance of the vehicle.
The planning unit randomly samples expansion nodes in the motion space model to form an RRT tree covering the space. An incremental search is performed on the RRT tree from the current location of the vehicle to the target location; during the search, node selection is optimized with the A* algorithm to accelerate convergence. An initial raw trajectory is formed from the sequence of connected nodes in the RRT tree, and the raw trajectory is smoothed to improve comfort. The trajectory is checked against motion constraints such as the minimum turning radius. A safe-distance threshold is set and the minimum distance between the trajectory and each obstacle is calculated; if the trajectory is not sufficiently far from an obstacle, the planner returns and reselects. Finally a smooth, safe obstacle avoidance trajectory satisfying the constraints is obtained, and the process is repeated in real time to generate a dynamic obstacle avoidance trajectory. Trajectory control instructions are output and the vehicle is controlled in closed loop to complete the obstacle avoidance. The application combines the advantages of efficient RRT search and A* optimization, can quickly generate a safe obstacle avoidance trajectory that satisfies the constraints, and improves the obstacle avoidance efficiency of underground vehicles. The output unit outputs the vehicle trajectory satisfying the obstacle avoidance constraints as the obstacle avoidance trajectory.
The cloud computing decision unit collects state information of the vehicle, including speed and direction, and acquires the environment information collected by the vehicle-mounted sensors. A vehicle motion prediction model is built from historical data; given the current state, it predicts the motion state at subsequent moments. A Digital Twin obstacle avoidance model of the vehicle is constructed, and the vehicle state and environment data are fed into the model. Several potential obstacle avoidance trajectories are simulated and planned, each trajectory is evaluated against indexes such as safety and smoothness, and the trajectory with the highest score is selected as the suggested output. The trajectory is fed back to the vehicle in the form of control commands, and the vehicle executes the commands to complete the obstacle avoidance. The actual execution data of the vehicle is collected and used to update the Digital Twin model, which is continuously optimized, realizing iterative upgrading of the active obstacle avoidance capability. The application makes full use of cloud computing resources and the Digital Twin model to realize efficient and intelligent active obstacle avoidance decisions.
And the instruction unit is used for analyzing the direction and position information of the suggested track. It is determined whether the trajectory will traverse a particular cage. The time when the vehicle enters and leaves the cage is pre-determined in advance. And calculating the number of the cage door to be opened according to the pre-judging time. A control command is generated to open the corresponding cage door. And monitoring the real-time position of the vehicle, and correcting the door opening time. When the vehicle completely enters the cage, a cage door closing command is generated. The vehicle position is continuously monitored, and the cage door is prevented from being closed in advance. And encrypting and checking the instruction to ensure the reliability of the instruction. And sending an opening/closing command through the safety controllable interface. And receiving cage door state feedback and performing closed-loop control. And the vehicle-mounted system is cooperated with the vehicle-mounted system to optimize the vehicle motion control. The application can intelligently plan and control the opening and closing of the cage door in advance according to the track, and effectively improve the efficiency of the underground vehicle cage entering and exiting.
And the priority merging unit is used for providing an interface for inputting control instructions by a user, such as a handheld terminal or a control panel. The user input instruction adopts a high priority protocol to package and identify. And receiving a standard instruction output by the cloud computing sub-module. And analyzing the two types of instructions, and extracting control content and priority information. And placing the instructions in the same control period into a queue cache. The instruction queues are ordered according to priority levels. Lower priority instructions are deleted or overridden. And finally integrating and adjusting instruction parameters. Outputting the combined unified instruction to the executing mechanism. And monitoring instruction execution feedback, and closing a control loop. And continuously receiving various control instructions and dynamically adjusting output. Allowing the user to modify or cancel the issued instruction at any time. The highest priority of user instructions may override the automatic programming instructions. The application realizes reasonable and effective combination of the priorities of the user instruction and the automatic instruction, and gives consideration to control efficiency and safety.
The output unit is used for sending the combined control instruction to the road end computing sub-module;
and the recording unit is used for collecting all relevant data of the obstacle avoidance task executed by the vehicle. The data includes vehicle status, environmental information, control instructions, and the like. And formatting the acquired data. These structured and unstructured data are stored using a NoSQL database. NoSQL databases include MongoDB, HBase, etc. And selecting a proper NoSQL database according to the data characteristics. And a flexible table structure is designed, so that data with different formats can be conveniently inserted. The stored content includes multimedia data such as text, voice, video, etc. And indexes are established by using time stamps and the like, so that query analysis is convenient. The extensibility of NoSQL is utilized to handle large storage demands. Storage and access are implemented in conjunction with cloud computing technology. Data query and utilization functions are provided through the network interface. The stored data is used for vehicle trajectory optimization and model training. By adopting the NoSQL database to elastically store multi-source heterogeneous data, data support is provided for later application analysis, and the intelligent level of the system is improved.
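A minimal sketch of storing such heterogeneous task records in MongoDB with the pymongo driver is shown below; the connection string, database and collection names, and the document fields are illustrative assumptions.

```python
# Sketch of NoSQL storage of obstacle-avoidance task records (MongoDB / pymongo).
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")       # hypothetical endpoint
records = client["mine_control"]["avoidance_records"]

# A timestamp index supports the time-range queries used for later analysis.
records.create_index([("timestamp", ASCENDING)])

def log_task(vehicle_state: dict, environment: dict, command: dict) -> None:
    """Flexible document schema: fields may differ between records."""
    records.insert_one({
        "timestamp": datetime.now(timezone.utc),
        "vehicle_state": vehicle_state,        # speed, heading, position ...
        "environment": environment,            # obstacle list, point cloud summary ...
        "control_command": command,            # cage id, open/close, source ...
    })
```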
The exception handling unit deploys a network delay detection module and analyzes the delay after exponential moving average filtering. Delay thresholds of different levels are set, with different thresholds corresponding to different fault levels. The response time of the cloud computing unit is monitored to judge whether it has timed out, and the number of occasions on which the computing unit gives no response for a long time is counted. The overall fault level of the cloud computing unit is determined according to the configured rules; if the number of faults or the delay exceeds a threshold within a given time, the state is judged abnormal. On an abnormality, the activation mechanism switches to a standby decision unit on the vehicle side, which performs simplified obstacle avoidance planning on the vehicle-mounted computing platform. When the cloud computing unit returns to normal, remote decision making is restored through network reconnection. The abnormality-activation and reconnection process switches control modes smoothly and automatically, ensuring that the vehicle can still perform basic obstacle avoidance when the cloud fails. Through the design of the exception handling unit, the robustness and availability of the system are improved.
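A minimal sketch of the exponential-moving-average delay filter and the graded fault levels that trigger fallback to the on-board backup planner follows; the smoothing factor and the threshold values are illustrative assumptions.

```python
# Sketch of EMA-filtered delay monitoring with graded fault levels.
class DelayMonitor:
    def __init__(self, alpha: float = 0.2,
                 levels=((50.0, "normal"), (150.0, "degraded"), (500.0, "fault"))):
        self.alpha = alpha
        self.levels = levels          # (upper bound in ms, level name)
        self.ema_ms = None

    def update(self, delay_ms: float) -> str:
        # The exponential moving average suppresses single delay spikes.
        self.ema_ms = delay_ms if self.ema_ms is None else (
            self.alpha * delay_ms + (1 - self.alpha) * self.ema_ms)
        for bound, level in self.levels:
            if self.ema_ms <= bound:
                return level
        return "lost"                 # switch to the vehicle-side backup planner

# Usage: level = DelayMonitor().update(measured_round_trip_ms)
```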
And the control sub-module receives a control instruction from the road end or the cloud end through the standard interface. The instruction content includes control information for opening or closing a specified cage. And carrying out security check on the instruction and filtering illegal instructions. An instruction buffer queue is maintained, and the instruction execution sequence is smoothly scheduled. And gradually executing opening or closing actions on the corresponding cage doors according to the instructions. The physical movement of the cage door is realized by adopting a hydraulic, motor or pneumatic actuating mechanism. Setting a speed range and a torque limit, and controlling the execution process to be stable and safe. And a feedback sensor is used for monitoring the opening and closing state of the door body. And adjusting the execution force and speed according to the door feedback to realize accurate control. And the system is in linkage and cooperative work with a vehicle-mounted system, so that the vehicle access control efficiency is optimized. The instruction input is continuously monitored, and the newly arrived opening or closing instruction is processed. All the motion control adopts a closed loop feedback mode, so that stability and reliability are ensured. Through the design of the control submodule, the opening and closing of the tank door can be accurately and smoothly controlled according to the instruction.
The intelligent networking judging sub-module judges whether a vehicle is an intelligent network-connected vehicle through comprehensive detection of the vehicle. Specifically, the intelligent networking judging sub-module includes a vehicle-mounted terminal signaling analysis unit, which sets the allocation rule for intelligent network-connected vehicle identification codes and distributes the rule to each vehicle-mounted terminal. It receives the terminal signaling uploaded by the vehicle, extracts the identification code field from the signaling, and parses the information contained in the identification code according to the allocation rule. The identification code includes information such as manufacturer ID, model and electronic serial number. The parsed information is matched against a standard database: if the identification code information matches completely, the vehicle is confirmed to support intelligent networking; if the identification code is non-compliant, intelligent networking is not supported. Vehicles that support intelligent networking have their identification codes and information entered into a whitelist; vehicle terminals not on the whitelist are flagged and their functions are restricted. The identification code allocation rule is updated regularly to enhance security. Identification code verification is an important means of access management for network-connected vehicles; through vehicle-mounted terminal signaling analysis, vehicles supporting intelligent networking can be effectively identified and system security is ensured.
And (3) formulating sensor configuration standards of the intelligent network-connected vehicle, including requirements on sensor types, quantity, performance and the like. The standard defines the necessary sensors and optional sensor levels. And receiving the sensor data uploaded by the vehicle in real time. The sensor type is identified based on the data format. And comparing the uploaded data with the standard configuration to determine the condition of actually installing the sensor. And analyzing the update frequency of the sensor data, and judging the working state of the sensor. And evaluating the sensor performance parameters according to the fluctuation range of the data. And comparing the monitoring result with the standard configuration requirement. And if the number and the performance of the sensors reach the standards, judging that the requirements are met. If the insufficient item exists, recording the non-conforming item and issuing a reminding notification. The sensor state is continuously monitored, and the performance change trend is tracked. Sensor configuration directly affects the intelligentized capability of the vehicle. Through real-time monitoring analysis, the vehicle sensor configuration can be ensured to meet the requirements of intelligent networking.
The communication protocol verification unit presets the standard communication protocol specification of intelligent network-connected vehicles. The specification defines the protocol formats and contents of the various layers, such as the physical layer and the link layer, and a protocol library database containing all standard protocols is constructed. When a communication access request initiated by the vehicle side is received, the communication-protocol-related data is extracted from the request and the protocol data format is parsed layer by layer. The parsing result is compared with the prescribed format in the protocol library, and compatibility comparisons between different protocol versions are allowed. If the protocol information matches completely, the verification passes; if there is any non-compliance, the verification fails. A communication request that passes verification enables the standard reply mechanism, while a request that fails verification may optionally receive an error response. Every vehicle communication request is continuously validated to ensure compliance with the specification. Through the protocol verification unit, non-compliant requests can be filtered out and communication security is ensured.
The comprehensive judging unit is used for finally judging that the vehicle is an intelligent network-connected vehicle only when the judging conditions of all the units are met, and the method comprises the following steps: and the accuracy and the reliability of the judgment are improved by combining multi-condition judgment. Meeting the identification code specification is a basic premise, and ensuring that the vehicle supports the intelligent networking function. Sensor configuration directly affects sensing capabilities and supports intelligent algorithms. Communication protocol verification ensures security and stability of network interconnection. The multi-condition constraint ensures that the judgment result is accurate and reasonable. And the situation of misjudgment or missed judgment caused by a single condition is avoided. Meanwhile, multiple conditions are met, and the disguise and cheating of illegal vehicles can be effectively prevented. The safety and the reliability of the intelligent networking system are ensured. A trusted data source is provided for various applications that rely on vehicle intelligence. The comprehensive judgment unit realizes intelligent comprehensive verification of the vehicle. Technical support is provided for standardized management of intelligent network vehicles. Therefore, the design enhances the judgment accuracy and is beneficial to constructing a reliable intelligent network vehicle-to-vehicle system.
Specifically, for the intelligent network-connected vehicle, a control instruction is sent to the vehicle and cage control module; and for the non-intelligent network-connected vehicle, sending a control instruction to a cage control module. The intelligent network-connected vehicle can receive and execute richer control instructions, and the intelligent advantage of the vehicle is exerted. Aiming at the intelligent network-connected vehicle, the complex control such as vehicle state monitoring, motion control, fault diagnosis and the like can be performed. And only a simple cage control instruction is sent to the non-intelligent network-connected vehicle, so that potential safety hazards caused by complex instructions are avoided. The non-intelligent network-connected vehicle can only realize basic well access control and does not have intelligent interaction capability. The respective control can exert the technical characteristics of the two types of vehicles. The intelligent network-connected vehicle can optimize the cooperative control of automatic driving and underground equipment. The non-intelligent network-connected vehicle can only be driven manually, and the system can ensure basic safety. The safety and efficiency of the overall system can be improved by sending instructions according to the vehicle capabilities. The management of the hybrid fleet is facilitated, and the cooperative operation of vehicles with different functions is realized. And the control strategies are distinguished, so that intelligent upgrading transition of the motorcade can be smoothly realized.
FIG. 2 is an exemplary flow chart of an access cage control method for a mining scene aware fusion technique according to some embodiments of the present description, as shown in FIG. 2, comprising: s210, a road side sensing step, namely setting an image acquisition unit, capturing image information in the driving process by adopting a camera, and acquiring the characteristics of the surrounding environment of the vehicle; a radar unit is arranged, radar electromagnetic waves are emitted, reflected signals are received, and distance information of obstacles and vehicles is obtained; an inertial measurement unit is arranged, and an inertial measurement integrated navigation system consisting of a gyroscope, an accelerometer and the like is used for measuring the motion parameters of the vehicle; setting a sensor unit, including a vehicle speed sensor, a steering angle sensor and the like, for detecting a vehicle running state parameter; setting a positioning unit, and determining the absolute position of a vehicle by adopting a satellite navigation system; the image acquisition unit, the radar unit, the inertial measurement unit, the sensor unit and the positioning unit form a multi-source heterogeneous sensor cluster perceived by a road side together, and the multi-source heterogeneous sensor cluster is used for acquiring self information and surrounding environment information of a running vehicle and providing basic data input for subsequent road side calculation and cloud computing. Through the fusion use of multiple sensors, comprehensive sensing and stable acquisition of complex underground scenes can be realized, the accuracy and reliability of information acquisition are improved, and support is provided for the control of the access cage of the whole scheme.
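As an illustration of how the multi-source heterogeneous sensor cluster could be bundled into a single perception frame for the subsequent road end and cloud computing steps, a sketch follows. The field names, units and values are assumptions for illustration only; the patent does not prescribe a particular data structure.

```python
# Illustrative sketch of one synchronized roadside perception frame that
# aggregates camera, radar, IMU, vehicle-state and positioning readings.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PerceptionFrame:
    timestamp: float                         # seconds on a shared clock
    image: bytes                             # encoded camera frame
    radar_ranges: List[Tuple[float, float]]  # (bearing rad, distance m)
    imu: Tuple[float, float, float]          # ax, ay, yaw rate
    speed_mps: float                         # from the vehicle speed sensor
    steering_deg: float                      # from the steering angle sensor
    position: Tuple[float, float]            # absolute coordinates from GNSS

frame = PerceptionFrame(
    timestamp=0.0,
    image=b"",
    radar_ranges=[(0.1, 12.5)],
    imu=(0.0, 0.0, 0.01),
    speed_mps=3.2,
    steering_deg=-1.5,
    position=(117.12, 36.20),
)
print(f"nearest radar return: {frame.radar_ranges[0][1]} m")
```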
S220, calculating a road end, namely presetting a neural network model on an industrial personal computer, wherein the model can analyze vehicle information and environment information after being trained by a deep learning algorithm; the road end computing unit calls the neural network model and inputs the multi-source heterogeneous information acquired in the road side sensing step; analyzing information by the neural network model, extracting the state of the vehicle and the surrounding environment characteristics, and carrying out preliminary judgment; according to the preliminary judgment result, the road end computing unit can quickly generate a preliminary control instruction for the cage; the preliminary control instruction comprehensively considers the vehicle state and the local environment information and is used for autonomous response in emergency; the road end calculation step utilizes local calculation resources of the industrial personal computer and a preset neural network to realize the instant intelligent analysis of information, generate an initial obstacle avoidance control instruction and improve the control instantaneity and the autonomous response capability of the system. The road side primary instruction output and the cloud instruction are combined for use, so that the low-delay advantage of road side calculation is fully exerted, and good support is provided for overall cage access control.
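The sketch below is a deliberately simplified stand-in for the preset neural network on the industrial personal computer: a single sigmoid unit maps fused roadside features to a preliminary cage instruction, illustrating the low-latency local decision. The feature layout, weights and threshold are assumed for illustration.

```python
# Hedged stand-in for the road-end preliminary decision: one sigmoid unit
# instead of the trained neural network described in the disclosure.
import numpy as np

def preliminary_decision(features: np.ndarray, weights: np.ndarray,
                         bias: float, threshold: float = 0.5) -> str:
    """Map fused roadside features to a preliminary cage instruction."""
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))  # sigmoid
    # High score: the local scene is judged safe enough to open the cage.
    return "open_cage" if score >= threshold else "hold_cage"

features = np.array([0.8, 0.1, 0.05])   # e.g. distance-to-cage, speed, risk
weights = np.array([2.0, -1.0, -3.0])   # illustrative trained parameters
print(preliminary_decision(features, weights, bias=-0.2))
```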
S230, cloud computing, namely presetting an algorithm model based on deep learning on a cloud server, and analyzing complex scene information; the model has strong feature extraction and analysis capability, and can comprehensively understand the state and surrounding environment of the vehicle; the cloud platform collects a large amount of data acquired by the road side sensor, and training optimization is carried out by using a deep learning model; the cloud computing unit calls the model to analyze real-time data uploaded by the road side and monitors the dynamic information of the underground obstacle; combining high-precision map information and a three-dimensional environment model to perform multi-step long prediction and track planning; calculating and optimizing an obstacle avoidance track of the vehicle by using an obstacle avoidance algorithm; the cloud computing unit generates an accurate control instruction aiming at the current scene by integrating various information; the cloud computing utilizes a powerful deep learning technology and rich data to perform accurate environment perception analysis, outputs a control instruction meeting the safety obstacle avoidance requirement, and realizes intelligent decision and control on complex underground scenes.
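The multi-step prediction mentioned above is performed by the cloud deep learning model; as a simplified stand-in, the sketch below rolls an obstacle forward under a constant-velocity assumption to produce several future positions before trajectory planning. All numbers are illustrative.

```python
# Simplified stand-in for cloud-side multi-step obstacle prediction: a
# constant-velocity roll-out instead of the learned motion model.
import numpy as np

def predict_positions(position: np.ndarray, velocity: np.ndarray,
                      steps: int, dt: float) -> np.ndarray:
    """Return an array of shape (steps, 2) with future x, y positions."""
    t = np.arange(1, steps + 1)[:, None] * dt
    return position[None, :] + t * velocity[None, :]

obstacle_xy = np.array([10.0, 2.0])   # metres, relative to the cage opening
obstacle_v = np.array([-0.5, 0.0])    # m/s, moving toward the cage
print(predict_positions(obstacle_xy, obstacle_v, steps=5, dt=1.0))
```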
The cloud computing step comprises the following sub-steps: performing obstacle segmentation on the point cloud and image information acquired by the sensors by using a clustering algorithm; analyzing the spatial geometric characteristics of each obstacle by adopting two-dimensional bounding box detection and three-dimensional voxel segmentation algorithms; calculating the centroid coordinates of each obstacle and fitting its boundary contour to obtain accurate size and position information; performing multi-step prediction according to the obstacle category and its motion trajectory parameters to obtain the obstacle's possible future positions; and, based on an obstacle avoidance planning algorithm and the known constraint conditions of the vehicle, calculating an obstacle avoidance trajectory that satisfies the safety constraints. Environmental factors and the vehicle's own state are considered together, so that intelligent avoidance of complex dynamic obstacles is realized. These sub-steps make full use of deep learning techniques to accurately acquire dynamic obstacle information and plan a feasible obstacle avoidance trajectory, realizing perception analysis and obstacle avoidance decisions for the complex underground environment.
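A hedged sketch of the segmentation and geometry-extraction sub-steps follows, using DBSCAN as a stand-in for the clustering algorithm and an axis-aligned bounding box plus cluster centroid for size and position. The specific algorithm choices and parameter values are assumptions, not requirements of this disclosure.

```python
# Hedged sketch of obstacle segmentation: cluster a 3D point cloud into
# obstacles, then take each cluster's axis-aligned bounding box as its size
# and its centroid as its position. eps/min_samples are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def segment_obstacles(points: np.ndarray, eps: float = 0.5,
                      min_samples: int = 10):
    """points: (N, 3) x, y, z coordinates; returns a list of obstacle dicts."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    obstacles = []
    for label in set(labels) - {-1}:              # -1 marks noise points
        cluster = points[labels == label]
        obstacles.append({
            "centroid": cluster.mean(axis=0),                    # position
            "size": cluster.max(axis=0) - cluster.min(axis=0),   # bbox extent
            "num_points": len(cluster),
        })
    return obstacles

rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([5, 0, 1], 0.2, (200, 3)),    # one obstacle
                   rng.normal([12, 3, 1], 0.2, (150, 3))])  # another obstacle
for obs in segment_obstacles(cloud):
    print(np.round(obs["centroid"], 2), np.round(obs["size"], 2))
```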
The cloud receives and analyzes the vehicle networking signaling and the sensor information uploaded by the vehicle; judging a communication protocol and a control interface supported by the vehicle according to the signaling; analyzing information such as the type, distribution, performance parameters and the like of sensors mounted on the vehicle; the intelligent level and networking capability of the vehicle are judged by combining the information; for intelligent networking vehicles, the cloud generates rich control instructions, and the autonomous obstacle avoidance capability of the vehicles is brought into play; for the non-intelligent vehicle, the cloud generates a simple control instruction to directly remotely control the vehicle to avoid the obstacle; the substep can identify the intelligent degree of different vehicles, generate control instructions in a targeted manner and adopt customized control strategies for different vehicle types. The autonomous obstacle avoidance capability of the intelligent network-connected vehicle is fully exerted, the application range of the system is enlarged, and the expansibility of the scheme is enhanced.
The cloud computing unit on the cloud server generates an accurate instruction for cage access control; the road end calculation unit on the industrial personal computer generates a preliminary control instruction for cage access. The instruction priority merging unit acquires both instructions and, based on the cloud-instruction-priority principle, performs fusion optimization: when the two instructions are consistent, the instruction is executed directly; when the instructions diverge, the cloud computing instruction is adopted and the road side computing instruction is discarded. The instruction priority merging unit then outputs the final control instruction after fusion optimization. This reasonably integrates the advantages of the accurate cloud instruction and the real-time road end instruction to form the optimal control decision for the current scene. Following the cloud-instruction-priority principle ensures the decision stability of multi-source information fusion and improves the reliability and safety of the system.
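The cloud-priority rule itself is small enough to state as code. The sketch below passes consistent instructions through, lets the cloud instruction win on divergence, and (as an assumed extension not stated above) falls back to the road-end instruction when no cloud instruction is available, for example during a link outage.

```python
# Minimal sketch of the cloud-priority merge rule. The None fallback is an
# assumption added for illustration, not part of the disclosure.
from typing import Optional

def merge_instructions(cloud_cmd: Optional[str], roadside_cmd: str) -> str:
    if cloud_cmd is None:
        return roadside_cmd      # assumed fallback when the cloud is silent
    if cloud_cmd == roadside_cmd:
        return cloud_cmd         # consistent: execute directly
    return cloud_cmd             # divergent: cloud instruction has priority

print(merge_instructions("open_cage", "open_cage"))   # open_cage
print(merge_instructions("hold_cage", "open_cage"))   # hold_cage (cloud wins)
print(merge_instructions(None, "open_cage"))          # open_cage (fallback)
```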
S240, cage control, wherein the instruction priority merging unit generates a final control instruction for the current scene; the instruction comprises control information for opening or closing the cage; the cage control execution unit receives and analyzes the final control instruction; the driving motor and the executing mechanism are used for mechanically controlling the cage; if the instruction content is that the cage is opened, opening a cage door to enable the vehicle to pass through; if the instruction content is that the cage is closed, closing a cage door to prevent the vehicle from passing through; the cage control step completes the closed loop from the instruction to the execution action, and realizes the accurate control of cage opening and closing. The step is an output link of the whole method, and finally, the active driving of the cage according to environment perception and planning decision is realized, so that the automation degree of the underground cage entering and exiting process is improved.
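A minimal sketch of the instruction-to-actuation closed loop is shown below; the actuator class is a mock placeholder rather than the PLC interface used in practice, and the instruction strings are assumed.

```python
# Illustrative sketch of cage control execution: parse the final merged
# instruction and drive a mocked actuator to open or close the cage door.
class CageDoorActuator:
    def __init__(self):
        self.state = "closed"

    def drive(self, target: str) -> None:
        # In a real system this would command the drive motor / PLC output.
        self.state = target
        print(f"cage door is now {self.state}")

def execute(final_instruction: str, actuator: CageDoorActuator) -> None:
    if final_instruction == "open_cage":
        actuator.drive("open")       # allow the vehicle to pass
    elif final_instruction in ("close_cage", "hold_cage"):
        actuator.drive("closed")     # prevent the vehicle from passing
    else:
        raise ValueError(f"unknown instruction: {final_instruction}")

execute("open_cage", CageDoorActuator())
```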
In summary, the road side sensing step uses multiple sensors to acquire vehicle state and environment information, which avoids the limitations of a single sensor and achieves comprehensive sensing of the complex scene. The road end calculation step uses a preset neural network model for preliminary judgment, which reduces the computational pressure on the cloud platform and improves the real-time response capability of system control. The cloud computing step uses a deep learning model to analyze obstacle information and a planning algorithm to generate an obstacle avoidance trajectory, achieving intelligent obstacle avoidance; vehicle information is also judged to realize customized control. The instruction merging step fully combines the advantages of the road side and cloud instructions and forms the optimal control strategy through comprehensive judgment. The cage control step completes the execution action and serves as the output of the system. The whole system thus forms a closed-loop control flow from information acquisition and perception to decision and execution. Cloud computing provides decision intelligence, road end computing reduces dependence on the cloud, and road side perception realizes multi-source fusion. This cloud-edge computing framework improves the agility and generalization capability of the system and the stability of the output control. The result is an overall solution for dynamic vehicle perception and autonomous obstacle avoidance that greatly improves the control capability and efficiency of cage access in complex underground scenes.
Fig. 3 is a schematic diagram of an exemplary application scenario of an access cage control system of a mining scene sensing fusion technology according to some embodiments of the present disclosure. As shown in Fig. 3, when an intelligent network-connected vehicle A approaches the cage, the multi-sensor cluster (laser radar, camera, etc.) in the road side sensing module starts to collect information on vehicle A and its surroundings, and vehicle A also actively uploads its state data through the DSRC module. The industrial personal computer computing unit, connected to the road side sensing module by wired communication, calls a preset convolutional neural network model to analyze this information and generates a preliminary control instruction for opening the cage door. Meanwhile, the vehicle A information is also transmitted to the cloud server through a wireless communication module such as 5G; the deep learning model on the cloud GPU server performs multi-source data fusion judgment, forms an accurate control instruction for opening the cage door, and issues it to the road side sensing module for execution. After receiving the cloud instruction, the industrial personal computer of the road side sensing module merges it with the local instruction by priority, finally forms a deterministic control command for opening the cage door, and drives the PLC execution unit to open the cage door. After receiving the door-open signal, vehicle A steadily enters the cage and informs the road side sensing module of its in-cage state. The industrial personal computer of the road side sensing module then generates an instruction to close the cage door, isolating vehicle A inside the cage. Later, when vehicle A needs to drive out, the industrial personal computer generates a control instruction to open the cage door again, and after vehicle A has driven out of the cage it outputs an instruction to close the cage door. In the prior art, by contrast, the vehicle cannot exchange information with the road side sensing module in real time, and cage access depends entirely on the passive sensing and control of the road side sensing module, so efficiency and the degree of intelligence are low.
Fig. 4 is a schematic diagram of a second exemplary application scenario of an access cage control system of a mining scene perception fusion technology according to some embodiments of the present disclosure. As shown in Fig. 4, when several intelligent networked vehicles A and B approach the cage at the same time, the road side perception module perceives the vehicle information in advance and, through optimization calculation, sends control instructions and speed commands to A and B respectively, directing them to form a low-speed queue and pass through the cage in sequence. The calculation unit of the road side perception module comprehensively considers factors such as routes and timing, actively schedules and controls the intelligent network-connected vehicles, and realizes unattended automatic platooning. When an intelligent vehicle meets a non-intelligent vehicle, the road side perception module or the cloud computing unit gives priority to instructing the intelligent network-connected vehicle so as to coordinate the scheduling. When equipment is damaged or the network is interrupted, the system supports a manual takeover mode, and an operator can restore the system through an initialization operation, ensuring the continuity of key work. In conclusion, this application scenario fully embodies the adaptability and reliability of the application to multi-vehicle coordination and equipment failure.
Fig. 5 is a schematic diagram of an exemplary application scenario of an access cage control system for a mining scene awareness fusion technology according to some embodiments of the present disclosure. As shown in Fig. 5, when the vehicle approaches the cage, the road side awareness module identifies, via V2X communication, that an obstacle is present at the cage opening. The calculation unit of the road side sensing module generates a stop-and-wait or deceleration command and transmits it to the vehicle's OBU through a DSRC signal or similar link. The OBU forwards the instruction to the vehicle-mounted controller, which drives the actuator to complete the braking or deceleration operation. Meanwhile, the road side perception module can send the information to the cloud platform for data recording, event processing, rerouting planning and similar operations. DSRC (Dedicated Short Range Communications) is a short-range, high-speed wireless communication technology. The OBU (On Board Unit) is a vehicle-mounted communication control module installed on an autonomous vehicle. In this emergency scenario, the road side sensing module can respond quickly to the obstacle and directly control the vehicle, so that accidents are avoided. The cloud platform has more global cooperative capability and can perform post-event optimization, improving the adaptability of the system. This scenario shows the rapid response and cooperative advantage of the application in a complex environment and enhances the safety of underground automatic driving. In this technical scheme, DSRC is the technical means for wireless communication between the road side sensing module and the on-board unit; it can rapidly transmit control instructions and is characterized by low delay and high reliability. The OBU, installed on the autonomous vehicle, receives DSRC instructions from the road side unit and forwards them to the on-board actuator to control the vehicle. The cooperative application of DSRC and OBU realizes accurate road-side control of the vehicle and improves the safety and fast-response capability of the system, which is one of the key technologies of this scheme.
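The roadside-to-vehicle relay described above can be sketched as follows. The message fields and class names are illustrative and do not reflect an actual DSRC message format; the sketch only shows the hop from roadside command to OBU to the vehicle-mounted controller and actuator.

```python
# Hedged sketch of the roadside-to-vehicle relay: the road side unit issues a
# command, the OBU receives it and forwards it to the vehicle controller,
# which performs braking or deceleration. All names are placeholders.
from dataclasses import dataclass

@dataclass
class RoadsideCommand:
    command: str         # e.g. "stop_and_wait" or "decelerate"
    target_speed: float  # m/s, only meaningful for "decelerate"

class VehicleController:
    def actuate(self, cmd: RoadsideCommand) -> str:
        if cmd.command == "stop_and_wait":
            return "braking to a full stop"
        if cmd.command == "decelerate":
            return f"decelerating to {cmd.target_speed} m/s"
        return "ignoring unknown command"

class OnBoardUnit:
    """Stand-in for the OBU: receives a roadside command and forwards it."""
    def __init__(self, controller: VehicleController):
        self.controller = controller

    def receive(self, cmd: RoadsideCommand) -> str:
        return self.controller.actuate(cmd)

obu = OnBoardUnit(VehicleController())
print(obu.receive(RoadsideCommand("decelerate", 1.5)))
```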
Fig. 6 is a schematic diagram of an exemplary application scenario of an access cage control system for a mining scene awareness fusion technology according to some embodiments of the present disclosure. As shown in Fig. 6, when an emergency occurs underground, the road side awareness module can actively report it, or personnel can report it manually. The cloud computing unit, which has a grasp of the overall situation, can rapidly decide what emergency control to perform. The cloud directly issues a control instruction to the execution unit of the road side sensing module, commanding it to open or close the cage. After receiving an emergency instruction to open or close the cage, the road side sensing module immediately drives the actuator to perform the corresponding operation. This scenario exploits the global information and quick judgment capability of the cloud, realizes a rapid response to emergencies, and improves the adaptability of the system to complex underground conditions. The coordination between the cloud and the road side sensing module ensures unified human-machine command in emergencies and enhances the safety and reliability of the system.

Claims (10)

1. An access cage control system for a mining scene perception fusion technology, comprising:
the road side sensing module is used for detecting running vehicle information and environment information;
The computing module is connected with the road side sensing module, receives the vehicle information and the environment information sent by the road side sensing module and generates a control instruction;
the cage control module is connected with the calculation module, receives the control instruction sent by the calculation module, and executes opening or closing operation of the cage;
the communication module is respectively connected with the road side sensing module, the calculation module and the cage control module and is used for communicating through V2I or V2N.
2. The access cage control system of the mining scene perception fusion technique according to claim 1, wherein:
the road side perception module comprises:
an image acquisition unit that acquires vehicle information and environmental information using a visual technique;
the radar unit is used for detecting the position and the speed of a vehicle by using radars with different frequency bands and adopting a CFAR interference suppression and Kalman filtering target tracking algorithm;
the inertial measurement unit is used for acquiring measurement data of the vehicle by using the IMU, and processing the measurement data by using an extended Kalman filtering algorithm to acquire a motion state of the vehicle, wherein the motion state of the vehicle comprises speed and acceleration;
the sensor unit adopts an extended Kalman filter to fuse multi-source heterogeneous data comprising an IMU, a radar unit, an infrared sensor and an air image sensor to obtain vehicle posture information, wherein the vehicle posture information comprises a position, a direction and an angle;
The obstacle detection unit is used for generating three-dimensional point cloud data through a visual algorithm by utilizing the image data acquired by the image acquisition unit and the distance data acquired by the radar unit, and acquiring the category of the obstacle based on the convolutional neural network by utilizing the generated three-dimensional point cloud data;
and the positioning unit is used for obtaining positioning parameters of the vehicle and the cage by adopting an extended Kalman filtering or particle filtering algorithm through data of the inertial measurement unit and the sensor unit, wherein the positioning parameters are coordinate positions.
3. The access cage control system of the mining scene perception fusion technique according to claim 2, wherein:
an obstacle detection unit:
the image feature extraction subunit receives the image data acquired by the image acquisition unit and acquires multi-scale feature points of the image data by adopting a scale-invariant feature transform (SIFT) algorithm;
the distance information acquisition subunit receives the distance data acquired by the radar unit and acquires sampling data of the distance data through a multipath inhibition technology;
the data matching subunit is used for matching the multi-scale characteristic points and the sampling data by adopting an iterative closest point ICP algorithm to generate matched three-dimensional point cloud data;
the three-dimensional reconstruction subunit is used for generating a three-dimensional point cloud model according to the matched three-dimensional point cloud data based on a poisson surface reconstruction algorithm;
And the three-dimensional detection subunit is used for analyzing the three-dimensional point cloud model based on the PointNet network, outputting confidence degrees of different categories through the network classification layer, and judging the category of the obstacle according to the confidence degree threshold value.
4. The access cage control system of the mining scene perception fusion technique according to claim 1, wherein:
the calculation module comprises:
the road end computing sub-module receives the collected vehicle information and environment information through a wired network, computes the received information through a preset neural network and outputs a control instruction;
the cloud computing sub-module receives the acquired vehicle information and environment information through a wireless network, computes the received information through a preset deep learning model and a preset access cage priority and outputs a control instruction;
the control submodule controls the opening and closing of the cage according to the control instruction output by the road end calculation submodule or the cloud end calculation submodule;
the intelligent network connection judging sub-module judges whether the vehicle is an intelligent network connection vehicle or not through comprehensive detection of the vehicle;
for the intelligent network connection vehicle, a control instruction is sent to the vehicle and cage control module;
and for the non-intelligent network-connected vehicle, sending a control instruction to a cage control module.
5. The access cage control system of the mining scene perception fusion technique according to claim 4, wherein:
the intelligent networking judging submodule comprises:
the vehicle-mounted terminal signaling analysis unit is used for judging whether the identification code accords with the identification specification of the intelligent network connection vehicle or not by analyzing the identification code in the terminal signaling sent by the vehicle so as to judge whether the vehicle supports the intelligent network connection function or not;
the vehicle sensor data monitoring unit monitors the type, the number and the performance parameters of sensors installed on the vehicle and judges whether the sensor configuration meets the requirements of the intelligent network-connected vehicle on the sensors;
the communication protocol verification unit is used for receiving a communication request of the vehicle, extracting protocol data adopted by the communication request, and verifying whether the protocol data accords with a standard protocol specified by an intelligent networking technology by comparing a preset intelligent networking standard protocol library;
and the comprehensive judgment unit judges that the vehicle is an intelligent network vehicle when the judgment conditions of the vehicle-mounted terminal signaling analysis unit, the vehicle sensor data monitoring unit and the communication protocol verification unit are all met.
6. The access cage control system of the mining scene perception fusion technique according to claim 4, wherein:
The road end calculation submodule comprises:
the wired communication unit adopts an industrial Ethernet to interact information with the road side sensing module;
the wireless communication unit is used for carrying out information interaction with the cloud computing unit by adopting 5G;
the information processing unit is used for carrying out format conversion, time synchronization and data calibration pretreatment on the data acquired by the road side sensing module;
the road end calculation decision unit is used for pre-storing a decision model based on a neural network, inputting the preprocessed data and outputting an opening or closing control instruction for the cage according to the decision model;
the road end priority merging unit is used for carrying out priority merging on the output instruction of the cloud computing submodule and the output instruction of the road end computing submodule, wherein the output instruction of the cloud computing submodule has a higher priority than the output instruction of the road end computing submodule;
the road end calculation control unit sends the combined control instruction to the cage control module to control the opening or closing of the cage;
the state self-checking unit is used for detecting state data of the road end computing sub-module, which comprises the utilization rate of a processor, the memory occupation and the network communication quality, judging whether the processing state of the road end computing sub-module is abnormal, and sending an abnormality report and the state data to the cloud computing unit if the processing state of the road end computing sub-module is abnormal.
7. The access cage control system of the mining scene perception fusion technique according to claim 4, wherein:
the cloud computing submodule comprises:
the receiving unit is used for acquiring the vehicle information and the environment information acquired by the road side sensing module through a 5G communication link;
the analysis unit is used for analyzing the vehicle sensor data acquired by the road side sensing module by using a deep learning model based on a convolutional neural network and predicting the vehicle motion state of the vehicle, which comprises speed and acceleration;
the obstacle analysis unit is used for receiving the three-dimensional point cloud data of the obstacle output by the road side sensing module, carrying out cluster analysis on the three-dimensional point cloud data and outputting an obstacle avoidance track;
the cloud computing decision unit is used for carrying out track planning and outputting a vehicle suggested track by considering vehicle information, environment information, a predicted vehicle motion state and a predicted obstacle avoidance track based on the vehicle obstacle avoidance model;
the command unit generates a control command for opening or closing the cage according to the vehicle proposal track;
the priority merging unit is used for carrying out priority merging on the control instruction input by the user and the control instruction output by the instruction unit, wherein the priority of the control instruction input by the user is higher than that of the control instruction output by the cloud computing submodule;
The output unit is used for sending the combined control instruction to the road end computing sub-module;
a recording unit for storing vehicle information, environment information and control instructions by using a NoSQL database;
and the abnormal unit is used for judging whether the cloud computing unit fails or not by detecting network delay and system response timeout indexes.
8. The access cage control system of the mining scene perception fusion technique of claim 7, wherein:
the obstacle analysis unit includes:
the point cloud acquisition unit is used for receiving the three-dimensional point cloud data of the obstacle output by the road side sensing module;
the point cloud segmentation unit is used for segmenting the three-dimensional point cloud data by adopting a hierarchical clustering algorithm and extracting a point cloud set of each obstacle;
a size acquisition unit that calculates a bounding box of each point cloud set as size information of the obstacle;
a position acquisition unit for acquiring the mass center of each point cloud set by adopting a shape fitting technology as the position information of the obstacle;
the track generation unit inputs the size information and the position information of the obstacle into an obstacle avoidance Trajectory Rollout algorithm, performs dynamic track planning through an RRT algorithm, simulates obstacle avoidance movement of the vehicle and outputs an obstacle avoidance track;
wherein the shape fitting technique comprises:
Performing plane fitting on the point cloud set by adopting a least square method, and calculating a plane normal vector of the point cloud set;
projecting along the plane normal vector direction to obtain projection distribution of the point cloud in the plane normal vector direction;
the mathematical expectation of the projection distribution is calculated as the centroid coordinates of the point cloud.
9. The access cage control system of the mining scene perception fusion technique of claim 8, wherein:
the track generation unit inputs the size information and the position information of the obstacle into an obstacle avoidance Trajectory Rollout algorithm, performs dynamic track planning through an RRT algorithm, simulates obstacle avoidance movement of the vehicle, and outputs an obstacle avoidance track comprising:
a receiving unit that receives the size information of the obstacle output by the size acquiring unit and the position information of the obstacle output by the position acquiring unit;
the input unit is used for inputting the size information and the position information of the obstacle into a preset obstacle avoidance Trajectory Rollout algorithm;
the modeling unit is used for constructing a motion space model containing barrier information by using a barrier avoidance Trajectory Rollout algorithm;
the planning unit is used for searching tracks in the motion space model by using a motion planning algorithm based on a rapid-growth random tree RRT, and generating vehicle tracks meeting obstacle avoidance constraint;
And the output unit is used for outputting the vehicle track meeting the obstacle avoidance constraint as the obstacle avoidance track.
10. A method for controlling an access cage of a mining scene perception fusion technology comprises the following steps:
a road side sensing step of acquiring running vehicle information and environment information by using an image acquisition unit, a radar unit, an inertial measurement unit, a sensor unit and a positioning unit;
a road end calculation step, namely analyzing the acquired vehicle information and environment information through a preset neural network model to generate a preliminary control instruction for the cage;
cloud computing, namely analyzing vehicle information and environment information through a preset model based on deep learning, and generating a control instruction for the cage by combining with planning of obstacle avoidance tracks;
the cloud computing step comprises the following steps:
analyzing obstacle information to generate an obstacle avoidance track, wherein the size and position information of the obstacle are analyzed by using a clustering algorithm, bounding box detection and centroid fitting, and the obstacle avoidance track is generated based on the obstacle avoidance algorithm;
judging whether the vehicle is an intelligent network-connected vehicle or not, judging according to signaling, sensor configuration and communication protocol of the vehicle, and generating a control instruction in a targeted manner;
A step of merging the instruction priority, wherein the control instruction generated in the cloud computing step and the control instruction generated in the road end computing step are merged according to a rule of the priority of the cloud instruction to form a final control instruction;
and a cage control step, namely driving the cage to execute opening or closing operation according to the finally formed control instruction.
CN202311460434.5A 2023-11-06 2023-11-06 System and method for controlling cage entering and exiting through underground mining scene perception fusion technology Active CN117201567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311460434.5A CN117201567B (en) 2023-11-06 2023-11-06 System and method for controlling cage entering and exiting through underground mining scene perception fusion technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311460434.5A CN117201567B (en) 2023-11-06 2023-11-06 System and method for controlling cage entering and exiting through underground mining scene perception fusion technology

Publications (2)

Publication Number Publication Date
CN117201567A true CN117201567A (en) 2023-12-08
CN117201567B CN117201567B (en) 2024-02-13

Family

ID=88994609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311460434.5A Active CN117201567B (en) 2023-11-06 2023-11-06 System and method for controlling cage entering and exiting through underground mining scene perception fusion technology

Country Status (1)

Country Link
CN (1) CN117201567B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180141565A1 (en) * 2016-11-23 2018-05-24 General Electric Company Vehicle control systems and methods
CN212177217U (en) * 2020-04-26 2020-12-18 青岛英驰斯仪自动化科技有限公司 Mine car operation monitoring system
CN113467346A (en) * 2021-08-11 2021-10-01 中国矿业大学 Automatic driving robot for underground railway vehicle and control method thereof
CN115321298A (en) * 2022-09-09 2022-11-11 中信重工开诚智能装备有限公司 Mine cage overload retention detection device and method
CN116630936A (en) * 2023-05-22 2023-08-22 青岛慧拓智能机器有限公司 Obstacle sensing system and method for underground unmanned

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Han: "Research on Coal Mine Safety Management Information System Based on the Internet of Things" (基于物联网的煤矿安全管理信息系统研究), Coal Economic Research (煤炭经济研究), no. 03

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117389937A (en) * 2023-12-11 2024-01-12 上海建工一建集团有限公司 Calculation method of obstacle avoidance data of vehicle, computer and readable storage medium
CN117389937B (en) * 2023-12-11 2024-03-08 上海建工一建集团有限公司 Calculation method of obstacle avoidance data of vehicle, computer and readable storage medium

Also Published As

Publication number Publication date
CN117201567B (en) 2024-02-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant