CN114803860A - Underground monorail crane unmanned driving system and method based on machine vision - Google Patents
Classifications

- B66C13/44: Cranes; control systems or devices; applications of remote-control devices; electrical transmitters
- B66C13/16: Cranes; applications of indicating, registering, or weighing devices
- G06N3/045: Neural networks; combinations of networks
- G06V10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/82: Image or video recognition or understanding using neural networks
- G06V20/17: Terrestrial scenes taken from planes or by drones
Abstract
The invention discloses a machine-vision-based unmanned driving system for an underground monorail crane. A vehicle-mounted camera collects road-condition video while the monorail crane travels, and a millimeter-wave radar measures the distance to obstacles encountered en route. A vehicle-mounted processor generates a region of interest (ROI) converted from the radar data, and feeds the road-condition video into an improved Tiny YOLOv3 neural network to obtain a second, network-generated ROI. The two ROIs are associatively matched to determine the obstacle type, the obstacle position, and the relative speed between the obstacle and the monorail crane. These results are sent to a remote monitoring subsystem, which generates control instructions and remotely controls the travel of the monorail crane.
Description
Technical Field
The invention relates to the technical field of machine vision, and in particular to a machine-vision-based unmanned driving system and method for an underground monorail crane.
Background
The monorail crane is auxiliary transport equipment that works alongside other equipment in underground coal-mine haulage. It can perform demanding operations in the complex underground environment, is easy to maintain, and offers high transport safety and flexibility, making it an indispensable component of modern automated coal-mine transportation.
The monorail crane operates in a harsh, high-risk environment. Achieving unmanned operation therefore has great application prospects and strategic value: it can reduce or avoid major production and transport accidents in a mine, cut transport costs, and improve the efficiency of mine material haulage.
Environmental perception is a key link in underground unmanned driving: through a vision system, a vehicle can directly acquire information such as the types of obstacles in its surroundings. Machine vision and computer vision overlap considerably in image processing, as both must extract target features from images; machine vision, however, usually denotes an engineering-oriented system covering image acquisition, processing, output, and control, with an emphasis on solving practical industrial problems. With the development of edge computing, lightweight neural networks can now be deployed on embedded terminals, so target recognition on surveillance video can be completed at the camera front end. Obstacle recognition during travel can thus be performed within the vision system of the underground locomotive itself, and the above-ground monitoring center needs only modest computing resources to generate driving control instructions.
At present, obstacle detection by a monorail crane on a mine track suffers from low accuracy, the collected data must be transmitted above ground for processing, and the equipment is complex and costly.
Disclosure of Invention
An embodiment of the invention provides a machine-vision-based unmanned driving system for an underground monorail crane, comprising:
a vehicle-mounted camera for acquiring road-condition video data while the monorail crane travels;
a millimeter-wave radar for measuring the distance to obstacles encountered by the monorail crane during travel;
a vehicle-mounted processor that performs the following operations:
receiving the distance data sent by the millimeter-wave radar and, with reference to the road-condition video data sent by the vehicle-mounted camera, generating a region of interest (ROI) converted from the radar data;
inputting the road-condition video data into an improved Tiny YOLOv3 neural network to obtain a neural-network-processed ROI, wherein in the improved network the feature-extraction backbone of the basic Tiny YOLOv3 structure is converted from convolution-plus-pooling layers into downsampling residual modules whose bodies are formed by depth-separable convolution units;
associatively matching the radar-converted ROI with the neural-network-processed ROI to obtain the obstacle type, the obstacle position, and the relative speed between the obstacle and the monorail crane;
and a remote monitoring subsystem that generates control instructions according to the obstacle type, the obstacle position, and the relative speed, and remotely controls the travel of the monorail crane.
Preferably, the system further comprises:
a positioning module for locating the position of the monorail crane underground.
Preferably, the system further comprises:
a wifi module that packs the obstacle type, the obstacle position, the relative speed, and the monorail crane's positioning data into a data packet and sends it to an underground wifi base station; the base station accesses a switch, which forwards the packet to the remote monitoring subsystem over an industrial Ethernet ring network.
Preferably, the remote monitoring subsystem comprises:
a communication server that receives and parses the data packets sent by the wifi module;
a video display unit that displays the parsed data; and
a dispatch management unit that analyzes the data packets with dispatch management software, generates remote control instructions for the monorail crane, and sends them to the vehicle-mounted processor (5) through the communication server.
Preferably, the system further comprises:
an image preprocessing module that performs image enhancement on the video data with the multi-scale Retinex (MSR) algorithm and inputs the processed images into the vehicle-mounted processor, the output of the MSR algorithm being:

F_MSR(x, y) = Σ_{n=1}^{N} K_n · F_SSR,n(x, y)

where F_MSR(x, y) is the image processing result of the MSR algorithm, K_n is the weight coefficient at each scale, and N is the total number of scales, typically N = 3 with K_1 = K_2 = K_3 = 1/3; F_SSR,n(x, y) is the image processing result of the single-scale Retinex (SSR) algorithm at scale n:

F_SSR(x, y) = log I(x, y) - log[(G * I)(x, y)]

where I(x, y) is the input image, G(x, y) is the low-pass Gaussian filter function, and * denotes convolution.
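The MSR enhancement above can be sketched in NumPy. This is an illustrative implementation under the stated defaults (N = 3 scales with equal weights K_n = 1/N); the particular scale values used below are common Retinex choices, not values specified in this document:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3*sigma and normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable low-pass Gaussian filter, i.e. (G * I)(x, y)."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def ssr(img, sigma, eps=1e-6):
    """Single-scale Retinex: F_SSR = log I - log(G * I)."""
    return np.log(img + eps) - np.log(gaussian_blur(img, sigma) + eps)

def msr(img, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale Retinex: equally weighted sum of SSR over N scales."""
    return sum(ssr(img, s) for s in sigmas) / len(sigmas)
```

In practice the MSR output is rescaled (for example min-max normalized) back to displayable pixel values before being fed to the detector.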
Preferably, in the improved Tiny YOLOv3 network structure, a plurality of downsampling residual modules extract features from the image-enhanced, preprocessed frames and generate the ROI regions of the video image. The depth-separable convolution unit in each downsampling residual module comprises:
a depthwise convolution layer with a 3 × 3 kernel, batch normalization, and a Leaky ReLU activation function;
a pointwise convolution layer with a 1 × 1 kernel, batch normalization, and an H-Swish activation function.
The Leaky ReLU after the 3 × 3 depthwise layer gives all negative inputs a non-zero slope, avoiding unresponsive ("dead") neurons;
the H-Swish after the 1 × 1 pointwise layer reduces computation time while balancing accuracy and speed.
The application also provides a machine-vision-based unmanned driving method for an underground monorail crane, comprising the following steps:
acquiring road-condition video data while the monorail crane travels;
measuring the distance to obstacles encountered by the monorail crane during travel;
receiving the distance data and, with reference to the road-condition video data, generating a region of interest (ROI) converted from the radar data;
inputting the road-condition video data into an improved Tiny YOLOv3 neural network to obtain a neural-network-processed ROI, wherein in the improved network the feature-extraction backbone of the basic Tiny YOLOv3 structure is converted from convolution-plus-pooling layers into downsampling residual modules whose bodies are formed by depth-separable convolution units;
associatively matching the radar-converted ROI with the neural-network-processed ROI to obtain the obstacle type, the obstacle position, and the relative speed between the obstacle and the monorail crane;
and generating control instructions according to the obstacle type, the obstacle position, and the relative speed, and remotely controlling the travel of the monorail crane.
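The associative-matching step is not detailed above; one common realization, sketched here purely as an assumption, greedily pairs each radar-derived ROI with the detector ROI it overlaps most, using intersection-over-union (IoU) and a hypothetical threshold:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(radar_rois, net_rois, threshold=0.3):
    """Greedy one-to-one matching of radar ROIs to detector ROIs.

    radar_rois: list of (box, distance_m, relative_speed_mps) from the radar
    net_rois:   list of (box, class_label) from the neural network
    Returns fused obstacles as (class_label, distance_m, relative_speed_mps).
    """
    fused, used = [], set()
    for r_box, dist, speed in radar_rois:
        best_j, best_score = None, threshold
        for j, (n_box, label) in enumerate(net_rois):
            if j in used:
                continue
            score = iou(r_box, n_box)
            if score > best_score:
                best_j, best_score = j, score
        if best_j is not None:
            used.add(best_j)
            fused.append((net_rois[best_j][1], dist, speed))
    return fused
```

The fused tuples carry the detector's class label together with the radar's range and relative speed, which is exactly the (type, position, speed) triple the method reports.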
Compared with the prior art, the machine-vision-based unmanned driving system for an underground monorail crane provided by the embodiments of the invention has the following beneficial effects:
1. The MSR algorithm and the improved Tiny YOLOv3 neural network model are used to process the video images; performing target recognition after improving image quality effectively raises detection accuracy.
2. Video target recognition runs on the monorail crane's vehicle-mounted processor, so video data need not be sent to an above-ground server for processing, reducing the required server computing resources and cost; at the same time, fusing camera and millimeter-wave radar information yields accurate obstacle type and position information.
3. The unmanned system greatly reduces the number of drivers and escort workers needed during monorail crane transport; environmental perception data are analyzed on site and driving instructions are issued remotely, further improving the real-time performance of unmanned operation.
Drawings
FIG. 1 is a schematic view of the overall structure of the present invention;
FIG. 2 is a block diagram of a locomotive control subsystem of the present invention;
FIG. 3 is a block diagram of a remote monitoring subsystem of the present invention;
FIG. 4 is a diagram of the information fusion structure of the vehicle-mounted camera and the millimeter wave radar of the invention;
FIG. 5 is a block diagram of a downsampling residual block of the present invention;
FIG. 6 is a block diagram of a depth separable convolution element of the present invention;
FIG. 7 is a diagram of the improved Tiny YOLOv3 network architecture of the present invention.
In the figures: 1-vehicle-mounted camera, 2-millimeter-wave radar, 3-positioning module, 4-wifi module, 5-vehicle-mounted processor, 6-communication server, 7-video display unit, 8-dispatch management unit.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
As shown in FIGS. 1-7, the machine-vision-based unmanned driving system for an underground monorail crane comprises a locomotive control subsystem and a remote monitoring subsystem. The locomotive control subsystem comprises a vehicle-mounted camera 1, a millimeter-wave radar 2, a positioning module 3, a wifi module 4, and a vehicle-mounted processor 5, all of which are electrically connected. The remote monitoring subsystem comprises a communication server 6, a video display unit 7, and a dispatch management unit 8, connected through a network.
The specific implementation process comprises the following steps:
the vehicle-mounted camera 1 shoots the road condition of the monorail crane in the process of driving, and transmits the image to the vehicle-mounted processor 5.
The millimeter wave radar 2 carries out target ranging on obstacles in the process of monorail crane traveling and transmits acquired data to the vehicle-mounted processor 5.
The positioning module 3 positions the position of the monorail crane underground and transmits the positioned information to the vehicle-mounted processor 5.
The vehicle-mounted processor 5 aligns the data acquired by the vehicle-mounted camera 1 and the millimeter-wave radar 2 in space and time. It performs image-enhancement preprocessing and neural-network processing on the camera's video data, converts the radar data into an ROI (region of interest) within the camera image, and then associatively matches that radar-converted ROI with the ROI generated by the neural network to obtain the obstacle type, the obstacle position, and the relative speed between the obstacle and the monorail crane.
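Converting radar data into an ROI in the camera image presupposes the spatial alignment described above. A minimal sketch of one common approach is a pinhole-camera projection with calibrated intrinsics K and radar-to-camera extrinsics (R, t); the calibration values and the assumed object size below are illustrative, since this document does not specify the conversion procedure:

```python
import numpy as np

def project_radar_point(p_radar, K, R, t):
    """Project a 3-D radar-frame point into pixel coordinates (u, v)
    using a pinhole model: [u, v, 1]^T ~ K (R p + t)."""
    p_cam = R @ np.asarray(p_radar, dtype=float) + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def radar_roi(p_radar, K, R, t, obj_size=(1.0, 1.8)):
    """Centre a rectangular ROI on the projected point, scaled by an
    assumed physical object size (width, height in metres) divided by
    the point's camera-frame depth (perspective scaling)."""
    p_cam = R @ np.asarray(p_radar, dtype=float) + t
    u, v = project_radar_point(p_radar, K, R, t)
    f = K[0, 0]                     # focal length in pixels
    w = obj_size[0] * f / p_cam[2]
    h = obj_size[1] * f / p_cam[2]
    return (u - w / 2, v - h / 2, u + w / 2, v + h / 2)
```

For example, a point 10 m straight ahead of a camera with a 500-pixel focal length projects to the principal point and, for an assumed 1.0 × 1.8 m obstacle, yields a 50 × 90 pixel box.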
Preferably, when the vehicle-mounted processor 5 performs image-enhancement preprocessing on the video data, the original video image is used as the input of the multi-scale Retinex (MSR) algorithm, which outputs an image with improved contrast and brightness:

F_MSR(x, y) = Σ_{n=1}^{N} K_n · F_SSR,n(x, y)

where F_MSR(x, y) is the image processing result of the MSR algorithm, K_n is the weight coefficient at each scale, and N is the total number of scales, typically N = 3 with K_1 = K_2 = K_3 = 1/3; F_SSR,n(x, y) is the image processing result of the single-scale Retinex (SSR) algorithm at scale n:

F_SSR(x, y) = log I(x, y) - log[(G * I)(x, y)]

where I(x, y) is the input image, G(x, y) is the low-pass Gaussian filter function, and * denotes convolution.
Preferably, when the vehicle-mounted processor 5 applies the neural-network algorithm to the video data, the improved Tiny YOLOv3 network serves as the target-recognition model. The feature-extraction backbone of the basic Tiny YOLOv3 structure is changed from convolution-plus-pooling layers into downsampling residual modules. Each downsampling residual module comprises a convolution layer with a 3 × 3 kernel, a convolution layer with a 1 × 1 kernel, and a depth-separable convolution unit; these sub-blocks extract features from the module's input in sequence, and the output of the 3 × 3 convolution layer is feature-fused with the output of the depth-separable convolution unit to form the module's output. In the improved network, a plurality of downsampling residual modules complete the feature extraction on the image-enhanced, preprocessed frames and generate the ROI regions of the video image.
Preferably, the depth-separable convolution unit in the downsampling residual module comprises a depthwise convolution layer with a 3 × 3 kernel, batch normalization, and a Leaky ReLU activation function, followed by a pointwise convolution layer with a 1 × 1 kernel, batch normalization, and an H-Swish activation function. The Leaky ReLU after the 3 × 3 depthwise layer gives all negative inputs a non-zero slope, avoiding unresponsive neurons; the H-Swish after the 1 × 1 pointwise layer reduces computation time while balancing accuracy and speed.
The communication server 6 receives the data packets, the video display unit 7 displays the data, and the dispatch management unit 8 analyzes the packets with dispatch management software to generate remote control instructions for the monorail crane. The communication server 6 sends these instructions to the vehicle-mounted processor 5, which drives the monorail crane's frequency converter accordingly.
Although the embodiments of the present invention have been disclosed above for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, and substitutions are possible without departing from the scope and spirit of the invention.
Claims (7)
1. A machine-vision-based unmanned driving system for an underground monorail crane, comprising:
a vehicle-mounted camera (1) for acquiring road-condition video data while the monorail crane travels;
a millimeter-wave radar (2) for measuring the distance to obstacles encountered by the monorail crane during travel;
a vehicle-mounted processor (5), provided on the monorail crane, for:
generating a region of interest (ROI) converted from the radar ranging data, according to the distance data sent by the millimeter-wave radar (2) and the road-condition video data sent by the vehicle-mounted camera (1);
inputting the road-condition video data into an improved Tiny YOLOv3 neural network to obtain a neural-network-processed ROI, wherein in the improved Tiny YOLOv3 neural network the convolution and pooling layers of the feature-extraction backbone in the basic Tiny YOLOv3 structure are converted into downsampling residual modules whose bodies are formed by depth-separable convolution units; and
associatively matching the radar-converted ROI with the neural-network-processed ROI to obtain the obstacle type, the obstacle position, and the relative speed between the obstacle and the monorail crane;
and a remote monitoring subsystem for generating control instructions according to the obstacle type, the obstacle position, and the relative speed, and remotely controlling the travel of the monorail crane.
2. The machine-vision-based unmanned driving system for an underground monorail crane according to claim 1, further comprising:
a positioning module (3) for locating the position of the monorail crane underground.
3. The machine-vision-based unmanned driving system for an underground monorail crane according to claim 2, further comprising:
a wifi module (4) for packing the obstacle type, the obstacle position, the relative speed, and the monorail crane's positioning data into a data packet and sending it to an underground wifi base station, the base station accessing a switch that forwards the packet to the remote monitoring subsystem over an industrial Ethernet ring network.
4. The machine-vision-based unmanned driving system for an underground monorail crane according to claim 3, wherein the remote monitoring subsystem comprises:
a communication server (6) for receiving and parsing the data packets sent by the wifi module (4);
a video display unit (7) for displaying the parsed data; and
a dispatch management unit (8) for analyzing the data packets with dispatch management software, generating remote control instructions for the monorail crane, and sending them to the vehicle-mounted processor (5) through the communication server (6).
5. The machine-vision-based unmanned driving system for an underground monorail crane according to claim 1, further comprising:
an image preprocessing module for performing image enhancement on the video data with the MSR algorithm and inputting the processed images into the vehicle-mounted processor (5), the output of the MSR algorithm being:

F_MSR(x, y) = Σ_{n=1}^{N} K_n · F_SSR,n(x, y)

where F_MSR(x, y) is the image processing result of the MSR algorithm, K_n is the weight coefficient at each scale, and N is the total number of scales, typically N = 3 with K_1 = K_2 = K_3 = 1/3; F_SSR,n(x, y) is the image processing result of the single-scale Retinex (SSR) algorithm at scale n:

F_SSR(x, y) = log I(x, y) - log[(G * I)(x, y)]

where I(x, y) is the input image, G(x, y) is the low-pass Gaussian filter function, and * denotes convolution.
6. The machine-vision-based unmanned driving system for an underground monorail crane according to claim 1, wherein in the improved Tiny YOLOv3 network a plurality of downsampling residual modules extract features from the image-enhanced, preprocessed frames and generate the ROI regions of the video image, and the depth-separable convolution unit in each downsampling residual module comprises:
a depthwise convolution layer with a 3 × 3 kernel, batch normalization, and a Leaky ReLU activation function; and
a pointwise convolution layer with a 1 × 1 kernel, batch normalization, and an H-Swish activation function;
wherein the Leaky ReLU after the 3 × 3 depthwise layer gives all negative inputs a non-zero slope, avoiding unresponsive neurons,
and the H-Swish after the 1 × 1 pointwise layer reduces computation time while balancing accuracy and speed.
7. A machine-vision-based unmanned driving method for an underground monorail crane, comprising the following steps:
acquiring road-condition video data while the monorail crane travels;
measuring the distance to obstacles encountered by the monorail crane during travel;
receiving the distance data and, with reference to the road-condition video data, generating a region of interest (ROI) converted from the radar data;
inputting the road-condition video data into an improved Tiny YOLOv3 neural network to obtain a neural-network-processed ROI, wherein in the improved Tiny YOLOv3 neural network the convolution and pooling layers of the feature-extraction backbone in the basic Tiny YOLOv3 structure are converted into downsampling residual modules whose bodies are formed by depth-separable convolution units;
associatively matching the radar-converted ROI with the neural-network-processed ROI to obtain the obstacle type, the obstacle position, and the relative speed between the obstacle and the monorail crane;
and generating control instructions according to the obstacle type, the obstacle position, and the relative speed, and remotely controlling the travel of the monorail crane.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210451027.7A CN114803860A (en) | 2022-04-24 | 2022-04-24 | Underground monorail crane unmanned driving system and method based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114803860A true CN114803860A (en) | 2022-07-29 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115285187A (en) * | 2022-10-10 | 2022-11-04 | 山西易联智控科技有限公司 | Unmanned control system and method for monorail crane |
CN117228536A (en) * | 2023-11-14 | 2023-12-15 | 常州海图信息科技股份有限公司 | Intelligent analysis system and method for monorail crane |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |