CN112214019B - Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal - Google Patents

Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal

Info

Publication number
CN112214019B
CN112214019B (application CN202010993315.6A)
Authority
CN
China
Prior art keywords
map
point cloud
data
inspection equipment
unmanned inspection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010993315.6A
Other languages
Chinese (zh)
Other versions
CN112214019A (en)
Inventor
赵光静
洪建光
邵炜平
王志强
杨鸿珍
吕舟
范超
贺琛
张辰
史俊潇
李信
钱思源
王文龙
任白杨
龙强
蓝天
石帅
杨怀丽
李宇翔
彭卉
来骥
彭柏
王艺霏
郑汉杰
苏素燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Siji Location Service Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
State Grid Zhejiang Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Jibei Electric Power Co Ltd
Original Assignee
State Grid Siji Location Service Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
State Grid Zhejiang Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Jibei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Siji Location Service Co ltd, State Grid Corp of China SGCC, State Grid Information and Telecommunication Co Ltd, State Grid Zhejiang Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Jibei Electric Power Co Ltd filed Critical State Grid Siji Location Service Co ltd
Priority to CN202010993315.6A priority Critical patent/CN112214019B/en
Publication of CN112214019A publication Critical patent/CN112214019A/en
Application granted granted Critical
Publication of CN112214019B publication Critical patent/CN112214019B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/027 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising inertial navigation means, e.g. azimuth detector
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
    • G05D1/028 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention belongs to the technical field of unmanned inspection equipment control, and discloses a non-blind-area intelligent feedback control system, method, and terminal for unmanned inspection equipment. An indoor positioning system acquires the motion state and surrounding environment of the unmanned inspection equipment in real time; a navigation device performs non-blind-area outdoor navigation and positioning of the unmanned inspection equipment; an indoor/outdoor positioning switching system performs seamless switching between indoor and outdoor positioning; the indoor positioning system performs non-blind-area navigation of the unmanned inspection equipment in a low-density information-reading-equipment environment; and a feedback control terminal controls the motion trajectory of the unmanned inspection equipment in the time and space dimensions. The intelligent feedback control terminal achieves indoor and outdoor non-blind-area high-precision feedback control of the unmanned inspection equipment: its trajectory control precision is kept within the meter level, and the operation delay time precision is within 100 ns.

Description

Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal
Technical Field
The invention belongs to the technical field of unmanned inspection equipment control, and particularly relates to a non-blind area intelligent feedback control system of unmanned inspection equipment.
Background
At present, with the growth of power grid scale and the rise of voltage levels, the requirements on power supply safety and reliability have become more stringent, and normal operation of the transformer substation has become a key link in guaranteeing the power supply safety of the power system. The manual inspection mode, in which operating power transformation equipment is inspected by manual observation and manual recording, suffers from high labor intensity, low working efficiency, and unstable inspection quality, and severe weather also endangers inspection personnel. Replacing manual inspection with an unmanned intelligent inspection system has therefore become the necessary direction for substation/power station inspection development. As internet technology becomes widespread, applying new technologies and modes in this setting can improve power transmission efficiency during equipment operation, effectively reduce the maintenance burden on substation staff, and keep equipment operating normally and safely. An intelligent inspection system can inspect the substation strictly and in detail, realize automatic running, automatic navigation, intelligent judgment, and intelligent feedback, support the long-term stable development of substation work, and raise the intelligence and informatization level of the power industry.
The development of unmanned inspection equipment depends on navigation and positioning technology, artificial-intelligence image recognition, network communication, industrial manufacturing, and the like; among these, reliable trackless navigation and positioning is the foundation of unmanned inspection and one of its most essential technologies. Common positioning modes include satellite positioning, radio-frequency positioning, 2D laser positioning, and visual positioning. At present, unmanned inspection equipment usually adopts a single mode or fuses only a few of them, which often leads to poor synchronization precision, low navigation and positioning precision, restricted usable environments, high power consumption, and a tendency to lose its bearings.
Through the above analysis, the problems and defects of the prior art are as follows: unmanned inspection equipment usually adopts a single positioning mode or fuses only a few of them, and often suffers from poor synchronization precision, low navigation and positioning precision, restricted usable environments, high power consumption, and a tendency to lose its bearings.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an intelligent feedback control system, method and terminal for unmanned inspection equipment without blind areas.
The invention is realized in such a way that the unmanned inspection equipment non-blind area intelligent feedback control method comprises the following steps:
Step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing a Beidou positioning technology in combination with an inertial navigation algorithm; based on the acquired motion state and surrounding environment of the unmanned inspection equipment, performing seamless switching between indoor and outdoor positioning by utilizing an inertial navigation positioning technology;
before the unmanned inspection equipment operates, a space must be created for it so that it can judge its own position in real time based on the spatial information, answering the question "where am I?". A high-precision map is constructed to provide basic map data for the intelligent driving system and help the unmanned inspection system "see" the road clearly; the positioning data recognized by each sensor is matched against the high-precision map data to determine the unmanned inspection position and the road conditions the vehicle will face;
and thirdly, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of unmanned inspection equipment in a low-density information reading equipment environment.
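The indoor/outdoor switching logic described in steps two and three can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function name `select_position`, the fix dictionaries, and the thresholds `MIN_SATELLITES` and `MAX_HDOP` are all assumptions for illustration. The key idea is only that inertial dead reckoning bridges the transition, so no blind area appears between the Beidou (outdoor) and radio-frequency (indoor) modes.

```python
# Illustrative sketch of seamless indoor/outdoor positioning selection.
# Thresholds and field names are assumed, not taken from the patent.

MIN_SATELLITES = 6   # assumed: below this, the Beidou fix is unreliable
MAX_HDOP = 2.5       # assumed horizontal dilution-of-precision limit

def select_position(beidou_fix, rf_fix, inertial_fix):
    """Pick the best available fix; inertial navigation bridges the gap
    during the indoor/outdoor transition so there is no blind area."""
    if beidou_fix and beidou_fix["satellites"] >= MIN_SATELLITES \
            and beidou_fix["hdop"] <= MAX_HDOP:
        return ("outdoor", beidou_fix["xy"])   # outdoor: Beidou + inertial
    if rf_fix is not None:
        return ("indoor", rf_fix["xy"])        # indoor: RF + inertial
    return ("inertial", inertial_fix["xy"])    # transition: dead reckoning

# Example: a weak satellite fix with an RF reading falls back to indoor mode.
mode, xy = select_position(
    beidou_fix={"satellites": 3, "hdop": 6.0, "xy": (10.0, 4.0)},
    rf_fix={"xy": (10.2, 4.1)},
    inertial_fix={"xy": (10.1, 4.05)},
)
print(mode, xy)  # indoor (10.2, 4.1)
```

In practice the switch would also hysterese (hold a mode for a short time) to avoid rapid toggling near doorways; that detail is omitted here.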
Further, the unmanned inspection equipment non-blind area intelligent feedback control method further comprises the following steps: and controlling the motion trail of the unmanned inspection equipment from time and space dimensions by using a feedback control terminal of the integrated wireless communication module, and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level.
Furthermore, the unmanned inspection equipment non-blind-area intelligent feedback control method uses a highly integrated, high-time-precision synchronous controller for synchronized multi-sensor data acquisition and control, comprising the following steps:
(1) Establishing a high-precision clock reference: a high-stability quartz crystal serves as the clock source of the synchronizer, and the clock is calibrated against the PPS pulse and NMEA data of a satellite positioning chip, establishing a high-precision time reference over the whole measurement time range;
(2) Realizing multi-sensor synchronization: according to the characteristics of each sensor, active synchronization is adopted for the inertial navigation unit, the camera, and similar sensors, while time-service synchronization is adopted for the three-dimensional laser radar; the data of each sensor is acquired purely in hardware and stamped with an accurate time tag as a synchronization alignment mark, achieving high-precision synchronization of the raw multi-sensor data;
(3) Completing the schematic and PCB design and debugging of the synchronous controller's hardware circuit: designing the multi-sensor data acquisition circuits for the inertial navigation unit, differential GPS, three-dimensional laser radar, and the like, as well as the gigabit Ethernet, USB 3.0, USB 2.0, mSATA, and TF-card high-speed interface and memory circuits; and debugging the whole hardware circuit to acquire, transmit, and store the data of the multiple sensors;
(4) Completing the programming and debugging of the FPGA programs that control each sensor and synchronously collect data: with an FPGA chip as the carrier, combined with the external hardware circuit, a high-precision time reference is established in a hardware description language; SPI interface control and inertial navigation data acquisition are designed; a UART interface is designed for synchronous timing of the three-dimensional laser radar, realizing instruction interaction between the FPGA and the TX2 and transmission of the synchronized encoder and camera data; and a chip control program converts parallel data into USB serial data, realizing high-speed transfer of large volumes of raw sensor data between the FPGA and the TX2. The synchronous controller can thus provide high-precision synchronized raw data from multiple sensors, including satellite positioning, three-dimensional laser radar, inertial navigation unit, and camera, in real time.
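The clock-discipline idea in step (1) can be modeled in a few lines. This is a software sketch of the principle only, not the patent's FPGA logic: the class name `SyncClock`, the assumed 10 MHz nominal crystal frequency, and the example drift figure are all illustrative. At each 1 Hz PPS edge the true ticks-per-second rate of the crystal is measured against the UTC second delivered in the NMEA sentence, and every later counter value is converted into a calibrated time tag.

```python
# Illustrative sketch: disciplining a free-running crystal counter with PPS
# so that every sensor sample receives an accurate time tag. The nominal
# frequency and the drift in the example are assumptions.

NOMINAL_HZ = 10_000_000  # assumed 10 MHz crystal

class SyncClock:
    def __init__(self):
        self.ticks_per_s = NOMINAL_HZ  # estimate refined at each PPS edge
        self.last_pps_tick = 0
        self.last_pps_utc = 0.0

    def on_pps(self, tick, utc_seconds):
        """At each PPS edge, measure the true ticks-per-second rate."""
        if self.last_pps_utc:
            dt = utc_seconds - self.last_pps_utc
            self.ticks_per_s = (tick - self.last_pps_tick) / dt
        self.last_pps_tick, self.last_pps_utc = tick, utc_seconds

    def tag(self, tick):
        """Convert a raw counter value into a calibrated UTC time tag."""
        return self.last_pps_utc + (tick - self.last_pps_tick) / self.ticks_per_s

clk = SyncClock()
clk.on_pps(tick=0, utc_seconds=100.0)           # first PPS edge (UTC from NMEA)
clk.on_pps(tick=10_000_050, utc_seconds=101.0)  # crystal runs 50 ticks/s fast
print(clk.tag(10_500_002))                      # roughly half a second later
```

In the real controller this runs in the FPGA fabric; the point of the sketch is only the two-step structure: rate estimation at PPS edges, then interpolation between edges.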
Furthermore, before the unmanned inspection equipment runs, a space is created for it, and its position is judged in real time based on the spatial information; the positioning data recognized by each sensor is matched against the high-precision map data to determine the unmanned inspection position and the road conditions the vehicle will face. The high-precision map serves as the base map for navigation and positioning, is indispensable basic data for navigating the unmanned inspection equipment, and assists in achieving high-precision positioning, planning and decision-making, and control feedback. The method specifically comprises the following steps:
Map types:
(1) Original map: the original map format is rmap, which is used for making the navigation map and the dense map;
(2) Navigation map: the navigation map format is hmap, which is used for positioning and navigation of the robot;
(3) Dense map: the dense map format is txt or pcd, which is used for making the high-precision map and the global map;
(4) High-precision map: the high-precision map format is csv, which is used for path planning of the robot;
(5) Global map: the global map is a grid map in png format, which is used for global planning and local obstacle avoidance;
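The five map types above can be summarized in one small table-like structure. This is a hypothetical Python mapping for reference only; the field names are illustrative and are not part of the SLAMmapping software's API.

```python
# Summary of the five map types listed above (illustrative structure;
# field names are assumptions, not a SLAMmapping API).

MAP_TYPES = {
    "original":       {"formats": ["rmap"],       "used_for": "making the navigation and dense maps"},
    "navigation":     {"formats": ["hmap"],       "used_for": "robot positioning and navigation"},
    "dense":          {"formats": ["txt", "pcd"], "used_for": "making the high-precision and global maps"},
    "high_precision": {"formats": ["csv"],        "used_for": "robot path planning"},
    "global":         {"formats": ["png"],        "used_for": "global planning and local obstacle avoidance"},
}

def formats_for(map_type):
    """Return the file formats associated with a map type."""
    return MAP_TYPES[map_type]["formats"]

print(formats_for("dense"))  # ['txt', 'pcd']
```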
Map building:
(1) Survey planning;
(2) Data acquisition: a suitable acquisition platform and mapping equipment are used to collect data in the mapping area;
(3) Data processing: after data acquisition, the SLAMmapping software is opened and the raw data is imported for automatic processing;
(4) Data editing: mainly performing loop-closure operations on the data and BA and graph optimization on the point cloud;
(5) Data export: after editing, the navigation map and the dense map can be exported; the dense map is a point cloud in a standard format and can be inspected with SLAMmapping or third-party point cloud software;
(6) After export, the high-precision map and the global map can be made directly in SLAMmapping;
Map editing:
editing the original map data mainly consists of performing loop-closure operations on the data and BA and graph optimization on the point cloud;
after the original map is edited, the navigation map and the dense map can be exported; the dense map is a point cloud in a standard format and can be inspected with SLAMmapping or third-party point cloud software;
High-precision map making and editing:
the dense map produced in the previous step (pcd or txt format) is opened, the high-precision map is made with SLAMmapping, and the robot's walking path is planned and edited on the point cloud;
Global map making and editing: with SLAMmapping, the software automatically deletes ground points and noise points; after the virtual wall is added, the global map is generated;
3D perception integrates semantic extraction and segmentation based on large-scale scene point cloud data to build the foundation of a three-dimensional semantic map. A high-precision three-dimensional point cloud semantic map is the core of accurate path planning and scene reconstruction; based on the high-precision map, perception of static obstacles and of various dynamic obstacles is realized, and the distance, direction, and speed of each obstacle are calculated in real time, providing a safe and reliable perception and obstacle-avoidance scheme for the unmanned inspection equipment.
For navigation, the unmanned inspection equipment detects obstacles in real time with an improved Euclidean clustering algorithm: the point cloud data is preprocessed, and the ground and non-ground point clouds are separated by a ground-gradient separation algorithm; obstacle clusters are detected in the non-ground point cloud using different clustering distance thresholds, and are distinguished with cuboid bounding-box markers; the inherent adjacent-point spacing of each ground laser beam is compared with the actual distance between two adjacent points, and, combined with the adjacent-point angle difference and point cloud clustering, the passable area is extracted; finally, the obstacle detection and passable-area extraction results are merged, and the passability of the passable area is jointly verified. By contrast, the traditional laser radar algorithm senses using only the current single-frame radar point cloud.
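The two core steps of this paragraph, ground separation by gradient and Euclidean clustering of the remaining points, can be sketched in pure Python. This is a minimal sketch under stated assumptions, not the patent's improved algorithm: the slope and distance thresholds are invented for the example, the ground test uses a simple height-over-range ratio rather than per-beam spacing, and the clustering is a plain greedy single-linkage pass without the range-adaptive thresholds the text describes.

```python
# Illustrative sketch (assumed thresholds): ground/non-ground separation
# followed by Euclidean clustering of the non-ground points.

import math

GROUND_SLOPE = 0.15   # assumed max rise/run ratio for a ground point
CLUSTER_DIST = 0.5    # assumed clustering distance threshold in metres

def split_ground(points):
    """Treat points whose height grows slowly with range as ground."""
    ground, obstacles = [], []
    for x, y, z in points:
        rng = math.hypot(x, y) or 1e-9
        (ground if z / rng < GROUND_SLOPE else obstacles).append((x, y, z))
    return ground, obstacles

def euclidean_cluster(points, eps=CLUSTER_DIST):
    """Greedy single-linkage clustering on 3-D Euclidean distance."""
    clusters, remaining = [], list(points)
    while remaining:
        seed = remaining.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            centre = frontier.pop()
            near = [p for p in remaining if math.dist(centre, p) <= eps]
            for p in near:
                remaining.remove(p)
            cluster += near
            frontier += near
        clusters.append(cluster)
    return clusters

pts = [(5.0, 0.0, 0.1), (5.2, 0.0, 0.1),   # low, far out: ground
       (4.0, 2.0, 1.2), (4.1, 2.1, 1.3),   # tall, close together: obstacle A
       (9.0, -3.0, 1.5)]                   # isolated tall point: obstacle B
ground, obst = split_ground(pts)
print(len(ground), sorted(len(c) for c in euclidean_cluster(obst)))  # 2 [1, 2]
```

A production version would use a k-d tree for the neighbor queries and the range-dependent thresholds mentioned above; the sketch keeps only the logic.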
Further, the unmanned inspection equipment non-blind area intelligent feedback control method further comprises the following steps:
(1) Realizing real-time fusion and registration of multi-frame point clouds at the data layer: feature points are extracted from the raw point cloud according to curvature, a cost function is constructed from the point-to-line and point-to-plane distances between feature points, and the pose change between consecutive frames is then estimated from coarse to fine through frame-to-frame registration and map registration;
(2) Detecting obstacles with the registered dense point cloud: using the pose obtained from point cloud registration, the historical frame point clouds are transformed into the current frame's coordinate system to obtain a multi-frame point cloud set; this set is projected onto a grid map, and each grid cell is judged to be an obstacle or not according to the height distribution of its points; compared with single-frame detection, the multi-frame detection range after spatio-temporal fusion is farther;
(3) Tracking dynamic obstacles: contour features and laser-pulse reflection-intensity features of each obstacle are first extracted from the data of the three-dimensional laser radar and the multi-layer laser radar, respectively; the extracted features are then fused and the dynamic obstacle is modeled; matching and tracking of dynamic obstacles is completed by constructing a similarity matrix, the motion state of each dynamic obstacle is estimated with the established obstacle model, and the obstacle motion-state information is provided for dynamic obstacle recognition and trajectory prediction.
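Step (2) above can be sketched concretely: a historical frame is carried into the current frame by the registered 2-D pose, all frames are binned into a grid, and a cell is flagged when the heights of its points spread widely. This is an illustrative sketch, not the patent's implementation; the grid resolution `CELL` and spread threshold `H_SPREAD` are assumed values, and the pose is simplified to a planar translation plus yaw.

```python
# Illustrative sketch (assumed parameters): multi-frame point cloud fusion
# into a grid map with obstacle flagging by point-height spread.

import math

CELL = 0.5        # assumed grid resolution in metres
H_SPREAD = 0.3    # assumed minimum height spread (m) for an obstacle cell

def to_current_frame(points, dx, dy, dyaw):
    """Apply the registered inter-frame pose (translation dx, dy; yaw dyaw)."""
    c, s = math.cos(dyaw), math.sin(dyaw)
    return [(c * x - s * y + dx, s * x + c * y + dy, z) for x, y, z in points]

def obstacle_cells(frames):
    """Accumulate all frames into a grid; flag cells with tall point spread."""
    grid = {}
    for frame in frames:
        for x, y, z in frame:
            grid.setdefault((int(x // CELL), int(y // CELL)), []).append(z)
    return {c for c, zs in grid.items() if max(zs) - min(zs) >= H_SPREAD}

current = [(2.0, 2.0, 0.0), (2.1, 2.1, 0.9)]   # a tall object in this frame
history = [(1.0, 2.0, 0.05)]                   # older frame, 1 m behind
fused = obstacle_cells([current, to_current_frame(history, 1.0, 0.0, 0.0)])
print(fused)  # the single cell containing the tall object
```

The benefit claimed in the text, a farther detection range after spatio-temporal fusion, comes from the extra historical points densifying distant cells that a single frame would leave too sparse to classify.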
It is a further object of the present invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing a Beidou positioning technology in combination with an inertial navigation algorithm; based on the acquired motion state and surrounding environment of the unmanned inspection equipment, performing seamless switching between indoor and outdoor positioning by utilizing an inertial navigation positioning technology; and thirdly, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of unmanned inspection equipment in a low-density information reading equipment environment.
Another object of the present invention is to provide a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
Step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing a Beidou positioning technology in combination with an inertial navigation algorithm; based on the acquired motion state and surrounding environment of the unmanned inspection equipment, performing seamless switching between indoor and outdoor positioning by utilizing an inertial navigation positioning technology;
and thirdly, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of unmanned inspection equipment in a low-density information reading equipment environment.
Another object of the present invention is to provide an unmanned inspection equipment non-blind area intelligent feedback control system for operating the unmanned inspection equipment non-blind area intelligent feedback control method, the unmanned inspection equipment non-blind area intelligent feedback control system comprising:
the indoor positioning system, comprising a Beidou positioning module, a data acquisition module, and an inertial navigation module; it is used to acquire the motion state and surrounding environment of the unmanned inspection equipment in real time, and to perform non-blind-area outdoor navigation and positioning of the unmanned inspection equipment based on Beidou positioning and an inertial navigation algorithm;
The indoor and outdoor positioning switching system is used for performing seamless switching between indoor and outdoor positioning based on an inertial navigation positioning technology;
the outdoor positioning system is used for fusing a non-contact radio frequency positioning technology and an inertial navigation technology and performing non-blind area navigation of unmanned inspection equipment in a low-density information reading equipment environment;
the feedback control terminal is integrated with the wireless communication module and is used for controlling the motion trail of the unmanned inspection equipment from time and space dimensions and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level;
The unmanned inspection equipment non-blind-area intelligent feedback control system further comprises:
the upper system, a background remote control system that realizes task creation, execution, monitoring, stopping, and emergency control;
the indoor and outdoor integrated positioning and navigation system, comprising a high-precision map module, a multi-sensor fusion sensing system, a planning system, and a decision-making system; control covers both the lateral and longitudinal directions: the chassis is controlled through its interface to drive the unmanned mobile equipment, with feedback adjustment.
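The feedback adjustment mentioned above can be illustrated with a minimal closed-loop controller. This is not the patent's controller; it is a generic PID sketch for the longitudinal (speed) channel, with all gains and the first-order chassis response assumed for the example.

```python
# Illustrative sketch (assumed gains and plant): feedback adjustment of the
# chassis shown as a minimal PID speed controller in closed loop.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, target, measured, dt):
        """One control cycle: return the actuation command."""
        err = target - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Closed loop against a crude chassis model (assumed first-order response).
pid, speed = PID(kp=0.8, ki=0.2, kd=0.05), 0.0
for _ in range(200):
    cmd = pid.step(target=1.0, measured=speed, dt=0.1)
    speed += 0.1 * cmd   # assumed: speed follows the command with lag
print(round(speed, 2))   # converges near the 1.0 m/s target
```

This smooth convergence is the behavior the later figures contrast with the stepped speed jumps seen before the interface was optimized.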
Further, the data acquisition module is used to acquire the motion state and surrounding environment of the unmanned inspection equipment in real time with a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar, and other sensors.
The invention further aims to provide a terminal which is provided with the blind-zone-free intelligent feedback control system of the unmanned inspection equipment.
By combining all the technical schemes, the invention has the advantages and positive effects that:
the project research adopts a 3D fusion navigation mode, greatly improves the conditions, has extremely strong stability and applicability, provides a new generation navigation positioning and control technology for the unmanned inspection terminal, and is widely applied to an electric unmanned inspection system.
The invention realizes non-blind-area outdoor and indoor positioning of the unmanned inspection equipment with seamless switching between the two. Using the intelligent feedback control terminal, indoor and outdoor non-blind-area high-precision intelligent feedback control of the unmanned inspection equipment is achieved: the motion trajectory is controlled in the time and space dimensions, its control precision is kept within the meter level, and the operation delay time precision reaches 100 ns or better.
According to the invention, in an outdoor environment, sensors such as a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, and a three-dimensional laser radar are integrated on top of Beidou positioning; the motion state and surrounding environment of the unmanned inspection equipment are obtained in real time, an inertial navigation algorithm is developed, and a non-blind-area navigation and positioning terminal module integrating Beidou high-precision positioning with inertial navigation is formed. In an indoor environment, tight integration of non-contact radio-frequency positioning with inertial navigation is studied, realizing non-blind-area navigation in a low-density information-reading-equipment environment. Meanwhile, based on inertial navigation positioning, seamless switching between indoor and outdoor positioning is studied, realizing high-precision positioning with automatic indoor/outdoor handover. On this basis, a wireless communication module is integrated, and an indoor and outdoor non-blind-area high-precision intelligent feedback control terminal for the unmanned inspection equipment is developed; the motion trajectory of the unmanned inspection equipment is controlled in the time and space dimensions, its trajectory control precision is kept within the meter level, and the operation delay time precision is kept within 100 ns.
Drawings
FIG. 1 is a schematic diagram of a non-blind area intelligent feedback control system of an unmanned inspection device according to an embodiment of the present invention;
in the figure: 1. an indoor positioning system; 2. an indoor and outdoor positioning switching system; 3. an outdoor positioning system; 4. and feeding back the control terminal.
FIG. 2 is a flowchart of a non-blind area intelligent feedback control method of unmanned inspection equipment provided by the embodiment of the invention;
FIG. 3 is a schematic diagram of 3D real-time fusion positioning provided by an embodiment of the present invention;
FIG. 4 is an indoor and outdoor high-precision map provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of editing diagram data provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a basic definition relationship of a protocol stack according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of stepped jump of the rotation speed of the chassis before docking optimization, which is provided by the embodiment of the invention;
FIG. 8 is a schematic diagram of linear feedback of the chassis rotation speed after docking optimization according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides an intelligent feedback control system without blind areas for unmanned inspection equipment, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the non-blind area intelligent feedback control system of the unmanned inspection equipment provided by the embodiment of the invention comprises:
the indoor positioning system 1 is used for fusing non-contact radio frequency positioning technology with inertial navigation technology and for performing non-blind-area navigation of the unmanned inspection equipment in a low-density information reading equipment environment.
The indoor and outdoor positioning switching system 2 is used for seamless switching between indoor and outdoor positioning based on inertial navigation positioning technology;
the outdoor positioning system 3 comprises a Beidou positioning module, a data acquisition module and an inertial navigation module; it is used for acquiring the motion state and surrounding environment of the unmanned inspection equipment in real time, and for performing non-blind-area outdoor navigation positioning of the equipment based on Beidou positioning and an inertial navigation algorithm;
and the feedback control terminal 4, which integrates a wireless communication module, controls the motion trail of the unmanned inspection equipment in the time and space dimensions, and keeps the motion trail control precision within the meter level.
The unmanned inspection equipment non-blind-area intelligent feedback control system further comprises:
the upper system, a background remote control system that supports the creation, execution, monitoring, stopping and emergency control of tasks;
the indoor and outdoor integrated positioning and navigation system, which comprises a high-precision map module, a multi-sensor fusion sensing system, a planning system and a decision-making system; control covers the transverse and longitudinal directions: the chassis is commanded through its interface to drive the unmanned mobile equipment, with feedback adjustment applied.
The data acquisition module provided by the embodiment of the invention acquires the motion state and surrounding environment of the unmanned inspection equipment in real time using sensors such as a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope and a three-dimensional laser radar.
As shown in fig. 2, the non-blind area intelligent feedback control method for the unmanned inspection equipment provided by the embodiment of the invention comprises the following steps:
s101: acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
s102: based on the acquired motion state and surrounding environment of the unmanned inspection equipment, the unmanned inspection equipment non-blind area outdoor navigation positioning is performed by utilizing a Beidou positioning technology in combination with an inertial navigation algorithm; based on the acquired motion state and surrounding environment of the unmanned inspection equipment, performing seamless switching between indoor and outdoor positioning by utilizing an inertial navigation positioning technology;
Before the unmanned inspection equipment operates, a spatial representation needs to be created for it so that it can judge its own position in real time on the basis of that spatial information. Constructing the high-precision map provides basic map data for the intelligent driving system and helps the unmanned inspection system 'see' the road clearly; the positioning data identified by each sensor are matched against the high-precision map data to determine the inspection position and the road conditions the vehicle will face;
s103: and (3) fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of the unmanned inspection equipment in the environment of the low-density information reading equipment.
The non-blind area intelligent feedback control method of the unmanned inspection equipment provided by the embodiment of the invention further comprises the following steps: and controlling the motion trail of the unmanned inspection equipment from time and space dimensions by using a feedback control terminal of the integrated wireless communication module, and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level.
The technical scheme of the invention is further described below with reference to specific embodiments.
Example 1:
In an outdoor environment, based on Beidou positioning, sensors such as a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope and a three-dimensional laser radar are integrated to acquire the motion state and surrounding environment of the unmanned inspection equipment in real time; an inertial navigation algorithm is developed, forming a non-blind-area navigation positioning terminal module that fuses Beidou high-precision positioning with inertial navigation. In an indoor environment, tight integration of non-contact radio frequency positioning technology with inertial navigation technology is studied, realizing non-blind-area navigation in a low-density information reading equipment environment. Meanwhile, seamless switching between indoor and outdoor positioning, based on inertial navigation positioning technology, is studied, realizing high-precision positioning with automatic indoor-outdoor handover. On top of these technologies, a wireless communication module is integrated and an indoor and outdoor non-blind-area high-precision intelligent feedback control terminal for the unmanned inspection equipment is developed, which controls the motion trail of the equipment in the time and space dimensions, keeps the motion trail control precision within the meter level, and keeps the operation delay time precision within 100 ns.
As shown in FIG. 3, the integrated navigation controller takes an FPGA as its core and integrates synchronous control of multiple sensor data streams. The synchronous controller achieves high integration, high time precision and synchronous acquisition and control of multi-sensor data as follows:
(1) A high-precision clock reference is established. A high-stability quartz crystal serves as the clock source of the synchronizer and is calibrated against the PPS pulse and NMEA data of the satellite positioning chip. This exploits the high long-term stability of the satellite chip's PPS pulse together with the high short-term stability of the quartz crystal, establishing a high-precision time reference over the whole measurement window.
(2) Multi-sensor synchronization is realized. According to the characteristics of each sensor, active synchronization is adopted for the inertial navigation unit and the camera, while time-service synchronization is adopted for the three-dimensional laser radar; the data of each sensor are collected in pure hardware and stamped with accurate time labels as synchronization alignment marks, realizing high-precision synchronization of the raw data of multiple sensors.
(3) The schematic and PCB design and debugging of the synchronous controller's whole hardware circuit are completed. The multi-sensor data acquisition circuits for the inertial navigation unit, differential GPS, three-dimensional laser radar and so on are designed; high-speed interfaces and storage circuits (gigabit Ethernet, USB 3.0, USB 2.0, mSATA, TF card, etc.) are used to debug the whole hardware circuit, realizing the acquisition, transmission and storage of multi-sensor data.
(4) The FPGA program for controlling each sensor and synchronously collecting its data is written and debugged. With an FPGA chip as the carrier, combined with the external hardware circuit, a high-precision time reference is established in a hardware description language; SPI interface control and inertial navigation data acquisition are designed; a UART interface is designed for synchronous timing of the three-dimensional laser radar, realizing command interaction between the FPGA and the TX2 and transmission of synchronized encoder and camera data; a control program converts parallel data into USB serial data, realizing high-speed transfer of large volumes of raw sensor data between the FPGA and the TX2. The synchronous controller can provide high-precision synchronized raw data from multiple sensors, including satellite positioning, three-dimensional laser radar, inertial navigation unit and camera, in real time. Its indoor and outdoor positioning accuracy can reach 10 cm. The synchronous controller also offers small volume, light weight, low manufacturing cost, high integration, fast data processing and strong expandability.
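The clock-reference idea in step (1) above can be sketched in a few lines: the crystal supplies fine short-term ticks, while the once-per-second PPS edge supplies the long-term reference used to rescale them. This is an illustrative simulation only, not the patent's FPGA implementation; the 10 MHz nominal frequency and the 50 ppm drift are assumed values.

```python
# Illustrative sketch (not the patent's FPGA logic): disciplining a local
# crystal-oscillator counter with 1-PPS pulses from a satellite positioning
# chip. The crystal gives fine-grained short-term ticks; the PPS edge,
# arriving exactly once per true second, supplies the long-term reference.

NOMINAL_HZ = 10_000_000  # assumed 10 MHz crystal

def discipline(pps_ticks):
    """Given the crystal tick count latched at each PPS edge, estimate the
    actual oscillator frequency from the mean tick interval between pulses."""
    if len(pps_ticks) < 2:
        return float(NOMINAL_HZ)
    intervals = [b - a for a, b in zip(pps_ticks, pps_ticks[1:])]
    return sum(intervals) / len(intervals)  # ticks per true second

def timestamp(tick, last_pps_tick, last_pps_second, est_hz):
    """Convert a raw crystal tick into seconds on the PPS-aligned timeline."""
    return last_pps_second + (tick - last_pps_tick) / est_hz

# Example: the crystal actually runs 50 ppm fast, so each true second
# accumulates 10_000_500 ticks instead of the nominal 10_000_000.
pps = [0, 10_000_500, 20_001_000, 30_001_500]
hz = discipline(pps)                        # estimated true frequency
t = timestamp(35_001_750, pps[-1], 3, hz)   # halfway through the 4th second
```

Without the PPS correction, a naive `tick / NOMINAL_HZ` timestamp would drift by 50 µs per second; rescaling by the PPS-estimated frequency removes that long-term drift, which is the combination of stabilities the text describes.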
Indoor and outdoor high-precision map
As shown in fig. 4, before the unmanned inspection equipment operates, a spatial representation needs to be created for it so that it can judge its own position in real time based on that spatial information, solving the problem of 'where am I'. Constructing the high-precision map provides basic map data for the intelligent driving system and helps the unmanned inspection system 'see' the road clearly; the positioning data identified by each sensor are matched against the high-precision map data to determine the inspection position and the road conditions the vehicle will face. The high-precision map serves as the base map for navigation and positioning; it is essential basic data for the navigation of unmanned inspection equipment, assisting high-precision positioning, planning and decision making, and feedback control.
Types of graphs
(1) Original map: the original map format is rmap, used to make navigation maps and dense maps.
(2) Navigation map: the navigation map format is hmap, which is used for positioning and navigation of the robot.
(3) Dense map: the dense map format is txt or pcd, which is used for making high-precision map and global map.
(4) High-precision map: the high-precision map format is csv, used for path planning of the robot.
(5) Global map: the global map is a grid map in png format, used for global planning and local obstacle avoidance.
The main steps of the drawing construction
(1) Survey planning: before mapping, understand the basic conditions of the mapping area and plan the mapping route in advance, in particular adding closed-loop routes where appropriate.
(2) Data acquisition: use a suitable acquisition platform and mapping equipment to collect data over the mapping area. Keep a constant speed during collection, avoid turning too fast, and keep the equipment stable.
(3) Data processing: after data acquisition, open the SLAMmapping software and import the raw data; data processing then runs automatically.
(4) Data editing: perform closed-loop operation on the data, and apply BA and graph optimization to the point cloud.
(5) Data export: after editing, the navigation map and the dense map can be exported. The dense map is a point cloud in a standard format and can be inspected with SLAMmapping or third-party point cloud software.
(6) After export, the high-precision map and the global map can be produced directly in SLAMmapping.
Map editing
The original map data editing mainly performs closed-loop operation on the data and applies BA and graph optimization to the point cloud.
After the original map is edited, the navigation map and the dense map can be derived. The dense map is a point cloud in a standard format and can be inspected with SLAMmapping or third-party point cloud software; the main steps are shown in fig. 5.
High-precision map making and editing
Open the dense map (pcd or txt format) produced in the previous step and perform high-precision mapping with SLAMmapping. The robot's walking path is planned and edited on the point cloud; it appears as the purple lines in the figure.
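The text says the high-precision map is exported as csv and used for path planning, but gives no schema. A hypothetical minimal waypoint layout (columns `x`, `y`, `heading`, all invented here for illustration) might be written and consumed like this:

```python
# Hypothetical sketch: the column layout (x, y, heading) is assumed, not taken
# from the patent. An in-memory buffer stands in for the exported csv file.
import csv
import io
import math

waypoints = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, math.pi / 2)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x", "y", "heading"])   # header row
writer.writerows(waypoints)

# Read the map back and compute the total path length along the polyline,
# the kind of quantity a path planner would derive from such a file.
buf.seek(0)
rows = list(csv.DictReader(buf))
pts = [(float(r["x"]), float(r["y"])) for r in rows]
length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
```

A real planner would add fields such as speed limits or zone tags, but the round trip above is the essential contract: the map is plain tabular data any module can parse.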
Global map making and editing
With SLAMmapping, the software automatically deletes ground points and noise points. After the virtual walls are added, the global map can be generated.
3D perception
Semantic extraction and segmentation are fused on large-scale scene point cloud data to build the basis of a three-dimensional semantic map; the high-precision three-dimensional point cloud semantic map is the core of accurate path planning and scene reconstruction. Static obstacles and various dynamic obstacles are sensed on top of the high-precision map, and the distance, azimuth, speed and other attributes of each obstacle are computed in real time, providing a safe and reliable obstacle-avoidance scheme for the unmanned inspection equipment.
The 3D laser radar has a wide detection range and high range accuracy, and is widely used for obstacle detection and target tracking in environment perception tasks. Navigation of unmanned inspection equipment requires accurately detecting and tracking dynamic obstacles, estimating their motion state, and avoiding potential collisions with them. Real-time obstacle detection uses an improved Euclidean clustering algorithm: the point cloud data are preprocessed, and ground and non-ground point clouds are separated by a ground-gradient separation algorithm; obstacle clusters are detected in the non-ground point cloud using different clustering distance thresholds and distinguished with cuboid bounding-box markers; the inherent adjacent-point spacing of each ground laser beam is compared with the actual distance between two adjacent points and, combined with the angle difference of adjacent points and the point cloud clustering, the passable area is extracted; finally the obstacle detection and passable-area extraction results are merged and the passability of the area is assessed. The traditional laser radar algorithm perceives with only the current single-frame lidar point cloud; because lidar point clouds are sparse, such single-frame methods detect distant obstacles poorly. With an effective spatio-temporal fusion method, the perception performance of the laser radar can be greatly improved.
The following work is carried out: 1. Multi-frame point cloud real-time fusion registration is realized at the data layer. Feature points are extracted from the raw point cloud according to curvature, a cost function is constructed from the point-to-line and point-to-plane distances between feature points, and the pose change between consecutive frames is then estimated coarse-to-fine through inter-frame registration and map registration. Compared with traditional methods, fewer laser points take part in the computation, balancing registration quality against computation speed. 2. Obstacles are detected with the registered, densified point cloud. Using the pose obtained from point cloud registration, historical frame point clouds are transformed into the current frame's coordinate system to obtain a multi-frame point cloud set. This set is projected onto a grid map, and each grid cell is judged to be an obstacle or not from the height distribution of its points. Compared with single-frame detection, the detection range of the spatio-temporally fused multi-frame point cloud is farther. 3. The accuracy and speed of dynamic obstacle detection and tracking are improved with a detection and tracking method based on multi-feature fusion.
First, the contour features and laser-pulse reflection intensity features of an obstacle are extracted from the data of the three-dimensional laser radar and the multi-layer laser radar respectively; the extracted features are then fused and the dynamic obstacle is modeled. Matching and tracking of the dynamic obstacle are completed by constructing a similarity matrix, its motion state is estimated with the established obstacle model, and the resulting motion state information feeds dynamic obstacle identification and dynamic trajectory prediction.
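The multi-frame grid-occupancy idea in step 2 above can be sketched minimally: transform a historical frame into the current frame with the registration pose, merge the clouds, bin points into grid cells, and flag cells whose height spread is large. The pose, cell size and height threshold below are illustrative assumptions, not the patent's parameters.

```python
# Minimal sketch of multi-frame point cloud fusion for obstacle detection.
# 2D rigid-body poses (yaw + translation) stand in for the full registration.
import math
from collections import defaultdict

def transform(points, yaw, tx, ty):
    """Apply a 2D rigid-body pose to (x, y, z) points."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z) for x, y, z in points]

def occupancy(points, cell=0.5, height_spread=0.3):
    """Flag a grid cell as an obstacle when the z-spread of its points is
    larger than the assumed threshold."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k for k, zs in cells.items() if max(zs) - min(zs) > height_spread}

# The current frame sees only the low part of a distant object; the historical
# frame (robot was 1 m behind, same heading) contributes the upper points.
current = [(4.0, 0.0, 0.1)]
historical = [(5.0, 0.0, 0.9)]          # in the *old* frame, 1 m further away
merged = current + transform(historical, 0.0, -1.0, 0.0)
obstacles = occupancy(merged)
single_frame = occupancy(current)       # misses the obstacle entirely
```

The single-frame pass returns no obstacle because one low point has zero height spread, while the fused cloud flags the cell: exactly the "farther detection distance after spatio-temporal fusion" claimed in the text.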
Planning decisions
Global path planning
The PathPlanning module corresponds to global path planning; its algorithm can be conveniently replaced. By default it provides: 1. manually drawn global paths; 2. global path planning based on the searched optimal path.
Local path planning
The Trajectory Generation & Modification module in the figure corresponds to local path planning; its algorithm can be conveniently replaced. The default is optimization-based local path planning.
Algorithm principle: first, the global planned path is interpolated to obtain initial local trajectory points; then a nonlinear optimization problem is constructed, taking into account the distance constraints between trajectory points and obstacles, the kinematic and dynamic constraints of the chassis, and a minimum-time constraint, and solved for the optimized local trajectory points; finally, the chassis control command is computed with an LQR control algorithm.
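The first step of this pipeline, interpolating the global path into initial local trajectory points, can be sketched directly; the optimization and LQR tracking stages are omitted, and the 0.5 m spacing is an illustrative assumption.

```python
# Sketch of the local planner's first step: densify the global path by linear
# interpolation so the optimizer starts from evenly spaced trajectory points.
import math

def interpolate(path, spacing=0.5):
    """Resample a 2D polyline so consecutive points are at most `spacing`
    apart, keeping the original vertices."""
    out = [path[0]]
    for a, b in zip(path, path[1:]):
        d = math.dist(a, b)
        n = max(1, math.ceil(d / spacing))   # subdivisions for this segment
        for i in range(1, n + 1):
            t = i / n
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

pts = interpolate([(0.0, 0.0), (2.0, 0.0)], spacing=0.5)
```

A 2 m straight segment becomes five points 0.5 m apart; these would then be perturbed by the nonlinear optimizer subject to the obstacle, kinematic, dynamic and time constraints the text lists.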
Global path planning
The core of the global path planning algorithm is a global path search over the existing road-network topology. First, the multisource shortest path problem is solved over the whole road network to obtain the optimal path and shortest distance between any two waypoints. Then a TSP model is built from the issued task points, and solving the TSP yields the inspection route.
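The two stages above can be sketched as follows: Floyd-Warshall gives the multisource shortest distances over the road network, and a nearest-neighbour heuristic stands in for the TSP solver (the text does not name the exact solver used); the small graph and its edge weights are invented for illustration.

```python
# Sketch of the global planning pipeline: all-pairs shortest paths over the
# road network, then a greedy tour over the issued task points.
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest distances for an undirected weighted graph."""
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def route(dist, start, tasks):
    """Visit all task points greedily by nearest shortest-path distance
    (a simple stand-in for a proper TSP solver)."""
    order, todo = [start], set(tasks)
    while todo:
        nxt = min(todo, key=lambda t: dist[order[-1]][t])
        order.append(nxt)
        todo.discard(nxt)
    return order

# Toy road network: chain 0-1-2 plus a longer direct 0-2 road.
d = floyd_warshall(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0)])
tour = route(d, 0, [2, 1])
```

Note that the shortest 0-to-2 distance is 2.0 via node 1, not 5.0 via the direct edge, which is why the all-pairs step must run before the tour is built.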
Local path planning
The core of the local path planning algorithm is local path planning based on a conjugate gradient optimization algorithm. First, the global planned path is interpolated to obtain initial local trajectory points; then, with the transverse and longitudinal rates of change of the trajectory as the optimization objective, the trajectory is optimized with a conjugate gradient algorithm to obtain the final local path; finally, the local trajectory is tracked with an MPC control algorithm to obtain the chassis control command.
Indoor and outdoor navigation intelligent control
Master side communication
Communication with the master control end is realized through the network port; real-time operating data of the navigation system and the chassis are collected and fed back to the master control end.
1. Summary of the design
The system uses the TCP/IP protocol for network communication with the master control system. The basic definition of the protocol stack is:
1) The packet header is 6 bytes in total: the first 2 bytes are reserved, and the last 4 bytes are an integer whose value is the message length;
2) The payload section carries the communication content between systems as a character string in JSON format.
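The framing just defined can be sketched directly. The byte order of the 4-byte length field is not stated in the text, so big-endian network order is assumed here, and the payload field names are hypothetical:

```python
# Sketch of the protocol-stack framing: a 6-byte header (2 reserved bytes plus
# a 4-byte message length, big-endian assumed) in front of a JSON payload.
import json
import struct

RESERVED = b"\x00\x00"

def pack_frame(payload: dict) -> bytes:
    """Serialize a payload dict into reserved-bytes + length + JSON body."""
    body = json.dumps(payload).encode("utf-8")
    return RESERVED + struct.pack(">I", len(body)) + body

def unpack_frame(frame: bytes) -> dict:
    """Parse the length from bytes 2..6 and decode exactly that many bytes."""
    (length,) = struct.unpack(">I", frame[2:6])
    return json.loads(frame[6 : 6 + length].decode("utf-8"))

msg = {"cmd": "register", "robot": "patrol-01"}   # hypothetical field names
frame = pack_frame(msg)
roundtrip = unpack_frame(frame)
```

Because TCP delivers a byte stream, a real receiver would first read exactly 6 header bytes, then read `length` more bytes before calling the JSON decoder; the explicit length field is what makes that two-phase read possible.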
2. Key communication instruction
The key instructions include: TCP connection registration; forward in the transmit direction; robot coordinate information acquisition; mission route planning; transmitting point location in advance; run status message; navigation real-time message; navigation service start/stop. (The field tables for these instructions appear as images in the original specification and are not reproduced here.)
Chassis communication
The chassis and the central control unit communicate over a serial interface. Chassis kinematic calculation and adaptation of the related sensors are implemented: conversion from whole-machine velocity to the four wheel speeds, and reduction from the four wheel speeds back to whole-machine velocity. Development, adaptation and debugging of the existing chassis motion-control drive follow the existing communication protocol.
Control of the robot chassis is realized through CAN bus communication. In the present communication protocol, the structure of the message frames strictly follows a master/slave reply mechanism: the mobile robot's control core module is the master, and the other modules are slaves.
A message frame sent by the master has the structure: 2-byte start bits, 1-byte target node ID, 1-byte data length, 1-byte function module code, 1-byte method code, N parameter bytes, and a 1-byte checksum. The response frame fed back by a slave is: 2-byte start bits, 1-byte local node ID, 1-byte data length, 1-byte accessed function module code, 1-byte accessed method code, 1 status byte, N parameter bytes, and a 1-byte checksum. The start bits consist of 2 bytes and inform the target node that a data transfer has begun; in both master and slave frames these two bytes are fixed, in sequence 0x55 and 0xAA.
The address field byte carries the address information of a message frame. In a frame sent by the master, the address field explicitly indicates which slave should accept the message; in a frame sent by a slave, its task is to let the master distinguish the source of the information. The address field is 1 byte.
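The master frame layout described above can be sketched as follows. The checksum rule is not given in the text, so a modulo-256 sum over the preceding bytes is assumed purely for illustration; the node, module and method codes are made-up values.

```python
# Sketch of the master->slave message frame: fixed start bytes 0x55 0xAA,
# target node ID, data length, function module code, method code, N parameter
# bytes, then a checksum (assumed here: sum of preceding bytes mod 256).
START = bytes([0x55, 0xAA])

def build_frame(node_id: int, module: int, method: int, params: bytes = b"") -> bytes:
    body = bytes([node_id, len(params), module, method]) + params
    frame = START + body
    return frame + bytes([sum(frame) % 256])   # assumed checksum rule

def verify_frame(frame: bytes) -> bool:
    """Check the start bits and the assumed checksum."""
    return frame[:2] == START and sum(frame[:-1]) % 256 == frame[-1]

# Hypothetical command: node 0x01, module 0x02, method 0x10, one parameter.
frame = build_frame(node_id=0x01, module=0x02, method=0x10, params=b"\x05")
ok = verify_frame(frame)
```

Carrying the data length inside the frame lets a receiver on the serial link know how many parameter bytes to consume before expecting the checksum, mirroring the length field in the TCP header described earlier.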
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those of ordinary skill in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules may be implemented by hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., as well as software executed by various types of processors, or by a combination of the above hardware circuitry and software, such as firmware.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.

Claims (8)

1. A non-blind-area intelligent feedback control method for unmanned inspection equipment, characterized by comprising the following steps:
step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment;
step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing a Beidou positioning technology in combination with an inertial navigation algorithm; based on the acquired motion state and surrounding environment of the unmanned inspection equipment, performing seamless switching between indoor and outdoor positioning by utilizing an inertial navigation positioning technology;
step three, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of unmanned inspection equipment in a low-density information reading equipment environment;
the unmanned inspection equipment non-blind area intelligent feedback control method further comprises the following steps:
(1) Realizing multi-frame point cloud real-time fusion registration on a data layer, extracting feature points from original point clouds according to curvature, constructing a cost function according to the point-line distance and the point-surface distance between the feature points, and then estimating the pose change of front and rear frames from rough to fine through inter-frame registration and map registration;
(2) Detecting obstacles with the registered, densified point cloud: using the pose obtained from point cloud registration, historical frame point clouds are transformed into the current frame coordinate system to obtain a multi-frame point cloud set; the multi-frame point cloud set is projected onto a grid map, and each grid cell is judged to be an obstacle or not from the point cloud height distribution; compared with single-frame point cloud detection, the detection distance of the spatio-temporally fused multi-frame point cloud is farther;
(3) Firstly, respectively extracting outline features and laser pulse reflection intensity features of an obstacle from data obtained by a three-dimensional laser radar and data obtained by a multi-layer laser radar, then fusing the extracted features and modeling the dynamic obstacle, completing matching tracking of the dynamic obstacle by constructing a similarity matrix, completing motion state estimation of the dynamic obstacle by utilizing an established obstacle model, and providing obstacle motion state information for dynamic obstacle identification and dynamic track prediction;
the unmanned inspection equipment non-blind area intelligent feedback control method specifically comprises the following steps:
type of graph:
(1) Original map: the original map format is rmap, which is used for making navigation maps and dense maps;
(2) Navigation map: the navigation map format is hmap, which is used for positioning and navigation of the robot;
(3) Dense map: the dense map format is txt or pcd, which are used for making a map and a global map;
(4) High-precision map: the high-precision map format is csv, which is used for robot path planning;
(5) Global map: the global map is a grid map in png format, which is used for global planning and local obstacle avoidance;
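For illustration, the five map types and their formats can be collected into a small lookup table. The registry and helper below use hypothetical names (the csv map is labelled "high_precision" here, matching the high-precision map produced later in SLAMmapping); only the extensions and purposes come from the list above:

```python
import os

# Hypothetical registry of the five map types listed above and their formats.
MAP_TYPES = {
    "original":       {"ext": (".rmap",),       "used_for": "making navigation and dense maps"},
    "navigation":     {"ext": (".hmap",),       "used_for": "robot positioning and navigation"},
    "dense":          {"ext": (".txt", ".pcd"), "used_for": "making high-precision and global maps"},
    "high_precision": {"ext": (".csv",),        "used_for": "robot path planning"},
    "global":         {"ext": (".png",),        "used_for": "global planning and local obstacle avoidance"},
}

def map_types_for(path):
    """Return the names of the map types whose format matches the file extension."""
    ext = os.path.splitext(path)[1].lower()
    return sorted(name for name, meta in MAP_TYPES.items() if ext in meta["ext"])
```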
and (3) drawing:
(1) Survey planning;
(2) After the survey planning, data acquisition is carried out: a suitable acquisition platform and image acquisition equipment are adopted to collect data over the mapping area;
(3) Data processing: after data acquisition, the SLAMmapping software is opened, the raw data are imported, and data processing runs automatically;
(4) Data editing: after data processing, data editing is carried out, which performs closed-loop operation on the data and applies BA and graph optimization to the point cloud;
(5) Data export: after BA and graph optimization of the point cloud, the edited data are exported as a navigation map and a dense map; the dense map is a point cloud in a standard format and can be viewed with SLAMmapping or third-party point cloud software;
(6) High-precision map and global map production: after the data are exported, the high-precision map and the global map are made directly in SLAMmapping;
Map editing
The original map data editing mainly performs closed-loop operation on the data and applies BA and graph optimization to the point cloud;
after the original map is edited, a navigation map and a dense map can be exported, wherein the dense map is a point cloud in a standard format that can be viewed with SLAMmapping or third-party point cloud software;
high-precision map making and editing
Open the dense map (pcd or txt format) produced in the previous step, make a high-precision map with SLAMmapping, and plan and edit the robot walking path on the point cloud;
global map making and editing: with SLAMmapping, the software automatically deletes ground points and noise points and generates the global map after virtual walls are added;
3D perception is integrated with semantic extraction and segmentation based on large-scale scene point cloud data, static obstacle perception based on a map and various dynamic obstacle perception are achieved, the distance, the azimuth and the speed of the obstacle are calculated in real time, and a safe and reliable perception obstacle avoidance scheme is provided for unmanned inspection equipment;
the unmanned inspection equipment navigation detects obstacles in real time with an improved Euclidean clustering algorithm: the point cloud data are preprocessed, and ground and non-ground points are separated by a ground gradient separation algorithm; obstacle clusters are detected in the non-ground point cloud with different clustering distance thresholds and distinguished with cuboid frame markers; the inherent adjacent-point spacing of each ground laser beam is compared with the actual distance between two adjacent points, and combined with the adjacent-point angle difference and point cloud clustering to extract the passable area; the obstacle detection and passable-area extraction results are then combined to check the passability of the area, whereas the traditional laser radar algorithm uses only the current single-frame radar point cloud for perception.
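The improved Euclidean clustering algorithm itself is not spelled out above. As a point of reference, plain Euclidean clustering with a fixed distance threshold, plus the cuboid frame marker, can be sketched as follows (brute-force neighbor search for clarity, illustrative radii and sizes, ground points assumed already removed):

```python
import numpy as np

def euclidean_cluster(points, radius=0.5, min_size=3):
    """Grow clusters by repeatedly absorbing every point within `radius`
    of the cluster frontier (brute-force neighbor search for clarity)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            dists = np.linalg.norm(points - points[idx], axis=1)
            for nb in map(int, np.nonzero(dists <= radius)[0]):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.append(nb)
                    frontier.append(nb)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

def cuboid_marker(points, cluster):
    """Axis-aligned cuboid for one cluster: (min corner, max corner)."""
    pts = points[cluster]
    return pts.min(axis=0), pts.max(axis=0)
```

Production lidar pipelines replace the brute-force search with a KD-tree and vary `radius` with range, which is one common reading of the "different clustering distance thresholds" above.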
2. The unmanned inspection equipment non-blind area intelligent feedback control method according to claim 1, further comprising: controlling the motion trajectory of the unmanned inspection equipment in the time and space dimensions with a feedback control terminal integrating a wireless communication module, and keeping the motion trajectory control precision of the unmanned inspection equipment within a meter level.
3. The unmanned inspection equipment non-blind area intelligent feedback control method according to claim 1, further comprising:
the method comprises the steps of using a high-stability quartz crystal as the clock source of the synchronizer, calibrating the high-stability quartz crystal by combining the PPS pulse and NMEA data of the satellite positioning chip, and establishing a time reference over the whole measurement time range;
according to the characteristics of each sensor, active synchronization is adopted for an inertial navigation unit and a camera sensor, time service synchronization is adopted for a three-dimensional laser radar, data of each sensor is collected in a pure hardware mode, and a time tag is marked as a synchronization alignment mark, so that synchronization of original data of multiple sensors is realized;
designing a multi-sensor data acquisition circuit with Gigabit Ethernet, USB 3.0, USB 2.0, mSATA and TF-card high-speed interfaces and a storage circuit, and debugging it, realizing the acquisition, transmission and storage of the data of the multiple sensors;
An FPGA chip is taken as the carrier and combined with an external hardware circuit, and the time reference is established through a hardware description language; an SPI interface is designed for control and inertial navigation data acquisition; a UART interface is designed for synchronous timing of the three-dimensional laser radar, realizing instruction interaction between the FPGA and the TX2 and transmission of the synchronized encoder and camera data; and the control program of the chip converts parallel data into USB serial data, realizing high-speed transmission of large amounts of raw sensor data between the FPGA and the TX2.
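The PPS/NMEA calibration of the quartz clock source amounts to fitting local oscillator ticks, latched at each PPS edge, against the UTC seconds carried in the NMEA sentences. The sketch below is a simplified software version of that fit (the patent does this on the FPGA in hardware; the tick values and the nominal 10 MHz rate are assumptions):

```python
def calibrate(ticks_at_pps, utc_at_pps):
    """Fit utc = rate * tick + offset by least squares, so `rate` absorbs
    the oscillator's frequency error and `offset` anchors it to UTC."""
    n = len(ticks_at_pps)
    mean_t = sum(ticks_at_pps) / n
    mean_u = sum(utc_at_pps) / n
    num = sum((t - mean_t) * (u - mean_u) for t, u in zip(ticks_at_pps, utc_at_pps))
    den = sum((t - mean_t) ** 2 for t in ticks_at_pps)
    rate = num / den                 # seconds per tick (about 1e-7 for 10 MHz)
    offset = mean_u - rate * mean_t
    return rate, offset

def tick_to_utc(tick, rate, offset):
    """Timestamp a sensor sample captured at a given local tick count."""
    return rate * tick + offset
```

Every sensor sample hardware-tagged with a local tick can then be mapped onto the common UTC base via `tick_to_utc`, which plays the role of the synchronization alignment mark described above.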
4. A computer device, characterized in that the computer device comprises a memory and a processor, the memory stores a computer program, and the computer program when executed by the processor causes the processor to execute the blind zone-free intelligent feedback control method of the unmanned inspection device according to any one of claims 1 to 3.
5. A computer-readable storage medium storing a computer program that is executed by a processor to perform the blind-zone-free intelligent feedback control method of the unmanned inspection apparatus according to any one of claims 1 to 3.
6. An unmanned inspection equipment non-blind area intelligent feedback control system, characterized in that the unmanned inspection equipment non-blind area intelligent feedback control system comprises:
The indoor positioning system comprises a Beidou positioning module, a data acquisition module and an inertial navigation module; it is used for acquiring the motion state and the surrounding environment of the unmanned inspection equipment in real time, and for carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment based on Beidou positioning and an inertial navigation algorithm;
the indoor and outdoor positioning switching system is used for performing seamless switching between indoor and outdoor positioning based on an inertial navigation positioning technology;
the outdoor positioning system is used for fusing a non-contact radio frequency positioning technology and an inertial navigation technology and performing non-blind area navigation of unmanned inspection equipment in a low-density information reading equipment environment;
the feedback control terminal is integrated with the wireless communication module and is used for controlling the motion trail of the unmanned inspection equipment from time and space dimensions and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level;
the unmanned inspection equipment non-blind area intelligent feedback control system further comprises:
the upper system is a background remote control system that realizes the creation, execution, monitoring, stopping and emergency control of tasks;
the indoor and outdoor integrated positioning navigation system comprises a map module, a multi-sensor fusion sensing system, a planning system and a decision system; the control comprises lateral and longitudinal control, and the chassis is controlled through an interface, realizing control of the unmanned mobile equipment and performing feedback adjustment;
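The lateral and longitudinal feedback adjustment through the chassis interface is not detailed above; a common choice is one feedback loop per axis, e.g. PID. The sketch below is a generic PID on a toy one-dimensional cross-track offset, purely illustrative (the gains, the chassis response model and the `simulate` helper are all assumptions, not the patented controller):

```python
class PID:
    """Minimal PID controller for one control axis (e.g. lateral offset)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def simulate(offset=1.0, cycles=100):
    """Toy loop: the chassis is abstracted as a cross-track offset that the
    steering command nudges each control cycle."""
    pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=0.1)
    for _ in range(cycles):
        steer = pid.step(0.0 - offset)   # error = desired offset (0) - actual
        offset += 0.1 * steer            # simplistic chassis response
    return offset
```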
The unmanned inspection equipment non-blind area intelligent feedback control system further comprises:
(1) Realizing multi-frame point cloud real-time fusion registration at the data layer: extracting feature points from the original point clouds according to curvature, constructing a cost function from the point-to-line and point-to-plane distances between feature points, and then estimating the pose change between consecutive frames from coarse to fine through inter-frame registration and map registration;
(2) Detecting obstacles by using the registered compact point cloud: converting the historical frame point clouds into the current frame coordinate system by using the poses obtained from point cloud registration to obtain a multi-frame point cloud set; projecting the multi-frame point cloud set onto a grid map, and judging whether each grid cell is an obstacle according to the height distribution of its points; compared with single-frame point cloud detection, the multi-frame detection after spatio-temporal fusion reaches a longer distance;
(3) Firstly, respectively extracting outline features and laser pulse reflection intensity features of an obstacle from data obtained by a three-dimensional laser radar and data obtained by a multi-layer laser radar, then fusing the extracted features and modeling the dynamic obstacle, completing matching tracking of the dynamic obstacle by constructing a similarity matrix, completing motion state estimation of the dynamic obstacle by utilizing an established obstacle model, and providing obstacle motion state information for dynamic obstacle identification and dynamic track prediction;
The unmanned inspection equipment non-blind area intelligent feedback control system specifically comprises:
type of graph:
(1) Original map: the original map format is rmap, which is used for making navigation maps and dense maps;
(2) Navigation map: the navigation map format is hmap, which is used for positioning and navigation of the robot;
(3) Dense map: the dense map format is txt or pcd, which are used for making a map and a global map;
(4) High-precision map: the high-precision map format is csv, which is used for robot path planning;
(5) Global map: the global map is a grid map in png format, which is used for global planning and local obstacle avoidance;
and (3) drawing:
(1) Survey planning;
(2) After the survey planning, data acquisition is carried out: a suitable acquisition platform and image acquisition equipment are adopted to collect data over the mapping area;
(3) Data processing: after data acquisition, the SLAMmapping software is opened, the raw data are imported, and data processing runs automatically;
(4) Data editing: after data processing, data editing is carried out, which performs closed-loop operation on the data and applies BA and graph optimization to the point cloud;
(5) Data export: after BA and graph optimization of the point cloud, the edited data are exported as a navigation map and a dense map; the dense map is a point cloud in a standard format and can be viewed with SLAMmapping or third-party point cloud software;
(6) High-precision map and global map production: after the data are exported, the high-precision map and the global map are made directly in SLAMmapping;
map editing
The original map data editing mainly performs closed-loop operation on the data and applies BA and graph optimization to the point cloud;
after the original map is edited, a navigation map and a dense map can be exported, wherein the dense map is a point cloud in a standard format that can be viewed with SLAMmapping or third-party point cloud software;
high-precision map making and editing
Open the dense map (pcd or txt format) produced in the previous step, make a high-precision map with SLAMmapping, and plan and edit the robot walking path on the point cloud;
global map making and editing: with SLAMmapping, the software automatically deletes ground points and noise points and generates the global map after virtual walls are added;
3D perception is integrated with semantic extraction and segmentation based on large-scale scene point cloud data, static obstacle perception based on a map and various dynamic obstacle perception are achieved, the distance, the azimuth and the speed of the obstacle are calculated in real time, and a safe and reliable perception obstacle avoidance scheme is provided for unmanned inspection equipment;
the unmanned inspection equipment navigation detects obstacles in real time with an improved Euclidean clustering algorithm: the point cloud data are preprocessed, and ground and non-ground points are separated by a ground gradient separation algorithm; obstacle clusters are detected in the non-ground point cloud with different clustering distance thresholds and distinguished with cuboid frame markers; the inherent adjacent-point spacing of each ground laser beam is compared with the actual distance between two adjacent points, and combined with the adjacent-point angle difference and point cloud clustering to extract the passable area; the obstacle detection and passable-area extraction results are then combined to check the passability of the area, whereas the traditional laser radar algorithm uses only the current single-frame radar point cloud for perception.
7. The unmanned inspection equipment non-blind zone intelligent feedback control system of claim 6, wherein the data acquisition module comprises:
the data acquisition module is used for acquiring the motion state and the surrounding environment of the unmanned inspection equipment by utilizing a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors.
8. A terminal, characterized in that the terminal is equipped with the unmanned inspection equipment non-blind area intelligent feedback control system according to claim 6.
CN202010993315.6A 2020-09-21 2020-09-21 Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal Active CN112214019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010993315.6A CN112214019B (en) 2020-09-21 2020-09-21 Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010993315.6A CN112214019B (en) 2020-09-21 2020-09-21 Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal

Publications (2)

Publication Number Publication Date
CN112214019A CN112214019A (en) 2021-01-12
CN112214019B true CN112214019B (en) 2023-05-23

Family

ID=74049009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010993315.6A Active CN112214019B (en) 2020-09-21 2020-09-21 Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal

Country Status (1)

Country Link
CN (1) CN112214019B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965077B (en) * 2021-02-09 2022-02-11 上海同陆云交通科技有限公司 Road inspection system and method based on vehicle-mounted laser radar
CN112965494A (en) * 2021-02-09 2021-06-15 武汉理工大学 Control system and method for pure electric automatic driving special vehicle in fixed area
CN113259855B (en) * 2021-06-16 2021-10-01 北京奇岱松科技有限公司 Indoor target operation track recognition system
CN113608248B (en) * 2021-06-25 2023-06-13 北京建筑大学 Beidou 5G fusion high-precision patrol personnel positioning method and related equipment
CN113566833A (en) * 2021-07-28 2021-10-29 上海工程技术大学 Multi-sensor fusion vehicle positioning method and system
CN113418522B (en) * 2021-08-25 2021-12-14 季华实验室 AGV path planning method, following method, device, equipment and storage medium
CN114474065A (en) * 2022-03-04 2022-05-13 美智纵横科技有限责任公司 Robot control method and device, robot and storage medium
CN117075640B (en) * 2023-10-12 2024-01-12 江苏智慧汽车研究院有限公司 Unmanned operation supervision system and method for artificial intelligent equipment based on Internet of things
CN117218743B (en) * 2023-11-07 2024-02-09 诺比侃人工智能科技(成都)股份有限公司 Intelligent inspection control method and system based on machine vision
CN117357880B (en) * 2023-12-07 2024-02-09 深圳失重魔方网络科技有限公司 Motion state identification method and system based on intelligent equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197986A1 (en) * 2015-06-12 2016-12-15 北京中飞艾维航空科技有限公司 High-precision autonomous obstacle-avoidance flying method for unmanned plane

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106153048A (en) * 2016-08-11 2016-11-23 广东技术师范学院 A kind of robot chamber inner position based on multisensor and Mapping System
CN108225302B (en) * 2017-12-27 2020-03-17 中国矿业大学 Petrochemical plant inspection robot positioning system and method
CN110567458A (en) * 2018-06-05 2019-12-13 北京三快在线科技有限公司 Robot positioning method and device and robot
CN109015639A (en) * 2018-08-15 2018-12-18 深圳市烽焌信息科技有限公司 The device and storage medium of a kind of control robot patrol
CN109257131A (en) * 2018-08-27 2019-01-22 广州吉欧光学科技有限公司 A kind of unmanned plane LIDAR clock synchronization system and method
CN109599945A (en) * 2018-11-30 2019-04-09 武汉大学 A kind of autonomous crusing robot cruising inspection system of wisdom power plant and method
CN109509320A (en) * 2018-12-25 2019-03-22 国网河南省电力公司平顶山供电公司 A kind of substation's fire alarm crusing robot
CN110264517A (en) * 2019-06-13 2019-09-20 上海理工大学 A kind of method and system determining current vehicle position information based on three-dimensional scene images
CN110689622B (en) * 2019-07-05 2021-08-27 电子科技大学 Synchronous positioning and composition method based on point cloud segmentation matching closed-loop correction
CN110646808A (en) * 2019-10-26 2020-01-03 东北林业大学 Forestry knapsack formula laser radar multisensor integrated system
CN111310765A (en) * 2020-02-14 2020-06-19 北京经纬恒润科技有限公司 Laser point cloud semantic segmentation method and device
CN111538032B (en) * 2020-05-19 2021-04-13 北京数字绿土科技有限公司 Time synchronization method and device based on independent drawing tracks of camera and laser radar

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197986A1 (en) * 2015-06-12 2016-12-15 北京中飞艾维航空科技有限公司 High-precision autonomous obstacle-avoidance flying method for unmanned plane

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jiapeng. Obstacle detection and moving target tracking with lidar based on spatio-temporal fusion. China Master's Theses Full-text Database, Information Science and Technology, 2020, full text. *
Huang Rulin. Research on key technologies of dynamic obstacle collision avoidance for driverless vehicles. China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, 2017, full text. *

Also Published As

Publication number Publication date
CN112214019A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN112214019B (en) Unmanned inspection equipment non-blind area intelligent feedback control system, method and terminal
CN108955702B (en) Lane-level map creation system based on three-dimensional laser and GPS inertial navigation system
CN110009761B (en) Automatic routing inspection path planning method and system for intelligent equipment
CN110411462B (en) GNSS/inertial navigation/lane line constraint/milemeter multi-source fusion method
CN111958592B (en) Image semantic analysis system and method for transformer substation inspection robot
CN106168805A (en) The method of robot autonomous walking based on cloud computing
CN111522339A (en) Automatic path planning and positioning method and device for inspection robot of livestock and poultry house
CN106441319A (en) System and method for generating lane-level navigation map of unmanned vehicle
CN107167139A (en) A kind of Intelligent Mobile Robot vision positioning air navigation aid and system
CN102591342A (en) Electronic-compass-based local path planning method for mowing robot
JP7245084B2 (en) Autonomous driving system
CN109443350B (en) Bluetooth/photoelectric/INS integrated navigation device and method based on neural network
CN107063242A (en) Have the positioning navigation device and robot of virtual wall function
CN114859972A (en) Inspection system and method for cooperative operation of aerial unmanned aerial vehicle and ground inspection robot
CN109491383A (en) Multirobot positions and builds drawing system and method
CN113537046A (en) Map lane marking method and system based on vehicle track big data detection
CN115435798A (en) Unmanned vehicle high-precision map road network generation system and method
CN116358515A (en) Map building and positioning method and device for low-speed unmanned system
Xiao et al. LIO-vehicle: A tightly-coupled vehicle dynamics extension of LiDAR inertial odometry
CN113311452B (en) Positioning method and system based on multiple sensors
Wang et al. Gr-fusion: Multi-sensor fusion slam for ground robots with high robustness and low drift
WO2023226574A1 (en) Scanning and observation system for coal-mine mechanical arm
Cai et al. Design of Multisensor Mobile Robot Vision Based on the RBPF-SLAM Algorithm
CN116352722A (en) Multi-sensor fused mine inspection rescue robot and control method thereof
CN115562076A (en) Simulation system, method and storage medium for unmanned mine car

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310007 No. 8 Huanglong Road, Zhejiang, Hangzhou

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: INFORMATION AND COMMUNICATION BRANCH, STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: State Grid Siji Location Service Co.,Ltd.

Applicant after: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Applicant after: INFORMATION COMMUNICATION BRANCH, STATE GRID JIBEI ELECTRIC POWER Co.

Applicant after: STATE GRID FUJIAN ELECTRIC POWER Co.,Ltd.

Applicant after: STATE GRID CORPORATION OF CHINA

Address before: 310007 No. 8 Huanglong Road, Zhejiang, Hangzhou

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant before: INFORMATION AND COMMUNICATION BRANCH, STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant before: STATE GRID SIJI SHENWANG POSITION SERVICE (BEIJING) Co.,Ltd.

Applicant before: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Applicant before: INFORMATION COMMUNICATION BRANCH, STATE GRID JIBEI ELECTRIC POWER Co.

Applicant before: STATE GRID FUJIAN ELECTRIC POWER Co.,Ltd.

Applicant before: STATE GRID CORPORATION OF CHINA

GR01 Patent grant