Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a non-blind area intelligent feedback control system, method and terminal for unmanned inspection equipment.
The invention is realized in such a way that the unmanned inspection equipment non-blind area intelligent feedback control method comprises the following steps:
Step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
Step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing Beidou positioning technology combined with an inertial navigation algorithm; and, based on the same acquired motion state and surrounding environment, performing seamless switching between indoor and outdoor positioning by utilizing the inertial navigation positioning technology;
Before the unmanned inspection equipment operates, a map space needs to be created for it, so that the equipment can judge its own position in real time based on the space information, solving the problem of 'where am I'. The high-precision map is constructed to provide basic map data for the intelligent driving system and help the unmanned inspection system 'see' the road clearly; the positioning data identified by each sensor are matched with the high-precision map data, so that the unmanned inspection position and the road condition the vehicle will face are determined;
Step three, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of the unmanned inspection equipment in a low-density information reading equipment environment.
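The indoor/outdoor hand-off described in steps one to three can be sketched as a simple fusion rule. The following Python is an illustrative sketch only; the `Pose` type, the fixed blend weight and the fix sources are assumptions for exposition, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

def blend(absolute, relative, w):
    """Weighted mix of an absolute fix and the inertial dead-reckoned pose."""
    return Pose(w * absolute.x + (1 - w) * relative.x,
                w * absolute.y + (1 - w) * relative.y)

def fuse(beidou_fix, rfid_fix, imu_pose, w=0.8):
    """Prefer Beidou outdoors and RFID indoors; fall back to pure inertial
    dead reckoning in the gap, so positioning never has a blind area."""
    if beidou_fix is not None:       # outdoor: Beidou + inertial navigation
        return blend(beidou_fix, imu_pose, w)
    if rfid_fix is not None:         # indoor: RFID + inertial navigation
        return blend(rfid_fix, imu_pose, w)
    return imu_pose                  # neither fix available: inertial only
```

In this toy form the "seamless switching" is just the priority order of the two absolute fixes, with inertial navigation always available as the continuity layer.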
Further, the unmanned inspection equipment non-blind area intelligent feedback control method further comprises the following steps: and controlling the motion trail of the unmanned inspection equipment from time and space dimensions by using a feedback control terminal of the integrated wireless communication module, and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level.
Furthermore, the unmanned inspection equipment non-blind area intelligent feedback control method achieves high integration, high time precision and synchronous multi-sensor data acquisition and control in the synchronous controller, comprising the following steps:
(1) Establishing a high-precision clock reference: a high-stability quartz crystal is taken as the clock source of the synchronous controller and is calibrated by combining the PPS pulse and NMEA data of a satellite positioning chip, so as to establish a high-precision time reference over the whole measurement time range;
(2) Realizing synchronization of multiple sensors: according to the characteristics of each sensor, active synchronization is adopted for sensors such as the inertial navigation unit and the camera, and time-service synchronization is adopted for the three-dimensional laser radar; the data of each sensor are acquired in a pure hardware mode and marked with an accurate time tag as the synchronous alignment mark, realizing high-precision synchronization of the raw data of the multiple sensors;
(3) Completing the schematic and PCB design and debugging of the whole hardware circuit of the synchronous controller: the multi-sensor data acquisition circuits for the inertial navigation unit, differential GPS, three-dimensional laser radar and the like are designed, together with the gigabit network, USB 3.0, USB 2.0, mSATA and TF card high-speed interfaces and storage circuits; the whole hardware circuit is debugged, realizing the acquisition, transmission and storage of multi-sensor data;
(4) Completing the programming and debugging of the FPGA programs for controlling each sensor and synchronously collecting data: taking an FPGA chip as the carrier, combined with the external hardware circuit, a high-precision time reference is established through a hardware description language; SPI interface control and inertial navigation data acquisition are designed; a UART interface is designed for synchronous timing of the three-dimensional laser radar, realizing instruction interaction between the FPGA and the TX2 and transmission of the synchronous data of the encoder and the camera; and a control program of the chip is designed to convert parallel data into USB serial data, realizing high-speed transmission of a large amount of raw sensor data between the FPGA and the TX2. The synchronous controller can provide high-precision synchronous raw data of multiple sensors such as satellite positioning, three-dimensional laser radar, inertial navigation unit and camera in real time.
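The PPS-based crystal calibration of step (1) can be illustrated with a toy computation: counter values latched on successive 1 Hz PPS edges reveal the crystal's drift, which is then used to correct timestamps. The 10 MHz nominal frequency and the counter values below are hypothetical, not taken from the design:

```python
NOMINAL_HZ = 10_000_000  # assumed 10 MHz quartz clock source

def drift_ppm(counts_at_pps):
    """Average fractional frequency error (parts per million) from counter
    snapshots latched on each GNSS PPS edge (nominally 1 s apart)."""
    intervals = [b - a for a, b in zip(counts_at_pps, counts_at_pps[1:])]
    mean = sum(intervals) / len(intervals)
    return (mean - NOMINAL_HZ) / NOMINAL_HZ * 1e6

def counts_to_seconds(count, ppm):
    """Convert a raw counter value to calibrated seconds for time-tagging
    sensor data against the PPS-disciplined reference."""
    return count / (NOMINAL_HZ * (1 + ppm * 1e-6))
```

This mirrors the stated design intent: the PPS edge supplies long-term stability, while the crystal supplies short-term stability between edges.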
Furthermore, before the unmanned inspection equipment runs, a map space is created for it, and its position is judged in real time based on the space information; the positioning data identified by each sensor are matched with the high-precision map data to determine the unmanned inspection position and the road condition the vehicle will face. The high-precision map serves as the basic map for navigation and positioning, is essential basic data for navigation of the unmanned inspection equipment, and assists in achieving high-precision positioning, planning, decision-making and control feedback. The method specifically comprises the following steps:
Map types:
(1) Original map: the original map format is rmap, which is used for making navigation maps and dense maps;
(2) Navigation map: the navigation map format is hmap, which is used for positioning and navigation of the robot;
(3) Dense map: the dense map format is txt or pcd, which is used for making high-precision map and global map;
(4) High-precision map: the high-precision map format is csv, used for path planning of the robot;
(5) Global map: the global map is a grid map in png format, used for global planning and local obstacle avoidance;
Mapping:
(1) Survey planning;
(2) Data acquisition: adopting a suitable acquisition platform and image acquisition equipment to collect data of the mapping area;
(3) Data processing: after data acquisition, SLAMmapping software is opened and the original data are imported for automatic data processing;
(4) Data editing: the data editing mainly comprises performing closed-loop operation on the data and performing BA and graph optimization on the point cloud;
(5) Data export: after editing, a navigation map and a dense map can be exported, wherein the dense map is a point cloud in a standard format and can be checked with SLAMmapping or third-party point cloud software;
(6) After the data are exported, the high-precision map and the global map can be made directly in SLAMmapping;
Map editing
The original map data editing mainly comprises performing closed-loop operation on the data and performing BA and graph optimization on the point cloud;
after the original map is edited, a navigation map and a dense map can be derived, wherein the dense map is a point cloud in a standard format and can be checked with SLAMmapping or third-party point cloud software;
high-precision map making and editing
Open the dense map produced in the previous step in pcd or txt format, make the high-precision map with SLAMmapping, and plan and edit the robot walking path on the point cloud;
Global map making and editing: with SLAMmapping, the software automatically deletes ground points and noise points; after a virtual wall is added, the global map is generated;
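The map types and production flow above form a small derivation graph, which can be sketched as follows. The file names are illustrative placeholders built from the formats named in the text (rmap, hmap, pcd, csv, png); the traversal simply enumerates which artifacts each source map yields:

```python
# Map-production pipeline from the text:
# original (rmap) -> navigation (hmap) + dense (pcd);
# dense (pcd)     -> high-precision (csv) + global grid map (png).
PIPELINE = {
    "raw.rmap":  ["nav.hmap", "dense.pcd"],
    "dense.pcd": ["hd.csv", "global.png"],
}

def products(source, graph=PIPELINE):
    """All map artifacts derivable (directly or transitively) from one source."""
    out = []
    for child in graph.get(source, []):
        out.append(child)
        out.extend(products(child, graph))
    return out
```

For example, `products("raw.rmap")` lists every downstream map, matching the order of the editing steps above.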
3D perception is integrated with semantic extraction and segmentation based on large-scale scene point cloud data to build the foundation of a three-dimensional semantic map; the high-precision three-dimensional point cloud semantic map is the core for realizing accurate path planning and scene reconstruction. Static obstacle perception and various kinds of dynamic obstacle perception based on the high-precision map are realized, the distance, direction and speed of each obstacle are calculated in real time, and a safe and reliable perception and obstacle avoidance scheme is provided for the unmanned inspection equipment.
The unmanned inspection equipment navigation utilizes an improved Euclidean clustering algorithm to detect obstacles in real time: the point cloud data are preprocessed, and ground and non-ground point clouds are separated through a ground gradient separation algorithm; obstacle cluster detection is performed on the non-ground point cloud according to different clustering distance thresholds, and the clusters are distinguished with cuboid bounding-box markers; the inherent adjacent-point spacing of each ground laser beam is compared with the actual distance between two adjacent points, combined with the adjacent-point angle difference and point cloud clustering, to realize extraction of the passable area; finally, the obstacle detection and passable-area extraction results are merged, and the passability of the passable area is detected. By contrast, the traditional laser radar algorithm only uses the current single-frame lidar point cloud for sensing.
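A toy version of the ground-separation and clustering stages may look as follows. This is a sketch under simplifying assumptions: a flat height threshold stands in for the ground gradient separation algorithm, and a single fixed clustering tolerance replaces the range-adaptive thresholds of the improved algorithm:

```python
import math

def split_ground(points, max_height=0.2):
    """Crude ground filter: points at or below max_height (m) are treated
    as ground; the ground gradient method in the text is more robust."""
    ground = [p for p in points if p[2] <= max_height]
    obstacles = [p for p in points if p[2] > max_height]
    return ground, obstacles

def euclidean_cluster(points, tol=0.5):
    """Naive O(n^2) Euclidean clustering: a point joins the first cluster
    containing a neighbor within tol, else starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= tol for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

Each resulting cluster would then be wrapped in a cuboid bounding box for marking, as described above.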
Further, the unmanned inspection equipment non-blind area intelligent feedback control method further comprises the following steps:
(1) Multi-frame point cloud real-time fusion registration is realized at the data layer: feature points are extracted from the original point clouds according to curvature, a cost function is constructed from the point-to-line and point-to-plane distances between feature points, and the pose change between front and rear frames is then estimated from coarse to fine through inter-frame registration and map registration;
(2) Obstacles are detected with the registered dense point cloud: using the pose obtained by point cloud registration, the historical frame point clouds are converted into the current frame coordinate system to obtain a multi-frame point cloud set; the set is projected onto a grid map, and whether each grid is an obstacle is judged according to the height distribution characteristics of its points; compared with single-frame point cloud detection, the detection distance of the space-time fused multi-frame point cloud is farther;
(3) Contour features and laser pulse reflection intensity features of an obstacle are first extracted from the data of the three-dimensional laser radar and the multi-layer laser radar respectively; the extracted features are then fused to model the dynamic obstacle; matching and tracking of the dynamic obstacle are completed by constructing a similarity matrix; and the motion state of the dynamic obstacle is estimated with the established obstacle model, providing obstacle motion state information for dynamic obstacle identification and dynamic trajectory prediction.
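Step (2), judging obstacles from the height distribution of accumulated points per grid cell, can be sketched as follows. The cell size and height-span threshold are illustrative values, and the frames are assumed to be already transformed into the current-frame coordinate system by the registration pose:

```python
from collections import defaultdict

def grid_obstacles(frames, cell=0.5, height_span=0.3):
    """frames: list of point lists (x, y, z), already registered into the
    current frame. A cell is an obstacle when the vertical spread of its
    accumulated points exceeds height_span (m)."""
    cells = defaultdict(list)
    for frame in frames:            # accumulate the multi-frame point set
        for x, y, z in frame:
            cells[(int(x // cell), int(y // cell))].append(z)
    return {c for c, zs in cells.items()
            if max(zs) - min(zs) > height_span}
```

Because several registered frames feed each cell, sparse returns from distant obstacles accumulate enough points to be detected, which is why the fused multi-frame detection range exceeds the single-frame case.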
It is a further object of the present invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
Step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
Step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing Beidou positioning technology combined with an inertial navigation algorithm; and, based on the same acquired motion state and surrounding environment, performing seamless switching between indoor and outdoor positioning by utilizing the inertial navigation positioning technology;
Step three, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of the unmanned inspection equipment in a low-density information reading equipment environment.
Another object of the present invention is to provide a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
Step one, acquiring the motion state and the surrounding environment of unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
Step two, based on the acquired motion state and surrounding environment of the unmanned inspection equipment, carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing Beidou positioning technology combined with an inertial navigation algorithm; and, based on the same acquired motion state and surrounding environment, performing seamless switching between indoor and outdoor positioning by utilizing the inertial navigation positioning technology;
Step three, fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of the unmanned inspection equipment in a low-density information reading equipment environment.
Another object of the present invention is to provide an unmanned inspection equipment non-blind area intelligent feedback control system for operating the unmanned inspection equipment non-blind area intelligent feedback control method, the unmanned inspection equipment non-blind area intelligent feedback control system comprising:
the outdoor positioning system, which comprises a Beidou positioning module, a data acquisition module and an inertial navigation module; it is used for acquiring the motion state and the surrounding environment of the unmanned inspection equipment in real time, and for carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment based on Beidou positioning and an inertial navigation algorithm;
The indoor and outdoor positioning switching system is used for performing seamless switching between indoor and outdoor positioning based on an inertial navigation positioning technology;
the indoor positioning system, which is used for fusing a non-contact radio frequency positioning technology and an inertial navigation technology and performing non-blind area navigation of the unmanned inspection equipment in a low-density information reading equipment environment;
the feedback control terminal is integrated with the wireless communication module and is used for controlling the motion trail of the unmanned inspection equipment from time and space dimensions and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level;
The unmanned inspection equipment non-blind area intelligent feedback control system further comprises:
the upper system is a background remote control system, and can realize establishment, execution, monitoring, stopping and emergency control of tasks;
the indoor and outdoor integrated positioning navigation system, which comprises a high-precision map module, a multi-sensor fusion sensing system, a planning system and a decision-making system; the control comprises transverse and longitudinal control, wherein the chassis is controlled through its interface so as to control the unmanned mobile equipment and perform feedback adjustment.
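As a hedged illustration of the feedback-adjustment idea above (not the terminal's actual control law), a minimal proportional-integral loop on cross-track error might look like the following; the gains and time step are placeholder values:

```python
class TrajectoryController:
    """Toy PI controller nudging the device back toward its planned
    trajectory; real transverse/longitudinal control is richer."""

    def __init__(self, kp=0.6, ki=0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def correction(self, planned, measured, dt=0.1):
        """Cross-track error (m) -> correction command; the integral term
        removes steady-state offset, supporting the meter-level goal."""
        error = planned - measured
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral
```

In the described architecture, such a correction would be sent to the chassis through its control interface, closing the feedback loop in both time and space.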
Further, the data acquisition module includes:
the data acquisition module is used for acquiring the motion state and the surrounding environment of the unmanned inspection equipment in real time by utilizing a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors.
The invention further aims to provide a terminal equipped with the unmanned inspection equipment non-blind area intelligent feedback control system.
By combining all the technical schemes, the invention has the advantages and positive effects that:
The project research adopts a 3D fusion navigation mode, which greatly improves on existing approaches, has extremely strong stability and applicability, provides a new-generation navigation positioning and control technology for the unmanned inspection terminal, and can be widely applied in electric-power unmanned inspection systems.
The invention can realize non-blind area outdoor and indoor positioning of the unmanned inspection equipment while realizing seamless switching between outdoor and indoor positioning. Using the intelligent feedback control terminal, the invention realizes indoor and outdoor non-blind area high-precision intelligent feedback control of the unmanned inspection equipment, controls the motion trail of the unmanned inspection equipment in the time and space dimensions, keeps the motion trail control precision of the unmanned inspection equipment within the meter level, and achieves an operation delay time precision within 100 ns.
According to the invention, under an outdoor environment, based on Beidou positioning, sensors such as a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope and a three-dimensional laser radar are integrated, the motion state and the surrounding environment of unmanned inspection equipment are obtained in real time, an inertial navigation algorithm is developed, and a non-blind area navigation positioning terminal module integrating Beidou high-precision positioning and inertial navigation is formed. In the indoor environment, the high integration of the non-contact radio frequency positioning technology and the inertial navigation technology is studied, and the non-blind area navigation under the environment of the low-density information reading equipment is realized. Meanwhile, based on an inertial navigation positioning technology, seamless switching between indoor positioning and outdoor positioning is researched, and high-precision positioning of indoor and outdoor automatic connection is realized. Based on the technologies, a wireless communication module is integrated, an indoor and outdoor non-blind area high-precision intelligent feedback control terminal which can be used for the unmanned inspection equipment is developed, the motion trail of the unmanned inspection equipment is controlled from time and space dimensions, the motion trail control precision of the unmanned inspection equipment is ensured to be kept within a meter level, and the operation delay time precision is ensured to be within 100 ns.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides an intelligent feedback control system without blind areas for unmanned inspection equipment, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the non-blind area intelligent feedback control system of the unmanned inspection equipment provided by the embodiment of the invention comprises:
the outdoor positioning system 1, which comprises a Beidou positioning module, a data acquisition module and an inertial navigation module; it is used for acquiring the motion state and the surrounding environment of the unmanned inspection equipment in real time, and for carrying out non-blind area outdoor navigation positioning of the unmanned inspection equipment based on Beidou positioning and an inertial navigation algorithm.
The indoor and outdoor positioning switching system 2 is used for performing seamless switching between indoor and outdoor positioning based on an inertial navigation positioning technology;
the indoor positioning system 3, which is used for fusing a non-contact radio frequency positioning technology and an inertial navigation technology and performing non-blind area navigation of the unmanned inspection equipment in a low-density information reading equipment environment;
and the feedback control terminal 4 is integrated with a wireless communication module and is used for controlling the motion trail of the unmanned inspection equipment from time and space dimensions and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level.
The unmanned inspection equipment non-blind area intelligent feedback control system further comprises:
the upper system is a background remote control system, and can realize establishment, execution, monitoring, stopping and emergency control of tasks;
the indoor and outdoor integrated positioning navigation system, which comprises a high-precision map module, a multi-sensor fusion sensing system, a planning system and a decision-making system; the control comprises transverse and longitudinal control, wherein the chassis is controlled through its interface so as to control the unmanned mobile equipment and perform feedback adjustment.
The data acquisition module provided by the embodiment of the invention comprises:
the data acquisition module is used for acquiring the motion state and the surrounding environment of the unmanned inspection equipment in real time by utilizing a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors.
As shown in fig. 2, the non-blind area intelligent feedback control method for the unmanned inspection equipment provided by the embodiment of the invention comprises the following steps:
S101: acquiring the motion state and the surrounding environment of the unmanned inspection equipment in real time by using a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope, a three-dimensional laser radar and other sensors;
S102: based on the acquired motion state and surrounding environment of the unmanned inspection equipment, performing non-blind area outdoor navigation positioning of the unmanned inspection equipment by utilizing Beidou positioning technology combined with an inertial navigation algorithm; and, based on the same acquired motion state and surrounding environment, performing seamless switching between indoor and outdoor positioning by utilizing the inertial navigation positioning technology.
Before the unmanned inspection equipment operates, a map space needs to be created for it, so that it can judge its own position in real time on the basis of the space information. The high-precision map is constructed to provide basic map data for the intelligent driving system and help the unmanned inspection system 'see' the road clearly; the positioning data identified by each sensor are matched with the high-precision map data, so that the unmanned inspection position and the road condition the vehicle will face are determined.
S103: fusing a non-contact radio frequency positioning technology and an inertial navigation technology, and performing non-blind area navigation of the unmanned inspection equipment in the low-density information reading equipment environment.
The non-blind area intelligent feedback control method of the unmanned inspection equipment provided by the embodiment of the invention further comprises the following steps: and controlling the motion trail of the unmanned inspection equipment from time and space dimensions by using a feedback control terminal of the integrated wireless communication module, and controlling the motion trail control precision of the unmanned inspection equipment to be kept within a meter level.
The technical scheme of the invention is further described below with reference to specific embodiments.
Example 1:
under outdoor environment, based on Beidou positioning, sensors such as a three-dimensional acceleration sensor, a three-axis digital compass, a three-axis gyroscope and a three-dimensional laser radar are integrated, the motion state and the surrounding environment of unmanned inspection equipment are obtained in real time, an inertial navigation algorithm is developed, and a non-blind area navigation positioning terminal module integrating Beidou high-precision positioning and inertial navigation is formed. In the indoor environment, the high integration of the non-contact radio frequency positioning technology and the inertial navigation technology is studied, and the non-blind area navigation under the environment of the low-density information reading equipment is realized. Meanwhile, based on an inertial navigation positioning technology, seamless switching between indoor positioning and outdoor positioning is researched, and high-precision positioning of indoor and outdoor automatic connection is realized. Based on the technologies, a wireless communication module is integrated, an indoor and outdoor non-blind area high-precision intelligent feedback control terminal which can be used for the unmanned inspection equipment is developed, the motion trail of the unmanned inspection equipment is controlled from time and space dimensions, the motion trail control precision of the unmanned inspection equipment is ensured to be kept within a meter level, and the operation delay time precision is ensured to be within 100 ns.
As shown in FIG. 3, the integrated navigation controller takes an FPGA as the core and integrates synchronous control of multiple sensor data. The high integration, high time precision and synchronous multi-sensor data acquisition and control of the synchronous controller comprise:
(1) A high-precision clock reference is established. A high-stability quartz crystal is used as the clock source of the synchronous controller and is calibrated by combining the PPS pulse and NMEA data of the satellite positioning chip; the high long-term stability of the satellite positioning chip's PPS pulse and the high short-term stability of the quartz crystal are thus fully utilized to establish a high-precision time reference over the whole measurement time range.
(2) The synchronization of multiple sensors is realized. According to the characteristics of each sensor, active synchronization is adopted for the sensors such as an inertial navigation unit and a camera, time service synchronization is adopted for the three-dimensional laser radar, data of each sensor is collected in a pure hardware mode, and accurate time labels are marked as synchronous alignment marks, so that high-precision synchronization of original data of multiple sensors is realized.
(3) The schematic and PCB design and debugging of the whole hardware circuit of the synchronous controller are completed. The multi-sensor data acquisition circuits for the inertial navigation unit, differential GPS, three-dimensional laser radar and the like are designed, together with the gigabit network, USB 3.0, USB 2.0, mSATA and TF card high-speed interfaces and storage circuits; the whole hardware circuit is debugged, realizing the acquisition, transmission and storage of the multi-sensor data. (4) The programming and debugging of the FPGA programs for controlling each sensor and synchronously collecting data are completed. An FPGA chip is taken as the carrier, combined with the external hardware circuit, and a high-precision time reference is established through a hardware description language; SPI interface control and inertial navigation data acquisition are designed; a UART interface is designed for synchronous timing of the three-dimensional laser radar, realizing instruction interaction between the FPGA and the TX2 and transmission of the synchronous data of the encoder and the camera; a control program of the chip is designed to convert parallel data into USB serial data, realizing high-speed transmission of a large amount of raw sensor data between the FPGA and the TX2. The synchronous controller can provide high-precision synchronous raw data of multiple sensors such as satellite positioning, three-dimensional laser radar, inertial navigation unit and camera in real time. The indoor and outdoor positioning accuracy of the synchronous controller can reach 10 cm. The synchronous controller also has the advantages of small volume, light weight, low manufacturing cost, high integration level, high data processing speed and strong expansibility.
Indoor and outdoor high-precision map
As shown in fig. 4, before the unmanned inspection equipment operates, a map space needs to be created for it, so that the equipment can judge its own position in real time based on the space information, solving the problem of 'where am I'. The high-precision map is constructed to provide basic map data for the intelligent driving system and help the unmanned inspection system 'see' the road clearly; the positioning data identified by each sensor are matched with the high-precision map data, so that the unmanned inspection position and the road condition the vehicle will face are determined. The high-precision map serves as the basic map for navigation and positioning, is essential basic data for navigation of the unmanned inspection equipment, and assists in achieving high-precision positioning, planning, decision-making, control feedback and the like.
Types of graphs
(1) Original map: the original map format is rmap, used to make navigation maps and dense maps.
(2) Navigation map: the navigation map format is hmap, which is used for positioning and navigation of the robot.
(3) Dense map: the dense map format is txt or pcd, which is used for making high-precision map and global map.
(4) High precision map: the high-precision map format is csv, and the path planning of the robot is used.
(5) Global map: the global map is in grid map format of png and is used for global planning and local obstacle avoidance.
The main steps of the drawing construction
(1) Survey planning: before mapping, the basic conditions of the mapping area should be understood and the mapping route planned in advance, in particular by appropriately adding closed-loop routes.
(2) Data acquisition: a suitable acquisition platform and image acquisition equipment are adopted to collect data of the mapping area. A constant speed should be kept during collection, turns should not be too fast, and the equipment should be kept stable.
(3) Data processing: after data acquisition, SLAMmapping software is opened and the original data are imported for automatic data processing.
4) And data editing, namely performing closed-loop operation on the data, and performing BA and graph optimization on the point cloud.
(5) And after the data is obtained and edited, the navigation map and the dense map can be obtained. The dense map is a point cloud in a standard format and can be checked by SLAMmapping or third party point cloud software.
(6) And after the data is exported, the high-precision map and the global map can be directly manufactured on SLAMmapping.
Map editing
Editing the original map data mainly consists of performing closed-loop operations on the data and applying BA and graph optimization to the point cloud.
After the original map is edited, the navigation map and the dense map can be exported. The dense map is a point cloud in a standard format and can be inspected with SLAMmapping or third-party point cloud software; the main steps are shown in fig. 5.
High-precision map making and editing
Open the dense map (pcd or txt format) produced in the previous step and perform high-precision map making with SLAMmapping. The robot walking path is planned and edited on the point cloud, shown as purple lines.
Global map making and editing
Using SLAMmapping, the software automatically deletes ground points and noise points. After a virtual wall is added, the global map can be generated.
3D perception
Based on large-scale scene point cloud data, semantic extraction and segmentation are integrated to build the basis of a three-dimensional semantic map; the high-precision three-dimensional point cloud semantic map is the core for realizing accurate path planning and scene reconstruction. The method realizes static obstacle sensing and various kinds of dynamic obstacle sensing based on the high-precision map, calculates the distance, azimuth, speed, etc. of each obstacle in real time, and provides a safe and reliable perception and obstacle avoidance scheme for the unmanned inspection equipment.
The 3D laser radar has a wide detection range and high range accuracy, and is widely used for obstacle detection and target tracking in environmental perception tasks. Unmanned inspection equipment navigation needs to accurately detect and track dynamic obstacles, estimate their motion state, and avoid potential collisions with them. The method performs real-time obstacle detection with an improved Euclidean clustering algorithm: the point cloud data are preprocessed, and ground and non-ground point clouds are separated with a ground gradient separation algorithm; obstacle cluster detection is performed on the non-ground point cloud with cluster distance thresholds that vary by range, and clusters are distinguished with cuboid bounding-box markers; the inherent adjacent-point spacing of each ground laser beam is compared with the actual distance between two adjacent points and, combined with the angle difference of adjacent points and the point cloud clustering, the passable area is extracted; finally, the obstacle detection and passable-area extraction results are merged, and the passability of the passable area is checked. The traditional laser radar algorithm uses only the current single-frame radar point cloud for sensing. Because the laser radar point cloud is sparse, the single-frame sensing method detects remote obstacles poorly. With an effective space-time fusion method, the perception performance of the laser radar can be greatly improved.
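The ground separation and Euclidean clustering steps above can be sketched as follows. This is a simplified illustration, not the improved algorithm itself: the ground split here is a flat-ground height threshold standing in for the gradient-based separation, the clustering uses a single fixed radius rather than range-dependent thresholds, and all names and parameters are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_ground(points, z_thresh=0.2):
    """Crude ground/non-ground split by height (flat-ground assumption;
    the described method uses a ground gradient separation algorithm)."""
    ground_mask = points[:, 2] < z_thresh
    return points[ground_mask], points[~ground_mask]

def euclidean_cluster(points, radius=0.5, min_pts=3):
    """Region-growing Euclidean clustering via a KD-tree (BFS flood fill).
    Clusters smaller than min_pts are marked as noise (-2)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        frontier, members = [i], []
        labels[i] = cluster_id
        while frontier:
            j = frontier.pop()
            members.append(j)
            for k in tree.query_ball_point(points[j], radius):
                if labels[k] == -1:
                    labels[k] = cluster_id
                    frontier.append(k)
        if len(members) < min_pts:
            labels[np.array(members)] = -2
        else:
            cluster_id += 1
    return labels, cluster_id

# Two ground points plus two small obstacle blobs well apart from each other.
cloud = np.array([[0, 0, 0.05], [1, 0, 0.05],
                  [0, 0, 1.0], [0.1, 0, 1.1], [0, 0.1, 1.0],
                  [5, 5, 1.0], [5.1, 5, 1.1], [5, 5.1, 1.0]])
ground, nonground = segment_ground(cloud)
labels, n_clusters = euclidean_cluster(nonground)
```

Each resulting cluster would then be wrapped in a cuboid bounding box, as the description states, for marking and tracking.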
The following work is mainly carried out: 1. Realize multi-frame point cloud real-time fusion and registration at the data layer. Feature points are extracted from the original point cloud according to curvature, a cost function is constructed from the point-line and point-plane distances between feature points, and the pose change between consecutive frames is then estimated from coarse to fine through inter-frame registration and map registration. Compared with the traditional method, fewer laser points are involved in the calculation, so both registration quality and calculation speed are achieved. 2. Detect obstacles with the registered, densified point cloud. Using the pose obtained by point cloud registration, historical frame point clouds can be converted into the current frame coordinate system to obtain a multi-frame point cloud set. The set is then projected onto a grid map, and each grid cell is judged to be an obstacle or not according to the height distribution of its points. Compared with single-frame detection, the detection distance of the space-time-fused multi-frame point cloud is farther. 3. Improve the accuracy and speed of dynamic obstacle detection and tracking with a detection and tracking method based on multi-feature fusion.
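Step 2 above (converting historical frames into the current coordinate system and judging obstacles by height distribution) can be sketched as below. The 2D pose representation, cell size, and height-spread threshold are illustrative assumptions; the registration that produces the poses is outside this sketch:

```python
import numpy as np

def transform(points, pose):
    """Apply a 2D pose (x, y, yaw) to Nx3 points (z unchanged)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ R.T + np.array([x, y, 0.0])

def occupancy_from_frames(frames, poses, cell=0.5, height_spread=0.3):
    """Fuse several frames into the current frame and mark a grid cell as an
    obstacle when the fused points in it span more than height_spread in z."""
    fused = np.vstack([transform(p, pose) for p, pose in zip(frames, poses)])
    cells = {}
    for px, py, pz in fused:
        key = (int(px // cell), int(py // cell))
        zmin, zmax = cells.get(key, (pz, pz))
        cells[key] = (min(zmin, pz), max(zmax, pz))
    return {k for k, (zmin, zmax) in cells.items() if zmax - zmin > height_spread}

# Two frames, already in the current coordinate system (identity poses):
# a tall column near (1.1, 1.1) and a flat ground point near (3, 3).
frames = [np.array([[1.1, 1.1, 0.0], [1.1, 1.1, 0.8]]),
          np.array([[1.2, 1.2, 0.4], [3.0, 3.0, 0.0]])]
poses = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
obstacles = occupancy_from_frames(frames, poses)
```

Because points from several registered frames accumulate in each cell, cells covering remote, sparsely-hit obstacles collect more evidence than any single frame provides, which is the motivation given for the space-time fusion.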
First, the outline features and laser pulse reflection intensity features of each obstacle are extracted from the data of the three-dimensional laser radar and the multi-layer laser radar respectively; the extracted features are then fused to model the dynamic obstacle. Matching and tracking of dynamic obstacles is completed by constructing a similarity matrix, the motion state of each dynamic obstacle is estimated with the established obstacle model, and obstacle motion state information is provided for dynamic obstacle identification and dynamic trajectory prediction.
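The similarity-matrix matching step can be sketched as an assignment problem over fused feature vectors. The feature layout `[x, y, intensity]`, the distance-based cost, and the gating threshold are all assumptions for illustration; the description does not specify how the similarity matrix is built:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_obstacles(prev_feats, curr_feats, gate=2.0):
    """Match tracked obstacles between frames: build a cost matrix from the
    fused features and solve the assignment problem (Hungarian algorithm).
    Matches whose feature distance exceeds `gate` are discarded."""
    cost = np.linalg.norm(prev_feats[:, None, :] - curr_feats[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# Two obstacles from the previous frame and the same two, reordered and
# slightly moved, in the current frame.
prev_feats = np.array([[0.0, 0.0, 10.0], [5.0, 5.0, 20.0]])
curr_feats = np.array([[5.1, 5.0, 20.0], [0.2, 0.0, 10.0]])
matches = match_obstacles(prev_feats, curr_feats)
```

Each accepted match would feed the obstacle model (e.g. a motion filter) that estimates the obstacle's motion state for trajectory prediction.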
Planning decisions
Global path planning
The PathPlanning module corresponds to global path planning; the algorithm in this module can be conveniently replaced. Two modes are used by default: 1. manually drawing the global path; 2. planning the global path based on the searched optimal path.
Local path planning
The Trajectory Generation & Modification module in the figure corresponds to local path planning; the algorithm in this module can be conveniently replaced. Optimization-based local path planning is used by default.
Algorithm principle: first, the global planned path is interpolated to obtain initial local planning track points; then a nonlinear optimization problem is constructed, taking into account the distance constraints between track points and obstacles, the kinematic and dynamic constraints of the chassis, and the minimum-time constraint, and the optimized local planning track points are solved; finally, the chassis control instruction is calculated with an LQR control algorithm.
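The final LQR step can be sketched as follows for a toy lateral-error model of the chassis. The state definition, time step, and weight matrices are illustrative assumptions; the actual chassis model and weights are not given in the description:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete-time LQR gain: u = -K x minimizes sum(x'Qx + u'Ru)."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K

# Toy model: state = [lateral error, heading error], control = yaw-rate, dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.diag([1.0, 0.1]), R=np.array([[0.1]]))
# The closed-loop matrix A - B K should have all eigenvalue magnitudes < 1.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

At each control cycle, the tracking error relative to the optimized local track is formed into the state vector and multiplied by -K to obtain the chassis control instruction.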
Global path planning
The core of the global path planning algorithm is a global path search based on the existing road-network topology. First, the multi-source shortest path problem is solved over the whole road network to obtain the optimal path and the shortest distance between any two path points. Then, a TSP (travelling salesman problem) model is built from the issued task points, and solving it yields the inspection route.
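A minimal sketch of this two-stage scheme: Floyd-Warshall for the multi-source shortest paths, then a nearest-neighbour heuristic standing in for the TSP solver (the description does not say which TSP method is used, so the heuristic here is an assumption):

```python
import numpy as np

def floyd_warshall(adj):
    """All-pairs shortest path distances over the road-network adjacency
    matrix (np.inf marks missing edges)."""
    d = adj.astype(float).copy()
    n = len(d)
    for k in range(n):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def greedy_tsp(dist, start=0):
    """Nearest-neighbour tour over the shortest-path distances; a simple
    stand-in for a proper TSP solver."""
    n = len(dist)
    route, visited = [start], {start}
    while len(route) < n:
        cur = route[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[cur, j])
        route.append(nxt)
        visited.add(nxt)
    return route

# A 4-node road network; node 0 and node 2 are not directly connected.
INF = np.inf
adj = np.array([[0, 1, INF, 4],
                [1, 0, 2, INF],
                [INF, 2, 0, 1],
                [4, INF, 1, 0]])
d = floyd_warshall(adj)
route = greedy_tsp(d)
```

Running the TSP over shortest-path distances rather than raw edges is what lets the inspection route pass through intermediate road-network nodes that are not task points.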
Local path planning
The core of the local path planning algorithm is 'local path planning based on a conjugate gradient optimization algorithm'. First, the global planned path is interpolated to obtain initial local planning track points; then, taking the lateral and longitudinal change rates of the track as the optimization targets, the track is optimized with a conjugate gradient algorithm to obtain the final local path; finally, the local track is tracked with an MPC control algorithm to obtain the chassis control instruction.
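The conjugate-gradient smoothing step can be sketched as below, penalizing the track's change rate via second differences while keeping points near the interpolated global-path samples. The cost weights and the fidelity term are assumptions; the description only names the optimization targets and the CG method:

```python
import numpy as np
from scipy.optimize import minimize

def smooth_path(path, w_smooth=1.0, w_fidelity=0.1):
    """Smooth interpolated track points with a conjugate-gradient optimizer.
    The cost penalizes second differences (change rate of the track) plus a
    small fidelity term tying the result to the initial samples."""
    ref = path.copy()
    def cost(flat):
        p = flat.reshape(ref.shape)
        d2 = p[2:] - 2 * p[1:-1] + p[:-2]   # discrete curvature proxy
        return w_smooth * np.sum(d2 ** 2) + w_fidelity * np.sum((p - ref) ** 2)
    res = minimize(cost, ref.ravel(), method='CG')
    return res.x.reshape(ref.shape)

def bending_energy(p):
    d2 = p[2:] - 2 * p[1:-1] + p[:-2]
    return float(np.sum(d2 ** 2))

# A deliberately jagged initial local track.
path = np.array([[0, 0], [1, 1], [2, 0], [3, 1], [4, 0]], dtype=float)
smoothed = smooth_path(path)
```

The smoothed track points are then handed to the MPC tracker, which is not sketched here.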
Indoor and outdoor navigation intelligent control
Master side communication
Communication with the main control end is realized through the network port; real-time operation data of the navigation system and the chassis are collected and fed back to the main control end.
1. Summary of the design
The system adopts TCP/IP protocol to realize network communication with the main control system, and the basic definition of the protocol stack is as follows:
1) The packet head is 6 bytes in total: the first 2 bytes are reserved bits, and the last 4 bytes are an integer whose value is the message length;
2) The packet body is the communication content between the systems, a character string in JSON format.
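The header layout defined above can be sketched as follows. The big-endian byte order of the 4-byte length field is an assumption (the protocol text does not specify it), and the function names are illustrative:

```python
import json
import struct

RESERVED = b'\x00\x00'   # 2 reserved header bytes

def pack_message(payload: dict) -> bytes:
    """Frame a JSON message: 2 reserved bytes + 4-byte body length + body."""
    body = json.dumps(payload).encode('utf-8')
    return RESERVED + struct.pack('>I', len(body)) + body

def unpack_message(frame: bytes) -> dict:
    """Parse one frame back into a dict, validating the length field."""
    (length,) = struct.unpack('>I', frame[2:6])
    body = frame[6:6 + length]
    if len(body) != length:
        raise ValueError('truncated frame')
    return json.loads(body.decode('utf-8'))

msg = {"cmd": "nav_start", "id": 1}
frame = pack_message(msg)
```

On a TCP stream, the receiver first reads exactly 6 header bytes, then reads `length` further bytes, which is why the length field precedes the JSON body.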
2. Key communication instruction
TCP connection registration
Forward in the transmit direction
Robot coordinate information acquisition
Mission route planning
Transmitting point location advance
Run status message
Navigation real-time message
Navigation service start/stop
Chassis communication
The chassis interfaces with the central control unit via serial communication. Chassis kinematic calculation and adaptation of the related sensors are realized: the whole-machine speed is resolved into four wheel speeds, and the four wheel speeds are reduced back to the whole-machine speed. Development, adaptation and debugging of the existing chassis motion control drive are realized according to the existing communication protocol.
Control of the robot chassis is realized through CAN bus communication. In the present communication protocol, the structure of the message frames strictly complies with the master/slave reply mechanism. The mobile robot control core module is set as the master, and the other modules are set as slaves.
The message frame sent by the host has the structure: 2 start bytes, 1 target node ID byte, 1 data length byte, 1 function module code byte, 1 method code byte, N parameter bytes, and 1 checksum byte. The response frame fed back by the slave is: 2 start bytes, 1 local node ID byte, 1 data length byte, 1 byte for the accessed function module code, 1 byte for the accessed method code, 1 status byte, N parameter bytes, and 1 checksum byte. The start bit consists of 2 bytes and informs the target node that data transfer has begun. In both the host and the slave, the two start bytes are fixed: 0x55 followed by 0xAA;
The address field byte carries the address information of a message frame. In a frame sent by the host, the address field explicitly indicates which slave should accept the message; in frames sent by the slaves, the address field lets the master distinguish the source of the information. The address field is 1 byte.
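The host frame layout above can be sketched as follows. Two points are assumptions because the protocol text does not define them: the checksum is taken here as a modulo-256 sum over the bytes between the start bytes and the checksum, and the data-length byte is taken to count only the N parameter bytes:

```python
START = bytes([0x55, 0xAA])   # fixed start bytes, per the protocol

def checksum(data: bytes) -> int:
    """Single-byte checksum; modulo-256 sum is an assumption, since the
    protocol text does not specify the checksum algorithm."""
    return sum(data) % 256

def build_host_frame(node_id: int, module: int, method: int, params: bytes) -> bytes:
    """Host -> slave frame: start(2) | node id(1) | length(1) | module(1)
    | method(1) | params(N) | checksum(1)."""
    body = bytes([node_id, len(params), module, method]) + params
    return START + body + bytes([checksum(body)])

def parse_host_frame(frame: bytes):
    """Validate start bytes and checksum, then split out the fields."""
    if frame[:2] != START:
        raise ValueError('bad start bytes')
    body, chk = frame[2:-1], frame[-1]
    if checksum(body) != chk:
        raise ValueError('checksum mismatch')
    node_id, length, module, method = body[0], body[1], body[2], body[3]
    return node_id, module, method, body[4:4 + length]

frame = build_host_frame(0x01, 0x02, 0x03, b'\x10\x20')
parsed = parse_host_frame(frame)
```

A slave response frame would be built analogously, with the extra status byte inserted after the method code.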
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those of ordinary skill in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules may be implemented by hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., as well as software executed by various types of processors, or by a combination of the above hardware circuitry and software, such as firmware.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.