CN114474061A - Robot multi-sensor fusion positioning navigation system and method based on cloud service - Google Patents

Robot multi-sensor fusion positioning navigation system and method based on cloud service

Info

Publication number
CN114474061A
CN114474061A
Authority
CN
China
Prior art keywords
robot
map
information
algorithm
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210148066.XA
Other languages
Chinese (zh)
Other versions
CN114474061B (en)
Inventor
何丽
刘钰嵩
齐继超
刘志强
李顺
李可新
陈耀华
王宏伟
郑威强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang University
Original Assignee
Xinjiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinjiang University filed Critical Xinjiang University
Priority to CN202210148066.XA priority Critical patent/CN114474061B/en
Publication of CN114474061A publication Critical patent/CN114474061A/en
Application granted granted Critical
Publication of CN114474061B publication Critical patent/CN114474061B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • B25J5/007Manipulators mounted on wheels or on carriages mounted on wheels
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Abstract

The invention discloses a cloud-service-based robot multi-sensor fusion positioning and navigation system and method, comprising a robot and a navigation cloud service platform. The robot comprises an indoor mobile robot chassis, a vehicle-mounted computer, a three-dimensional laser radar, a depth camera and a liquid crystal display; the indoor mobile robot chassis comprises omnidirectional wheels, a housing, shock absorbers, a profile frame, mounting plates, driving motors, a drive control board and a vehicle-mounted battery; the navigation cloud service platform comprises a robot end, a cloud server, a client and network equipment. On the basis of sensing the external environment and completing map construction, the robot autonomously senses and discriminates environmental features in real time, calibrates its current position, updates the local map, completes optimal path planning that satisfies multiple evaluation indexes such as social constraints and task constraints according to the environmental information, and controls the robot actuator to reach the target position according to the planning result.

Description

Robot multi-sensor fusion positioning navigation system and method based on cloud service
Technical Field
The invention relates to the technical field of indoor service robots, in particular to a robot multi-sensor fusion positioning navigation system and method based on cloud service.
Background
In recent years, with the continuous development and application of artificial intelligence and computer technology, many advanced techniques have emerged in the field of robotics, and a large number of algorithms have been applied to existing robot positioning and navigation systems. The positioning and navigation system is a core module of a robot system: it carries the robot's perception and path planning functions in complex, unknown environments, is a key technology that determines whether the robot can operate autonomously, and is the basis for realizing higher-level functions. Most existing positioning and navigation systems are deployed on the mobile robot's portable computing equipment to meet the computational requirements of the robot during operation.
However, because the computing power of the small computing devices on existing robot platforms is limited, the deployed positioning and navigation systems often cannot run advanced positioning and navigation algorithms, such as those combined with deep learning and image processing algorithms, and their deployment and operation are relatively time-consuming, which hinders research on these algorithms.
Disclosure of Invention
The invention aims to provide a robot multi-sensor fusion positioning navigation system and method based on cloud service, so as to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme:
the utility model provides a robot multisensor fuses location navigation based on cloud, includes robot and navigation cloud service platform, the robot includes indoor mobile robot chassis, on-vehicle computer, three-dimensional laser radar, degree of depth camera and LCD, indoor mobile robot chassis includes omni wheel, casing, bumper shock absorber, section bar frame, mounting panel, driving motor, drive control panel and on-vehicle battery, navigation cloud service platform includes robot end, cloud server, customer end and network equipment.
As a further scheme of the invention: the indoor mobile robot chassis carries the sensors and the driving devices to realize stable operation of the robot; the vehicle-mounted computer reads the packed sensor data and sends it to the navigation cloud service platform through a wireless network; the depth camera extracts color information and depth information from the environment; the three-dimensional laser radar scans and matches environmental information; the liquid crystal display shows real-time running information of the robot. Multi-sensor fusion here means fusing the depth camera and the three-dimensional laser radar at the data level: the vehicle-mounted computer packages the acquired sensor data, establishes communication with the upstream cloud server through ROS nodes, transmits the data to the cloud server, receives the response information from the cloud server, and controls the behavior of the vehicle accordingly.
As a still further scheme of the invention: the robot end comprises a robot-end ROS system and a wireless network card module; the robot-end ROS system collects the image information and radar point cloud information generated by the sensors during operation and packages them into ROS messages for broadcasting; the wireless network card module transmits the collected information to the cloud server through the wireless network card for processing; the cloud server comprises a server-end ROS system, a processor, a memory, a hard disk, a graphics card and an operating system; the network equipment comprises a router and a switch.
As a still further scheme of the invention: the robot end is used for sending sensor data in real time, the cloud server is used for carrying a robot positioning and navigation algorithm and processing data in real time, the client is used for checking and controlling the running condition of the robot in real time, and the network equipment is used for providing network service for accessing the cloud server.
As a still further scheme of the invention: the multi-sensor fusion positioning and mapping method comprises a visual positioning mapping algorithm, a laser positioning mapping algorithm, a pose fusion algorithm, a back-end optimization algorithm and a map fusion algorithm. The visual positioning mapping algorithm performs robot positioning based on the depth camera information and constructs a semantic map of the environment through a semantic segmentation algorithm and an optical flow method; the laser positioning mapping algorithm performs robot positioning based on the three-dimensional laser radar information and constructs a 2D grid map; the pose fusion algorithm fuses the pose from the visual positioning mapping algorithm with the pose from the laser positioning mapping algorithm; the back-end optimization algorithm optimizes the robot pose and the landmark poses and optimizes and updates the map in real time during robot navigation; the map fusion algorithm fuses the semantic octree map and the grid map to build a static 2.5D map and updates the map according to real-time environment information while the robot runs.
As a still further scheme of the invention: the map fusion algorithm comprises a local map fusion algorithm and a global map fusion algorithm.
As a still further scheme of the invention: the visual positioning mapping algorithm comprises a tracking thread, a preprocessing thread and a semantic mapping thread.
A robot multi-sensor fusion positioning navigation method based on cloud service comprises the following steps:
Step one: the robot completes relocalization and map updating on the basis of an existing map, that is, it discriminates the environmental features around it, calibrates its current position and updates the local map; according to the environmental information, the robot then completes optimal path planning satisfying multiple evaluation indexes such as social constraints and task constraints, and controls the robot actuator to reach the target position according to the planning result;
Step two: in the preprocessing thread, a semantic segmentation algorithm extracts the feature points of the non-human parts of the image and an optical flow algorithm extracts pixels whose motion is inconsistent, so that the feature points most likely to be dynamic are removed together; the remaining stable features are matched to obtain a transformation matrix and thus a stable camera pose;
Step three: the three-dimensional laser radar builds the grid map by particle filtering: particles are first drawn at random according to the prior probability and assigned weights for state initialization, and the next generation of particles is then generated from the current particle set according to the proposal distribution and the particle weights are computed; meanwhile, an octree semantic map is generated from the point cloud map, the semantic labels obtained at different moments are fused into the octree map by Bayesian fusion, the uncertainty of the grid map is updated from the laser radar information with a Bayesian fusion algorithm, and the local grid maps are fused until all grid maps are merged; the data of the two sensors are thus fused into one map, yielding a 2.5D grid map containing semantic information (a log-odds sketch of this Bayesian grid update is given after these steps);
Step four: after the 2.5D global semantic map is obtained, the map is globally matched against the current laser information and the visual bag-of-words features to complete the relocalization process, the corresponding original key frame is retrieved, and the laser grid map and semantic map associated with that key frame are updated;
Step five: for global path planning, mapping precision is first improved by fusing the laser and vision sensors, a double-layer ant colony algorithm is then applied to exploit the advantages of parallel computation and multi-dimensional search, and finally the path is smoothed to remove redundant turning points and optimized, coordinating global path planning with local obstacle avoidance. The multi-modal pedestrian trajectory prediction method improves the existing constant-velocity prediction to constant-velocity and constant-acceleration motion prediction through a polynomial curve-fitting algorithm to improve short-term prediction accuracy; after a sufficient sequence of pedestrian positions has been extracted, a spatio-temporal state graph is built to infer the interaction among the agents (pedestrians and the robot) and complete high-precision pedestrian position prediction; the predicted pedestrian positions over multiple time steps are then clustered to judge whether the service objects belong to the same group, providing effective inference information for the navigation process;
Step six: in a two-stream network with cross-modal feature fusion, RGB and depth features are extracted and fused to achieve accurate pedestrian detection; on the basis of pedestrian detection, the pedestrian state, including position and moving direction, is obtained by combining the laser radar point cloud data, realizing 3D pedestrian detection. Meanwhile, pedestrian skeleton data are obtained from the depth image data, and the RGB image information, depth image information and human skeleton information are fused in a behavior recognition network to recognize the social behavior of pedestrians; finally, a social interaction space model is built on the 2.5D grid map by combining the pedestrian state extraction and social behavior recognition results.
Compared with the prior art, the invention has the beneficial effects that:
1. according to the method, a robot platform based on cloud service is built, a multi-sensor integrated SLAM thread and a navigation thread are processed on the cloud service platform through a network, a 2.5D semantic grid map is built and deployed through client remote control, and a positioning navigation task of the robot is completed.
2. According to the method, data of a loaded depth camera and a loaded three-dimensional laser radar are transmitted to a cloud service platform in a wireless network mode, the pose of a robot is fused and positioned, the transmitted image information is subjected to semantic segmentation through depth learning to generate an octree map with semantics, the pose and landmark points are globally optimized at the back end, and a 2.5D semantic grid map for navigation is generated by combining a two-dimensional grid map of the three-dimensional laser radar.
3. In the invention, on the basis of a navigation module and the completion of map construction by the robot by sensing and detecting external environment, real-time sensing and judgment of the environment characteristics are automatically completed, the current position is calibrated, a local map is updated, the optimal path planning meeting multiple evaluation indexes such as social constraint, task constraint and the like is completed according to environment information, and the robot actuating mechanism is controlled to reach the target position according to the planning result.
The invention is applicable to the technical field of indoor service robots and mainly comprises a cloud-service-based robot navigation method, a navigation cloud service platform and a robot body. The robot body includes a vehicle-mounted computer, a three-dimensional laser radar, a liquid crystal display, a depth camera and an indoor mobile robot chassis. The three-dimensional lidar and the depth camera collect environmental data; the vehicle-mounted computer acquires the sensor data, communicates with the navigation cloud platform and generates the corresponding control signals; the cloud service platform receives the sensor data sent by the vehicle-mounted computer, completes map construction and navigation decisions on that basis, and finally sends the results back to the vehicle-mounted computer. The navigation method comprises a multi-sensor fusion simultaneous localization and mapping (SLAM) method and a navigation method. Accurate navigation of the indoor mobile robot can be realized in this way.
Drawings
Fig. 1 is a schematic structural diagram of a robot in a cloud service-based robot multi-sensor fusion positioning navigation system and method.
Fig. 2 is a data flow diagram in the cloud service-based robot multi-sensor fusion positioning navigation system and method.
Fig. 3 is a flowchart of a multi-sensor fusion positioning mapping method in the cloud service-based robot multi-sensor fusion positioning navigation system and method.
Fig. 4 is a flowchart of a robot navigation method in the cloud service-based robot multi-sensor fusion positioning navigation system and method.
Fig. 5 is a control schematic diagram in the cloud service-based robot multi-sensor fusion positioning navigation system and method.
In the figures: 1, vehicle-mounted computer; 2, profile frame; 3, liquid crystal display; 4, three-dimensional laser radar; 5, three-dimensional laser radar module; 6, depth camera; 7, shock absorber; 8, omnidirectional wheel; 9, housing.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 5, in the embodiment of the invention, a robot multi-sensor fusion positioning navigation system based on cloud service comprises a robot and a navigation cloud service platform, wherein the robot comprises an indoor mobile robot chassis, a vehicle-mounted computer 1, a three-dimensional laser radar 4, a depth camera 6 and a liquid crystal display 3, the indoor mobile robot chassis comprises an omnidirectional wheel 8, a shell 9, a shock absorber 7, a profile frame 2, a mounting plate, a driving motor, a driving control plate and a vehicle-mounted battery, and the navigation cloud service platform comprises a robot end, a cloud server, a client and network equipment.
The indoor mobile robot chassis has a hexagonal structure. The housing 9 comprises an upper housing and a lower housing. Three shock absorbers 7 are provided, each mounted on the housing 9 and connected to the upper surface of the lower housing; each shock absorber 7 comprises two spring structures and upper and lower connecting pieces. There are three driving motors: the front end of each is connected to the lower connecting piece of a shock absorber 7 and the rear end is connected to the lower housing. The three driving motors correspond one-to-one with the three omnidirectional wheels 8, which are connected to the output shafts of their driving motors; each omnidirectional wheel 8 comprises 12 driven wheels, 2 hubs and connecting pieces. The drive control board is mounted on the upper surface of the lower housing and the vehicle-mounted battery is mounted on the lower housing. The profile frame 2 is assembled from 8 aluminum profiles with connecting pieces and is connected to the upper surface of the upper housing, and the mounting plates divide it into three layers.
The robot is assembled as follows: the vehicle-mounted computer 1 is mounted on the upper surface of the upper housing; the three-dimensional laser radar 4 is connected to the upper mounting plate and the three-dimensional laser radar module 5 is connected to the middle mounting plate; the depth camera 6 is mounted on one side of the profile frame 2, with its mounting direction coinciding with the forward direction of the robot; the liquid crystal display 3 is mounted at one end of the profile frame 2, inclined 45 degrees from the vertical direction.
The indoor mobile robot chassis carries the sensors and driving equipment to realize stable operation of the robot; the vehicle-mounted computer 1 reads the packed sensor data and sends it to the navigation cloud service platform through a wireless network; the depth camera 6 extracts color information and depth information from the environment; the three-dimensional laser radar 4 scans and matches environmental information; the liquid crystal display 3 shows real-time running information of the robot. Multi-sensor fusion means fusing the depth camera 6 and the three-dimensional laser radar 4 at the data level: the vehicle-mounted computer 1 packages the acquired sensor data, establishes communication with the upstream cloud server through ROS nodes, transmits the data to the cloud server, receives the response information from the cloud server, and controls the behavior of the vehicle accordingly.
The invention communicates using ROS message transmission, and transmission between the robot end and the cloud server is based on the TCP/IP protocol.
The robot end comprises a robot-end ROS system and a wireless network card module. The robot-end ROS system collects the image information and radar point cloud information generated by the sensors during operation and packages them into ROS messages for broadcasting; the wireless network card module transmits the collected information to the cloud server through the wireless network card for processing. The cloud server comprises a server-end ROS system, a processor, a memory, a hard disk, a graphics card and an operating system, and the network equipment comprises a router and a switch.
The robot end is used for sending sensor data in real time, the cloud server is used for carrying a robot positioning and navigation algorithm and processing data in real time, the client is used for checking and controlling the running condition of the robot in real time, and the network equipment is used for providing network service for accessing the cloud server.
The host of the cloud server is a Dell T7920; the processor is an Intel Xeon Gold 5118 with 12 cores, 24 threads and a 2.3 GHz base frequency; the memory is Samsung DDR4 2666 MHz, 128 GB (64 GB × 2); the hard disks are Intel solid-state drives, 2 TB × 2; the graphics card is an Nvidia Quadro RTX 5000 with 16 GB of video memory and 3072 CUDA cores; the operating system is Ubuntu 18.04 LTS; and the network card is an Intel I219-LM 1000 Mbps Ethernet card.
The vehicle-mounted computer 1 is an Intel NUC8i7HVK; the processor is an Intel Core i7-8809G; the graphics card is an AMD Radeon RX Vega M GH; the memory is ADATA DDR4 2666 MHz, 16 GB; the wireless network card is an Intel 9560AC supporting the 802.11ac wireless protocol.
The drive control board uses an STM32F103 processor, a ULN2003 motor drive module and a CAN interface.
The router is a TL-XDR6070 supporting the IEEE 802.11a/b/g/n/ac/ax wireless protocols, with a maximum wireless rate of 5952 Mbps (1148 Mbps at 2.4 GHz, 4804 Mbps at 5 GHz) and three 10/100/1000 Mbps auto-negotiating Ethernet ports.
The switch is a Ruijie Networks RG-S1824+ with a transmission speed of 10/100 Mbps and a backplane bandwidth of 4.8 Gbps.
The three-dimensional laser radar 4 is a Velodyne VLP-16 multi-line lidar with 16 laser lines, a measurement range of up to 100 m and a measurement accuracy of ±3 cm.
The depth camera 6 is a Kinect v1 with a color resolution of 640 × 480, a frame rate of 30 fps and a depth resolution of 320 × 240.
The robot navigation method is used for planning a robot path and navigating the map, and enables the robot to autonomously move to a certain target position in the environment from the current position after the autonomous positioning and planning control work is coordinated.
The multi-sensor fusion positioning map building method comprises a visual positioning map building algorithm, a laser positioning map building algorithm, a pose fusion algorithm, a back-end optimization algorithm and a map fusion algorithm.
The visual positioning mapping algorithm performs robot positioning based on the depth camera 6 information and constructs a static semantic map of the environment through a semantic segmentation algorithm and an optical flow method; the laser positioning mapping algorithm performs robot positioning based on the three-dimensional laser radar 4 information and constructs a 2D grid map; the pose fusion algorithm fuses the pose from the visual positioning mapping algorithm with the pose from the laser positioning mapping algorithm; the back-end optimization algorithm optimizes the robot pose and the landmark poses and optimizes and updates the map in real time during robot navigation; the map fusion algorithm fuses the semantic octree map and the grid map to build a static 2.5D map, and updates the map according to real-time environment information while the robot runs.
In the multi-sensor fusion positioning and mapping method, the visual positioning mapping algorithm subscribes to the depth image and color image messages of the depth camera 6 from the ROS system and outputs the visual positioning pose and the semantic octree map; the laser positioning algorithm subscribes to the three-dimensional laser radar 4 sensor messages from the ROS system and outputs the laser positioning pose and the grid map; the pose fusion algorithm fuses the visual positioning pose and the laser positioning pose and outputs the estimated robot pose state; the back-end optimization algorithm optimizes the robot pose and the landmark points from the visual landmarks, the laser landmarks and the estimated robot pose state, and outputs the optimized landmarks and robot pose; during mapping, the map fusion algorithm fuses the semantic octree map and the grid map according to the robot pose to construct a static 2.5D map; during navigation, the map fusion algorithm updates the map according to real-time environment information and outputs a real-time local map.
The laser positioning mapping algorithm selects key frames according to defined indexes, splices the sub-key frames into a local map, and establishes a pose-solving equation by matching the current frame against the local map to obtain the pose of the laser thread while outputting the grid map. The pose fusion flow is as follows: the laser SLAM thread publishes a topic of message type /pos_laser; the visual SLAM thread publishes a topic of message type /pos_kinect; robot_pose_ekf subscribes to the two topics and outputs a topic of message type /pos, completing the fusion of the two sensor odometries through an Extended Kalman Filter (EKF). The back-end optimization module optimizes the back end of the visual positioning module: it computes the similarity transformation matrix sim3 and the relative pose relations, adjusts the positions of the landmark points, optimizes the pose graph, adjusts the map point positions accordingly based on the optimized poses, and performs global Bundle Adjustment (BA) optimization.
The map fusion algorithm comprises a local map fusion algorithm and a global map fusion algorithm.
The global map fusion algorithm fuses the static obstacle part of the semantic map with the grid map through a map projection algorithm and a map fusion algorithm, using the grid map output by the laser positioning mapping algorithm and the semantic octree map output by the visual positioning module, to generate a 2.5D grid map; the local map fusion algorithm updates objects that differ from the prior map according to real-time environment information, marks pedestrians on the local map and fuses it with the global map.
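As an illustration of the kind of projection the global map fusion step performs, the sketch below collapses static semantic octree voxels onto the laser grid to obtain a per-cell height, forming a simple 2.5D map; the class labels, default height and index conventions are assumptions, not the patented algorithm.

```python
import numpy as np

STATIC_CLASSES = {1, 2, 3}   # assumed label ids of static obstacle classes

def fuse_2p5d(grid_occupancy, semantic_voxels, voxel_size, origin_xy):
    """Project static semantic voxels onto the laser grid to form a 2.5D map.

    grid_occupancy : HxW array from the laser mapping thread (1 = occupied)
    semantic_voxels: iterable of ((ix, iy, iz), label) octree leaves
    origin_xy      : grid origin expressed in voxel indices (assumption)
    Returns an HxW height map whose cells carry the tallest static obstacle,
    with laser-only obstacles kept at a default height.
    """
    height = np.where(grid_occupancy > 0, 0.3, 0.0)   # assumed default height (m)
    for (ix, iy, iz), label in semantic_voxels:
        if label not in STATIC_CLASSES:
            continue
        col = int(ix - origin_xy[0])
        row = int(iy - origin_xy[1])
        if 0 <= row < height.shape[0] and 0 <= col < height.shape[1]:
            height[row, col] = max(height[row, col], (iz + 1) * voxel_size)
    return height
```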
The visual positioning mapping algorithm comprises a tracking thread, a preprocessing thread and a semantic mapping thread.
The tracking thread estimates the camera pose, selects suitable frames as key frames through the inter-frame co-visibility relation, updates the key frames and local map points, deletes mismatches according to the pose, stores the key frames and map points as the basis for relocalization or key frame selection, and writes the key frames into the key frame list. The preprocessing thread performs semantic segmentation and optical flow estimation on the RGB image, marking dynamic target areas on the original image according to optical flow consistency and marking areas belonging to people according to the semantic segmentation image. The semantic mapping thread constructs the 3D point cloud, generates a point cloud map with semantic labels by combining the semantic segmentation image, and incrementally builds the semantic octree map through an octree generation algorithm and a semantic fusion algorithm.
The robot navigation method comprises a global path planning algorithm, a dynamic social interaction space algorithm, a multi-mode pedestrian track prediction algorithm and a local path planning algorithm, wherein the global path planning algorithm is used for carrying out global path planning according to global map information, the dynamic social interaction space algorithm is used for establishing two states of static states and moving states of individuals and crowds respectively based on task constraints and social constraint conditions, describing respective dynamic social interaction spaces and navigation obstacle avoidance, the multi-mode pedestrian track prediction algorithm is used for predicting positions of pedestrians with high precision, adding information of the individuals and the obstacles to a local map, and providing basis for assisting in avoiding the pedestrians, and the local path planning algorithm is used for carrying out local path planning according to local map information.
Specifically, the robot navigation process is as follows: a local map recognition process is carried out according to the prior 2.5D map information and the multi-sensor fusion positioning algorithm, the visual and laser landmark features of the incoming local map are compared with the original map, and the robot relocalization task and local map update task are completed; global path planning is then carried out on the 2.5D grid map and coordinated with local obstacle avoidance; through the dynamic social interaction space algorithm, the static and moving states of individuals and crowds are modelled separately according to task constraints and social constraints, and their respective dynamic social interaction spaces are described; the continuously updated sensor data collected by the robot are introduced to detect people and crowds; pedestrians are reasoned about with the multi-modal pedestrian trajectory prediction method: a spatio-temporal state graph is built after a sufficient sequence of pedestrian positions has been extracted, the interaction among the agents (pedestrians, the robot, etc.) is inferred to complete high-precision prediction of the moving targets' positions, and local dynamic path planning is performed to avoid the moving targets according to their behavior and position information.
The robot runs in a network-covered environment on its motion chassis and carries the three-dimensional laser radar 4 sensor and its module, the depth camera 6, a display and the vehicle-mounted computer 1 (NUC); the depth camera 6 and the three-dimensional laser radar 4 collect the original image information and radar point cloud information generated by the sensors in the environment.
The NUC transmits information based on the open-source ROS system: it packs the original image information into the color image topic /camera_rgb and the depth image topic /camera_depth and transmits them to the cloud SLAM server for processing through the wireless network port.
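As an illustration of how the NUC-side packing and publishing could look, the following minimal rospy sketch republishes the Kinect streams on the /camera_rgb and /camera_depth topics named above; the node name and the source driver topics are assumptions.

```python
#!/usr/bin/env python
# Sketch only: relays Kinect frames onto the topics named in the description so
# that the cloud-side SLAM node, reachable over the wireless link, can subscribe.
import rospy
from sensor_msgs.msg import Image

class SensorRelay(object):
    def __init__(self):
        # Publishers for the packed color and depth streams.
        self.rgb_pub = rospy.Publisher('/camera_rgb', Image, queue_size=1)
        self.depth_pub = rospy.Publisher('/camera_depth', Image, queue_size=1)
        # Source topics of the Kinect driver are assumptions here.
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.rgb_cb, queue_size=1)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.depth_cb, queue_size=1)

    def rgb_cb(self, msg):
        self.rgb_pub.publish(msg)

    def depth_cb(self, msg):
        self.depth_pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('nuc_sensor_relay')
    SensorRelay()
    rospy.spin()
```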
The RGB image information enters the tracking thread and the preprocessing thread of the visual SLAM. In the tracking thread, the image pyramid of each RGB frame is computed, ORB features are extracted and descriptors are calculated. Meanwhile, the preprocessing thread performs semantic segmentation on the RGB image with a PSPNet network, adds a mask to the original image, removes the feature points belonging to people or potential dynamic target areas, estimates optical flow with the LK (Lucas-Kanade) method, and removes the feature points belonging to dynamic targets through optical flow consistency.
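A minimal OpenCV sketch of the dynamic-feature removal described above, assuming the PSPNet person mask is already available as a binary image; the deviation threshold and the median-flow consistency test are illustrative assumptions rather than the exact criterion used.

```python
import cv2
import numpy as np

def filter_dynamic_features(prev_gray, cur_gray, keypoints, person_mask, thresh=2.0):
    """Drop feature points that fall on people or move inconsistently.

    keypoints  : Nx2 float32 array of ORB keypoint locations in cur_gray
    person_mask: uint8 mask from the segmentation thread (non-zero = person)
    thresh     : tolerated deviation from the median flow, in pixels (assumed)
    """
    pts = keypoints.reshape(-1, 1, 2).astype(np.float32)
    # LK optical flow from the current frame back to the previous frame.
    prev_pts, status, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, pts, None)
    flow = (pts - prev_pts).reshape(-1, 2)
    ok = status.reshape(-1) == 1

    # Flow-consistency check: points far from the dominant (median) motion
    # are treated as dynamic.
    median_flow = np.median(flow[ok], axis=0)
    consistent = np.linalg.norm(flow - median_flow, axis=1) < thresh

    # Semantic check: points inside the person mask are removed as well.
    xy = keypoints.astype(int)
    on_person = person_mask[xy[:, 1], xy[:, 0]] > 0

    keep = ok & consistent & ~on_person
    return keypoints[keep]
```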
The tracking thread calculates the Bag of Words (BoW) feature vector of the current frame, sets a matching threshold, performs feature matching using the consistently static feature points, estimates the camera pose with the Perspective-n-Point (PnP) method according to whether the motion model is satisfied, selects suitable frames as key frames according to the inter-frame co-visibility relation, updates the key frames and local map points, performs projection matching of the local map points, optimizes the current frame with the pose graph, deletes mismatches according to the pose, stores the key frames and map points as the basis for relocalization or key frame selection, and writes the key frames into the key frame list.
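The PnP pose estimation step can be illustrated with OpenCV's RANSAC solver; the function below is a sketch that assumes the BoW matching has already produced 3D-2D correspondences.

```python
import cv2
import numpy as np

def estimate_camera_pose(map_points_3d, matched_pixels, K, dist=None):
    """Perspective-n-Point pose estimate from static feature matches.

    map_points_3d : Nx3 array of local map points matched via BoW descriptors
    matched_pixels: Nx2 array of the corresponding pixels in the current frame
    K             : 3x3 camera intrinsic matrix
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float64),
        matched_pixels.astype(np.float64),
        K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation, world frame to camera frame
    # Non-inlier matches would be deleted from the match set here, mirroring
    # the mismatch removal described above.
    return R, tvec, inliers
```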
Key frames are inserted into the mapping thread of the visual SLAM, redundant map points are removed, local Bundle Adjustment (BA) optimization is executed, redundant key frames are removed, and the sparse feature point cloud is retained for subsequent loop detection. The 3D point cloud is built from the matching between the depth image and the reference frame, the pixel positions in the image and the camera intrinsics, and a point cloud map with semantic labels is generated by combining the semantic segmentation image. After the point cloud has been downsampled at a given resolution by a point cloud filter, it is inserted into the octree nodes, the occupancy of voxels at different resolutions is updated, and the semantic information of the multi-view octree is fused by Bayesian fusion to construct the incremental semantic octree map.
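A toy sketch of the per-voxel Bayesian fusion of semantic labels across views: a dictionary keyed by voxel index stands in for the real octree, and the class count and leaf resolution are assumed values.

```python
import numpy as np
from collections import defaultdict

NUM_CLASSES = 21          # assumed label count of the segmentation network
VOXEL_SIZE = 0.05         # assumed octree leaf resolution in metres

class SemanticVoxelMap(object):
    """Stand-in for the semantic octree: per-voxel class probabilities fused
    over multiple views with a Bayesian (multiply-and-normalise) update."""

    def __init__(self):
        uniform = np.full(NUM_CLASSES, 1.0 / NUM_CLASSES)
        self.voxels = defaultdict(lambda: uniform.copy())

    def insert_point(self, point_xyz, label_probs):
        # Fuse this view's label distribution into the voxel containing the point.
        key = tuple(np.floor(np.asarray(point_xyz) / VOXEL_SIZE).astype(int))
        fused = self.voxels[key] * np.asarray(label_probs)
        self.voxels[key] = fused / fused.sum()

    def voxel_label(self, point_xyz):
        key = tuple(np.floor(np.asarray(point_xyz) / VOXEL_SIZE).astype(int))
        return int(np.argmax(self.voxels[key]))
```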
In the laser SLAM thread, the curvature of points is used as the index for extracting laser frame feature information: points with lower curvature are taken as planar points and points with higher curvature as edge points, and matching between two laser frames is completed through the corresponding planar points and edge points. A key frame is selected when the rotation exceeds 5 degrees or the translation exceeds 10 cm; the 10 laser frames near the key frame are selected as sub-key frames and spliced into a local map, a pose-solving equation is established by matching the current frame against the local map to obtain the pose of the laser thread, and the grid map is output at the same time.
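The keyframe selection rule (rotation over 5 degrees or translation over 10 cm) can be written directly; the sketch below assumes a planar (x, y, yaw) pose representation.

```python
import numpy as np

ROT_THRESH_DEG = 5.0    # keyframe if rotation exceeds 5 degrees (from the text)
TRANS_THRESH_M = 0.10   # ... or translation exceeds 10 cm (from the text)

def is_new_keyframe(last_kf_pose, cur_pose):
    """Pose = (x, y, yaw in radians). True when the current laser frame should
    become a new key frame according to the thresholds above."""
    dx = cur_pose[0] - last_kf_pose[0]
    dy = cur_pose[1] - last_kf_pose[1]
    dyaw = np.degrees(abs(cur_pose[2] - last_kf_pose[2]))
    return np.hypot(dx, dy) > TRANS_THRESH_M or dyaw > ROT_THRESH_DEG
```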
ROS communication pose fusion: the laser SLAM thread publishes a topic of message type /pos_laser; the visual SLAM thread publishes a topic of message type /pos_kinect; robot_pose_ekf subscribes to the two topics and outputs a topic of message type /pos, completing the fusion of the two sensor odometries.
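A sketch of the topic plumbing for the pose fusion described above: it subscribes to the two pose topics and publishes a fused pose. A fixed-weight blend stands in for the Extended Kalman Filter of robot_pose_ekf, and the message type and weights are assumptions.

```python
#!/usr/bin/env python
# Sketch only: fuses the laser and visual SLAM poses named in the description.
import rospy
from geometry_msgs.msg import PoseStamped

W_LASER = 0.6   # assumed relative confidence of the laser SLAM pose
W_VISUAL = 0.4  # assumed relative confidence of the visual SLAM pose

class PoseFusion(object):
    def __init__(self):
        self.laser = None
        self.visual = None
        self.pub = rospy.Publisher('/pos', PoseStamped, queue_size=1)
        rospy.Subscriber('/pos_laser', PoseStamped, self.laser_cb)
        rospy.Subscriber('/pos_kinect', PoseStamped, self.visual_cb)

    def laser_cb(self, msg):
        self.laser = msg
        self.fuse()

    def visual_cb(self, msg):
        self.visual = msg
        self.fuse()

    def fuse(self):
        if self.laser is None or self.visual is None:
            return
        fused = PoseStamped()
        fused.header = self.laser.header
        fused.pose.position.x = (W_LASER * self.laser.pose.position.x +
                                 W_VISUAL * self.visual.pose.position.x)
        fused.pose.position.y = (W_LASER * self.laser.pose.position.y +
                                 W_VISUAL * self.visual.pose.position.y)
        fused.pose.orientation = self.laser.pose.orientation  # keep laser heading
        self.pub.publish(fused)

if __name__ == '__main__':
    rospy.init_node('pose_fusion_sketch')
    PoseFusion()
    rospy.spin()
```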
Loop detection: the visual SLAM and the laser SLAM each perform loop detection. The visual SLAM detects loops by computing closed-loop candidate frames and detecting continuous candidates among them, computes the similarity transformation matrix sim3 and the relative pose relations, adjusts the poses of the key frames connected to the current frame and the map points they observe, matches the landmark points of the closed-loop frame and its connected key frames against the points of the key frames connected to the current frame, updates the co-visibility graph through the inter-frame matching relations, optimizes the pose graph, and adjusts the map point positions accordingly based on the optimized poses to perform global BA optimization.
After the robot completes the map construction work, it matches the data acquired by the sensors from the current environment against the 2.5D grid map, calibrates its current position and completes the autonomous localization work; the robot then plans a moving path satisfying multiple evaluation indexes for the target position, comprehensively considering its own safety and that of the surrounding environment, and the planning and control work is executed by the indoor mobile robot chassis actuators.
Global path planning: mapping precision is first improved by fusing the laser and vision sensors; a double-layer ant colony algorithm is then applied to exploit the advantages of parallel computation and multi-dimensional search; finally the path is smoothed to remove redundant turning points and optimized to meet the actual operating requirements of the robot, and global path planning is coordinated with local obstacle avoidance.
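The smoothing step that removes redundant turning points can be illustrated with a line-of-sight shortcut over the occupancy grid; this sketch leaves the double-layer ant colony planner itself aside and only shows the post-processing, with an assumed grid convention (0 = free).

```python
import numpy as np

def line_is_free(grid, p, q):
    """True if every cell sampled on the segment p -> q is free (grid value 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = int(np.linalg.norm(q - p) * 2) + 1
    for t in np.linspace(0.0, 1.0, n):
        r, c = np.round(p + t * (q - p)).astype(int)
        if grid[r, c] != 0:
            return False
    return True

def remove_redundant_turns(grid, path):
    """Keep only the waypoints needed to preserve collision-free straight runs."""
    smoothed = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # Walk the farthest waypoint back until the shortcut is obstacle-free.
        while j > i + 1 and not line_is_free(grid, path[i], path[j]):
            j -= 1
        smoothed.append(path[j])
        i = j
    return smoothed
```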
The multi-modal pedestrian trajectory prediction method improves the existing constant-velocity prediction to constant-velocity and constant-acceleration motion prediction through a polynomial curve-fitting algorithm, improving short-term prediction accuracy; after a sufficient sequence of pedestrian positions has been extracted, a spatio-temporal state graph is built to infer the interaction among the agents (pedestrians and the robot) and complete high-precision pedestrian position prediction; the predicted pedestrian positions over multiple time steps are then clustered to judge whether the service objects belong to the same group, providing effective inference information for the navigation process.
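A minimal sketch of the polynomial curve-fitting prediction: degree 1 corresponds to constant-velocity and degree 2 to constant-acceleration motion. Uniformly sampled observations are assumed, and the spatio-temporal interaction reasoning is not reproduced here.

```python
import numpy as np

def predict_pedestrian(track_xy, dt, horizon_steps, degree=2):
    """Fit x(t), y(t) with a polynomial (degree 1 = constant velocity,
    degree 2 = constant acceleration) and extrapolate future positions.

    track_xy      : Nx2 array of observed positions, sampled every dt seconds
    horizon_steps : number of future samples to predict
    """
    track_xy = np.asarray(track_xy, float)
    t = np.arange(len(track_xy)) * dt
    cx = np.polyfit(t, track_xy[:, 0], degree)
    cy = np.polyfit(t, track_xy[:, 1], degree)
    t_future = t[-1] + dt * np.arange(1, horizon_steps + 1)
    return np.stack([np.polyval(cx, t_future), np.polyval(cy, t_future)], axis=1)
```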
Local path planning: the current pose of the robot in the local map is first obtained, and a dynamic social interaction space is then established based on task constraints and social constraints by combining the pedestrian detection, pedestrian state extraction and social behavior information; the continuously updated pedestrian trajectory predictions and crowd grouping information collected by the robot are then introduced; finally the dynamic window weights and their combination are adjusted in real time to improve the dynamic window evaluation function, completing safe obstacle avoidance and path planning that meets the requirements of human comfort and safety in a social interaction environment.
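The real-time adjustment of the dynamic-window weights can be illustrated as a weighted evaluation function plus a re-weighting rule triggered by the pedestrian predictions; all weight values, term names and the trigger distance below are assumptions.

```python
def dwa_score(candidate, weights):
    """Weighted dynamic-window evaluation of one candidate velocity command.

    candidate: dict with the usual DWA terms (heading alignment, clearance to
               the nearest obstacle, forward velocity) plus a social term that
               penalises entering the dynamic social-interaction space.
    weights  : dict of weights, adjusted online as described above.
    """
    return (weights['heading'] * candidate['heading'] +
            weights['clearance'] * candidate['clearance'] +
            weights['velocity'] * candidate['velocity'] -
            weights['social'] * candidate['social_cost'])

def adjust_weights(weights, min_pedestrian_dist):
    """Re-weight the evaluation when predicted pedestrians come close."""
    w = dict(weights)
    if min_pedestrian_dist < 1.5:   # assumed trigger distance in metres
        w['social'] = 1.5           # penalise intrusion into personal space more
        w['velocity'] = 0.2         # prefer slowing down near people
    return w

weights = {'heading': 0.8, 'clearance': 1.0, 'velocity': 0.5, 'social': 0.2}
```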
First, the NUC converts the path execution control instructions from move_base into specific robot chassis motion control instructions: driver mode selection, drive enable configuration and chassis motor speed commands. The control instructions are then sent over the CAN bus in an event-triggered manner without a fixed period (the minimum interval between two control frames of the same type is 50 ms). Finally, the robot chassis feeds back the execution status, returning the real-time drive state and motor speed information to the host computer over the CAN bus for control and adjustment.
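A sketch of how a chassis speed command could be placed on the CAN bus from the NUC using the python-can library; the arbitration ID and payload layout are assumptions, since the actual frame format is defined by the STM32 drive-board firmware.

```python
import can

# Sketch only: pushes a wheel-speed set-point frame onto the chassis CAN bus.
bus = can.interface.Bus(channel='can0', bustype='socketcan')

def send_wheel_speeds(rpm_a, rpm_b, rpm_c):
    """Pack three signed 16-bit wheel speed set-points into one CAN frame
    (assumed layout); sent event-triggered, as described above."""
    data = bytearray()
    for rpm in (rpm_a, rpm_b, rpm_c):
        data += int(rpm).to_bytes(2, byteorder='big', signed=True)
    msg = can.Message(arbitration_id=0x201, data=bytes(data), is_extended_id=False)
    bus.send(msg)

send_wheel_speeds(120, -120, 0)   # example set-point for the three omni wheels
```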
A robot multi-sensor fusion positioning navigation method based on cloud service comprises the following steps:
Step one: the robot completes relocalization and map updating on the basis of an existing map, that is, it discriminates the environmental features around it, calibrates its current position and updates the local map; according to the environmental information, the robot then completes optimal path planning satisfying multiple evaluation indexes such as social constraints and task constraints, and controls the robot actuator to reach the target position according to the planning result.
Step two: in the preprocessing thread, a semantic segmentation algorithm extracts the feature points of the non-human parts of the image and an optical flow algorithm extracts pixels whose motion is inconsistent, so that the feature points most likely to be dynamic are removed together; the remaining stable features are matched to obtain a transformation matrix and thus a stable camera pose;
Step three: the three-dimensional laser radar 4 builds the grid map by particle filtering: particles are first drawn at random according to the prior probability and assigned weights for state initialization, and the next generation of particles is then generated from the current particle set according to the proposal distribution and the particle weights are computed; meanwhile, an octree semantic map is generated from the point cloud map, the semantic labels obtained at different moments are fused into the octree map by Bayesian fusion, the uncertainty of the grid map is updated from the laser radar information with a Bayesian fusion algorithm, and the local grid maps are fused until all grid maps are merged; the data of the two sensors are thus fused into one map, yielding a 2.5D grid map containing semantic information;
Step four: after the 2.5D global semantic map is obtained, the map is globally matched against the current laser information and the visual bag-of-words features to complete the relocalization process, the corresponding original key frame is retrieved, and the laser grid map and semantic map associated with that key frame are updated;
Step five: for global path planning, mapping precision is first improved by fusing the laser and vision sensors, a double-layer ant colony algorithm is then applied to exploit the advantages of parallel computation and multi-dimensional search, and finally the path is smoothed to remove redundant turning points and optimized, coordinating global path planning with local obstacle avoidance. The multi-modal pedestrian trajectory prediction method improves the existing constant-velocity prediction to constant-velocity and constant-acceleration motion prediction through a polynomial curve-fitting algorithm to improve short-term prediction accuracy; after a sufficient sequence of pedestrian positions has been extracted, a spatio-temporal state graph is built to infer the interaction among the agents (pedestrians and the robot) and complete high-precision pedestrian position prediction; the predicted pedestrian positions over multiple time steps are then clustered to judge whether the service objects belong to the same group, providing effective inference information for the navigation process;
Step six: in a two-stream network with cross-modal feature fusion, RGB and depth features are extracted and fused to achieve accurate pedestrian detection; on the basis of pedestrian detection, the pedestrian state, including position and moving direction, is obtained by combining the laser radar point cloud data, realizing 3D pedestrian detection. Meanwhile, pedestrian skeleton data are obtained from the depth image data, and the RGB image information, depth image information and human skeleton information are fused in a behavior recognition network to recognize the social behavior of pedestrians; finally, a social interaction space model is built on the 2.5D grid map by combining the pedestrian state extraction and social behavior recognition results.
The invention provides a cloud-service-based indoor service robot multi-sensor fusion positioning and navigation system and method. The robot can achieve efficient and accurate positioning and mapping in a network-covered environment to realize navigation. In its operating environment the robot senses the external environment through the on-board three-dimensional laser radar 4 and depth camera 6 and passes the data to the vehicle-mounted computer 1; the on-board computing unit encapsulates the sensor information through the Robot Operating System (ROS) and sends the sensor messages to the cloud server through the wireless network; the cloud server subscribes to the sensor messages through the ROS system, runs the robot positioning and navigation algorithms, and sends the navigation messages back to the vehicle-mounted computer 1 through the network; the vehicle-mounted computer 1 sends the processed navigation information to the drive control board through the CAN port, and the driving motors rotate to realize the navigation target; meanwhile, the client can establish communication with the server and view and control the running state of the robot in real time.
According to one aspect of the invention, a robot positioning navigation system and method based on multi-sensor data fusion in a cloud mode are provided, and the system and method comprise:
a robot navigation method, a navigation cloud service platform and a robot body based on cloud service are provided, wherein:
the robot body is used for carrying a sensor and a vehicle-mounted computer, and is communicated with the navigation cloud service platform to realize stable motion;
the navigation cloud service platform is used for sending and receiving data of the client and the robot end, completing a multi-sensor fusion positioning navigation algorithm in the cloud server, establishing communication with the client and sending a navigation message to the robot end;
the robot navigation method based on the cloud service is used for processing laser and visual sensor information, completing robot positioning, constructing a 2.5D grid map and achieving robot navigation.
Further, in the above system and method, the robot body includes: an indoor mobile robot chassis, a vehicle-mounted computer, a three-dimensional laser radar, a depth camera and a liquid crystal display, wherein:
the indoor mobile robot chassis is used for carrying a sensor and a driving device to realize the stable operation of the robot;
the vehicle-mounted computer is used for reading the packed sensor data and sending the packed sensor data to the navigation cloud service platform through a wireless network;
the depth camera is used for extracting color information and depth information in the environment;
the three-dimensional laser radar is used for scanning and matching the environmental information;
the liquid crystal display is used for displaying real-time operation information of the robot.
Further, the indoor mobile robot chassis includes: omnidirectional wheels, a housing, shock absorbers, a profile frame, mounting plates, driving motors, a drive control board and a vehicle-mounted battery.
Specifically, the indoor mobile robot chassis is assembled as follows: the housing has a hexagonal structure and is divided into an upper housing and a lower housing; three shock absorbers are mounted on the housing and connected to the upper surface of the lower housing, and each shock absorber is composed of two spring structures and upper and lower connecting pieces; the front ends of the three driving motors are connected to the lower connecting pieces of the shock absorbers and their rear ends are connected to the lower housing; the three omnidirectional wheels are connected to the driving motor shafts, and each omnidirectional wheel is composed of 12 driven wheels, 2 hubs and connecting pieces; the drive control board and the vehicle-mounted battery are mounted on the upper surface of the lower housing; the profile frame is assembled from 8 aluminum profiles with connecting pieces, is connected to the upper surface of the upper housing, and is divided into three layers by the mounting plates;
Specifically, the robot body is assembled as follows: the vehicle-mounted computer is mounted on the upper surface of the upper housing; the three-dimensional laser radar is connected to the upper mounting plate and the three-dimensional laser radar module is connected to the middle mounting plate; the depth camera is mounted on one side of the profile frame, with its mounting direction coinciding with the forward direction of the robot; the liquid crystal display is mounted at one end of the profile frame, inclined 45 degrees from the vertical direction.
Further, in the above system and method, the navigation cloud service platform includes: robot end, cloud ware, customer end, network equipment, wherein:
the robot end is used for sending sensor data in real time;
the cloud server is used for carrying a robot positioning and navigation algorithm and processing data in real time;
the client is used for checking and controlling the running condition of the robot in real time;
the network equipment is used for providing a cloud server access network service;
preferably, the present invention communicates using message transmission based on the ROS system.
Preferably, the transmission between the robot end and the cloud server end is based on a TCP/IP protocol.
The robot end includes a robot-end ROS system and a wireless network card module, wherein:
the robot end ROS system is used for collecting image information and radar point cloud information generated in the operation of the sensor, and packaging the information into ROS messages for broadcasting;
and the wireless network card module is used for transmitting the information to the cloud server through the wireless network card for processing after the information is collected.
The cloud server includes: the system comprises a server ROS system, a processor, a memory, a hard disk, a display card and an operating system.
The network device includes: router, switch.
Specifically, the host of the cloud server is a Dell T7920; the processor is an Intel Xeon Gold 5118 with 12 cores, 24 threads and a 2.3 GHz base frequency; the memory is Samsung DDR4 2666 MHz, 128 GB (64 GB × 2); the hard disks are Intel solid-state drives, 2 TB × 2; the graphics card is an Nvidia Quadro RTX 5000 with 16 GB of video memory and 3072 CUDA cores; the operating system is Ubuntu 18.04 LTS; and the network card is an Intel I219-LM 1000 Mbps Ethernet card.
Specifically, the vehicle-mounted computer is an Intel NUC8i7HVK; the processor is an Intel Core i7-8809G; the graphics card is an AMD Radeon RX Vega M GH; the memory is ADATA DDR4 2666 MHz, 16 GB; the wireless network card is an Intel 9560AC supporting the 802.11ac wireless protocol.
Specifically, the drive control board uses an STM32F103 processor, a ULN2003 motor drive module and a CAN interface.
Specifically, the router is a TL-XDR6070 supporting the IEEE 802.11a/b/g/n/ac/ax wireless protocols, with a maximum wireless rate of 5952 Mbps (1148 Mbps at 2.4 GHz, 4804 Mbps at 5 GHz) and three 10/100/1000 Mbps auto-negotiating Ethernet ports.
Specifically, the switch is a Ruijie Networks RG-S1824+ with a transmission speed of 10/100 Mbps and a backplane bandwidth of 4.8 Gbps.
Specifically, the three-dimensional laser radar is a Velodyne VLP-16 multi-line lidar with 16 laser lines, a measurement range of up to 100 m and a measurement accuracy of ±3 cm.
Specifically, the depth camera is a Kinect v1 with a color resolution of 640 × 480, a frame rate of 30 fps and a depth resolution of 320 × 240.
Further, in the above system and method, the robot navigation method based on cloud service includes: a multi-sensor fusion positioning mapping method, a robot navigation method, wherein:
the multi-sensor fusion positioning map building method is used for sensing and positioning of a robot in an unknown environment and building a global static map;
the robot navigation method is used for robot path planning and map navigation.
The multi-sensor fusion positioning mapping method comprises the following steps: a visual positioning mapping algorithm, a laser positioning mapping algorithm, a pose fusion algorithm, a back-end optimization algorithm, a map fusion algorithm, wherein:
the visual positioning mapping algorithm is used for robot positioning based on the depth camera information, and a static semantic map of the environment is constructed through a semantic segmentation algorithm and an optical flow method;
the laser positioning and mapping algorithm is used for robot positioning based on three-dimensional laser radar information and constructing a 2D grid map;
the pose fusion algorithm is used for fusing the pose of the visual positioning mapping algorithm and the pose of the laser positioning mapping algorithm;
the back-end optimization algorithm is used for optimizing the pose of the robot and the poses of the landmark points, and for optimizing and updating the map in real time during robot navigation;
the map fusion algorithm is used for fusing a semantic octree map and a grid map to establish a static 2.5D map, and updating the map according to real-time environment information when a robot runs.
Specifically, in the multi-sensor fusion positioning and mapping method: according to the ROS sensor messages, the visual positioning mapping algorithm subscribes to the depth image and color image messages of the depth camera and outputs the visual positioning pose and the semantic octree map; the laser positioning algorithm subscribes to the three-dimensional laser radar messages and outputs the laser positioning pose and the grid map; the pose fusion algorithm fuses the visual and laser pose information and outputs the estimated robot pose state; according to the visual landmark points, the laser landmark points and the estimated robot pose, the back-end optimization algorithm optimizes the robot pose and the landmark points and outputs the optimized values; during map building, the map fusion algorithm fuses the robot pose, the semantic octree map and the grid map into a static 2.5D map, and during navigation it updates the map according to real-time environment information and outputs a real-time local map.
The visual positioning mapping algorithm comprises three threads, including: tracking thread, preprocessing thread and semantic mapping thread;
the tracking thread is used for estimating the pose of the camera, selecting a proper frame as a key frame through the inter-frame common-view relation, updating the key frame and local map points, deleting mismatching according to the pose, storing the key frame and the map points as a basis for executing repositioning or selecting the key frame, and writing the key frame into a key frame list;
the preprocessing thread is used for performing semantic segmentation and optical flow estimation on the RGB image, marking a dynamic target area on the original image according to optical flow consistency, and marking an area belonging to a person on the original image according to a semantic segmentation image;
the semantic map building thread is used for building 3D point clouds, generating a point cloud map with semantic labels by combining a semantic segmentation image, and building an incremental semantic octree map through an octree generation algorithm and a semantic fusion algorithm.
The laser positioning mapping algorithm defines the selection criteria for key frames, splices the sub key frames into a local map, establishes a pose solving equation by matching the current frame with the local map to obtain the pose of the laser thread, and outputs the grid map.
The pose fusion algorithm flow is as follows: the laser SLAM thread publishes a topic /pose_laser; the visual SLAM thread publishes a topic /pose_kinect; robot_pose_ekf subscribes to the two topics and outputs a topic /pose, completing the fusion of the two sensor odometries through an Extended Kalman Filter (EKF);
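The fusion itself is performed by the robot_pose_ekf node; purely as a sketch of the underlying idea, the following toy filter fuses two 2D pose measurements (x, y, yaw) with a Kalman update. The constant-pose prediction model, the noise values and the example poses are assumptions, and this is a simplification of the package's actual EKF, not its implementation.

```python
import numpy as np

class PoseFuser(object):
    """Toy fusion of two 2D pose estimates; state x = [px, py, yaw]."""
    def __init__(self):
        self.x = np.zeros(3)
        self.P = np.eye(3)                         # initial uncertainty (assumed)

    def predict(self, q=0.01):
        self.P += np.eye(3) * q                    # constant-pose model: inflate covariance

    def update(self, z, r):
        """z: measured [px, py, yaw] from one SLAM thread; r: its variance (assumed)."""
        H = np.eye(3)                              # the full state is measured directly
        R = np.eye(3) * r
        y = z - H.dot(self.x)                      # innovation
        y[2] = np.arctan2(np.sin(y[2]), np.cos(y[2]))   # wrap the yaw residual
        S = H.dot(self.P).dot(H.T) + R
        K = self.P.dot(H.T).dot(np.linalg.inv(S))  # Kalman gain
        self.x = self.x + K.dot(y)
        self.P = (np.eye(3) - K.dot(H)).dot(self.P)

fuser = PoseFuser()
fuser.predict()
fuser.update(np.array([1.02, 0.48, 0.10]), r=0.05)   # laser pose (assumed values)
fuser.update(np.array([0.98, 0.52, 0.12]), r=0.10)   # visual pose (assumed values)
print(fuser.x)                                        # fused estimate
```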
the algorithm flow of the back-end optimization module is as follows: the back end of the visual positioning module is optimized, the similarity transformation matrix sim3 and the relative pose relationships are calculated, the positions of the landmark points are adjusted, the pose graph is optimized, the positions of the map points are adjusted correspondingly according to the optimized poses, and global Bundle Adjustment (BA) optimization is performed.
The map fusion algorithm comprises the following steps: a local map fusion algorithm and a global map fusion algorithm;
the global map fusion algorithm fuses the static obstacle part of the semantic map with the grid map through a map projection algorithm and a map fusion algorithm, according to the grid map output by the laser positioning mapping algorithm and the semantic octree map output by the visual positioning module, to generate a 2.5D grid map (see the sketch after this list);
and the local map fusion algorithm is used for updating objects different from the prior map in the environment according to the real-time environment information, and marking pedestrians on the local map and fusing the objects with the global map.
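A minimal sketch of the global fusion (projection) step is given below: static-obstacle voxels of the semantic octree are projected onto the laser grid and the maximum obstacle height per cell is kept, yielding a 2.5D grid. The function signature, the nominal laser-obstacle height and the static label set are assumptions for illustration, not the exact algorithm of this system.

```python
import numpy as np

def fuse_2p5d(grid, voxels, resolution, origin, static_labels):
    """Project static semantic voxels onto a 2D occupancy grid to get a 2.5D map.

    grid:          HxW laser occupancy grid (0 = free, >0 = occupied)
    voxels:        iterable of (x, y, z, label) occupied octree leaves in the world frame
    resolution:    grid cell size in metres
    origin:        (x0, y0) world coordinates of grid cell (0, 0)
    static_labels: set of semantic classes treated as static obstacles
    Returns per-cell maximum obstacle height (0 where free)."""
    height = np.zeros(grid.shape, dtype=float)
    height[grid > 0] = 0.1                      # nominal height for laser-only obstacles (assumed)
    for x, y, z, label in voxels:
        if label not in static_labels:          # skip people and other dynamic classes
            continue
        col = int((x - origin[0]) / resolution)
        row = int((y - origin[1]) / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            height[row, col] = max(height[row, col], z)
    return height
```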
The robot navigation method comprises: a global path planning algorithm, a dynamic social interaction space algorithm, a multi-mode pedestrian trajectory prediction algorithm and a local path planning algorithm, wherein:
the global path planning algorithm is used for carrying out global path planning according to global map information;
the dynamic social interaction space algorithm is used for establishing static and moving states for individuals and crowds respectively, based on task constraints and social constraints, to describe their dynamic social interaction spaces for navigation and obstacle avoidance;
the multi-mode pedestrian trajectory prediction algorithm is used for high-precision pedestrian position prediction, and adding information of people and obstacles to a local map to assist in providing basis for avoiding pedestrians;
and the local path planning algorithm is used for planning a local path according to the local map information.
Specifically, the robot navigation method flow is as follows: a local map recognition process is performed according to the prior 2.5D map information and the multi-sensor fusion positioning algorithm, and the visual and laser landmark features of the transmitted local map are compared with the original map to complete the robot relocation task and the local map updating task; global path planning is then performed according to the 2.5D grid map and coordinated with local obstacle avoidance; based on task constraints and social constraints, the dynamic social interaction space algorithm establishes static and moving states for individuals and crowds respectively to describe their dynamic social interaction spaces, and the person and crowd detection data collected in real time by the robot sensors are introduced; the multi-mode pedestrian trajectory prediction method then reasons about pedestrians, establishing a spatio-temporal state graph after a sufficient pedestrian position sequence has been extracted to model the interactions among pedestrians, the robot and other agents, and completes high-precision prediction of the moving targets' positions; finally, local dynamic path planning is performed to avoid the moving targets according to their behavior and position information.
Portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or aspects in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
In recent years, with the continuous development and application of artificial intelligence and computer technology, a great deal of advanced technology is promoted to emerge in the technical field of robots, and a great deal of algorithms are applied to the existing robot positioning and navigation system. The robot positioning navigation system is a core module of a robot system, bears the functions of perception and path planning of the robot in a complex and unknown environment, is a key technology for judging whether the robot can realize autonomous operation or not, and is a basic technology for realizing higher-level functions. Most of the existing positioning and navigation systems are deployed in portable computing equipment of a mobile robot to meet the operation requirement of the robot in the operation process.
However, due to the limited computing power of the small computing devices on current robot platforms, the deployed positioning navigation systems often cannot run advanced positioning navigation algorithms, such as those that combine deep learning and image processing, and their deployment and operation are relatively time-consuming, which is also inconvenient for researchers studying these algorithms.
In order to achieve the above object, an embodiment of the present invention provides a multi-sensor data fusion robot positioning and navigation system in a cloud computing mode.
In the embodiment of the invention, as shown in FIG. 1, the positioning and navigation system works as follows:
The robot runs on its motion chassis in an environment with network coverage and carries a three-dimensional laser radar sensor and its module, a depth camera, a display and a vehicle-mounted computer (NUC);
the depth camera and the laser radar acquisition sensor acquire original image information and radar point cloud information generated in the environment.
The NUC transmits information based on the open-source ROS system: it packs the raw image information into color image messages (/camera_rgb) and depth image messages (/camera_depth) and transmits them to the cloud SLAM server through the wireless network interface for processing;
the RGB image information respectively enters a tracking thread and a preprocessing thread of the visual SLAM, an image pyramid of each frame of RGB image is calculated in the tracking thread, ORB characteristics are extracted, descriptors are calculated, meanwhile, the preprocessing thread performs image semantic segmentation on the RGB image by using a PSPnet network, a mask is added to an original image, and characteristic points belonging to people or potential dynamic target areas in the image are removed. And performing optical flow estimation by using an LB method, and removing the characteristic points belonging to the dynamic target through optical flow consistency.
The tracking thread calculates the Bag of Words (BoW) feature vector of the current frame, sets a matching threshold, performs feature matching using consecutive static feature points, estimates the camera pose with the Perspective-n-Point (PnP) method depending on whether the motion model is satisfied, selects suitable frames as key frames according to the co-visibility relation between frames, updates the key frames and local map points, performs projection matching on the local map points, optimizes the current frame with the pose graph, deletes mismatches according to the pose, stores the key frames and map points as the basis for relocalization or key frame selection, and writes the key frames into the key frame list.
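A hedged sketch of the PnP step is shown below, using OpenCV's RANSAC PnP solver on matched 3D map points and 2D features; the reprojection threshold and iteration count are assumed values, and the camera intrinsic matrix K must be supplied by the caller.

```python
import cv2
import numpy as np

def estimate_camera_pose(map_points_3d, image_points_2d, K):
    """RANSAC PnP on matched 3D map points and 2D features; returns a 4x4
    world-to-camera transform and the inlier indices (thresholds assumed)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float64),
        image_points_2d.astype(np.float64),
        K, distCoeffs=None,
        reprojectionError=3.0, iterationsCount=100)
    if not ok:
        return None, None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers
```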
A key frame is inserted into the mapping thread of the visual SLAM, redundant map points are removed, local Bundle Adjustment (BA) optimization is executed, redundant key frames are removed, and the sparse feature point cloud is retained for subsequent loop detection; a 3D point cloud is established through the matching between the depth image and the reference frame, the pixel positions in the image and the camera intrinsic parameters, and a point cloud map with semantic labels is generated by combining the semantic segmentation image. After the point cloud is down-sampled to a given resolution by a point cloud filter, it is inserted into the nodes of the octree, the occupancy of voxels at different resolutions is updated, and the semantic information of the octree from multiple views is fused in a Bayesian manner to construct an incremental semantic octree map.
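The following simplified sketch illustrates the Bayesian semantic fusion idea with a flat voxel dictionary standing in for the octree: each occupied voxel keeps a categorical distribution over classes that is multiplied by each new observation and renormalised. The resolution, class count and data structure are assumptions made only for illustration.

```python
import numpy as np
from collections import defaultdict

class SemanticVoxelMap(object):
    """Flat voxel dictionary standing in for a semantic octree: each occupied
    voxel keeps a categorical distribution over semantic classes."""
    def __init__(self, resolution=0.05, num_classes=21):
        self.res = resolution
        self.voxels = defaultdict(lambda: np.full(num_classes, 1.0 / num_classes))

    def insert(self, points, label_probs):
        """points: Nx3 world coordinates; label_probs: NxC per-point class scores
        taken from the semantic segmentation network."""
        keys = np.floor(points / self.res).astype(int)
        for key, p in zip(map(tuple, keys), label_probs):
            posterior = self.voxels[key] * p       # Bayesian fusion of the new view
            self.voxels[key] = posterior / posterior.sum()

    def label(self, key):
        """Most likely semantic class of one voxel."""
        return int(np.argmax(self.voxels[key]))
```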
In the laser SLAM thread, point curvature is used as the index for extracting feature information from laser frames: points with lower curvature are taken as planar points and points with higher curvature as edge points, and the matching of two laser frames is completed through the corresponding planar points and edge points between the frames. A key frame is selected when the rotation exceeds 5 degrees or the translation exceeds 10 cm; 10 laser frames near the key frame are selected as sub key frames and spliced into a local map, a pose solving equation is established by matching the current frame with the local map to obtain the pose of the laser thread, and the grid map is output at the same time.
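Two of the ingredients named above can be sketched compactly: the curvature-based split into planar and edge points, and the 5-degree / 10-cm keyframe test. The curvature thresholds and neighbourhood size in this sketch are assumed values.

```python
import numpy as np

def classify_scan_points(ranges, k=5, edge_thresh=1.0, plane_thresh=0.1):
    """Split scan points by local curvature: low curvature -> planar points,
    high curvature -> edge points (neighbourhood size and thresholds assumed)."""
    curvature = np.zeros_like(ranges, dtype=float)
    for i in range(k, len(ranges) - k):
        diff = np.sum(ranges[i - k:i + k + 1]) - (2 * k + 1) * ranges[i]
        curvature[i] = diff ** 2
    edges = np.where(curvature > edge_thresh)[0]
    planes = np.where((curvature > 0) & (curvature < plane_thresh))[0]
    return edges, planes

def is_keyframe(delta_yaw_deg, delta_trans_m):
    """Keyframe criterion from the description: rotation above 5 degrees or
    translation above 10 cm since the previous key frame."""
    return delta_yaw_deg > 5.0 or delta_trans_m > 0.10
```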
ROS communication pose fusion: the laser SLAM thread publishes a topic /pose_laser; the visual SLAM thread publishes a topic /pose_kinect; robot_pose_ekf subscribes to the two topics and outputs a topic /pose, completing the fusion of the two sensor odometries.
Loop detection is performed by the visual SLAM and the laser SLAM respectively. The loop detection of the visual SLAM calculates closed-loop candidate frames, detects continuous candidate frames among them, calculates the similarity transformation matrix sim3 and the relative pose relationships, adjusts the poses of the key frames connected to the current frame and the positions of the map points they observe, matches the landmark points of the closed-loop frame and its connected key frames with the points of the key frames connected to the current frame, updates the co-visibility graph through the inter-frame matching relations, optimizes the pose graph, adjusts the map point positions correspondingly according to the optimized poses, and performs global BA optimization.
After the robot finishes map building, the sensors acquire data according to the current environment information, the data is matched against the 2.5D grid map, the current position is calibrated, and the autonomous positioning work is completed; according to the target position, the robot plans a moving path that satisfies multiple evaluation indexes by comprehensively considering the safety of the robot and the surrounding environment, and controls the chassis actuator of the robot to complete the planning and control work.
Global path planning: firstly, mapping precision is improved through laser and vision sensor fusion mapping; then a double-layer ant colony algorithm is applied to give full play to the advantages of parallel computation and multi-dimensional search; finally, smoothing is performed to remove redundant turning points and optimize the path, meeting the actual operation requirements of the robot and coordinating global path planning with local obstacle avoidance.
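The ant colony search itself is not reproduced here; the sketch below only illustrates the final smoothing step of removing redundant turning points, using a simple line-of-sight pruning over the grid path. The pruning strategy is an assumption, chosen as one common way to realise this step.

```python
import numpy as np

def line_of_sight(grid, a, b):
    """True if the straight segment between grid cells a and b crosses only free cells."""
    n = int(max(abs(b[0] - a[0]), abs(b[1] - a[1]))) + 1
    for t in np.linspace(0.0, 1.0, n + 1):
        r = int(round(a[0] + t * (b[0] - a[0])))
        c = int(round(a[1] + t * (b[1] - a[1])))
        if grid[r, c] != 0:                        # non-zero cells are obstacles
            return False
    return True

def prune_turning_points(path, grid):
    """Remove redundant intermediate waypoints: from each kept point, jump to
    the farthest later waypoint that is still directly visible."""
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(grid, path[i], path[j]):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```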
The multi-mode pedestrian trajectory prediction method improves the existing constant-velocity prediction into constant-velocity and uniform-acceleration motion prediction through a polynomial curve fitting algorithm to improve short-term prediction accuracy; after a sufficient pedestrian position sequence is extracted, a spatio-temporal state graph is established to infer the mutual influence among all agents (pedestrians and the robot) and complete high-precision pedestrian position prediction; the predicted pedestrian positions over multiple time steps are then clustered to judge whether the service objects belong to the same group, providing effective inference information for the navigation process.
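The polynomial-fitting part of this prediction can be sketched as follows: a degree-2 polynomial (uniform acceleration) is fitted to the recent positions of a track and extrapolated. The sampling rate, horizon and example data are assumptions, and the spatio-temporal graph reasoning is not shown.

```python
import numpy as np

def predict_positions(track, dt, horizon_steps, degree=2):
    """Fit x(t) and y(t) with a degree-2 polynomial (uniform acceleration) over
    the observed track and extrapolate horizon_steps positions into the future.

    track: Nx2 array of (x, y) positions sampled every dt seconds."""
    t = np.arange(len(track)) * dt
    cx = np.polyfit(t, track[:, 0], degree)
    cy = np.polyfit(t, track[:, 1], degree)
    t_future = t[-1] + dt * np.arange(1, horizon_steps + 1)
    return np.stack([np.polyval(cx, t_future), np.polyval(cy, t_future)], axis=1)

# Example: a pedestrian observed for 8 frames at 10 Hz, predicted 5 steps ahead.
obs = np.array([[0.00, 0.00], [0.10, 0.02], [0.21, 0.05], [0.33, 0.09],
                [0.46, 0.14], [0.60, 0.20], [0.75, 0.27], [0.91, 0.35]])
print(predict_positions(obs, dt=0.1, horizon_steps=5))
```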
Local path planning: firstly, the current pose of the robot in the local map is obtained; then, based on task constraints and social constraints, a dynamic social interaction space is established by combining pedestrian detection information, pedestrian state extraction information and social behavior information; next, the real-time pedestrian trajectory prediction and crowd grouping information collected by the robot are introduced; finally, the weight values and weight combination of the dynamic window are adjusted in real time to improve the dynamic window evaluation function, completing safe obstacle avoidance and path planning while satisfying human comfort and safety in a social interaction environment.
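A rough sketch of a dynamic-window-style evaluation with adjustable weights is given below; the simulated trajectory length, velocity sampling ranges, evaluation terms and weight names are all assumptions intended only to illustrate how the weight values mentioned above could enter the evaluation function.

```python
import numpy as np

def simulate(pose, v, w, dt=0.1, steps=10):
    """Forward-simulate a unicycle trajectory from pose = (x, y, theta)."""
    x, y, th = pose
    traj = []
    for _ in range(steps):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return np.array(traj)

def evaluate(pose, v, w, goal, obstacles, weights):
    """Weighted score of one (v, w) sample: progress to the goal, clearance
    from obstacles (which may include predicted pedestrian positions), speed."""
    traj = simulate(pose, v, w)
    heading = -np.linalg.norm(traj[-1] - goal)
    dists = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=2)
    clearance = min(dists.min(), 2.0)              # cap so far obstacles stop mattering
    return (weights['heading'] * heading +
            weights['clearance'] * clearance +
            weights['velocity'] * v)

def best_command(pose, goal, obstacles, weights):
    """Pick the best velocity pair from a coarse sample grid (ranges assumed)."""
    samples = [(v, w) for v in np.linspace(0.0, 0.6, 7)
                      for w in np.linspace(-1.0, 1.0, 11)]
    return max(samples, key=lambda s: evaluate(pose, s[0], s[1], goal, obstacles, weights))
```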
Specifically, in the robot planning control method, the NUC first converts the path execution control instructions from move_base into specific robot chassis motion control instructions: driver mode selection, drive enabling configuration and chassis motor speed regulation instructions; the control instructions are then sent in an event-triggered manner through CANopen communication rather than at a fixed period (the interval between two control frames of the same type is 50 ms); finally, the robot chassis feeds back its execution status, reporting the real-time drive state and motor speed information to the host computer through CANopen for control and adjustment.
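The chassis protocol itself is not specified in detail here, so the following sketch is only indicative: it converts a body twist into four wheel set-points using standard mecanum kinematics and packs them into a hypothetical 8-byte payload. The wheel geometry, scaling and frame layout are assumptions, not the actual CAN frame format of this chassis.

```python
# Assumed chassis geometry for an omnidirectional (mecanum-style) base.
WHEEL_RADIUS = 0.05      # m
HALF_LENGTH = 0.20       # m, half the wheelbase
HALF_WIDTH = 0.18        # m, half the track width

def twist_to_wheel_speeds(vx, vy, wz):
    """Convert a body twist (m/s, m/s, rad/s) into four wheel angular velocities
    (rad/s) using standard mecanum kinematics; order: FL, FR, RL, RR."""
    k = HALF_LENGTH + HALF_WIDTH
    return [
        (vx - vy - k * wz) / WHEEL_RADIUS,
        (vx + vy + k * wz) / WHEEL_RADIUS,
        (vx + vy - k * wz) / WHEEL_RADIUS,
        (vx - vy + k * wz) / WHEEL_RADIUS,
    ]

def pack_speed_frame(wheel_speeds):
    """Pack the four set-points into a hypothetical 8-byte payload
    (one signed 16-bit value per wheel, 0.01 rad/s resolution)."""
    return b''.join(int(round(w * 100)).to_bytes(2, 'big', signed=True)
                    for w in wheel_speeds)

print(pack_speed_frame(twist_to_wheel_speeds(0.3, 0.0, 0.2)).hex())
```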
The robot positioning and navigation system based on multi-sensor data fusion in the cloud computing mode comprises an on-board computer, a three-dimensional laser radar, a liquid crystal display, a depth camera, an indoor mobile robot chassis and a network construction method, and further comprises a robot positioning and mapping method in the cloud computing mode, a cloud SLAM cloud computing platform and a navigation method by utilizing a priori semantic map;
the depth camera is used for extracting color information and depth information in the environment;
the laser radar is used for scanning and matching the environmental information;
the multi-sensor fusion is the fusion of a depth camera and a three-dimensional laser radar at a data layer;
the vehicle-mounted computer is used for packaging the collected sensor data, establishing communication with an upper cloud server through the ROS node, transmitting the data to the cloud server, receiving response information of the server, and further controlling the behavior of the vehicle according to the response information of the cloud server.
The positioning and map building method is used for positioning the accurate position of the robot in the environment and the map and building a semantic grid map for navigation, and is used for the robot to interact with the environment in the map;
the autonomous navigation method is used for enabling the robot to autonomously move to a certain target position in the environment from the current position after the autonomous positioning and planning control work is coordinated and completed.
The positioning and mapping method comprises the following steps:
the preprocessing thread is used for processing the color image and comprises two parts of semantic segmentation and optical flow estimation on the image: the semantic segmentation thread is used for acquiring a semantic segmentation image extracted by the deep learning network;
the tracking thread is used for simultaneously estimating the pose by adopting a visual algorithm and a laser algorithm, wherein the visual algorithm is used for estimating the pose by utilizing information provided by the depth camera, and the laser algorithm is used for estimating the pose by a method of scanning matched features;
and the pose fusion thread is used for fusing the laser pose and the visual pose by a fusion algorithm.
The back-end optimization thread is used for simultaneously introducing the pose information and the landmark information of the vision and the laser into a factor graph for optimization;
and the map construction thread is used for processing dense map points and laser data of the depth image and constructing a map for the navigation of the robot.
The autonomous navigation method comprises the following steps:
the robot finishes repositioning and map updating on the basis of the prior map, namely distinguishing the environmental characteristics of the robot, calibrating the current position and updating a local map;
the robot finishes the optimal path planning according with multiple evaluation indexes such as social constraint, task constraint and the like according to the environment information, and controls the robot actuating mechanism to reach the target position according to the planning result.
The vision algorithm carries out pose estimation: the feature points of the non-human part image in the image are extracted by adopting a semantic segmentation algorithm in a preprocessing thread, pixels with inconsistent dynamic state in the image are extracted by adopting an optical flow algorithm, the dynamic feature points with high possibility in the image are removed together, and the stable camera pose is obtained by obtaining a transformation matrix through matching the residual stable features.
The map construction method is as follows: the laser radar constructs the grid map by particle filtering; firstly, particles are randomly drawn according to the prior probability and assigned weights, and the state is initialized; the next generation of particles is then generated from the current particle set according to the proposal distribution and the particle weights are computed jointly; the particles are then resampled, with the number of samples determined by the particle weights, and finally the pose is obtained through state parameter estimation and a grid map based on the laser data is obtained.
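The loop described above can be sketched as a single particle-filter step; the motion model and scan-likelihood function are caller-supplied stubs (assumptions), and the resampling here is the plain multinomial scheme.

```python
import numpy as np

def particle_filter_step(particles, weights, control, scan,
                         motion_model, measurement_likelihood):
    """One iteration of the particle-filter loop described above.

    particles: Nx3 array of (x, y, yaw) hypotheses; weights: length-N array.
    motion_model and measurement_likelihood are caller-supplied functions."""
    # 1. Draw the next generation from the proposal (motion model).
    particles = np.array([motion_model(p, control) for p in particles])
    # 2. Combine the previous weights with the scan likelihood.
    weights = weights * np.array([measurement_likelihood(p, scan) for p in particles])
    weights = weights / weights.sum()
    # 3. Resample: the number of copies of a particle follows its weight.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # 4. Simple state estimate: mean of the resampled particle set.
    return particles, weights, particles.mean(axis=0)
```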
The semantic image information and the point cloud information: firstly, generating an octree semantic map by using a point cloud map, and updating semantic information of the octree map by semantic tags at different moments through Bayesian fusion to form the octree semantic map;
the laser radar information: updating the uncertainty of the grid map by using a Bayesian fusion algorithm, and fusing the local grid map until all grid maps are fused, thereby completing the fusion of two sensor data to construct a map and obtaining a 2.5D grid map containing semantic information.
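One common way to realise this Bayesian grid fusion is a log-odds update, sketched below under the assumption that the local map has already been aligned to the global grid; the clipping constant is an implementation detail.

```python
import numpy as np

def fuse_local_grid(global_logodds, local_prob, mask):
    """Bayesian fusion of one aligned local occupancy grid into the global map
    in log-odds form; only cells observed by the local map (mask) are updated."""
    eps = 1e-6
    p = np.clip(local_prob, eps, 1.0 - eps)
    local_logodds = np.log(p / (1.0 - p))
    out = global_logodds.copy()
    out[mask] += local_logodds[mask]               # Bayes update adds log-odds
    return out

def to_probability(logodds):
    """Convert the fused log-odds map back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-logodds))
```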
The repositioning and map updating: after the 2.5D global semantic map is obtained, global matching is carried out on the map according to the current laser information and the visual word bag characteristics, the repositioning process is completed, the original key frame is grabbed, and the laser grid map and the semantic map corresponding to the original key frame are updated.
And planning the optimal path: the method comprises the steps of improving the map building precision through the fusion of laser and a vision sensor, then utilizing a double-layer ant colony algorithm to give full play to the advantages of parallel computation and multi-dimensional search, finally performing smoothing treatment to remove redundant turning points, optimizing a path, and coordinating global path planning and local obstacle avoidance. A multi-mode pedestrian trajectory prediction method improves the existing uniform velocity prediction method into uniform velocity and uniform acceleration motion prediction by a polynomial curve fitting algorithm to improve short-time prediction precision, establishes a time-space state diagram to infer the interaction influence of various intelligent bodies (pedestrians and robots) after extracting a sufficient pedestrian position sequence, completes high-precision pedestrian position prediction, further performs clustering by multi-time-step pedestrian prediction positions, judges whether various service objects are in the same group or not, and provides effective inference information for a navigation process.
The social constraints are as follows: in a two-stream network with cross-modal feature fusion, RGB and depth features are extracted and fused to achieve accurate pedestrian prediction. On the basis of pedestrian detection, the pedestrian state, including information such as pedestrian position and moving direction, is obtained by combining the laser radar point cloud data, realizing 3D pedestrian detection. Meanwhile, pedestrian skeleton data is obtained by combining the depth image data, and the RGB image information, depth image information and human skeleton information are fused in a behavior recognition network to realize social behavior recognition of pedestrians. Finally, the pedestrian state extraction and social behavior recognition results are combined to construct a social interaction space model on the 2.5D grid map.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the technical solutions described in the foregoing embodiments can still be modified, or some of their features can be equivalently replaced, without departing from the spirit and scope of the invention.

Claims (8)

1. A cloud service-based robot multi-sensor fusion positioning navigation system, comprising a robot and a navigation cloud service platform, characterized in that: the robot comprises an indoor mobile robot chassis, a vehicle-mounted computer (1), a three-dimensional laser radar (4), a depth camera (6) and a liquid crystal display (3); the indoor mobile robot chassis comprises omnidirectional wheels (8), a shell (9), shock absorbers (7), a profile frame (2), a mounting plate, a driving motor, a drive control board and a vehicle-mounted battery; and the navigation cloud service platform comprises a robot end, a cloud server, a client and network equipment.
2. The cloud service based robotic multi-sensor fusion positioning navigation system of claim 1, wherein: the indoor mobile robot chassis is used for carrying the sensors and the driving devices to realize stable operation of the robot; the vehicle-mounted computer (1) is used for reading and packaging the sensor data and sending it to the navigation cloud service platform through the wireless network; the depth camera (6) is used for extracting color information and depth information from the environment; the three-dimensional laser radar (4) is used for scanning and matching environmental information; the liquid crystal display (3) is used for displaying real-time running information of the robot; multi-sensor fusion is the fusion of the depth camera (6) and the three-dimensional laser radar (4) at the data layer; and the vehicle-mounted computer (1) is used for packaging the acquired sensor data, establishing communication with the cloud server through ROS nodes, transmitting the data to the cloud server, receiving the response information of the cloud server, and further controlling the behavior of the robot body according to that response information.
3. The cloud service based robotic multi-sensor fusion positioning navigation system of claim 1, wherein: the robot end comprises a robot end ROS system and a wireless network card module, the robot end ROS system is used for collecting image information and radar point cloud information generated in the running process of the sensor and packaging the information into ROS messages to be broadcasted, the wireless network card module is used for transmitting the information to a cloud server through a wireless network card to be processed after collecting the information, the cloud server comprises a server end ROS system, a processor, a memory, a hard disk, a display card and an operating system, and the network equipment comprises a router and a switch.
4. The cloud service based robotic multi-sensor fusion positioning navigation system of claim 1, wherein: the robot end is used for sending sensor data in real time, the cloud server is used for carrying a robot positioning and navigation algorithm and processing data in real time, the client is used for checking and controlling the running condition of the robot in real time, and the network equipment is used for providing network service for accessing the cloud server.
5. The cloud service based robotic multi-sensor fusion positioning navigation system of claim 1, wherein: the multi-sensor fusion positioning mapping method is used for perception and positioning of the robot in an unknown environment and for constructing a global static map, accurately positioning the robot within the environment and the map and constructing a semantic grid map for navigation, with which the robot interacts with the environment; the robot navigation method is used for robot path planning and map navigation, so that the robot can autonomously move from its current position to a target position in the environment after the autonomous positioning and planning control work is coordinated; the multi-sensor fusion positioning mapping method comprises a visual positioning mapping algorithm, a laser positioning mapping algorithm, a pose fusion algorithm, a back-end optimization algorithm and a map fusion algorithm; the visual positioning mapping algorithm is used for robot positioning based on the information of the depth camera (6) and realizes static semantic map construction of the environment through a semantic segmentation algorithm and an optical flow method; the laser positioning mapping algorithm is used for robot positioning based on the information of the three-dimensional laser radar (4) and constructs a 2D grid map; the pose fusion algorithm fuses the pose of the visual positioning mapping algorithm with the pose of the laser positioning mapping algorithm; the back-end optimization algorithm is used for optimizing the pose of the robot and the poses of the landmark points and optimizes and updates the map in real time during robot navigation; and the map fusion algorithm fuses the semantic octree map and the grid map to establish a static 2.5D map and updates the map according to real-time environment information when the robot runs.
6. The cloud service based robotic multi-sensor fusion positioning navigation system of claim 5, wherein: the map fusion algorithm comprises a local map fusion algorithm and a global map fusion algorithm.
7. The cloud service based robotic multi-sensor fusion positioning navigation system of claim 5, wherein: the visual positioning mapping algorithm comprises a tracking thread, a preprocessing thread and a semantic mapping thread.
8. A robot multi-sensor fusion positioning navigation method based on cloud service is characterized in that: the method comprises the following steps:
step one: the robot completes relocation and map updating on the basis of the prior map, namely, the environmental features around the robot are distinguished, the current position is calibrated and the local map is updated; the robot then completes optimal path planning that satisfies multiple evaluation indexes such as social constraints and task constraints according to the environment information, and controls the robot actuator to reach the target position according to the planning result;
step two: extracting feature points of the non-human part image in the image by adopting a semantic segmentation algorithm in a preprocessing thread, extracting pixels with inconsistent dynamic state in the image by adopting an optical flow algorithm, removing dynamic feature points with high possibility in the image together, and obtaining a transformation matrix to obtain a stable camera pose through matching of the remaining stable features;
step three: the three-dimensional laser radar (4) constructs the grid map by particle filtering: firstly, particles are randomly drawn according to the prior probability and assigned weights, and the state is initialized; the next generation of particles is then generated from the current particle set according to the proposal distribution and the particle weights are computed jointly; an octree semantic map is then generated from the point cloud map, and the semantic information of the octree map is updated by Bayesian fusion of the semantic labels at different moments to form the semantic octree map; for the laser radar information, the uncertainty of the grid map is updated with a Bayesian fusion algorithm and the local grid maps are fused until all grid maps are merged, thereby completing the fusion of the two sensor data to construct the map and obtaining a 2.5D grid map containing semantic information;
step four: after the 2.5D global semantic map is obtained, global matching is carried out on the map according to the current laser information and the visual word bag characteristics, the repositioning process is completed, the original key frame is captured, and the laser grid map and the semantic map corresponding to the original key frame are updated;
step five: the method comprises the steps of improving the map building precision through the fusion of laser and a vision sensor, then applying a double-layer ant colony algorithm to give full play to the advantages of parallel calculation and multi-dimensional search, finally performing smoothing treatment to remove redundant turning points, optimizing a path, and coordinating global path planning and local obstacle avoidance; the multi-mode pedestrian trajectory prediction method improves the existing uniform velocity prediction method into uniform velocity and uniform acceleration motion prediction by a polynomial curve fitting algorithm to improve the short-time prediction precision, establishes a time-space state diagram to infer the interaction influence of each intelligent body after extracting a sufficient pedestrian position sequence, completes high-precision pedestrian position prediction, further performs clustering by multi-time-step pedestrian prediction positions, judges whether each service object is in the same group or not, and provides effective inference information for a navigation process;
step six: extracting and fusing RGB and depth features in a double-current network with cross-modal feature fusion to realize accurate prediction of pedestrians, and acquiring pedestrian states including information such as pedestrian positions and moving directions by combining laser radar point cloud data on the basis of pedestrian detection to realize 3D pedestrian detection; meanwhile, pedestrian skeleton data are obtained by combining depth image data, RGB image information, depth image information and human body skeleton information are fused in a behavior recognition network, social behavior recognition of pedestrians is achieved, and finally, a social interaction space model is built on the 2.5D grid map by combining pedestrian state extraction and social behavior recognition detection results.
CN202210148066.XA 2022-02-17 2022-02-17 Cloud service-based multi-sensor fusion positioning navigation system and method for robot Active CN114474061B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210148066.XA CN114474061B (en) 2022-02-17 2022-02-17 Cloud service-based multi-sensor fusion positioning navigation system and method for robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210148066.XA CN114474061B (en) 2022-02-17 2022-02-17 Cloud service-based multi-sensor fusion positioning navigation system and method for robot

Publications (2)

Publication Number Publication Date
CN114474061A true CN114474061A (en) 2022-05-13
CN114474061B CN114474061B (en) 2023-08-04

Family

ID=81482724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210148066.XA Active CN114474061B (en) 2022-02-17 2022-02-17 Cloud service-based multi-sensor fusion positioning navigation system and method for robot

Country Status (1)

Country Link
CN (1) CN114474061B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108890611A (en) * 2018-07-12 2018-11-27 东莞理工学院 A kind of binocular vision avoidance wheeled robot based on SLAM
CN110675307A (en) * 2019-08-19 2020-01-10 杭州电子科技大学 Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
CN111524194A (en) * 2020-04-24 2020-08-11 江苏盛海智能科技有限公司 Positioning method and terminal for mutual fusion of laser radar and binocular vision
CN111664843A (en) * 2020-05-22 2020-09-15 杭州电子科技大学 SLAM-based intelligent storage checking method
CN111645073A (en) * 2020-05-29 2020-09-11 武汉理工大学 Robot visual semantic navigation method, device and system
CN112025729A (en) * 2020-08-31 2020-12-04 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS
CN213338438U (en) * 2020-11-02 2021-06-01 武汉工程大学 Intelligent logistics robot based on laser slam technology
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN112650255A (en) * 2020-12-29 2021-04-13 杭州电子科技大学 Robot indoor and outdoor positioning navigation system method based on vision and laser radar information fusion
CN113269837A (en) * 2021-04-27 2021-08-17 西安交通大学 Positioning navigation method suitable for complex three-dimensional environment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863075A (en) * 2022-07-05 2022-08-05 深圳市新天泽消防工程有限公司 Fire-fighting evacuation path planning method, device and equipment based on multiple sensors
CN114863075B (en) * 2022-07-05 2022-10-14 深圳市新天泽消防工程有限公司 Fire-fighting evacuation path planning method, device and equipment based on multiple sensors
CN115346375A (en) * 2022-10-18 2022-11-15 深圳市九洲卓能电气有限公司 Automobile 5G information transmission method
CN115546348A (en) * 2022-11-24 2022-12-30 上海擎朗智能科技有限公司 Robot mapping method and device, robot and storage medium
CN115546348B (en) * 2022-11-24 2023-03-24 上海擎朗智能科技有限公司 Robot mapping method and device, robot and storage medium
CN116030213A (en) * 2023-03-30 2023-04-28 千巡科技(深圳)有限公司 Multi-machine cloud edge collaborative map creation and dynamic digital twin method and system
CN116160458A (en) * 2023-04-26 2023-05-26 广州里工实业有限公司 Multi-sensor fusion rapid positioning method, equipment and system for mobile robot
CN116500205A (en) * 2023-06-26 2023-07-28 中国农业科学院农业信息研究所 Underground leaching monitoring robot system and method for farmland nitrogen
CN116500205B (en) * 2023-06-26 2023-09-22 中国农业科学院农业信息研究所 Underground leaching monitoring robot system and method for farmland nitrogen
CN117506940A (en) * 2024-01-04 2024-02-06 中国科学院自动化研究所 Robot track language description generation method, device and readable storage medium
CN117506940B (en) * 2024-01-04 2024-04-09 中国科学院自动化研究所 Robot track language description generation method, device and readable storage medium

Also Published As

Publication number Publication date
CN114474061B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN114474061B (en) Cloud service-based multi-sensor fusion positioning navigation system and method for robot
US11794785B2 (en) Multi-task machine-learned models for object intention determination in autonomous driving
US20200302662A1 (en) System and Methods for Generating High Definition Maps Using Machine-Learned Models to Analyze Topology Data Gathered From Sensors
US10192113B1 (en) Quadocular sensor design in autonomous platforms
US10496104B1 (en) Positional awareness with quadocular sensor in autonomous platforms
CN114384920B (en) Dynamic obstacle avoidance method based on real-time construction of local grid map
Dickmanns et al. An integrated spatio-temporal approach to automatic visual guidance of autonomous vehicles
US20190278273A1 (en) Odometry system and method for tracking traffic lights
Eresen et al. Autonomous quadrotor flight with vision-based obstacle avoidance in virtual environment
US20210303922A1 (en) Systems and Methods for Training Object Detection Models Using Adversarial Examples
US11580851B2 (en) Systems and methods for simulating traffic scenes
US20210278523A1 (en) Systems and Methods for Integrating Radar Data for Improved Object Detection in Autonomous Vehicles
US20220032452A1 (en) Systems and Methods for Sensor Data Packet Processing and Spatial Memory Updating for Robotic Platforms
US20220036579A1 (en) Systems and Methods for Simulating Dynamic Objects Based on Real World Data
US20230027212A1 (en) Method and system for dynamically updating an environmental representation of an autonomous agent
US20220153310A1 (en) Automatic Annotation of Object Trajectories in Multiple Dimensions
CN113126115A (en) Semantic SLAM method and device based on point cloud, electronic equipment and storage medium
CN113758488A (en) Indoor positioning method and equipment based on UWB and VIO
Hirose et al. ExAug: Robot-conditioned navigation policies via geometric experience augmentation
Gajjar et al. A comprehensive study on lane detecting autonomous car using computer vision
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
CN116352722A (en) Multi-sensor fused mine inspection rescue robot and control method thereof
US11960290B2 (en) Systems and methods for end-to-end trajectory prediction using radar, LIDAR, and maps
US20220035376A1 (en) Systems and Methods for End-to-End Trajectory Prediction Using Radar, Lidar, and Maps
Lai et al. A delivery robot based on IoT environment perception and real-time positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant