CN117970925A - Robot real-time obstacle avoidance and dynamic path planning method and system - Google Patents


Info

Publication number
CN117970925A
Authority
CN
China
Prior art keywords
obstacle
dynamic
prediction
static
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311863805.4A
Other languages
Chinese (zh)
Inventor
莫威
罗磊
黄雅阁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Smartstate Technology Co ltd
Original Assignee
Shanghai Smartstate Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Smartstate Technology Co ltd filed Critical Shanghai Smartstate Technology Co ltd
Priority to CN202311863805.4A priority Critical patent/CN117970925A/en
Publication of CN117970925A publication Critical patent/CN117970925A/en
Pending legal-status Critical Current

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a real-time obstacle avoidance and dynamic path planning method and system for a robot. The method comprises three parts: environmental perception, obstacle representation and autonomous movement. The environmental perception part includes an image acquisition module, a calibration module and a multi-sensor fusion mechanism. The obstacle representation part first classifies obstacles as static or dynamic, then performs pose recognition, establishes an obstacle model with a simplified boundary representation, predicts the motion trajectory and range of dynamic obstacles, and extracts the characteristic outline of each obstacle. The autonomous movement part comprises obstacle avoidance decision and motion control: if a dynamic obstacle exists, the moving route is adjusted in real time within the local environment using collision detection and a trajectory re-planning algorithm, and motion control commands are then output to the robot based on the planned route, realizing the functions of obstacle avoidance, route selection and movement. By coordinating these modules and mechanisms, a reliable, fast, highly autonomous and high-precision real-time obstacle avoidance system is realized.

Description

Robot real-time obstacle avoidance and dynamic path planning method and system
Technical Field
The invention relates to the technical field of industrial robot control, in particular to a real-time obstacle avoidance and dynamic path planning method and system for a robot.
Background
With the development of informatization and intelligentization, robots are widely applied in industrial production scenes such as intelligent factories and unmanned workshops, replacing manual labor in tasks such as assembly, machining, measurement and quality inspection. In dynamic environments, the obstacle avoidance strategy of a mobile robot is a concrete means of empowering it, and robot obstacle avoidance technology has become a research hotspot in the field of intelligent robot motion control.
Obstacle avoidance planning means that, in a complex working environment, a robot autonomously senses the environment and searches for the shortest path that avoids obstacles while moving to a target point. Traditional obstacle avoidance methods include the visibility graph method, the grid method, the free-space method, the artificial neural network method and the like, and are generally applied to scenes where obstacle information is known and fixed. In actual work, however, environment information may be lacking and obstacles are often in motion; a robot using a traditional obstacle avoidance method is difficult to adjust in real time, has low obstacle avoidance reliability for critical obstacles such as human bodies, and can easily cause safety accidents. To solve these problems, so that robots can move clearly, safely and efficiently in unknown, complex and dynamic environments, a series of intelligent real-time obstacle avoidance and dynamic path planning methods have been developed using technologies such as visual sensing and artificial intelligence.
Dynamic path planning mainly involves the following two problems: firstly, the robot can detect obstacles on a path in real time; second, the robot can dynamically adjust the movement path according to the obstacle information.
Solving the first problem involves environment perception technology. The prior art provides multi-sensor fusion perception systems to overcome environmental interference caused by lighting, background clutter and the like, as well as the insufficient precision of pure distance sensors: vision and depth sensors are combined to detect the robot's motion data, the environment, obstacles in the environment and target point information.
Solving the second problem involves path planning technology. Path planning is divided into global planning and local planning: a collision-free optimized path from the starting point to the target point is obtained from a global map, local environment information is collected in real time by sensors, and dynamic obstacle avoidance adjustment of the local path is performed according to a collision prediction model.
Patent document CN111045433B (application number 201911421665.9) discloses an obstacle avoidance method for a robot, and a computer-readable storage medium. In the invention, the obstacle avoidance method of the robot comprises the following steps: acquiring an obstacle point cloud of the environment of the robot at the current moment, and establishing a dynamic layer according to the current obstacle point cloud; acquiring a static image layer of the environment where the robot is located; comparing the dynamic layer with the static layer to determine a static barrier of the environment where the robot is located; and determining the path of the robot according to the position information of the static obstacle.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a real-time obstacle avoidance and dynamic path planning method and system for a robot.
The invention provides a real-time obstacle avoidance and dynamic path planning method for a robot, which comprises the following steps:
step S1: generating a global map based on the acquired three-dimensional point cloud data;
step S2: classifying obstacles in the environment as dynamic or static according to the global map, performing trajectory prediction, reachable area prediction and pose estimation for dynamic obstacles, and modeling the outline of static obstacles;
step S3: performing collision detection based on the global map according to the static obstacle outline model and the dynamic obstacle trajectory prediction, reachable area prediction and pose estimation;
step S4: performing trajectory planning based on the feedback of collision detection.
Preferably, the step S1 employs:
step S1.1: detecting the environment by combining vision with other sensors, fusing the collected information and converting it into three-dimensional point cloud data;
step S1.2: generating a global map from the three-dimensional point cloud data;
step S1.3: initializing the robot's current state information, target point coordinates, obstacle positions and motion parameters.
Preferably, the step S2 employs:
step S2.1: classifying obstacles in the environment as dynamic or static according to the global map, and acquiring three-dimensional edge contour feature information;
step S2.2: establishing a contour model of each static obstacle from the three-dimensional edge contour feature information by voxelization into an enveloping body;
step S2.3: performing trajectory prediction, reachable area prediction and pose estimation for dynamic obstacles.
Preferably, the step S3 employs: performing collision detection using a geometric envelope method and a projection intersection method, based on the global map, according to the static obstacle outline model and the dynamic obstacle trajectory prediction, reachable area prediction and pose estimation.
Preferably, the step S4 employs:
step S4.1: establishing an optimized global path between the starting point and the target point according to the global map and the static obstacle outline model;
step S4.2: for environments with dynamic obstacles, carrying out real-time local trajectory re-planning based on the trajectory prediction of the dynamic obstacles, so as to realize the real-time obstacle avoidance function.
The invention also provides a real-time obstacle avoidance and dynamic path planning system for a robot, which comprises the following modules:
module M1: generating a global map based on the acquired three-dimensional point cloud data;
module M2: classifying obstacles in the environment as dynamic or static according to the global map, performing trajectory prediction, reachable area prediction and pose estimation for dynamic obstacles, and modeling the outline of static obstacles;
module M3: performing collision detection based on the global map according to the static obstacle outline model and the dynamic obstacle trajectory prediction, reachable area prediction and pose estimation;
module M4: track planning is performed based on feedback of collision detection.
Preferably, the module M1 employs:
module M1.1: detecting the environment by combining vision with other sensors, fusing the collected information and converting it into three-dimensional point cloud data;
module M1.2: generating a global map from the three-dimensional point cloud data;
module M1.3: initializing the robot's current state information, target point coordinates, obstacle positions and motion parameters.
Preferably, the module M2 employs:
module M2.1: classifying obstacles in the environment as dynamic or static according to the global map, and acquiring three-dimensional edge contour feature information;
module M2.2: establishing a contour model of each static obstacle from the three-dimensional edge contour feature information by voxelization into an enveloping body;
module M2.3: performing trajectory prediction, reachable area prediction and pose estimation for dynamic obstacles.
Preferably, the module M3 employs: performing collision detection using a geometric envelope method and a projection intersection method, based on the global map, according to the static obstacle outline model and the dynamic obstacle trajectory prediction, reachable area prediction and pose estimation.
Preferably, the module M4 employs:
module M4.1: establishing an optimized global path between the starting point and the target point according to the global map and the static obstacle outline model;
module M4.2: for environments with dynamic obstacles, carrying out real-time local trajectory re-planning based on the trajectory prediction of the dynamic obstacles, so as to realize the real-time obstacle avoidance function.
In summary, compared with the prior art, the invention has the following beneficial effects:
1. The invention further improves the self-adaptive capability of an intelligent robot and can handle dynamic obstacles in real time.
2. The invention senses the working environment more accurately and over a larger range, reduces manual work, and improves the safety and reliability of industrial robot operation; the trajectory can be dynamically optimized, improving the accuracy of reaching the target point.
3. The invention uses a multi-sensor fusion mechanism in the environment perception part, combining depth camera perception with lidar and other sensors. This retains the advantages of radar ranging, such as rapid data acquisition, high measurement precision and good robustness, while the depth camera additionally acquires texture features, vertical data of the three-dimensional space and other information. Compared with traditional perception systems that use a single vision camera or a single sensor, this improves the precision of the static environment map, the computational efficiency in dynamic environments, and the perceivable range and dimensionality.
4. The invention classifies obstacles as dynamic or static, with dynamic obstacles triggering dynamic adjustment mechanisms such as motion state prediction and local trajectory re-planning. Adopting different obstacle avoidance routes for obstacles in different states simplifies the algorithm as much as possible, improves computational efficiency, and reduces the influence of the environment.
5. By adding a local trajectory re-planning module after traditional global trajectory planning, the invention can avoid moving objects in the environment in real time, improving the application safety and degree of autonomy of industrial robots.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a general flow chart of the present invention for implementing dynamic planning and real-time obstacle avoidance functions for a grasping robot;
FIG. 2 is a flow chart of the invention in which the context awareness is based on multi-sensor fusion awareness;
FIG. 3 is a flow chart of dynamic path planning in the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
Example 1
The invention aims to provide a robot obstacle avoidance system that senses the environment in real time, automatically and dynamically adjusts its path, and offers a high degree of completion in grasping targets, high stability, high safety and high control precision, overcoming the shortcomings of current autonomous obstacle avoidance technology in the intelligent robot field.
According to the invention, as shown in fig. 1-3, the real-time obstacle avoidance and dynamic path planning method for the grabbing robot comprises the following steps:
step S1: the image acquisition module acquires initial RGB images, point clouds, depth, coordinates, speed and other information; the acquired objects include the camera's working background, obstacles and the grasping target;
step S2: the calibration module uses the Zhang Zhengyou camera calibration method to convert between three-dimensional world coordinates and two-dimensional image coordinates, and hand-eye calibration obtains the coordinates of the target object in the robot coordinate system, thereby yielding the transformation between the camera and the tool hand;
step S3: the multi-sensor fusion mechanism first unifies the coordinate systems of the sensors, then uses a depth fusion algorithm to synchronously combine the point clouds acquired by the sensors with other data types such as RGB, obtaining a consistent description of the robot state information, the target points, and the position and motion information of obstacles in the environment;
step S4: obstacle representation: objects in the environment are classified as dynamic or static using a CNN-based recognition algorithm; for dynamic obstacles whose state is updated, trajectory prediction, reachable area prediction and pose estimation models are established; static objects are modeled to obtain a simplified contour representation, and gray values are applied along the envelope contour in the static environment map to represent the range of each obstacle;
step S5: collision detection: collision detection is performed based on the contour model information of the obstacles and the robot together with the motion prediction model; if a collision is predicted, the globally optimal route is regenerated;
step S6: trajectory planning, comprising global planning and local re-planning; global planning uses a trajectory generation algorithm to produce an optimal path from the starting point to the grasping target point according to the static environment map; if a dynamic obstacle exists, local re-planning is enabled; local re-planning is a local trajectory planning method in which information such as the position, motion and geometric properties of obstacles within a small range around the robot is obtained in real time, the dynamic problem is solved by treating it as equivalently static, the robot's planned trajectory is intersected with the obstacle positions to predict possibly interfering trajectories, a safe area in which the robot can avoid each obstacle is obtained, and the global trajectory is optimized and corrected;
step S7: the motion control module uses the controller and the communication module to convert the trajectory plan and obstacle avoidance decisions into instruction signals, realizing control of the robot's motion.
The principle of the real-time obstacle avoidance method is as follows: first, the environment is detected by sensors such as cameras, radar and ultrasonic sensors; the robot's current state information, target point coordinates, obstacle positions and motion parameters are initialized; and the information collected by the sensors is converted into an overall set of three-dimensional point cloud data by a fusion algorithm, from which a global map is generated. Obstacles are then divided into static and dynamic ones; obstacle contour and pose models are established; the reachable area of each dynamic obstacle is predicted and its minimum enveloping ellipse is generated. The state of the minimum enveloping ellipse is estimated and predicted by Kalman filtering, generating a predicted obstacle trajectory over a forward time horizon. Collision detection is performed according to the static obstacle contour model and the dynamic obstacle trajectory prediction, reachable area prediction and pose estimation. Finally, a globally optimal path is established according to the starting point, end point and related constraints; real-time local trajectory re-planning is performed between the current node and the next sub-goal, and motion instructions are sent.
Further, in step S1, the sensors used for vision-and-sensor fusion perception specifically include: a binocular depth camera, infrared distance sensors, lidar, ultrasonic sensors and the like. The lidar and infrared distance sensors are mounted on the robot body and acquire point cloud information as well as the position and motion data of the robot body, obstacles and the target object. The binocular depth cameras are suspended at the four corners of the robot's working environment to acquire a background map of the working area, including position data of the robot, obstacles and target points.
Further, in step S2, camera calibration uses the Zhang Zhengyou calibration method to convert between the three-dimensional space and two-dimensional imaging coordinate systems, obtaining the robot's position in the real world. Because the coordinate systems established by the multiple sensors are inconsistent, the sensors must be jointly calibrated to establish transformation relations. For a robot with a grasping target, hand-eye calibration is also needed so that the execution point of the moving end effector matches the data obtained by the perception part.
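The world-to-pixel mapping that such camera calibration recovers can be sketched with a pinhole model: calibration estimates the intrinsic matrix K and the extrinsic pose [R | t], after which any 3D world point can be projected to 2D image coordinates. All numeric values below are made-up illustrations, not values from the patent.

```python
import numpy as np

# Illustrative pinhole-camera model. Calibration (e.g. Zhang's method)
# recovers the intrinsics K and extrinsics [R | t]; projection then maps
# 3D world coordinates to 2D pixel coordinates.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx  (assumed values)
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with world axes
t = np.array([0.0, 0.0, 2.0])          # camera 2 m from the origin along +Z

def project(point_world):
    """Map a 3D world point to 2D pixel coordinates."""
    p_cam = R @ point_world + t        # world frame -> camera frame
    u, v, w = K @ p_cam                # camera frame -> image plane
    return np.array([u / w, v / w])    # perspective divide

pixel = project(np.array([0.0, 0.0, 0.0]))   # the world origin
```

With these assumed values the world origin lands exactly on the principal point (cx, cy), which is a quick sanity check of the model.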
Specifically, local and global coordinate systems are established, the coordinate transformation is performed using the camera calibration method to generate three-dimensional coordinates, and a robot kinematics model is built. The binocular depth camera captures environment images and, combined with its own spatial positioning, establishes a static environment map using a two-dimensional grid method. In the two-dimensional grid environment map, black grid cells represent obstacle areas, white cells represent free areas, and gray values describe obstacles; the state of each grid cell is estimated mainly using the Bayesian probability formula, and the map model is simulated under a Linux system in combination with vector algorithms and SLAM algorithms such as Cartographer and RTAB-Map.
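The per-cell Bayesian estimate described above is commonly implemented as a log-odds update. The following minimal sketch assumes a simple inverse sensor model (the 0.7/0.3 hit/miss probabilities are illustrative, not taken from the patent):

```python
import math

# Log-odds occupancy update for one grid cell: each sensor reading either
# raises (hit) or lowers (miss) the cell's log-odds of being occupied.
L_OCC = math.log(0.7 / 0.3)    # assumed inverse sensor model: p(occ|hit)=0.7
L_FREE = math.log(0.3 / 0.7)   # assumed p(occ|miss)=0.3

def update_cell(log_odds, hit):
    """Fuse one measurement into a cell's log-odds occupancy value."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

cell = 0.0                                       # prior p = 0.5
for measurement in (True, True, True, False):    # three hits, one miss
    cell = update_cell(cell, measurement)
p_occupied = probability(cell)                   # ends up clearly above 0.5
```

Thresholding `p_occupied` then yields the black (occupied) / white (free) / gray (uncertain) cells of the grid map.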
Further, in step S3, the two systems of distance sensing and visual perception are integrated by a multi-sensor fusion algorithm. Using a multi-source weighted information fusion method, the coordinate, RGB and depth information are combined to obtain time-synchronized, spatially unified three-dimensional point cloud information, which is used to construct grid, incremental, topological and geometric-feature maps of the environment and to display static environment information, and which provides real-time obstacle state data for functional modules such as collision detection and trajectory prediction during dynamic obstacle avoidance.
Further, in step S4, a neural-network-based recognition algorithm is used to classify static and dynamic obstacles. For static obstacles, an envelope contour model is established using the AABB bounding box algorithm or the OBB bounding box algorithm. Dynamic obstacles are tracked and predicted: first, Euclidean clustering is performed on the moving obstacle information collected at each moment using the K-means algorithm, reducing the interference of irregular shapes and Gaussian noise; second, an obstacle motion model is fitted, that is, a trajectory prediction model is constructed based on the motion law, and with the m known positions already acquired, Lagrange interpolation and a Kalman filtering algorithm are combined to update and iterate in real time according to the observed data, obtaining the regression equation, the mean square error of the state distribution and other data, completing the motion prediction and velocity estimation of the dynamic object; finally, a prediction horizon is set based on the state update equation of the dynamic obstacle, and a variance expression for the number of prediction steps over the horizon is constructed based on optimal prediction theory, yielding a multi-step elliptical envelope potential field of the region reachable by the centroid of the dynamic obstacle.
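The Kalman-filtering part of the tracking step above can be illustrated with a constant-velocity filter on one coordinate of an obstacle centroid. The time step and noise covariances below are assumptions for the sketch, not values from the patent:

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of a dynamic obstacle's
# centroid. State x = [position, velocity]; only position is measured.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
H = np.array([[1.0, 0.0]])               # measurement: position only
Q = np.eye(2) * 1e-3                     # process noise (assumed)
R = np.array([[0.05]])                   # measurement noise (assumed)

def kalman_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 51):                   # obstacle moving at ~1 m/s
    x, P = kalman_step(x, P, np.array([k * dt]))
predicted_pos = (F @ x)[0]               # one-step-ahead prediction
```

Iterating the predict step alone over several future steps, with the growing covariance P, is what produces the multi-step envelope of reachable positions described above.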
Further, in step S5, collision detection is performed using a geometric envelope method and a projection intersection method based on the static obstacle contour model and the dynamic obstacle trajectory prediction, reachable region prediction and pose estimation, and a global path is generated based on the feedback of collision detection.
Specifically, a line segment is generated between the robot's current position and the nearest surrounding node; using the geometric envelope method, the grid cells occupied by obstacle envelope bodies and the grid cells of the dynamic obstacles' reachable areas are determined and tested against the segment by projection intersection detection; segments that would collide are discarded and collision-free segments are retained; the segment endpoint then serves as a parent node, and the method continues cyclically to search for new child nodes.
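One common concrete form of this projection intersection test is a 2D slab test between a candidate path segment and an axis-aligned bounding box (AABB): the segment collides with the box iff their projections overlap on every axis. This is an illustrative reading of the method, not the patent's exact procedure:

```python
# Slab test: does the segment p-q intersect the AABB [box_min, box_max]?
# Projections must overlap on both axes for a collision to occur.

def segment_hits_aabb(p, q, box_min, box_max):
    t_enter, t_exit = 0.0, 1.0           # segment parameter window
    for axis in range(2):
        d = q[axis] - p[axis]
        if abs(d) < 1e-12:
            # Segment parallel to this slab: must already lie inside it.
            if not (box_min[axis] <= p[axis] <= box_max[axis]):
                return False
        else:
            t1 = (box_min[axis] - p[axis]) / d
            t2 = (box_max[axis] - p[axis]) / d
            lo, hi = min(t1, t2), max(t1, t2)
            t_enter, t_exit = max(t_enter, lo), min(t_exit, hi)
            if t_enter > t_exit:         # projections stopped overlapping
                return False
    return True

hit = segment_hits_aabb((0.0, 0.0), (4.0, 4.0), (1.0, 1.0), (2.0, 2.0))
miss = segment_hits_aabb((0.0, 0.0), (4.0, 0.0), (1.0, 1.0), (2.0, 2.0))
```

Segments for which this test returns `True` against any obstacle envelope would be discarded; the rest are kept as collision-free edges of the search tree.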
Further, in step S6, an optimized global path is first established between the starting point and the target point according to the static environment map of step S2 and the static obstacle models of step S4; it is then judged whether a dynamic obstacle exists, and if so, real-time local trajectory re-planning is performed according to the trajectory prediction of the dynamic obstacle to realize the real-time obstacle avoidance function.
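The "equivalent static" treatment used during local re-planning can be sketched as: predict where a moving obstacle will be when the robot traverses a path segment, then test that segment against a safety disc around the predicted position. The constant-velocity motion model and the safety radius are illustrative assumptions:

```python
import math

# Predict the obstacle at the robot's traversal time, then check clearance
# of the planned segment against a safety disc at that predicted position.

def predict_obstacle(pos, vel, t):
    """Constant-velocity prediction of the obstacle centre after t seconds."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def segment_clear(p, q, centre, radius):
    """True if segment p-q keeps at least `radius` distance from `centre`."""
    px, py = p; qx, qy = q; cx, cy = centre
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment to the obstacle centre.
    s = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
        ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nx, ny = px + s * dx, py + s * dy
    return math.hypot(cx - nx, cy - ny) >= radius

# Robot traverses (0,0) -> (4,0) in ~2 s; obstacle starts at (2,3)
# moving towards the path at (0, -1) m/s.
future = predict_obstacle((2.0, 3.0), (0.0, -1.0), t=2.0)
safe = segment_clear((0.0, 0.0), (4.0, 0.0), future, radius=0.5)
```

If the check fails, the local re-planner would replace the offending segment, which is the trajectory correction step described above.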
More specifically, a robot motion model and a background grid map are first established from the data obtained by the environment perception module. For static obstacles only, an obstacle avoidance strategy is adopted in which, based on constraints such as a visually judged feasible region, an optimal global path is generated by optimizing the node search using algorithms such as particle swarm optimization, the Dijkstra algorithm, the D* algorithm, the A* algorithm and LPA*. For environments with dynamic obstacles, real-time local path adjustment is performed using local trajectory re-planning algorithms, such as the dynamic window approach, the artificial potential field method and random tree expansion methods, together with collision detection algorithms, to avoid the obstacles.
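As one concrete instance of the grid-based global planners listed above (Dijkstra, A*, etc.), a minimal A* search on a 4-connected occupancy grid can be sketched as follows; the grid contents and unit step costs are illustrative:

```python
import heapq

# Minimal A* on a 4-connected occupancy grid (0 = free, 1 = obstacle),
# with a Manhattan-distance heuristic and unit move cost.

def astar(grid, start, goal):
    """Return a shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]      # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                    # already expanded
            continue
        came_from[node] = parent
        if node == goal:                         # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],       # 1 = obstacle cell
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # routes around the obstacle row
```

Replacing the heuristic `h` with 0 turns this into Dijkstra's algorithm, which is the relationship between the two planners named above.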
Further, in step S7, according to the trajectory decision obtained in step S6, the control module communicates with the robot, controls the robot to move to the target point according to the instructions, and then controls the robot arm to perform the grasping task.
The invention also provides a real-time obstacle avoidance and dynamic path planning system for a grasping robot, which can be realized by executing the steps of the above real-time obstacle avoidance and dynamic path planning method for a grasping robot; that is, those skilled in the art can understand the method as a preferred embodiment of the system.
Those skilled in the art will appreciate that the invention provides a system and its individual devices, modules, units, etc. that can be implemented entirely by logic programming of method steps, in addition to being implemented as pure computer readable program code, in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units for realizing various functions included in the system can also be regarded as structures in the hardware component; means, modules, and units for implementing the various functions may also be considered as either software modules for implementing the methods or structures within hardware components.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the application. The embodiments of the application and the features of the embodiments may be combined with each other arbitrarily without conflict.

Claims (4)

1. The real-time obstacle avoidance and dynamic path planning method for the robot is characterized by comprising the following steps of:
step S1: generating a global map based on the acquired three-dimensional point cloud data;
step S2: classifying obstacles in the environment as dynamic or static according to the global map, performing trajectory prediction, reachable area prediction and pose estimation for dynamic obstacles, and modeling the outline of static obstacles;
step S3: performing collision detection based on the global map according to the static obstacle outline model and the dynamic obstacle trajectory prediction, reachable area prediction and pose estimation;
step S4: track planning is performed based on feedback of collision detection;
The step S2 adopts:
firstly, identifying and classifying static and dynamic obstacles based on a neural network;
for static obstacles, establishing an envelope contour model using an AABB bounding box or an OBB bounding box;
tracking and predicting dynamic obstacles: first, Euclidean clustering is performed on the moving obstacle information collected at each moment using the K-means algorithm, reducing the interference of irregular shapes and Gaussian noise; second, an obstacle motion model is fitted, that is, a trajectory prediction model is constructed based on the motion law, and with the m known positions already acquired, Lagrange interpolation and a Kalman filtering algorithm are combined to update and iterate in real time according to the observed data, obtaining the regression equation and the mean square error of the state distribution, completing the motion prediction and velocity estimation of the dynamic object; finally, a prediction horizon is set based on the state update equation of the dynamic obstacle, and a variance expression for the number of prediction steps over the horizon is constructed based on optimal prediction theory, yielding a multi-step elliptical envelope potential field of the region reachable by the centroid of the dynamic obstacle;
The step S3 adopts: generating a line segment between the robot's current position and the nearest surrounding node; using the geometric envelope method, determining the grid cells occupied by obstacle envelope bodies and the grid cells of the dynamic obstacles' reachable areas and performing projection intersection detection against the segment; discarding segments that would collide and retaining collision-free segments; taking the segment endpoint as a parent node and continuing cyclically with this method to search for new child nodes;
The step S4 employs: firstly, establishing an optimized global path between the starting point and the target point according to the static environment map and the static obstacle model, then judging whether a dynamic obstacle exists, and if so, performing real-time local trajectory re-planning according to the trajectory prediction of the dynamic obstacle to realize the real-time obstacle avoidance function;
Specifically, a robot motion model and a background grid map are first established from the data obtained by the environment perception module; for static obstacles only, constraints such as a visually judged feasible region are first adopted, and the node search is then optimized using a particle swarm optimization algorithm, the Dijkstra algorithm, the D* algorithm, the A* algorithm and the LPA* algorithm to generate an optimal global path; for environments with dynamic obstacles, real-time local path adjustment is performed using local trajectory re-planning algorithms comprising the dynamic window approach, the artificial potential field method and random tree expansion methods, together with collision detection algorithms, to avoid the obstacles.
2. The method for real-time obstacle avoidance and dynamic path planning for a robot according to claim 1, wherein step S1 employs:
step S1.1: detecting the environment by combining vision with other sensors, fusing the various kinds of collected information, and converting the fused information into three-dimensional point cloud data;
Step S1.2: generating a global map according to the three-dimensional point cloud data;
Step S1.3: initializing the current state information, the target point coordinates, the obstacle positions and the motion parameters of the robot.
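A global map such as step S1.2 produces is commonly represented as an occupancy grid. The sketch below projects 3-D point cloud data onto an x-y grid; the resolution and grid size are assumed parameters, and ground filtering and probabilistic updates are omitted for brevity:

```python
import numpy as np

def pointcloud_to_grid(points, resolution=0.5, size=(50, 50)):
    """Mark the grid cell under each (x, y, z) point as occupied.
    `resolution` is the cell edge length in metres (assumed)."""
    grid = np.zeros(size, dtype=np.uint8)
    for x, y, z in points:
        i, j = int(x / resolution), int(y / resolution)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i, j] = 1
    return grid

occupancy = pointcloud_to_grid([(0.2, 0.2, 1.0), (5.0, 10.0, 0.5)])
```

The resulting binary grid is the form consumed by the grid-based planners and the collision-detection step later in the claims.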
3. A robot real-time obstacle avoidance and dynamic path planning system, characterized by comprising:
module M1: generating a global map based on the acquired three-dimensional point cloud data;
Module M2: classifying the dynamic and static obstacles in the environment according to the global map, and performing trajectory prediction, reachable-region prediction and pose estimation for the dynamic obstacles; modeling the contour of the static obstacles;
Module M3: performing collision detection based on the global map, according to the static obstacle contour model and the dynamic obstacle trajectory prediction, reachable-region prediction and pose estimation;
Module M4: performing trajectory planning based on the feedback of the collision detection;
the module M2 employs:
first, identifying and classifying static obstacles and dynamic obstacles based on a neural network;
for static obstacles, establishing a static obstacle envelope contour model using an AABB bounding box or an OBB bounding box;
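Both envelope types can be built directly from an obstacle's clustered point set. The sketch below shows a generic construction; the PCA-based OBB is one common realization and is an assumption here, not necessarily the patent's exact method:

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box: per-axis minimum and maximum corners."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def obb(points):
    """Oriented bounding box via PCA: the eigenvectors of the point
    covariance give the box axes, and the point extents along those
    axes give the box half-sizes."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    _, axes = np.linalg.eigh(np.cov(centered.T))  # columns = box axes
    local = centered @ axes                       # points in box frame
    return centroid, axes, local.min(axis=0), local.max(axis=0)
```

The AABB is cheaper to intersect (and pairs naturally with the grid-based slab test used in module M3), while the OBB fits elongated obstacles more tightly at the cost of a rotation per query.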
Tracking and prediction are performed for dynamic obstacles. First, Euclidean clustering is applied with the K-means algorithm to the moving-obstacle information acquired at each moment, so as to reduce interference from irregular shapes and Gaussian noise. Second, an obstacle motion model is fitted: a predicted-trajectory model is constructed according to the motion law, and, using the m known sites already acquired, Lagrange interpolation is combined with a Kalman filtering algorithm to update and iterate in real time on the observed data, yielding the regression equation and the mean-square error of the state distribution, thereby completing motion prediction and velocity estimation for the dynamic object. Finally, a prediction horizon is set based on the state-update equation of the dynamic obstacle, and a variance expression for the prediction steps over that horizon is constructed based on optimal prediction theory, yielding a multi-step elliptical envelope of the region reachable by the centroid of the dynamic obstacle;
The module M3 employs: generating a line segment between the current position of the robot and the nearest surrounding nodes, determining by a geometric envelope method the grids occupied by the obstacle envelopes and by the reachable region of the dynamic obstacle, performing projection intersection detection, discarding the segments that are not collision-free and retaining the collision-free ones, taking the endpoint of each retained segment as a parent node, and cyclically continuing to search for new child nodes by the same method;
The module M4 employs: first establishing an optimized global path between the starting point and the target point according to the static environment map and the static obstacle models within it, then judging whether a dynamic obstacle exists; if so, real-time local trajectory re-planning is carried out according to the trajectory prediction of the dynamic obstacle, so as to realize the real-time obstacle-avoidance function;
Specifically, a robot motion model and a background grid map are first established from the data obtained by the environment sensing module. For environments containing only static obstacles, constraint conditions such as a visibility-based feasible region are applied first, and the node search is then optimized using the particle swarm optimization algorithm, the Dijkstra algorithm, the D* algorithm, the A* algorithm and the LPA* algorithm to generate an optimal global path; for environments containing dynamic obstacles, real-time local path adjustment is performed using local trajectory re-planning algorithms, including the dynamic window method, the artificial potential field method and random tree expansion methods, together with the associated collision detection algorithms, so as to avoid the obstacles.
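Of the local re-planning methods named here, the artificial potential field method is the simplest to sketch: the robot is attracted toward the goal and repelled by nearby obstacles. The gains and influence distance below are illustrative assumptions:

```python
import math

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Resultant force of a basic artificial potential field:
    linear attraction toward the goal plus inverse-square repulsion
    from each point obstacle closer than the influence distance d0."""
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

Following the force direction at each control step yields the real-time local adjustment; in practice the method is combined with the collision-detection step above because pure potential fields can stall in local minima.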
4. The robot real-time obstacle avoidance and dynamic path planning system according to claim 3, wherein the module M1 employs:
module M1.1: detecting the environment by combining vision with other sensors, fusing the various kinds of collected information, and converting the fused information into three-dimensional point cloud data;
Module M1.2: generating a global map according to the three-dimensional point cloud data;
module M1.3: initializing the current state information, the target point coordinates, the obstacle positions and the motion parameters of the robot.
CN202311863805.4A 2023-12-29 2023-12-29 Robot real-time obstacle avoidance and dynamic path planning method and system Pending CN117970925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311863805.4A CN117970925A (en) 2023-12-29 2023-12-29 Robot real-time obstacle avoidance and dynamic path planning method and system

Publications (1)

Publication Number Publication Date
CN117970925A true CN117970925A (en) 2024-05-03

Family

ID=90858980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311863805.4A Pending CN117970925A (en) 2023-12-29 2023-12-29 Robot real-time obstacle avoidance and dynamic path planning method and system

Country Status (1)

Country Link
CN (1) CN117970925A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118131781A (en) * 2024-05-10 2024-06-04 中国特种设备检测研究院 Method and device for tracking invisible environment path of storage tank bottom plate in oil detection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination