CN113433949A - Automatic following object conveying robot and object conveying method thereof - Google Patents

Info

Publication number
CN113433949A
Authority
CN
China
Prior art keywords
main body
robot main
robot
mobile communication
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110811313.5A
Other languages
Chinese (zh)
Inventor
肖夏
支涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd
Priority to CN202110811313.5A
Publication of CN113433949A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an automatic following object conveying robot and an object conveying method thereof. The automatic following object conveying method comprises: the mobile communication device establishes a communication connection with the robot main body; the position and navigation angle of the robot main body and the position information of the mobile communication device are acquired in real time, the desired speed, yaw angle and desired spacing of the robot main body are calculated, and a following path is generated; the robot main body is controlled to move along the following path while image information around it is acquired in real time, and whether an obstacle exists at the next node is judged; if an obstacle exists, an image of the obstacle is acquired, and its outline and moving speed are obtained through feature extraction; an obstacle avoidance path is then formulated based on the environmental point cloud collected by a laser radar, and the robot main body is controlled to move along the obstacle avoidance path, achieving good stability and a good obstacle avoidance effect.

Description

Automatic following object conveying robot and object conveying method thereof
Technical Field
The invention relates to the technical field of robots, in particular to an automatic following object conveying robot.
Background
Service robots have great development potential worldwide, and major countries such as China, the United States, Japan, South Korea and Germany are all developing them. The application space is especially wide in developed countries and in regions with higher labor costs and serious population aging. Compared with industrial robots, service robots are closer to end users, their audience is much broader, and because the service industry itself is varied, their functions, types and characteristics are more diverse. In a sense, service robots have a wider market space than industrial robots, and can replace humans in many fields of repetitive labor.
With the continuous development of science and technology, China is gradually stepping into an intelligent society. Intelligence plays an important role in many aspects of daily life such as clothing, food and housing, and robots are gradually entering people's daily lives. A delivery robot capable of automatic following is therefore needed; it can also be used in service scenarios such as hotels, providing convenience to guests and improving the in-store experience.
Disclosure of Invention
The invention provides an automatic following object-conveying robot, which comprises a robot main body, mobile communication equipment, an automatic following system, an infrared camera, a laser radar, an obstacle avoidance system and a controller.
Another aim of the invention is to provide an automatic following object conveying method, which comprises: establishing a communication connection between the mobile communication device and the robot main body; calculating the desired speed, yaw angle and desired spacing of the robot main body and generating a following path; formulating an obstacle avoidance path through obstacle identification and feature extraction; and controlling the robot main body to move along the following path or the obstacle avoidance path, so as to realize automatic following object conveying.
The technical scheme of the invention is as follows:
an automatically following delivery robot comprising:
a robot main body having a position sensor and an angle sensor;
a mobile communication device having a location sensor;
the automatic following system is detachably arranged on the robot main body, can be in communication connection with the mobile communication equipment, and generates a following path according to the position of the mobile communication equipment, the position of the robot main body and the navigation angle;
an infrared camera detachably provided on the robot main body, the infrared camera being capable of capturing an image of the periphery of the robot main body;
the laser radar is detachably arranged on the robot main body, is coaxially arranged with the infrared camera, and can collect environmental point cloud around the robot main body;
the obstacle avoidance system is connected with the laser radar, the infrared camera and the automatic following system, and can analyze the image, identify the obstacle and plan an obstacle avoidance path;
and the controller is connected with the automatic following system and the obstacle avoidance system and can control the robot main body to move according to the following path or the obstacle avoidance path.
Preferably, the automatic following system includes:
the connection module is connected with the mobile communication equipment through one or more of Bluetooth, a dedicated APP, or a WeChat applet;
the information acquisition module is used for acquiring the position information of the mobile communication equipment and the position and navigation angle of the robot main body;
and the algorithm module is connected with the information acquisition module, can calculate the desired speed, yaw angle and desired spacing of the robot main body according to the position information of the mobile communication equipment and the position and navigation angle of the robot main body, and generates a following path.
Preferably, the robot body has a storage tray and/or a detachable carrying cart.
An automatic following object conveying method based on the robot comprises the following steps:
the mobile communication equipment establishes communication connection with the robot main body;
acquiring the position and navigation angle of the robot main body and the position information of the mobile communication equipment in real time, calculating the desired speed, yaw angle and desired spacing of the robot main body, and generating a following path;
the robot main body moves along the following path, image information around the robot main body is obtained in real time, and whether an obstacle exists in the next node or not is judged;
if the obstacle exists, acquiring an image of the obstacle, and obtaining the outline and the moving speed of the obstacle through feature extraction;
through laser radar, the environmental point cloud around the robot is collected, an obstacle avoidance path is formulated based on the outline and the moving state of the obstacle, and the robot main body moves along the obstacle avoidance path.
Preferably, the desired speed, the yaw angle and the desired spacing are calculated by a formula that appears only as an image in the original publication, where v_x denotes the desired lateral velocity, v_y the desired longitudinal velocity, α_f the angle of the robot main body, (x_f, y_f) the coordinates of the robot main body, and (x_l, y_l) the coordinates of the mobile communication device; a symbol shown only as an image denotes the relative yaw angle between the robot main body and the mobile communication device; w denotes the yaw angle, v_xy the desired speed, L the wheelbase of the robot main body, D the desired spacing, v_l the moving speed of the mobile communication device, v_f the moving speed of the robot main body, and D_p a safe distance.
Preferably, the feature extraction includes:
performing pixel-by-pixel sliding on pixel points in the obstacle image, and calculating the local contrast of each pixel point to obtain a local contrast map of the whole image;
performing threshold segmentation on the local contrast map, identifying an obstacle in the image, and determining an obstacle outline and a center point;
and calculating the moving speed of the obstacle by adopting an interframe difference method.
Preferably, the obstacle avoidance path includes:
mapping the environmental point cloud data to a two-dimensional grid map, and filtering to remove invalid point cloud information;
updating a grid map occupied by the obstacles according to the moving speed of the obstacles to obtain a dynamic probability grid map of the obstacles and the probability of the obstacles in different grids, and calculating a cost potential field of the obstacles;
drawing an equipotential curve of the cost potential field, and calculating the slope between each grid point in the equipotential curve and the current position of the robot main body and the current position of the mobile communication equipment to obtain a plurality of tangent lines of the equipotential curve;
traversing tangent lines, solving a minimum sub-tree containing the current position of the robot main body and the current position of the mobile communication equipment based on a minimum spanning tree method, and generating an obstacle avoidance path.
Preferably, the method further comprises the following steps:
the mobile communication device sends navigation information to the robot main body, and the robot main body moves according to the navigation information.
Preferably, the navigation information is obtained by digital map tool planning based on the position information of the mobile communication device and the position of the robot main body.
Preferably, the information transmission between the mobile communication device and the robot adopts a ZigBee protocol.
The invention has the beneficial effects that:
1. the invention designs and develops an automatic following object-conveying robot, an automatic following system, an infrared camera, a laser radar, an obstacle avoidance system and a controller are arranged on a robot main body, a following path is planned through information interaction between the automatic following system and mobile communication equipment, an obstacle avoidance system is used for identifying and avoiding obstacles, and an obstacle avoidance path is planned, so that stable dynamic path tracking is realized, and the practicability is high.
2. The invention also designs an automatic following object conveying method which can analyze the position information of the mobile communication equipment, the position and the navigation angle of the robot main body, formulate a following path, formulate an obstacle avoidance path through obstacle identification and feature extraction, control the robot main body to move along the following path or the obstacle avoidance path and have good following and obstacle avoidance effects.
Drawings
Fig. 1 is a schematic structural view of an automatically following object conveying robot provided by the present invention.
Fig. 2 is a schematic structural diagram of an automatic following system in an embodiment of the present invention.
Fig. 3 is a flow chart of a delivery method capable of automatic following according to the present invention.
Detailed Description
The present invention is described in terms of particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. The described embodiments are merely exemplary and are not intended to limit the invention to the particular forms disclosed. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, shall fall within the protection scope of the present invention.
It should be noted that in the description of the present invention, the terms "in", "upper", "lower", "lateral", "inner", etc. indicate directions or positional relationships based on those shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Furthermore, it should be noted that, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; may be a mechanical connection; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
As shown in fig. 1, an automatic following delivery robot includes a robot main body 110, a mobile communication device 120, an automatic following system 130, an infrared camera 140, a laser radar 150, an obstacle avoidance system, and a controller.
The robot main body 110 has a position sensor and an angle sensor, and the mobile communication device 120 has a position sensor. The automatic following system 130 is detachably disposed on the robot main body 110; it can be communicatively connected with the mobile communication device 120 and generates a following path according to the position of the mobile communication device 120 and the position and navigation angle of the robot main body 110. The infrared camera 140 is detachably disposed on the robot main body 110 and can capture images around it. The laser radar 150 is detachably disposed on the robot main body 110, coaxially with the infrared camera 140, and can collect environmental point clouds around the robot main body 110. The obstacle avoidance system is connected with the laser radar 150, the infrared camera 140 and the automatic following system 130; it can analyze the images, recognize obstacles and plan an obstacle avoidance path. The controller is connected with the automatic following system 130 and the obstacle avoidance system, and can control the robot main body 110 to move along the following path or the obstacle avoidance path.
Preferably, the robot main body 110 has a storage tray 111 and/or a detachable carrying cart 112 for carrying small items or large luggage.
As shown in fig. 2, the automatic following system 130 includes a connection module 131, an information collection module 132, and an algorithm module 133.
The connection module 131 establishes a connection with the mobile communication device 120 through one or more of Bluetooth, a dedicated APP, or a WeChat applet. The information acquisition module 132 acquires the position information of the mobile communication device 120 and the position and navigation angle of the robot main body. The algorithm module 133 is connected with the information acquisition module 132; it can calculate the desired speed, yaw angle and desired spacing of the robot main body 110 according to the position information of the mobile communication device 120 and the position and navigation angle of the robot main body 110, and generate a following path.
By arranging the automatic following system, the infrared camera, the laser radar, the obstacle avoidance system and the controller on the robot main body, a following path is planned through information interaction between the automatic following system and the mobile communication device, and the obstacle avoidance system identifies and avoids obstacles and plans an obstacle avoidance path, so that stable dynamic path tracking is realized with high practicability.
As shown in fig. 3, an automatic following object conveying method based on the robot includes:
and S110, the mobile communication equipment establishes communication connection with the robot main body.
And S120, acquiring the position and navigation angle of the robot main body and the position information of the mobile communication equipment in real time, calculating the desired speed, yaw angle and desired spacing of the robot main body, and generating a following path.
The desired speed, the yaw angle and the desired spacing are calculated by a formula that appears only as an image in the original publication, where v_x denotes the desired lateral velocity, v_y the desired longitudinal velocity, α_f the angle of the robot main body, (x_f, y_f) the coordinates of the robot main body, and (x_l, y_l) the coordinates of the mobile communication device; a symbol shown only as an image denotes the relative yaw angle between the robot main body and the mobile communication device; w denotes the yaw angle, v_xy the desired speed, L the wheelbase of the robot main body, D the desired spacing, v_l the moving speed of the mobile communication device, v_f the moving speed of the robot main body, and D_p a safe distance.
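The control law itself is published only as an image. Purely as an illustration of how the listed quantities might interact, here is a minimal follow-controller sketch; the proportional law, the gain k, and the stopping behaviour are assumptions, not the patented formula:

```python
import math

def follow_command(xf, yf, alpha_f, xl, yl, D, Dp, k=1.0):
    """Toy follow controller: drive toward the mobile device, slowing to a
    stop at the desired spacing D and never moving inside the safe distance
    Dp. The proportional law is illustrative, not the patented formula."""
    dx, dy = xl - xf, yl - yf
    dist = math.hypot(dx, dy)
    # bearing from robot to device, and relative yaw vs. robot heading alpha_f
    bearing = math.atan2(dy, dx)
    rel_yaw = (bearing - alpha_f + math.pi) % (2 * math.pi) - math.pi
    # move only while farther than the desired spacing; freeze inside Dp
    speed = k * max(0.0, dist - D) if dist > Dp else 0.0
    vx, vy = speed * math.cos(bearing), speed * math.sin(bearing)
    return vx, vy, rel_yaw
```

With the device 4 units ahead and D = 1, this yields a forward command of 3 units/s at zero relative yaw; inside the safe distance the robot stops.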
S130, the robot main body moves along the following path, image information around the robot main body is obtained in real time, and whether an obstacle exists in the next node or not is judged.
And S140, if the obstacle exists, acquiring an image of the obstacle, and obtaining the outline and the moving speed of the obstacle through feature extraction.
Performing pixel-by-pixel sliding on pixel points in the obstacle image, and calculating the local contrast of each pixel point to obtain a local contrast map of the whole image;
performing threshold segmentation on the local contrast map, identifying an obstacle in the image, and determining an obstacle outline and a center point;
Calculating the moving speed of the obstacle by an interframe difference method, where the moving speed can be expressed as

v = √((x - x_a)² + (y - y_a)²) / t

where (x, y) represents the coordinates of the center point of the obstacle at the current moment, (x_a, y_a) the coordinates of the center point at the previous moment, and t the coordinate update time difference.
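The three feature-extraction steps above can be sketched as follows; the window size, the contrast definition (center pixel minus neighbourhood mean) and the threshold are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def local_contrast_map(img, win=3):
    """Slide a win x win window pixel by pixel over the image; the local
    contrast is taken here as (center pixel - neighbourhood mean), an
    illustrative definition the patent does not pin down."""
    h, w = img.shape
    r = win // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = img[i, j] - padded[i:i + win, j:j + win].mean()
    return out

def segment_obstacle(contrast, thresh):
    """Threshold the local contrast map; return the obstacle mask and the
    (x, y) center point of the segmented region (None if nothing found)."""
    mask = contrast > thresh
    ys, xs = np.nonzero(mask)
    center = (xs.mean(), ys.mean()) if xs.size else None
    return mask, center

def obstacle_speed(center_now, center_prev, t):
    """Interframe difference: obstacle speed from the displacement of the
    center point between two frames, divided by the update time t."""
    (x, y), (xa, ya) = center_now, center_prev
    return ((x - xa) ** 2 + (y - ya) ** 2) ** 0.5 / t
```

On a synthetic 5x5 frame with a single bright pixel, the contrast map isolates that pixel and the recovered center is (2, 2).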
S150, collecting environmental point clouds around the robot through a laser radar, and formulating an obstacle avoidance path based on the outline and the moving state of the obstacle, wherein the robot body moves along the obstacle avoidance path.
The object-conveying robot only moves in a two-dimensional plane, but its environment is three-dimensional; carrying out path planning and obstacle avoidance directly under a three-dimensional map greatly increases the computational complexity and is not conducive to real-time operation or to application in large-scale environments. To make full use of the three-dimensional data, the environmental point cloud data is first mapped onto a two-dimensional grid map and filtered to remove invalid point cloud information, and the corresponding obstacle-occupancy grid map is updated using the valid point cloud information of obstacles that may collide with the robot;
then, updating the grid map occupied by the obstacles according to the moving speed of the obstacles to obtain a dynamic probability grid map of the obstacles and the probability of the obstacles in different grids, and calculating a cost potential field of the obstacles;
assuming that the global grid Map of the object-conveying robot is composed of m rows and n columns of grids, the global grid Map may be represented by a two-dimensional moment Map (m, n), and the grid obstacle probability may be represented as Map (p, q) ═ ρ, ρ ∈ [0,1], where 0 represents that there is no obstacle at (p, q), and 1 represents that there is an obstacle at (p, q).
Further, the cost potential field of the obstacle is calculated as:

U = U_P + U_V

where U_P and U_V are given by formulas that appear only as images in the original publication. U_P represents the stationary cost potential field of the obstacle, U_V the cost potential field of the obstacle's movement, C the maximum allowable grid cost in the map (a constant), d the Euclidean distance between the robot and the obstacle, ρ the grid obstacle probability, d_max the farthest distance over which an obstacle grid can exert influence (a constant), and λ the coefficient of influence of the obstacle speed on the cost potential field. A further term (shown only as an image) is the cosine of the angle between the obstacle's moving velocity vector and a second vector; (x_b, y_b) denotes any grid coordinate in the map whose obstacle probability ρ is 0, and (x_p, y_p) denotes the grid coordinate of the point closest to (x_b, y_b).
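The component formulas for U_P and U_V are published only as images. Purely as an assumed illustration of how a per-cell cost might be assembled from the listed quantities (a static term decaying linearly out to d_max, plus a movement term scaled by λ, the obstacle speed, and the direction cosine), one plausible sketch is:

```python
def cost_potential(rho, d, C, d_max, lam, v_obs, cos_theta):
    """Illustrative cost potential of one grid cell. The exact forms are
    assumptions; the patent's formulas for U_P and U_V are published only
    as images. rho: cell obstacle probability; d: distance to the robot;
    C: maximum grid cost; lam: speed-influence coefficient."""
    if d >= d_max:
        return 0.0  # the obstacle grid has no influence beyond d_max
    U_p = C * rho * (1.0 - d / d_max)              # static cost field
    U_v = lam * v_obs * max(cos_theta, 0.0) * U_p  # movement cost field
    return U_p + U_v
```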
Then, an equipotential curve at a suitable height is selected according to the shape, size and other obstacle avoidance parameters of the object-conveying robot, and the equipotential curve of the cost potential field is drawn. The slope between each grid point on the equipotential curve and the current position of the robot main body and the current position of the mobile communication device is calculated, with the grid slopes computed sequentially in the clockwise direction; grid points where the slope reaches an extreme value in the clockwise direction are tangent points on the equipotential curve. Grid points with an infinite absolute slope must be judged separately, namely by checking whether the neighboring points of the equipotential curve appear on the same side of the line connecting that point and the robot's current position; if so, the point is a tangent point. To ensure the validity of the tangent points, during the clockwise search the length of the equipotential curve between adjacent tangent points must exceed a constant d_m, and adjacent valid tangents must not have the same slope. d_m, the shortest allowed equipotential-curve length between adjacent tangent points, is generally set to a small value; the larger it is, the sparser the allowed tangent points on the equipotential curve. The tangent lines of the equipotential curve are thus obtained by sequential judgment.
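The clockwise tangent-point search just described can be sketched as follows; using the angle to the robot instead of the raw slope sidesteps the infinite-slope special case, and the spacing constant d_m is counted here in curve points rather than arc length (both simplifying assumptions):

```python
import math

def tangent_points(curve, robot, d_m=2):
    """curve: grid points of one equipotential curve in clockwise order.
    Keep points where the angle of the line to `robot` reaches a local
    extremum, enforcing a minimum spacing of d_m points between accepted
    tangent points. The curve is treated as open; a closed curve would
    also need the two endpoints checked with wrap-around."""
    def angle(p):
        return math.atan2(p[1] - robot[1], p[0] - robot[0])

    tangents, last = [], -d_m
    for i in range(1, len(curve) - 1):
        a0, a1, a2 = angle(curve[i - 1]), angle(curve[i]), angle(curve[i + 1])
        is_extremum = (a1 > a0 and a1 > a2) or (a1 < a0 and a1 < a2)
        if is_extremum and i - last >= d_m:
            tangents.append(curve[i])
            last = i
    return tangents
```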
And finally, traversing the tangent line, solving a minimum sub-tree containing the current position of the robot main body and the current position of the mobile communication equipment based on a minimum spanning tree method, and generating an obstacle avoidance path.
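The final step can be illustrated with a Prim-style minimum spanning tree over the tangent points together with the two endpoint positions, followed by a walk along the tree from the robot position to the device position; in a tree, the minimal subtree containing two nodes is exactly the path between them (the Euclidean edge weights and 2-D tuple coordinates are assumptions of this sketch):

```python
import heapq
import math

def mst_path(points, start, goal):
    """Build a Euclidean minimum spanning tree (Prim's algorithm) over
    `points` (distinct 2-D tuples containing `start` and `goal`), then
    return the unique tree path from start to goal."""
    idx = {p: i for i, p in enumerate(points)}
    n = len(points)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    parent = [None] * n
    in_tree = [False] * n
    heap = [(0.0, idx[start], None)]  # (edge cost, vertex, tree parent)
    while heap:
        d, u, par = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u], parent[u] = True, par
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist(points[u], points[v]), v, u))
    # walk parent links from goal up to the root (start), then reverse
    path, v = [], idx[goal]
    while v is not None:
        path.append(points[v])
        v = parent[v]
    return path[::-1]
```

For three collinear points plus one outlier, the returned path runs straight along the chain from the robot position to the device position.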
By establishing a grid map and a cost potential field of the obstacle, the equipotential lines in the dynamic scene and the tangent lines passing through the current position of the robot main body and the current position of the mobile communication device are obtained, and the minimum spanning tree is solved to obtain an obstacle avoidance path. The path is smooth, the obstacle avoidance safety is high, and the method can meet the path-planning requirements of the object-conveying robot in dynamic scenes.
In another embodiment, the mobile communication device transmits navigation information to the robot main body, and the robot main body moves according to the navigation information.
Preferably, the navigation information is obtained by digital map tool planning based on the position information of the mobile communication device and the position of the robot main body.
Preferably, the information transmission between the mobile communication device and the robot adopts a ZigBee protocol.
The above descriptions are only examples of the present invention; common general knowledge of known specific structures, characteristics and the like in the schemes is not described here in detail. Those skilled in the art will readily understand that the scope of the present invention is obviously not limited to these specific embodiments. Several changes and modifications can be made without departing from the invention, and these should also be regarded as falling within the protection scope of the invention; they will not affect the effect of the invention or the practicability of the patent.

Claims (10)

1. An automatically following delivery robot, comprising:
a robot main body having a position sensor and an angle sensor;
a mobile communication device having a location sensor;
the automatic following system is detachably arranged on the robot main body, can be in communication connection with the mobile communication equipment, and generates a following path according to the position of the mobile communication equipment, the position of the robot main body and a navigation angle;
an infrared camera detachably provided on the robot main body and capable of photographing an image around the robot main body;
the laser radar is detachably arranged on the robot main body, is coaxially arranged with the infrared camera, and can collect environmental point cloud around the robot main body;
the obstacle avoidance system is connected with the laser radar, the infrared camera and the automatic following system, and can analyze the image, identify an obstacle and plan an obstacle avoidance path;
and the controller is connected with the automatic following system and the obstacle avoidance system and can control the robot main body to move according to the following path or the obstacle avoidance path.
2. The automatically following delivery robot of claim 1, wherein the automatic following system comprises:
the connection module is connected with the mobile communication equipment through one or more of Bluetooth, an APP, and a WeChat applet;
the information acquisition module is used for acquiring the position information of the mobile communication equipment, the position and the navigation angle of the robot main body;
and the algorithm module is connected with the information acquisition module, can calculate the expected speed, the yaw angle and the expected distance of the robot main body according to the position information of the mobile communication equipment, the position and the navigation angle of the robot main body, and generates a following path.
3. The automatically following robot for transporting things according to claim 1 or 2, wherein the robot main body has a storage tray and/or a detachable carrying cart.
4. An automatically following object conveying method based on the robot as claimed in any one of claims 1 to 3, characterized in that it comprises:
the mobile communication equipment establishes communication connection with the robot main body;
acquiring the position and the navigation angle of the robot main body and the position information of the mobile communication equipment in real time, calculating to obtain the expected speed, the yaw angle and the expected distance of the robot main body, and generating a following path;
the robot main body moves along the following path, image information around the robot main body is acquired in real time, and whether an obstacle exists at the next node is judged;
if an obstacle exists, an image of the obstacle is acquired, and the contour and the moving speed of the obstacle are obtained through feature extraction;
an environmental point cloud around the robot is collected by the laser radar, an obstacle avoidance path is formulated based on the contour and the moving state of the obstacle, and the robot main body moves along the obstacle avoidance path.
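One iteration of the follow loop in the method above can be roughly illustrated as follows. The patent's actual speed and yaw formulas are given only as figure images in claim 5, so a simple proportional rule stands in here; all names and parameters are illustrative, not from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

def follow_step(robot: Pose, target: Pose, desired_gap: float = 1.0, gain: float = 0.5):
    """One follow-loop iteration: head toward the target (the mobile
    communication device) and slow to a stop once inside the desired spacing.
    A proportional speed rule stands in for the patented formulas."""
    dx, dy = target.x - robot.x, target.y - robot.y
    dist = math.hypot(dx, dy)
    yaw = math.atan2(dy, dx)                      # heading toward the target
    speed = gain * max(dist - desired_gap, 0.0)   # zero speed inside the gap
    return speed, yaw
```

The gap term plays the role of the desired spacing D in claim 5: the commanded speed falls to zero as the robot closes to within the spacing, so the robot follows without colliding with the person it tracks.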
5. The automatically followable delivery method according to claim 4, wherein the desired speed, the yaw angle and the desired spacing are calculated by the formula:
Figure FDA0003168334740000021
wherein v_x denotes the desired lateral velocity, v_y denotes the desired longitudinal velocity, α_f denotes the angle of the robot main body, and (x_f, y_f) denotes the coordinates of the robot main body;
Figure FDA0003168334740000022
(x_l, y_l) denotes the coordinates of the mobile communication device;
Figure FDA0003168334740000023
denotes the relative yaw angle of the robot main body and the mobile communication device, w denotes the yaw angle, and v_xy denotes the desired speed;
Figure FDA0003168334740000024
L denotes the wheelbase of the robot main body, D denotes the desired spacing, v_l denotes the moving speed of the mobile communication device, v_f denotes the moving speed of the robot main body, and D_p denotes the safe distance.
6. The automatically followable delivery method according to claim 5, wherein the feature extraction includes:
performing pixel-by-pixel sliding on pixel points in the obstacle image, and calculating the local contrast of each pixel point to obtain a local contrast map of the whole image;
performing threshold segmentation on the local contrast map, identifying an obstacle in the image, and determining the outline and the central point of the obstacle;
and calculating the moving speed of the obstacle by adopting an interframe difference method.
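The local-contrast and inter-frame-difference steps above can be sketched as follows. This is a minimal pure-Python sketch under assumed conventions; the function names and the particular contrast measure (absolute deviation from the neighbourhood mean) are illustrative.

```python
def local_contrast(img, k=1):
    """Local contrast map: slide pixel by pixel over the image and compare
    each pixel with the mean of its (2k+1)x(2k+1) neighbourhood (clipped at
    the image border)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - k), min(h, i + k + 1))
                    for b in range(max(0, j - k), min(w, j + k + 1))]
            out[i][j] = abs(img[i][j] - sum(vals) / len(vals))
    return out

def centroid(mask):
    """Centre point of the thresholded obstacle mask."""
    pts = [(i, j) for i, row in enumerate(mask) for j, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def obstacle_speed(mask_prev, mask_next, dt):
    """Inter-frame difference: obstacle speed (pixels per unit time) from
    the centroid shift of the obstacle mask between two frames."""
    (i0, j0), (i1, j1) = centroid(mask_prev), centroid(mask_next)
    return ((i1 - i0) ** 2 + (j1 - j0) ** 2) ** 0.5 / dt
```

Thresholding the contrast map yields the obstacle mask; feeding two consecutive masks to `obstacle_speed` gives the moving speed used by the obstacle avoidance step.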
7. The automatically followable delivery method of claim 6, wherein the obstacle avoidance path comprises:
mapping the environmental point cloud data to a two-dimensional grid map, and filtering to remove invalid point cloud information;
updating the grid map occupied by the obstacles according to the moving speed of the obstacles to obtain a dynamic probability grid map of the obstacles and the probability of the obstacles in different grids, and calculating a cost potential field of the obstacles;
drawing an equipotential curve of the cost potential field, and calculating the slope between each grid point in the equipotential curve and the current position of the robot main body and the current position of the mobile communication equipment to obtain a plurality of tangent lines of the equipotential curve;
traversing the tangent lines, solving a minimum sub-tree containing the current position of the robot main body and the current position of the mobile communication equipment based on a minimum spanning tree method, and generating an obstacle avoidance path.
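The minimum-spanning-tree step can be illustrated as follows: build an MST over a graph of candidate waypoints (for example, tangent-line intersections), then read the obstacle-avoidance path off as the unique tree path between the robot's position and the device's position. A hedged sketch with illustrative node names, using Prim's algorithm:

```python
import heapq

def minimum_spanning_tree(nodes, edges):
    """Prim's algorithm; edges is {(u, v): weight} on an undirected graph.
    Returns the tree as a list of (u, v) edge pairs."""
    adj = {n: [] for n in nodes}
    for (u, v), w in edges.items():
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    start = nodes[0]
    seen, tree = {start}, []
    heap = list(adj[start])
    heapq.heapify(heap)
    while heap and len(seen) < len(nodes):
        w, u, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        tree.append((u, v))
        for e in adj[v]:
            heapq.heappush(heap, e)
    return tree

def tree_path(tree, src, dst):
    """Unique path between src and dst in the spanning tree: the minimal
    sub-tree containing both positions, read off as a waypoint sequence."""
    adj = {}
    for u, v in tree:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    stack, visited = [(src, [src])], {src}
    while stack:
        node, path = stack.pop()
        if node == dst:
            return path
        for nxt in adj.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None
```

Because a spanning tree contains exactly one path between any two vertices, the minimal sub-tree joining the robot and device positions is just that path, which becomes the obstacle avoidance route.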
8. The automatically followable delivery method according to claim 7, further comprising:
the mobile communication equipment sends navigation information to the robot main body, and the robot main body moves according to the navigation information.
9. The automatically followable delivery method according to claim 8, wherein the navigation information is planned by a digital map tool based on the position information of the mobile communication device and the position of the robot main body.
10. The automatically followable delivery method of claim 9, wherein the information transfer between the mobile communication device and the robot employs a ZigBee protocol.
CN202110811313.5A 2021-07-19 2021-07-19 Automatic following object conveying robot and object conveying method thereof Pending CN113433949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110811313.5A CN113433949A (en) 2021-07-19 2021-07-19 Automatic following object conveying robot and object conveying method thereof


Publications (1)

Publication Number Publication Date
CN113433949A true CN113433949A (en) 2021-09-24

Family

ID=77760823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110811313.5A Pending CN113433949A (en) 2021-07-19 2021-07-19 Automatic following object conveying robot and object conveying method thereof

Country Status (1)

Country Link
CN (1) CN113433949A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100106356A1 (en) * 2008-10-24 2010-04-29 The Gray Insurance Company Control and systems for autonomously driven vehicles
CN107807652A (en) * 2017-12-08 2018-03-16 灵动科技(北京)有限公司 Merchandising machine people, the method for it and controller and computer-readable medium
CN111823228A (en) * 2020-06-08 2020-10-27 中国人民解放军战略支援部队航天工程大学 Indoor following robot system and operation method
CN112904842A (en) * 2021-01-13 2021-06-04 中南大学 Mobile robot path planning and optimizing method based on cost potential field


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Xuehui: "Research on the Leader-Follower Strategy and Control Method for Car-Following of Unmanned Vehicles", China Masters' Theses Full-text Database, Engineering Science and Technology II *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442636A (en) * 2022-02-10 2022-05-06 上海擎朗智能科技有限公司 Control method and device for following robot, robot and storage medium
CN114442636B (en) * 2022-02-10 2024-03-29 上海擎朗智能科技有限公司 Control method and device of following robot, robot and storage medium
CN114706389A (en) * 2022-03-28 2022-07-05 清华大学 Social platform-based multi-robot dynamic environment search system and method
CN114706389B (en) * 2022-03-28 2024-04-12 清华大学 Multi-robot dynamic environment searching system and method based on social platform
WO2023241395A1 (en) * 2022-06-17 2023-12-21 灵动科技(北京)有限公司 Robot obstacle avoidance method, apparatus and computer program product

Similar Documents

Publication Publication Date Title
CN113433949A (en) Automatic following object conveying robot and object conveying method thereof
CN110221603B (en) Remote obstacle detection method based on laser radar multi-frame point cloud fusion
CN105160702B (en) The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud
CN108958282B (en) Three-dimensional space path planning method based on dynamic spherical window
CN111028350B (en) Method for constructing grid map by using binocular stereo camera
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
US20190065824A1 (en) Spatial data analysis
McDaniel et al. Ground plane identification using LIDAR in forested environments
CN116258817B (en) Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction
US20230334778A1 (en) Generating mappings of physical spaces from point cloud data
CN107403451A (en) Adaptive binary feature monocular vision odometer method and computer, robot
CN116295412A (en) Depth camera-based indoor mobile robot dense map building and autonomous navigation integrated method
CN113593035A (en) Motion control decision generation method and device, electronic equipment and storage medium
Li et al. Building variable resolution occupancy grid map from stereoscopic system—A quadtree based approach
Kanaki et al. Cooperative moving-object tracking with multiple mobile sensor nodes—Size and posture estimation of moving objects using in-vehicle multilayer laser scanner
Vatavu et al. Modeling and tracking of dynamic obstacles for logistic plants using omnidirectional stereo vision
CN112182122A (en) Method and device for acquiring navigation map of working environment of mobile robot
CN113256574B (en) Three-dimensional target detection method
KR101167099B1 (en) Autonomous mobile robot's recognition of structural landmark using vision-based vanishing points
CN114217641B (en) Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment
EP4078087B1 (en) Method and mobile entity for detecting feature points in an image
Biundini et al. Coverage path planning optimization based on point cloud for structural inspection
Szendy et al. Simultaneous localization and mapping with TurtleBotII
JP2004144644A (en) Topography recognition device, topography recognition means, and moving quantity detection method of moving body
Drulea et al. An omnidirectional stereo system for logistic plants. Part 2: stereo reconstruction and obstacle detection using digital elevation maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 702, 7th floor, NO.67, Beisihuan West Road, Haidian District, Beijing 100089

Applicant after: Beijing Yunji Technology Co.,Ltd.

Address before: Room 702, 7 / F, 67 North Fourth Ring Road West, Haidian District, Beijing

Applicant before: BEIJING YUNJI TECHNOLOGY Co.,Ltd.
