CN117250956A - Mobile robot obstacle avoidance method and obstacle avoidance device with multiple observation sources fused - Google Patents
- Publication number: CN117250956A (application CN202311242327.5A)
- Authority: CN (China)
- Prior art keywords: obstacle, value, information, observation, mobile robot
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a multi-observation-source-fused mobile robot obstacle avoidance method and an obstacle avoidance device, comprising the following steps: issuing a navigation target point on a map; acquiring obstacle information in the map; and identifying the obstacle according to the obstacle information and planning a path to avoid the obstacle. By introducing a vision camera observation source, detecting information on planes the radar cannot see, and loading a soft object segmentation model to segment and identify special obstacles such as ropes, plastic bags and other small objects on the ground, the invention reduces the computation required for path planning after obstacle identification and the resources consumed by visualization. The acquired depth information is fused on the basis of the 2D laser radar, so complete obstacle identification is achieved and obstacle avoidance in mobile robot navigation is optimized.
Description
Technical Field
The invention relates to the technical field of obstacle avoidance of mobile robots, in particular to a multi-observation-source fusion mobile robot obstacle avoidance method and an obstacle avoidance device.
Background
A mobile robot performs multi-point navigation control by issuing target points, so that the robot navigates autonomously from target point A to target point B, then to target point C, and then to target point D. Obstacles encountered during navigation must be identified, after which the mobile robot is controlled to bypass them. The laser sensors commonly used to identify obstacles during navigation fall into two categories: 2D single-line laser radar and 3D multi-line laser radar.
The laser radar detects and identifies obstacles within its range and adds them to the obstacle layer of the cost map. However, because of the limited scanning range of a 2D laser radar, only objects in the same plane as the radar can be scanned and identified; obstacles on planes above or below the radar cannot be scanned, so complete obstacle identification cannot be achieved in a complex environment. Small or soft objects on the ground, such as ropes and plastic bags, can be drawn into the mobile robot body and cause robot faults. A 3D laser radar acquires rich information, but its high price and complex algorithms make such a solution costly.
Disclosure of Invention
Aiming at the problem that a mobile robot cannot detect small low-lying objects or soft objects on the ground, which can then be drawn into the robot body, the invention provides a multi-observation-source-fused mobile robot obstacle avoidance method and an obstacle avoidance device.
In order to achieve the above purpose, the invention adopts the following technical solution: a multi-observation-source-fused mobile robot obstacle avoidance method, comprising the following steps:
issuing a navigation target point on a map;
acquiring obstacle information in the map;
and identifying the obstacle according to the obstacle information and planning a path to avoid the obstacle.
According to a further improvement of the present invention, acquiring the obstacle information in the map includes:
opening an obstacle layer of the map, wherein the map is a cost map;
acquiring at least one observation source, analyzing and identifying information of the observation source, and then adding the information of the observation source to an obstacle layer of the cost map;
and setting different observation source information associated with the obstacle layer to obtain obstacle information.
In a further improvement of the present invention, the observation source includes a radar observation source, which is added to the obstacle layer, and setting the radar observation source includes:
setting the topic name, TF coordinate name and topic type format of radar data;
determining an observation range and a tracking range according to parameter setting of a radar observation source;
and acquiring the parameter settings of the radar observation source on the obstacle layer and observing to obtain obstacle information.
In a further improvement of the present invention, the observation source comprises a vision camera observation source, which is added to the obstacle layer of the cost map, including:
acquiring the intrinsic parameters of the vision camera and the camera image at the current moment;
loading a pre-trained soft object segmentation model to process the image, and publishing the segmented image;
generating and publishing a depth point cloud from the segmented image;
and converting the depth point cloud into radar data, setting the topic name, TF coordinates and topic type format, determining the observation range and tracking range, configuring them in the obstacle layer, and observing to obtain obstacle information.
In a further refinement of the present invention, converting the depth point cloud into radar data comprises:
converting the depth image data of the segmented image into gray image data, in which the gray value of each pixel represents the depth value of that point, so as to obtain depth information;
converting the depth information of each pixel into distance information;
organizing the converted distance information into laser beam data;
processing NaN values in the laser beam data;
and converting the laser beam data into standard ROS messages.
In a further improvement of the present invention, converting the depth information into distance information includes:
obtaining the direction angle of the left ray, the direction angle of the right ray and the direction angle of the center ray by measuring the angles between the left ray, the right ray and the optical-center ray;
calculating the value of angle_max from the direction angle of the left ray and the direction angle of the optical-center ray, and assigning the negated value calculated from the direction angle of the optical-center ray and the direction angle of the right ray to angle_min;
and obtaining the real distance information by calculating the polar angle and the polar radius.
In a further improvement of the present invention, processing the NaN values in the laser beam data includes:
checking whether the new value and the old value are finite values, a finite value being neither a NaN value nor an Inf value;
if the new value and the old value are not finite values (that is, they are NaN or Inf values), further judging whether the new value is a NaN value; if the new value is not a NaN value, it is an Inf value and the new value then replaces the old value; if the new value is a NaN value, it does not replace the old value;
checking whether the new value is within the set value range: if the new value is not within the set range, it does not replace the old value; otherwise, it is further checked whether the old value is a finite value, and if the old value is not a finite value while the new value is within the set range and finite, the new value replaces the old value.
In a further improvement of the present invention, identifying an obstacle according to the obstacle information and planning a path to avoid the obstacle includes:
when the radar scans an obstacle, or the vision camera scans an obstacle or a soft object on the ground, analyzing and identifying the information added to the obstacle layer of the cost map to obtain obstacle information;
and regenerating the global path plan and the local path plan through the path planning node, so that the robot is controlled to bypass the obstacle and avoid collision; after successfully bypassing the obstacle, navigation continues to the navigation target point, where the mobile robot stops and waits for the next navigation target point to be issued.
In another aspect, the invention adopts the following technical solution: a mobile robot obstacle avoidance device in which a processor executes the above multi-observation-source-fused mobile robot obstacle avoidance method by calling a control program stored in a memory.
In another aspect, the invention adopts the following technical solution: a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the multi-observation-source-fused mobile robot obstacle avoidance method described above.
Compared with the prior art, the invention has the following beneficial effects:
By introducing a vision camera observation source, the invention detects information on planes that the radar cannot see and loads a soft object segmentation model to segment and identify special obstacles, such as ropes, plastic bags and other small objects on the ground, while reducing the computation required for path planning after obstacle identification and the resources consumed by visualization. The acquired depth information is fused on the basis of the 2D laser radar, so complete obstacle identification is achieved and obstacle avoidance in mobile robot navigation is optimized.
The invention solves the problems of high hardware cost and heavy computational resource consumption caused by using a 3D laser radar: generating a point cloud from the depth information of a vision camera is cost-effective and low in energy consumption while still providing equally rich environmental information. It also addresses the large computation and visualization overhead caused by the dense point cloud of visual information: by mapping the depth information from three-dimensional world coordinates onto the two-dimensional imaging plane, point cloud information in the same data type format as the radar point cloud is formed and added to the obstacle layer observation source in the cost map. This solves the problem that, when only a 2D laser radar is used, the single scanning plane cannot identify small objects on the ground, which are then drawn into the mobile robot and cause faults.
By detecting, segmenting and identifying special objects such as plastic bags and strings, the invention adds obstacle information to the cost map, reducing the computation required by the local cost map and the resources consumed by visualization, so that the robot runs smoothly, identifies obstacles quickly and reacts rapidly for obstacle avoidance. Obstacle recognition capability is increased: detection is no longer limited to objects in a single plane, and recognition of soft objects on the ground is added, preventing them from being drawn into the robot and further ensuring the safety of the mobile robot.
Drawings
For a clearer description of the technical solutions, the drawings required in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of an embodiment.
Fig. 2 is a flow chart of the navigation process.
FIG. 3 is a flow chart of multi-observation source fusion.
Detailed Description
In order that the embodiments of the invention may be fully and completely understood, the invention is further described below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the invention, and that all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The embodiment of the invention provides a multi-observation-source-fused mobile robot obstacle avoidance method, which can be executed in a mobile robot or in a similar computing device running on the mobile robot. As shown in fig. 1, the method comprises the following steps:
issuing a navigation target point on a map;
acquiring obstacle information in the map;
and identifying the obstacle according to the obstacle information and planning a path to avoid the obstacle.
In the above steps, as shown in fig. 2, a navigation target point is issued on the map, path planning is performed by the path planning node, and the robot travels until it reaches the target point; control then stops and waits for the next target point to be issued or for the task to end. During the navigation movement, the surrounding environment is scanned to detect whether there are obstacles around or soft objects on the ground; the obstacle information is acquired, the obstacle is identified, and the path is re-planned for static or dynamic obstacle avoidance so as to bypass the obstacle.
It should be noted that, owing to the limited scanning range of the mobile robot's radar, only obstacles in the same plane as the radar can be identified, so obstacles on planes above or below the radar scanning plane cannot be scanned, and complete obstacle identification cannot be achieved in a complex environment. Small or soft objects on the ground, such as strings and plastic bags, can be drawn into the mobile robot body and cause the robot to malfunction. Therefore, the obstacle information in the map obtained in this embodiment results from analyzing and identifying observation source information fused from a plurality of observation sources.
In addition, the implementation means or mechanism for static/dynamic obstacle avoidance by re-planning the path in this embodiment uses conventional techniques in the prior art: it only needs to recalculate according to the specific obstacle position or information and generate an executable planned path, so the description is not repeated. The observation source in this embodiment is a data source used to generate the cost map. An observation source may be a sensor of the robot, such as a laser radar or a camera, or data provided by other external devices or algorithms. The observation source is responsible for converting obstacle information in the environment into cost values in the cost map and updating the state of the cost map.
Further, acquiring the obstacle information in the map includes:
as shown in fig. 3, opening an obstacle layer of the map, wherein the map is a cost map;
acquiring at least one observation source, analyzing and identifying information of the observation source, and then adding the information of the observation source to an obstacle layer of the cost map;
and setting different observation source information associated with the obstacle layer to obtain obstacle information.
For better understanding, the observation source described above can be acquired in various ways. In an alternative embodiment, the observation source includes a radar observation source, which is added to the obstacle layer. The radar observation source of this embodiment is a 2D radar observation source, i.e. a radar system for detecting and tracking a target that provides position information mainly by measuring the distance and azimuth angle of the target in the horizontal direction. A 2D radar typically consists of a rotating transmitter and receiver that detect objects by transmitting and receiving radio waves: as the transmitter rotates it emits a pulse signal, and the receiver receives the signal reflected back by the target. By measuring the time delay and azimuth angle of the signal, the 2D radar can determine the position and motion state of the target. The implementation of target detection and tracking uses the prior art, so the description is not repeated.
In the present embodiment, setting the radar observation source includes:
setting the topic name, TF coordinate name and topic type format of radar data;
determining an observation range and a tracking range according to parameter setting of a radar observation source;
and acquiring the parameter settings of the radar observation source on the obstacle layer and observing to obtain obstacle information.
That is, during mobile robot navigation an obstacle is detected in the same plane as the radar, its distance and azimuth angle in the horizontal direction are measured, it is added to the obstacle layer of the cost map, and the topic name, TF coordinate name and topic type format of the radar data are set, for example:
    obstacle_layer:
      enabled: true
      scan:
        topic: scan
        sensor_frame: laser
        data_type: LaserScan
obstacle_layer is the obstacle namespace of the cost map; enabled sets whether the obstacle layer is started; scan denotes an observation source; topic is the topic name of the observation source; sensor_frame is the name of the sensor TF coordinate frame, set according to the actual configuration; and data_type is the message type format of the topic.
The observation range and tracking range are determined according to the parameter settings of the radar observation source; the parameter settings of the radar observation source are acquired on the obstacle layer, and observation yields obstacle information. After obstacle information has been observed, the navigation nodes parse the parameters and provide data to re-plan the path and publish velocity topics, thereby obtaining the relevant motion control information.
Further, in an alternative embodiment, the observation source includes a visual camera observation source, and adding the visual camera observation source to the obstacle layer of the cost map includes:
acquiring the intrinsic parameters of the vision camera and the camera image at the current moment;
loading a pre-trained soft object segmentation model to process the image, and publishing the segmented image;
generating and publishing a depth point cloud from the segmented image;
and converting the depth point cloud into radar data, setting the topic name, TF coordinates and topic type format, determining the observation range and tracking range, configuring them in the obstacle layer, and observing to obtain obstacle information.
In addition, the vision camera of this embodiment includes a binocular camera, i.e. a camera with two lenses that simulates the human visual system; the images captured by the two lenses, referred to as the left-eye image and the right-eye image, can be used to calculate depth information. An RGB-D camera is a camera that outputs red, green and blue (RGB) color images together with depth (D) images; by combining the RGB image and the depth image it can provide richer and more accurate scene information. RGB-D cameras acquire depth information with additional sensors, such as infrared sensors or ToF sensors.
During mobile robot navigation, the vision camera scans targets on planes other than the radar plane and the data type is converted, so that obstacles near planes higher or lower than the radar are identified during navigation. The point cloud segmented by the soft object segmentation model is obtained; the topic name, TF coordinates and topic type format of the depth point cloud converted into radar data are set; the observation range and tracking range are determined; and the vision camera observation source is configured, together with the radar observation source or other observation sources, as obstacle information in the obstacle layer of the cost map. The path planning node then parses the parameters and provides data so that the global path plan and local path plan can be regenerated, and the robot is controlled to avoid collisions and to avoid objects being drawn into the mobile robot and causing faults.
It should be noted that the soft object segmentation model, its pre-training and its image segmentation in this embodiment use conventional techniques; the desired image only needs to be obtained from the image information according to actual requirements, and no limitation is imposed in any other form. The soft object segmentation model can detect, segment and identify special objects, such as plastic bags and strings, so that these special objects are added to the obstacle layer of the cost map; this reduces the computation required by the local cost map and the resources consumed by visualization, and the mobile robot therefore runs smoothly, identifies obstacles quickly and reacts rapidly for obstacle avoidance.
In this embodiment, the dense visual point cloud is converted into the data type format of the radar point cloud and added to the observation source of the cost map obstacle layer, so that obstacles in the same plane as the radar are identified and obstacles with three-dimensional coordinates within the sector-shaped field of view in front of the camera are also identified; segmentation detection by the soft object segmentation model avoids the situation where small objects on the ground cannot be identified, thereby enabling collision-free, safe operation of the mobile robot in an unknown environment.
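For illustration only, the following is a minimal Python sketch of such a camera-to-scan pipeline, assuming a ROS 1 (rospy) environment with an RGB-D camera. The topic names, the load_soft_object_model helper and the depth_to_scan conversion function are hypothetical placeholders introduced for this example (depth_to_scan is sketched in the later conversion examples); the actual segmentation model and parameters are determined by the implementation.

    import numpy as np
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image, LaserScan

    class CameraObstacleSource:
        def __init__(self):
            self.bridge = CvBridge()
            # Hypothetical helper: loads the pre-trained soft-object segmentation model.
            self.model = load_soft_object_model("soft_object_model.onnx")
            self.mask = None
            self.scan_pub = rospy.Publisher("camera_scan", LaserScan, queue_size=1)
            rospy.Subscriber("camera/color/image_raw", Image, self.on_color)
            rospy.Subscriber("camera/depth/image_raw", Image, self.on_depth)

        def on_color(self, msg):
            rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            # Segment soft objects (ropes, plastic bags, ...) into a binary mask.
            self.mask = self.model.segment(rgb)

        def on_depth(self, msg):
            # Depth image assumed to be in metres; keep only pixels of segmented obstacles.
            depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough").astype(np.float32)
            if self.mask is not None:
                depth = np.where(self.mask, depth, np.nan)
            # depth_to_scan converts the masked depth image into a LaserScan (see later sketches).
            self.scan_pub.publish(depth_to_scan(depth, msg.header))

    if __name__ == "__main__":
        rospy.init_node("camera_obstacle_source")
        CameraObstacleSource()
        rospy.spin()

The resulting camera_scan topic could then be configured as an additional observation source of the obstacle layer alongside the radar scan, in the same way as the configuration example above.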
Further, converting the depth point cloud into radar data includes:
First, the depth image data of the segmented image is converted into gray image data, in which the gray value of each pixel represents the depth value of that point, so as to obtain depth information. The depth image is an image captured by the camera, but it is not an ordinary color image: the gray value of each pixel encodes the depth of that point;
converting the depth information of each pixel into distance information;
organizing the converted distance information into laser beam data;
processing NaN values in the laser beam data;
and converting the laser beam data into standard ROS messages.
In the above steps, converting the depth information of a pixel into distance information includes determining the values of angle_min (minimum of the angle range) and angle_max (maximum of the angle range) by measuring the angles between the left ray, the right ray and the optical-center ray.
Specifically, the direction angle of the left ray, the direction angle of the right ray and the direction angle of the center ray are obtained by measuring the angles between the left ray, the right ray and the optical-center ray;
the value of angle_max is calculated from the direction angle of the left ray and the direction angle of the optical-center ray, and angle_min is assigned the negated value calculated from the direction angle of the optical-center ray and the direction angle of the right ray;
and the real distance information is obtained by calculating the polar angle and the polar radius.
Further, the specific steps are as follows:
1) First, the raw left pixel value is converted to obtain a corrected two-dimensional point: the x coordinate of the left pixel's two-dimensional point is 0 and its y coordinate is the center y coordinate of the camera model; the corrected two-dimensional point is then projected into three-dimensional space, yielding the direction angle of the left ray.
Similarly, the x coordinate of the raw right pixel's two-dimensional point is the width of the depth map and its y coordinate is the center y coordinate of the camera model, while the raw center pixel's two-dimensional point takes the x and y coordinates of the camera model's center point; the corresponding correction and projection operations yield the direction angle of the right ray and the direction angle of the center ray, respectively;
2) Then, the dot product of the two rays is calculated by the following formula (1):

    a · b = |a||b| cos θ        (1)

which is rearranged to give:

    θ = arccos((a · b) / (|a||b|))

The dot product of the two rays is calculated by multiplying their components in each direction and summing the products; the modulus (magnitude) of a ray, representing its length, is the square root of the sum of the squares of its components. The angle θ between the two rays is obtained by taking the inverse cosine of the dot product a · b divided by the product of the moduli |a| and |b|. The value of angle_max is calculated from the direction angle of the left ray and the direction angle of the center ray; similarly, angle_min is assigned the negated value calculated from the direction angle of the center ray and the direction angle of the right ray, because the laser radar message expects a rotation direction opposite to that of the depth image. All of the obtained information is then filled into the scan message.
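As an illustration of this step, the sketch below computes angle_max and angle_min from the camera intrinsics under the pinhole model; the helper names pixel_to_ray and angle_between, and the intrinsic values fx, fy, cx, cy and the image width, are placeholders assumed for the example rather than part of the method.

    import numpy as np

    def pixel_to_ray(u, v, fx, fy, cx, cy):
        # Back-project a (corrected) pixel into a unit-length ray in the camera frame.
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return ray / np.linalg.norm(ray)

    def angle_between(a, b):
        # theta = arccos((a . b) / (|a||b|)), as in formula (1).
        return np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Placeholder intrinsics and depth-image width for the example.
    fx, fy, cx, cy, width = 525.0, 525.0, 319.5, 239.5, 640

    left_ray = pixel_to_ray(0.0, cy, fx, fy, cx, cy)            # left edge of the image
    right_ray = pixel_to_ray(width - 1.0, cy, fx, fy, cx, cy)   # right edge of the image
    center_ray = pixel_to_ray(cx, cy, fx, fy, cx, cy)           # optical-center ray

    angle_max = angle_between(left_ray, center_ray)             # left half-angle
    angle_min = -angle_between(center_ray, right_ray)           # negated right half-angle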
3) Finally, the real distance is calculated.
The polar angle α, which represents the direction of the laser beam data, is calculated as:

    α = arctan(x / depth)

The calculation of the polar radius r is divided into two steps. First, x is obtained from the pinhole projection relation:

    u = fx · (x / depth) + cx

which is transformed to give:

    x = (u − cx) · depth / fx

The polar radius r is then calculated as:

    r = √(x² + depth²)

In the above formulas, x is the lateral coordinate of the pixel in the camera coordinate system, u is the horizontal pixel coordinate in the depth image, cx is the optical-center coordinate of the camera, fx is the focal length of the camera, and depth is the depth value of the pixel.
The polar radius r is the real distance.
in this embodiment, the converted distance information is organized and converted into laser beam data, and this process can filter and smooth redundant data, and replace the old real distance if there is a point with the smallest distance within a certain height.
Further, processing the NaN values in the laser beam data includes:
checking whether the new value and the old value are finite values, a finite value being neither a NaN value nor an Inf value. For better understanding, the terms used in this embodiment are as follows:
NaN: Not a Number, i.e. not a numerical value;
Inf: Infinite, i.e. infinity;
angle_max: maximum value of angle range;
angle_min: minimum value of angle range;
TF: transforming the coordinates;
range_min: detecting a minimum threshold value of the depth;
range_max: detecting a maximum threshold value of the depth;
if the new value and the old value are not finite values (that is, they are NaN or Inf values), it is further judged whether the new value is a NaN value; if the new value is not a NaN value, it is a positive or negative infinite value, i.e. an Inf value, and the new value then replaces the old value; if the new value is a NaN value, it does not replace the old value;
next, it is checked whether the new value is within the set value range, i.e. between range_min and range_max in this embodiment;
if the new value is not within the set value range, it does not replace the old value; otherwise, it is further checked whether the old value is a finite value, and if the old value is not a finite value while the new value is within the set range and finite, the new value replaces the old value.
Finally, standard ROS messages are published to the relevant topics and added as an observation source of the obstacle layer.
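A minimal Python sketch of this replacement check follows; the function name use_point is an assumption made for these examples (it is the helper called by the depth_to_scan sketch above), and its decision flow simply mirrors the steps just described, with the closest-return rule from the filtering step added at the end.

    import math

    def use_point(new_value, old_value, range_min, range_max):
        # Step 1: check whether the new and old values are finite (neither NaN nor Inf).
        new_finite = math.isfinite(new_value)
        old_finite = math.isfinite(old_value)
        if not new_finite and not old_finite:
            # Both are NaN or Inf: an Inf new value replaces the old one, a NaN new value does not.
            return not math.isnan(new_value)

        # Step 2: a new value outside [range_min, range_max] never replaces the old one.
        if not (range_min <= new_value <= range_max):
            return False

        # Step 3: an in-range, finite new value replaces an old value that is not finite.
        if not old_finite:
            return True

        # Otherwise keep the closer return, as in the filtering step described earlier.
        return new_value < old_value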
This embodiment solves the problems of high hardware cost and heavy computational resource consumption caused by using a 3D laser radar: generating a point cloud with a depth camera is cost-effective and low in energy consumption while still providing equally rich environmental information. It also solves the large computation and visualization overhead caused by the dense point cloud of visual information: by mapping the depth information from three-dimensional world coordinates onto the two-dimensional imaging plane, point cloud information in the same data type format as the radar point cloud is formed and added to the obstacle layer observation source in the cost map. It further solves the problem that, when only a 2D laser radar is used, the single scanning plane cannot identify small objects on the ground, which are then drawn into the mobile robot and cause faults.
Further, identifying an obstacle and planning a path to avoid the obstacle according to the obstacle information fused from the plurality of observation sources includes:
when the radar scans an obstacle, or the vision camera scans an obstacle or a soft object on the ground, analyzing and identifying the information added to the obstacle layer of the cost map to obtain obstacle information;
and regenerating the global path plan and the local path plan through the path planning node, so that the robot is controlled to bypass the obstacle and avoid collision; after successfully bypassing the obstacle, navigation continues to the navigation target point, where the mobile robot stops and waits for the next navigation target point to be issued.
In this way, obstacle recognition capability is increased: detection is no longer limited to objects in a single plane, and recognition of soft objects on the ground is added, further ensuring the safety of the mobile robot.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as Read-Only Memory (ROM), Random Access Memory (RAM), a magnetic disk or an optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method described in the embodiments of the present application.
For example, in an embodiment of the present invention, a mobile robot obstacle avoidance device is further provided, in which the processor invokes a control program stored in the memory to execute the multi-observation-source-fused mobile robot obstacle avoidance method described above.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
issuing a navigation target point on a map;
acquiring obstacle information in the map;
and identifying the obstacle according to the obstacle information and planning a path to avoid the obstacle.
Further, the invention adopts the following technical solution: a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the multi-observation-source-fused mobile robot obstacle avoidance method described above.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
issuing a navigation target point on a map;
acquiring obstacle information in the map;
and identifying the obstacle according to the obstacle information and planning a path to avoid the obstacle.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium.
The foregoing disclosure is merely illustrative of one or more of the preferred embodiments of the present invention and is not intended to limit the scope of the invention in any way, as it is intended to cover all modifications, variations, uses, or equivalents of the invention that fall within the spirit and scope of the invention.
Claims (10)
1. A multi-observation-source-fused mobile robot obstacle avoidance method, characterized by comprising the following steps:
issuing a navigation target point on a map;
acquiring obstacle information in the map;
and identifying the obstacle according to the obstacle information and planning a path to avoid the obstacle.
2. The method of claim 1, wherein obtaining obstacle information in the map comprises:
opening an obstacle layer of the map, wherein the map is a cost map;
acquiring at least one observation source, analyzing and identifying information of the observation source, and then adding the information of the observation source to an obstacle layer of the cost map;
and setting different observation source information associated with the obstacle layer to obtain obstacle information.
3. The multi-observation source fusion mobile robot obstacle avoidance method of claim 2, wherein: the observation source includes a radar observation source, which is added to the obstacle layer, and setting the radar observation source includes:
setting the topic name, TF coordinate name and topic type format of radar data;
determining an observation range and a tracking range according to parameter setting of a radar observation source;
and acquiring the parameter settings of the radar observation source on the obstacle layer and observing to obtain obstacle information.
4. The multi-observation source fusion mobile robot obstacle avoidance method of claim 2, wherein: the observation source comprises a vision camera observation source, which is added to the obstacle layer of the cost map, comprising:
acquiring the intrinsic parameters of the vision camera and the camera image at the current moment;
loading a pre-trained soft object segmentation model to process the image, and publishing the segmented image;
generating and publishing a depth point cloud from the segmented image;
and converting the depth point cloud into radar data, setting the topic name, TF coordinates and topic type format, determining the observation range and tracking range, configuring them in the obstacle layer, and observing to obtain obstacle information.
5. The method of claim 4, wherein converting the depth point cloud into radar data comprises:
converting the depth image data of the segmented image into gray image data, in which the gray value of each pixel represents the depth value of that point, so as to obtain depth information;
converting the depth information of each pixel into distance information;
organizing the converted distance information into laser beam data;
processing NaN values in the laser beam data;
and converting the laser beam data into standard ROS messages.
6. The method of claim 5, wherein converting the depth information into distance information comprises:
obtaining the direction angle of the left ray, the direction angle of the right ray and the direction angle of the center ray by measuring the angles between the left ray, the right ray and the optical-center ray;
calculating the value of angle_max from the direction angle of the left ray and the direction angle of the optical-center ray, and assigning the negated value calculated from the direction angle of the optical-center ray and the direction angle of the right ray to angle_min;
and obtaining the real distance information by calculating the polar angle and the polar radius.
7. The method of claim 5, wherein processing NaN values in the laser beam data comprises:
checking whether the new value and the old value are finite values, a finite value being neither a NaN value nor an Inf value;
if the new value and the old value are not finite values (that is, they are NaN or Inf values), further judging whether the new value is a NaN value; if the new value is not a NaN value, it is an Inf value and the new value then replaces the old value; if the new value is a NaN value, it does not replace the old value;
checking whether the new value is within the set value range: if the new value is not within the set range, it does not replace the old value; otherwise, it is further checked whether the old value is a finite value, and if the old value is not a finite value while the new value is within the set range and finite, the new value replaces the old value.
8. The method of claim 1, wherein identifying obstacles and planning paths for obstacle avoidance based on the obstacle information comprises:
when the radar scans an obstacle, or the vision camera scans an obstacle or a soft object on the ground, analyzing and identifying the information added to the obstacle layer of the cost map to obtain obstacle information;
and regenerating the global path plan and the local path plan through the path planning node, so that the robot is controlled to bypass the obstacle and avoid collision; after successfully bypassing the obstacle, navigation continues to the navigation target point, where the mobile robot stops and waits for the next navigation target point to be issued.
9. A mobile robot obstacle avoidance device, characterized in that a processor executes the multi-observation-source-fused mobile robot obstacle avoidance method according to any one of claims 1 to 8 by calling a control program stored in a memory.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the multi-observation-source-fused mobile robot obstacle avoidance method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311242327.5A | 2023-09-25 | 2023-09-25 | Mobile robot obstacle avoidance method and obstacle avoidance device with multiple observation sources fused
Publications (1)
Publication Number | Publication Date
---|---
CN117250956A | 2023-12-19
Family
ID=89134608
Cited By (1)
Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN117944055A | 2024-03-26 | 2024-04-30 | 中科璀璨机器人(成都)有限公司 | Humanoid robot limb cooperative balance control method and device