CN116007623A - Robot navigation method, apparatus and computer readable storage medium - Google Patents

Robot navigation method, apparatus and computer readable storage medium

Info

Publication number
CN116007623A
CN116007623A (application CN202211542838.4A)
Authority
CN
China
Prior art keywords
scene
robot
cameras
coordinates
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211542838.4A
Other languages
Chinese (zh)
Inventor
杨华
李欢华
詹犇
濮正楠
宋华
李庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chunmi Technology Shanghai Co Ltd
Guangdong Chunmi Electrical Technology Co Ltd
Original Assignee
Chunmi Technology Shanghai Co Ltd
Guangdong Chunmi Electrical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chunmi Technology Shanghai Co Ltd, Guangdong Chunmi Electrical Technology Co Ltd filed Critical Chunmi Technology Shanghai Co Ltd
Priority to CN202211542838.4A priority Critical patent/CN116007623A/en
Publication of CN116007623A publication Critical patent/CN116007623A/en
Pending legal-status Critical Current

Landscapes

  • Navigation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present disclosure relates to a robot navigation method, apparatus, and computer-readable storage medium, wherein the method comprises: in response to receiving a navigation task, determining coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot, wherein the combined visible range of the cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras; and planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene, wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras.

Description

Robot navigation method, apparatus and computer readable storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a method, an apparatus, and a computer-readable storage medium for robot navigation.
Background
At present, robots are widely used in industries such as medical care, retail, hotels, and catering. A robot may be a conventional wheeled or tracked robot, but the usable scenes of such robots are quite limited: they have poor obstacle-crossing ability and terrain adaptability, turn inefficiently or with a large outer turning radius, and are prone to slipping and instability. A robot may also be a bipedal robot; bipedal robots can adapt to almost any complex terrain, can step over obstacles, and offer good freedom of movement, agility, and stability.
Robots are typically equipped with cameras or sensors and walk by relying on them. However, it is difficult for a walking robot to complete path planning and avoid obstacles during a task using only the sensors mounted on its body. In addition, a robot that has just been powered on is very difficult to localize accurately. Localization and obstacle avoidance with higher precision usually demand more computing power, yet a typical bipedal robot can hardly carry the bulky equipment needed to provide it.
Disclosure of Invention
To overcome the problems in the related art, the embodiments of the present disclosure provide a robot navigation method, apparatus, and computer-readable storage medium. The technical scheme is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a robot navigation method, including:
in response to receiving a navigation task, determining coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
In an embodiment, the method further comprises generating the 2D grid map of the activity scene by:
determining, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
determining three-dimensional coordinates of each first camera in the activity scene;
determining three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
building a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
determining the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and converting the determined passable area to generate the 2D grid map.
In an embodiment, determining the coordinates of the robot in the activity scene includes:
determining, among the plurality of cameras, a second camera whose visible range covers the robot;
and determining the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and determining the coordinates of the destination carried in the navigation task in the activity scene includes:
determining, among the plurality of cameras, a third camera whose visible range covers the destination;
and determining the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
In an embodiment, planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and the 2D grid map of the activity scene includes:
determining whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and if the destination is in the passable area, planning the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
In an embodiment, the method further comprises:
determining, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and if it is determined that an obstacle appears on the navigation path, sending a prompt message to the robot, wherein the prompt message comprises at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
According to a second aspect of embodiments of the present disclosure, there is provided a robot navigation device comprising:
a positioning module configured to, in response to receiving a navigation task, determine coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and a navigation module configured to plan a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
In an embodiment, the apparatus further comprises a generating module for generating the 2D grid map of the activity scene; the generating module comprises:
a first determining unit configured to determine, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
a second determining unit configured to determine three-dimensional coordinates of each first camera in the activity scene;
a third determining unit configured to determine three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
a building unit configured to build a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
a fourth determining unit configured to determine the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and a conversion unit configured to convert the determined passable area into the 2D grid map.
In an embodiment, the positioning module comprises:
a first positioning unit configured to determine, among the plurality of cameras, a second camera whose visible range covers the robot, and to determine the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and a second positioning unit configured to determine, among the plurality of cameras, a third camera whose visible range covers the destination, and to determine the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
In an embodiment, the navigation module comprises:
a fifth determining unit configured to determine whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and a navigation unit configured to, if the destination is in the passable area, plan the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
In an embodiment, the device further comprises:
a determining module configured to determine, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and a prompting module configured to send a prompt message if it is determined that an obstacle appears on the navigation path, wherein the prompt message comprises at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
According to a third aspect of embodiments of the present disclosure, there is provided a robot navigation device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
in response to receiving a navigation task, determine coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and plan a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of any of the methods described above.
The technical solutions proposed in the present application have the following beneficial effects: a 3D model of the activity scene is built through a plurality of cameras installed in the activity scene of the robot, the robot and the destination are localized, and path planning for the robot is completed. This provides reliable positioning information and navigation paths for robots, particularly bipedal robots, and solves the difficulty of localizing a robot in a scene with large changes. Because the system need not be mounted on the robot's body, it avoids equipping a bipedal robot with excessive sensors and high computing power.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic view of a robot shown according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of robot navigation according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a method of robot navigation according to an exemplary embodiment.
Fig. 4 is a flow chart illustrating a method of robot navigation according to an exemplary embodiment.
Fig. 5 is a flow chart illustrating a method of robot navigation according to an exemplary embodiment.
Fig. 6 is a schematic structural view of a robot navigation device according to an exemplary embodiment.
Fig. 7 is a schematic structural view of a robot navigation device according to an exemplary embodiment.
Fig. 8 is a schematic structural view of a robot navigation device according to an exemplary embodiment.
Fig. 9 is a schematic structural view of a robot navigation device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The technical solution of the present application is mainly applied to robots capable of walking, such as bipedal robots. Fig. 1 is a schematic diagram of a robot shown according to an exemplary embodiment; the bipedal robot 100 can walk in a manner closer to that of a human.
Fig. 2 is a flowchart illustrating a robot navigation method according to an exemplary embodiment. The execution subject of the method may be a device capable of providing a navigation path for the robot, such as a server or terminal that can communicate with the robot and with the cameras installed in the activity scene, or the robot itself. As shown in fig. 2, the method includes the following steps 201-202:
In step 201, in response to receiving a navigation task, coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene are determined through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras.
In an embodiment of the present application, the navigation task may originate from, for example, the robot uploading a user's voice command to the server after receiving it. In another embodiment, the navigation task is sent by the user directly to the server through a terminal APP. The navigation task may carry a destination name, a target item name, and the like. After receiving the navigation task, the server searches the activity scene through the plurality of cameras according to the destination name or target item name and determines the coordinates of the destination or target item in the activity scene. In yet another embodiment, a pre-established 3D model of the activity scene may be displayed on the terminal used by the user; the user clicks the destination directly in the visualized 3D model, so that the coordinates of the destination in the activity scene can be carried directly in the navigation task.
In an embodiment of the present application, the activity scene of the robot is, for example, an indoor scene, and a plurality of cameras may be installed near the top of the scene, for example at a roof corner angled obliquely downward, or at the center of the roof facing straight down. The positions and angles of the cameras must satisfy the following requirements: the union of the visible ranges of all cameras covers at least the entire ground of the indoor scene, each camera shares a co-visible region with at least one other camera, and all of the scene ground is covered by the visible ranges of two or more cameras, so that each position on the ground can be captured by two or more cameras simultaneously.
By using the plurality of cameras installed in the activity scene of the robot to determine the coordinates of the robot and the coordinates of the destination carried in the navigation task, both the robot and the destination can be localized accurately.
In step 202, a navigation path is planned for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and the 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
In this step, when two cameras capture an object simultaneously (i.e., the object lies in their co-visible region) and the distance between the two cameras is known, the three-dimensional information (distance from each object to the cameras) of all objects in the co-visible region can be measured; the world coordinates of those objects can then be obtained by coordinate transformation from the world coordinates of the cameras. In this way, a 3D model of the activity scene, and in particular of its ground, can be obtained through the plurality of cameras installed in the activity scene of the robot.
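The relation this paragraph relies on can be written compactly. The formulas below are the standard rectified-stereo relations, with symbols (f, B, d) introduced here only for illustration; they are not named in the disclosure:

```latex
% Depth of a point lying in the co-visible region of two cameras:
% f is the focal length in pixels, B the measured distance between
% the cameras (baseline), d the disparity between the two projections.
Z = \frac{f \, B}{d}
% A point p_c expressed in one camera's frame is lifted to world
% coordinates of the activity scene using that camera's pose, with
% rotation R and camera center C:
p_w = R^{\top} p_c + C
```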
Because a 3D model itself is inconvenient for navigation, it can be converted into a 2D grid map, and the navigation path is planned on the 2D grid map.
In an embodiment of the present application, a path from the coordinates of the robot to the coordinates of the destination may be planned with the A* algorithm. The A* algorithm is a direct search algorithm that is highly effective for solving the shortest path in a static network; in robotics it is commonly used for mobile-robot path planning, and it is not described further here.
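As a rough sketch of how such a planner operates on the 2D grid map described in this disclosure (the 4-connected grid, unit step cost, and Manhattan heuristic are illustrative assumptions, not details fixed by the patent):

```python
import heapq

def a_star(grid, start, goal):
    """Shortest 4-connected path on a 2D occupancy grid.
    grid: 2D list, 0 = passable cell, 1 = blocked cell.
    start, goal: (row, col) tuples. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    open_set = [(heuristic(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # lazy deletion of stale heap entries
            continue
        came_from[cell] = parent
        if cell == goal:               # walk parents back to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set,
                                   (g + 1 + heuristic(nxt), g + 1, nxt, cell))
    return None  # no passable route between start and goal

# Example: route around a wall segment on a small hypothetical map.
demo = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(demo, (0, 0), (2, 0)))
```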
According to the technical solution above, a 3D model of the activity scene is built through the plurality of cameras installed in the activity scene, the robot and the destination are localized, and path planning for the robot is completed. This provides reliable positioning information and navigation paths for robots, particularly bipedal robots, solves the difficulty of localizing a robot when the scene changes substantially, and avoids installing too many sensors on a bipedal robot. Furthermore, when the method is performed by a system, apparatus, or device other than the robot, the robot itself does not need high computing power.
In an embodiment of the present application, the 2D grid map of the activity scene may be generated in advance, before the navigation task is received. Specifically, as shown in fig. 3, generating the 2D grid map of the activity scene may include the following steps 301-306:
In step 301, first cameras, among the plurality of cameras, whose visible ranges cover the coordinate origin of the activity scene are determined.
The activity-scene coordinate origin is a point selected in advance in the scene to serve as the origin of coordinates. It may be marked with a colored label, for example a piece of red tape stuck to the ground. All first cameras that capture the coordinate origin can be found by traversing every camera; at least two first cameras are determined.
In step 302, the three-dimensional coordinates of each first camera in the activity scene are determined.
Because two first cameras share a co-visible region, the 3D positions of all pixel points in that region relative to the cameras can be computed; and because the coordinate origin of the activity scene lies in the co-visible region, the position of the origin relative to each of the two cameras can be computed. From this, the position of each first camera in the activity scene, i.e., its three-dimensional coordinates, is deduced in reverse.
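The reverse deduction amounts to a one-line inversion. The convention below is our assumption (a world-to-camera mapping, with the camera orientation R known from extrinsic calibration), written out only to make the step explicit:

```latex
% If a first camera with pose (R, t) maps world points to its frame by
% p_c = R p_w + t, then observing the scene origin p_w = 0 at the
% triangulated position p_c gives t = p_c, so the camera's
% three-dimensional coordinates in the activity scene are
C = -R^{\top} t = -R^{\top} p_c .
```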
In step 303, the three-dimensional coordinates of the remaining cameras in the activity scene are determined according to the existing co-visible regions and the relationship between each camera and the cameras sharing a co-visible region with it.
Because each first camera shares a co-visible region with at least one other camera, the relative pose between cameras can be computed from the epipolar constraint, with the measured distance between the co-visible cameras fixing the scale; the positions in the activity scene of the cameras sharing a co-visible region with a first camera can thus be recovered, and in the same way the three-dimensional coordinates of all remaining cameras are determined. The epipolar constraint is a standard method for estimating the relative pose of a camera pair from two-dimensional image information alone, and is not described further here.
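A minimal OpenCV sketch of this pose-recovery step. The matched pixel arrays pts_a/pts_b, the shared intrinsic matrix K, and baseline_m (the tape-measured inter-camera distance) are assumed inputs; the disclosure does not prescribe a particular implementation:

```python
import cv2
import numpy as np

def relative_pose(pts_a, pts_b, K, baseline_m):
    """Estimate camera B's pose relative to camera A from matched pixels
    in their co-visible region (epipolar constraint), then fix the scale
    with the physically measured inter-camera distance.
    pts_a, pts_b: Nx2 float arrays of matched pixel coordinates."""
    # Essential matrix from the epipolar constraint, RANSAC-filtered.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E into rotation R and a unit-length translation t.
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    t_metric = t * baseline_m  # epipolar geometry is scale-free; the
                               # measured baseline restores metres
    return R, t_metric
```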
In step 304, a 3D model of the activity scene is built according to the three-dimensional coordinates of all cameras and the captured co-visible regions.
After the steps above, the extrinsic parameters of all cameras are in fact known, so depth can be computed for every point within the visible ranges from what the cameras capture, and the 3D model of the activity scene can be reconstructed.
In step 305, the ground of the activity scene is determined to be a passable area from the 3D model of the activity scene.
For example, the ground of the activity scene can be determined to be a passable area with a V-disparity algorithm: the road surface is obtained by detecting the inclined straight line formed when the ground plane is projected onto the V-disparity map. The specific steps are:
1) Disparity estimation: perform disparity estimation on the camera data using the SGBM (semi-global block matching) method built into OpenCV (a cross-platform computer vision and machine learning library released under the Apache 2.0 open-source license) to obtain the raw disparity map.
2) V-disparity estimation: horizontally project the counts of the different disparity values in each row of the raw disparity map to obtain the V-disparity map.
3) Ground-plane line detection: with LSD (Line Segment Detection) straight-line detection, the ground maps to an inclined straight line, and the line's parameters, slope and offset, can be computed from the detected segments.
4) Compute the intersection of each column of the V-disparity map with the ground line; the initial ground mask is obtained from these intersections.
5) Because the initial ground mask is noisy, estimate the plane parameters with RANSAC (Random Sample Consensus).
6) The fitted plane still contains small noise blobs; find the contours and keep the largest-area block to obtain the final ground mask.
7) Finally, compare against the maximum width of the current object and keep only ground masks whose length exceeds 1.5 times the width, yielding the passable road area of the actual activity scene.
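A condensed sketch of steps 1)-3) above, under stated assumptions: left/right are a rectified image pair from two co-visible cameras, the SGBM parameters are illustrative, and Hough line fitting stands in for the LSD detector named in the text:

```python
import cv2
import numpy as np

def ground_line_from_stereo(left, right, max_disp=64):
    """Steps 1-3 of V-disparity ground detection: SGBM disparity,
    V-disparity histogram, then fit of the inclined ground line."""
    # 1) Raw disparity via OpenCV's built-in semi-global block matching.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=max_disp,
                                 blockSize=9)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point
    # 2) V-disparity: per image row, histogram the disparity values.
    h = disp.shape[0]
    v_disp = np.zeros((h, max_disp), np.uint8)
    for row in range(h):
        hist, _ = np.histogram(disp[row], bins=max_disp, range=(0, max_disp))
        v_disp[row] = np.clip(hist, 0, 255)
    # 3) Fit the inclined ground line (Hough used here in place of LSD).
    edges = cv2.threshold(v_disp, 30, 255, cv2.THRESH_BINARY)[1]
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=h // 4, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: abs(l[3] - l[1]))
    slope = (x2 - x1) / (y2 - y1 + 1e-9)  # disparity change per image row
    offset = x1 - slope * y1
    return slope, offset  # d = slope * v + offset describes the ground
```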
In step 306, the determined passable region is converted to generate a 2D grid map.
By way of example, the passable area may be converted into a 2D grid map using octomap, an octree-based three-dimensional map creation tool:
1) install octomap and the map_server module (a tool that stores grid maps and serves them on request) on a Ubuntu (Linux) system;
2) publish the three-dimensional map as a ROS topic and execute the publishing program;
3) write an octomap_server service, setting the grid resolution, the range of the three-dimensional point cloud to intercept, and the world-coordinate-frame topic information;
4) save the 2D grid map through the map_server service.
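Stripped of the ROS tooling, the core of this conversion is projecting the 3D points of the passable ground onto a planar grid. The sketch below is a ROS-free illustration; the point array, resolution, and occupancy encoding are assumptions chosen for the example:

```python
import numpy as np

def points_to_grid(ground_pts, resolution=0.05):
    """Project 3D points labeled as passable ground (N x 3, metres)
    onto the XY plane and rasterize them into a 2D occupancy grid.
    0 = unknown/blocked, 1 = passable; resolution is metres per cell."""
    xy = ground_pts[:, :2]
    origin = xy.min(axis=0)                    # grid frame anchored at min corner
    cells = np.floor((xy - origin) / resolution).astype(int)
    shape = cells.max(axis=0) + 1
    grid = np.zeros(shape, dtype=np.uint8)
    grid[cells[:, 0], cells[:, 1]] = 1         # mark each observed ground cell
    return grid, origin

# Example: a 1 m x 0.5 m patch of points becomes roughly a 20 x 10 grid.
pts = np.random.rand(1000, 3) * [1.0, 0.5, 0.0]
grid, origin = points_to_grid(pts)
print(grid.shape, origin)
```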
Through the above embodiment, the 2D grid map of the activity scene can be generated in advance, providing accurate data for the positioning and navigation of the robot.
In an embodiment of the present application, determining the coordinates of the robot in the activity scene in step 201 may include the following steps A1-A2:
In step A1, a second camera, among the plurality of cameras, whose visible range covers the robot is determined.
In this step, the cameras that can capture the robot are found as second cameras by traversing every camera.
In step A2, the coordinates of the robot in the activity scene are determined from the three-dimensional coordinates of the second camera.
Because the extrinsic parameters of all cameras were determined when the 3D model of the activity scene was built, the coordinates of the robot in the activity scene can be computed by triangulation combined with the camera extrinsics, completing accurate localization of the robot in the activity scene.
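A sketch of this triangulation under assumed inputs: two calibrated cameras that both see the robot, their world-to-camera extrinsics from the calibration above, and the pixel positions at which the robot is detected:

```python
import cv2
import numpy as np

def locate(K, pose_a, pose_b, px_a, px_b):
    """Triangulate one scene point (e.g., the robot's detected footprint)
    from its pixel observations in two calibrated cameras.
    pose_a/pose_b: (R, t) world-to-camera extrinsics, R 3x3, t 3x1."""
    P_a = K @ np.hstack(pose_a)              # 3x4 projection matrix, camera A
    P_b = K @ np.hstack(pose_b)              # 3x4 projection matrix, camera B
    pt_h = cv2.triangulatePoints(P_a, P_b,
                                 np.asarray(px_a, np.float64).reshape(2, 1),
                                 np.asarray(px_b, np.float64).reshape(2, 1))
    return (pt_h[:3] / pt_h[3]).ravel()      # homogeneous -> world coordinates
```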
In an embodiment of the present application, when the navigation task carries a destination name, a target item name, or the like, determining the coordinates of the destination carried in the navigation task in the activity scene may further include the following steps B1-B2:
In step B1, a third camera, among the plurality of cameras, whose visible range covers the destination is determined.
In this step, the cameras that can capture the destination or target item are found as third cameras by traversing every camera.
In step B2, the coordinates of the destination in the activity scene are determined from the three-dimensional coordinates of the third camera.
Because the extrinsic parameters of all cameras were determined when the 3D model of the activity scene was built, the coordinates of the destination in the activity scene are computed by triangulation combined with the camera extrinsics, completing accurate localization of the destination in the activity scene.
In an embodiment of the present application, to ensure the feasibility of the planned path, step 202 may further include the following steps C1-C2:
In step C1, whether the destination is in the passable area is determined according to the coordinates of the destination in the activity scene.
In step C2, if the destination is in the passable area, a navigation path is planned according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
In this embodiment, before the navigation path is planned, it is determined whether the destination is in the passable area; if it is not, the task is terminated, avoiding a waste of resources.
In an embodiment of the present application, when the method is performed by a system, apparatus, or device other than the robot, the navigation path may be sent to the robot after it is planned, so that the robot walks along it.
On the basis of the above embodiments, the present application may further assist the robot in avoiding obstacles while it walks along the navigation path. As shown in fig. 4, the method further includes the following steps 203-204:
In step 203, whether an obstacle appears on the navigation path along which the robot is walking is determined through the plurality of cameras.
In this step, three-dimensional information about the ground in front of the walking robot's body can be acquired in real time through the cameras installed in the activity scene; from this three-dimensional information it is judged whether the ground has a protrusion and whether the extent of the protrusion exceeds a preset range, and if it does, it can be determined that an obstacle exists on the navigation path. The extent of the protrusion may be its height and/or its width.
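A minimal sketch of this protrusion test; the one-dimensional height-profile representation and both thresholds are assumptions chosen for illustration, not values given by the disclosure:

```python
import numpy as np

def has_obstacle(height_map, cell_size=0.05, max_h=0.03, max_w=0.10):
    """Flag an obstacle when ground points ahead of the robot protrude
    higher than max_h metres over a contiguous run wider than max_w.
    height_map: 1D profile of ground heights (metres) along the path,
    sampled every cell_size metres, as reconstructed by the cameras."""
    raised = height_map > max_h              # cells protruding above threshold
    run = 0
    for cell in raised:
        run = run + 1 if cell else 0
        if run * cell_size > max_w:          # protrusion both tall and wide
            return True
    return False

# Example: a 5 cm high, 20 cm long ridge on otherwise flat ground.
profile = np.zeros(100)
profile[40:44] = 0.05
print(has_obstacle(profile))   # True: exceeds both height and width limits
```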
In step 204, if it is determined that an obstacle appears on the navigation path, a prompt message is sent to the robot, wherein the prompt message includes at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
In an embodiment of the present application, if it is determined that an obstacle appears on the navigation path, a prompt message may be sent to the robot; after receiving it, the robot may stop immediately or perform a preset detour action. In another embodiment, the navigation path may further be re-planned according to the steps above, avoiding the obstacle that has been detected; the re-planned navigation path is then sent to the robot so that the robot walks along it.
The technical solution proposed in the present application is described in the following specific example. Fig. 5 is a flowchart illustrating a robot navigation method according to an exemplary embodiment. As shown in fig. 5, the method includes:
step 501, a 3D model of an active scene is built by a plurality of cameras mounted to the active scene of a robot.
In this step, the cameras are installed first. During installation it is ensured that the union of the camera visible ranges covers at least the entire ground of the indoor scene, that each camera shares a co-visible region with at least one other camera, and that all of the scene ground is covered by the visible ranges of two or more cameras. The distance between every pair of cameras is then measured, and the chosen coordinate origin of the activity scene is marked, for example with a piece of red tape stuck to the ground. Extrinsic calibration of the cameras is then performed.
Step 502, determining a passable area according to a 3D model of the activity scene.
Step 503, converting the determined passable area to generate a 2D grid map.
The server may then provide a navigation path for the robot in response to the navigation task.
Step 504, a navigation task is received, where the navigation task carries coordinates of the destination.
In step 505, coordinates of the robot in the activity scene are determined by a plurality of cameras mounted to the activity scene of the robot.
Step 506, a navigation path is planned for the robot on the 2D grid map according to the coordinates of the robot and the coordinates of the destination.
Step 507, while the robot walks along the navigation path, whether an obstacle appears on the path is determined through the plurality of cameras.
Step 508, if it is determined that an obstacle appears on the navigation path, a prompt message is sent to the robot, the prompt message including a message prompting that an obstacle exists in front of the robot and a re-planned navigation path.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure.
FIG. 6 is a schematic diagram of a robot navigation device shown according to an exemplary embodiment. The device may be implemented in various ways, for example by implementing all of its components within an apparatus capable of providing a navigation path for a robot, or by implementing its components in a coupled manner on the side of such an apparatus; the device may implement the methods of the present disclosure in software, hardware, or a combination of both. As shown in fig. 6, the robot navigation device includes:
a positioning module 601 configured to, in response to receiving a navigation task, determine coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and a navigation module 602 configured to plan a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
The device provided in this embodiment of the present disclosure can be used to execute the technical solution of the embodiment shown in fig. 2; its manner of execution and beneficial effects are similar and are not repeated here.
In a possible implementation, as shown in fig. 7, the device further includes a generating module 603 for generating the 2D grid map of the activity scene; the generating module 603 includes:
a first determining unit configured to determine, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
a second determining unit configured to determine three-dimensional coordinates of each first camera in the activity scene;
a third determining unit configured to determine three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
a building unit configured to build a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
a fourth determining unit configured to determine the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and a conversion unit configured to convert the determined passable area into the 2D grid map.
In a possible implementation, the positioning module includes:
a first positioning unit configured to determine, among the plurality of cameras, a second camera whose visible range covers the robot, and to determine the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and a second positioning unit configured to determine, among the plurality of cameras, a third camera whose visible range covers the destination, and to determine the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
In a possible implementation, the navigation module includes:
a fifth determining unit configured to determine whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and a navigation unit configured to, if the destination is in the passable area, plan the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
In a possible implementation, as shown in fig. 8, the device further includes:
a determining module 604 configured to determine, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and a prompting module 605 configured to send a prompt message if it is determined that an obstacle appears on the navigation path, wherein the prompt message includes at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
FIG. 9 is a block diagram of a robot navigation device 90 shown according to an exemplary embodiment. The robot navigation device 90 may be implemented in various ways, for example by implementing all of its components within an apparatus capable of providing a navigation path for a robot, or by implementing its components in a coupled manner on the side of such an apparatus. Referring to fig. 9, the robot navigation device 90 includes:
A processor 901;
a memory 902 for storing processor-executable instructions;
wherein the processor 901 is configured to:
in response to receiving a navigation task, determine coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and plan a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
In an embodiment, the processor is further configured to generate the 2D grid map of the activity scene by:
determining, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
determining three-dimensional coordinates of each first camera in the activity scene;
determining three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
building a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
determining the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and converting the determined passable area to generate the 2D grid map.
In an embodiment, determining the coordinates of the robot in the activity scene includes:
determining, among the plurality of cameras, a second camera whose visible range covers the robot;
and determining the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and determining the coordinates of the destination carried in the navigation task in the activity scene includes:
determining, among the plurality of cameras, a third camera whose visible range covers the destination;
and determining the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
In an embodiment, planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and the 2D grid map of the activity scene includes:
determining whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and if the destination is in the passable area, planning the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
In an embodiment, the processor is further configured to:
determine, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and if it is determined that an obstacle appears on the navigation path, send a prompt message to the robot, wherein the prompt message comprises at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
The specific manner in which each module of the device in the above embodiments performs its operations has been described in detail in the method embodiments and is not elaborated here.
There is also provided a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by an apparatus capable of providing a navigation path for a robot, the apparatus is caused to perform a robot navigation method, the method comprising:
in response to receiving a navigation task, determining coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
In an embodiment, the method further comprises generating the 2D grid map of the activity scene by:
determining, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
determining three-dimensional coordinates of each first camera in the activity scene;
determining three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
building a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
determining the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and converting the determined passable area to generate the 2D grid map.
In an embodiment, determining the coordinates of the robot in the activity scene includes:
determining, among the plurality of cameras, a second camera whose visible range covers the robot;
and determining the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and determining the coordinates of the destination carried in the navigation task in the activity scene includes:
determining, among the plurality of cameras, a third camera whose visible range covers the destination;
and determining the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
In an embodiment, planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and the 2D grid map of the activity scene includes:
determining whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and if the destination is in the passable area, planning the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
In an embodiment, the method further comprises:
determining, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and if it is determined that an obstacle appears on the navigation path, sending a prompt message to the robot, wherein the prompt message comprises at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of robotic navigation, the method comprising:
in response to receiving a navigation task, determining coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
2. The method according to claim 1, wherein the method further comprises generating the 2D grid map of the activity scene by:
determining, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
determining three-dimensional coordinates of each first camera in the activity scene;
determining three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
building a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
determining the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and converting the determined passable area to generate the 2D grid map.
3. The method according to claim 2, wherein
determining the coordinates of the robot in the activity scene comprises:
determining, among the plurality of cameras, a second camera whose visible range covers the robot;
and determining the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and determining the coordinates of the destination carried in the navigation task in the activity scene comprises:
determining, among the plurality of cameras, a third camera whose visible range covers the destination;
and determining the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
4. The method according to claim 3, wherein planning a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and the 2D grid map of the activity scene comprises:
determining whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and if the destination is in the passable area, planning the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
5. The method according to claim 1, wherein the method further comprises:
determining, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and if it is determined that an obstacle appears on the navigation path, sending a prompt message to the robot, wherein the prompt message comprises at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
6. A robotic navigation device, the device comprising:
a positioning module configured to, in response to receiving a navigation task, determine coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and a navigation module configured to plan a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
7. The apparatus of claim 6, further comprising a generating module for generating the 2D grid map of the activity scene; the generating module comprises:
a first determining unit configured to determine, among the plurality of cameras, first cameras whose visible ranges cover the coordinate origin of the activity scene;
a second determining unit configured to determine three-dimensional coordinates of each first camera in the activity scene;
a third determining unit configured to determine three-dimensional coordinates of the remaining cameras in the activity scene according to the relationship between each first camera and the cameras sharing a co-visible region with it;
a building unit configured to build a 3D model of the activity scene according to the three-dimensional coordinates of all cameras and the captured co-visible regions;
a fourth determining unit configured to determine the ground of the activity scene as a passable area according to the 3D model of the activity scene;
and a conversion unit configured to convert the determined passable area into the 2D grid map.
8. The apparatus of claim 7, wherein the positioning module comprises:
a first positioning unit configured to determine, among the plurality of cameras, a second camera whose visible range covers the robot, and to determine the coordinates of the robot in the activity scene according to the three-dimensional coordinates of the second camera;
and a second positioning unit configured to determine, among the plurality of cameras, a third camera whose visible range covers the destination, and to determine the coordinates of the destination in the activity scene according to the three-dimensional coordinates of the third camera.
9. The apparatus of claim 6, wherein the navigation module comprises:
a fifth determining unit configured to determine whether the destination is in the passable area according to the coordinates of the destination in the activity scene;
and a navigation unit configured to, if the destination is in the passable area, plan the navigation path according to the coordinates of the robot, the coordinates of the destination, and the 2D grid map of the activity scene.
10. The apparatus of claim 6, wherein the apparatus further comprises:
a determining module configured to determine, through the plurality of cameras, whether an obstacle appears on the navigation path along which the robot is walking;
and a prompting module configured to send a prompt message if it is determined that an obstacle appears on the navigation path, wherein the prompt message comprises at least one of the following: a message prompting that an obstacle exists in front of the robot; a re-planned navigation path.
11. A robotic navigation device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
in response to receiving a navigation task, determine coordinates of the robot in the activity scene and coordinates of the destination carried in the navigation task in the activity scene through a plurality of cameras installed in the activity scene of the robot; wherein the combined visible range of the plurality of cameras covers at least the entire ground of the activity scene, each camera shares a co-visible region with at least one other camera, and every point on the ground is covered by the visible ranges of at least two cameras;
and plan a navigation path for the robot according to the coordinates of the robot in the activity scene, the coordinates of the destination carried in the navigation task in the activity scene, and a 2D grid map of the activity scene; wherein the 2D grid map is converted from a 3D model of the activity scene established by the plurality of cameras installed in the activity scene of the robot.
12. A computer readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the steps of the method of any of claims 1-5.
CN202211542838.4A 2022-12-02 2022-12-02 Robot navigation method, apparatus and computer readable storage medium Pending CN116007623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211542838.4A CN116007623A (en) 2022-12-02 2022-12-02 Robot navigation method, apparatus and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211542838.4A CN116007623A (en) 2022-12-02 2022-12-02 Robot navigation method, apparatus and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116007623A true CN116007623A (en) 2023-04-25

Family

ID=86019956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211542838.4A Pending CN116007623A (en) 2022-12-02 2022-12-02 Robot navigation method, apparatus and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116007623A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958265A (en) * 2023-09-19 2023-10-27 交通运输部天津水运工程科学研究所 Ship pose measurement method and system based on binocular vision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination