US20200164513A1 - Positioning and navigation method for a robot, and computing device thereof - Google Patents
- Publication number
- US20200164513A1 (application US16/715,897)
- Authority
- US
- United States
- Prior art keywords
- point cloud
- cloud information
- current position
- robot
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3453—Special cost functions, i.e. other than distance or default speed limit of road segments
- G01C21/3461—Preferred or disfavoured areas, e.g. dangerous zones, toll or emission zones, intersections, manoeuvre types, segments such as motorways, toll roads, ferries
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0011—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement
- G05D1/0044—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement by providing the operator with a computer generated representation of the environment of the vehicle, e.g. virtual reality, maps
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G05D1/0285—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Definitions
- the computing device includes: a processor, a memory, a communication interface and a communication bus; wherein the processor, the memory and the communication interface communicate with each other via the communication bus; and the memory is configured to store at least one executable instruction, wherein the executable instruction, when executed by the processor, causes the processor to perform the steps of: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- Still another embodiment of the present application provides a computer-readable storage medium.
- the storage medium stores at least one executable instruction; wherein the executable instruction, when executed, causes a processor to perform the steps of: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- FIG. 1 is a flowchart of a positioning and navigation method for a robot according to an embodiment of the present application.
- FIG. 2 is a schematic diagram of the method according to an embodiment of the present application.
- FIG. 3 is a functional block diagram of a positioning and navigation apparatus for a robot according to an embodiment of the present application.
- FIG. 4 is a functional block diagram of a computing device according to an embodiment of the present application.
- FIG. 1 is a flowchart of a positioning and navigation method for a robot according to an embodiment of the present application. The method is applied to a cloud server. As illustrated in FIG. 1 , the method includes the following steps:
- Step S101: Point cloud information of a current position and information of a target position sent by the robot are received, wherein the point cloud information of the current position is updated in real time.
- the point cloud information of the current position sent by the robot is a point cloud generated by the Lidar mounted on the robot acquiring environmental information at the robot's current position.
- the point cloud information and the information of the target position are transmitted by the robot to the cloud server over a 5G network in real time.
- the update frequency is consistent with the scan frequency of the Lidar mounted on the robot.
- Step S102: The current position of the robot is determined according to the point cloud information.
- Since the current position of the robot is unknown, it needs to be determined first, so that a plurality of motion paths may be planned according to the current position and the target position.
- step S102 may also include: upon receiving the point cloud information of the current position sent by the robot, the cloud server extracts the three-dimensional map model corresponding to the point cloud information from the three-dimensional map model provided by the BIM, generates a point cloud map according to the extracted model, and determines the current position of the robot according to the point cloud information and the point cloud map.
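How the point cloud map is generated from the BIM model is not detailed in the text. As a minimal sketch, assuming the extracted model region can be reduced to 2-D wall segments, map points can be sampled along each segment; the function name and its parameters are illustrative, not part of the embodiment:

```python
import math

def point_cloud_from_bim_walls(walls, spacing):
    """Sample a 2-D point cloud map from wall segments extracted from a
    BIM model region (illustrative stand-in for point cloud map generation).

    walls: list of ((x1, y1), (x2, y2)) line segments.
    spacing: target distance between sampled points along each wall.
    """
    cloud = []
    for (x1, y1), (x2, y2) in walls:
        length = math.hypot(x2 - x1, y2 - y1)
        n = max(1, int(length / spacing))  # number of intervals along the wall
        for i in range(n + 1):
            t = i / n  # interpolation parameter from one endpoint to the other
            cloud.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return cloud
```

A real system would sample the full three-dimensional model; the 2-D form is enough to illustrate how model geometry becomes map points.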
- the step of determining the current position of the robot according to the point cloud information and the point cloud map in step S102 includes the following steps:
- Step S1021: Whether the point cloud information of the current position is first-frame point cloud information is determined. Step S1022 is performed if it is; step S1025 is performed if it is not.
- Step S1022: Point cloud information corresponding to a predetermined initial position is matched against the point cloud information of the current position.
- Step S1023 is performed if the two match; step S1024 is performed if they do not.
- Step S1023: The predetermined initial position is determined as the current position.
- Step S1024: The current position is located in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position.
- the robot may move back to the predetermined initial position to wait for another instruction when it does not need to move.
- when the point cloud information of the current position is the first-frame point cloud information, the probability that the robot is at the predetermined initial position is therefore high. Matching against the point cloud information at the predetermined initial position thus helps to speed up positioning of the current position.
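The first-frame branch (steps S1021 to S1024) can be sketched as follows; the `map_cloud_at` lookup, the `matches` predicate, and the `vicinity` candidate list are hypothetical stand-ins for the map access, point cloud matching, and neighbourhood search described above:

```python
def position_from_first_frame(current_cloud, initial_pos, map_cloud_at, vicinity, matches):
    """First-frame positioning: since an idle robot usually waits at the
    predetermined initial position, try matching the live scan there
    before searching the vicinity of that position.

    map_cloud_at(pos) -> point cloud stored in the map at pos (assumed).
    matches(a, b)     -> True if point clouds a and b match (assumed).
    """
    if matches(current_cloud, map_cloud_at(initial_pos)):  # step S1022
        return initial_pos                                  # step S1023
    for pos in vicinity:                                    # step S1024
        if matches(current_cloud, map_cloud_at(pos)):
            return pos
    return None  # no position in the vicinity matched the scan
```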
- Step S1025: The position of the robot determined from the previous-frame point cloud information is acquired.
- Step S1026: An estimated current position of the robot is derived from that determined position.
- Step S1027: Point cloud information of the estimated position is acquired and matched against the point cloud information of the current position.
- Step S1028 is performed if the two match; step S1029 is performed if they do not.
- Step S1028: The estimated position is determined as the current position.
- Step S1029: The current position is located in the vicinity of the estimated position in the point cloud map.
- Positioning the current position of the robot according to the estimated position narrows the search range, which improves the positioning speed compared with searching for the robot in the point cloud map over its entire serving region.
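A minimal sketch of this estimate-then-refine positioning (steps S1025 to S1029): a crude mean nearest-neighbour distance stands in for the point cloud matching step (a real system would use a registration method such as ICP or NDT), and the `vicinity` candidate list and `threshold` are assumptions:

```python
import math

def match_score(cloud_a, cloud_b):
    """Mean nearest-neighbour distance from cloud_a to cloud_b; a crude
    stand-in for point cloud matching (lower means a better match)."""
    total = 0.0
    for ax, ay in cloud_a:
        total += min(math.hypot(ax - bx, ay - by) for bx, by in cloud_b)
    return total / len(cloud_a)

def localize(current_cloud, estimated_pos, map_cloud_at, vicinity, threshold=0.2):
    """Return the estimated position if its map point cloud matches the live
    scan; otherwise search the vicinity of the estimate for the best match."""
    if match_score(current_cloud, map_cloud_at(estimated_pos)) <= threshold:
        return estimated_pos              # step S1028: estimate confirmed
    return min(vicinity,                  # step S1029: search near the estimate
               key=lambda p: match_score(current_cloud, map_cloud_at(p)))
```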
- Step S103: A plurality of motion paths are planned according to the current position and the target position.
- the plurality of motion paths are planned on a grid map, wherein the grid map is generated by the cloud server according to the three-dimensional map model provided by the BIM. Specifically, after the three-dimensional map model is acquired, all pavement models by which the robot can reach the target position are extracted from the three-dimensional map model, and a two-dimensional plane is generated according to the pavement models; the grid map is then generated in a grid pattern according to the two-dimensional plane, wherein the grid map corresponds to the point cloud map.
- the current position and the target position are mapped to the grid map to obtain a first position and a second position, wherein the first position corresponds to the current position in the point cloud map, and the second position corresponds to the target position in the point cloud map; and then the plurality of motion paths are planned in the grid map according to the first position and the second position.
- a grid cell in the grid map corresponding to a travelable pavement in the point cloud map is marked with a first identification X (X is an integer, e.g., 1), and a grid cell corresponding to an obstacle is marked with a second identification Y (Y is an integer, e.g., 256). The planned paths, i.e., the motion paths, traverse only cells marked with the first identification.
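The grid map construction and marking can be sketched as a simple rasterisation. The values 1 and 256 follow the example identifications X and Y above; the function interface (metric extents, resolution, obstacle point list) is an assumption:

```python
FREE_ID, OBSTACLE_ID = 1, 256  # example identifications X and Y from the text

def build_grid_map(obstacle_points, width_m, height_m, resolution_m):
    """Rasterise a 2-D plane into a grid map.

    Every cell starts as travelable pavement (FREE_ID); each cell that
    contains an obstacle point is re-marked OBSTACLE_ID.
    """
    cols = int(width_m / resolution_m)
    rows = int(height_m / resolution_m)
    grid = [[FREE_ID] * cols for _ in range(rows)]
    for x, y in obstacle_points:
        c, r = int(x / resolution_m), int(y / resolution_m)
        if 0 <= r < rows and 0 <= c < cols:  # ignore points outside the plane
            grid[r][c] = OBSTACLE_ID
    return grid
```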
- Step S104: An optimal path is searched for among the plurality of motion paths by a predetermined heuristic search algorithm.
- Many planned paths may be determined according to the grid map.
- when the robot is operating, it moves along one of the planned paths.
- An optimal path is searched for in the planned paths by a predetermined heuristic search algorithm.
- the predetermined heuristic search algorithm may be any conventional heuristic search algorithm, for example, the A* algorithm.
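As a concrete instance of the heuristic search, below is a compact A* on a grid marked with the example identifications above (4-connected moves, Manhattan heuristic). This is a standard textbook A*, not necessarily the exact variant the embodiment uses:

```python
import heapq

FREE, OBSTACLE = 1, 256  # grid identifications as in the example above

def a_star(grid, start, goal):
    """A* search on a grid map; returns the list of cells from start to
    goal (inclusive), or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible for 4-connected unit moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:  # reconstruct the path back to start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry: a cheaper route was found later
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == FREE:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

With an admissible heuristic such as this one, A* returns a shortest path over the free cells, which is one reasonable reading of "optimal path" here.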
- Step S105: The current position of the robot and the optimal path are sent to the robot, such that the robot moves according to the optimal path.
- the cloud server sends the current position of the robot and the optimal path to the robot over the 5G network.
- upon receiving the current position and the optimal path, the robot performs real-time partial path planning by a predetermined dynamic path planning algorithm to achieve self-navigation.
- obstacles may also be avoided by virtue of an obstacle avoidance model.
- in the embodiments, positioning is implemented in the point cloud map according to the point cloud information of the current position and the information of the target position sent by the robot; path planning is carried out in the grid map; and an optimal path is searched for by the predetermined heuristic search algorithm and sent to the robot.
- FIG. 3 is a functional block diagram of a positioning and navigation apparatus for a robot according to an embodiment of the present application.
- the apparatus includes: a receiving module 301 , a position updating module 302 , a determining module 303 , a path planning module 304 , a smart searching module 305 and a sending module 306 .
- the receiving module 301 is configured to receive point cloud information of a current position and information of a target position sent by the robot, wherein the point cloud information of the current position is updated in real time.
- the position updating module 302 is configured to update the point cloud information of the current position.
- the determining module 303 is configured to determine the current position of the robot according to the point cloud information.
- the path planning module 304 is configured to plan a plurality of motion paths according to the current position and the target position.
- the smart searching module 305 is configured to search for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm.
- the sending module 306 is configured to send the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- the determining module 303 includes: an acquiring unit 3031 , a generating unit 3032 and a determining unit 3033 .
- the acquiring unit 3031 is configured to acquire a three-dimensional map model of a serving region of the robot.
- the generating unit 3032 is configured to generate a point cloud map according to the three-dimensional map model.
- the determining unit 3033 is configured to determine the current position of the robot according to the point cloud information and the point cloud map.
- the step of determining the current position of the robot according to the point cloud information and the point cloud map performed by the determining unit 3033 includes: determining whether the point cloud information of the current position is first-frame point cloud information; matching point cloud information corresponding to a predetermined initial position with the point cloud information of the current position if the point cloud information of the current position is the first-frame point cloud information; determining the predetermined initial position as the current position if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and positioning the current position in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
- the step of determining the current position of the robot according to the point cloud information and the point cloud map performed by the determining unit 3033 further includes: acquiring a position of the robot determined in previous-frame point cloud information if the point cloud information of the current position is not the first-frame point cloud information; estimating a current estimated position of the robot according to the determined position of the robot; acquiring point cloud information of the estimated position, and matching the point cloud information of the estimated position with the point cloud information of the current position; determining the estimated position as the current position if the point cloud information of the estimated position is matched with the point cloud information of the current position; and positioning the current position in the vicinity of the estimated position in the point cloud map if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
- the path planning module 304 includes: a generating unit 3041 , a first mapping unit 3042 , a second mapping unit 3043 and a planning unit 3044 .
- the generating unit 3041 is configured to generate a grid map according to the three-dimensional map model, wherein the point cloud map corresponds to the grid map.
- the first mapping unit 3042 is configured to map the determined current position of the robot to the grid map to obtain a first position.
- the second mapping unit 3043 is configured to map the target position to the grid map to obtain a second position.
- the planning unit 3044 is configured to plan the plurality of motion paths in the grid map according to the first position and the second position.
- the step of generating the grid map according to the three-dimensional map model performed by the generating unit 3041 includes: extracting, from the three-dimensional map model, all pavement models by which the robot can reach the target position, and generating a two-dimensional plane; and generating the grid map in a grid pattern according to the two-dimensional plane.
- the point cloud information of the current position and information of the target position sent by the robot are received by the receiving module, the current position of the robot is determined by the determining module, and path planning is carried out by the path planning module according to the information obtained by the receiving module and the determining module.
- An embodiment of the present application provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores at least one computer-executable instruction, which may be executed to perform the positioning and navigation method for a robot in any of the above method embodiments.
- FIG. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
- the specific embodiments of the present application set no limitation to the practice of the computing device.
- the computing device may include: a processor 402 , a communication interface 404 , a memory 406 and a communication bus 408 .
- the processor 402 , the communication interface 404 and the memory 406 communicate with each other via the communication bus 408 .
- the communication interface 404 is configured to communicate with a network element such as a client, a server or the like.
- the processor 402 is configured to execute a program 410 , and may specifically perform steps in the embodiments of the positioning and navigation method for a robot.
- the program 410 may include a program code, wherein the program code includes a computer-executable instruction.
- the processor 402 may be a central processing unit (CPU) or an Application Specific Integrated Circuit (ASIC), or configured as one or more integrated circuits for implementing the embodiments of the present application.
- the computing device includes one or more processors, which may be the same type of processors, for example, one or more CPUs, or may be different types of processors, for example, one or more CPUs and one or more ASICs.
- the memory 406 is configured to store the program 410 .
- the memory 406 may include a high-speed RAM memory, or may also include a non-volatile memory, for example, at least one magnetic disk memory.
- the program 410 may be specifically configured to cause the processor 402 to perform the operations of the positioning and navigation method in any of the above method embodiments.
- the program 410 may be specifically further configured to cause the processor to perform the following operations: extracting all pavement models by which the robot can reach to the target position from the three-dimensional map model, and generating a two-dimensional plane; and generating the grid map in a grid pattern according to the two-dimensional plane.
- modules in the devices according to the embodiments may be adaptively modified and these modules may be configured in one or more devices different from the embodiments herein.
- Modules or units or components in the embodiments may be combined into a single module or unit or component, and additionally these modules, units or components may be practiced in a plurality of sub-modules, subunits or subcomponents.
- all the features disclosed in this specification including the appended claims, abstract and accompanying drawings
- all the processes or units in such disclosed methods or devices may be combined in any way.
- Embodiments of the individual components of the present application may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that, in practice, some or all of the functions of some or all of the components in the positioning and navigation apparatus according to individual embodiments of the present application may be implemented using a microprocessor or a digital signal processor (DSP).
- the present application may also be implemented as a device program (e.g., a computer program or a computer program product) for performing a part or all of the method described herein.
- Such a program implementing the present application may be stored on a computer readable medium, or may be stored in the form of one or more signals. Such a signal may be obtained by downloading it from an Internet website, or provided on a carrier signal, or provided in any other form.
- any reference sign placed between the parentheses shall not be construed as a limitation to a claim.
- the word “comprise” or “include” does not exclude the presence of an element or a step not listed in a claim.
- the word “a” or “an” used before an element does not exclude the presence of a plurality of such elements.
- the present application may be implemented by means of hardware including several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same hardware item. Use of the words “first”, “second”, “third” and the like does not imply any ordering; such words may merely be construed as names.
Abstract
A positioning and navigation method for a robot includes: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
Description
- This application claims priority to Chinese Patent Application No. 201811426508.2, filed with the Chinese Patent Office on Nov. 27, 2018, titled “POSITIONING AND NAVIGATION METHOD AND APPARATUS FOR A ROBOT, AND COMPUTING DEVICE THEREOF”, the entire contents of which are incorporated herein by reference.
- Embodiments of the present application relate to the technical field of robot navigation, and in particular, relate to a positioning and navigation method and apparatus for a robot, a computing device, and a computer-readable storage medium thereof.
- In mobile platforms such as robots and autonomous vehicles, a positioning system plays an important role, and accurate and stable positioning is the basis of the system. At present, common positioning methods mainly include the laser radar (Lidar) positioning method, the vision positioning method, the GPS satellite positioning method, the mobile base station positioning method and the like, as well as combinations of these methods. Among these positioning methods, the laser radar positioning method is the most reliable. Building information modeling (BIM) may provide a three-dimensional building model, which digitally models all the real information of the building. This real information may be used as the basis and data for establishing a point cloud map, such that a precise point cloud map is constructed.
- The BIM provides a huge amount of three-dimensional building model information, and the data amount of a point cloud map constructed based thereon cannot be processed in real time locally on a robot.
- An embodiment of the present application provides a positioning and navigation method for a robot. The method includes: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- Another embodiment of the present application provides a computing device. The computing device includes: a processor, a memory, a communication interface and a communication bus; wherein the processor, the memory and the communication bus communicate with each other via the communication bus; and the memory is configured to store at least one executable instruction, wherein the executable instruction, when being executed by the processor, causes the processor to perform the steps of: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- Still another embodiment of the present application provides a computer-readable storage medium. The storage medium stores at least one executable instruction; wherein the executable instruction, when executed, causes a processor to perform the steps of: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- By reading the detailed description of preferred embodiments hereinafter, various other advantages and beneficial effects become clear and apparent for persons of ordinary skill in the art. The accompanying drawings are merely for illustrating the preferred embodiments, but shall not be construed as limiting the present application. In all the accompanying drawings, like reference signs denote like parts. In the drawings:
- FIG. 1 is a flowchart of a positioning and navigation method for a robot according to an embodiment of the present application;
- FIG. 2 is a schematic diagram of the method according to an embodiment of the present application;
- FIG. 3 is a functional block diagram of a positioning and navigation apparatus for a robot according to an embodiment of the present application; and
- FIG. 4 is a functional block diagram of a computing device according to an embodiment of the present application.
- Some exemplary embodiments of the present application are hereinafter described in detail with reference to the accompanying drawings. Although the accompanying drawings illustrate the exemplary embodiments of the present application, it shall be understood that the present application may be practiced in various manners, and the present application shall not be limited by the embodiments illustrated herein. On the contrary, these embodiments are described herein only for the purpose of better understanding the present application, and may integrally convey the scope of the present application to a person skilled in the art.
- FIG. 1 is a flowchart of a positioning and navigation method for a robot according to an embodiment of the present application. The method is applied to a cloud server. As illustrated in FIG. 1, the method includes the following steps:
- Step S101: Point cloud information of a current position and information of a target position sent by the robot are received, wherein the point cloud information of the current position is updated in real time.
- In this step, the point cloud information of the current position sent by the robot is a point cloud generated from environmental information of the robot's current position acquired by the Lidar mounted on the robot. In some embodiments, the point cloud information and the information of the target position are transmitted to the cloud server over a 5G network in real time.
- It should be understood that the robot moves in real time according to a planned path, and therefore the current position of the robot is updated in real time. The update frequency is consistent with the frequency of the Lidar mounted on the robot.
- Step S102: The current position of the robot is determined according to the point cloud information.
- Since the current position of the robot is unknown, the current position of the robot needs to be determined first so that a plurality of motion paths may be determined according to the current position and the target position.
- In some embodiments, the current position may be determined by matching the point cloud information against a point cloud map. In this case, step S102 may include: upon receiving the point cloud information of the current position sent by the robot, the cloud server extracts the three-dimensional map model corresponding to the point cloud information from the three-dimensional building model provided by the BIM, generates a point cloud map according to the extracted three-dimensional map model, and determines the current position of the robot according to the point cloud information and the point cloud map.
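The map-matching positioning described above can be sketched as a pose search that scores each candidate pose by how closely the transformed scan overlays the point cloud map. This is an illustrative simplification (2-D points, brute-force nearest neighbours, a finite candidate set); the patent does not prescribe a particular matching algorithm, and the function names below are hypothetical:

```python
import numpy as np

def match_score(scan, map_pts, pose):
    """Average nearest-neighbour distance between the transformed scan and the map.

    scan, map_pts: (N, 2) arrays of 2-D points; pose: (x, y, theta).
    A lower score means the scan fits the point cloud map better at that pose.
    """
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    world = scan @ R.T + np.array([x, y])  # scan points expressed in the map frame
    # Brute-force nearest neighbour; a k-d tree would be used in practice.
    d = np.linalg.norm(world[:, None, :] - map_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def locate(scan, map_pts, candidates):
    """Return the candidate pose whose transformed scan best fits the map."""
    return min(candidates, key=lambda p: match_score(scan, map_pts, p))
```

A candidate set restricted to the vicinity of a seed pose corresponds to the "positioning in the vicinity" steps described below.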
- In some embodiments, as illustrated in FIG. 2, the step of determining the current position of the robot according to the point cloud information and the point cloud map in step S102 includes the following steps:
- Step S1021: Whether the point cloud information of the current position is the first-frame point cloud information is determined. Step S1022 is performed if the point cloud information of the current position is the first-frame point cloud information; and step S1025 is performed if the point cloud information of the current position is not the first-frame point cloud information.
- Step S1022: Point cloud information corresponding to a predetermined initial position is matched with the point cloud information of the current position. Step S1023 is performed if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and step S1024 is performed if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
- Step S1023: The predetermined initial position is determined as the current position.
- Step S1024: The current position is positioned in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position.
- In general, the robot moves back to the predetermined initial position to wait for another instruction when it does not need to move. When the point cloud information of the current position is the first-frame point cloud information, the probability that the robot is at the predetermined initial position is therefore high, and matching against the point cloud information at the predetermined initial position first helps improve the speed of positioning the current position.
- Step S1025: A position of the robot determined in previous-frame point cloud information is acquired.
- Step S1026: A current estimated position of the robot is estimated according to the determined position of the robot.
- Step S1027: Point cloud information of the estimated position is acquired and matched with the point cloud information of the current position. Step S1028 is performed if the point cloud information of the estimated position is matched with the point cloud information of the current position; and step S1029 is performed if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
- Step S1028: The estimated position is determined as the current position.
- Step S1029: The current position is positioned in the vicinity of the estimated position in the point cloud map.
- Positioning the current position of the robot according to the estimated position may narrow the range for positioning the current position, which improves the positioning speed as compared with the method of positioning the current position of the robot in the point cloud map in the entire serving region of the robot.
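The branching of steps S1021 to S1029 can be summarized in a small decision function. The `matches` and `search_near` callables are placeholders for the scan-to-map matching and the local search in the point cloud map, and the additive motion delta in the estimation step is one plausible reading of "estimating"; none of these names come from the patent itself:

```python
def determine_position(is_first_frame, matches, search_near, initial_pose,
                       prev_pose=None, motion_delta=(0.0, 0.0)):
    """Decision flow of steps S1021-S1029 (simplified, 2-D poses as (x, y))."""
    if is_first_frame:                        # S1021: first-frame point cloud?
        if matches(initial_pose):             # S1022: match against initial pose
            return initial_pose               # S1023: initial pose is current
        return search_near(initial_pose)      # S1024: search near initial pose
    # S1025/S1026: extrapolate an estimate from the previous frame's pose
    estimate = (prev_pose[0] + motion_delta[0], prev_pose[1] + motion_delta[1])
    if matches(estimate):                     # S1027: match against estimate
        return estimate                       # S1028: estimate is current
    return search_near(estimate)              # S1029: search near the estimate
```

Because the seed pose (initial or estimated) is usually close to the truth, `search_near` only has to cover a small neighbourhood, which is exactly the speed-up argued above.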
- Step S103: A plurality of motion paths are planned according to the current position and the target position.
- In this step, the plurality of motion paths are planned on a grid map, wherein the grid map is generated by the cloud server according to the three-dimensional map model provided by the BIM. Specifically, after the three-dimensional map model is acquired, all pavement models by which the robot can reach the target position are extracted from the three-dimensional map model, and a two-dimensional plane is generated according to the pavement models; and the grid map is generated in a grid pattern according to the two-dimensional plane, wherein the grid map corresponds to the point cloud map. After the current position of the robot and the target position are determined in the point cloud map, the current position and the target position are mapped to the grid map to obtain a first position and a second position, wherein the first position corresponds to the current position in the point cloud map, and the second position corresponds to the target position in the point cloud map; and then the plurality of motion paths are planned in the grid map according to the first position and the second position.
- It should be understood that during planning of the plurality of motion paths in the grid map, a grid in the grid map corresponding to a travelable pavement in the point cloud map is marked with a first identification X (X is an integer, e.g., 1), a grid corresponding to an obstacle is marked with a second identification Y (Y is an integer, e.g., 256), and the planned paths (i.e., the motion paths) are formed by the grids marked with the first identification X that connect the first position and the second position.
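A minimal sketch of such a grid map, using the example markings X=1 (travelable) and Y=256 (obstacle) given above; the cell indexing, origin and resolution handling are illustrative choices, not taken from the patent:

```python
def build_grid(width, height, obstacles, free_id=1, obstacle_id=256):
    """Grid map: travelable cells carry the first identification (1),
    obstacle cells carry the second identification (256)."""
    grid = [[free_id] * width for _ in range(height)]
    for (col, row) in obstacles:
        grid[row][col] = obstacle_id
    return grid

def to_cell(world_xy, origin, resolution):
    """Map a point-cloud-map coordinate to its grid cell, i.e. the mapping
    that turns the current/target position into the first/second position."""
    return (int((world_xy[0] - origin[0]) / resolution),
            int((world_xy[1] - origin[1]) / resolution))
```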
- Step S104: An optimal path is searched for among the plurality of motion paths by a predetermined heuristic search algorithm.
- Many planned paths may be determined according to the grid map. When the robot is operating, the robot moves along one of the planned paths. An optimal path is searched for in the planned paths by a predetermined heuristic search algorithm. The predetermined heuristic search algorithm may be any algorithm in the conventional heuristic search algorithms, for example, the A* algorithm.
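The A* algorithm mentioned above finds a shortest path by expanding grid cells in order of f(n) = g(n) + h(n), where g is the cost so far and h is an admissible heuristic; on a 4-connected grid the Manhattan distance is a standard choice. A compact, self-contained version over the grid encoding sketched earlier (1 = travelable):

```python
import heapq

def a_star(grid, start, goal, free_id=1):
    """A* over a grid map; cells equal to free_id are travelable.
    Returns the list of cells from start to goal, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    seen = set()
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == free_id:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None
```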
- Step S105: The current position of the robot and the optimal path are sent to the robot, such that the robot moves according to the optimal path.
- In this step, the cloud server sends the current position of the robot and the optimal path to the robot over the 5G network. Upon receiving the current position and the optimal path, the robot performs real-time local path planning by a predetermined dynamic planning algorithm to achieve self-navigation. In addition, during the navigation process, the robot may also avoid obstacles by virtue of an obstacle avoidance model.
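The split above, a global path from the cloud server plus local adjustments on the robot, can be illustrated with a minimal on-board check. The patent does not specify the local planner, so the stop-before-obstacle behaviour and the `blocked` callable here are purely illustrative; a real robot would re-plan around the obstacle instead of stopping:

```python
def follow(path, blocked):
    """Traverse the planned path, stopping before the first blocked cell.

    `blocked(cell)` stands in for the robot's on-board obstacle detection
    (e.g. the obstacle avoidance model mentioned above).
    """
    traversed = []
    for cell in path:
        if blocked(cell):
            break  # local planning would take over here
        traversed.append(cell)
    return traversed
```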
- In this embodiment of the present application, positioning is implemented in the point cloud map according to the point cloud information of the current position and information of the target position sent by the robot, path planning is carried out in the grid map, and an optimal path is searched for by the predetermined heuristic search algorithm and sent to the robot. In this way, the problem of insufficient operation capabilities of the robot is addressed, and all the calculations for navigation are performed at the cloud server, such that the speed of navigation of the robot is improved and self-navigation of the robot is achieved.
- FIG. 3 is a functional block diagram of a positioning and navigation apparatus for a robot according to an embodiment of the present application. As illustrated in FIG. 3, the apparatus includes: a receiving module 301, a position updating module 302, a determining module 303, a path planning module 304, a smart searching module 305 and a sending module 306.
- The receiving module 301 is configured to receive point cloud information of a current position and information of a target position sent by the robot, wherein the point cloud information of the current position is updated in real time.
- The position updating module 302 is configured to update the point cloud information of the current position.
- The determining module 303 is configured to determine the current position of the robot according to the point cloud information.
- The path planning module 304 is configured to plan a plurality of motion paths according to the current position and the target position.
- The smart searching module 305 is configured to search for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm.
- The sending module 306 is configured to send the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- In this embodiment, the determining module 303 includes: an acquiring unit 3031, a generating unit 3032 and a determining unit 3033. The acquiring unit 3031 is configured to acquire a three-dimensional map model of a serving region of the robot. The generating unit 3032 is configured to generate a point cloud map according to the three-dimensional map model. The determining unit 3033 is configured to determine the current position of the robot according to the point cloud information and the point cloud map.
- Further, the step of determining the current position of the robot according to the point cloud information and the point cloud map performed by the determining unit 3033 includes: determining whether the point cloud information of the current position is first-frame point cloud information; matching point cloud information corresponding to a predetermined initial position with the point cloud information of the current position if the point cloud information of the current position is the first-frame point cloud information; determining the predetermined initial position as the current position if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and positioning the current position in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
- Further, the step of determining the current position of the robot according to the point cloud information and the point cloud map performed by the determining unit 3033 further includes: acquiring a position of the robot determined in previous-frame point cloud information if the point cloud information of the current position is not the first-frame point cloud information; estimating a current estimated position of the robot according to the determined position of the robot; acquiring point cloud information of the estimated position, and matching the point cloud information of the estimated position with the point cloud information of the current position; determining the estimated position as the current position if the point cloud information of the estimated position is matched with the point cloud information of the current position; and positioning the current position in the vicinity of the estimated position in the point cloud map if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
- In this embodiment, the path planning module 304 includes: a generating unit 3041, a first mapping unit 3042, a second mapping unit 3043 and a planning unit 3044. The generating unit 3041 is configured to generate a grid map according to the three-dimensional map model, wherein the point cloud map corresponds to the grid map. The first mapping unit 3042 is configured to map the determined current position of the robot to the grid map to obtain a first position. The second mapping unit 3043 is configured to map the target position to the grid map to obtain a second position. The planning unit 3044 is configured to plan the plurality of motion paths in the grid map according to the first position and the second position.
- Further, the step of generating the grid map according to the three-dimensional map model performed by the generating unit 3041 includes: extracting all pavement models by which the robot can reach the target position from the three-dimensional map model, and generating a two-dimensional plane; and generating the grid map in a grid pattern according to the two-dimensional plane.
- In this embodiment of the present application, the point cloud information of the current position and information of the target position sent by the robot are received by the receiving module, the current position of the robot is determined by the determining module, and path planning is carried out by the path planning module according to the information obtained by the receiving module and the determining module. In this way, the problem of insufficient operation capabilities of the robot is addressed, such that the speed of navigation of the robot is improved and self-navigation of the robot is achieved.
- An embodiment of the present application provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores at least one computer-executable instruction, which may be executed to perform the positioning and navigation method for a robot in any of the above method embodiments.
- FIG. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application. The specific embodiments of the present application set no limitation to the practice of the computing device.
- As illustrated in FIG. 4, the computing device may include: a processor 402, a communication interface 404, a memory 406 and a communication bus 408.
- The processor 402, the communication interface 404 and the memory 406 communicate with each other via the communication bus 408.
- The communication interface 404 is configured to communicate with a network element such as a client, a server or the like.
- The processor 402 is configured to execute a program 410, and may specifically perform steps in the embodiments of the positioning and navigation method for a robot.
- Specifically, the program 410 may include a program code, wherein the program code includes a computer-executable instruction.
- The processor 402 may be a central processing unit (CPU) or an Application Specific Integrated Circuit (ASIC), or configured as one or more integrated circuits for implementing the embodiments of the present invention. The computing device includes one or more processors, which may be the same type of processors, for example, one or more CPUs, or may be different types of processors, for example, one or more CPUs and one or more ASICs.
- The memory 406 is configured to store the program 410. The memory 406 may include a high-speed RAM memory, or may also include a non-volatile memory, for example, at least one magnetic disk memory.
- The program 410 may be specifically configured to cause the processor 402 to perform the following operations:
- determining the current position of the robot according to the point cloud information;
- planning a plurality of motion paths according to the current position and the target position;
- searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and
- sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
- In an optional implementation, the
program 410 may be specifically further configured to cause the processor to perform the following operations: - acquiring a three-dimensional map model of a serving region of the robot;
- generating a point cloud map according to the three-dimensional map model; and
- determining the current position of the robot according to the point cloud information and the point cloud map.
- In an optional implementation, the
program 410 may be specifically further configured to cause the processor to perform the following operations: - determining whether the point cloud information of the current position is first-frame point cloud information;
- matching point cloud information corresponding to a predetermined initial position with the point cloud information of the current position if the point cloud information of the current position is the first-frame point cloud information;
- determining the predetermined initial position as the current position if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and
- positioning the current position in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
- In an optional implementation, the
program 410 may be specifically further configured to cause the processor to perform the following operations: - acquiring a position of the robot determined in previous-frame point cloud information if the point cloud information of the current position is not the first-frame point cloud information;
- estimating a current estimated position of the robot according to the determined position of the robot;
- acquiring point cloud information of the estimated position, and matching the point cloud information of the estimated position with the point cloud information of the current position;
- determining the estimated position as the current position if the point cloud information of the estimated position is matched with the point cloud information of the current position; and
- positioning the current position in the vicinity of the estimated position in the point cloud map if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
- In an optional implementation, the
program 410 may be specifically further configured to cause the processor to perform the following operations: - generating a grid map according to the three-dimensional map model, wherein the point cloud map corresponds to the grid map;
- mapping the determined current position of the robot to the grid map to obtain a first position;
- mapping the target position to the grid map to obtain a second position; and
- planning the plurality of motion paths in the grid map according to the first position and the second position.
- Further, the
program 410 may be specifically further configured to cause the processor to perform the following operations: extracting all pavement models by which the robot can reach to the target position from the three-dimensional map model, and generating a two-dimensional plane; and generating the grid map in a grid pattern according to the two-dimensional plane. - The algorithms and displays provided herein are not inherently related to any specific computer, virtual system or other device. Various general-purpose systems may also be used with the teachings herein. According to the above description, the structure required for constructing such systems is obvious. In addition, the present application is not directed to any specific programming language. It should be understood that the content of the present application described herein may be carried out utilizing various programming languages, and that the above description for a specific language is for the sake of disclosing preferred embodiments of the present application.
- In the specification provided herein, many particular details are described. However, it should be understood that an embodiment of the present application may also be practiced without these particular details. In some embodiments, well-known methods, structures and technologies are not illustrated in detail so as not to obscure the understanding of this specification.
- Likewise, it shall be understood that, to streamline the present application and facilitate understanding of one or more of its various aspects, in the above description of the exemplary embodiments, various features of the present application are sometimes grouped together in a single embodiment, drawing or description thereof. However, this method of disclosure shall not be interpreted as reflecting an intention that the claimed application requires more features than those expressly recited in each of the appended claims. Rather, as the appended claims reflect, the inventive aspects lie in fewer than all features of a single embodiment disclosed above. Therefore, the claims following the specific embodiments are hereby incorporated into the specific embodiments, and each claim may stand as an individual embodiment of the present application.
- Those skilled in the art should understand that modules in the devices according to the embodiments may be adaptively modified and configured in one or more devices different from those of the embodiments herein. Modules or units or components in the embodiments may be combined into a single module or unit or component, and additionally these modules, units or components may be divided into a plurality of sub-modules, subunits or subcomponents. Except where such features and/or processes or units are mutually exclusive, all the features disclosed in this specification (including the appended claims, abstract and accompanying drawings) and all the processes or units of any method or device so disclosed may be combined in any way. Unless otherwise stated, each feature disclosed in this specification (including the appended claims, abstract and accompanying drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
- In addition, those skilled in the art shall understand that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present application and to form different embodiments. For example, in the claims appended hereinafter, any one of the embodiments for which protection is sought may be practiced in any combination.
- Embodiments of the individual components of the present application may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that, in practice, some or all of the functions of some or all of the components in the positioning and navigation apparatus according to individual embodiments of the present application may be implemented using a microprocessor or a digital signal processor (DSP). The present application may also be implemented as a device program product (e.g., a computer program and a computer program product) for performing a part or all of the method described herein. Such a program implementing the present application may be stored on a computer-readable medium, or may be stored in the form of one or more signals. Such a signal may be obtained by downloading it from an Internet website, or provided on a carrier signal, or provided in any other form.
- It should be noted that the above embodiments illustrate rather than limit the present application, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference sign placed between the parentheses shall not be construed as a limitation to a claim. The word “comprise” or “include” does not exclude the presence of an element or a step not listed in a claim. The word “a” or “an” used before an element does not exclude the presence of a plurality of such elements. The present application may be implemented by means of a hardware including several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of the devices may be embodied by one and the same hardware item. Use of the words “first”, “second”, “third” and the like does not mean any ordering. Such words may be construed as naming.
Claims (20)
1. A positioning and navigation method for a robot, comprising:
receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time;
determining the current position of the robot according to the point cloud information;
planning a plurality of motion paths according to the current position and the target position;
searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and
sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
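The claims leave the "predetermined heuristic search algorithm" of claim 1 unspecified; A* over an occupancy grid is the textbook instance of such a search. A minimal sketch under that assumption (the grid encoding, 4-connectivity, and unit step costs are illustrative choices, not claim language):

```python
import heapq

def a_star(grid, start, goal):
    """A* search over a 4-connected occupancy grid.

    grid[r][c] == 0 means a travelable cell, 1 means an obstacle.
    Returns the list of cells on a lowest-cost path, or None if the
    goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []
            while cell is not None:       # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        if g > g_cost.get(cell, float("inf")):
            continue                      # stale heap entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

With an admissible heuristic such as Manhattan distance, A* returns a shortest path, which is one concrete reading of the "optimal path" the method sends back to the robot.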
2. The method according to claim 1 , wherein the determining the current position of the robot according to the point cloud information comprises:
acquiring a three-dimensional map model of a serving region of the robot;
generating a point cloud map according to the three-dimensional map model; and
determining the current position of the robot according to the point cloud information and the point cloud map.
3. The method according to claim 2 , wherein the determining the current position of the robot according to the point cloud information and the point cloud map comprises:
determining whether the point cloud information of the current position is first-frame point cloud information;
matching point cloud information corresponding to a predetermined initial position with the point cloud information of the current position if the point cloud information of the current position is the first-frame point cloud information;
determining the predetermined initial position as the current position if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and
positioning the current position in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
4. The method according to claim 3 , further comprising:
acquiring a position of the robot determined in previous-frame point cloud information if the point cloud information of the current position is not the first-frame point cloud information;
estimating a current estimated position of the robot according to the determined position of the robot;
acquiring point cloud information of the estimated position;
matching the point cloud information of the estimated position with the point cloud information of the current position;
determining the estimated position as the current position if the point cloud information of the estimated position is matched with the point cloud information of the current position; and
positioning the current position in the vicinity of the estimated position in the point cloud map if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
5. The method according to claim 2 , wherein the planning the plurality of motion paths according to the current position and the target position comprises:
generating a grid map according to the three-dimensional map model, wherein the point cloud map corresponds to the grid map;
mapping the determined current position of the robot to the grid map to obtain a first position;
mapping the target position to the grid map to obtain a second position; and
planning the plurality of motion paths in the grid map according to the first position and the second position.
6. The method according to claim 5 , wherein the generating the grid map according to the three-dimensional map model comprises:
extracting, from the three-dimensional map model, all pavement models by which the robot reaches the target position;
generating a two-dimensional plane according to the pavement models; and
generating the grid map in a grid pattern according to the two-dimensional plane.
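In the simplest reading, the three steps of claim 6 reduce to discarding the height coordinate of the extracted pavement points and bucketing the result into cells. A sketch under that assumption (the `(x, y, z)` point format and the cell size are illustrative):

```python
def grid_from_pavement(points, cell=0.5):
    """Project 3-D pavement points onto the x-y plane and rasterize
    them into a grid; True marks a cell covered by pavement."""
    xy = [(x, y) for x, y, _z in points]   # drop height: 3-D -> 2-D plane
    min_x = min(x for x, _ in xy)
    min_y = min(y for _, y in xy)
    cols = int((max(x for x, _ in xy) - min_x) / cell) + 1
    rows = int((max(y for _, y in xy) - min_y) / cell) + 1
    grid = [[False] * cols for _ in range(rows)]
    for x, y in xy:                         # bucket each point into a cell
        grid[int((y - min_y) / cell)][int((x - min_x) / cell)] = True
    return grid
```

The resulting boolean grid is the "grid pattern" of claim 6; cells never covered by any pavement point would later be treated as non-travelable.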
7. The method according to claim 5 , wherein the planning the plurality of motion paths in the grid map according to the first position and the second position comprises:
marking a grid in the grid map corresponding to a travelable pavement in the point cloud map as a first identification;
marking a grid in the grid map corresponding to an obstacle in the point cloud map as a second identification; and
forming the plurality of motion paths from all the grids that connect the first position and the second position and are marked as the first identification.
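Claim 7's two markings and the path formation can be sketched as an occupancy grid plus an enumeration of simple paths through cells carrying the first identification. The enumeration below is exponential and only meant to illustrate the "plurality of motion paths" that the heuristic search of claim 1 would then narrow to an optimal one; the constants and names are illustrative:

```python
FREE, OBSTACLE = 0, 1   # the "first" and "second" identifications

def mark_grid(rows, cols, obstacles):
    """Mark travelable pavement as FREE and obstacle cells as OBSTACLE."""
    grid = [[FREE] * cols for _ in range(rows)]
    for r, c in obstacles:
        grid[r][c] = OBSTACLE
    return grid

def all_paths(grid, start, goal):
    """Enumerate every simple 4-connected path through FREE cells
    from start to goal. Exponential; only viable on small grids."""
    rows, cols = len(grid), len(grid[0])
    paths = []

    def walk(cell, path, seen):
        if cell == goal:
            paths.append(path[:])
            return
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == FREE and nxt not in seen):
                seen.add(nxt)
                path.append(nxt)
                walk(nxt, path, seen)
                path.pop()
                seen.remove(nxt)

    walk(start, [start], {start})
    return paths
```

In practice a planner would not materialize every path; it would explore this same free-cell graph lazily while searching, which is exactly what a heuristic search does.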
8. A computing device, comprising: a processor, a memory, a communication interface and a communication bus; wherein the processor, the memory and the communication interface communicate with each other via the communication bus; and the memory is configured to store at least one executable instruction, wherein the executable instruction, when executed by the processor, causes the processor to perform the steps of:
receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time;
determining the current position of the robot according to the point cloud information;
planning a plurality of motion paths according to the current position and the target position;
searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and
sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
9. The computing device according to claim 8 , wherein the determining the current position of the robot according to the point cloud information comprises:
acquiring a three-dimensional map model of a serving region of the robot;
generating a point cloud map according to the three-dimensional map model; and
determining the current position of the robot according to the point cloud information and the point cloud map.
10. The computing device according to claim 9 , wherein the determining the current position of the robot according to the point cloud information and the point cloud map comprises:
determining whether the point cloud information of the current position is first-frame point cloud information;
matching point cloud information corresponding to a predetermined initial position with the point cloud information of the current position if the point cloud information of the current position is the first-frame point cloud information;
determining the predetermined initial position as the current position if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and
positioning the current position in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
11. The computing device according to claim 10, wherein the executable instruction, when executed by the processor, further causes the processor to perform the steps of:
acquiring a position of the robot determined in previous-frame point cloud information if the point cloud information of the current position is not the first-frame point cloud information;
estimating a current estimated position of the robot according to the determined position of the robot;
acquiring point cloud information of the estimated position;
matching the point cloud information of the estimated position with the point cloud information of the current position;
determining the estimated position as the current position if the point cloud information of the estimated position is matched with the point cloud information of the current position; and
positioning the current position in the vicinity of the estimated position in the point cloud map if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
12. The computing device according to claim 9 , wherein the planning the plurality of motion paths according to the current position and the target position comprises:
generating a grid map according to the three-dimensional map model, wherein the point cloud map corresponds to the grid map;
mapping the determined current position of the robot to the grid map to obtain a first position;
mapping the target position to the grid map to obtain a second position; and
planning the plurality of motion paths in the grid map according to the first position and the second position.
13. The computing device according to claim 12 , wherein the generating the grid map according to the three-dimensional map model comprises:
extracting, from the three-dimensional map model, all pavement models by which the robot reaches the target position;
generating a two-dimensional plane according to the pavement models; and
generating the grid map in a grid pattern according to the two-dimensional plane.
14. The computing device according to claim 12 , wherein the planning the plurality of motion paths in the grid map according to the first position and the second position comprises:
marking a grid in the grid map corresponding to a travelable pavement in the point cloud map as a first identification;
marking a grid in the grid map corresponding to an obstacle in the point cloud map as a second identification, wherein the second identification is different from the first identification; and
forming the plurality of motion paths from all the grids that connect the first position and the second position and are marked as the first identification.
15. A computer-readable storage medium, the storage medium storing at least one executable instruction; wherein the executable instruction, when executed by a processor, causes the processor to perform the steps of:
receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time;
determining the current position of the robot according to the point cloud information;
planning a plurality of motion paths according to the current position and the target position;
searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and
sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
16. The computer-readable storage medium according to claim 15 , wherein the determining the current position of the robot according to the point cloud information comprises:
acquiring a three-dimensional map model of a serving region of the robot;
generating a point cloud map according to the three-dimensional map model; and
determining the current position of the robot according to the point cloud information and the point cloud map.
17. The computer-readable storage medium according to claim 16 , wherein the determining the current position of the robot according to the point cloud information and the point cloud map comprises:
determining whether the point cloud information of the current position is first-frame point cloud information;
matching point cloud information corresponding to a predetermined initial position with the point cloud information of the current position if the point cloud information of the current position is the first-frame point cloud information;
determining the predetermined initial position as the current position if the point cloud information corresponding to the predetermined initial position is matched with the point cloud information of the current position; and
positioning the current position in the vicinity of the predetermined initial position in the point cloud map according to the point cloud information of the current position if the point cloud information corresponding to the predetermined initial position is not matched with the point cloud information of the current position.
18. The computer-readable storage medium according to claim 17, wherein the executable instruction, when executed, further causes the processor to perform the steps of:
acquiring a position of the robot determined in previous-frame point cloud information if the point cloud information of the current position is not the first-frame point cloud information;
estimating a current estimated position of the robot according to the determined position of the robot;
acquiring point cloud information of the estimated position;
matching the point cloud information of the estimated position with the point cloud information of the current position;
determining the estimated position as the current position if the point cloud information of the estimated position is matched with the point cloud information of the current position; and
positioning the current position in the vicinity of the estimated position in the point cloud map if the point cloud information of the estimated position is not matched with the point cloud information of the current position.
19. The computer-readable storage medium according to claim 16 , wherein the planning the plurality of motion paths according to the current position and the target position comprises:
generating a grid map according to the three-dimensional map model, wherein the point cloud map corresponds to the grid map;
mapping the determined current position of the robot to the grid map to obtain a first position;
mapping the target position to the grid map to obtain a second position; and
planning the plurality of motion paths in the grid map according to the first position and the second position.
20. The computer-readable storage medium according to claim 19 , wherein the generating the grid map according to the three-dimensional map model comprises:
extracting, from the three-dimensional map model, all pavement models by which the robot reaches the target position;
generating a two-dimensional plane according to the pavement models; and
generating the grid map in a grid pattern according to the two-dimensional plane.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811426508.2A CN109540142B (en) | 2018-11-27 | 2018-11-27 | Robot positioning navigation method and device, and computing equipment |
CN201811426508.2 | 2018-11-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200164513A1 true US20200164513A1 (en) | 2020-05-28 |
Family
ID=65851610
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/698,475 Active 2041-09-29 US11638997B2 (en) | 2018-11-27 | 2019-11-27 | Positioning and navigation method for a robot, and computing device thereof |
US16/715,897 Abandoned US20200164513A1 (en) | 2018-11-27 | 2019-12-16 | Positioning and navigation method for a robot, and computing device thereof |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/698,475 Active 2041-09-29 US11638997B2 (en) | 2018-11-27 | 2019-11-27 | Positioning and navigation method for a robot, and computing device thereof |
Country Status (2)
Country | Link |
---|---|
US (2) | US11638997B2 (en) |
CN (1) | CN109540142B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111552300A (en) * | 2020-06-09 | 2020-08-18 | 南开大学 | Crop picking system based on instance segmentation and path planning |
CN111709355A (en) * | 2020-06-12 | 2020-09-25 | 北京百度网讯科技有限公司 | Method and device for identifying target area, electronic equipment and road side equipment |
CN111750862A (en) * | 2020-06-11 | 2020-10-09 | 深圳优地科技有限公司 | Multi-region-based robot path planning method, robot and terminal equipment |
CN112014857A (en) * | 2020-08-31 | 2020-12-01 | 上海宇航系统工程研究所 | Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot |
CN112304318A (en) * | 2020-11-10 | 2021-02-02 | 河北工业大学 | Autonomous navigation method of robot under virtual-real coupling constraint environment |
CN112859893A (en) * | 2021-01-08 | 2021-05-28 | 中国商用飞机有限责任公司北京民用飞机技术研究中心 | Obstacle avoidance method and device for aircraft |
CN113099463A (en) * | 2021-03-28 | 2021-07-09 | 国网浙江省电力有限公司经济技术研究院 | UWB base station layout analysis system and method based on BIM and progress plan |
CN113433937A (en) * | 2021-06-08 | 2021-09-24 | 杭州未名信科科技有限公司 | Heuristic exploration-based layered navigation obstacle avoidance system and layered navigation obstacle avoidance method |
CN114862875A (en) * | 2022-05-20 | 2022-08-05 | 中国工商银行股份有限公司 | Method and device for determining moving path of robot and electronic equipment |
CN114911244A (en) * | 2021-06-17 | 2022-08-16 | 北京博创联动科技有限公司 | Ridge obstacle avoidance control method and device and agricultural automatic driving equipment |
CN115248430A (en) * | 2021-09-23 | 2022-10-28 | 上海仙途智能科技有限公司 | Target object positioning method, device, terminal and medium |
CN115855068A (en) * | 2023-02-24 | 2023-03-28 | 派欧尼尔环境净化工程(北京)有限公司 | Robot path autonomous navigation method and system based on BIM |
US11638997B2 (en) * | 2018-11-27 | 2023-05-02 | Cloudminds (Beijing) Technologies Co., Ltd. | Positioning and navigation method for a robot, and computing device thereof |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108012326B (en) * | 2017-12-07 | 2019-06-11 | 珠海市一微半导体有限公司 | The method and chip of robot monitoring pet based on grating map |
CN110095752B (en) * | 2019-05-07 | 2021-08-10 | 百度在线网络技术(北京)有限公司 | Positioning method, apparatus, device and medium |
CN110398964B (en) * | 2019-07-16 | 2022-02-01 | 浙江大学 | Low-energy-loss robot full-coverage path planning method and system |
CN114174766A (en) * | 2019-08-06 | 2022-03-11 | 波士顿动力公司 | Intermediate path point generator |
CN110370287B (en) * | 2019-08-16 | 2022-09-06 | 中铁第一勘察设计院集团有限公司 | Subway train inspection robot path planning system and method based on visual guidance |
CN112805534B (en) * | 2019-08-27 | 2024-05-17 | 北京航迹科技有限公司 | System and method for locating a target object |
CN111427979B (en) * | 2020-01-15 | 2021-12-21 | 深圳市镭神智能系统有限公司 | Dynamic map construction method, system and medium based on laser radar |
CN111272172A (en) * | 2020-02-12 | 2020-06-12 | 深圳壹账通智能科技有限公司 | Unmanned aerial vehicle indoor navigation method, device, equipment and storage medium |
JP7503426B2 (en) | 2020-06-12 | 2024-06-20 | 株式会社竹中工務店 | Map conversion system and map conversion program |
WO2022016152A1 (en) * | 2020-07-17 | 2022-01-20 | Path Robotics, Inc. | Real time feedback and dynamic adjustment for welding robots |
CN112598737B (en) * | 2020-12-24 | 2024-08-16 | 中建科技集团有限公司 | Indoor robot positioning method, device, terminal equipment and storage medium |
JP2024508563A (en) | 2021-02-24 | 2024-02-27 | パス ロボティクス, インコーポレイテッド | autonomous welding robot |
CN113276130B (en) * | 2021-05-28 | 2022-10-04 | 山东大学 | Free-form surface spraying path planning method and system based on point cloud slice |
CN114299039B (en) * | 2021-12-30 | 2022-08-19 | 广西大学 | Robot and collision detection device and method thereof |
CN115576332B (en) * | 2022-12-07 | 2023-03-24 | 广东省科学院智能制造研究所 | Task-level multi-robot collaborative motion planning system and method |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110097140A (en) * | 2010-02-24 | 2011-08-31 | 삼성전자주식회사 | Apparatus for estimating location of moving robot and method thereof |
KR101615687B1 (en) * | 2014-05-27 | 2016-04-26 | 한국생산기술연구원 | Collision detection robot remote control system and method thereof |
US10614204B2 (en) * | 2014-08-28 | 2020-04-07 | Facetec, Inc. | Facial recognition authentication system including path parameters |
US11307042B2 (en) * | 2015-09-24 | 2022-04-19 | Allstate Insurance Company | Three-dimensional risk maps |
US20190227553A1 (en) * | 2015-11-04 | 2019-07-25 | Zoox, Inc. | Interactive autonomous vehicle command controller |
US9754490B2 (en) * | 2015-11-04 | 2017-09-05 | Zoox, Inc. | Software application to request and control an autonomous vehicle service |
WO2017079341A2 (en) * | 2015-11-04 | 2017-05-11 | Zoox, Inc. | Automated extraction of semantic information to enhance incremental mapping modifications for robotic vehicles |
CN106052691B (en) * | 2016-05-17 | 2019-03-15 | 武汉大学 | Close ring error correction method in the mobile drawing of laser ranging |
CN106056664B (en) * | 2016-05-23 | 2018-09-21 | 武汉盈力科技有限公司 | A kind of real-time three-dimensional scene reconstruction system and method based on inertia and deep vision |
CN106052697B (en) * | 2016-05-24 | 2017-11-14 | 百度在线网络技术(北京)有限公司 | Unmanned vehicle, unmanned vehicle localization method, device and system |
US10739786B2 (en) * | 2016-07-01 | 2020-08-11 | Uatc, Llc | System and method for managing submaps for controlling autonomous vehicles |
CN106296693B (en) * | 2016-08-12 | 2019-01-08 | 浙江工业大学 | Based on 3D point cloud FPFH feature real-time three-dimensional space-location method |
CN106443641B (en) * | 2016-09-28 | 2019-03-08 | 中国林业科学研究院资源信息研究所 | A kind of laser radar scanning homogeneity measurement method |
EP3324209A1 (en) * | 2016-11-18 | 2018-05-23 | Dibotics | Methods and systems for vehicle environment map generation and updating |
WO2018136889A1 (en) * | 2017-01-23 | 2018-07-26 | Built Robotics Inc. | Excavating earth from a dig site using an excavation vehicle |
CN106940186B (en) * | 2017-02-16 | 2019-09-24 | 华中科技大学 | A kind of robot autonomous localization and navigation methods and systems |
CN108571967B (en) * | 2017-03-13 | 2020-06-26 | 深圳市朗驰欣创科技股份有限公司 | Positioning method and device |
US10539676B2 (en) * | 2017-03-22 | 2020-01-21 | Here Global B.V. | Method, apparatus and computer program product for mapping and modeling a three dimensional structure |
CN106960445A (en) * | 2017-03-31 | 2017-07-18 | 卢涵宇 | A kind of cloud motion vector calculating method based on pyramid light stream |
CN108036793B (en) * | 2017-12-11 | 2021-07-23 | 北京奇虎科技有限公司 | Point cloud-based positioning method and device and electronic equipment |
WO2019127347A1 (en) * | 2017-12-29 | 2019-07-04 | 深圳前海达闼云端智能科技有限公司 | Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product |
CN108319655B (en) * | 2017-12-29 | 2021-05-07 | 百度在线网络技术(北京)有限公司 | Method and device for generating grid map |
CN108230379B (en) * | 2017-12-29 | 2020-12-04 | 百度在线网络技术(北京)有限公司 | Method and device for fusing point cloud data |
US11294060B2 (en) * | 2018-04-18 | 2022-04-05 | Faraday & Future Inc. | System and method for lidar-based vehicular localization relating to autonomous navigation |
CN108303099B (en) * | 2018-06-14 | 2018-09-28 | 江苏中科院智能科学技术应用研究院 | Autonomous navigation method in unmanned plane room based on 3D vision SLAM |
US10909866B2 (en) * | 2018-07-20 | 2021-02-02 | Cybernet Systems Corp. | Autonomous transportation system and methods |
DK180774B1 (en) * | 2018-10-29 | 2022-03-04 | Motional Ad Llc | Automatic annotation of environmental features in a map during navigation of a vehicle |
CN109540142B (en) * | 2018-11-27 | 2021-04-06 | 达闼科技(北京)有限公司 | Robot positioning navigation method and device, and computing equipment |
- 2018-11-27: CN application CN201811426508.2A granted as CN109540142B (active)
- 2019-11-27: US application US16/698,475 granted as US11638997B2 (active)
- 2019-12-16: US application US16/715,897 published as US20200164513A1 (abandoned)
Also Published As
Publication number | Publication date |
---|---|
CN109540142A (en) | 2019-03-29 |
US11638997B2 (en) | 2023-05-02 |
CN109540142B (en) | 2021-04-06 |
US20200164521A1 (en) | 2020-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11638997B2 (en) | Positioning and navigation method for a robot, and computing device thereof | |
US11030803B2 (en) | Method and apparatus for generating raster map | |
KR102145109B1 (en) | Methods and apparatuses for map generation and moving entity localization | |
CN109270545B (en) | Positioning true value verification method, device, equipment and storage medium | |
CN109285188B (en) | Method and apparatus for generating position information of target object | |
CN108279670B (en) | Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory | |
CN108764187A (en) | Extract method, apparatus, equipment, storage medium and the acquisition entity of lane line | |
KR20210111181A (en) | Method, apparatus, device and medium for detecting environmental change | |
US20200380742A1 (en) | Systems and methods for generating road map | |
WO2023087758A1 (en) | Positioning method, positioning apparatus, computer-readable storage medium, and computer program product | |
CN114398455B (en) | Heterogeneous multi-robot collaborative SLAM map fusion method | |
EP4194807A1 (en) | High-precision map construction method and apparatus, electronic device, and storage medium | |
AU2022223991A1 (en) | Computer vision systems and methods for supplying missing point data in point clouds derived from stereoscopic image pairs | |
CN113091737A (en) | Vehicle-road cooperative positioning method and device, automatic driving vehicle and road side equipment | |
CN114186007A (en) | High-precision map generation method and device, electronic equipment and storage medium | |
CN113091736A (en) | Robot positioning method, device, robot and storage medium | |
US20240221215A1 (en) | High-precision vehicle positioning | |
JP2022130588A (en) | Registration method and apparatus for autonomous vehicle, electronic device, and vehicle | |
CN113932796A (en) | High-precision map lane line generation method and device and electronic equipment | |
CN113920158A (en) | Training and traffic object tracking method and device of tracking model | |
CN115239899B (en) | Pose map generation method, high-precision map generation method and device | |
CN115619954A (en) | Sparse semantic map construction method, device, equipment and storage medium | |
CN110389349B (en) | Positioning method and device | |
CN114419564A (en) | Vehicle pose detection method, device, equipment, medium and automatic driving vehicle | |
CN110136181B (en) | Method and apparatus for generating information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION |