WO2022127572A1 - Robot three-dimensional map pose display method, apparatus, device, and storage medium - Google Patents

Robot three-dimensional map pose display method, apparatus, device, and storage medium

Info

Publication number
WO2022127572A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
dimensional map
dimensional
map
constructed
Prior art date
Application number
PCT/CN2021/134005
Other languages
English (en)
French (fr)
Other versions
WO2022127572A9 (zh)
Inventor
于炀
吴震
Original Assignee
北京石头创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京石头创新科技有限公司 filed Critical 北京石头创新科技有限公司
Priority to EP21905501.9A priority Critical patent/EP4261789A1/en
Priority to US18/257,346 priority patent/US20240012425A1/en
Publication of WO2022127572A1 publication Critical patent/WO2022127572A1/zh
Publication of WO2022127572A9 publication Critical patent/WO2022127572A9/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20068Projection on vertical or horizontal image axis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts

Definitions

  • The present disclosure is based on the Chinese patent application No. 202011471918.6, entitled "Robot 3D Map Pose Display Method, Device, Equipment and Storage Medium" and filed on December 14, 2020, and claims the priority of that Chinese patent application.
  • The entire contents of the Chinese patent application are hereby incorporated by reference into the present disclosure.
  • the present disclosure relates to the technical field of data processing, and in particular, to a method, apparatus, device, and readable storage medium for displaying a three-dimensional map pose of a robot.
  • LDS: Laser Distance Sensor, a laser ranging sensor typically installed on the robot to measure distances to obstacles in the working area.
  • The present disclosure provides a method, apparatus, device, and readable storage medium for displaying a three-dimensional map pose of a robot, which overcome, at least to a certain extent, the technical problem that the related art cannot obtain the height information of obstacles in the area where the robot is located.
  • A method for displaying a three-dimensional map pose of a robot includes: obtaining a three-dimensional map of the space where the robot is located; obtaining a two-dimensional map constructed by the robot; matching the three-dimensional map with the two-dimensional map constructed by the robot to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot; obtaining the pose of the robot on the two-dimensional map constructed by the robot; and displaying the pose of the robot in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • The matching of the three-dimensional map with the two-dimensional map constructed by the robot to obtain the correspondence between them includes: acquiring an effective part of the three-dimensional map; projecting the effective part of the three-dimensional map onto the horizontal plane to obtain a two-dimensional projected map; and matching the two-dimensional projected map with the two-dimensional map constructed by the robot to obtain the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot.
  • The obstacle data acquired in the process of constructing the two-dimensional map by the robot is three-dimensional data; acquiring the effective part of the three-dimensional map includes: determining the scanning range of the robot according to the three-dimensional data; and determining that the part of the three-dimensional map located within the scanning range of the robot is the effective part of the three-dimensional map.
  • The matching of the two-dimensional projected map with the two-dimensional map constructed by the robot to obtain the correspondence between them includes: matching the two-dimensional projected map with the two-dimensional map constructed by the robot using a method of maximizing the overlapping area; and obtaining the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot when their overlapping area is the largest.
  • The matching of the three-dimensional map with the two-dimensional map constructed by the robot to obtain the correspondence between them includes: acquiring the mark of a designated obstacle in the three-dimensional map; acquiring the mark of the designated obstacle in the two-dimensional map constructed by the robot; and matching the mark of the designated obstacle in the three-dimensional map with the mark of the designated obstacle in the two-dimensional map constructed by the robot to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • the designated obstacles include a plurality of designated obstacles, and the plurality of designated obstacles are not located on a straight line.
  • the designated obstacles include charging piles and walls.
  • the method further includes: modifying the three-dimensional map according to the two-dimensional map constructed by the robot when the robot constructs the two-dimensional map.
  • the method further includes: displaying the three-dimensional model of the robot and the three-dimensional map in an equal scale.
  • A robot three-dimensional map pose display apparatus includes: a three-dimensional map acquisition module for acquiring a three-dimensional map of the space where the robot is located; a constructed map acquisition module for acquiring the two-dimensional map constructed by the robot; a map matching module for matching the three-dimensional map with the two-dimensional map constructed by the robot and obtaining the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot; a pose acquisition module for obtaining the pose of the robot on the two-dimensional map constructed by the robot; and a three-dimensional display module for displaying the pose of the robot in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • The map matching module includes: a map selection module for acquiring the effective part of the three-dimensional map; a two-dimensional projection module for projecting the effective part of the three-dimensional map onto a horizontal plane to obtain a two-dimensional projected map; and a two-dimensional map matching module, configured to match the two-dimensional projected map with the two-dimensional map constructed by the robot and obtain the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot.
  • The obstacle data acquired in the process of constructing the two-dimensional map by the robot is three-dimensional data; the map selection module is further configured to: determine the scanning range of the robot according to the three-dimensional data; and determine that the part of the three-dimensional map located within the scanning range of the robot is the effective part of the three-dimensional map.
  • The two-dimensional map matching module is further configured to: match the two-dimensional projected map with the two-dimensional map constructed by the robot using a method of maximizing the overlapping area; and obtain the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot when their overlapping area is the largest.
  • The map matching module further includes: a first obstacle mark acquisition module for obtaining the mark of a designated obstacle in the three-dimensional map; a second obstacle mark acquisition module for obtaining the mark of the designated obstacle in the two-dimensional map constructed by the robot; and a marker matching module for matching the mark of the designated obstacle in the three-dimensional map with the mark of the designated obstacle in the two-dimensional map constructed by the robot to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • the designated obstacles include a plurality of designated obstacles, and the plurality of designated obstacles are not located on a straight line.
  • the designated obstacles include charging piles and walls.
  • the apparatus further includes: a three-dimensional map correction module, configured to modify the three-dimensional map according to the two-dimensional map constructed by the robot when the robot constructs the two-dimensional map.
  • the three-dimensional display module is further configured to display the three-dimensional model of the robot and the three-dimensional map in an equal scale.
  • An apparatus includes: a memory, a processor, and executable instructions stored in the memory and executable on the processor, wherein the processor implements any of the above methods when executing the executable instructions.
  • a computer-readable storage medium on which computer-executable instructions are stored, and when the executable instructions are executed by a processor, implement any of the above methods.
  • By obtaining the 3D map of the space where the robot is located and the 2D map constructed by the robot, and matching the 3D map with the 2D map constructed by the robot, the correspondence between the 3D map and the 2D map constructed by the robot is obtained; the robot's pose is then displayed in the 3D map according to the robot's pose on the 2D map constructed by the robot and the correspondence between the 3D map and the 2D map constructed by the robot, so that the height information of obstacles in the area where the robot is located can be obtained.
  • FIG. 1 shows a schematic diagram of a system structure in an embodiment of the present disclosure.
  • FIG. 2 shows a flowchart of a method for displaying a three-dimensional map pose of a robot in an embodiment of the present disclosure.
  • FIG. 3A shows a schematic diagram of a three-dimensional map of a space where a robot is located in an embodiment of the present disclosure.
  • FIG. 3B shows a schematic diagram of a three-dimensional map of a space where another robot is located in an embodiment of the present disclosure.
  • FIG. 3C shows a schematic diagram of a three-dimensional map of a space where a robot is located in an embodiment of the present disclosure.
  • FIG. 4A shows a schematic diagram of an AR device for drawing a three-dimensional map in an embodiment of the present disclosure.
  • FIG. 4B shows a schematic diagram of another AR device for drawing a three-dimensional map in an embodiment of the present disclosure.
  • FIG. 5 shows a flowchart of a method for matching a three-dimensional map and a two-dimensional grid map in an embodiment of the present disclosure.
  • FIG. 6 shows a schematic diagram of a process of matching a two-dimensional projection map and a two-dimensional grid map in an embodiment of the present disclosure.
  • FIG. 7 shows a schematic diagram of the processing procedure of step S502 shown in FIG. 5 in an embodiment.
  • Fig. 8 is a flowchart of another method for matching a three-dimensional map and a two-dimensional grid map according to an exemplary embodiment.
  • Fig. 9A is a flowchart showing a working method of a robot according to an exemplary embodiment.
  • Fig. 9B is a schematic diagram showing the architecture of a cleaning robot according to an exemplary embodiment.
  • FIG. 10 shows a block diagram of a robot three-dimensional map pose display device in an embodiment of the present disclosure.
  • FIG. 11 shows a block diagram of another robot three-dimensional map pose display device in an embodiment of the present disclosure.
  • FIG. 12 shows a block diagram of a robot three-dimensional map pose display system in an embodiment of the present disclosure.
  • FIG. 13 shows a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments can be embodied in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
  • the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale.
  • the same reference numerals in the drawings denote the same or similar parts, and thus their repeated descriptions will be omitted.
  • The term "plural" means at least two, such as two, three, etc., unless expressly and specifically defined otherwise.
  • The symbol "/" generally indicates that the associated objects are in an "or" relationship.
  • The term "connection" should be interpreted in a broad sense; for example, it may be an electrical connection or a communication connection, and it may be a direct connection or an indirect connection through an intermediate medium.
  • The present disclosure provides a robot three-dimensional map pose display method: a three-dimensional map of the space where the robot is located and a two-dimensional grid map constructed by the robot are obtained, the three-dimensional map and the two-dimensional grid map are matched to obtain their correspondence, and the pose of the robot is then displayed in the three-dimensional map according to the robot's pose on the two-dimensional grid map and the correspondence between the three-dimensional map and the two-dimensional grid map, so that the height information of obstacles in the area where the robot is located can be obtained.
  • the following first explains several terms involved in the present disclosure.
  • An intelligent robot is a comprehensive system that integrates environmental perception, dynamic decision-making and planning, behavior control and execution, and so on.
  • the research results represent the highest achievement of mechatronics, which is one of the most active fields of scientific and technological development.
  • Intelligent robots can be divided into fixed robots and mobile robots by mobile methods. Fixed robots such as robotic arms are widely used in industry.
  • Mobile robots can be divided, by moving method, into wheeled mobile robots, walking mobile robots, crawler (tracked) mobile robots, crawling robots, creeping robots, and swimming robots; by working environment, into indoor mobile robots and outdoor mobile robots; by control system structure, into functional (horizontal) structure robots, behavioral (vertical) structure robots, and hybrid robots; and by function and use, into medical robots, military robots, robots for the disabled, cleaning robots, and so on. With the continuous improvement of robot performance, the application range of mobile robots has greatly expanded: they are used not only in industry, agriculture, medical care, services, and other industries, but also in harmful and dangerous situations such as urban security, national defense, and space exploration.
  • a mobile robot is a robotic system composed of sensors, remote manipulators and automatically controlled mobile carriers.
  • Mobile robots have mobile functions and, compared with stationary robots, offer greater maneuverability and flexibility in replacing people in dangerous and harsh environments (such as radiation or toxic environments) and in environments beyond human reach (such as space or underwater).
  • Augmented reality is a technology that "seamlessly" integrates real-world information and virtual-world information. Information that is difficult to experience directly in the real world (visual information, sound, taste, touch, etc.) is simulated by computer and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by human senses; the real environment and virtual objects are superimposed in the same picture or space in real time, achieving a sensory experience beyond reality.
  • FIG. 1 shows an exemplary system architecture 10 to which the robot three-dimensional map pose display method or the robot three-dimensional map pose display device of the present disclosure may be applied.
  • the system architecture 10 may include terminal devices 102 , a network 104 , a server 106 and a database 108 .
  • The terminal device 102 can be any of various electronic devices with a display screen that support input and output, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, AR headsets, mobile robots (such as cleaning robots and guide robots), and so on.
  • the network 104 is the medium used to provide the communication link between the terminal device 102 and the server 106 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • The server 106 may be a server or a server cluster that provides various services, for example, a background processing server that provides map-modeling support for the environment-perception data sent by the cleaning robot 102.
  • the database 108 may be a warehouse that organizes, stores, and manages data according to a data structure, including but not limited to a relational database, a cloud database, and the like, such as a database that stores map data of a robot's work area.
  • the user can use the terminal device 102 to interact with the server 106 and the database 108 through the network 104 to receive or send data and the like.
  • the server 106 may also receive data from the database 108 or send data to the database 108 through the network 104, or the like.
  • The server 106 can plan a working route for the cleaning robot and send the planned working route information to the AR headset 102 through the network 104, and the user can view the simulated working route of the cleaning robot in the AR map through the AR headset.
  • The numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative; there can be any number of terminal devices, networks, and servers according to implementation needs.
  • Fig. 2 is a flow chart of a method for displaying a robot's three-dimensional map pose according to an exemplary embodiment.
  • the method shown in FIG. 2 can be applied to, for example, the server side of the above-mentioned system, and can also be applied to the terminal device of the above-mentioned system.
  • the method 20 provided by the embodiment of the present disclosure may include the following steps.
  • In step S202, a three-dimensional map of the space where the robot is located is obtained.
  • A 3D map drawing device can be set up in the space where the robot works, and an augmented reality application on the device (such as an AR application based on ARkit or ARcore) can be used for drawing: a dense point cloud of the space is first obtained, a 3D mesh model of the space is then generated based on the point cloud, and the model is finally textured, i.e., textures are mapped onto the 3D mesh model of the space through coordinates, which completes the 3D map drawing.
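  • By way of illustration only, the point-cloud-to-mesh part of this pipeline can be sketched with the open-source Open3D library; the library choice, placeholder data, and parameters below are assumptions for the example, not the AR framework's internal implementation, and texturing is omitted.

```python
# Minimal sketch: dense point cloud -> 3D mesh model of the space.
import numpy as np
import open3d as o3d

# Assume `points` is the dense point cloud captured while scanning the room (N x 3, metres).
points = np.random.rand(5000, 3)  # placeholder data for illustration only
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
pcd.estimate_normals()

# Generate a triangle mesh (the "3D Mesh model" of the space) from the point cloud.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Texturing would then map captured images onto `mesh` via coordinates;
# that step is framework-specific and not shown here.
```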
  • FIG. 3A to FIG. 3C show schematic diagrams of the 3D map of the space where the robot is located from three perspectives: FIG. 3A is a schematic panoramic view of the 3D map of the space where the robot is located, FIG. 3B is a schematic view of the 3D map from another perspective, and FIG. 3C is a schematic top view of the 3D map of the space where the robot is located.
  • ARkit is an AR application framework. By combining hardware of mobile devices such as cameras, motion sensors, and graphics processors with algorithms such as depth sensing and artificial light rendering, it allows developers to implement AR applications (such as 3D mapping applications) on mobile devices. ARcore provides similar capabilities to ARkit.
  • the 3D map can be drawn by a device with a 3D map drawing function.
  • an AR device including an AR drawing device can be used to draw a 3D map
  • the AR drawing device can be a mobile terminal device with a 3D map drawing function.
  • FIG. 4A is a schematic diagram of the AR drawing device CANVAS iPAD, and FIG. 4B is a schematic diagram of the AR drawing device iPAD Pro.
  • the 3D map drawn by the AR device can be sent to the 3D display device, where the 3D display device can be an AR display device, and the AR display device can be included in the AR device.
  • the AR display device may be a mobile terminal device capable of displaying a 3D map and a preset model, including but not limited to: iPAD, iPhone, and other devices having a display function.
  • the AR drawing device and the AR display device may be the same device.
  • In step S204, a two-dimensional map constructed by the robot is obtained.
  • the robot can build a two-dimensional map in real time during the moving process.
  • For example, the cleaning robot can measure the distance between itself and various obstacles in the working area through the installed LDS during the cleaning process, so as to draw a real-time map of the area.
  • A variety of lidar simultaneous localization and mapping (SLAM) methods can be used to draw the real-time map, such as the optimization-based HectorSLAM algorithm (solving a least-squares problem), the particle-filter-based Gmapping algorithm, Cartographer, and so on, where Cartographer is a 2D and 3D SLAM library supported by Google's open-source Robot Operating System (ROS).
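  • As a minimal sketch (not the HectorSLAM/Gmapping/Cartographer pipelines named above), LDS range readings taken around a known robot pose can be rasterised into a 2D occupancy grid roughly as follows; all names, sizes, and parameters are illustrative assumptions.

```python
import numpy as np

def mark_obstacles(grid, robot_xy, robot_theta, ranges, angles, resolution=0.05):
    """Mark the endpoints of laser beams as occupied cells in `grid`.

    grid        -- 2D numpy array (0 = free/unknown, 1 = occupied)
    robot_xy    -- (x, y) of the robot in metres, grid origin at (0, 0)
    robot_theta -- robot heading in radians
    ranges      -- measured distances for each beam, in metres
    angles      -- beam angles relative to the robot heading, in radians
    resolution  -- size of one grid cell in metres
    """
    for r, a in zip(ranges, angles):
        # Obstacle position in world coordinates.
        ox = robot_xy[0] + r * np.cos(robot_theta + a)
        oy = robot_xy[1] + r * np.sin(robot_theta + a)
        i, j = int(oy / resolution), int(ox / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

grid = np.zeros((200, 200))  # a 10 m x 10 m area at 5 cm resolution
grid = mark_obstacles(grid, robot_xy=(5.0, 5.0), robot_theta=0.0,
                      ranges=[2.0, 3.5], angles=[0.0, np.pi / 2])
```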
  • the acquiring of the three-dimensional map of the space where the robot is located involved in step S202 may be performed in advance, and the acquired map is stored in the server or terminal device or the like. In addition, this step may also be performed simultaneously with the obtaining of the two-dimensional map constructed by the robot involved in step S204. This embodiment of the present disclosure does not limit this.
  • In step S206, the three-dimensional map and the two-dimensional map constructed by the robot are matched to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • the objects in the two-dimensional map can be correspondingly displayed in the three-dimensional map through the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • The three-dimensional map can be projected toward the ground plane to generate a two-dimensional projected map, and the correspondence between the two-dimensional projected map and the three-dimensional map can be obtained; the two-dimensional projected map can then be matched with the two-dimensional map constructed by the robot.
  • An optimization algorithm can be used to iterate until the correspondence between the two maps at which their overlapping area is the largest is obtained, and the correspondence between the three-dimensional map and the two-dimensional map can then be obtained through this association.
  • For the specific method, please refer to FIG. 5; it will not be described in detail here.
  • the three-dimensional map and the two-dimensional map can be matched according to the positions of the markers in the three-dimensional map and the corresponding markers in the two-dimensional map.
  • For the specific method, please refer to FIG. 8; it will not be described in detail here.
  • In step S208, the pose of the robot on the two-dimensional map constructed by the robot is obtained.
  • The pose of the robot can be obtained using a variety of internal sensors (such as an odometer, a compass, and an accelerometer) and external sensors (such as laser ranging).
  • The obtaining of the pose of the robot on the two-dimensional map involved in step S208 may be performed while the robot is constructing the two-dimensional map, or may be performed after the two-dimensional map is constructed; this is not limited in this embodiment of the present disclosure.
  • In step S210, the pose of the robot is displayed in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • After the correspondence between the three-dimensional map and the two-dimensional map is obtained, the pose of the robot on the two-dimensional map can be mapped into the three-dimensional map; the three-dimensional model of the robot and the three-dimensional map can be displayed at the same scale, so that the real-time pose of the robot in the three-dimensional map can be observed intuitively on the AR display device.
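  • As a concrete illustration, once a ground-plane rotation and translation between the two maps is known, mapping the robot's 2D pose into the 3D map reduces to a planar rigid transform. The sketch below assumes such a planar model and illustrative names; it is not the disclosure's exact implementation.

```python
import numpy as np

def pose_2d_to_3d(pose_2d, theta, t, robot_height=0.0):
    """Map a robot pose (x, y, heading) from the 2D map frame to the 3D map frame.

    theta, t     -- rotation (radians) and translation (tx, ty) obtained from map matching
    robot_height -- z coordinate assigned to the robot model in the 3D map
    """
    x, y, heading = pose_2d
    c, s = np.cos(theta), np.sin(theta)
    x3 = c * x - s * y + t[0]
    y3 = s * x + c * y + t[1]
    return (x3, y3, robot_height, heading + theta)

# Example: a pose at (1.0, 2.0) with heading 0 rad, under a 90-degree map rotation and a shift.
print(pose_2d_to_3d((1.0, 2.0, 0.0), theta=np.pi / 2, t=(0.5, -0.3)))
```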
  • In the method for displaying a 3D map pose of a robot provided by an embodiment of the present disclosure, a 3D map of the space where the robot is located and a 2D map constructed by the robot are obtained, the 3D map and the 2D map are matched to obtain their correspondence, and the pose of the robot is then displayed in the 3D map according to the robot's pose on the 2D map and the correspondence between the 3D map and the 2D map, so that the height information of obstacles in the area where the robot is located can be obtained.
  • Fig. 5 is a flowchart of a method for matching a three-dimensional map and a two-dimensional map according to an exemplary embodiment.
  • the method shown in FIG. 5 can be applied to, for example, the server side of the above-mentioned system, and can also be applied to the terminal device of the above-mentioned system.
  • the method 50 provided by the embodiment of the present disclosure may include the following steps.
  • In step S502, the effective part of the three-dimensional map is acquired.
  • The robot is equipped with an LDS, which enables the robot to measure surrounding obstacles and draw the two-dimensional map.
  • The LDS mounted on the robot may have a certain field of view in the vertical direction, so the range scanned by the LDS may also be a three-dimensional area. However, the LDS cannot scan the higher positions in the environment where the robot is located, so the two-dimensional map generated based on the LDS only includes the part of the robot's environment that is close to the ground.
  • A portion of the corresponding height can be selected from the 3D map according to the mounting height of the LDS and its field of view in the vertical direction, so as to obtain the part of the 3D map corresponding to the scanning range of the robot's LDS. For example, the point cloud of the three-dimensional map outside the robot's scanning range can be filtered out, and the point cloud within the scanning range, together with its correspondence to the three-dimensional map, is obtained as the partial (effective) three-dimensional map.
  • For the specific method, refer to FIG. 7; it will not be described in detail here.
  • In step S504, the effective part of the three-dimensional map is projected onto the horizontal plane to obtain a two-dimensional projected map.
  • The point cloud of the partial three-dimensional map can be projected toward the ground plane according to its correspondence with the three-dimensional map to obtain the two-dimensional projected map.
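  • As an illustrative sketch of this projection step, the points of the effective part can simply have their height coordinate dropped and then be rasterised into a 2D grid; the function name, grid resolution, and sample data below are assumptions for the example.

```python
import numpy as np

def project_to_2d_grid(points, resolution=0.05, size=(200, 200)):
    """Project 3D points (N x 3, metres) onto the ground plane as a binary occupancy grid."""
    grid = np.zeros(size, dtype=np.uint8)
    for x, y, _z in points:          # the vertical coordinate is discarded
        i, j = int(y / resolution), int(x / resolution)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i, j] = 1
    return grid

effective_points = np.array([[1.0, 2.0, 0.1], [3.0, 0.5, 0.15]])  # illustrative data
projected_map = project_to_2d_grid(effective_points)
```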
  • In step S506, the two-dimensional projected map and the two-dimensional map constructed by the robot are matched to obtain the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot.
  • The method of maximizing the overlapping area can be used to match the two-dimensional projected map with the two-dimensional grid map: the two-dimensional projected map is represented in the coordinate system of the two-dimensional grid map (or the two-dimensional grid map is represented in the coordinate system of the two-dimensional projected map), the overlapping area of the two maps is calculated while operations such as rotation and translation are performed, and the process is iterated to obtain the correspondence between the two-dimensional projected map and the two-dimensional grid map when their overlapping area is the largest.
  • FIG. 6 shows a schematic diagram of a process of matching a two-dimensional projected map and a two-dimensional map constructed by a robot.
  • As shown in FIG. 6, the overlapping area of the two-dimensional projected map 602 and the two-dimensional map 604 gradually increases until the two maps are close to coinciding; when the two-dimensional projected map 602 and the two-dimensional map 604 coincide, the matching is completed and the rotation and translation parameters can be obtained.
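  • The overlap-maximisation idea can be illustrated with a brute-force search over rotations and translations of two binary occupancy grids. Real implementations would use a proper optimiser or scan matcher; the exhaustive grid search, the scipy-based rotation, and all names below are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def match_by_overlap(proj_map, robot_map,
                     angles_deg=range(0, 360, 5), shifts=range(-20, 21, 2)):
    """Return the (angle, dx, dy) that maximises the overlap of two binary occupancy grids."""
    best, best_params = -1, None
    for angle in angles_deg:
        # Rotate the projected map without changing its shape (nearest-neighbour sampling).
        rotated = rotate(proj_map, angle, reshape=False, order=0)
        for dx in shifts:
            for dy in shifts:
                shifted = np.roll(np.roll(rotated, dy, axis=0), dx, axis=1)
                overlap = np.logical_and(shifted > 0, robot_map > 0).sum()
                if overlap > best:
                    best, best_params = overlap, (angle, dx, dy)
    return best_params, best

# Usage, with grids such as those from the earlier sketches:
# (angle, dx, dy), overlap = match_by_overlap(projected_map, grid)
```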
  • In step S508, the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot is determined according to the correspondence between the two-dimensional projected map and the two-dimensional map. Since the two-dimensional projected map is obtained from the three-dimensional map, the correspondence between the two-dimensional projected map and the three-dimensional map is known, and the correspondence between the three-dimensional map and the two-dimensional map can therefore be determined from the correspondence between the two-dimensional projected map and the two-dimensional map.
  • A two-dimensional projected map is obtained by projecting, toward the ground plane, the part of the three-dimensional map located within the scanning range of the robot; the two-dimensional projected map is then matched with the two-dimensional map constructed by the robot, and the correspondence between the three-dimensional map and the two-dimensional map is determined from the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot. This can effectively prevent objects in the three-dimensional map that are outside the scanning range of the robot from being projected into the two-dimensional projected map and degrading the matching result.
  • FIG. 7 shows a schematic diagram of the processing procedure of step S502 shown in FIG. 5 in an embodiment.
  • The 3D map includes the 3D point cloud of the space where the robot is located, and the partial (effective) 3D map includes the 3D point cloud of the space within the scanning range of the robot's LDS.
  • the foregoing step S502 may further include the following steps.
  • First, a three-dimensional point cloud not exceeding the scanning range of the robot's LDS is selected from the three-dimensional point cloud.
  • The selection can be based on the coordinates of the three-dimensional point cloud along the axis of the map coordinate system (such as the earth coordinate system) that is perpendicular to the ground.
  • For example, the scanning height of the cleaning robot's LDS in the vertical direction may be 15 cm, 20 cm, 25 cm, and so on.
  • In step S5024, the partial three-dimensional map is obtained based on the three-dimensional point cloud not exceeding the scanning range of the robot's LDS.
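  • With the map coordinate system's z axis perpendicular to the ground, this selection reduces to a height threshold on the point cloud. A minimal sketch follows, using the 20 cm scan height from the examples above; the function name and data are illustrative assumptions.

```python
import numpy as np

def effective_point_cloud(points, scan_height=0.20):
    """Keep only points whose height (z, in metres) lies within the LDS scanning range."""
    points = np.asarray(points)
    return points[points[:, 2] <= scan_height]

cloud = np.array([[1.0, 2.0, 0.05], [1.5, 2.2, 0.18], [1.0, 2.0, 1.40]])  # illustrative data
print(effective_point_cloud(cloud, scan_height=0.20))  # the 1.40 m point is discarded
```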
  • Fig. 8 is a flowchart of another method for matching a three-dimensional map and a two-dimensional map according to an exemplary embodiment.
  • the method shown in FIG. 8 can be applied to, for example, the server side of the above-mentioned system, and can also be applied to the terminal device of the above-mentioned system.
  • the method 80 provided by this embodiment of the present disclosure may include steps S802 to S806.
  • In step S802, the mark of the designated obstacle in the three-dimensional map is acquired.
  • the AR scanning device can automatically identify designated obstacles during shooting, and obtain their marks in the three-dimensional map.
  • the designated obstacles can be, for example, charging piles, tables, chairs, walls, and so on. Wall planes can also be identified.
  • After a marker is photographed by the AR scanning device, it can be recognized by an object recognition algorithm on the server via the Internet; alternatively, photos of the markers in the cleaning environment can be pre-stored locally, and an object recognition algorithm on the local device can be used for matching and identification.
  • In step S804, the marks of the designated obstacles in the two-dimensional map constructed by the robot are acquired.
  • A shooting device can be installed on the robot; after the corresponding obstacles are identified through the Internet or local algorithms, the designated obstacles can be marked by the robot during the two-dimensional map drawing process.
  • In step S806, the marks of the designated obstacles in the three-dimensional map and the marks of the designated obstacles in the two-dimensional map constructed by the robot are matched to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot. When matching is performed using the marks of designated obstacles, at least three designated obstacles not on the same line are required (two of the marks are connected into a marked line), or alternatively one linear designated obstacle (such as the projection of a vertical wall plane onto the ground plane) plus one additional designated obstacle. The rotation parameters are first calculated from the marked line, and the translation parameters are then calculated by associating the feature points of the marks, thereby obtaining the rotation and translation parameters that match the three-dimensional map with the two-dimensional map.
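  • This rotation-then-translation computation can be illustrated with two corresponding marker positions in each map: the rotation is the angle between the two marked lines, and the translation aligns one marker after rotation. The 2D sketch below uses assumed names and sample points, not the disclosure's exact procedure.

```python
import numpy as np

def match_by_markers(markers_3d, markers_2d):
    """Estimate (theta, t) aligning two marker points from the 3D map's ground plane
    to the corresponding marker points in the robot's 2D map."""
    a0, a1 = np.asarray(markers_3d, dtype=float)
    b0, b1 = np.asarray(markers_2d, dtype=float)
    # Rotation: angle between the marked line in each map.
    theta = np.arctan2(*(b1 - b0)[::-1]) - np.arctan2(*(a1 - a0)[::-1])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Translation: align the first marker after rotation.
    t = b0 - R @ a0
    return theta, t

# Example: the marked line rotates by 90 degrees and shifts to (2, 3).
theta, t = match_by_markers([(0, 0), (1, 0)], [(2, 3), (2, 4)])
```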
  • the three-dimensional map and the two-dimensional map are matched by the designated obstacles identified during the mapping process, thereby improving the accuracy of map matching to a certain extent.
  • Fig. 9A is a flowchart showing a working method of a robot according to an exemplary embodiment.
  • the method shown in FIG. 9A can be applied to, for example, the server side of the above-mentioned system, and can also be applied to the terminal device of the above-mentioned system.
  • the method 90 provided by the embodiment of the present disclosure may include the following steps.
  • In step S902, a three-dimensional map of the space where the robot is located is obtained. After the AR scanning device draws the three-dimensional map of the space where the robot is located, it can share the map with the AR display device.
  • In step S904, the real-time scanning result of the robot is obtained.
  • the robot can scan the surrounding environment during the work process to obtain information about objects such as obstacles.
  • Figure 9B shows a sweeper architecture.
  • the sweeper is provided with front and side ToF (Time of Flight) sensor modules for sensing the environment.
  • The ToF sensor continuously sends light pulses to the target and receives the returned light with the sensor; the distance to the target object is obtained by detecting the flight (round-trip) time of the light pulses.
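  • A tiny worked example of this time-of-flight relation (distance = speed of light × round-trip time / 2); the function name and sample value are illustrative only.

```python
C = 299_792_458.0            # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target from the measured round-trip time of the light pulse."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(tof_distance(10e-9))   # ~1.499 m
```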
  • In a further step, the three-dimensional map is modified according to the real-time scanning result of the robot.
  • the robot can send the real-time scanning results to the AR display device, and the AR display device can supplement or correct the three-dimensional map according to the real-time scanning results of the robot.
  • the accuracy of the displayed 3D map can be improved by supplementing or correcting the 3D map generated by the AR scanning device according to the real-time scanning result of the robot.
  • Fig. 10 is a block diagram of a robot three-dimensional map pose display device according to an exemplary embodiment.
  • the apparatus shown in FIG. 10 can be applied to, for example, the server side of the above-mentioned system, and can also be applied to the terminal device of the above-mentioned system.
  • the apparatus 100 may include a three-dimensional map acquisition module 1002 , a construction map acquisition module 1004 , a map matching module 1006 , a pose acquisition module 1008 , and a three-dimensional display module 1010 .
  • the three-dimensional map obtaining module 1002 can be used to obtain a three-dimensional map of the space where the robot is located.
  • the construction map acquisition module 1004 may be used to acquire a two-dimensional map constructed by the robot.
  • the map matching module 1006 can be used to match the three-dimensional map with the two-dimensional map constructed by the robot, and obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • the pose obtaining module 1008 can be used to obtain the pose of the robot on the two-dimensional map constructed by the robot.
  • the three-dimensional display module 1010 can be configured to display the pose of the robot in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • Fig. 11 is a block diagram of a robot three-dimensional map pose display device according to an exemplary embodiment.
  • the apparatus shown in FIG. 11 can be applied to, for example, the server side of the above-mentioned system, and can also be applied to the terminal device of the above-mentioned system.
  • the apparatus 110 may include a three-dimensional map acquisition module 1102, a construction map acquisition module 1104, a map matching module 1106, a pose acquisition module 1108, a three-dimensional display module 1110, and a three-dimensional map correction module 1112; wherein,
  • the map matching module 1106 may include a map selection module 11062, a 2D projection module 11064, a 2D map matching module 11066, a 3D map matching module 11068, a first obstacle marker acquisition module 110692, a second obstacle marker acquisition module 110694, and marker matching Module 110696.
  • the three-dimensional map obtaining module 1102 can be used to obtain a three-dimensional map of the space where the robot is located.
  • the construction map acquisition module 1104 may be used to acquire a two-dimensional map constructed by the robot.
  • the obstacle data obtained in the process of building a two-dimensional map by the robot is three-dimensional data.
  • the map matching module 1106 can be used to match the three-dimensional map with the two-dimensional map constructed by the robot, and obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • The map selection module 11062 may be used to obtain the effective part of the three-dimensional map.
  • The map selection module 11062 can also be used to: determine the scanning range of the robot according to the three-dimensional data; and determine that the part of the three-dimensional map located within the scanning range of the robot is the effective part of the three-dimensional map.
  • the two-dimensional projection module 11064 can be used to project the effective part of the three-dimensional map to the horizontal plane to obtain a two-dimensional projected map.
  • the two-dimensional map matching module 11066 can be used to match the two-dimensional projected map with the two-dimensional map constructed by the robot, and obtain the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot.
  • The two-dimensional map matching module 11066 can also be used to match the two-dimensional projected map with the two-dimensional map constructed by the robot using the method of maximizing the overlapping area, and to obtain the correspondence between the two-dimensional projected map and the two-dimensional map constructed by the robot when their overlapping area is the largest.
  • the three-dimensional map matching module 11068 may be configured to determine the corresponding relationship between the three-dimensional map and the two-dimensional grid map according to the corresponding relationship between the two-dimensional projection map and the two-dimensional grid map.
  • the first obstacle mark obtaining module 110692 can be used to obtain the mark of the designated obstacle in the three-dimensional map.
  • There may be a plurality of designated obstacles, and the plurality of designated obstacles are not located on one straight line.
  • Designated obstacles include charging piles and walls.
  • the second obstacle marker obtaining module 110694 can be used to obtain the marker of the specified obstacle in the two-dimensional map constructed by the robot.
  • the marker matching module 110696 can be used to match the marker of the designated obstacle in the three-dimensional map with the marker of the designated obstacle in the two-dimensional map constructed by the robot, so as to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • the pose obtaining module 1108 can be used to obtain the pose of the robot on the two-dimensional map constructed by the robot.
  • the three-dimensional display module 1110 can be configured to display the pose of the robot in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.
  • the three-dimensional display module 1110 can also be used to display the three-dimensional model of the robot and the three-dimensional map in an equal scale.
  • the three-dimensional map correction module 1112 can be used to modify the three-dimensional map according to the two-dimensional map constructed by the robot when the robot constructs the two-dimensional map.
  • Fig. 12 is a block diagram of a robot three-dimensional map pose display system according to an exemplary embodiment.
  • The cleaning robot 1202 equipped with the depth sensor 12021 can be connected to the augmented reality scanning device 1204, and the cleaning robot 1202 can obtain a 3D map of its environment in real time through the augmented reality scanning device 1204; the cleaning robot 1202 can also obtain and save the 3D map drawn by the augmented reality scanning device 1204 at the first cleaning or when reset by the user.
  • The cleaning robot 1202 can be connected with the augmented reality display device 1206 and send the 2D grid map generated based on the depth sensor 12021, together with its pose, to the augmented reality display device 1206 in real time; it can also upload the point cloud of the observed obstacles to the augmented reality display device 1206.
  • The augmented reality scanning device 1204 can also be connected with the augmented reality display device 1206; after drawing the 3D map, the augmented reality scanning device 1204 can share it with the augmented reality display device 1206, so that the augmented reality display device 1206 matches the 2D grid map generated by the cleaning robot 1202 with the 3D map, obtains the correspondence between the 2D grid map and the 3D map, and saves it.
  • When the cleaning robot 1202 cleans the area again, it uploads its pose information to the augmented reality display device 1206 in real time, and the augmented reality display device 1206 displays the pose of the cleaning robot 1202 in the 3D map in real time according to the stored correspondence between the 2D grid map and the 3D map.
  • FIG. 13 shows a schematic structural diagram of an electronic device in an embodiment of the present disclosure. It should be noted that the device shown in FIG. 13 is only an example of a computer system, and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • The apparatus 1300 includes a central processing unit (CPU) 1301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage section 1308 into a random access memory (RAM) 1303.
  • In the RAM 1303, various programs and data necessary for the operation of the device 1300 are also stored.
  • the CPU 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304.
  • An input/output (I/O) interface 1305 is also connected to bus 1304 .
  • The following components are connected to the I/O interface 1305: an input section 1306 including a keyboard, a mouse, etc.; an output section 1307 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage section 1308 including a hard disk, etc.; and a communication section 1309 including a network interface card such as a LAN card, a modem, and the like.
  • the communication section 1309 performs communication processing via a network such as the Internet.
  • A drive 1310 is also connected to the I/O interface 1305 as needed.
  • a removable medium 1311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 1310 as needed so that a computer program read therefrom is installed into the storage section 1308 as needed.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication portion 1309, and/or installed from the removable medium 1311.
  • the computer-readable medium shown in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • a computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • the modules involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • the described modules can also be set in the processor, for example, it can be described as: a processor includes a three-dimensional map acquisition module, a construction map acquisition module, a map matching module, a pose acquisition module and a three-dimensional display module.
  • the names of these modules do not limit the module itself in some cases.
  • the three-dimensional map acquisition module can also be described as "a module that acquires a three-dimensional map from a connected AR drawing device".
  • the present disclosure also provides a computer-readable medium.
  • the computer-readable medium may be included in the device described in the above-mentioned embodiments, or it may exist alone without being assembled into the device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by a device, the device is caused to: obtain a three-dimensional map of the space where the robot is located; obtain a two-dimensional map constructed by the robot; match the three-dimensional map with the two-dimensional map constructed by the robot to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot; obtain the pose of the robot on the two-dimensional map constructed by the robot; and display the pose of the robot in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present disclosure provides a robot three-dimensional map pose display method, apparatus, device, and storage medium, relating to the technical field of data processing. The method includes: obtaining a three-dimensional map of the space where the robot is located; obtaining a two-dimensional map constructed by the robot; matching the three-dimensional map with the two-dimensional map constructed by the robot to obtain the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot; obtaining the pose of the robot on the two-dimensional map constructed by the robot; and displaying the pose of the robot in the three-dimensional map according to the pose of the robot on the two-dimensional map constructed by the robot and the correspondence between the three-dimensional map and the two-dimensional map constructed by the robot. The method makes it possible to obtain the height information of obstacles in the area where the robot is located.

Description

机器人三维地图位姿显示方法、装置、设备及存储介质
本公开基于申请号为202011471918.6、申请日为2020年12月14日的中国专利申请《机器人三维地图位姿显示方法、装置、设备及存储介质》提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及数据处理技术领域,具体而言,涉及一种机器人三维地图位姿显示方法、装置、设备及可读存储介质。
背景技术
随着计算机技术及人工智能技术的发展,出现了各种各样的具有智能系统的机器人,比如扫地机器人、拖地机器人、吸尘器、除草机等等。这些机器人可以在无用户操作的情况下,在某一区域自动行进并进行清洁或清除等操作。机器人中通常安装有激光测距传感器(Laser Distance Sensor,LDS),在工作过程中,机器人通过LDS测量与工作区域中的各种障碍物之间的距离,从而绘制所在区域的即时地图,并将所绘制的地图反馈给用户,使得用户可以掌握机器人所在区域的地图信息。
发明内容
本公开提供一种机器人三维地图位姿显示方法、装置、设备及可读存储介质,至少在一定程度上克服由于相关技术无法获取机器人所在区域中障碍物的高度信息的技术问题。
本公开的其他特性和优点将通过下面的详细描述变得显然,或部分地通过本公开的实践而习得。
根据本公开的一方面,提供一种机器人三维地图位姿显示方法,包括:获得所述机器人所在空间的三维地图;获得所述机器人构建的二维地图;将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系;获得所述机器人在所述机器人构建的二维地图上的位姿;根据所述机器人在所述机器人构建的二维地图上的位姿和所述三维地图与所述机器人构建的二维地图的对应关系在所述三维地图中显示所述机器人的位姿。
根据本公开的一实施例,所述将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系,包括:获取所述三维地图的有效部分;将所述三维地图的有效部分投影至水平面,获得二维投影地图;匹配所述二维投影地图与所述机器人构建的二维地图,获得所述二维投影地图与所述机器人构建的二维地图的对应关系。
根据本公开的一实施例,所述机器人构建二维地图过程中获取的障碍物数据为三维数据;所述获取所述三维地图的有效部分,包括:根据所述三维数据确定所述机器人的扫描范围;确定位于所述机器人的扫描范围内的三维地图为所述三维地图的有效部分。
根据本公开的一实施例,所述匹配所述二维投影地图与所述机器人构建的二维地图,获得所述二维投影地图与所述机器人构建的二维地图的对应关系,包括:采用重叠面积最大化的方法将所述二维投影地图与所述机器人构建的二维地图进行匹配;获得使所述二维投影地图与所述机器人构建的二维地图的重叠面积最大时所述二维投影地图与所述机器人构建的二维地图的对应关系。
根据本公开的一实施例,所述将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系,包括:获取指定障碍物在所述三维地图中的标记;获取所述指定障碍物在所述机器人构建的二维地图中的标记;匹配所述指定障碍物在所述三维地图中的标记和所述指定障碍物在所述机器人构建的二维地图中的标记,获得所述三维地图与所述机器人构建的二维地图的对应关系。
根据本公开的一实施例,所述指定障碍物包括多个,且多个所述指定障碍物不位于一条直线上。
根据本公开的一实施例,所述指定障碍物包括充电桩、墙壁。
根据本公开的一实施例,所述方法还包括:在所述机器人构建二维地图时根据所述机器人构建的二维地图对所述三维地图进行修改。
根据本公开的一实施例,所述方法还包括:将所述机器人的三维模型与所述三维地图等比例进行显示。
根据本公开的再一方面,提供一种机器人三维地图位姿显示装置,包括:三维地图获取模块,用于获得所述机器人所在空间的三维地图;构建地图获取模块,用于获得所述机器人构建的二维地图;地图匹配模块,用于将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系;位姿获取模块,用于获得所述机器人在所述机器人构建的二维地图上的位姿;三维显示模块,用于根据所述机器人在所述机器人构建的二维地图上的位姿和所述三维地图与所述机器人构建的二维地图的对应关系在所述三维地图中显示所述机器人的位姿。
根据本公开的一实施例,所述地图匹配模块包括:地图选取模块,用于获取所述三维地图的有效部分;二维投影模块,用于将所述三维地图的有效部分投影至水平面,获得二维投影地图;二维地图匹配模块,用于匹配所述二维投影地图与所述机器人构建的二维地图,获得所述二维投影地图与所述机器人构建的二维地图的对应关系。
根据本公开的一实施例,所述机器人构建二维地图过程中获取的障碍物数据为三维数据;所述地图选取模块还用于:根据所述三维数据确定所述机器人的扫描范围;确定位于所述机器人的扫描范围内的三维地图为所述三维地图的有效部分。
根据本公开的一实施例，所述二维地图匹配模块，还用于：采用重叠面积最大化的方法将所述二维投影地图与所述机器人构建的二维地图进行匹配；获得使所述二维投影地图与所述机器人构建的二维地图的重叠面积最大时所述二维投影地图与所述机器人构建的二维地图的对应关系。
根据本公开的一实施例,所述地图匹配模块还包括:第一障碍物标记获取模块,用于获取指定障碍物在所述三维地图中的标记;第二障碍物标记获取模块,用于获取所述指定障碍物在所述机器人构建的二维地图中的标记;标记物匹配模块,用于匹配所述指定障碍物在所述三维地图中的标记和所述指定障碍物在所述机器人构建的二维地图中的标记,获得所述三维地图与所述机器人构建的二维地图的对应关系。
根据本公开的一实施例,所述指定障碍物包括多个,且多个所述指定障碍物不位于一条直线上。
根据本公开的一实施例,所述指定障碍物包括充电桩、墙壁。
根据本公开的一实施例,所述装置还包括:三维地图修正模块,用于在所述机器人构建二维地图时根据所述机器人构建的二维地图对所述三维地图进行修改。
根据本公开的一实施例,所述三维显示模块,还用于将所述机器人的三维模型与所述三维地图等比例进行显示。
根据本公开的再一方面,提供一种设备,包括:存储器、处理器及存储在所述存储器中并可在所述处理器中运行的可执行指令,所述处理器执行所述可执行指令时实现如上述任一种方法。
根据本公开的再一方面,提供一种计算机可读存储介质,其上存储有计算机可执行指令,所述可执行指令被处理器执行时实现如上述任一种方法。
本公开的实施例提供的机器人三维地图位姿显示方法,通过获得机器人所在空间的三维地图和机器人构建的二维地图,将三维地图与机器人构建的二维地图进行匹配获得三维地图与机器人构建的二维地图对应关系,然后根据机器人在机器人构建的二维地图上的位姿和三维地图与机器人构建的二维地图的对应关系在三维地图中显示机器人的位姿,从而可实现获取机器人所在区域中障碍物的高度信息。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性的,并不能限制本公开。
附图说明
通过参照附图详细描述其示例实施例,本公开的上述和其它目标、特征及优点将变得更加显而易见。
图1示出本公开实施例中一种系统结构的示意图。
图2示出本公开实施例中一种机器人三维地图位姿显示方法的流程图。
图3A示出本公开实施例中一种机器人所在空间的三维地图示意图。
图3B示出本公开实施例中另一种机器人所在空间的三维地图示意图。
图3C示出本公开实施例中再一种机器人所在空间的三维地图示意图。
图4A示出本公开实施例中一种用于绘制三维地图的AR设备的示意图。
图4B示出本公开实施例中另一种用于绘制三维地图的AR设备的示意图。
图5示出本公开实施例中一种匹配三维地图与二维网格地图的方法的流程图。
图6示出本公开实施例中一种匹配二维投影地图与二维网格地图的过程示意图。
图7示出了图5中所示的步骤S502在一实施例中的处理过程示意图。
图8是根据一示例性实施例示出的另一种匹配三维地图与二维网格地图的方法的流程图。
图9A是根据一示例性实施例示出的一种机器人工作方法的流程图。
图9B是根据一示例性实施例示出的一种清洁机器人架构示意图。
图10示出本公开实施例中一种机器人三维地图位姿显示装置的框图。
图11示出本公开实施例中另一种机器人三维地图位姿显示装置的框图。
图12示出本公开实施例中一种机器人三维地图位姿显示系统的框图。
图13示出本公开实施例中一种电子设备的结构示意图。
具体实施方式
现在将参考附图更全面地描述示例实施例。然而,示例实施例能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些实施例使得本公开将更加全面和完整,并将示例实施例的构思全面地传达给本领域的技术人员。附图仅为本公开的示意性图解,并非一定是按比例绘制。图中相同的附图标记表示相同或类似的部分,因而将省略对它们的重复描述。
此外,所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施例中。在下面的描述中,提供许多具体细节从而给出对本公开的实施例的充分理解。然而,本领域技术人员将意识到,可以实践本公开的技术方案而省略所述特定细节中的一个或更多,或者可以采用其它的方法、装置、步骤等。在其它情况下,不详细示出或描述公知结构、方法、装置、实现或者操作以避免喧宾夺主而使得本公开的各方面变得模糊。
在本公开的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。符号“/”一般表示前后关联对象是一种“或”的关系。
在本公开中,除非另有明确的规定和限定,“连接”等术语应做广义理解,例如,可以是电连接或可以互相通讯;可以是直接相连,也可以通过中间媒介间接相连。对于本领域的普通技术人员而言,可以根据具体情况理解上述术语在本公开中的具体含义。
如上所述,因相关技术中机器人所提供给用户的即时地图为二维地图,用户通过此种地图仅可知道机器人所在区域的平面状态,无法获取机器人所在区域中障碍物的高度信息。因此,本公开提供了一种机器人三维地图位姿显示方法,通过获得机器人所在空间的三维地图和机器人构建的二维网格地图,将三维地图与二维网格地图进行匹配获得对应关系,然后根据机器人在二维网格地图上的位姿和三维地图与二维网格地图的对应关系在三维地图中显示机器人的位姿,从而可实现获取机器人所在区域中障碍物的高度信息。为了便于理解,下面首先对本公开涉及到的几个名词进行解释。
智能机器人,是一个集环境感知、动态决策与规划、行为控制与执行等多功能于一体的 综合系统,集中了传感器技术、信息处理、电子工程、计算机工程、自动化控制工程以及人工智能等多学科的研究成果,代表机电一体化的最高成就,是目前科学技术发展最活跃的领域之一。智能机器人以移动方式可分为固定式机器人和移动机器人,固定机器人如机器手臂等等,在工业上广泛应用。移动机器人按移动方式可分为:轮式移动机器人、步行移动机器人、履带式移动机器人、爬行机器人、蠕动式机器人和游动式机器人等类型;按工作环境可分为:室内移动机器人和室外移动机器人;按控制体系结构可分为:功能式(水平式)结构机器人、行为式(垂直式)结构机器人和混合式机器人;按功能和用途可分为:医疗机器人、军用机器人、助残机器人、清洁机器人等等。随着机器人性能不断地完善,移动机器人的应用范围大为扩展,不仅在工业、农业、医疗、服务等行业中得到广泛的应用,而且在城市安全、国防和空间探测领域等有害与危险场合得到很好的应用。
移动机器人是一种由传感器、遥控操作器和自动控制的移动载体组成的机器人系统。移动机器人具有移动功能,在代替人从事危险、恶劣(如辐射、有毒等)环境下作业和人所不及的(如宇宙空间、水下等)环境作业方面,比固定式机器人有更大的机动性、灵活性。
增强现实(Augmented Reality,AR),是一种将真实世界信息和虚拟世界信息“无缝”集成的新技术,是把原本在现实世界的一定时间空间范围内很难体验到的实体信息(视觉信息,声音,味道,触觉等),通过电脑等科学技术,模拟仿真后再叠加,将虚拟的信息应用到真实世界,被人类感官所感知,从而实现真实的环境和虚拟的物体实时地叠加到了同一个画面或空间同时存在,达到超越现实的感官体验。
图1示出了可以应用本公开的机器人三维地图位姿显示方法或机器人三维地图位姿显示装置的示例性系统架构10。
如图1所示,系统架构10可以包括终端设备102、网络104、服务器106和数据库108。终端设备102可以是具有显示屏并且支持输入、输出的各种电子设备,包括但不限于智能手机、平板电脑、膝上型便携计算机、台式计算机、AR头戴设备、移动机器人(如清洁机器人、向导机器人)等等。网络104用以在终端设备102和服务器106之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。服务器106可以是提供各种服务的服务器或服务器集群等,例如对清洁机器人102发送的感知工作环境的数据进行地图建模提供支持的后台处理服务器。数据库108可以是按照数据结构来组织、存储和管理数据的仓库,包括但不限于关系型数据库、云数据库等等,例如存储机器人工作区域的地图数据的数据库。
用户可以使用终端设备102通过网络104与服务器106和数据库108交互，以接收或发送数据等。服务器106也可通过网络104从数据库108接收数据或向数据库108发送数据等。例如服务器106从数据库108获取清洁机器人102的工作区域3D地图后，可为其规划工作路线，并将规划完成的工作路线信息通过网络104发送至AR头戴设备102，用户可通过AR头戴设备查看清洁机器人在AR地图中的模拟工作路线。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。
图2是根据一示例性实施例示出的一种机器人三维地图位姿显示方法的流程图。如图2所示的方法例如可以应用于上述系统的服务器端,也可以应用于上述系统的终端设备。
参考图2,本公开实施例提供的方法20可以包括以下步骤。
在步骤S202中，获得机器人所在空间的三维地图。可在机器人工作区域所在空间设置3D地图绘制设备，可采用3D地图绘制设备上的增强现实应用（如基于ARKit、ARCore等等平台的AR应用）进行绘制，先获取空间的密集点云，然后基于点云生成空间的3D网格（Mesh）模型，再基于生成的Mesh模型，完成模型贴图，即将纹理通过坐标映射到空间的3D Mesh模型上，完成3D地图绘制。如图3A至图3C所示，图3A至图3C示出了绘制的三种机器人所在空间的3D地图示意图，图3A为一个机器人所在空间的3D地图的全景角度示意，图3B为一个机器人所在空间的3D地图的局部放大图示意，图3C为一个机器人所在空间的3D地图的俯视图示意。
ARKit是AR应用框架，通过将移动设备的相机、动作传感器以及图形处理器等硬件与深度感应、人造光渲染等算法结合，可以让开发者在移动设备上实现AR应用（例如3D地图绘制应用）。ARCore可实现的功能与ARKit类似。
可以通过具有3D地图绘制功能的设备进行3D地图的绘制,在一些实施例中,例如可采用包括AR绘制设备的AR设备进行3D地图绘制,AR绘制设备可为具备3D地图绘制功能的移动端设备,包括但不限于:iPAD Pro,CANVAS iPAD等具备深度成像能力的设备,如图4A至图4B所示,图4A为AR绘制设备CANVAS iPAD的示意图,图4B为AR绘制设备iPAD Pro的示意图。AR设备绘制完成的3D地图可发送给3D显示装置,该3D显示装置可为AR显示装置,该AR显示装置可包括在AR设备中。AR显示装置可为具备显示3D地图及预设模型的移动端设备,包括但不限于:iPAD、iPhone等具备显示功能的设备。AR绘制设备与AR显示设备可为同一设备。
在步骤S204中,获得机器人构建的二维地图。机器人在移动过程中可实时构建二维地图建图,例如清洁机器人通过安装的LDS在清扫工作过程中,测量与自身与工作区域中的各种障碍物之间的距离,从而绘制所在区域的即时地图。可采用多种激光雷达(Lidar)同步定位与建图(Simultaneous Localization and Mapping,SLAM)方法绘制即时地图,例如可采用基于优化(解最小二乘问题)的HectorSLAM算法、基于粒子滤波的Gmapping算法、Cartographer等,其中Cartographer是Google开源的一个机器人操作系统(Robot Operating System,ROS)支持的2D和3D SLAM库。
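为便于理解占据栅格形式的二维地图的基本构建思路，下面给出一个极简的Python代码草图（仅为示意性假设，并非上述HectorSLAM、Gmapping或Cartographer等算法的实现；其中的分辨率、坐标原点等参数均为示例取值）：

```python
import numpy as np

def update_occupancy_grid(grid, robot_xy, robot_theta, ranges, angles,
                          resolution=0.05, origin=(0.0, 0.0)):
    """用一帧激光测距数据更新占据栅格地图（示意用）。

    grid: 二维numpy数组，0表示未知/空闲，1表示障碍物。
    robot_xy, robot_theta: 机器人在地图坐标系下的位姿（假设已知）。
    ranges, angles: 激光雷达各束的距离与角度（机器人坐标系）。
    resolution: 栅格分辨率（米/格），origin: 地图原点（均为示例取值）。
    """
    for r, a in zip(ranges, angles):
        if not np.isfinite(r):
            continue
        # 将激光束末端点从机器人坐标系变换到地图坐标系
        x = robot_xy[0] + r * np.cos(robot_theta + a)
        y = robot_xy[1] + r * np.sin(robot_theta + a)
        # 换算为栅格索引并标记为障碍物
        i = int((y - origin[1]) / resolution)
        j = int((x - origin[0]) / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid
```

实际的SLAM实现还需沿激光束更新空闲栅格，并与机器人位姿估计同步迭代，此处从略。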
在本公开实施例所提供的方法中,步骤S202所涉及的获取机器人所在空间的三维地图可以预先执行,并将所获取地图存储于服务器端或终端设备等。另外,该步骤也可与步骤S204所涉及的获得机器人构建的二维地图同时进行。本公开实施例对此不作限制。
在步骤S206中,将三维地图与机器人构建的二维地图进行匹配,获得三维地图与机器人构建的二维地图的对应关系。通过三维地图与机器人构建的二维地图的对应关系即可将二维地图中的对象对应显示在三维地图中。
在一些实施例中，例如，可将三维地图向地平面方向投影，生成二维投影地图，获得该二维投影地图与三维地图的对应关系，然后将二维投影地图与机器人所构建的二维地图进行匹配，例如可通过优化算法进行迭代，获得两个地图交叠面积最大时两个地图的对应关系，再关联三维地图与二维地图，获得三维地图与二维地图的对应关系。具体实施方式可参照图5，此处不予详述。
在另一些实施例中,例如,可根据三维地图中的标记物位置与二维地图中对应的标记物将三维地图与二维地图进行匹配,具体实施方式可参照图8,此处不予详述。
在步骤S208中,获得机器人在机器人构建的二维地图上的位姿。可通过SLAM方法,利用机器人自身设置的多种内部传感器(例如里程仪、罗盘、加速度计等等),通过多种传感信息的融合进行位姿估计,并同时使用外部传感器(例如激光测距仪、视觉设备等等)感知环境,通过对环境特征的比较对自身位姿进行校正,以获得机器人在二维地图上的位姿。
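作为示意，下面给出位姿估计中基于里程计速度的预测（航位推算）一步的Python代码草图（仅为说明内部传感器递推的思路，其中函数名与参数均为假设，实际方案还需结合激光、视觉等外部观测进行校正）：

```python
import numpy as np

def dead_reckoning_update(pose, v, omega, dt):
    """基于里程计速度的二维位姿递推（示意用，仅为传感器融合中的预测一步）。

    pose: (x, y, theta)；v: 线速度；omega: 角速度；dt: 时间步长。
    实际实现中通常再以EKF或粒子滤波等方式用外部观测校正该预测，此处从略。
    """
    x, y, theta = pose
    return (x + v * dt * np.cos(theta),
            y + v * dt * np.sin(theta),
            theta + omega * dt)
```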
在本公开实施例所提供的方法中,步骤S208中所涉及的获取机器人在二维地图上的位姿可在机器人构建二维地图的同时执行,也可在构建二维地图完成后,获取机器人在二维地图上的位姿,本公开实施例对此不作限制。
在步骤S210中,根据机器人在机器人构建的二维地图上的位姿和三维地图与机器人构建的二维地图的对应关系在三维地图中显示机器人的位姿。获得了三维地图与二维地图的对应关系,可将机器人在二维地图上的位姿对应到三维地图中,并在显示时将机器人的三维模型与三维地图等比例进行组合显示,可在AR显示设备上直观观察到机器人在三维地图中实时位姿。
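下面给出将机器人在二维地图上的位姿按匹配所得对应关系换算到三维地图坐标系的Python代码草图（示意性假设：对应关系以平面旋转、平移和比例参数表示，机器人模型放置在地面高度）：

```python
import numpy as np

def pose_2d_to_3d(pose_2d, yaw_offset, t_xy, z_height=0.0, scale=1.0):
    """将机器人在二维地图上的位姿映射到三维地图坐标系（示意用）。

    pose_2d: (x, y, theta)，机器人在二维地图中的位姿。
    yaw_offset, t_xy, scale: 二维地图到三维地图水平面的旋转、平移与比例，
    即地图匹配得到的对应关系（此处以参数形式给出，均为假设）。
    z_height: 机器人三维模型的放置高度（假设为地面0）。
    返回: (x, y, z, yaw)，用于在三维地图中摆放机器人三维模型。
    """
    x, y, theta = pose_2d
    c, s = np.cos(yaw_offset), np.sin(yaw_offset)
    x3 = scale * (c * x - s * y) + t_xy[0]
    y3 = scale * (s * x + c * y) + t_xy[1]
    return x3, y3, z_height, theta + yaw_offset
```

得到的(x, y, z, yaw)即可用于在AR显示设备中按等比例摆放机器人的三维模型。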
根据本公开实施例提供的机器人三维地图位姿显示方法,通过获得机器人所在空间的三维地图和机器人构建的二维地图,将三维地图与二维地图进行匹配获得对应关系,然后根据机器人在二维地图上的位姿和三维地图与二维地图的对应关系在三维地图中显示机器人的位姿,从而可实现获取机器人所在区域中障碍物的高度信息。
图5是根据一示例性实施例示出的一种匹配三维地图与二维地图的方法的流程图。如图5所示的方法例如可以应用于上述系统的服务器端,也可以应用于上述系统的终端设备。
参考图5,本公开实施例提供的方法50可以包括以下步骤。
在步骤S502中，获取三维地图的有效部分。本公开实施例所提供的方案中，机器人通过安装LDS实现对周围障碍物的测量以及二维地图的绘制。机器人上所设置的LDS在垂直方向可能具有一定的视场角，因此，LDS所扫描到的范围可以同样为一个三维区域。通常情况下，由于存在机器人自身较矮、LDS在垂直方向视场角有限等问题，LDS无法扫描机器人所处环境中较高的位置，此时基于LDS生成的二维地图仅包括机器人所处环境中接近地面的部分。
在一些实施例中,例如,可根据LDS设置高度以及LDS在垂直方向的视场角从三维地图中选取相应高度的部分,获得三维地图与机器人LDS所扫描范围的对应部分。
在另一些实施例中，例如，可先将三维地图的点云中位于机器人扫描范围外的点云滤去，然后以机器人扫描范围内的点云及其与三维地图的对应关系作为部分三维地图，具体实施方式可参照图7，此处不予详述。
在步骤S504中,将三维地图的有效部分投影至水平面,获得二维投影地图。可将部分三维地图的点云根据与三维地图的对应关系在地平面方向投影,获得二维投影地图。
在步骤S506中,匹配二维投影地图与机器人构建的二维地图,获得二维投影地图与机器人所构建二维地图的对应关系。在本公开实施例中,可采用重叠面积最大化的方法将二维投影地图与二维网格地图进行匹配,即可将二维投影地图表示在二维网格地图的坐标系中(或将二维网格地图表示在二维投影地图的坐标系中),进行旋转、平移等操作的同时计算两个地图的交叠面积,不断进行迭代以获得使二维投影地图与二维网格地图的重叠面积最大时二维投影地图与二维网格地图的对应关系。图6示出一种匹配二维投影地图与机器人所构建二维地图的过程示意图,如图6所示,从上至下则为二维投影地图602与二维地图604的重叠面积逐渐变大,直至接近重合的过程,二维投影地图602与二维地图604重合即匹配完成,可获得旋转、平移参数。
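作为对重叠面积最大化匹配的示意，下面给出一个穷举搜索的Python代码草图（仅为说明思路的假设性示例：假定两张地图分辨率一致且数组尺寸相同，旋转角与平移量的搜索范围、步长均为示例取值）：

```python
import numpy as np
from scipy import ndimage

def match_by_max_overlap(proj_map, grid_map, angles_deg=range(0, 360, 2),
                         shifts=range(-20, 21, 2)):
    """以重叠面积最大化为准则粗匹配两张二值地图（示意用的穷举搜索）。

    proj_map: 三维地图有效部分投影得到的二维投影地图（0/1数组）。
    grid_map: 机器人构建的二维栅格地图（0/1数组，假设与proj_map同形状、同分辨率）。
    返回: (best_angle, best_dx, best_dy)，即使重叠面积最大的旋转与平移参数。
    """
    best, best_params = -1, None
    for ang in angles_deg:
        rotated = ndimage.rotate(proj_map, ang, reshape=False, order=0)
        for dx in shifts:
            for dy in shifts:
                # np.roll为循环平移，仅作粗略示意
                shifted = np.roll(np.roll(rotated, dy, axis=0), dx, axis=1)
                overlap = np.logical_and(shifted > 0, grid_map > 0).sum()
                if overlap > best:
                    best, best_params = overlap, (ang, dx, dy)
    return best_params
```

实际实现中通常以由粗到精的迭代优化代替穷举，以降低计算量。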
在步骤S508中,根据二维投影地图与二维地图的对应关系确定三维地图与机器人所构建二维地图的对应关系。由于二维投影地图由三维地图获得,因此可获得二维投影地图与三维地图的对应关系,再根据二维投影地图与二维地图的对应关系即可确定三维地图与二维地图的对应关系。
根据本公开实施例提供的方法,通过将三维地图中位于机器人扫描范围内的部分三维地图在地平面方向进行投影获得二维投影地图,然后将二维投影地图与机器人所构建二维地图进行匹配,再根据二维投影地图与机器人所构建二维地图的对应关系确定三维地图与二维地图的对应关系,可有效防止三维地图中机器人扫描范围外的物体影像投影到二维投影地图中而导致的匹配效果差的情况发生。
图7示出了图5中所示的步骤S502在一实施例中的处理过程示意图。三维地图包括机器人所在空间的三维点云,部分三维地图包括机器人LDS扫描范围内空间的三维点云。如图7所示,本公开实施例中,上述步骤S502可以进一步包括以下步骤。
在步骤S5022中，从三维点云中选取不超出机器人LDS扫描范围的三维点云。可基于三维点云在地图坐标系（如地球坐标系）中垂直于地面的坐标轴上的坐标，选取高度不超出机器人LDS扫描范围的三维点云。例如清洁机器人LDS在垂直方向的扫描高度可为15cm、20cm或25cm等等。
在步骤S5024中，基于不超出机器人LDS扫描范围的三维点云获得部分三维地图。
根据本公开实施例提供的方法，在进行三维地图的3D-2D转换过程中，只保留机器人可扫描到的范围内的点云并将其投影至地平面，生成2D投影地图，可提高三维地图与机器人所构建二维地图匹配的精确性。
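下面给出按机器人LDS扫描高度筛选三维点云并投影到水平面生成二维投影地图的Python代码草图（其中扫描高度、栅格分辨率、地面高度z=0等均为示例性假设）：

```python
import numpy as np

def project_valid_part(points, scan_height=0.20, resolution=0.05):
    """取三维点云中不超出机器人LDS扫描高度的部分并投影到水平面（示意用）。

    points: N x 3 的三维地图点云，假设z轴垂直于地面且z=0为地面。
    scan_height: 机器人LDS在垂直方向可扫描到的高度（米，示例值）。
    返回: 二维投影地图（0/1栅格）及其原点，便于后续与机器人构建的二维地图匹配。
    """
    valid = points[(points[:, 2] >= 0.0) & (points[:, 2] <= scan_height)]
    if valid.size == 0:
        return np.zeros((1, 1), dtype=np.uint8), (0.0, 0.0)
    origin = valid[:, :2].min(axis=0)
    idx = np.floor((valid[:, :2] - origin) / resolution).astype(int)
    size = idx.max(axis=0) + 1              # (nx, ny)
    grid = np.zeros((size[1], size[0]), dtype=np.uint8)
    grid[idx[:, 1], idx[:, 0]] = 1          # 行对应y，列对应x
    return grid, tuple(origin)
```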
图8是根据一示例性实施例示出的另一种匹配三维地图与二维地图的方法的流程图。如图8所示的方法例如可以应用于上述系统的服务器端,也可以应用于上述系统的终端设备。
参考图8,本公开实施例提供的方法80可以包括步骤S802至步骤S806。
在步骤S802中，获取指定障碍物在三维地图中的标记。可通过AR扫描设备在拍摄时自动识别指定障碍物，获取其在三维地图中的标记，指定障碍物例如可为充电桩、桌、椅、墙壁等等。也可对墙壁平面进行识别。可由AR扫描设备将标记物拍摄后，联网通过服务器上的物体识别算法进行识别，也可将清扫环境中标记物照片预存在本地，通过本地设备上的物体识别算法进行匹配识别。
在步骤S804中，获取指定障碍物在机器人构建的二维地图中的标记。可在机器人上设置拍摄设备，通过联网或本地算法识别对应的障碍物后，通过机器人在2D地图绘制过程中对指定障碍物进行标记。
在步骤S806中,匹配指定障碍物在三维地图中的标记和指定障碍物在机器人构建的二维地图中的标记,获得三维地图与机器人构建的二维地图的对应关系。若通过指定障碍物的标记进行匹配,则至少需要三个不在同一条直线上的指定障碍物(将其中两个标记物相连为一条标记线),或一条指定障碍物的线(如墙壁竖直平面在地平面上的投影等等)与一个指定障碍物,先通过标记线计算旋转参数,再通过对标记物特征点进行关联计算平移参数,获得将三维地图与二维地图匹配的旋转参数和平移参数。
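下面给出由同名标记物求解二维刚体变换（旋转与平移参数）的Python代码草图（仅为示意，采用常见的二维Procrustes/Kabsch解法；与正文一致，假设提供的标记点不全位于一条直线上）：

```python
import numpy as np

def align_by_landmarks(pts_3d_plane, pts_2d_map):
    """由同名标记物（如充电桩、墙角）求二维旋转与平移（示意用）。

    pts_3d_plane: K x 2，指定障碍物标记在三维地图投影平面上的坐标。
    pts_2d_map:   K x 2，同一批标记在机器人构建的二维地图中的坐标。
    返回旋转矩阵R与平移t，使得 R @ pts_2d_map[i] + t ≈ pts_3d_plane[i]。
    """
    a = np.asarray(pts_2d_map, dtype=float)
    b = np.asarray(pts_3d_plane, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    h = (a - ca).T @ (b - cb)            # 2x2 互协方差矩阵
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:             # 避免出现镜像解
        vt[1, :] *= -1
        r = vt.T @ u.T
    t = cb - r @ ca
    return r, t
```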
根据本公开实施例提供的方法,通过在建图过程中识别的指定障碍物对三维地图与二维地图进行匹配,一定程度提高地图匹配的准确性。
图9A是根据一示例性实施例示出的一种机器人工作方法的流程图。如图9A所示的方法例如可以应用于上述系统的服务器端,也可以应用于上述系统的终端设备。
参考图9A,本公开实施例提供的方法90可以包括以下步骤。
在步骤S902中,获得机器人所在空间的三维地图。AR扫描设备绘制机器人所在空间的三维地图后,可分享给AR显示设备。
在步骤S904中,获得机器人的实时扫描结果。机器人可基于SLAM方法在工作过程中扫描周围环境,获得障碍物等物体的信息。图9B示出了一种扫地机架构,该扫地机上设有用于感知环境的前置和边置ToF(Time of Flight)传感器模组,ToF传感器是通过给目标连续发送光脉冲,然后用传感器接收从物体返回的光,通过探测光脉冲的飞行(往返)时间来得到目标物距离。
在步骤S906中，根据机器人的实时扫描结果修改三维地图。机器人可将实时扫描结果发送给AR显示设备，AR显示设备根据机器人的实时扫描结果对三维地图进行补充或修正。
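下面给出将机器人实时扫描得到的障碍物点云并入三维地图点云的Python代码草图（仅为示意性的简单合并与体素去重，体素尺寸为假设取值，实际的补充或修正策略可更复杂）：

```python
import numpy as np

def merge_scan_into_map(map_points, new_points, voxel=0.05):
    """将机器人实时扫描到的障碍物点云并入三维地图点云并做简单去重（示意用）。

    map_points, new_points: 均为 N x 3 的点云数组；voxel: 体素边长（米，假设值）。
    """
    merged = np.vstack([map_points, new_points])
    # 体素栅格化去重：同一体素内只保留一个点
    keys = np.floor(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]
```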
根据本公开实施例提供的方法,可通过根据机器人的实时扫描结果补充或修正AR扫描设备生成的3D地图,提高了显示的3D地图的准确性。
图10是根据一示例性实施例示出的一种机器人三维地图位姿显示装置的框图。如图10所示的装置例如可以应用于上述系统的服务器端,也可以应用于上述系统的终端设备。
参考图10,本公开实施例提供的装置100可以包括三维地图获取模块1002、构建地图获取模块1004、地图匹配模块1006、位姿获取模块1008和三维显示模块1010。
三维地图获取模块1002可用于获得机器人所在空间的三维地图。
构建地图获取模块1004可用于获得机器人构建的二维地图。
地图匹配模块1006可用于将三维地图与机器人构建的二维地图进行匹配,获得三维地图与机器人构建的二维地图的对应关系。
位姿获取模块1008可用于获得机器人在机器人构建的二维地图上的位姿。
三维显示模块1010可用于根据机器人在机器人构建的二维地图上的位姿和三维地图与机器人构建的二维地图的对应关系在三维地图中显示机器人的位姿。
图11是根据一示例性实施例示出的一种机器人三维地图位姿显示装置的框图。如图11所示的装置例如可以应用于上述系统的服务器端,也可以应用于上述系统的终端设备。
参考图11，本公开实施例提供的装置110可以包括三维地图获取模块1102、构建地图获取模块1104、地图匹配模块1106、位姿获取模块1108、三维显示模块1110和三维地图修正模块1112；其中，地图匹配模块1106可以包括地图选取模块11062、二维投影模块11064、二维地图匹配模块11066、三维地图匹配模块11068、第一障碍物标记获取模块110692、第二障碍物标记获取模块110694和标记物匹配模块110696。
三维地图获取模块1102可用于获得机器人所在空间的三维地图。
构建地图获取模块1104可用于获得机器人构建的二维地图。机器人构建二维地图过程中获取的障碍物数据为三维数据。
地图匹配模块1106可用于将三维地图与机器人构建的二维地图进行匹配,获得三维地图与机器人构建的二维地图的对应关系。
地图选取模块11062可用于获取三维地图的有效部分。
地图选取模块11062还可用于:根据三维数据确定机器人的扫描范围;确定位于机器人的扫描范围内的三维地图为三维地图的有效部分。
二维投影模块11064可用于将三维地图的有效部分投影至水平面,获得二维投影地图。
二维地图匹配模块11066可用于匹配二维投影地图与机器人构建的二维地图,获得二维投影地图与机器人构建的二维地图的对应关系。
二维地图匹配模块11066还可用于采用重叠面积最大化的方法将二维投影地图与机器人构建的二维地图进行匹配;获得使二维投影地图与机器人构建的二维地图的重叠面积最大时二维投影地图与机器人构建的二维地图的对应关系。
三维地图匹配模块11068可用于根据二维投影地图与二维网格地图的对应关系确定三维地图与二维网格地图的对应关系。
第一障碍物标记获取模块110692可用于获取指定障碍物在三维地图中的标记。指定障碍物包括多个,且多个指定障碍物不位于一条直线上。指定障碍物包括充电桩、墙壁。
第二障碍物标记获取模块110694可用于获取指定障碍物在机器人构建的二维地图中的标记。
标记物匹配模块110696可用于匹配指定障碍物在三维地图中的标记和指定障碍物在机器人构建的二维地图中的标记,获得三维地图与机器人构建的二维地图的对应关系。
位姿获取模块1108可用于获得机器人在机器人构建的二维地图上的位姿。
三维显示模块1110可用于根据机器人在机器人构建的二维地图上的位姿和三维地图与机器人构建的二维地图的对应关系在三维地图中显示机器人的位姿。
三维显示模块1110还可用于将机器人的三维模型与三维地图等比例进行显示。
三维地图修正模块1112可用于在机器人构建二维地图时根据机器人构建的二维地图对三维地图进行修改。
图12是根据一示例性实施例示出的一种机器人三维地图位姿显示系统的框图。如图12所示,设有深度传感器12021的清洁机器人1202可与增强现实扫描设备1204连接,清洁机器人1202可通过增强现实扫描设备1204实时获取所在环境的3D地图;清洁机器人1202可也将在首次清扫或用户重置时,获得增强现实扫描设备1204绘制的3D地图并进行保存。清洁机器人1202可与增强现实显示设备1206连接,将基于深度传感器12021生成的2D网格地图及其位姿实时发送给增强现实显示设备1206,也可将观测到的障碍物的点云上传至增强现实显示设备1206,以便增强现实显示设备1206对3D地图进行更新;增强现实扫描设备1204也可与增强现实显示设备1206连接,增强现实扫描设备1204在绘制完成3D地图后可分享给增强现实显示设备1206,以便增强现实显示设备1206将清洁机器人1202生成的2D网格地图与3D地图进行匹配,获得2D网格地图与3D地图的对应关系并进行保存。当清洁机器人1202再次对该区域进行清扫时,清洁机器人1202实时上传位姿信息到增强现实显示设备1206,增强现实显示设备1206根据保存的2D网格地图与3D地图的对应关系,在3D地图中实时显示清洁机器人1202的位姿。
图13示出本公开实施例中一种电子设备的结构示意图。需要说明的是,图13示出的设备仅以计算机系统为示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图13所示,设备1300包括中央处理单元(CPU)1301,其可以根据存储在只读存储器(ROM)1302中的程序或者从存储部分1308加载到随机访问存储器(RAM)1303中的程序而执行各种适当的动作和处理。在RAM 1303中,还存储有设备1300操作所需的各种程序和数据。CPU 1301、ROM 1302以及RAM 1303通过总线1304彼此相连。输入/输出(I/O)接口1305也连接至总线1304。
以下部件连接至I/O接口1305:包括键盘、鼠标等的输入部分1306;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分1307;包括硬盘等的存储部分1308;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分1309。通信部分1309经由诸如因特网的网络执行通信处理。驱动器1310也根据需要连接至I/O接口1305。可拆卸介质1311,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1310上,以便于从其上读出的计算机程序根据需要被安装入存储部分1308。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分1309从网络上被下载和安装,和/或从可拆卸介质1311被安装。在该计算机程序被中央处理单元(CPU)1301执行时,执行本公开的系统中限定的上述功能。
需要说明的是,本公开实施例所示的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限 于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、RF等等,或者上述的任意合适的组合。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图或流程图中的每个方框、以及框图或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的模块可以通过软件的方式实现,也可以通过硬件的方式来实现。所描述的模块也可以设置在处理器中,例如,可以描述为:一种处理器包括三维地图获取模块、构建地图获取模块、地图匹配模块、位姿获取模块和三维显示模块。其中,这些模块的名称在某种情况下并不构成对该模块本身的限定,例如,三维地图获取模块还可以被描述为“向所连接的AR绘制设备获取三维地图的模块”。
作为另一方面,本公开还提供了一种计算机可读介质,该计算机可读介质可以是上述实施例中描述的设备中所包含的;也可以是单独存在,而未装配入该设备中。上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被一个该设备执行时,使得该设备包括:获得机器人所在空间的三维地图;获得机器人构建的二维地图;将三维地图与机器人构建的二维地图进行匹配,获得三维地图与机器人构建的二维地图的对应关系;获得机器人在机器人构建的二维地图上的位姿;根据机器人在机器人构建的二维地图上的位姿和三维地图与机器人构建的二维地图的对应关系在三维地图中显示机器人的位姿。
以上示出和描述了本公开的示例性实施例。应可理解的是，本公开不限于这里描述的详细结构、设置方式或实现方法；相反，本公开意图涵盖包含在所附权利要求的精神和范围内的各种修改和等效设置。

Claims (12)

  1. 一种机器人三维地图位姿显示方法,其中,包括:
    获得所述机器人所在空间的三维地图;
    获得所述机器人构建的二维地图;
    将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系;
    获得所述机器人在所述机器人构建的二维地图上的位姿;
    根据所述机器人在所述机器人构建的二维地图上的位姿和所述三维地图与所述机器人构建的二维地图的对应关系在所述三维地图中显示所述机器人的位姿。
  2. 根据权利要求1所述的方法,其中,将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系,包括:
    获取所述三维地图的有效部分;
    将所述三维地图的有效部分投影至水平面,获得二维投影地图;
    匹配所述二维投影地图与所述机器人构建的二维地图,获得所述二维投影地图与所述机器人构建的二维地图的对应关系。
  3. 根据权利要求2所述的方法,其中,所述机器人构建二维地图过程中获取的障碍物数据为三维数据;
    所述获取所述三维地图的有效部分,包括:
    根据所述三维数据确定所述机器人的扫描范围;
    确定位于所述机器人的扫描范围内的三维地图为所述三维地图的有效部分。
  4. 根据权利要求2所述的方法,其中,匹配所述二维投影地图与所述机器人构建的二维地图,获得所述二维投影地图与所述机器人构建的二维地图的对应关系,包括:
    采用重叠面积最大化的方法将所述二维投影地图与所述机器人构建的二维地图进行匹配;
    获得使所述二维投影地图与所述机器人构建的二维地图的重叠面积最大时所述二维投影地图与所述机器人构建的二维地图的对应关系。
  5. 根据权利要求1所述的方法,其中,将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系,包括:
    获取指定障碍物在所述三维地图中的标记;
    获取所述指定障碍物在所述机器人构建的二维地图中的标记;
    匹配所述指定障碍物在所述三维地图中的标记和所述指定障碍物在所述机器人构建的二维地图中的标记,获得所述三维地图与所述机器人构建的二维地图的对应关系。
  6. 根据权利要求5所述的方法,其中,所述指定障碍物包括多个,且多个所述指定障碍物不位于一条直线上。
  7. 根据权利要求6所述的方法,其中,所述指定障碍物包括充电桩、墙壁。
  8. 根据权利要求1至7中任意一项所述的方法,其中,还包括:
    在所述机器人构建二维地图时根据所述机器人构建的二维地图对所述三维地图进行修改。
  9. 根据权利要求1至7中任意一项所述的方法,其中,还包括:
    将所述机器人的三维模型与所述三维地图等比例进行显示。
  10. 一种机器人三维地图位姿显示装置,其中,包括:
    三维地图获取模块,用于获得所述机器人所在空间的三维地图;
    构建地图获取模块,用于获得所述机器人构建的二维地图;
    地图匹配模块,用于将所述三维地图与所述机器人构建的二维地图进行匹配,获得所述三维地图与所述机器人构建的二维地图的对应关系;
    位姿获取模块,用于获得所述机器人在所述机器人构建的二维地图上的位姿;
    三维显示模块,用于根据所述机器人在所述机器人构建的二维地图上的位姿和所述三维地图与所述机器人构建的二维地图的对应关系在所述三维地图中显示所述机器人的位姿。
  11. 一种设备,包括:存储器、处理器及存储在所述存储器中并可在所述处理器中运行的可执行指令,其中,所述处理器执行所述可执行指令时实现如权利要求1-9任一项所述的方法。
  12. 一种计算机可读存储介质,其上存储有计算机可执行指令,其中,所述可执行指令被处理器执行时实现如权利要求1-9任一项所述的方法。
PCT/CN2021/134005 2020-12-14 2021-11-29 机器人三维地图位姿显示方法、装置、设备及存储介质 WO2022127572A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21905501.9A EP4261789A1 (en) 2020-12-14 2021-11-29 Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium
US18/257,346 US20240012425A1 (en) 2020-12-14 2021-11-29 Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011471918.6A CN114612622A (zh) 2020-12-14 2020-12-14 机器人三维地图位姿显示方法、装置、设备及存储介质
CN202011471918.6 2020-12-14

Publications (2)

Publication Number Publication Date
WO2022127572A1 true WO2022127572A1 (zh) 2022-06-23
WO2022127572A9 WO2022127572A9 (zh) 2022-07-21

Family

ID=81857069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134005 WO2022127572A1 (zh) 2020-12-14 2021-11-29 机器人三维地图位姿显示方法、装置、设备及存储介质

Country Status (4)

Country Link
US (1) US20240012425A1 (zh)
EP (1) EP4261789A1 (zh)
CN (1) CN114612622A (zh)
WO (1) WO2022127572A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096713A (zh) * 2011-01-29 2011-06-15 广州都市圈网络科技有限公司 一种基于网格化二三维地图匹配方法及系统
CN105045263A (zh) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 一种基于Kinect的机器人自定位方法
CN107145153A (zh) * 2017-07-03 2017-09-08 北京海风智能科技有限责任公司 一种基于ros的服务机器人及其室内导航方法
WO2019055281A2 (en) * 2017-09-14 2019-03-21 United Parcel Service Of America, Inc. AUTOMATIC GUIDANCE FOR DISPLACING AUTONOMOUS VEHICLES INSIDE AN INSTALLATION
CN108269305A (zh) * 2017-12-27 2018-07-10 武汉网信安全技术股份有限公司 一种二维、三维数据联动展示方法和系统
CN111552764A (zh) * 2020-05-15 2020-08-18 弗徕威智能机器人科技(上海)有限公司 一种车位检测方法、装置、系统及机器人和存储介质
CN111637890A (zh) * 2020-07-15 2020-09-08 济南浪潮高新科技投资发展有限公司 一种结合终端增强现实技术的移动机器人导航方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758157A (zh) * 2023-06-14 2023-09-15 深圳市华赛睿飞智能科技有限公司 一种无人机室内三维空间测绘方法、系统及存储介质
CN116758157B (zh) * 2023-06-14 2024-01-30 深圳市华赛睿飞智能科技有限公司 一种无人机室内三维空间测绘方法、系统及存储介质
CN117351140A (zh) * 2023-09-15 2024-01-05 中国科学院自动化研究所 融合全景相机和激光雷达的三维重建方法、装置及设备
CN117351140B (zh) * 2023-09-15 2024-04-05 中国科学院自动化研究所 融合全景相机和激光雷达的三维重建方法、装置及设备

Also Published As

Publication number Publication date
WO2022127572A9 (zh) 2022-07-21
US20240012425A1 (en) 2024-01-11
CN114612622A (zh) 2022-06-10
EP4261789A1 (en) 2023-10-18

Similar Documents

Publication Publication Date Title
US10896497B2 (en) Inconsistency detecting system, mixed-reality system, program, and inconsistency detecting method
US11499832B1 (en) Method for constructing a map while performing work
US10176589B2 (en) Method and system for completing point clouds using planar segments
EP2671210B1 (en) Three-dimensional environment reconstruction
CN102622762B (zh) 使用深度图的实时相机跟踪
TWI467494B (zh) 使用深度圖進行移動式攝影機定位
CN111492403A (zh) 用于生成高清晰度地图的激光雷达到相机校准
WO2022127572A1 (zh) 机器人三维地图位姿显示方法、装置、设备及存储介质
US20160253814A1 (en) Photogrammetric methods and devices related thereto
US10157478B2 (en) Enabling use of three-dimensional locations of features with two-dimensional images
CN110741395B (zh) 现场命令视觉
CN105164726A (zh) 用于3d重构的相机姿态估计
Xiao et al. 3D point cloud registration based on planar surfaces
US11209277B2 (en) Systems and methods for electronic mapping and localization within a facility
TW202238449A (zh) 室內定位系統及室內定位方法
CN116349222A (zh) 利用集成图像帧渲染基于深度的三维模型
WO2023088127A1 (zh) 室内导航方法、服务器、装置和终端
Tiozzo Fasiolo et al. Combining LiDAR SLAM and deep learning-based people detection for autonomous indoor mapping in a crowded environment
Klaser et al. Simulation of an autonomous vehicle with a vision-based navigation system in unstructured terrains using OctoMap
CN111429576A (zh) 信息显示方法、电子设备和计算机可读介质
Cai et al. Heads-up lidar imaging with sensor fusion
WO2022071315A1 (ja) 自律移動体制御装置、自律移動体制御方法及びプログラム
WO2024095744A1 (ja) 情報処理装置、情報処理方法、およびプログラム
US20230326074A1 (en) Using cloud computing to improve accuracy of pose tracking
US20240069203A1 (en) Global optimization methods for mobile coordinate scanners

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905501

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18257346

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021905501

Country of ref document: EP

Effective date: 20230714