CN117948959A - Map generation method, electronic device, storage medium, and computer program product - Google Patents

Map generation method, electronic device, storage medium, and computer program product

Info

Publication number
CN117948959A
Authority
CN
China
Prior art keywords
map
map data
positioning
data frame
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310702847.3A
Other languages
Chinese (zh)
Inventor
马云飞
沈毅
高江峰
崔志航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Robot Technology Co Ltd
Original Assignee
Beijing Kuangshi Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Robot Technology Co Ltd filed Critical Beijing Kuangshi Robot Technology Co Ltd
Priority to CN202310702847.3A
Publication of CN117948959A

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiments of the present application provide a map generation method, an electronic device, a storage medium, and a computer program product. The method includes the following steps: after the positioning of a mobile robot is lost, detecting whether the positioning has been recovered; if so, acquiring multiple map data frames corresponding to the mobile robot, as well as a first positioning map used by the mobile robot during the positioning loss and a second positioning map used by the mobile robot before the positioning loss and after the positioning recovery; optimizing the robot poses corresponding to the map data frames based on the first positioning map, the second positioning map, and the map data frames, to obtain an optimized pose set for the mobile robot; and aligning the map data frames based on the optimized pose set and fusing the aligned frames together to generate a smooth map, where the smooth map is used to update the second positioning map. This scheme ensures the accuracy of the generated map.

Description

Map generation method, electronic device, storage medium, and computer program product
Technical Field
The present application relates to the field of robotics, and more particularly, to a map generation method, an electronic device, a storage medium, and a computer program product.
Background
With the advancement of technology, mobile robotics has developed rapidly. In many factories and warehouses, transportation tasks are being shifted from people to robots in order to improve working efficiency and address labor shortages, and Simultaneous Localization and Mapping (SLAM) robots are favored for their ease of deployment.
However, the environment of a factory or warehouse is not static. For example, devices within the scene may be added, removed, or relocated, changing the environment so that the mobile robot can no longer recognize it, which reduces positioning accuracy. When the robot cannot recognize the environment, maps are usually stitched together manually. Although this approach can handle large-scale map changes, it requires maintenance personnel to stitch the map repeatedly, and the poses at the stitching seams are not smooth, which seriously affects the accuracy of the map.
Disclosure of Invention
The present application has been made in view of the above-described problems. The application provides a map generation method, an electronic device, a storage medium, and a computer program product.
According to an aspect of the present application, there is provided a map generation method including: after the positioning of a mobile robot is lost, detecting whether the positioning is recovered; if so, acquiring multiple map data frames corresponding to the mobile robot, as well as a first positioning map used by the mobile robot during the positioning loss and a second positioning map used by the mobile robot before the positioning loss and after the positioning recovery, where the multiple map data frames include the map data frames corresponding to the positioning loss period and the map data frames adjacent to those frames that were acquired before the positioning loss and/or after the positioning recovery; optimizing the robot poses corresponding to the map data frames based on the first positioning map, the second positioning map, and the map data frames, to obtain an optimized pose set for the mobile robot; and aligning each of the map data frames based on the optimized pose set and fusing the aligned frames together to generate a smooth map, where the smooth map is used to update the second positioning map.
Illustratively, after fusing the aligned map data frames together to generate the smooth map, the method further comprises: for each of the map data frames, determining, on the smooth map, a map update range corresponding to the optimized robot pose for that frame in the optimized pose set; fusing the first map information on the smooth map that lies within the map update ranges corresponding to the map data frames onto the second positioning map to obtain a fused map; displaying the fused map on a display interface; and replacing the second positioning map with the fused map based on an update instruction input by the user, or retaining the second positioning map and deleting the fused map based on a delete instruction input by the user.
Illustratively, after fusing the aligned map data frames together to generate the smooth map, the method further comprises: displaying map association information of the fused map on the display interface, where the map association information includes one or more of: identification information of the mobile robot, trajectory information of the mobile robot during the positioning loss, the acquisition time of the fused map, and the map pose corresponding to the smooth map. The trajectory information is represented by the robot poses corresponding to the map data frames of the positioning loss period, and the map pose, which represents the relative position of the smooth map within the second positioning map, is determined based on the robot poses corresponding to the map data frames.
Illustratively, fusing the map information on the smooth map within the map update ranges corresponding to the map data frames onto the second positioning map to obtain the fused map includes: replacing the map information in the areas of the second positioning map corresponding to the map update ranges with the first map information within those ranges on the smooth map; or, based on a first transparency and a second transparency, superimposing the first map information within the map update ranges on the smooth map over the second map information in the corresponding areas of the second positioning map, where the first transparency is the transparency corresponding to the first map information and the second transparency is the transparency corresponding to the second map information.
Illustratively, the robot pose includes the position of the mobile robot and the orientation of the mobile robot. Determining, on the smooth map, the map update range corresponding to the optimized robot pose for a map data frame in the optimized pose set includes: casting rays in multiple directions from a target map point on the smooth map, where the target map point is the optimized position of the mobile robot for that frame, and the angle between each of the two outermost rays and the optimized orientation of the mobile robot is less than or equal to 90 degrees; determining the intersection points of the rays with obstacles; and determining a target area containing the intersection points as the map update range for that frame, where the distance from each map point in the target area to an intersection point is less than or equal to a target distance threshold.
Illustratively, optimizing the robot poses corresponding to the map data frames based on the first positioning map, the second positioning map, and the map data frames includes: acquiring the robot pose corresponding to each map data frame; calculating an overall constraint error for the robot poses based on a first matching error between each map data frame of the positioning loss period and the first positioning map, a second matching error between each map data frame from before the positioning loss and/or after the positioning recovery and the second positioning map, and a third matching error for every two adjacent map data frames, where the third matching error is an error based on the odometer of the mobile robot; and optimizing the robot poses corresponding to the map data frames so as to minimize the overall constraint error.
Illustratively, the first positioning map is constructed in real time based on at least one first map data frame, where the first map data frames are the map data frames corresponding to the positioning loss period, and the first positioning map corresponding to any first map data frame is the one constructed from that frame and the first map data frames before it. For the map data frames of the positioning loss period, acquiring the robot pose corresponding to each frame includes: for any current first map data frame during the positioning loss, acquiring a first robot pose for the current frame determined by the odometer of the mobile robot; and/or matching the current first map data frame against the first positioning map corresponding to the previous first map data frame to obtain a fourth matching error, and determining a second robot pose for the current frame based on that error; and determining the robot pose corresponding to the current first map data frame based on the first robot pose and/or the second robot pose.
Illustratively, matching the current first map data frame against the first positioning map corresponding to the previous first map data frame to obtain the fourth matching error, and determining the second robot pose based on it, includes: when the first positioning map is a laser grid map, matching the current frame against the first positioning map of the previous frame using correlative scan matching and least-squares-based optimization to obtain the fourth matching error, and determining the second robot pose from the robot pose of the previous frame and the fourth matching error; and/or, when the first positioning map is a visual map, tracking the positions of the map points to be matched in the current frame using an optical flow method to obtain the fourth matching error between the current frame and the first positioning map of the previous frame, and determining the second robot pose from the robot pose of the previous frame and the fourth matching error.
Illustratively, determining the second robot pose for the current first map data frame based on the robot pose of the previous frame and the fourth matching error includes: determining an initial robot pose for the current frame from the robot pose of the previous frame and the fourth matching error; based on the initial robot pose, projecting the current frame onto the first positioning maps corresponding to one or more previous first map data frames to obtain a reprojection error for each, where the previous frames are first map data frames located before the current frame and the reprojection errors correspond to them one by one; and optimizing the initial robot pose based on the reprojection errors to obtain the second robot pose for the current frame.
Illustratively, the method further comprises: acquiring matching information between each second map data frame acquired by the mobile robot in real time and the second positioning map, where the second map data frames include the multiple map data frames and the matching information indicates whether the corresponding second map data frame matches the second positioning map; if the matching information for a first target number of consecutive second map data frames indicates no match, determining that the positioning of the mobile robot is lost; if, during the positioning loss, the matching information for a second target number of consecutive second map data frames indicates a match, determining that the positioning of the mobile robot is recovered; and/or, if repositioning information input by the user is received during the positioning loss, determining that the positioning is recovered, where the repositioning information includes the current robot pose of the mobile robot.
Illustratively, the matching information between any second map data frame acquired by the mobile robot and the second positioning map is calculated as follows: when the second positioning map is a laser grid map, projecting each laser point of the second map data frame onto the second positioning map based on the robot pose corresponding to that frame, and determining the matching information based on the reprojection errors between the projected laser points and the laser points on the second positioning map; and/or, when the second positioning map is a visual map, projecting each feature point of the second map data frame onto the second positioning map based on the robot pose corresponding to that frame, and determining the matching information based on the reprojection errors between the projected feature points and the feature points on the second positioning map.
According to another aspect of the present application, there is provided an electronic device comprising a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, carry out the map generation method described above.
According to another aspect of the present application, there is provided a storage medium on which program instructions are stored, wherein the program instructions, when run, execute the map generation method described above.
According to another aspect of the present application, there is provided a computer program product comprising a computer program, wherein the computer program, when run, performs the map generation method described above.
According to the map generation method, electronic device, storage medium, and computer program product of the present application, the map data frames collected before the positioning of the mobile robot is lost and/or after it is recovered are used, together with the map data frames collected during the positioning loss, for joint pose optimization, so that the robot poses between the map data frames vary more smoothly, and an accurate smooth map is generated from the smoothed robot poses. This scheme can automatically generate an accurate smooth map from the map data frames before the positioning loss and/or after the positioning recovery without manual stitching, so that the second positioning map can be updated in time and the accuracy of the updated map is guaranteed.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of its embodiments, as illustrated in the accompanying drawings. The accompanying drawings provide a further understanding of the embodiments, are incorporated in and constitute a part of this specification, and serve to illustrate the application together with its embodiments without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 shows a schematic block diagram of an example electronic device for implementing a map generation method according to an embodiment of the application;
FIG. 2 shows a schematic flow chart of a map generation method according to one embodiment of the application;
FIG. 3 illustrates a schematic diagram of the matching between map data frames and a positioning map when optimizing the pose of a robot according to one embodiment of the present application;
FIG. 4 shows a schematic diagram of a fused map according to one embodiment of the application;
Fig. 5 shows a schematic block diagram of a map generating apparatus according to an embodiment of the present application; and
Fig. 6 shows a schematic block diagram of a map generation system according to one embodiment of the application.
Detailed Description
With the development of intelligent technologies such as the Internet of Things, artificial intelligence, and big data, the demand for transforming and upgrading the traditional logistics industry with these technologies has grown stronger, and intelligent logistics has become a research hotspot in the field. Intelligent logistics uses Internet of Things devices and technologies such as artificial intelligence, big data, various information sensors, radio-frequency identification (RFID), and the Global Positioning System (GPS), is widely applied to basic links such as transportation, warehousing, distribution, packaging, loading and unloading of materials, and information services, and realizes intelligent analysis and decision-making, automated operation, and efficient management of the material-handling process. Internet of Things technologies, including sensing devices, RFID, laser and infrared scanning, and infrared sensing and identification, can effectively connect the materials in logistics to the network, monitor them in real time, and sense environmental data such as the humidity and temperature of a warehouse to safeguard the storage environment. Big data technology allows all data in logistics to be perceived and collected, uploaded to the data layer of an information platform, and filtered, mined, and analyzed, ultimately providing accurate data support for business processes (such as transportation, warehousing, storage and retrieval, picking, packaging, sorting, dispatch, inventory, and distribution). The applications of artificial intelligence in logistics can be broadly divided into two types: 1) AI-enabled intelligent equipment, such as unmanned trucks, AGVs, AMRs, forklifts, shuttles, stackers, unmanned delivery vehicles, drones, service robots, robotic arms, and intelligent terminals, which replaces part of the manual labor; 2) software systems driven by technologies or algorithms such as computer vision, machine learning, and operations optimization, such as transportation equipment management systems, warehouse management systems, equipment scheduling systems, and order distribution systems, which improve manual efficiency. With the research and advancement of intelligent logistics, the technology has found applications in numerous fields, such as retail and e-commerce, tobacco, medicine, industrial manufacturing, footwear, textiles, and food.
In order to make the objects, technical solutions, and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, and not all, of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art, based on the embodiments described in the present application and without inventive effort, shall fall within the scope of the application.
The embodiments of the present application provide a map generation method, an electronic device, a storage medium, and a computer program product. The map generation method provided by the embodiments of the present application ensures the smoothness of the robot poses at map stitching locations and improves the accuracy of the map. The map generation technique according to the embodiments of the present application can be applied to any field involving map generation.
First, an example electronic apparatus 100 for implementing a map generation method according to an embodiment of the present application is described with reference to fig. 1.
As shown in fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104. Optionally, the electronic device 100 may also include an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form such as a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), or a microprocessor. The processor 102 may be one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Specific Integrated Circuit (ASIC), or another form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may execute them to implement the client functions and/or other desired functions in the embodiments of the present application described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like. Alternatively, the input device 106 and the output device 108 may be integrated together and implemented using the same interaction device (e.g., a touch screen).
The image capture device 110 may capture images and store the captured images in the storage device 104 for use by other components. The image capture device 110 may be a standalone camera, the camera of a mobile terminal, or the like. It should be understood that the image capture device 110 is merely an example, and the electronic device 100 may not include it. In that case, another device with image capture capability may capture images and transmit them to the electronic device 100.
By way of example, an example electronic device for implementing the map generation method according to an embodiment of the present application may be a personal computer, a terminal device, an attendance machine, a panel machine, a camera, a mobile robot, or a remote server. Terminal devices include, but are not limited to: tablet computers, cell phones, PDAs (personal digital assistants), touch-screen all-in-one machines, wearable devices, and the like.
Next, a map generation method according to an embodiment of the present application will be described with reference to fig. 2. Fig. 2 shows a schematic flow chart of a map generation method 200 according to one embodiment of the application. As shown in fig. 2, the map generation method 200 includes the following steps S210, S220, S230, and S240.
In step S210, after the mobile robot has lost its positioning, it is detected whether the positioning of the mobile robot is recovered.
In step S220, if the positioning of the mobile robot is recovered, multiple map data frames corresponding to the mobile robot are acquired, together with a first positioning map used by the mobile robot during the positioning loss and a second positioning map used by the mobile robot before the positioning loss and after the positioning recovery. The multiple map data frames include the map data frames corresponding to the positioning loss period and the map data frames, acquired before the positioning loss and/or after the positioning recovery, that are adjacent to the frames of the positioning loss period.
In step S230, the robot poses corresponding to the map data frames are optimized based on the first positioning map, the second positioning map, and the map data frames, yielding an optimized pose set for the mobile robot.
In step S240, the map data frames are aligned based on the optimized pose set, and the aligned frames are fused together to generate a smooth map. The smooth map is used to update the second positioning map.
It can be appreciated that a loss of positioning means that, when the current environment changes, the mobile robot cannot recognize it and therefore cannot determine its pose on the initial base map. The initial base map is the map used before the positioning of the mobile robot is lost and after it is recovered (i.e., the second positioning map described herein). Whether the positioning is recovered can be judged from the comparison between an image acquired by the mobile robot and the initial base map. In one embodiment, while the positioning is lost, the image currently acquired by the mobile robot may be compared with the image of the corresponding area in the initial base map; if they match, the positioning of the mobile robot is determined to be recovered. A minimal sketch of this consecutive-match bookkeeping is shown below.
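The following is a hypothetical Python sketch of such loss/recovery detection. The thresholds and the per-frame match signal are assumptions for illustration; the application only describes declaring loss after a number of consecutive mismatches and recovery after a number of consecutive matches.

```python
class LocalizationMonitor:
    """Tracks consecutive frame-vs-base-map match results (illustrative sketch)."""

    LOST_AFTER = 5       # assumed: consecutive mismatches before declaring loss
    RECOVERED_AFTER = 5  # assumed: consecutive matches before declaring recovery

    def __init__(self):
        self.lost = False
        self._mismatches = 0
        self._matches = 0

    def update(self, frame_matches_base_map: bool) -> bool:
        """Feed one per-frame match result; returns True while positioning is lost."""
        if frame_matches_base_map:
            self._matches += 1
            self._mismatches = 0
            if self.lost and self._matches >= self.RECOVERED_AFTER:
                self.lost = False  # positioning recovered
        else:
            self._mismatches += 1
            self._matches = 0
            if not self.lost and self._mismatches >= self.LOST_AFTER:
                self.lost = True   # positioning lost
        return self.lost
```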
Illustratively, in step S220, acquiring the multiple map data frames corresponding to the mobile robot may include acquiring a first set of data frames and acquiring a second and/or a third set of data frames. The first set of data frames consists of the map data frames corresponding to the positioning loss period; the second set consists of the map data frames acquired before the positioning loss that are adjacent to those of the positioning loss period; and the third set consists of the map data frames acquired after the positioning recovery that are adjacent to those of the positioning loss period. For convenience, the map data frames of the positioning loss period are hereafter simply called the first set of data frames; preferably, each frame in the first set may be a key frame selected from the real-time map data frames collected by the mobile robot during the positioning loss, making it a first set of key frames. Likewise, the frames acquired before the positioning loss and adjacent to those of the loss period are called the second set of data frames; preferably, each may be a key frame selected from the real-time frames collected before the positioning loss, making it a second set of key frames. The frames acquired after the positioning recovery and adjacent to those of the loss period are called the third set of data frames; preferably, each may be a key frame selected from the real-time frames collected after the positioning recovery, making it a third set of key frames. The meaning of a key frame will be understood by those skilled in the art and is not detailed here.
A map data frame as described herein is a data frame acquired by the mobile robot that contains information about the surroundings, for example a data frame acquired by a lidar in laser SLAM, or by an image capture device in visual SLAM. Alternatively, the map data frames may be selected from the real-time map data frames (real-time frames) acquired by the mobile robot. Real-time frames may be acquired according to a first predetermined strategy, which may be a first predetermined distance, a first predetermined time, or the like. In one embodiment, the real-time frames are acquired based on a first predetermined distance, i.e., the mobile robot acquires one real-time frame every time it moves the first predetermined distance; for example, with a first predetermined distance of 0.5 m, one real-time frame is acquired every 0.5 m of movement. Alternatively, the real-time frames may be acquired based on a first predetermined time, i.e., one real-time frame every first predetermined time; for example, with a first predetermined time of 30 s, one real-time frame is automatically collected every 30 s. In one embodiment, every real-time frame may be taken as a map data frame, i.e., each map data frame is an ordinary real-time frame. In another embodiment, a portion of the real-time frames may be selected as map data frames according to a second predetermined strategy, i.e., the selected frames satisfy that strategy, which may be a second predetermined distance, a second predetermined time, or the like. A sketch of such an acquisition policy follows.
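This is a small sketch, under stated assumptions, of the first predetermined strategy described above: acquire a real-time frame every 0.5 m of travel or every 30 s. The class and method names are illustrative, not from the application.

```python
import math

class FrameAcquisitionPolicy:
    """Decides when to acquire a real-time frame (illustrative sketch)."""

    def __init__(self, min_distance=0.5, min_interval=30.0):
        self.min_distance = min_distance  # first predetermined distance, meters
        self.min_interval = min_interval  # first predetermined time, seconds
        self._last_xy = None
        self._last_t = None

    def should_acquire(self, x, y, t):
        """Return True if the robot has moved far enough or waited long enough."""
        if self._last_xy is None:
            self._last_xy, self._last_t = (x, y), t
            return True
        moved = math.hypot(x - self._last_xy[0], y - self._last_xy[1])
        if moved >= self.min_distance or (t - self._last_t) >= self.min_interval:
            self._last_xy, self._last_t = (x, y), t
            return True
        return False
```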
For example, the first positioning map may be a map built in real time, during the positioning loss, from each map data frame acquired in that period; it may be referred to as an active subgraph (active map). The specific construction method is described in detail below.
It is understood that the number of map data frames acquired within the predetermined period may be set according to the required accuracy of the first positioning map. As indicated above, the first positioning map may be generated from the map data frames of the positioning loss period: the more map data frames acquired within the predetermined period, the higher the accuracy of the first positioning map; the fewer frames, the lower the accuracy.
Illustratively, the second positioning map may be the initial base map described above, i.e., the map used before the positioning of the mobile robot is lost and after it is recovered, for example a map constructed before the positioning loss.
Illustratively, before optimizing the robot poses corresponding to the map data frames to obtain the optimized pose set, the method may further include determining the first set of data frames based on all real-time frames acquired by the mobile robot during the positioning loss. In one embodiment, all real-time frames may be taken directly as the first set of data frames. For example, if 6 real-time frames I1, I2, I3, I4, I5, and I6 are acquired consecutively during the positioning loss, all of them may be taken as the first set of data frames. Alternatively, a portion of the real-time frames may be selected as the first set, by fixed-interval sampling or random sampling; in the above example, I1, I3, I4, and I5 might be selected. Optionally, the selection may also pick real-time frames according to a third predetermined strategy, i.e., the selected frames satisfy that strategy, which may be a third predetermined distance or a third predetermined time. The second and third sets of data frames are determined similarly and are not described again.
Illustratively, the multiple map data frames may include the first set of data frames and the second set of data frames. In the embodiment above where the first set includes I1, I2, I3, I4, I5, and I6, the second set may include at least one consecutive map data frame adjacent to I1 acquired by the mobile robot before the positioning loss. It will be appreciated that the number of map data frames in the second set may be chosen according to the desired pose-optimization effect: the more frames in the second set, the better the optimization and the smoother the map generated from the data frames, and vice versa.
Alternatively, the multiple map data frames may include the first set and the third set. In the same example, the third set may include at least one consecutive map data frame adjacent to I6 acquired by the mobile robot after the positioning recovery. As with the second set, the number of frames in the third set may also be chosen according to the desired optimization effect.
Still alternatively, the multiple map data frames may include the first, second, and third sets of data frames together. Their acquisition has been described in detail above and, for brevity, is not repeated here. In this embodiment, the map data frames from both before the positioning loss and after the positioning recovery are used jointly to optimize the robot poses of the frames acquired during the loss, which gives the best optimization effect, so this combination is preferred.
It will be appreciated that the robot pose for each map data frame in the second and third sets is known. When optimizing the robot poses corresponding to the frames in the first set, the poses of the frames in the second and third sets may also be refined according to the optimization result, so that the robot pose varies more smoothly from frame to frame.
FIG. 3 illustrates a schematic diagram of the matching between map data frames and the positioning maps when optimizing the robot poses according to one embodiment of the present application. As shown in fig. 3, the first set of data frames includes I1, I2, I3, and I4; the second set includes Ia1 and Ia2; and the third set includes Ib1 and Ib2. In this embodiment, once the positioning of the mobile robot is recovered, the robot poses corresponding to Ia1, Ia2, I1, I2, I3, I4, Ib1, and Ib2 may all be optimized, which may be referred to as joint optimization.
For example, after the optimized pose set is obtained, the smooth map may be generated from each map data frame and its corresponding optimized robot pose (the optimized pose for short). Any existing or future map generation method may be used. In one embodiment, when the mobile robot is a laser-SLAM-based robot, a Bayesian update scheme may be employed to generate the smooth map, for example in the manner of the Cartographer mapping library. In another embodiment, when the mobile robot is a visual-SLAM-based robot, the smooth map may be generated from the feature points of each map data frame, for example using VINS-style feature point management strategies. A sketch of the Bayesian grid update follows.
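As a rough illustration of the Bayesian update mentioned above, the sketch below folds aligned map data frames into an occupancy grid in log-odds form, the representation used by grid-based SLAM systems such as Cartographer. The sensor-model increments and function names are assumptions.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4  # assumed log-odds increments for hit / miss cells

def fuse_frame(log_odds: np.ndarray, hit_cells, free_cells) -> np.ndarray:
    """Fold one aligned map data frame into a grid of log-odds values."""
    for r, c in free_cells:
        log_odds[r, c] += L_FREE  # cells the beam passed through
    for r, c in hit_cells:
        log_odds[r, c] += L_OCC   # cells where the beam ended on an obstacle
    return log_odds

def occupancy_probability(log_odds: np.ndarray) -> np.ndarray:
    """Convert log-odds l back to p(occupied) = 1 - 1 / (1 + exp(l))."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))
```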
According to the technical scheme above, the map data frames collected before the positioning of the mobile robot is lost and/or after it is recovered are used, together with the frames collected during the positioning loss, for joint pose optimization, so that the robot poses between the map data frames vary more smoothly, and an accurate smooth map is generated from the smoothed poses. This scheme can automatically generate an accurate smooth map from the map data frames before the positioning loss and/or after the positioning recovery without manual stitching, so that the second positioning map can be updated in time and the accuracy of the updated map is guaranteed.
Illustratively, after fusing the aligned map data frames together to generate a smooth map, the method 200 may further include the following steps S250, S260, S270, and S280.
In step S250, for each map data frame, the map update range corresponding to the optimized robot pose for that frame in the optimized pose set is determined on the smooth map.
In step S260, the first map information on the smooth map within the map update ranges corresponding to the map data frames is fused onto the second positioning map, yielding the fused map.
In step S270, the fused map is displayed on the display interface.
In step S280, the second positioning map is replaced with the fused map based on the update instruction input by the user, or the second positioning map is retained and the fused map is deleted based on the delete instruction input by the user.
It will be appreciated that the smooth map is generated from the optimized pose set and from map data frames that include at least one of the second and third sets of data frames. The smooth map therefore covers not only the period of the positioning loss but also part of the map from before the loss and/or after the recovery. Determining the map update range on the smooth map means determining, for each map data frame, the part of the smooth map that needs to be updated.
As described above, the trajectory information of the mobile robot can be obtained from the optimized poses corresponding to the acquired map data frames; the optimized pose of each frame is one track point on the trajectory. The map update range may be determined on the smooth map from these track points. In one embodiment, whether a map point falls within the map update range may be decided from the distance between that point on the smooth map and the track point.
After the map update range is determined on the smooth map, the map information within that range can be read from the smooth map, and the map information within each update range is then fused onto the second positioning map, while the other parts of the second positioning map keep their original map information.
In one embodiment, the smooth map is a laser grid map, and the map information may be the probability values of the grids, each representing how likely that grid is occupied. For a given grid, the occupied state is denoted 1 and the unoccupied state 0; p(0) is the probability that the grid is unoccupied, p(1) the probability that it is occupied, and p(0) = 1 - p(1). The map information within the map update range on the smooth map may be determined from the probability value of each grid. Alternatively, the smooth map is a visual map and the map information consists of the pixel values of the visual map; the map information within the update range may then be determined from the pixel values of the visual map within that range in the map data frames.
Alternatively, the smooth map may be fused onto the second positioning map either by directly superimposing it on the corresponding area or by replacing the map information of that area with it. The specific fusion manners are described in detail below and, for brevity, are not repeated here.
Alternatively, the smooth map may be a laser grid map or a visual map. In one embodiment, the smooth map is a laser grid map and can be fused with the second positioning map directly. Alternatively, the smooth map is a visual map; because a visual map is three-dimensional, the smooth map is first compressed into a plane based on its map pose and then fused with the second positioning map. A sketch of this plane compression follows.
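A minimal sketch of the plane-compression step, assuming the map pose is given as a rotation R and translation t that align the visual map with the base-map frame; the variable names and grid resolution are illustrative.

```python
import numpy as np

def compress_to_plane(points_3d: np.ndarray, R: np.ndarray, t: np.ndarray,
                      resolution: float = 0.05) -> np.ndarray:
    """Project (N, 3) visual-map points into 2D grid cells of the base map."""
    aligned = points_3d @ R.T + t  # express points in the base-map frame
    xy = aligned[:, :2]            # drop height: compress onto the plane
    return np.floor(xy / resolution).astype(int)
```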
Alternatively, the display interface may be a user interaction interface on a display device. It will be appreciated that after obtaining the fused map, the user may choose to keep or discard it depending on the actual situation. If the fused map meets expectations, an update instruction can be input to replace the initial second positioning map with the fused map. If it does not, a delete instruction can be input to delete the fused map and/or the smooth map and retain the initial second positioning map. The expectation may concern the clarity of the fused map, or may be judged from the actual condition of the corresponding area; for example, if the environment of the area is still changing, i.e., the map of that area is still in flux, this round's fused map may be discarded. The user may input update and/or delete instructions via an input device, such as the input device 106 described above, which may be included in, or communicatively connected with, the device performing the map generation method 200.
In this scheme, whether to update the second positioning map is decided by the user's instruction, which avoids the error accumulation and accuracy degradation that long-term autonomous map updating would cause, and thus improves the accuracy of the updated map.
Illustratively, after fusing the aligned map data frames together to generate the smooth map, the method 200 may further include displaying the map association information of the fused map on the display interface. The map association information includes one or more of: the identification information of the mobile robot, the trajectory information of the mobile robot during the positioning loss, the acquisition time of the fused map, and the map pose corresponding to the smooth map. The trajectory information is represented by the robot poses corresponding to the map data frames of the positioning loss period; the map pose represents the relative position of the smooth map within the second positioning map and is determined from the robot poses corresponding to the map data frames.
Alternatively, the fused map and/or its map association information may be output to the display interface via an output device (e.g., the output device 108 described above), which may be included in, or communicatively coupled with, the device performing the map generation method 200. Outputting the fused map together with its map association information may mean uploading both to a device such as an upper computer, through which the user can view them.
For example, the identification Information (ID) of the mobile robot may be a number of the mobile robot, which may be represented by an arabic numeral. For example, the mobile robot may be set to No.1, i.e., the identification information of the mobile robot is No. 1. In one embodiment, the map-related information may include identification information of the mobile robot, an acquisition time of the fused map (i.e., a time of updating the second positioning map), and a map pose corresponding to the smooth map. For example, if the identification information of the mobile robot is No.1 and the acquisition time of the fused map is 11:08, the association information of the mobile robot may be represented as "No. 1+11:08".
For example, the trajectory information of the mobile robot during the positioning loss may be obtained from the optimized poses of the map data frames acquired in that period (i.e., the first set of data frames). From the optimized pose of each such frame, a coordinate point of the mobile robot on the second positioning map is determined; these coordinate points are the track points of the mobile robot, and connecting them gives the trajectory information for the positioning loss period.
The map pose is determined from the robot poses corresponding to the map data frames and represents the relative position of the smooth map within the second positioning map; its determination will be understood by those skilled in the art and is not repeated here.
In this scheme, displaying the fused map on the display interface together with the corresponding map association information makes it convenient for the user to locate the smooth map within the fused map and to judge, from the different pieces of information, whether to keep the fused map.
Illustratively, in step S260, fusing the first map information on the smooth map within the map update ranges corresponding to the map data frames onto the second positioning map to obtain the fused map may include: replacing the second map information in the areas of the second positioning map corresponding to the map update ranges with the first map information within those ranges on the smooth map, yielding the fused map.
It will be appreciated that every map point in the smooth map and in the second positioning map has corresponding map information; the same holds for other maps. In one embodiment, the area of the second positioning map corresponding to the map update range may first be determined and its second map information deleted; the first map information is then written into that area, yielding the fused map. A minimal sketch follows.
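A minimal sketch of fusion by direct replacement, assuming both maps are same-shaped arrays and a boolean mask marks the map update range:

```python
import numpy as np

def fuse_by_replacement(base_map: np.ndarray, smooth_map: np.ndarray,
                        mask: np.ndarray) -> np.ndarray:
    """Overwrite second map information with first map information inside the mask."""
    fused = base_map.copy()
    fused[mask] = smooth_map[mask]  # map points outside the mask keep their info
    return fused
```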
This embodiment fuses the maps by direct replacement, which helps ensure the accuracy of the updated map.
Illustratively, in step S260, fusing the first map information on the smooth map within the map update ranges onto the second positioning map may instead include: based on a first transparency and a second transparency, superimposing the first map information within the map update ranges on the smooth map over the second map information in the corresponding areas of the second positioning map, yielding the fused map, where the first transparency corresponds to the first map information and the second transparency corresponds to the second map information.
In one embodiment, a transparency of 50% may be used for the first map information within the map update ranges on the smooth map, and full opacity for the second map information in the corresponding areas of the second positioning map; superimposing the former on the latter yields the fused map. In a specific embodiment, the outline of the first map information may be marked with a distinctive color so that the user can easily see the area to be updated. A sketch of such an overlay follows.
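A sketch of the transparency-based overlay, under the assumption that transparency is realized as simple alpha blending of grid or pixel values:

```python
import numpy as np

def fuse_by_overlay(base_map: np.ndarray, smooth_map: np.ndarray,
                    mask: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend first map information (alpha) over second map information (1 - alpha)."""
    fused = base_map.astype(float).copy()
    fused[mask] = alpha * smooth_map[mask] + (1.0 - alpha) * base_map[mask]
    return fused
```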
In this embodiment, superimposing the first map information with a different transparency on the corresponding area of the second positioning map lets the user visually compare the second positioning map before and after updating, which improves the user experience.
Illustratively, the robot pose includes a position of the mobile robot and an orientation of the mobile robot.
In step S250, determining on the smooth map the map update range corresponding to the optimized robot pose for a map data frame in the optimized pose set may include the following steps S251, S252, and S253.
In step S251, rays are cast in multiple directions on the smooth map, each starting from the target map point. The target map point is the optimized position of the mobile robot corresponding to the map data frame, and the angle between each of the two outermost rays and the optimized orientation of the mobile robot for that frame is less than or equal to 90 degrees.
In step S252, the intersection points of the rays with obstacles are determined.
In step S253, a target area containing the intersection points is determined as the map update range corresponding to the map data frame, where the distance from each map point in the target area to an intersection point is less than or equal to the target distance threshold.
It will be appreciated that, for the optimized pose corresponding to a map data frame, the number of rays cast may be set according to the required accuracy: the more rays cast within the target angular range, the greater the accuracy, and vice versa.
FIG. 4 shows a schematic diagram of a fused map according to one embodiment of the application. As shown in fig. 4, the dashed box represents the map update range, the arrows represent rays, and point A can be regarded as one track point on the trajectory. As described above, the optimized pose corresponding to each map data frame is a track point, and point A is the optimized pose of one map data frame. From the map point corresponding to point A (i.e., the target map point), rays are cast in multiple directions until they meet an obstacle. The intersection points of the rays with the obstacle are determined, and the area containing those intersection points is taken as the target area. In the embodiment shown in fig. 4, the target area is the areas B and C, which form the map update range.
When casting rays, care should be taken that the angle between each of the two outermost rays and the orientation of the mobile robot at point A does not exceed 90°, so the angle between the two outermost rays does not exceed 180°. The target distance threshold may be set as needed: the larger the threshold, the larger the map update range determined from the intersection points and the higher the update accuracy; the smaller the threshold, the smaller the update range, the more likely map information that should be updated is missed, and the lower the update accuracy. A sketch of this ray-casting procedure follows.
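The sketch below illustrates steps S251 to S253 on a boolean occupancy grid: rays fan out no more than 90° to either side of the robot's orientation, stop at the first occupied cell, and every cell within the target distance threshold of a hit joins the update range. The grid conventions and parameter values are assumptions.

```python
import math
import numpy as np

def update_range(grid: np.ndarray, x0: float, y0: float, heading: float,
                 n_rays: int = 32, max_range: int = 200,
                 dist_threshold: int = 5) -> set:
    """Return the set of (row, col) cells forming the map update range."""
    hits = []
    for angle in np.linspace(heading - math.pi / 2, heading + math.pi / 2, n_rays):
        dx, dy = math.cos(angle), math.sin(angle)
        for step in range(1, max_range):
            r, c = int(y0 + dy * step), int(x0 + dx * step)
            if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]):
                break
            if grid[r, c]:            # occupied cell: ray-obstacle intersection
                hits.append((r, c))
                break
    region = set()
    for hr, hc in hits:               # target area around each intersection
        for r in range(hr - dist_threshold, hr + dist_threshold + 1):
            for c in range(hc - dist_threshold, hc + dist_threshold + 1):
                if math.hypot(r - hr, c - hc) <= dist_threshold:
                    region.add((r, c))
    return region
```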
According to the above technical scheme, by casting rays in a plurality of directions from the optimized pose corresponding to a map data frame, the map update range corresponding to that map data frame can be accurately determined.
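Purely as an illustration of steps S251 to S253, the following sketch casts rays over a 2-D occupancy grid in which 1 marks an obstacle. The grid units, ray count, and pose representation are assumptions made for the example, not part of the original scheme; the union of the marked target areas plays the role of the map update range for the frame.

```python
import math
import numpy as np

def map_update_range(grid, pose_xy, pose_theta, n_rays=32,
                     fov=math.pi, max_range_cells=200, dist_cells=10):
    """Cast rays from the target map point and mark cells near obstacle hits.

    grid: 2-D occupancy grid, 1 = obstacle, 0 = free.
    pose_xy: optimized robot position in grid cells (the target map point).
    fov: total angular spread, i.e. +/-90 degrees around the heading.
    dist_cells: target distance threshold, expressed in cells.
    """
    h, w = grid.shape
    mask = np.zeros_like(grid, dtype=bool)
    cx, cy = pose_xy
    for ang in np.linspace(pose_theta - fov / 2, pose_theta + fov / 2, n_rays):
        # March along the ray until an obstacle or the map border is met.
        for step in range(max_range_cells):
            x = int(round(cx + step * math.cos(ang)))
            y = int(round(cy + step * math.sin(ang)))
            if not (0 <= x < w and 0 <= y < h):
                break
            if grid[y, x] == 1:  # intersection of the ray with an obstacle
                y0, y1 = max(0, y - dist_cells), min(h, y + dist_cells + 1)
                x0, x1 = max(0, x - dist_cells), min(w, x + dist_cells + 1)
                mask[y0:y1, x0:x1] = True  # target area around the hit
                break
    return mask
```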
Illustratively, in step S230, optimizing the robot pose of the mobile robot corresponding to each of the multi-frame map data frames based on the first positioning map, the second positioning map and the multi-frame map data frames may include the following steps S231, S232 and S233.
In step S231, the robot pose corresponding to each of the multiple frames of map data frames is acquired.
In step S232, an overall constraint error corresponding to the robot poses is calculated based on a first matching error between each map data frame acquired during the positioning loss (i.e., the first group of data frames) and the first positioning map, a second matching error between each map data frame acquired before the positioning loss and/or after the positioning retrieval (i.e., the second and/or third group of data frames) and the second positioning map, and a third matching error corresponding to every two adjacent map data frames among the multiple frames. The third matching error is an error derived from the odometer of the mobile robot.
In step S233, the robot pose corresponding to each of the multiple frames of map data is optimized to minimize the overall constraint error.
The first match error may be obtained by means of a re-projection, for example. In one embodiment, the map data frame may be projected into the first positioning map based on the robot pose, and a reprojection error of the map data frame with a corresponding region in the first positioning map may be calculated. The reprojection error is the first match error.
Illustratively, the second match error may be obtained by means of a re-projection, similar to the first match error. The map data frame can be projected into the second positioning map based on the robot pose, the re-projection error of the map data frame and the corresponding area in the second positioning map is calculated, and the re-projection error is determined to be a second matching error.
Illustratively, the third matching error may be predefined according to the intrinsic error of the mobile robot's odometer; for example, it may be set to 1 cm of drift per 10 meters of travel.
For example, the pose optimization may use a nonlinear least squares method to optimize the robot pose corresponding to each map data frame so that the overall constraint error, i.e., the combination of the first matching error, the second matching error and the third matching error, is minimized. The robot poses corresponding to the map data frames when the overall constraint error is minimal are the optimized poses.
In the embodiment shown in fig. 3, the overall robot poses may be optimized with a nonlinear least squares method based on the first matching errors between I1, I2, I3, I4 and the first positioning map, the second matching errors between Ia1, Ia2, Ib1, Ib2 and the second positioning map, and the third matching errors among Ia1, Ia2, I1, I2, I3, I4, Ib1 and Ib2, such that the overall constraint error is minimized. The robot poses corresponding to Ia1, Ia2, I1, I2, I3, I4, Ib1 and Ib2 when the overall constraint error is minimal are the optimized poses corresponding to those frames.
According to the technical scheme, the robot pose of each map data frame with the minimum total error can be obtained by using the first matching error, the second matching error and the third matching error, so that the robot pose of each map data frame is smoother.
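The joint optimization can be illustrated with a toy pose graph solved by nonlinear least squares. In this sketch the first and second matching errors are reduced to simple pose anchors (a real system would derive residuals from scan or feature matching against the two positioning maps), and every number is made up:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy setup: 6 poses (x, y, theta); frames 2-3 fall in the lost period.
odom_deltas = np.tile([1.0, 0.0, 0.0], (5, 1))  # odometry between frames
first_map_anchors = {2: [2.1, 0.05, 0.0], 3: [3.0, -0.04, 0.0]}  # lost period
second_map_anchors = {0: [0.0, 0.0, 0.0], 1: [1.0, 0.0, 0.0],
                      4: [4.05, 0.0, 0.0], 5: [5.0, 0.02, 0.0]}

def residuals(flat_poses):
    poses = flat_poses.reshape(-1, 3)
    res = []
    for i, anchor in first_map_anchors.items():   # first matching error
        res.append(poses[i] - anchor)
    for i, anchor in second_map_anchors.items():  # second matching error
        res.append(poses[i] - anchor)
    for i in range(len(poses) - 1):               # third (odometry) error
        res.append(poses[i + 1] - poses[i] - odom_deltas[i])
    return np.concatenate(res)

solution = least_squares(residuals, np.zeros(6 * 3))  # nonlinear least squares
optimized_poses = solution.x.reshape(-1, 3)           # optimized pose set
```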
The first positioning map is illustratively constructed in real time based on at least one frame of first map data. The at least one frame of first map data includes the map data frames corresponding to the period during which the positioning of the mobile robot was lost. The first positioning map corresponding to any first map data frame is the first positioning map constructed from that frame and the first map data frames preceding it. For the map data frames acquired during the positioning loss, acquiring the robot poses corresponding to the multiple frames of map data may include the following steps: while the mobile robot is in the positioning-lost period, for any current first map data frame, acquiring the first robot pose corresponding to the current first map data frame as determined by the odometer of the mobile robot; and/or matching the current first map data frame with the first positioning map corresponding to the previous first map data frame to obtain a fourth matching error, and determining the second robot pose corresponding to the current first map data frame based on the obtained fourth matching error. The robot pose corresponding to the current first map data frame is then determined based on the first robot pose and/or the second robot pose.
Alternatively, the first robot pose corresponding to the current first map data frame may be determined based on the odometer of the mobile robot. In one embodiment, if the current first map data frame is the first of the at least one frame of first map data, its robot pose may be determined from the robot pose of at least one map data frame acquired before the positioning was lost, for example the last such frame. In a specific embodiment, after the positioning is lost, the first robot pose corresponding to the first frame of first map data acquired during the loss may be determined from the pose of the last map data frame acquired before the loss, together with information such as the distance moved and/or the angle rotated since that frame. The first robot pose corresponding to the second frame of first map data may then be determined from the first robot pose of the first frame, together with the distance moved and/or the angle rotated relative to the first frame. Repeating this process yields the first robot pose of every first map data frame. Taking the embodiment in which 6 real-time frames are continuously acquired as an example, the last map data frame acquired before the positioning was lost may be denoted I. If each real-time frame is regarded as a map data frame, the first robot pose corresponding to I1 may be determined from the robot pose of I and information such as the distance moved and/or the angle rotated between the acquisition of I and the acquisition of I1. Once the first robot pose of I1 is determined, the first robot pose of I2 may be determined from it, together with the distance moved and/or the angle rotated between the acquisition of I1 and the acquisition of I2. Repeating this process yields the first robot poses corresponding to I1, I2, I3, I4, I5 and I6.
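A minimal dead-reckoning sketch of this chained propagation, assuming planar motion and an odometer that reports (distance moved, angle rotated) per frame; the numbers are illustrative only:

```python
import math

def propagate_pose(prev_pose, dist, dtheta):
    """Dead-reckon the next first-robot pose from the previous pose and
    the odometry-reported motion since the previous frame."""
    x, y, theta = prev_pose
    return (x + dist * math.cos(theta),
            y + dist * math.sin(theta),
            theta + dtheta)

# Start from the pose of frame I (the last frame before the loss) and
# chain through the frames gathered during the lost period (I1, I2, ...).
pose = (0.0, 0.0, 0.0)  # pose of frame I
for dist, dtheta in [(0.5, 0.1), (0.5, 0.0), (0.4, -0.05)]:
    pose = propagate_pose(pose, dist, dtheta)
```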
Alternatively, the second robot pose corresponding to the current first map data frame may be determined based on the fourth matching error. The fourth matching error determination method may be determined according to the type of the first positioning map. Specific methods are described in detail below.
In one example, a first robot pose corresponding to an arbitrary first map data frame may be taken as the final robot pose (which may be referred to as a target robot pose) of the first map data frame. In another example, the second robot pose corresponding to any first map data frame may be taken as the target robot pose for that first map data frame. In yet another example, the target robot pose of the first map data frame may also be obtained based on a combination of the first robot pose and the second robot pose corresponding to any of the first map data frames, e.g., averaging the first robot pose and the second robot pose.
According to the technical scheme, the odometer and/or the fourth matching error of the mobile robot are/is utilized, so that the pose of the robot corresponding to the current first map data frame can be accurately determined.
Illustratively, matching the current first map data frame with the first positioning map corresponding to the previous first map data frame to obtain the fourth matching error, and determining the second robot pose corresponding to the current first map data frame based on it, may include the following step: when the first positioning map is a laser grid map, matching the current first map data frame with the first positioning map corresponding to the previous first map data frame by correlative scan matching and a least-squares-based optimization method to obtain the fourth matching error, and determining the second robot pose corresponding to the current first map data frame based on the robot pose corresponding to the previous first map data frame and the fourth matching error.
In one embodiment, when the first positioning map is a laser grid map, correlative scan matching may be performed between the current first map data frame and the first positioning map corresponding to the previous first map data frame: the laser point cloud corresponding to the current first map data frame is exhaustively matched against the first positioning map by a brute-force search. The result may then be refined with a least-squares-based optimization method, for example using a least squares solver library (such as Ceres), to obtain the fourth matching error. The second robot pose corresponding to the current first map data frame is determined from the fourth matching error and the robot pose corresponding to the previous first map data frame.
According to the embodiment, under the condition that the first positioning map is the laser grid map, the second robot pose corresponding to the current first map data frame can be accurately determined by utilizing the correlation scanning matching and the least squares solution-based optimization method.
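The coarse, brute-force stage of such a correlative search might look like the sketch below; the search windows, step counts, and scoring rule are assumptions, and the subsequent least-squares refinement (e.g., with a solver such as Ceres) is omitted:

```python
import numpy as np

def correlative_match(grid, scan_pts, init_pose, resolution=0.05,
                      window=0.2, angle_window=0.1, steps=5):
    """Brute-force search of (dx, dy, dtheta) around the initial pose,
    scoring each candidate by how many scan points land on occupied cells.

    grid: 2-D occupancy grid (1 = occupied); scan_pts: (N, 2) points in
    the robot frame, in meters; init_pose: (x, y, theta) prior estimate.
    """
    best_score, best_pose = -1.0, init_pose
    x0, y0, t0 = init_pose
    for dt in np.linspace(-angle_window, angle_window, steps):
        c, s = np.cos(t0 + dt), np.sin(t0 + dt)
        rotated = scan_pts @ np.array([[c, -s], [s, c]]).T
        for dx in np.linspace(-window, window, steps):
            for dy in np.linspace(-window, window, steps):
                pts = rotated + [x0 + dx, y0 + dy]
                ij = np.round(pts / resolution).astype(int)
                ok = ((ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[1]) &
                      (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[0]))
                score = grid[ij[ok, 1], ij[ok, 0]].sum()
                if score > best_score:
                    best_score = score
                    best_pose = (x0 + dx, y0 + dy, t0 + dt)
    return best_pose
```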
Illustratively, matching the current first map data frame with the first positioning map corresponding to the previous first map data frame to obtain the fourth matching error, and determining the second robot pose based on it, may include the following step: when the first positioning map is a visual map, tracking the positions, in the current first map data frame, of the map points to be matched in the first positioning map by an optical flow method to obtain the fourth matching error between the current first map data frame and the first positioning map corresponding to the previous first map data frame, and determining the second robot pose corresponding to the current first map data frame based on the robot pose corresponding to the previous first map data frame and the fourth matching error.
It can be understood that the map points to be matched are feature points present in each first map data frame, such as desks or fire hydrants visible in every frame.
Optionally, before the optical flow tracking, a binocular estimation method may be used to estimate depth information of the map points to be matched in any first map data frame, and the optical flow tracking may be performed based on this depth information. For example, the estimated depth information may be shared across all first map data frames, i.e., every first map data frame may use the same estimated depth of the map points to be matched. By tracking the positions of the map points to be matched in each first map data frame with an optical flow method, the position deviation of the map points between two adjacent first map data frames can be determined, and the fourth matching error between the two frames can be determined from this deviation. Any existing optical flow method may be used for this purpose and is not described in detail here. The second robot pose corresponding to the current first map data frame is then determined from the fourth matching error and the robot pose corresponding to the previous first map data frame.
In the above embodiment, when the first positioning map is a visual map, the second robot pose corresponding to the current first map data frame can be accurately determined by using the optical flow tracking method.
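As a sketch of the tracking step, using OpenCV's pyramidal Lucas-Kanade tracker; the window size and pyramid depth are illustrative choices, and prev_pts must use the float32 (N, 1, 2) point layout OpenCV expects:

```python
import cv2
import numpy as np

def track_map_points(prev_frame, curr_frame, prev_pts):
    """Track map points to be matched from one first map data frame to
    the next with pyramidal Lucas-Kanade optical flow."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_frame, curr_frame, prev_pts.astype(np.float32), None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    # Position deviation of each tracked point between the two adjacent
    # frames; its statistics feed the fourth matching error.
    deviation = curr_pts[good] - prev_pts[good]
    return curr_pts[good], deviation
```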
Illustratively, after determining the robot pose corresponding to the current first map data frame, the method 200 may further include the steps of: and updating the first positioning map based on the robot pose corresponding to the current first map data frame to obtain the first positioning map corresponding to the current first map data frame.
The first positioning map may be updated with any existing or available update method. In one embodiment, when the first positioning map is a laser grid map, a Bayesian update scheme may be employed, for example in the manner of a mapping library such as Cartographer. In another embodiment, when the first positioning map is a visual map, it may be updated based on the map points to be matched of each map data frame, for example following feature point management strategies such as those of VINS.
According to the technical scheme, the first positioning map can be updated according to the robot pose corresponding to the current first map data frame, and accuracy of the first positioning map is guaranteed.
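For the laser-grid branch, a Bayesian occupancy update is commonly carried out in log-odds form; a minimal sketch with arbitrary increment values:

```python
import numpy as np

LOG_ODDS_HIT, LOG_ODDS_MISS = 0.85, -0.4  # illustrative increments

def update_grid(log_odds, hit_cells, miss_cells):
    """Bayesian occupancy update in log-odds form: cells observed as
    occupied drift toward +inf, cells the beam passed through toward -inf."""
    for x, y in hit_cells:
        log_odds[y, x] += LOG_ODDS_HIT
    for x, y in miss_cells:
        log_odds[y, x] += LOG_ODDS_MISS
    return log_odds

def occupancy_probability(log_odds):
    """Convert accumulated log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))
```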
Illustratively, determining the second robot pose corresponding to the current first map data frame based on the robot pose corresponding to the previous first map data frame and the fourth matching error may include the steps of:
and determining the initial robot pose corresponding to the current first map data frame based on the robot pose corresponding to the previous first map data frame and the fourth matching error.
Based on the initial robot pose, the current first map data frame is respectively projected to the first positioning map corresponding to each of the at least one previous first map data frame, and at least one re-projection error between the current first map data frame and the first positioning map corresponding to each of the at least one previous first map data frame is obtained. The at least one frame of previous first map data frame is a first map data frame preceding the current first map data frame, and the at least one re-projection error corresponds one-to-one to the at least one frame of previous first map data frame.
And optimizing the initial robot pose based on at least one reprojection error to obtain a second robot pose corresponding to the current first map data frame.
The at least one previous first map data frame may be any one or more first map data frames preceding the current first map data frame. At least one previous first map data frame may or may not be adjacent to the current first map data frame. Alternatively, the number of at least one frame of previous first map data frames may be one or more. For example, the at least one frame of previous first map data frames may be all of the first map data frames preceding the current first map data frame. Alternatively, the at least one frame of the previous first map data frame may be a previous first map data frame to the current first map data frame.
In one embodiment, the first map data frames are I1, I2 and I3 in sequence. When the robot pose of I1 is known, the initial robot pose of I2 may be determined from the robot pose of I1 and the matching result between I2 and the first positioning map corresponding to I1. The initial robot pose may be obtained, for example, by a Perspective-n-Point (PnP) algorithm. Then, based on the initial robot pose of I2, I2 is projected into the first positioning map corresponding to I1, and the reprojection error between I2 and that map is obtained. The initial robot pose of I2 is optimized according to this reprojection error so that the error is minimized, yielding the robot pose of I2. When the robot poses of I1 and I2 are known, the initial robot pose of I3 may be determined from the robot pose of I2 and the matching result between I3 and the first positioning map corresponding to I2. Then, based on the initial robot pose of I3, I3 is projected into the first positioning maps corresponding to I1 and I2, and the reprojection errors between I3 and those maps are obtained. The initial robot pose of I3 is optimized based on these reprojection errors so that they are minimized, yielding the robot pose of I3.
Optimizing the initial robot pose based on the at least one reprojection error may mean optimizing the initial robot pose such that the at least one reprojection error is minimized. Preferably, the number of first map data frames in the at least one previous first map data frame is greater than or equal to 2. The at least one previous first map data frame may or may not be consecutive. For example, assuming the initial robot pose of the 20th first map data frame has been obtained, the frame may be reprojected into the first positioning maps corresponding to the 17th, 18th and 19th first map data frames (i.e., the first positioning maps as updated through those frames), and the respective reprojection errors calculated. All reprojection errors can then be minimized, for example by a least squares method, to optimize the initial robot pose of the 20th first map data frame.
According to the technical scheme, the initial robot pose corresponding to the current first map data frame is estimated by utilizing the robot pose corresponding to the previous first map data frame and the matching information, and the initial robot pose is optimized by utilizing the re-projection error between the current first map data frame and the first positioning map corresponding to at least one frame of previous first map data frame, so that the second robot pose of the current first map data frame is obtained. The scheme is helpful for accurately determining the second robot pose of the current first map data frame.
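The project-and-refine step could be sketched as follows, assuming a planar robot pose (x, y, theta), a crude pinhole model with made-up intrinsics, and map points already expressed as 3-D world coordinates; a real system would use its calibrated camera model:

```python
import numpy as np
from scipy.optimize import least_squares

FX = FY = 500.0  # assumed pinhole intrinsics (placeholders)
CX = CY = 320.0

def project(pose, pts_world):
    """Project 3-D map points into the current frame for a planar pose."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    # World -> camera frame: inverse of the planar robot pose.
    dx, dy = pts_world[0] - x, pts_world[1] - y
    xc = c * dx + s * dy
    yc = -s * dx + c * dy
    zc = pts_world[2]
    return np.vstack([FX * xc / zc + CX, FY * yc / zc + CY])

def refine_pose(init_pose, prev_maps, observations):
    """Optimize the initial pose so that the reprojection errors against
    the first positioning maps of several previous frames are minimized.

    prev_maps: list of (3, N) world-point arrays, one per previous map.
    observations: list of (2, N) observed pixel coordinates in the
                  current first map data frame, aligned with prev_maps.
    """
    def residuals(pose):
        return np.concatenate([(project(pose, m) - obs).ravel()
                               for m, obs in zip(prev_maps, observations)])
    return least_squares(residuals, init_pose).x
```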
Illustratively, the method 200 may further include the steps of:
Matching information between each second map data frame acquired by the mobile robot in real time and the second positioning map is acquired. The second map data frames acquired in real time include the multiple frames of map data. The matching information indicates whether the corresponding second map data frame matches the second positioning map.
If the matching information corresponding to a consecutive first target number of second map data frames indicates no match, it is determined that the positioning of the mobile robot has been lost.
If, while the positioning of the mobile robot is lost, the matching information corresponding to a consecutive second target number of second map data frames indicates a match, it is determined that the positioning of the mobile robot has been retrieved; and/or, if repositioning information input by the user is received while the positioning is lost, it is determined that the positioning has been retrieved, the repositioning information including the current robot pose of the mobile robot.
Alternatively, the second map data frames collected by the mobile robot may be identical to, partially different from, or completely different from the first map data frames. It will be appreciated that the matching information between a second map data frame and the second positioning map may be uploaded automatically by the mobile robot or calculated by the device (such as a server) that performs the map generation method 200. Specific ways of calculating it are described below.
After the matching information between each second map data frame acquired by the mobile robot and the second positioning map is acquired, if the matching information corresponding to a consecutive first target number of second map data frames indicates no match, it is determined that the positioning of the mobile robot is lost. If, while the positioning is lost, the matching information corresponding to a consecutive second target number of second map data frames indicates a match, it is determined that the positioning of the mobile robot has been retrieved.
The first target number may be set according to the required accuracy in determining whether the positioning of the mobile robot is lost: the greater the first target number, the more second map data frames are used for the determination, and the higher the accuracy, and vice versa.
The second target number may likewise be set according to the required accuracy in determining whether the positioning is retrieved: the greater the second target number, the more second map data frames are used for the determination, and the higher the accuracy, and vice versa.
It is understood that the first target number and the second target number may be the same; for example, both may be 5. They may also differ; for example, the first target number may be 5 and the second target number 3. In one embodiment of the application, the first target number is smaller than the second target number, i.e., the condition for determining that the positioning has been retrieved is stricter than the condition for determining that it has been lost. This arrangement improves the accuracy of the determination that the positioning has been retrieved and is therefore preferred.
As described above, the first target number refers to a consecutive run. In the above embodiment with a first target number of 5, the positioning of the mobile robot is determined to be lost when 5 consecutive second map data frames fail to match the second positioning map. In one embodiment, the first target number is 3. Let k1 denote the matching information of a second map data frame with the second positioning map, with k1 = 0 denoting no match and k1 = 1 denoting a match. If the values of k1 for 4 consecutive second map data frames are 0, 1, 0, 0 in order, then although 3 of these frames do not match the second positioning map, the first target number is not satisfied because the 3 non-matching frames are not consecutive. In this case it is determined that the positioning of the mobile robot is not lost.
The second target number likewise refers to a consecutive run; in the embodiment with a second target number of 3, the positioning is determined to be retrieved when 3 consecutive second map data frames match the second positioning map. In one embodiment, the second target number is 3. Let k2 denote the matching information of a second map data frame with the second positioning map, with k2 = 0 denoting no match and k2 = 1 denoting a match. If the values of k2 for 4 consecutive second map data frames are 1, 0, 1, 1 in order, then although 3 of these frames match the second positioning map, the second target number is not satisfied because the 3 matching frames are not consecutive. In this case it is determined that the positioning of the mobile robot is still lost.
Illustratively, whether the positioning of the mobile robot is lost may be represented by f1: f1 = 0 means the positioning is not lost; f1 = 1 means it is lost. In one embodiment, the first target number is set to 3. Let k1 denote the matching information of a second map data frame with the second positioning map, with k1 = 0 denoting no match and k1 = 1 denoting a match. Suppose the values of k1 for 10 consecutive second map data frames are 0, 1, 0, 0, 1, 0, 1, 0, 0, 0 in order. When determining for the 1st to 9th second map data frames whether the positioning is lost, no run of a first target number of consecutive non-matching frames occurs, so f1 remains 0. When determining for the 10th frame, the 8th to 10th second map data frames do not match the second positioning map; a consecutive first target number of non-matching frames now exists, so f1 = 1 and the positioning of the mobile robot is determined to be lost.
Illustratively, after the positioning of the mobile robot is lost, whether it has been retrieved may be represented by f2: f2 = 0 means the positioning is not retrieved; f2 = 1 means it is retrieved. In one embodiment, the second target number is set to 4. Let k2 denote the matching information of a second map data frame with the second positioning map, with k2 = 0 denoting no match and k2 = 1 denoting a match. Suppose the values of k2 for 10 consecutive second map data frames are 0, 1, 0, 1, 0, 0, 1, 1, 1, 1 in order. When determining for the 1st to 9th second map data frames whether the positioning is retrieved, no run of a second target number of consecutive matching frames occurs, so f2 remains 0. When determining for the 10th frame, the 7th to 10th second map data frames match the second positioning map; a consecutive second target number of matching frames now exists, so f2 = 1 and the positioning of the mobile robot is determined to be retrieved.
In addition, the user can input repositioning information at any time, and the repositioning information comprises the robot pose of the mobile robot at the current moment. Once the repositioning information input by the user is received, the positioning recovery of the mobile robot can be directly determined.
According to the above technical scheme, by requiring consecutive runs of the first target number and the second target number of second map data frames to indicate the lost and retrieved states respectively, the influence of an acquisition error in any single second map data frame on the accuracy of the determination can be prevented. The scheme therefore has good fault tolerance and reliability.
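The run-length logic above can be condensed into a small state machine. A sketch, with illustrative target numbers (5 consecutive mismatches to declare a loss, 3 consecutive matches to declare retrieval):

```python
def update_localization_state(match_flags, n_lost=5, n_found=3):
    """Scan per-frame match flags (1 = matched, 0 = not matched) and
    report the frame indices at which positioning is lost or retrieved,
    using runs of consecutive frames as the first/second target numbers."""
    lost, run, events = False, 0, []
    for i, k in enumerate(match_flags):
        if not lost:
            run = run + 1 if k == 0 else 0  # consecutive mismatches
            if run >= n_lost:
                lost, run = True, 0
                events.append((i, "lost"))
        else:
            run = run + 1 if k == 1 else 0  # consecutive matches
            if run >= n_found:
                lost, run = False, 0
                events.append((i, "retrieved"))
    return events

# Loss is declared at frame 6 (five consecutive mismatches) and
# retrieval at frame 9 (three consecutive matches).
print(update_localization_state([1, 1, 0, 0, 0, 0, 0, 1, 1, 1]))
```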
Illustratively, the matching information is a matching score. After the matching information between each second map data frame acquired in real time and the second positioning map is acquired, the method 200 may further include the following step: the matching score corresponding to any second map data frame is compared with a score threshold; if the score is greater than the threshold, the second map data frame is determined to match the second positioning map, and otherwise it is determined not to match.
In the above-described embodiment in which the matching information is determined using the mean of the distance errors of all projected points, the matching score is inversely related to that mean: the larger the mean, the lower the matching score, and vice versa. In one embodiment, with the mean of the distance errors of all projected points denoted E1, the matching score N may be expressed as N = λ1 × E1⁻¹, where λ1 is a weight coefficient that may be set according to actual needs.
In the above-described embodiment in which the matching information between the second map data frame and the second positioning map is determined based on the reprojection error, the matching score is inversely related to the reprojection error: the larger the reprojection error, the lower the matching score, and vice versa. In one embodiment, with E2 denoting the sum or average of the reprojection errors of all map points to be matched on the image corresponding to the second map data frame, the matching score N may be expressed as N = λ2 × E2⁻¹, where λ2 is a weight coefficient that may be set according to actual needs.
According to the technical scheme, whether the second map data frame is matched with the second positioning map or not can be accurately determined by utilizing the matching score, so that the accuracy of a judgment result of whether the mobile robot positioning is lost or retrieved is improved.
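A direct transcription of this scoring rule, with the weight coefficient and the score threshold left as free parameters (the default values are placeholders):

```python
def match_score(error, lam=1.0):
    """N = lam * E**(-1): the score falls as the projection error grows."""
    return lam / max(error, 1e-9)  # guard against a zero error

def is_matched(error, lam=1.0, threshold=2.0):
    """A second map data frame matches the second positioning map when
    its matching score exceeds the score threshold."""
    return match_score(error, lam) > threshold
```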
Illustratively, the matching information between any of the second map data frames acquired by the mobile robot and the second positioning map is calculated by: and under the condition that the second positioning map is a laser grid map, each laser point on the second map data frame is projected onto the second positioning map based on the pose of the robot corresponding to the second map data frame, and the matching information between the second map data frame and the second positioning map is determined based on the re-projection error between the projected laser point and each laser point on the second positioning map.
In one embodiment, the mobile robot is a laser-SLAM-based robot, and the acquired frames and the second positioning map are laser grid maps. For a given second map data frame, each laser point on the frame is projected onto the second positioning map by grid projection, based on the robot pose corresponding to the frame. The reprojection error between each projected point and its corresponding grid point in the second positioning map is calculated, the mean of the distance errors of all projected points is computed, and the matching information is determined from this mean. It will be appreciated that, since the cells of the laser grid map are discrete while the projected positions are floating-point values, the distance error may be calculated by interpolation to improve accuracy: bilinear interpolation when the laser grid map is two-dimensional, and trilinear interpolation when it is three-dimensional.
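A bilinear lookup for scoring a projected laser point at a floating-point grid position might look like this (bounds checking is omitted for brevity; trilinear interpolation extends the same idea to three dimensions):

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinearly interpolate a 2-D grid at a floating-point position,
    for sub-cell accuracy when evaluating projected laser points."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0
    return ((1 - tx) * (1 - ty) * grid[y0, x0]
            + tx * (1 - ty) * grid[y0, x0 + 1]
            + (1 - tx) * ty * grid[y0 + 1, x0]
            + tx * ty * grid[y0 + 1, x0 + 1])
```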
Illustratively, the matching information between any of the second map data frames acquired by the mobile robot and the second positioning map is calculated by: and under the condition that the second positioning map is a visual map, projecting each characteristic point on the second map data frame onto the second positioning map based on the robot pose corresponding to the second map data frame, and determining matching information between the second map data frame and the second positioning map based on the re-projection error between the projected characteristic point and each characteristic point on the second positioning map.
In one embodiment, the mobile robot is a vision-SLAM-based robot, and the acquired frames and the second positioning map are visual maps. The matching information may be calculated as follows. Based on the robot pose corresponding to the second map data frame, the map points to be matched on the frame are projected onto the second positioning map. For each map point to be matched, the distance between its projected point and its corresponding matching point on the second positioning map is calculated; the corresponding matching point is the map point on the second positioning map that corresponds to the map point to be matched on the frame. This distance is the reprojection error of that map point. Repeating this for every map point to be matched yields the reprojection error of each; summing or averaging these errors gives the reprojection error of the second map data frame, from which the matching information between the frame and the second positioning map is determined.
According to the technical scheme, by adopting the reprojection mode, the reprojection error between the second map data frame and the second positioning map can be accurately calculated, so that the matching information can be accurately determined based on the reprojection error.
For example, the map generation method 200 may be implemented as a program module independent of the main SLAM program; when the method needs to be executed, the device (e.g., a server) executing it may invoke the program module to start the corresponding program. In this way, no resources are occupied when there is no map update, and the normal operation of other functions of the SLAM program is not affected when there is one.
For example, the map generation method according to the embodiment of the present application may be implemented in a device, apparatus or system having a memory and a processor.
The map generation method according to the embodiment of the application can be deployed at an image acquisition end, for example, at a personal terminal or a server end.
Alternatively, the map generation method according to the embodiment of the present application may be distributed and deployed at the server side (or cloud side) and the personal terminal. For example, an image may be acquired at a client, where the client transmits the acquired image to a server (or cloud) and the server (or cloud) generates a map.
According to another aspect of the present application, there is provided a map generating apparatus. Fig. 5 shows a schematic block diagram of a map generating apparatus 500 according to an embodiment of the application.
As shown in fig. 5, the map generating apparatus 500 according to an embodiment of the present application includes a detection module 510, an acquisition module 520, an optimization module 530, and a generation module 540. The respective modules may perform the respective steps of the map generation method described in fig. 2 above, respectively. Only the main functions of the respective components of the map generating apparatus 500 will be described below, and the details already described above will be omitted.
The detection module 510 is configured to detect, after the positioning of the mobile robot is lost, whether the positioning is retrieved. The detection module 510 may be implemented by the processor 102 in the electronic device shown in fig. 1 running program instructions stored in the storage 104.
The acquisition module 520 is configured to acquire, if the positioning of the mobile robot is retrieved, the multiple frames of map data corresponding to the mobile robot, together with the first positioning map used by the mobile robot during the positioning loss and the second positioning map used before the loss and after the retrieval; the multiple frames of map data include the map data frames corresponding to the positioning-loss period and the map data frames adjacent to them before the loss and/or after the retrieval. The acquisition module 520 may be implemented by the processor 102 in the electronic device shown in fig. 1 running program instructions stored in the storage 104.
The optimizing module 530 is configured to optimize the poses of the robots corresponding to the multiple frames of map data frames based on the first positioning map, the second positioning map, and the multiple frames of map data frames, so as to obtain an optimized pose set of the mobile robot. The optimization module 530 may be implemented by the processor 102 in the electronic device shown in fig. 1 running program instructions stored in the storage 104.
The generating module 540 is configured to align each of the multiple frames of map data frames based on the optimized pose set, and fuse the aligned map data frames together to generate a smooth map. The smooth map is used to update the second positioning map. The generation module 540 may be implemented by the processor 102 in the electronic device shown in fig. 1 running program instructions stored in the storage 104.
Fig. 6 shows a schematic block diagram of an electronic device 600 according to an embodiment of the application. The electronic device 600 includes a memory 610 and a processor 620.
The memory 610 stores computer program instructions for implementing the respective steps in the map generation method according to an embodiment of the present application.
The processor 620 is operative to execute computer program instructions stored in the memory 610 to perform corresponding steps of a map generation method in accordance with an embodiment of the present application.
In one embodiment, the computer program instructions, when executed by the processor 620, are configured to perform the steps of: after the mobile robot is lost in positioning, detecting whether the positioning of the mobile robot is recovered. If the mobile robot is located and retrieved, acquiring a multi-frame map data frame corresponding to the mobile robot, and acquiring a first locating map used by the mobile robot during the lost locating period and a second locating map used by the mobile robot before the lost locating and after the retrieving locating; the multi-frame map data frames comprise map data frames corresponding to the mobile robot in the positioning lost period and map data frames adjacent to the map data frames corresponding to the mobile robot in the positioning lost period before and/or after the mobile robot is positioned and retrieved. And optimizing the positions of the robots corresponding to the multiple frames of map data frames respectively based on the first positioning map, the second positioning map and the multiple frames of map data frames to obtain an optimized position set of the mobile robot. And aligning each map data frame in the multi-frame map data frames based on the optimized pose set, and fusing the aligned map data frames together to generate a smooth map. The smooth map is used to update the second positioning map.
Illustratively, the electronic device 600 may further include an image capture device 630. The image acquisition device 630 is used for acquiring a target image. The target image may be any of the map data frames described herein, such as a first map data frame or a second map data frame, etc. The image capturing device 630 is optional, and the electronic apparatus 600 may not include the image capturing device 630. The processor 620 may then obtain the target image by other means, such as from an external device or from the memory 610.
Furthermore, according to an embodiment of the present application, there is also provided a storage medium on which program instructions are stored for performing the respective steps of the map generation method of the embodiment of the present application when the program instructions are executed by a computer or a processor, and for realizing the respective modules in the map generation apparatus according to the embodiment of the present application. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media.
In one embodiment, the program instructions, when executed by a computer or processor, may cause the computer or processor to implement the respective functional modules of the map generating apparatus according to the embodiment of the present application, and/or may perform the map generating method according to the embodiment of the present application.
In one embodiment, the program instructions, when executed, are configured to perform the steps of: after the mobile robot is lost in positioning, detecting whether the positioning of the mobile robot is recovered. If the mobile robot is located and retrieved, acquiring a multi-frame map data frame corresponding to the mobile robot, and acquiring a first locating map used by the mobile robot during the lost locating period and a second locating map used by the mobile robot before the lost locating and after the retrieving locating; the multi-frame map data frames comprise map data frames corresponding to the mobile robot in the positioning lost period and map data frames adjacent to the map data frames corresponding to the mobile robot in the positioning lost period before and/or after the mobile robot is positioned and retrieved. And optimizing the positions of the robots corresponding to the multiple frames of map data frames respectively based on the first positioning map, the second positioning map and the multiple frames of map data frames to obtain an optimized position set of the mobile robot. And aligning each map data frame in the multi-frame map data frames based on the optimized pose set, and fusing the aligned map data frames together to generate a smooth map. The smooth map is used to update the second positioning map.
Furthermore, according to an embodiment of the present application, there is also provided a computer program product comprising a computer program for executing the above-mentioned map generation method 200 when the computer program is run.
The modules in the electronic device according to the embodiments of the present application may be implemented by a processor of the electronic device implementing map generation or map generation according to the embodiments of the present application running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer readable storage medium of a computer program product according to the embodiments of the present application are run by a computer.
Furthermore, according to an embodiment of the present application, there is also provided a computer program for executing the above-described map generation method 200 when running.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of elements is merely a logical function division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted, or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the application and aid in understanding one or more of the various application aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. However, the method of the present application should not be construed as reflecting the following intent: i.e., the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in the map generation apparatus according to the embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
The above description is merely illustrative of the embodiments of the present application and the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered by the protection scope of the present application. The protection scope of the application is subject to the protection scope of the claims.

Claims (14)

1. A map generation method, comprising:
detecting whether the positioning of the mobile robot is recovered or not after the mobile robot is lost in positioning;
If yes, acquiring multi-frame map data frames corresponding to the mobile robot, and acquiring a first positioning map used by the mobile robot during the positioning loss period and a second positioning map used by the mobile robot before the positioning loss and after the positioning recovery; the multi-frame map data frame comprises a map data frame corresponding to the mobile robot in a positioning lost period and a map data frame adjacent to the map data frame corresponding to the mobile robot in a positioning lost period before and/or after positioning retrieval;
optimizing the robot pose corresponding to each multi-frame map data frame based on the first positioning map, the second positioning map and the multi-frame map data frame to obtain an optimized pose set of the mobile robot;
And aligning each map data frame in the multi-frame map data frames based on the optimized pose set, and fusing the aligned map data frames together to generate a smooth map, wherein the smooth map is used for updating the second positioning map.
2. The method of claim 1, wherein after the fusing of the aligned map data frames together to generate a smooth map, the method further comprises:
For each of the plurality of frames of map data frames, determining a map update range corresponding to the optimized robot pose on the smooth map based on the optimized robot pose in the set of optimized poses corresponding to the map data frame;
fusing first map information on the smooth map, which is positioned in the map updating range corresponding to each multi-frame map data frame, to the second positioning map to obtain a fused map;
displaying the fused map on a display interface;
And replacing the second positioning map with the fused map based on an updating instruction input by a user, or reserving the second positioning map and deleting the fused map based on a deleting instruction input by the user.
3. The method of claim 2, wherein after the fusing of the aligned map data frames together to generate a smooth map, the method further comprises:
Displaying map related information of the fused map on the display interface; wherein the map-associated information includes one or more of: the identification information of the mobile robot, the track information of the mobile robot during the positioning loss period, the acquisition time of the fused map and the map pose corresponding to the smooth map; the track information is represented by robot poses corresponding to map data frames corresponding to the lost positioning period, the map poses are used for representing the relative positions of the smooth map in the second positioning map, and the map poses are determined based on the robot poses corresponding to the multi-frame map data frames.
4. The method of claim 2, wherein the fusing the map information on the smooth map within the map update range corresponding to each of the plurality of frames of map data onto the second positioning map, to obtain the fused map, includes:
Replacing second map information in the area corresponding to the map updating range on the second positioning map with first map information in the map updating range corresponding to each multi-frame map data frame on the smooth map to obtain the fused map;
Or alternatively
And based on a first transparency and a second transparency, superposing first map information, which is positioned in a map updating range corresponding to each multi-frame map data frame, on second map information, which is positioned in a region corresponding to the map updating range, on the second positioning map, so as to obtain the fused map, wherein the first transparency is transparency corresponding to the first map information, and the second transparency is transparency corresponding to the second map information.
5. The method of claim 2, wherein,
The robot pose comprises the position of the mobile robot and the orientation of the mobile robot;
The determining, on the smooth map, a map update range corresponding to the optimized robot pose based on the optimized robot pose corresponding to the map data frame in the optimized pose set, including:
On the smooth map, rays are respectively made along a plurality of directions from a target map point, wherein the target map point is the position of the optimized mobile robot corresponding to the map data frame, and the included angle between the two outermost rays and the orientation of the optimized mobile robot corresponding to the map data frame is smaller than or equal to 90 degrees;
determining an intersection point of the ray with an obstacle;
and determining a target area containing the intersection point as a map updating range corresponding to the map data frame, wherein the distance from each map point in the target area to the intersection point is smaller than or equal to a target distance threshold value.
6. The method of any one of claims 1 to 5, wherein,
The optimizing the robot pose of the mobile robot corresponding to each of the multiple frames of map data frames based on the first positioning map, the second positioning map and the multiple frames of map data frames includes:
Acquiring the robot pose corresponding to each multi-frame map data frame;
Calculating an overall constraint error corresponding to the robot pose based on a first matching error between each map data frame during the lost positioning and the first positioning map, a second matching error between each map data frame before the lost positioning and/or after the retrieved positioning and the second positioning map, and a third matching error corresponding to every two adjacent map data frames in the multi-frame map data frames; wherein the third matching error is an error generated based on an odometer of the mobile robot;
And optimizing the robot pose corresponding to each multi-frame map data frame so as to minimize the total constraint error.
7. The method of claim 6, wherein the first positioning map is constructed in real time from at least one first map data frame, the first map data frames being the map data frames acquired by the mobile robot during the positioning loss, and the first positioning map corresponding to any first map data frame is the map constructed from that first map data frame and the first map data frames preceding it;
for a map data frame acquired by the mobile robot during the positioning loss, the acquiring the robot pose corresponding to each map data frame comprises:
for any current first map data frame, acquiring a first robot pose for the current first map data frame as determined by the odometer of the mobile robot; and/or matching the current first map data frame against the first positioning map corresponding to the previous first map data frame to obtain a fourth matching error, and determining a second robot pose for the current first map data frame based on the fourth matching error;
and determining the robot pose corresponding to the current first map data frame based on the first robot pose and/or the second robot pose.
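Claim 7 leaves open how the odometry pose and the scan-matched pose are combined. One common choice, sketched here under the assumption of a covariance-weighted average of 2-D poses (the patent does not fix the combination rule, and angle wrap-around is again ignored):

```python
import numpy as np

def combine_poses(pose_odom, cov_odom, pose_match, cov_match):
    """Information-weighted fusion of an odometry pose and a
    scan-matched pose. Each pose is (x, y, theta); each cov is a
    3x3 covariance. Returns the fused pose and its covariance."""
    info_o = np.linalg.inv(cov_odom)
    info_m = np.linalg.inv(cov_match)
    cov = np.linalg.inv(info_o + info_m)          # combined uncertainty
    fused = cov @ (info_o @ np.asarray(pose_odom)
                   + info_m @ np.asarray(pose_match))
    return fused, cov
```

When only one source is available for a frame (e.g. the matcher fails in a featureless corridor), the other is used alone, which matches the claim's "and/or" structure.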
8. The method of claim 7, wherein matching the current first map data frame with the first positioning map corresponding to the previous first map data frame to obtain a fourth matching error, and determining the second robot pose corresponding to the current first map data frame based on the obtained fourth matching error, comprises:
in the case where the first positioning map is a laser grid map, matching the current first map data frame with the first positioning map corresponding to the previous first map data frame by correlative scan matching followed by least-squares optimization to obtain the fourth matching error, and determining the second robot pose for the current first map data frame based on the robot pose of the previous first map data frame and the fourth matching error; and/or
in the case where the first positioning map is a visual map, tracking the positions of the map points to be matched from the first positioning map into the current first map data frame by an optical flow method to obtain the fourth matching error between the current first map data frame and the first positioning map corresponding to the previous first map data frame, and determining the second robot pose for the current first map data frame based on the robot pose of the previous first map data frame and the fourth matching error.
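For the laser-grid branch of claim 8, correlative scan matching exhaustively scores candidate poses around an initial guess, and the best-scoring pose then seeds the least-squares refinement. A stripped-down search over a small pose window (the score counts occupied cells hit by the projected scan; the window sizes, step, and resolution are illustrative):

```python
import numpy as np

def correlative_scan_match(scan_xy, grid, pose0, win=0.5, dth=0.1,
                           step=0.05, res=0.05):
    """Brute-force search for the pose that best aligns scan_xy (Nx2
    points in the robot frame, metres) with an occupancy grid
    (1 = occupied, cell size `res`). pose0: initial (x, y, theta)."""
    best, best_score = np.asarray(pose0, float), -1
    for dx in np.arange(-win, win + 1e-9, step):
        for dy in np.arange(-win, win + 1e-9, step):
            for dt in np.arange(-dth, dth + 1e-9, step):
                x, y, th = pose0 + np.array([dx, dy, dt])
                c, s = np.cos(th), np.sin(th)
                # Rotate scan into the map frame, then translate.
                pts = scan_xy @ np.array([[c, s], [-s, c]]) + (x, y)
                ij = np.round(pts / res).astype(int)
                ok = ((ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[0])
                      & (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[1]))
                score = grid[ij[ok, 0], ij[ok, 1]].sum()
                if score > best_score:
                    best, best_score = np.array([x, y, th]), score
    return best  # feed into least-squares refinement for sub-cell accuracy
```

The visual branch would instead track map points with sparse optical flow (e.g. pyramidal Lucas-Kanade) and use the tracking residuals as the fourth matching error.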
9. The method of claim 8, wherein the determining the second robot pose corresponding to the current first map data frame based on the robot pose corresponding to the previous first map data frame and the fourth matching error comprises:
Determining an initial robot pose corresponding to the current first map data frame based on the robot pose corresponding to the previous first map data frame and the fourth matching error;
projecting, based on the initial robot pose, the current first map data frame onto the first positioning map corresponding to each of at least one previous first map data frame, to obtain at least one reprojection error between the current first map data frame and each of those first positioning maps, wherein the previous first map data frames are first map data frames located before the current first map data frame, and the reprojection errors correspond one-to-one to the previous first map data frames;
And optimizing the initial robot pose based on the at least one reprojection error to obtain a second robot pose corresponding to the current first map data frame.
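Claim 9 refines the initial pose by minimizing reprojection errors against the first positioning maps of several preceding frames. A sketch for the laser case, where "reprojection error" is taken as the distance from each projected scan point to its nearest map point; the nearest-neighbour association is an illustrative simplification (the visual case would use pixel reprojection instead):

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(pose, scan_xy, map_pts_list):
    """pose: (x, y, theta). scan_xy: Nx2 scan in the robot frame.
    map_pts_list: one Mx2 point set per previous frame's first
    positioning map. Residuals are stacked over all previous maps,
    matching claim 9's one-to-one correspondence."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    world = scan_xy @ np.array([[c, s], [-s, c]]) + (x, y)
    res = []
    for map_pts in map_pts_list:
        d = np.linalg.norm(world[:, None, :] - map_pts[None, :, :], axis=2)
        res.append(d.min(axis=1))   # nearest-neighbour reprojection error
    return np.concatenate(res)

# refined_pose = least_squares(reprojection_residuals, pose_init,
#                              args=(scan_xy, map_pts_list)).x
```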
10. The method of any one of claims 1-5, wherein the method further comprises:
acquiring matching information between each second map data frame acquired by the mobile robot in real time and the second positioning map, wherein the second map data frames acquired in real time include the plurality of map data frames, and the matching information indicates whether the corresponding second map data frame matches the second positioning map;
determining that the positioning of the mobile robot is lost if the matching information of a first target number of consecutive second map data frames indicates a mismatch;
and determining that the positioning of the mobile robot is recovered if, while the positioning is lost, the matching information of a second target number of consecutive second map data frames indicates a match; and/or determining that the positioning of the mobile robot is recovered if, while the positioning is lost, relocalization information input by a user is received, the relocalization information comprising the current robot pose of the mobile robot.
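Claim 10 amounts to a small state machine over per-frame match flags: a run of mismatches declares the positioning lost, and a run of matches (or a user relocalization) declares it recovered. A minimal sketch with illustrative values for the two target numbers:

```python
class LocalizationMonitor:
    """Detect positioning loss and recovery from per-frame match flags."""

    def __init__(self, lose_after=10, recover_after=5):
        self.lose_after = lose_after        # first target number
        self.recover_after = recover_after  # second target number
        self.lost = False
        self.streak = 0                     # length of the current run

    def update(self, matched: bool, user_relocalized: bool = False) -> bool:
        """Feed one frame's match flag; return True while positioning is lost."""
        if self.lost and user_relocalized:
            # Relocalization info supplied by the user ends the loss at once.
            self.lost, self.streak = False, 0
            return self.lost
        if not self.lost:
            # Count consecutive mismatches toward declaring a loss.
            self.streak = self.streak + 1 if not matched else 0
            if self.streak >= self.lose_after:
                self.lost, self.streak = True, 0
        else:
            # Count consecutive matches toward declaring a recovery.
            self.streak = self.streak + 1 if matched else 0
            if self.streak >= self.recover_after:
                self.lost, self.streak = False, 0
        return self.lost
```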
11. The method of claim 10, wherein the matching information between any second map data frame acquired by the mobile robot and the second positioning map is calculated by:
in the case where the second positioning map is a laser grid map, projecting each laser point of the second map data frame onto the second positioning map based on the robot pose corresponding to the second map data frame, and determining the matching information between the second map data frame and the second positioning map based on the reprojection errors between the projected laser points and the laser points on the second positioning map; and/or
in the case where the second positioning map is a visual map, projecting each feature point of the second map data frame onto the second positioning map based on the robot pose corresponding to the second map data frame, and determining the matching information between the second map data frame and the second positioning map based on the reprojection errors between the projected feature points and the feature points on the second positioning map.
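For the laser-grid branch of claim 11, the matching information can be reduced to the fraction of projected scan points that land on occupied cells of the second positioning map. A sketch with an illustrative inlier-ratio threshold (the patent does not specify how the reprojection errors are thresholded):

```python
import numpy as np

def frame_matches_map(scan_xy, grid, pose, res=0.05, inlier_ratio=0.6):
    """Project a laser frame onto the second positioning map at the
    given pose and report whether it matches.

    scan_xy: Nx2 laser points in the robot frame (metres).
    grid: occupancy grid, 1 = occupied, cell size `res`.
    pose: (x, y, theta) robot pose for this frame.
    A frame 'matches' when enough points hit occupied cells."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    world = scan_xy @ np.array([[c, s], [-s, c]]) + (x, y)
    ij = np.round(world / res).astype(int)
    ok = ((ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[0])
          & (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[1]))
    hits = grid[ij[ok, 0], ij[ok, 1]] == 1
    return bool(hits.mean() >= inlier_ratio) if hits.size else False
```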
12. An electronic device comprising a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, carry out the map generation method of any one of claims 1 to 11.
13. A storage medium having program instructions stored thereon, wherein the program instructions, when executed, perform the map generation method of any one of claims 1 to 11.
14. A computer program product comprising a computer program, wherein the computer program, when executed, performs the map generation method of any one of claims 1 to 11.
CN202310702847.3A 2023-06-13 2023-06-13 Map generation method, electronic device, storage medium, and computer program product Pending CN117948959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310702847.3A CN117948959A (en) 2023-06-13 2023-06-13 Map generation method, electronic device, storage medium, and computer program product

Publications (1)

Publication Number Publication Date
CN117948959A (en) 2024-04-30

Family

ID=90802202

Country Status (1)

Country Link
CN (1) CN117948959A (en)

Similar Documents

Publication Publication Date Title
US11935288B2 (en) Systems and methods for generating of 3D information on a user display from processing of sensor data for objects, components or features of interest in a scene and user navigation thereon
US11216663B1 (en) Systems and methods for generating of 3D information on a user display from processing of sensor data for objects, components or features of interest in a scene and user navigation thereon
EP4064192A1 (en) Performing object modeling by combining visual data from images with motion data of the image acquisition device
CN108827249B (en) Map construction method and device
JP2019532433A (en) Laser scanner with real-time online egomotion estimation
JP7280452B2 (en) Machine learning-based object identification using scale maps and 3D models
US10949803B2 (en) RFID inventory and mapping system
US11209277B2 (en) Systems and methods for electronic mapping and localization within a facility
JP7280450B2 (en) Image search for walkthrough videos
WO2021113268A1 (en) Systems and methods for generating of 3d information on a user display from processing of sensor data
WO2023005384A1 (en) Repositioning method and device for mobile equipment
CN108710367A (en) Laser data recognition methods, device, robot and storage medium
CN111709988A (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
EP4040400A1 (en) Guided inspection with object recognition models and navigation planning
CN114545426A (en) Positioning method, positioning device, mobile robot and computer readable medium
US11592826B2 (en) Method, system and apparatus for dynamic loop closure in mapping trajectories
GB2605948A (en) Warehouse monitoring system
CN117948959A (en) Map generation method, electronic device, storage medium, and computer program product
US20200182623A1 (en) Method, system and apparatus for dynamic target feature mapping
Ibrahim et al. Reinforcement learning for high-quality reality mapping of indoor construction using unmanned ground vehicles
CN113310484B (en) Mobile robot positioning method and system
CN114463429A (en) Robot, map creation method, positioning method, and medium
KR20220163731A (en) User equipment and control method for the same
ELzaiady et al. Next-best-view planning for environment exploration and 3D model construction
David et al. Context-Aware Visual Search Using a Pan-Tilt-Zoom Camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination