WO2022222345A1 - Positioning correction method and device for mobile robot, storage medium, and electronic device - Google Patents

Positioning correction method and device for mobile robot, storage medium, and electronic device

Info

Publication number
WO2022222345A1
WO2022222345A1 (PCT/CN2021/116171)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
image
object category
score
poses
Prior art date
Application number
PCT/CN2021/116171
Other languages
English (en)
French (fr)
Inventor
张新静
Original Assignee
追觅创新科技(苏州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司 filed Critical 追觅创新科技(苏州)有限公司
Publication of WO2022222345A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass: initial alignment, calibration or starting-up of inertial devices
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G01S17/08 - Systems determining position data of a target for measuring distance only
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes

Definitions

  • the present invention relates to the field of communications, and in particular, to a method and device for positioning correction of a mobile robot, a storage medium, and an electronic device.
  • Mobile robots are equipped with laser ranging units, visual sensors, and inertial navigation units to perceive environmental information, build environment maps using simultaneous localization and mapping technology, and perform autonomous positioning and navigation to carry out tasks. In practical applications, however, emergencies may occur, such as the mobile robot being picked up and moved by the user. Because the positioning of the mobile robot cannot then be corrected, the accuracy of the environment map constructed by the mobile robot suffers.
  • The embodiments of the present invention provide a positioning correction method and device, a storage medium, and an electronic device for a mobile robot, so as to at least solve the problem in the related art that the positioning of a mobile robot cannot be corrected when an emergency occurs, which leads to an inaccurate environment map being constructed by the mobile robot.
  • A positioning correction method for a mobile robot includes: when it is detected that the mobile robot is in a preset state, acquiring an overall environment map corresponding to the travel area of the mobile robot, wherein the overall environment map includes a first grid map, and different poses of the mobile robot correspond to different pixel points of the first grid map; acquiring a second grid map, wherein the second grid map indicates the current area where the mobile robot is located; matching the first grid map against the second grid map to determine, from the different poses, one or more first poses of the mobile robot in the current area; and performing position verification on the one or more first poses, taking the target pose that passes the position verification as the corrected pose of the mobile robot.
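  • The claimed flow, candidate poses obtained from grid-map matching followed by per-pose verification, can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the `Pose` type and the `verify` callback are assumptions introduced only for the example.

    ```python
    from dataclasses import dataclass

    # Hypothetical pose type: grid coordinates plus heading. The patent's
    # "pose" covers the robot's coordinates and inclination.
    @dataclass(frozen=True)
    class Pose:
        x: int
        y: int
        theta: float

    def correct_positioning(candidate_poses, verify):
        """Return the first candidate pose that passes position verification,
        or None if none passes (the patent then clears the overall map)."""
        for pose in candidate_poses:
            if verify(pose):
                return pose
        return None

    # Toy verification callback: only poses inside a 10x10 map pass.
    inside_map = lambda p: 0 <= p.x < 10 and 0 <= p.y < 10
    poses = [Pose(-1, 3, 0.0), Pose(4, 5, 1.57)]
    print(correct_positioning(poses, inside_map))  # Pose(x=4, y=5, theta=1.57)
    ```

  In the patent, the `verify` step is the landmark-photograph comparison described in the following paragraphs.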
  • Performing the position verification on the one or more first poses includes: for any one of the one or more first poses, instructing the mobile robot to move, according to that pose, to the position corresponding to a target landmark in the overall environment map; in the case that the mobile robot has moved to the position according to that pose, photographing the target landmark at the position to obtain a first image; and matching the first image against the second image of the target landmark in the overall environment map, and performing position verification on that pose according to the matching result.
  • Performing position verification on any pose according to a matching result includes: when the matching result indicates that the similarity between the first image and the second image exceeds a preset threshold, determining that the pose has passed the position verification; and when the matching result indicates that the similarity does not exceed the preset threshold, determining that the pose has failed the position verification.
  • Matching the first image with the second image of the target landmark in the overall environment map includes: acquiring at least one of the following items of first parameter information of the first image: first pixel information, a first object category of the first image, and a first score of the first object category; acquiring at least one of the following items of second parameter information of the second image: second pixel information, a second object category of the second image, and a second score of the second object category; and matching the first parameter information against the second parameter information, wherein, if the matching result satisfies a preset condition, it is determined that the pose passes the position verification. The preset condition includes at least one of the following: a first similarity between the first pixel information and the second pixel information is greater than a first preset threshold; a second similarity between the first object category and the second object category is greater than a second preset threshold; and the difference between the first score and the second score is less than a third preset threshold.
  • Acquiring the first object category of the first image and the first score of the first object category, and acquiring the second object category of the second image and the second score of the second object category, includes: inputting the first image into a trained artificial intelligence image model to obtain the first object category and its first score; and inputting the second image into the trained artificial intelligence image model to obtain the second object category and its second score. The second pixel information of the second image, the second object category of the second image, and the second score of the second object category are stored in the overall environment map and retrieved from it when needed.
  • After acquiring at least one of the following items of first parameter information of the first image: first pixel information, a first object category of the first image, and a first score of the first object category, and acquiring at least one of the following items of second parameter information of the second image: second pixel information, a second object category of the second image, and a second score of the second object category, the method further includes: in the event that the first object category and the second object category indicate the same object category, updating the overall environment map with the higher of the first score and the second score.
  • The method further includes: if the first object category is not found in the overall environment map, storing the first object category in the overall environment map; and, in the case that the first object category and the second object category indicate the same object category, saving the higher of the first score and the second score in the overall environment map.
  • Before acquiring the overall environment map corresponding to the travel area of the mobile robot, the method further includes determining that the mobile robot is in the preset state by: acquiring pose data of the mobile robot; and, in the case of an abnormality in the pose data, determining that the mobile robot needs positioning correction, so as to determine that the mobile robot is in the preset state.
  • An abnormality in the pose data is detected in one of the following ways: detecting that data obtained by a cliff sensor provided on the mobile robot are abnormal; detecting that data points obtained by a laser ranging sensor provided on the mobile robot are abnormal; detecting, via an inertial navigation sensor provided on the mobile robot, that the inclination angle of the mobile robot is abnormal; or detecting that the coordinate difference between two adjacent frames of object images obtained by the mobile robot is abnormal.
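  • The four abnormality triggers above can be collapsed into one predicate. A minimal sketch; the numeric thresholds below are illustrative placeholders, since the patent refers only to unnamed ("fifth", "sixth") preset thresholds.

    ```python
    def needs_positioning_correction(cliff_data_ok, lidar_points_ok,
                                     tilt_deg, frame_coord_jump,
                                     max_tilt_deg=15.0, max_jump=0.5):
        """Return True if any of the four pose-data abnormality conditions
        fires. Threshold values are assumptions, not from the patent."""
        return (not cliff_data_ok                    # cliff sensor data abnormal
                or not lidar_points_ok               # laser ranging points abnormal
                or abs(tilt_deg) > max_tilt_deg      # inertial sensor: tilt abnormal
                or frame_coord_jump > max_jump)      # jump between adjacent frames

    print(needs_positioning_correction(True, True, 3.0, 0.1))   # False
    print(needs_positioning_correction(True, True, 30.0, 0.1))  # True
    ```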
  • The method further includes: if none of the one or more first poses passes the position verification, clearing the overall environment map.
  • Acquiring the second grid map includes: scanning, by a laser ranging sensor provided on the mobile robot, the current area where the mobile robot is located, so as to acquire the second grid map constructed from the laser ranging sensor's measurements.
  • A positioning correction device for a mobile robot includes: a first acquisition module, configured to acquire, when it is detected that the mobile robot is in a preset state, the overall environment map corresponding to the travel area of the mobile robot, wherein the overall environment map includes a first grid map, and different poses of the mobile robot correspond to different pixel points of the first grid map; a second acquisition module, configured to acquire a second grid map, wherein the second grid map indicates the current area where the mobile robot is located; a matching module, configured to match the first grid map against the second grid map to determine, from the different poses, one or more first poses of the mobile robot in the current area; and a verification module, configured to perform position verification on the one or more first poses and to take the target pose that passes the position verification as the corrected pose of the mobile robot.
  • A computer-readable storage medium stores a computer program, wherein the computer program is configured to execute, when running, the above-mentioned positioning correction method for a mobile robot.
  • An electronic device comprises a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to execute the steps in any of the above method embodiments.
  • In summary: the overall environment map corresponding to the travel area of the mobile robot is obtained, wherein the overall environment map includes a first grid map and different poses of the mobile robot correspond to different pixel points of the first grid map; a second grid map indicating the current area where the mobile robot is located is obtained; the first grid map is matched against the second grid map to determine, from the different poses, one or more first poses of the mobile robot in the current area; and the target pose that passes position verification is taken as the corrected pose of the mobile robot. That is, when the mobile robot needs positioning correction, one or more first poses are first determined, and position verification on those poses then yields the corrected pose.
  • The above solution solves the problem in the related art that, when the mobile robot encounters an emergency, its positioning cannot be corrected, which leads to an inaccurate environment map; instead, the positioning of the mobile robot is corrected promptly after the emergency occurs.
  • The invention has the following beneficial effect: after the mobile robot encounters an emergency, its positioning is corrected in time, so that the environment map constructed by the mobile robot is more accurate.
  • FIG. 1 is a block diagram of the hardware structure of a mobile robot used in a positioning correction method for a mobile robot according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of a method for positioning correction of a mobile robot according to an embodiment of the present invention
  • FIG. 3 is a structural block diagram of a positioning correction device for a mobile robot according to an embodiment of the present invention.
  • FIG. 4 is another structural block diagram of a positioning correction device for a mobile robot according to an embodiment of the present invention.
  • FIG. 1 is a block diagram of the hardware structure of a mobile robot to which a positioning correction method for a mobile robot according to an embodiment of the present invention is applied.
  • The mobile robot may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microcontroller (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data. The mobile robot described above may also include a transmission device 106 and an input/output device 108 for communication functions.
  • FIG. 1 is only a schematic diagram and does not limit the structure of the above-mentioned mobile robot. The mobile robot may include more or fewer components than those shown in FIG. 1, or have a different configuration with equivalent or greater functionality.
  • The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the positioning correction method for a mobile robot in the embodiment of the present invention. The processor 102 runs the computer program stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the above-mentioned method.
  • Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • memory 104 may further include memory located remotely from processor 102, which may be connected to the mobile robot through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • Transmission means 106 are used to receive or transmit data via a network.
  • the specific example of the above-mentioned network may include the wireless network provided by the communication provider of the mobile robot.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC for short), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet in a wireless manner.
  • FIG. 2 is a flowchart of a positioning correction method for a mobile robot according to an embodiment of the present invention, and the flow includes the following steps:
  • Step S202: In the case of detecting that the mobile robot is in a preset state, obtain an overall environment map corresponding to the travel area of the mobile robot, wherein the overall environment map includes a first grid map, and different poses of the mobile robot correspond to different pixel points of the first grid map;
  • The overall environment map, constructed by the mobile robot under normal conditions for its travel area, is obtained from the memory of the mobile robot or from a remote server. The mobile robot constructs the overall environment map under normal conditions as follows: a grid map is built by the laser ranging sensor provided on the mobile robot, and the current pose of the mobile robot is jointly determined by the laser ranging sensor and the inertial navigation sensor, wherein the pose includes the coordinates and inclination of the mobile robot; the visual sensor provided on the mobile robot collects images, the collected images are input into a trained artificial intelligence image model, and the object categories and scores in the images are obtained in real time and saved to the environment map. It should be noted that the above is only one example of constructing an overall environment map, which is not limited in this embodiment.
  • Step S204 acquiring a second grid map, wherein the second grid map is used to indicate the current area where the mobile robot is located;
  • the current area where the mobile robot is located is scanned by the laser ranging sensor provided on the mobile robot to obtain the second grid map constructed by the laser ranging sensor.
  • Step S206 Matching the first grid map and the second grid map to determine one or more first poses of the mobile robot in the current area from the different poses;
  • Step S208: Perform position verification on the one or more first poses, and take the target pose that passes the position verification as the corrected pose of the mobile robot.
  • Through the above steps, the overall environment map corresponding to the travel area of the mobile robot is obtained; the second grid map indicating the current area where the mobile robot is located is obtained; the first grid map is matched against the second grid map to determine one or more first poses of the mobile robot in the current area; position verification is performed on the one or more first poses; and the target pose that passes the verification is taken as the corrected pose of the mobile robot. That is, when the mobile robot needs positioning correction, one or more first poses are first determined, and the position verification of those poses then yields the corrected pose. This solves the problem that the positioning of the mobile robot cannot be corrected, which leads to inaccurate environment maps, and ensures that the positioning is corrected in time after an emergency occurs.
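  • The matching of step S206 can be sketched as a brute-force occupancy correlation that slides the local grid over the global grid. This is only one plausible realization, assumed for illustration (the patent does not specify the matching algorithm), and it searches translations only, not rotations.

    ```python
    def candidate_offsets(global_grid, local_grid, min_score=3):
        """Slide the local (current-area) occupancy grid over the global grid
        and return every (row, col) offset where at least `min_score`
        occupied cells agree. Each surviving offset corresponds to one
        candidate 'first pose' of the robot in the global map."""
        gh, gw = len(global_grid), len(global_grid[0])
        lh, lw = len(local_grid), len(local_grid[0])
        hits = []
        for r in range(gh - lh + 1):
            for c in range(gw - lw + 1):
                score = sum(
                    1
                    for i in range(lh) for j in range(lw)
                    if local_grid[i][j] == 1 and global_grid[r + i][c + j] == 1)
                if score >= min_score:
                    hits.append((r, c, score))
        return hits

    # A 6x6 global map with a rectangular obstacle, and a 4x5 local view
    # cut from it at offset (1, 1); the best match recovers that offset.
    global_grid = [[1 if 2 <= r <= 3 and 2 <= c <= 4 else 0 for c in range(6)]
                   for r in range(6)]
    local_grid = [row[1:6] for row in global_grid[1:5]]
    print(max(candidate_offsets(global_grid, local_grid), key=lambda h: h[2]))
    # (1, 1, 6)
    ```

  Several offsets can clear the threshold at once, which is exactly why the method keeps "one or more" first poses and disambiguates them with landmark verification.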
  • There are many ways to implement the above step S208; the embodiment of the present invention provides one implementation, specifically:
  • Step 1: For any one of the one or more first poses, instruct the mobile robot to move, according to that pose, to the position corresponding to the target landmark in the overall environment map;
  • The mobile robot moves to the target position according to any one of the first poses, where the target position is the position corresponding to the target landmark. The target landmark may be the landmark closest to the mobile robot in the overall environment map, or a preset landmark; this is not limited in this embodiment of the present invention. A landmark may be a preset distinctive object, such as a sofa or a refrigerator.
  • Step 2: In the case that the mobile robot has moved to the position according to that pose, the mobile robot photographs the target landmark at the position to obtain a first image;
  • The mobile robot photographs the target landmark to obtain the first image corresponding to it. A visual sensor is arranged on the mobile robot, and the target landmark is photographed by the visual sensor to obtain the above-mentioned first image.
  • Step 3 Acquire at least one of the following first parameter information of the first image: first pixel information, a first object category of the first image, and a first score of the first object category;
  • The first image is input into the trained artificial intelligence image model to obtain the first object category of the first image and the first score of that category. It should be noted that the artificial intelligence image model can be trained on images collected from the network together with images obtained by the mobile robot. After the mobile robot collects the first image through the vision sensor, the first image is input into the trained model, and the first object category and first score are obtained in real time; the pixel information of the first image is obtained from the image itself. The first score can be understood as the model's confidence in the recognized object: for example, if the first image is input into the model and the object is recognized as a chair with a score of 90, there is a 90% chance that the object in the first image is a chair.
  • Step 4 Acquire at least one of the following second parameter information of the second image: second pixel information, a second object category of the second image, and a second score of the second object category;
  • The second pixel information of the second image corresponding to the target landmark, the second object category of the second image, and the second score of that category are acquired from the overall environment map. The second image was previously input into the trained artificial intelligence image model to obtain the second object category and the second score; the second object category, the second score, and the second pixel information obtained from the second image were then stored in the overall environment map. In other words, the second image collected by the vision sensor is input into the trained model, the second object category and second score are obtained in real time, and they are saved to the overall environment map in real time. The first pixel information and the second pixel information may be obtained from the first image and the second image, respectively.
  • Step 5: Match the first parameter information against the second parameter information, wherein, in the case that the matching result satisfies a preset condition, it is determined that the pose passes the position verification;
  • the preset condition includes at least one of the following: a first similarity between the first pixel information and the second pixel information is greater than a first preset threshold, the first object category and the second The second similarity of the object category is greater than a second preset threshold, and the difference between the first score and the second score is less than a third preset threshold.
  • The first pixel information is matched against the second pixel information, and/or the first object category against the second object category, and/or the first score against the second score, to determine whether the first similarity between the first and second pixel information is greater than the first preset threshold, whether the second similarity between the first and second object categories is greater than the second preset threshold, and whether the difference between the first and second scores is less than the third preset threshold. If, depending on which parameters were acquired, the first similarity is greater than the first preset threshold, and/or the second similarity is greater than the second preset threshold, and/or the score difference is less than the third preset threshold, the pose is determined to pass the position verification.
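  • The "at least one of" preset condition can be written down directly. In this sketch the threshold values are illustrative, and "category similarity" is simplified to exact label equality, both of which are assumptions rather than the patent's definitions.

    ```python
    def pose_passes_verification(pixel_similarity, first_category, second_category,
                                 first_score, second_score,
                                 t_pixel=0.8, t_score=15):
        """The pose passes if at least one of the three conditions holds.
        Thresholds are illustrative; category similarity is reduced to
        exact equality for this sketch."""
        return (pixel_similarity > t_pixel                       # condition 1
                or first_category == second_category             # condition 2 (simplified)
                or abs(first_score - second_score) < t_score)    # condition 3

    print(pose_passes_verification(0.9, "chair", "sofa", 40, 90))  # True (pixels match)
    print(pose_passes_verification(0.1, "chair", "sofa", 40, 90))  # False (nothing matches)
    ```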
  • The above-mentioned one or more first poses can be understood as an initial set of poses, and the selection of any pose can be understood as a traversal of the first poses in that initial set. Specifically, the first poses in the initial set are traversed in turn: the mobile robot is instructed to move, according to the corresponding first pose, to the position corresponding to the target landmark in the overall environment map; when the mobile robot has moved to the position according to that first pose, it photographs the target landmark at the position to obtain a first image; the first image is matched against the second image of the target landmark in the overall environment map; and the pose's position is verified according to the matching result, whereby it is judged whether that first pose satisfies the conditions for the corrected pose.
  • The first pixel information, the first object category, and the first score of the first image are acquired, and the second pixel information, the second object category, and the second score of the second image are acquired. In the case that the first object category and the second object category indicate the same object category, the higher of the first score and the second score is updated in the overall environment map. For example, the first object category corresponding to the first image is compared with the second object category corresponding to the second image; if the first score is greater than the second score, the first pixel information, the first object category, and the first score corresponding to the first image are stored in the overall environment map.
  • If the first object category is not found in the overall environment map, the first object category is saved in the overall environment map. For example, after the position verification is passed, the verified pose is taken as the target pose; the mobile robot moves according to the target pose and continues to scan its travel area; the mobile robot obtains the first image, determines the first object category corresponding to it, and checks whether that object category exists in the overall environment map; if it does not, the first object category is saved in the overall environment map. This embodiment is also applicable where the positioning of the mobile robot does not need to be corrected; the embodiment of the present invention does not limit the usage scenarios.
  • The first object category is saved in the overall environment map; where the first object category and the second object category indicate the same object category, the higher of the first score and the second score is updated in the overall environment map.
  • For example, the second object category corresponding to the second image saved in the overall map is a refrigerator, and the second score is 80 points.
  • If the first object category corresponding to the first image obtained this time is also a refrigerator, and the first score is 90 points, then the first pixel information, the first object category, and the first score of 90 points corresponding to the first image are saved to the overall environment map.
  • Alternatively, the first object category corresponding to the first image obtained this time is a sofa and the first score is 80 points, but no second image with the object category of sofa is detected in the overall map.
  • In this case, the first pixel information corresponding to the first image, the first object category (sofa), and the first score of 80 points are saved to the overall environment map.
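The score-keeping rule in the refrigerator/sofa example above can be sketched as follows. This is a minimal illustration assuming a simple dictionary-based environment map; the names (`update_landmark`, the record layout) are hypothetical and not taken from the embodiment.

```python
# Minimal sketch of the update rule above: a landmark observation is added
# when its category is new, and replaces the stored one only when its
# reliability score is higher. The dict layout is an assumption.

def update_landmark(env_map: dict, category: str, pixels: bytes, score: float) -> None:
    existing = env_map.get(category)
    if existing is None or score > existing["score"]:
        env_map[category] = {"pixels": pixels, "score": score}

# The refrigerator/sofa example from the text:
env_map = {"refrigerator": {"pixels": b"old-view", "score": 80.0}}
update_landmark(env_map, "refrigerator", b"new-view", 90.0)  # 90 > 80: replaced
update_landmark(env_map, "sofa", b"sofa-view", 80.0)         # new category: added
```

With these two calls, the refrigerator entry holds the 90-point view and the sofa entry is newly present, matching the two cases described above.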
  • Before acquiring the overall environment map corresponding to the travel area of the mobile robot, it is determined that the mobile robot is in the preset state by the following method: acquiring pose data of the mobile robot; in the case that the pose data is abnormal, determining that the mobile robot needs positioning correction, so as to determine that the mobile robot is in the preset state.
  • The mobile robot monitors the acquired pose data in real time, and when the acquired pose data exceeds the preset threshold, it is determined that the positioning of the mobile robot needs to be corrected.
  • It is determined that the pose data is abnormal in one of the following ways: detecting that the data acquired by a cliff sensor provided on the mobile robot is abnormal; detecting that the data points acquired by the laser ranging sensor are abnormal; detecting, through the inertial navigation sensor provided on the mobile robot, that the inclination angle of the mobile robot is abnormal; or detecting that the coordinate difference between two adjacent frames of object images obtained by the mobile robot is abnormal.
  • For example, when the inertial navigation unit detects that the inclination angle of the mobile robot exceeds the fifth preset threshold, or the coordinate difference between two adjacent frames of object images acquired by the mobile robot is greater than the sixth preset threshold, this indicates that the mobile robot is in an abnormal state and its positioning needs to be corrected.
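The abnormality checks listed above can be condensed into a single predicate. The threshold values and parameter names below are illustrative assumptions for the sketch, not values from the patent.

```python
# Illustrative sketch of the pose-data abnormality checks: any single
# abnormal signal is enough to trigger positioning correction.

def pose_data_abnormal(cliff_abnormal: bool,
                       lidar_point_count: int,
                       tilt_deg: float,
                       frame_coord_jump_m: float,
                       min_points: int = 50,        # hypothetical threshold
                       max_tilt_deg: float = 15.0,  # hypothetical fifth threshold
                       max_jump_m: float = 0.5      # hypothetical sixth threshold
                       ) -> bool:
    return (cliff_abnormal                          # cliff sensor data abnormal
            or lidar_point_count < min_points       # too few laser ranging points
            or tilt_deg > max_tilt_deg              # inclination angle abnormal
            or frame_coord_jump_m > max_jump_m)     # adjacent-frame coordinate jump

# Excessive tilt alone marks the robot abnormal; nominal readings do not.
assert pose_data_abnormal(False, 200, 20.0, 0.1)
assert not pose_data_abnormal(False, 200, 2.0, 0.1)
```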
  • The overall environment map is cleared if none of the one or more first poses passes the position verification.
  • In this case, none of the one or more first poses is the current pose of the mobile robot, that is, the positioning of the mobile robot is not accurate at this time, so the overall environment map is cleared in order to determine one or more first poses of the mobile robot again.
  • The specific steps of the positioning correction method of the mobile robot are as follows:
  • Step A: During the normal operation of the mobile robot, an environment map is constructed.
  • A grid map is constructed by the laser ranging sensor set on the mobile robot, and the current pose of the mobile robot is jointly determined by the laser ranging sensor and the inertial navigation sensor, where the pose includes the coordinates and the inclination of the mobile robot, which can also be understood as the position and posture of the mobile robot. Images are collected by the vision sensor set on the mobile robot and input into the trained artificial intelligence image model, and the object category and reliability score in each image are obtained in real time.
  • The artificial intelligence image model can be trained on images collected from the network and images collected by the vision sensor.
  • In the saved environment map, search for the local environment map within a preset threshold of the mobile robot, such as the environment map within 2 m of the mobile robot. If the currently detected object category does not exist in the local map, the obtained image, object category, and reliability score are added to the environment map; if it is a known object category in the local map, the reliability score of the obtained image is compared with that of the image in the local map, and the object information with the higher reliability score is stored in the environment map. The environment map stores at least one of the following: grid map, mobile robot pose, object image, object category, and reliability score.
  • Step B: Obtain the current running state of the mobile robot, where the running state includes a normal state and an abnormal state.
  • The current running state of the mobile robot is determined to be abnormal by one of the following methods: detecting that the data obtained by the cliff sensor set on the mobile robot is abnormal, such as the mobile robot being moved; detecting that the data points obtained by the laser ranging sensor are abnormal, such as too few data points being collected; the inertial navigation unit of the mobile robot detecting that the tilt of the mobile robot exceeds a first threshold; or the positioning difference between images obtained by the mobile robot in two adjacent frames exceeding a second threshold. In these cases, it is necessary to correct the positioning of the mobile robot; otherwise, the state is normal.
  • Step C: Correct the positioning of the mobile robot when the mobile robot is in an abnormal state.
  • A temporary grid map is constructed by the laser ranging sensor set on the mobile robot, and the temporary grid map is compared with the saved environment map.
  • The grid maps (equivalent to the first grid map and the second grid map in the above embodiment) are matched, one or more current poses are determined, and the one or more current poses are used as a candidate set of initial pose values; the final current pose is determined from this candidate set.
  • Traverse the candidate set of initial pose values: assume each initial pose value in turn is the current pose, and attempt to navigate the mobile robot to the position of a specific road sign in the saved environment map, where the specific road sign (equivalent to the target road sign in the above embodiment) can be the road sign nearest to the mobile robot or a preset iconic road sign.
  • If the mobile robot can reach the position corresponding to the specific road sign, the first image information from the current vision sensor, together with the first object category and first object score recognized by the artificial intelligence image model, are compared with the second image information corresponding to the road sign in the saved environment map and the second object category and second object score corresponding to that second image information. If the image similarity, category similarity, and score similarity are all higher than the preset thresholds, the current pose is the final current pose; if the road-sign position cannot be reached, or the image similarity, category similarity, or score similarity is lower than the preset threshold, continue to traverse the next pose in the candidate set.
  • If none of the poses in the candidate set of initial pose values meets the preset conditions, it is considered that the mobile robot has failed to relocate, and the environment map information is cleared.
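Step C above can be condensed into the following relocalization loop. This is a sketch under assumed interfaces: `navigate_to_landmark` and `observation_matches_map` are placeholders standing in for the robot's navigation and image-matching subsystems, which the embodiment does not specify as code.

```python
from typing import Callable, Optional, Sequence, Tuple

Pose = Tuple[float, float, float]  # (x, y, heading) -- assumed representation

def correct_pose(candidates: Sequence[Pose],
                 navigate_to_landmark: Callable[[Pose], bool],
                 observation_matches_map: Callable[[Pose], bool],
                 clear_map: Callable[[], None]) -> Optional[Pose]:
    """Traverse the candidate set of initial pose values (Step C)."""
    for pose in candidates:
        if not navigate_to_landmark(pose):      # road-sign position unreachable
            continue                            # try the next candidate
        if observation_matches_map(pose):       # image/category/score all match
            return pose                         # final current pose found
    clear_map()                                 # every candidate failed:
    return None                                 # relocalization failed
```

Keeping navigation and matching as injected callables makes the traversal order and the map-clearing fallback easy to exercise in isolation, without real sensors.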
  • In summary, the overall environment map corresponding to the travel area of the mobile robot is obtained, and the second grid map is obtained; the first grid map and the second grid map are matched to determine one or more first poses of the mobile robot in the current area from the different poses; position verification is performed on the one or more first poses, and the target pose that passes the position verification among the one or more poses is used as the corrected pose of the mobile robot. That is, when the mobile robot needs to perform positioning correction, one or more first poses of the mobile robot can be determined first, and then position verification of the one or more first poses is carried out to determine the position and posture of the mobile robot after the positioning correction.
  • In this embodiment, a positioning correction device for a mobile robot is also provided. The device is used to implement the above-mentioned embodiments and preferred implementations; what has already been described will not be repeated.
  • As used herein, the term "module" may be a combination of software and/or hardware that implements a predetermined function.
  • Although the apparatus described in the following embodiments is preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
  • FIG. 3 is a structural block diagram of a positioning correction device for a mobile robot according to an embodiment of the present invention, as shown in FIG. 3 , including:
  • The first acquisition module 32 is configured to acquire an overall environment map corresponding to the travel area of the mobile robot when it is detected that the mobile robot is in a preset state, wherein the overall environment map includes: a first grid map, and the different poses of the mobile robot corresponding to different pixels of the first grid map;
  • The overall environment map corresponding to the travel area, constructed by the mobile robot under normal conditions, is obtained from the memory of the mobile robot or from a remote server.
  • The method by which the mobile robot constructs the overall environment map under normal conditions is: constructing a grid map through the laser ranging sensor set on the mobile robot, and jointly locating the current pose of the mobile robot through the laser ranging sensor and inertial navigation sensor, where the pose includes the coordinates and inclination of the mobile robot.
  • The vision sensor set on the mobile robot collects images, the images are input into the trained artificial intelligence image model, and the object category and score in each image are obtained in real time.
  • The categories and scores are stored in the environment map. It should be noted that the above is only an example of constructing an overall environment map, which is not limited in this embodiment.
  • The second acquisition module 34 is configured to obtain a second grid map, wherein the second grid map is used to indicate the current area where the mobile robot is located;
  • The current area where the mobile robot is located is scanned by the laser ranging sensor provided on the mobile robot to obtain the second grid map constructed by the laser ranging sensor.
  • The matching module 36 is configured to match the first grid map and the second grid map to determine one or more first poses of the mobile robot in the current area from the different poses;
  • The verification module 38 is configured to perform position verification on the one or more first poses, and use the target pose that has passed the position verification among the one or more poses as the corrected pose of the mobile robot.
  • Through the above modules, the overall environment map corresponding to the travel area of the mobile robot is obtained, and the second grid map is obtained, wherein the second grid map is used to indicate the current area where the mobile robot is located; the first grid map and the second grid map are matched to determine one or more first poses of the mobile robot in the current area from the different poses; position verification is performed on the one or more first poses, and the target pose that has passed the position verification among the one or more poses is used as the corrected pose of the mobile robot. That is, when the mobile robot needs to perform positioning correction, one or more first poses of the mobile robot can be determined first, and then position verification of the one or more first poses is performed to determine the corrected pose of the mobile robot.
  • This solves the problem in the related art that, when an emergency occurs, the positioning of the mobile robot cannot be corrected, which leads to inaccurate environment maps constructed by the mobile robot, and thus enables the mobile robot's positioning to be corrected in time after an emergency occurs.
  • In the above-mentioned apparatus, the verification module is further configured to instruct the mobile robot to move, according to any one of the one or more first poses, to the position corresponding to the target road sign in the overall environment map; when the mobile robot reaches the position according to the pose, the mobile robot photographs the target road sign at the position to obtain a first image; the module is also used for matching the first image with the second image of the target road sign in the overall environment map, and performing position verification on the pose according to the matching result.
  • That is, the mobile robot moves to the target position with any one of the first poses, where the pose is any one of the multiple first poses, the target position is the position corresponding to the target road sign, and the target road sign can be the road sign closest to the mobile robot in the overall environment map or a preset road sign, which is not limited in this embodiment of the present invention.
  • After the mobile robot moves to the position corresponding to the target road sign in any of the first poses, the vision sensor provided on the mobile robot photographs the target road sign to obtain a first image corresponding to the target road sign; a second image corresponding to the target road sign is obtained from the overall environment map, and the pose is verified by matching the first image and the second image. Road signs can be preset iconic objects, such as sofas, refrigerators, etc.
  • The matching module is further configured to determine that the pose passes the position verification when the matching result indicates that the similarity between the first image and the second image exceeds a preset threshold, and to determine that the pose fails the position verification when the matching result indicates that the similarity does not exceed the preset threshold.
  • For example, if the preset threshold is 60% and the similarity between the first image and the second image is 70%, it is determined that the pose passes the position verification; if the similarity is 30%, it is determined that the pose fails the position verification.
  • The second acquisition module is further configured to acquire at least one of the following items of first parameter information of the first image: first pixel information, the first object category of the first image, and the first score of the first object category; and to acquire at least one of the following items of second parameter information of the second image: second pixel information, the second object category of the second image, and the second score of the second object category.
  • The first parameter information and the second parameter information are matched, and when the matching result satisfies a preset condition, it is determined that the pose passes position verification. The preset condition includes at least one of the following: the first similarity between the first pixel information and the second pixel information is greater than a first preset threshold; the second similarity between the first object category and the second object category is greater than a second preset threshold; and the difference between the first score and the second score is less than a third preset threshold.
  • Specifically, the first pixel information and the second pixel information, and/or the first object category and the second object category, and/or the first score and the second score are matched to determine whether the first similarity between the first pixel information and the second pixel information is greater than the first preset threshold, whether the second similarity between the first object category and the second object category is greater than the second preset threshold, and whether the difference between the first score and the second score is less than the third preset threshold. When the first similarity is greater than the first preset threshold, and/or the second similarity is greater than the second preset threshold, and/or the score difference is less than the third preset threshold, the pose is determined to pass position verification.
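The three-part preset condition above can be expressed as a predicate. Note that the text allows the condition to include *at least one* of the three checks; the sketch below conjoins all three, and the similarity measures and threshold values are illustrative assumptions, not values from the patent.

```python
# Sketch of the three-part matching condition used for position verification.

def passes_position_verification(pixel_similarity: float,
                                 category_similarity: float,
                                 first_score: float,
                                 second_score: float,
                                 first_threshold: float = 0.6,    # assumed
                                 second_threshold: float = 0.8,   # assumed
                                 third_threshold: float = 15.0    # assumed
                                 ) -> bool:
    return (pixel_similarity > first_threshold          # first similarity check
            and category_similarity > second_threshold  # second similarity check
            and abs(first_score - second_score) < third_threshold)  # score gap

assert passes_position_verification(0.7, 0.9, 90.0, 80.0)      # all checks pass
assert not passes_position_verification(0.3, 0.9, 90.0, 80.0)  # pixel similarity too low
```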
  • The above-mentioned one or more first poses can be understood as an initial set of poses, and the selection of any pose can be understood as being realized by traversing the first poses in the initial set. Specifically, the first poses in the initial set are traversed in turn to instruct the mobile robot to move to the position corresponding to the target landmark in the overall environment map according to the corresponding first pose; when the mobile robot reaches the position according to the first pose, the mobile robot takes a picture of the target road sign at that position to obtain a first image; the first image is then matched with the second image of the target landmark in the overall environment map, the pose is verified according to the matching result, and it is thereby judged whether the first pose in the initial set satisfies the conditions for the corrected pose.
  • FIG. 4 is another structural block diagram of a positioning correction device for a mobile robot according to an embodiment of the present invention.
  • The above device further includes an input module 40 for inputting the first image into a trained artificial intelligence image model to obtain the first object category of the first image and the first score of the first object category; and for inputting the second image into the trained artificial intelligence image model to obtain the second object category of the second image and the second score of the second object category. The second pixel information of the second image, the second object category, and the second score of the second object category are saved in the overall environment map, and the second pixel information, the second object category, and the second score are obtained from the overall environment map.
  • The artificial intelligence image model can be obtained by training on images collected from the network and images obtained by the mobile robot. After the mobile robot collects the first image through the vision sensor, the first image is input into the trained artificial intelligence image model, and the first object category and first score in the first image are obtained in real time. Under normal working conditions, the mobile robot collects the second image through the vision sensor, inputs the second image into the trained artificial intelligence image model, obtains the second object category and second score in the second image in real time, and saves the second image in the overall environment map in real time.
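The recognize-and-store flow described above might look like the following sketch, where `classify` is a stand-in for the trained artificial intelligence image model (the patent does not specify a model API, so the interface here is an assumption):

```python
from typing import Callable, Dict, Tuple

def recognize_and_store(image: bytes,
                        classify: Callable[[bytes], Tuple[str, float]],
                        env_map: Dict[str, dict]) -> Tuple[str, float]:
    """Run the (assumed) model and save pixel data, category and score together."""
    category, score = classify(image)
    env_map[category] = {"pixels": image, "score": score}
    return category, score

# Usage with a stub model in place of the trained one:
env_map: Dict[str, dict] = {}
recognize_and_store(b"frame-0", lambda img: ("refrigerator", 90.0), env_map)
```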
  • In an exemplary embodiment, the above-mentioned apparatus further includes a saving module 42, configured to update the overall environment map with the higher of the first score and the second score when the first object category and the second object category indicate the same object category.
  • When the first object category corresponding to the first image and the second object category corresponding to the second image indicate the same object category, the magnitudes of the first score and the second score are compared; if the first score is greater than the second score, the first pixel information, the first object category, and the first score corresponding to the first image are stored in the overall environment map.
  • If the first object category is not detected in the overall environment map, the first object category is saved in the overall environment map. For example, after the position verification is passed, the pose is taken as the target pose, the mobile robot moves according to the target pose and continues to scan its travel area; the mobile robot obtains the first image, determines the first object category corresponding to the first image, and determines whether the first object category already exists in the overall environment map; if it is not detected, the first object category is saved in the overall environment map. This embodiment is also applicable to situations where the positioning of the mobile robot does not need to be corrected, and the embodiment of the present invention does not limit the usage scenario.
  • The saving module 42 is further configured to, in a state where the positioning of the mobile robot does not need to be corrected, save the first object category in the overall environment map when the first object category is not detected in the overall environment map, and to update the overall environment map with the higher of the first score and the second score when the first object category and the second object category indicate the same object category.
  • For example, the second object category corresponding to the second image saved in the overall map is a refrigerator, and the second score is 80 points.
  • If the first object category corresponding to the first image obtained this time is also a refrigerator, and the first score is 90 points, then the first pixel information, the first object category, and the first score of 90 points corresponding to the first image are saved to the overall environment map.
  • Alternatively, the first object category corresponding to the first image obtained this time is a sofa and the first score is 80 points, but no second image with the object category of sofa is detected in the overall map.
  • In this case, the first pixel information corresponding to the first image, the first object category (sofa), and the first score of 80 points are saved to the overall environment map.
  • The determining module is further configured to determine that the mobile robot is in the preset state by the following method: acquiring pose data of the mobile robot; in the case that the pose data is abnormal, determining that the mobile robot needs positioning correction, so as to determine that the mobile robot is in the preset state.
  • The mobile robot monitors the acquired pose data in real time, and when the acquired pose data exceeds the preset threshold, it is determined that the positioning of the mobile robot needs to be corrected.
  • The determining module is further configured to determine that the pose data is abnormal in one of the following ways: detecting that the data acquired by the cliff sensor provided on the mobile robot is abnormal; detecting that the data points acquired by the laser ranging sensor are abnormal; detecting, through the inertial navigation sensor set on the mobile robot, that the inclination angle of the mobile robot is abnormal; or detecting that the coordinate difference between two adjacent frames of object images acquired by the mobile robot is abnormal.
  • For example, when the inertial navigation unit detects that the inclination angle of the mobile robot exceeds the fifth preset threshold, or the coordinate difference between two adjacent frames of object images acquired by the mobile robot is greater than the sixth preset threshold, this indicates that the mobile robot is in an abnormal state and its positioning needs to be corrected.
  • In an exemplary embodiment, the above-mentioned apparatus further includes an emptying module 44 for clearing the overall environment map when none of the one or more first poses passes the position verification.
  • In this case, none of the one or more first poses is the current pose of the mobile robot, that is, the positioning of the mobile robot is not accurate at this time, so the overall environment map is cleared in order to determine one or more first poses of the mobile robot again.
  • An embodiment of the present invention further provides a storage medium, where the storage medium includes a stored program, wherein the above-mentioned program executes any one of the above-mentioned methods when running.
  • the above-mentioned storage medium may be configured to store program codes for executing the following steps:
  • When it is detected that the mobile robot is in a preset state, acquire an overall environment map corresponding to the travel area of the mobile robot, wherein the overall environment map includes: a first grid map, and the different poses of the mobile robot corresponding to different pixels of the first grid map;
  • An embodiment of the present invention also provides an electronic device, comprising a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to execute the following steps through a computer program:
  • When it is detected that the mobile robot is in a preset state, acquire an overall environment map corresponding to the travel area of the mobile robot, wherein the overall environment map includes: a first grid map, and the different poses of the mobile robot corresponding to different pixels of the first grid map;
  • S4: Perform position verification on the one or more first poses, and use the target pose among the one or more poses that has passed the position verification as the corrected pose of the mobile robot.
  • the above-mentioned storage medium may include but is not limited to: a USB flash drive, a read-only memory (Read-Only Memory, referred to as ROM), a random access memory (Random Access Memory, referred to as RAM), Various media that can store program codes, such as removable hard disks, magnetic disks, or optical disks.
  • Embodiments of the present invention also provide a mobile robot, which includes: a main body, a driving assembly, a laser ranging sensor, a vision sensor, and a controller.
  • the drive assembly is used to move the robot around the work area.
  • The laser ranging sensor is used to build grid maps, and the vision sensor is used to capture images.
  • The controller is used to: obtain the overall environment map corresponding to the travel area of the mobile robot when it is detected that the mobile robot needs positioning correction; scan the current area where the mobile robot is located with the laser ranging sensor to obtain the second grid map constructed by the laser ranging sensor; match the first grid map and the second grid map to determine one or more first poses of the mobile robot in the current area from the different poses; and perform position verification on the one or more first poses, using the target pose that passes the position verification among the one or more poses as the corrected pose of the mobile robot.
  • The above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in a different order than described here; alternatively, they may be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module.
  • In this way, the present invention is not limited to any particular combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A positioning correction method and device for a mobile robot, a storage medium, and an electronic device. The positioning correction method for a mobile robot includes: when it is detected that the mobile robot is in a preset state, acquiring an overall environment map corresponding to the travel area of the mobile robot (S202); acquiring a second grid map, wherein the second grid map is used to indicate the current area where the mobile robot is located (S204); matching the first grid map and the second grid map to determine, from different poses, one or more first poses of the mobile robot in the current area (S206); and performing position verification on the one or more first poses, using the target pose that passes the position verification among the one or more poses as the corrected pose of the mobile robot (S208).

Description

Positioning Correction Method and Device for a Mobile Robot, Storage Medium, and Electronic Device — Technical Field
The present invention relates to the field of communications, and in particular to a positioning correction method and device for a mobile robot, a storage medium, and an electronic device.
Background Art
With the progress of science and technology and the development of artificial intelligence, intelligent robots have been applied in various fields, and the localization and map construction of mobile robots is a hot issue in the field of mobile robotics.
At present, a mobile robot perceives environmental information through an on-board laser ranging unit, vision sensor, inertial navigation unit, and the like, constructs an environment map by combining simultaneous localization and mapping technology, and performs autonomous positioning and navigation to execute tasks. In practical applications, however, the mobile robot may encounter unexpected situations such as being moved by a user; because the positioning of the mobile robot cannot be corrected, the accuracy of the environment map constructed by the mobile robot is low.
In the related art, no effective technical solution has yet been proposed for the problem that, when an unexpected situation occurs, the positioning of the mobile robot cannot be corrected, which in turn causes the environment map constructed by the mobile robot to be inaccurate.
Therefore, it is necessary to improve the related art to overcome the above-mentioned defects.
Summary of the Invention
Embodiments of the present invention provide a positioning correction method and device for a mobile robot, a storage medium, and an electronic device, so as to at least solve the problems in the related art that, when an unexpected situation occurs, the positioning of the mobile robot cannot be corrected, which causes the environment map constructed by the mobile robot to be inaccurate.
According to one embodiment of the present invention, a positioning correction method for a mobile robot is provided, including: when it is detected that the mobile robot is in a preset state, acquiring an overall environment map corresponding to the travel area of the mobile robot, wherein the overall environment map includes a first grid map and different poses of the mobile robot corresponding to different pixels of the first grid map; acquiring a second grid map, wherein the second grid map is used to indicate the current area where the mobile robot is located; matching the first grid map and the second grid map to determine, from the different poses, one or more first poses of the mobile robot in the current area; and performing position verification on the one or more first poses, using the target pose that passes the position verification among the one or more poses as the corrected pose of the mobile robot.
In an exemplary embodiment, performing position verification on the one or more first poses includes: for any one of the one or more first poses, instructing the mobile robot to move, according to the pose, to the position corresponding to the target road sign in the overall environment map; when the mobile robot reaches the position according to the pose, photographing the target road sign at the position by the mobile robot to obtain a first image; and matching the first image with a second image of the target road sign in the overall environment map, and performing position verification on the pose according to the matching result.
In an exemplary embodiment, performing position verification on the pose according to the matching result includes: when the matching result indicates that the similarity between the first image and the second image exceeds a preset threshold, determining that the pose passes the position verification; when the matching result indicates that the similarity between the first image and the second image does not exceed the preset threshold, determining that the pose fails the position verification.
In an exemplary embodiment, matching the first image with the second image of the target road sign in the overall environment map includes: acquiring at least one of the following items of first parameter information of the first image: first pixel information, a first object category of the first image, and a first score of the first object category; acquiring at least one of the following items of second parameter information of the second image: second pixel information, a second object category of the second image, and a second score of the second object category; and matching the first parameter information and the second parameter information, wherein, when the matching result satisfies a preset condition, it is determined that the pose passes position verification, the preset condition including at least one of the following: a first similarity between the first pixel information and the second pixel information is greater than a first preset threshold, a second similarity between the first object category and the second object category is greater than a second preset threshold, and a difference between the first score and the second score is less than a third preset threshold.
In an exemplary embodiment, acquiring the first object category of the first image and the first score of the first object category, and acquiring the second object category of the second image and the second score of the second object category, includes: inputting the first image into a trained artificial intelligence image model to obtain the first object category of the first image and the first score of the first object category; and inputting the second image into the trained artificial intelligence image model to obtain the second object category of the second image and the second score of the second object category, saving the second pixel information of the second image, the second object category of the second image, and the second score of the second object category in the overall environment map, and acquiring, from the overall environment map, the second pixel information of the second image, the second object category of the second image, and the second score of the second object category.
In an exemplary embodiment, after acquiring at least one of the items of first parameter information of the first image (first pixel information, the first object category of the first image, and the first score of the first object category) and at least one of the items of second parameter information of the second image (second pixel information, the second object category of the second image, and the second score of the second object category), the method further includes: when the first object category and the second object category indicate the same object category, updating the overall environment map with the higher of the first score and the second score.
In an exemplary embodiment, after acquiring the overall environment map corresponding to the travel area of the mobile robot, the method further includes: when the first object category is not detected in the overall environment map, saving the first object category in the overall environment map; when the first object category and the second object category indicate the same object category, updating the overall environment map with the higher of the first score and the second score.
In an exemplary embodiment, before acquiring the overall environment map corresponding to the travel area of the mobile robot, the method further includes determining that the mobile robot is in the preset state in the following way: acquiring pose data of the mobile robot; when the pose data is abnormal, determining that the mobile robot needs positioning correction, so as to determine that the mobile robot is in the preset state.
In an exemplary embodiment, it is determined that the pose data is abnormal in one of the following ways: detecting that the data acquired by a cliff sensor provided on the mobile robot is abnormal; detecting that the data points acquired by a laser ranging sensor provided on the mobile robot are abnormal; detecting, through an inertial navigation sensor provided on the mobile robot, that the inclination angle of the mobile robot is abnormal; detecting that the coordinate difference between two adjacent frames of object images acquired by the mobile robot is abnormal.
In an exemplary embodiment, after performing position verification on the one or more first poses, the method further includes: clearing the overall environment map when none of the one or more first poses passes the position verification.
In an exemplary embodiment, acquiring the second grid map includes: scanning the current area where the mobile robot is located with a laser ranging sensor provided on the mobile robot to acquire the second grid map constructed by the laser ranging sensor.
根据本发明的另一个实施例,提供了一种移动机器人的定位修正装置,所述装置包括:第一获取模块,用于在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;第二获取模块,用于获取第二栅格地图,其中,所述第二栅格地图用于指示所述移动机器人所在的当前区域;匹配模块,用于对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;验证模块,用于对所述一个或多个第一位姿进行位置验证,将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
根据本发明的又一实施例，还提供了一种计算机可读的存储介质，该计算机可读的存储介质中存储有计算机程序，其中，该计算机程序被设置为运行时执行上述移动机器人的定位修正方法。
根据本发明的又一个实施例,还提供了一种电子装置,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器被设置为运行所述计算机程序以执行上述任一项方法实施例中的步骤。
通过本发明，在检测到移动机器人处于预设状态的情况下，获取移动机器人的行进区域所对应的整体环境地图，其中，整体环境地图包括：第一栅格地图，移动机器人在第一栅格地图的不同像素点所对应的不同位姿；获取第二栅格地图，其中，所述第二栅格地图用于指示所述移动机器人所在的当前区域；对第一栅格地图和第二栅格地图进行匹配，以从不同位姿中确定移动机器人在当前区域的一个或多个第一位姿；对一个或多个第一位姿进行位置验证，将一个或多个位姿中通过位置验证的目标位姿作为移动机器人定位修正后的位姿，即在移动机器人需要进行定位修正的情况下，可以先确定移动机器人的一个或多个第一位姿，进而对一个或多个第一位姿进行位置验证，确定移动机器人定位修正后的位姿，采用上述方案，解决了相关技术中，移动机器人发生突发状况时，无法对移动机器人的定位进行修正，进而导致移动机器人构建的环境地图不准确等问题，进而使得移动机器人发生突发状况后，对移动机器人的定位进行了及时的修正。
本发明具有如下有益效果：移动机器人发生突发状况后，对移动机器人的定位进行了及时的修正，使得移动机器人构建的环境地图准确性更高。
附图说明
此处所说明的附图用来提供对本发明的进一步理解,构成本申请的一部分,本发明的示意性实施例及其说明用于解释本发明,并不构成对本发明的不当限定。在附图中:
图1是本发明实施例的一种移动机器人的定位修正方法的移动机器人的硬件结构框图;
图2是根据本发明实施例的移动机器人的定位修正方法的流程图;
图3是根据本发明实施例的移动机器人的定位修正装置的结构框图;
图4是根据本发明实施例的移动机器人的定位修正装置的另一结构框图。
具体实施方式
下文中将参考附图并结合实施例来详细说明本发明。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
需要说明的是,本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
本申请实施例所提供的方法实施例可以在移动机器人,或者类似的运算装置中执行。以运行在移动机器人上为例,图1是本发明实施例的一种移动机器人的定位修正方法的移动机器人的硬件结构框图。如图1所示,移动机器人可以包括一个或多个(图1中仅示出一个)处理器102(处理器102可以包括但不限于微处理器MCU或可编程逻辑器件FPGA等的处理装置)和用于存储数据的存储器104,在一个示例性实施例中,上述移动机器人还可以包括用于通信功能的传输设备106以及输入输出设备108。本领域普通技术人员可以理解,图1所示的结构仅为示意,其并不对上述移动机器人的结构造成限定。例如,移动机器人还可包括比图1中所示更多或者更少的组件,或者具有与图1所示等同功能或比图1所示功能更多的不同的配置。
存储器104可用于存储计算机程序，例如，应用软件的软件程序以及模块，如本发明实施例中的移动机器人的定位修正方法对应的计算机程序，处理器102通过运行存储在存储器104内的计算机程序，从而执行各种功能应用以及数据处理，即实现上述的方法。存储器104可包括高速随机存储器，还可包括非易失性存储器，如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中，存储器104可进一步包括相对于处理器102远程设置的存储器，这些远程存储器可以通过网络连接至移动机器人。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
传输装置106用于经由一个网络接收或者发送数据。上述的网络具体实例可包括移动机器人的通信供应商提供的无线网络。在一个实例中,传输装置106包括一个网络适配器(Network Interface Controller,简称为NIC),其可通过基站与其他网络设备相连从而可与互联网进行通讯。在一个实例中,传输装置106可以为射频(Radio Frequency,简称为RF)模块,其用于通过无线方式与互联网进行通讯。
在本实施例中提供了一种移动机器人的定位修正方法,应用于移动机器人中,图2是根据本发明实施例的移动机器人的定位修正方法的流程图,该流程包括如下步骤:
步骤S202:在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;
需要说明的是，在移动机器人的存储器或从远程服务器中获取所述移动机器人在正常情况下构建的行进区域所对应的整体环境地图，其中，所述移动机器人在正常情况下构建整体环境地图的方法为：通过移动机器人上设置的激光测距传感器构建栅格地图，通过激光测距传感器和惯性导航传感器共同定位移动机器人的当前位姿，其中，位姿包括移动机器人坐标和倾角；通过移动机器人上设置的视觉传感器采集图像，将视觉传感器采集到的图像输入到训练好的人工智能图像模型中，实时得到图像中的物体类别和得分，将栅格地图、移动机器人位姿、物体图像、物体类别和得分保存至环境地图中。需要说明的是，以上仅为构建整体环境地图的一个示例，本实施例并不对此进行限定。
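上述保存栅格地图、位姿、物体图像、物体类别和得分的整体环境地图，可用如下Python草图示意其数据组织方式（其中的类名、字段名均为本示例的假设，并非对实现方式的限定）：

```python
from dataclasses import dataclass, field

@dataclass
class Landmark:
    """环境地图中保存的一个路标条目。"""
    image: list      # 物体图像（此处以像素列表示意）
    category: str    # 人工智能图像模型识别出的物体类别
    score: float     # 该物体类别的可信度得分（0~100）
    position: tuple  # 路标对应的位置坐标

@dataclass
class EnvironmentMap:
    """整体环境地图：第一栅格地图、各像素点对应的机器人位姿及路标信息。"""
    grid: list = field(default_factory=list)       # 栅格地图（占据栅格）
    poses: dict = field(default_factory=dict)      # 像素点 -> 位姿（坐标和倾角）
    landmarks: dict = field(default_factory=dict)  # 物体类别 -> Landmark

env = EnvironmentMap()
env.poses[(10, 20)] = (1.0, 2.0, 0.0)  # 像素点(10, 20)对应的机器人位姿
env.landmarks["冰箱"] = Landmark(image=[0], category="冰箱", score=80, position=(1.0, 2.0))
print(env.landmarks["冰箱"].category)  # 冰箱
```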
步骤S204:获取第二栅格地图,其中,所述第二栅格地图用于指示所述移动机器人所在的当前区域;
具体的,通过所述移动机器人上设置的激光测距传感器对所述移动机器人所在的当前区域进行扫描,以获取所述激光测距传感器构建的第二栅格地图。
步骤S206:对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;
步骤S208:对所述一个或多个第一位姿进行位置验证,将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
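步骤S202~S208的整体流程可以用如下Python草图示意（其中的机器人接口、栅格匹配与位姿验证函数均为假设的占位实现，仅用于说明流程，并非本发明限定的实现方式）：

```python
def match_grids(first_grid, second_grid):
    # 假设的栅格匹配接口：返回一个或多个候选第一位姿（S206）
    return [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]

def verify_pose(robot, env_map, pose):
    # 假设的验证接口：此处示意第二个候选位姿通过位置验证（S208）
    return pose == (1.0, 0.0, 0.0)

class StubRobot:
    def scan_local_grid(self):
        return "第二栅格地图"  # 由激光测距传感器扫描当前区域构建（S204）

class StubMap:
    grid = "第一栅格地图"
    def __init__(self):
        self.cleared = False
    def clear(self):
        self.cleared = True

def correct_localization(robot, env_map):
    """定位修正主流程的示意实现。"""
    second_grid = robot.scan_local_grid()                # S204
    candidates = match_grids(env_map.grid, second_grid)  # S206
    for pose in candidates:                              # S208：遍历候选位姿
        if verify_pose(robot, env_map, pose):
            return pose                                  # 通过验证者即为修正后的位姿
    env_map.clear()                                      # 均未通过：清空整体环境地图
    return None

print(correct_localization(StubRobot(), StubMap()))  # (1.0, 0.0, 0.0)
```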
通过本发明实施例，在检测到移动机器人处于预设状态的情况下，获取移动机器人的行进区域所对应的整体环境地图；获取第二栅格地图，其中，所述第二栅格地图用于指示所述移动机器人所在的当前区域；对第一栅格地图和第二栅格地图进行匹配，以从不同位姿中确定移动机器人在当前区域的一个或多个第一位姿；对一个或多个第一位姿进行位置验证，将一个或多个位姿中通过位置验证的目标位姿作为移动机器人定位修正后的位姿，即在移动机器人需要进行定位修正的情况下，可以先确定移动机器人的一个或多个第一位姿，进而对一个或多个第一位姿进行位置验证，确定移动机器人定位修正后的位姿，采用上述方案，解决了相关技术中，移动机器人发生突发状况时，无法对移动机器人的定位进行修正，进而导致移动机器人构建的环境地图不准确等问题，进而使得移动机器人发生突发状况后，对移动机器人的定位进行了及时的修正。
上述步骤S208的实现方式有很多种,本发明实施例给出了一种实现方式,具体的:
步骤1:对于所述一个或多个第一位姿中的任一位姿,指示所述移动机器人按照所述任一位姿移动至所述整体环境地图中的目标路标所对应的位置;
也就是说，从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿后，移动机器人以第一位姿中的任一位姿移动到目标位置，所述任一位姿为多个第一位姿中的任意一个位姿，其中，目标位置为目标路标对应的位置，目标路标可以为整体环境地图中距离移动机器人最近的路标，也可以是预先设置的路标，本发明实施例对此不做限定。其中，路标可以是预先设置的标志性物体，例如沙发、冰箱等。
步骤2:在所述移动机器人按照所述任一位姿移动到所述位置的情况下,通过所述移动机器人在所述位置对所述目标路标进行拍照,以获取第一图像;
可以理解为,移动机器人以第一位姿中的任一位姿移动到目标路标对应的位置后,移动机器人为所述目标路标拍照以获取目标路标对应的第一图像,其中,所述移动机器人上设置有视觉传感器,即通过视觉传感器对目标路标进行拍照,进而得到上述第一图像。
步骤3:获取所述第一图像的以下至少之一第一参数信息:第一像素信息,所述第一图像的第一物体类别,以及所述第一物体类别的第一得分;
具体的，将所述第一图像输入到已训练好的人工智能图像模型中，以获取到所述第一图像的第一物体类别，以及所述第一物体类别的第一得分；需要说明的是，人工智能图像模型可以通过在网络上收集到的图像和移动机器人获取到的图像训练得到，移动机器人通过视觉传感器采集到第一图像后，将视觉传感器采集到的第一图像输入到训练好的人工智能图像模型中，实时得到第一图像中的第一物体类别和第一得分，其中，第一像素信息通过第一图像获取，第一得分可以理解为所述人工智能图像模型对第一图像中识别到的物体的可信度的得分，例如，将第一图像输入到人工智能图像模型中，模型给出第一图像中的物体为椅子的可信度为90分，也可以理解为第一图像中的物体为椅子的可能性为90%。
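上述"可信度得分"的含义可以用如下Python草图示意：假设人工智能图像模型输出各类别的概率，则取概率最高者作为物体类别，得分为概率乘以100（模型输出的形式为本示例的假设，实际取决于所用模型）：

```python
def classify(probabilities):
    """由模型输出的类别概率得到物体类别与可信度得分。"""
    category = max(probabilities, key=probabilities.get)  # 概率最高的类别
    score = round(probabilities[category] * 100)          # 概率0.90对应可信度90分
    return category, score

# 可信度90分，即物体为椅子的可能性为90%
print(classify({"椅子": 0.90, "沙发": 0.07, "冰箱": 0.03}))  # ('椅子', 90)
```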
步骤4:获取所述第二图像的以下至少之一第二参数信息:第二像素信息,所述第二图像的第二物体类别,以及所述第二物体类别的第二得分;
具体的,在所述整体环境地图中获取所述目标路标对应的第二图像的第二像素信息,所述第二图像的第二物体类别,以及所述第二物体类别的第二得分;其中,将所述第二图像输入到已训练好的人工智能图像模型中,获取到所述第二图像的第二物体类别,以及所述第二物体类别的第二得分,并将所述第二图像的第二物体类别,所述第二物体类别的第二得分,以及通过第二图像获取的第二像素信息保存至所述整体环境地图中。
也就是说，移动机器人在正常工作状态下，通过视觉传感器采集到第二图像后，将视觉传感器采集到的第二图像输入到训练好的人工智能图像模型中，实时得到第二图像中的第二物体类别和第二得分，并将其实时保存至整体环境地图中。
需要说明的是,上述第一像素信息和第二像素信息可以是通过第一图像和第二图像获取得到。
步骤5：将所述第一参数信息和所述第二参数信息进行匹配，其中，在匹配结果满足预设条件的情况下，确定所述任一位姿通过位置验证；
具体的,所述预设条件包括以下至少之一:所述第一像素信息和所述第二像素信息的第一相似度大于第一预设阈值,所述第一物体类别和所述第二物体类别的第二相似度大于第二预设阈值,以及所述第一得分和第二得分的差值小于第三预设阈值。
也就是说,获取到第一图像的第一像素信息、第一物体类别、第一得分和第二图像的第二像素信息、第二物体类别、第二得分之后,将所述第一像素信息和所述第二像素信息,和/或,所述第一物体类别和所述第二物体类别,和/或,所述第一得分和所述第二得分进行匹配,判断第一像素信息和所述第二像素信息的第一相似度是否大于第一预设阈值;所述第一物体类别和所述第二物体类别的第二相似度是否大于第二预设阈值,以及所述第一得分和第二得分的差值是否小于第三预设阈值,根据情况,在所述第一像素信息和所述第二像素信息的第一相似度大于第一预设阈值,和/或,所述第一物体类别和所述第二物体类别的第二相似度大于第二预设阈值,和/或,所述第一得分和第二得分的差值小于第三预设阈值的情况下,确定所述任一位姿通过位置验证。
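上述"和/或"组合的匹配条件可以用如下Python草图示意（三个阈值的具体数值均为示例假设，并非本发明限定的取值）：

```python
def match_passes(pixel_sim, category_sim, first_score, second_score,
                 th1=0.6, th2=0.6, th3=10, require_all=True):
    """位置验证的预设条件：
    第一相似度 > 第一预设阈值、第二相似度 > 第二预设阈值、
    得分差值 < 第三预设阈值，按"且/或"组合（对应文中的"和/或"）。"""
    checks = [pixel_sim > th1,
              category_sim > th2,
              abs(first_score - second_score) < th3]
    return all(checks) if require_all else any(checks)

print(match_passes(0.7, 0.8, 90, 85))                     # True：三项均满足
print(match_passes(0.3, 0.8, 90, 85))                     # False：像素相似度不足
print(match_passes(0.3, 0.8, 90, 85, require_all=False))  # True：满足至少一项即可
```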
可以理解的是,上述一个或多个第一位姿可以理解为一个位姿初始集合,任一位姿的选取可以理解为是遍历所述位姿初始集合中的第一位姿来实现的,具体到方案中,依次遍历位姿初始集合中的第一位姿,以依次指示所述移动机器人按照对应的第一位姿移动至所述整体环境地图中的目标路标所对应的位置;在所述移动机器人按照第一位姿移动到所述位置的情况下,通过所述移动机器人在所述位置对所述目标路标进行拍照,以获取第一图像;将所述第一图像与所述整体环境地图中所述目标路标的第二图像进行匹配,并根据匹配结果对所述任一位姿进行位置验证,进而判断位姿初始集合中的第一位姿是否满足作为定位修正后的位姿的条件。
进一步的，在所述移动机器人不能移动到所述位置的情况下，或在所述匹配结果指示所述第一图像和所述第二图像的相似度未超过预设阈值的情况下，确定所述任一位姿未通过所述位置验证，则继续遍历下一个位姿，直至确定当前位姿。
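对单个候选位姿的验证过程（导航至目标路标、拍照、图像匹配，不能到达则验证失败）可以用如下Python草图示意，其中机器人接口与图像相似度函数均为本示例的假设：

```python
def image_similarity(img_a, img_b):
    # 假设的图像相似度接口：此处以相同像素占比示意
    same = sum(a == b for a, b in zip(img_a, img_b))
    return same / max(len(img_a), len(img_b))

class StubRobot:
    def navigate_to(self, pose, position):
        return True          # 假设机器人能够按该位姿到达目标路标对应的位置

    def take_photo(self):
        return [1, 1, 0, 1]  # 在该位置对目标路标拍照得到的第一图像

def verify_pose(robot, pose, landmark_position, second_image, threshold=0.6):
    """单个候选位姿的位置验证示意。"""
    if not robot.navigate_to(pose, landmark_position):
        return False  # 不能到达路标位置，验证失败，继续遍历下一个位姿
    first_image = robot.take_photo()
    return image_similarity(first_image, second_image) > threshold

print(verify_pose(StubRobot(), (0, 0, 0), (1, 2), [1, 1, 0, 0]))  # True（相似度0.75）
```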
在一个可选实施例中,获取所述第一图像的第一像素信息,所述第一图像的第一物体类别,以及所述第一物体类别的第一得分,以及获取所述第二图像的第二像素信息,所述第二图像的第二物体类别,以及所述第二物体类别的第二得分之后,在所述第一物体类别和所述第二物体类别指示同一物体类别的情况下,将所述第一得分和所述第二得分中的较高得分更新保存在所述整体环境地图中。
可以理解为，在第一图像对应的第一物体类别和第二图像对应的第二物体类别指示同一物体类别的情况下，比较第一得分和所述第二得分的大小关系，在第一得分大于第二得分的情况下，将第一图像对应的第一像素信息、第一物体类别、第一得分保存至整体环境地图中。
进一步的，在所述移动机器人的定位需要修正的状态下，获取到第一物体类别后，若在所述整体环境地图中未检测到第一物体类别，则将所述第一物体类别保存在所述整体环境地图中。例如，在所述任一位姿通过位置验证之后，将该任一位姿作为目标位姿，移动机器人按照目标位姿进行移动，继续对移动机器人的行进区域进行扫描，移动机器人获取第一图像，确定第一图像对应的第一物体类别，在所述整体环境地图中确定是否具有第一物体类别，在检测不到第一物体类别的情况下，将所述第一物体类别保存在所述整体环境地图中。本实施例也适用于移动机器人的定位不需要修正的情境，本发明实施例对使用场景不做限定。
在一个可选实施例中,在所述整体环境地图中未检测到所述第一物体类别的情况下,将所述第一物体类别保存在所述整体环境地图中;在所述第一物体类别和所述第二物体类别指示同一物体类别的情况下,将所述第一得分和所述第二得分中的较高得分更新保存在所述整体环境地图中。
举例来说，在整体地图中保存的第二图像对应的第二物体类别为冰箱，第二得分为80分，本次获取的第一图像对应的第一物体类别也为冰箱，第一得分为90分，则将第一图像对应的第一像素信息、第一物体类别、第一得分90分保存至整体环境地图中；如果本次获取的第一图像对应的第一物体类别为沙发，第一得分为80分，但未在整体地图中检测到物体类别为沙发的第二图像，此时直接将第一图像对应的第一像素信息、第一物体类别（沙发）、第一得分80分保存至整体环境地图中。
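该例中"同类别取较高得分、新类别直接新增"的更新规则可以用如下Python草图示意（数据结构为本示例的假设）：

```python
def update_landmark(saved, category, score):
    """saved: 整体环境地图中已保存的 物体类别 -> 可信度得分。"""
    if category not in saved or score > saved[category]:
        saved[category] = score  # 新类别直接保存；同类别仅在得分更高时更新
    return saved

saved = {"冰箱": 80}
update_landmark(saved, "冰箱", 90)  # 同为冰箱且90 > 80：更新为90分
update_landmark(saved, "沙发", 80)  # 地图中未检测到沙发：直接新增
update_landmark(saved, "冰箱", 70)  # 得分更低：保持原有的90分
print(saved)  # {'冰箱': 90, '沙发': 80}
```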
在一个示例性实施例中,获取移动机器人的行进区域所对应的整体环境地图之前,通过以下方式确定移动机器人处于所述预设状态:获取所述移动机器人的位姿数据;在所述位姿数据出现异常的情况下,确定所述移动机器人需要定位修正,以确定所述移动机器人处于所述预设状态。
可以理解为,移动机器人实时监测获取的位姿数据,在获取到的位姿数据与预设阈值存在异常的情况下,确定移动机器人的定位需要修正。
具体的,通过以下方式之一确定所述位姿数据出现异常:检测所述移动机器人设置的悬崖传感器获取的数据异常;检测到所述激光测距传感器获取的数据点异常;通过所述移动机器人上设置的惯性导航传感器检测到所述移动机器人的倾斜角度异常;检测到所述移动机器人获取的相邻两帧物体图像的坐标差值异常。
也可以理解为，在检测到悬崖传感器获取的数据异常（如移动机器人被搬动）、检测到激光测距传感器数据异常（如移动机器人采集的数据点过少）、惯性导航单元检测到移动机器人倾斜角度超过第五预设阈值，或移动机器人获取的相邻两帧物体图像的坐标差值大于第六预设阈值的情况下，说明移动机器人处于非正常状态，此时需要对移动机器人进行定位修正。
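上述四种异常判断可以用如下Python草图示意（各阈值数值均为假设值，对应文中的第五、第六预设阈值等）：

```python
def pose_data_abnormal(cliff_triggered, lidar_points, tilt_deg, coord_diff,
                       min_points=50, max_tilt_deg=15.0, max_coord_diff=1.0):
    """满足任一条件即认为位姿数据异常，机器人处于非正常状态，需要定位修正。"""
    return (cliff_triggered                  # 悬崖传感器数据异常，如机器人被搬动
            or lidar_points < min_points     # 激光测距传感器采集的数据点过少
            or tilt_deg > max_tilt_deg       # 惯性导航检测到的倾斜角度异常
            or coord_diff > max_coord_diff)  # 相邻两帧物体图像的坐标差值异常

print(pose_data_abnormal(False, 200, 3.0, 0.2))  # False：正常状态
print(pose_data_abnormal(True, 200, 3.0, 0.2))   # True：机器人被搬动
print(pose_data_abnormal(False, 10, 3.0, 0.2))   # True：数据点过少
```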
在一个示例性实施例中,在所述一个或多个第一位姿均未通过所述位置验证的情况下,清空所述整体环境地图。
也就是说,所述一个或多个第一位姿均不是移动机器人的当前位姿,即此时对移动机器人的定位并不准确,因此将整体环境地图进行清空,以便再次确定移动机器人的一个或多个第一位姿。
为了更好理解上述移动机器人的定位修正方法,以下结合可选实施例对上述技术方案进行解释说明,但不用于限定本发明实施例的技术方案。
在一个示例性实施例中,移动机器人的定位修正方法具体步骤如下:
步骤A:移动机器人在正常运行过程中,构建环境地图;
具体的，通过移动机器人上设置的激光测距传感器构建栅格地图，通过激光测距传感器和惯性导航传感器共同定位移动机器人的当前位姿，其中，位姿包括移动机器人坐标和倾角，也可以理解为移动机器人位置和姿态；通过移动机器人上设置的视觉传感器采集图像，将视觉传感器采集到的图像输入到训练好的人工智能图像模型中，实时得到图像中的物体类别和可靠性得分，其中，所述人工智能图像模型可以根据网络上收集到的图像和视觉传感器采集到的图像进行训练。在已保存的环境地图中，搜索移动机器人预设阈值内的局部环境地图，如移动机器人2m范围内的环境地图，如果当前检测的物体类别是局部地图中不存在的物体类别，则将获取的图像、物体类别和可靠性得分新增到环境地图中；如果是局部地图中已知的物体类别，则比较获取的图像与局部地图中图像的可靠性得分，将可靠性得分较高的物体信息保存至环境地图中，环境地图中保存以下至少之一：栅格地图、移动机器人位姿、物体图像、物体类别和可靠性得分。
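其中"搜索移动机器人预设阈值（如2m）范围内的局部环境地图"可以用如下Python草图示意（路标的数据形式为本示例的假设）：

```python
import math

def local_landmarks(landmarks, robot_xy, radius=2.0):
    """在已保存的路标中筛选出机器人 radius 范围内的局部路标。"""
    rx, ry = robot_xy
    return {category: (x, y)
            for category, (x, y) in landmarks.items()
            if math.hypot(x - rx, y - ry) <= radius}

landmarks = {"冰箱": (1.0, 1.0), "沙发": (5.0, 5.0)}
print(local_landmarks(landmarks, (0.0, 0.0)))  # {'冰箱': (1.0, 1.0)}
```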
步骤B:获取移动机器人当前的运行状态,所述运行状态包括:正常状态和非正常状态;
其中,通过以下方式之一确定移动机器人当前的运行状态为非正常状态:检测所述移动机器人设置的悬崖传感器获取的数据异常,如移动机器人被搬动;检测到激光测距传感器获取的数据点异常,如移动机器人采集的数据点过少;通过移动机器人的惯性导航单元检测到移动机器人倾斜超过第一阈值;移动机器人获取的相邻两帧间的图像的定位差值超过第二阈值;此时需要对移动机器人进行定位修正,在其他情况下均为正常状态。
步骤C:移动机器人处于非正常状态下,对移动机器人的定位进行修正。
具体的，根据预先设定的移动机器人行走策略，如以移动机器人为中心进行十字型行走，由移动机器人上设置的激光测距传感器构建临时栅格地图，将临时栅格地图与保存的环境地图中的栅格地图（分别相当于上述实施例中的第二栅格地图和第一栅格地图）进行匹配，确定一个或多个当前位姿，并将一个或多个当前位姿作为位姿初值候选集，在位姿初值候选集中确定最终当前位姿。具体的，遍历位姿初值候选集，假设位姿初值为当前位姿，尝试将移动机器人导航至保存的环境地图中特定路标的位置，其中，特定路标（相当于上述实施例中的目标路标）可为距离移动机器人最近的路标或预先设定的标志性路标，若移动机器人可以到达特定路标对应的位置，则将当前视觉传感器的第一图像信息及人工智能图像模型识别的第一图像信息对应的第一物体类别和第一物体得分，与保存的环境地图中的路标对应的第二图像信息、第二图像信息对应的第二物体类别和第二物体得分进行比较，若图像相似度、类别相似度及得分相似度均高于预设阈值，则当前位姿为最终当前位姿；若不能到达路标位置，或图像相似度、物体类别相似度或得分相似度低于预设阈值，则继续遍历位姿初值候选集的下一个位姿。若位姿初值候选集中的所有位姿均不满足预设条件，则认为移动机器人定位失败，清空环境地图信息。
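临时栅格地图与保存栅格地图的匹配，可用如下穷举平移量并统计占据栅格重合数的简化Python草图示意。真实实现通常采用相关扫描匹配等算法并同时搜索旋转，此处仅为说明"由匹配得到一个或多个候选位姿"的原理的假设实现：

```python
def match_grids(saved_cells, temp_cells, search=2):
    """返回使临时地图与保存地图重合栅格数最多的平移量（候选位姿，未含倾角）。"""
    saved = set(saved_cells)
    best, candidates = -1, []
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            hits = sum((x + dx, y + dy) in saved for x, y in temp_cells)
            if hits > best:
                best, candidates = hits, [(dx, dy)]
            elif hits == best:
                candidates.append((dx, dy))
    return candidates  # 可能包含多个得分相同的候选位姿

saved = [(0, 0), (0, 1), (1, 1)]
temp = [(1, 0), (1, 1), (2, 1)]   # 相对保存地图整体平移了 (1, 0)
print(match_grids(saved, temp))   # [(-1, 0)]
```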
通过本发明实施例，在检测到移动机器人处于预设状态的情况下，获取移动机器人的行进区域所对应的整体环境地图，获取第二栅格地图；对第一栅格地图和第二栅格地图进行匹配，以从不同位姿中确定移动机器人在当前区域的一个或多个第一位姿；对一个或多个第一位姿进行位置验证，将一个或多个位姿中通过位置验证的目标位姿作为移动机器人定位修正后的位姿，即在移动机器人需要进行定位修正的情况下，可以先确定移动机器人的一个或多个第一位姿，进而对一个或多个第一位姿进行位置验证，确定移动机器人定位修正后的位姿，采用上述方案，解决了相关技术中，移动机器人发生突发状况时，无法对移动机器人的定位进行修正，进而导致移动机器人构建的环境地图不准确等问题，进而使得移动机器人发生突发状况后，对移动机器人的定位进行了及时的修正。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
在本实施例中还提供了移动机器人的定位修正装置,该装置用于实现上述实施例及优选实施方式,已经进行过说明的不再赘述。如以下所使用的,术语“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。
图3是根据本发明实施例的移动机器人的定位修正装置的结构框图,如图3所示,包括:
第一获取模块32,用于在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;
需要说明的是，在移动机器人的存储器或从远程服务器中获取所述移动机器人在正常情况下构建的行进区域所对应的整体环境地图，其中，所述移动机器人在正常情况下构建整体环境地图的方法为：通过移动机器人上设置的激光测距传感器构建栅格地图，通过激光测距传感器和惯性导航传感器共同定位移动机器人的当前位姿，其中，位姿包括移动机器人坐标和倾角；通过移动机器人上设置的视觉传感器采集图像，将视觉传感器采集到的图像输入到训练好的人工智能图像模型中，实时得到图像中的物体类别和得分，将栅格地图、移动机器人位姿、物体图像、物体类别和得分保存至环境地图中。需要说明的是，以上仅为构建整体环境地图的一个示例，本实施例并不对此进行限定。
第二获取模块34，用于获取第二栅格地图，其中，所述第二栅格地图用于指示所述移动机器人所在的当前区域；
具体的,通过所述移动机器人上设置的激光测距传感器对所述移动机器人所在的当前区域进行扫描,以获取所述激光测距传感器构建的第二栅格地图。
匹配模块36,用于对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;
验证模块38,用于对所述一个或多个第一位姿进行位置验证,将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
通过本发明实施例，在检测到移动机器人处于预设状态的情况下，获取移动机器人的行进区域所对应的整体环境地图，获取第二栅格地图，其中，所述第二栅格地图用于指示所述移动机器人所在的当前区域；对第一栅格地图和第二栅格地图进行匹配，以从不同位姿中确定移动机器人在当前区域的一个或多个第一位姿；对一个或多个第一位姿进行位置验证，将一个或多个位姿中通过位置验证的目标位姿作为移动机器人定位修正后的位姿，即在移动机器人需要进行定位修正的情况下，可以先确定移动机器人的一个或多个第一位姿，进而对一个或多个第一位姿进行位置验证，确定移动机器人定位修正后的位姿，采用上述方案，解决了相关技术中，移动机器人发生突发状况时，无法对移动机器人的定位进行修正，进而导致移动机器人构建的环境地图不准确等问题，进而使得移动机器人发生突发状况后，对移动机器人的定位进行了及时的修正。
在一个示例性实施例中,上述装置,所述验证模块,还用于对于所述一个或多个第一位姿中的任一位姿,指示所述移动机器人按照所述任一位姿移动至所述整体环境地图中的目标路标所对应的位置;在所述移动机器人按照所述任一位姿移动到所述位置的情况下,通过所述移动机器人在所述位置对所述目标路标进行拍照,以获取第一图像;以及还用于将所述第一图像与所述整体环境地图中所述目标路标的第二图像进行匹配,并根据匹配结果对所述任一位姿进行位置验证。
也就是说，从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿后，移动机器人以第一位姿中的任一位姿移动到目标位置，所述任一位姿为多个第一位姿中的任意一个位姿，其中，目标位置为目标路标对应的位置，目标路标可以为整体环境地图中距离移动机器人最近的路标，也可以是预先设置的路标，本发明实施例对此不做限定。移动机器人以第一位姿中的任一位姿移动到目标路标对应的位置后，由于所述移动机器人上设置有视觉传感器，移动机器人通过视觉传感器为所述目标路标拍照以获取目标路标对应的第一图像，在整体环境地图中获取目标路标对应的第二图像，通过对第一图像和第二图像进行匹配来验证任一位姿。其中，路标可以是预先设置的标志性物体，例如沙发、冰箱等。
在一个示例性实施例中,所述匹配模块,还用于在所述匹配结果指示所述第一图像和所述第二图像的相似度超过预设阈值的情况下,确定所述任一位姿通过所述位置验证;在所述匹配结果指示所述第一图像和所述第二图像的相似度未超过预设阈值的情况下,确定所述任一位姿未通过所述位置验证。
举例来说,预设阈值为60%,在第一图像和第二图像的相似度为70%的情况下,确定所述任一位姿通过所述位置验证,在第一图像和第二图像的相似度为30%的情况下,确定所述任一位姿未通过所述位置验证。
在一个示例性实施例中，第二获取模块，还用于获取所述第一图像的以下至少之一第一参数信息：第一像素信息，所述第一图像的第一物体类别，以及所述第一物体类别的第一得分，以及获取所述第二图像的以下至少之一第二参数信息：第二像素信息，所述第二图像的第二物体类别，以及所述第二物体类别的第二得分；将所述第一参数信息和所述第二参数信息进行匹配，其中，在匹配结果满足预设条件的情况下，确定所述任一位姿通过位置验证，所述预设条件包括以下至少之一：所述第一像素信息和所述第二像素信息的第一相似度大于第一预设阈值，所述第一物体类别和所述第二物体类别的第二相似度大于第二预设阈值，以及所述第一得分和第二得分的差值小于第三预设阈值。
也就是说,获取到第一图像的第一像素信息、第一物体类别、第一得分和第二图像的第二像素信息、第二物体类别、第二得分之后,将所述第一像素信息和所述第二像素信息,和/或,所述第一物体类别和所述第二物体类别,和/或,所述第一得分和所述第二得分进行匹配,判断第一像素信息和所述第二像素信息的第一相似度是否大于第一预设阈值;所述第一物体类别和所述第二物体类别的第二相似度是否大于第二预设阈值,以及所述第一得分和第二得分的差值是否小于第三预设阈值,根据情况,在所述第一像素信息和所述第二像素信息的第一相似度大于第一预设阈值,和/或,所述第一物体类别和所述第二物体类别的第二相似度大于第二预设阈值,和/或,所述第一得分和第二得分的差值小于第三预设阈值的情况下,确定所述任一位姿通过位置验证。
可以理解的是,上述一个或多个第一位姿可以理解为一个位姿初始集合,任一位姿的选取可以理解为是遍历所述位姿初始集合中的第一位姿来实现的,具体到方案中,依次遍历位姿初始集合中的第一位姿,以依次指示所述移动机器人按照对应的第一位姿移动至所述整体环境地图中的目标路标所对应的位置;在所述移动机器人按照第一位姿移动到所述位置的情况下,通过所述移动机器人在所述位置对所述目标路标进行拍照,以获取第一图像;将所述第一图像与所述整体环境地图中所述目标路标的第二图像进行匹配,并根据匹配结果对所述任一位姿进行位置验证,进而判断位姿初始集合中的第一位姿是否满足作为定位修正后的位姿的条件。
进一步的,假设所述移动机器人不能移动到所述位置的情况下或在所述匹配结果指示所述第一图像和所述第二图像的相似度未超过预设阈值的情况下,确定所述任一位姿未通过所述位置验证,则继续遍历下一个位姿,直至确定了当前位姿。
在一个示例性实施例中，图4是根据本发明实施例的移动机器人的定位修正装置的另一结构框图，上述装置，还包括，输入模块40，用于将所述第一图像输入到已训练好的人工智能图像模型中，以获取到所述第一图像的第一物体类别，以及所述第一物体类别的第一得分；以及将所述第二图像输入到已训练好的人工智能图像模型中，以获取到所述第二图像的第二物体类别，以及所述第二物体类别的第二得分，并将所述第二图像的第二像素信息，所述第二图像的第二物体类别，以及所述第二物体类别的第二得分保存至所述整体环境地图中，在所述整体环境地图中获取所述第二图像的第二像素信息，所述第二图像的第二物体类别，以及所述第二物体类别的第二得分。
需要说明的是，人工智能图像模型可以通过在网络上收集到的图像和移动机器人获取到的图像训练得到，移动机器人通过视觉传感器采集到第一图像后，将视觉传感器采集到的第一图像输入到训练好的人工智能图像模型中，实时得到第一图像中的第一物体类别和第一得分；移动机器人在正常工作状态下，通过视觉传感器采集到第二图像后，将视觉传感器采集到的第二图像输入到训练好的人工智能图像模型中，实时得到第二图像中的第二物体类别和第二得分，并将其实时保存至整体环境地图中。
在一个示例性实施例中,如图4所示,上述装置,还包括:保存模块42,用于在所述第一物体类别和所述第二物体类别指示同一物体类别的情况下,将所述第一得分和所述第二得分中的较高得分更新保存在所述整体环境地图中。
可以理解为，在第一图像对应的第一物体类别和第二图像对应的第二物体类别指示同一物体类别的情况下，比较第一得分和所述第二得分的大小关系，在第一得分大于第二得分的情况下，将第一图像对应的第一像素信息、第一物体类别、第一得分保存至整体环境地图中。
进一步的，在所述移动机器人的定位需要修正的状态下，获取到第一物体类别后，若在所述整体环境地图中未检测到第一物体类别，则将所述第一物体类别保存在所述整体环境地图中。例如，在所述任一位姿通过位置验证之后，将该任一位姿作为目标位姿，移动机器人按照目标位姿进行移动，继续对移动机器人的行进区域进行扫描，移动机器人获取第一图像，确定第一图像对应的第一物体类别，在所述整体环境地图中确定是否具有第一物体类别，在检测不到第一物体类别的情况下，将所述第一物体类别保存在所述整体环境地图中。本实施例也适用于移动机器人的定位不需要修正的情境，本发明实施例对使用场景不做限定。
在一个示例性实施例中,如图4所示,上述装置,还包括:保存模块42,还用于在所述移动机器人的定位不需要修正的状态下,在所述整体环境地图中未检测到所述第一物体类别的情况下,将所述第一物体类别保存在所述整体环境地图中;在所述第一物体类别和所述第二物体类别指示同一物体类别的情况下,将所述第一得分和所述第二得分中的较高得分更新保存在所述整体环境地图中。
举例来说，在整体地图中保存的第二图像对应的第二物体类别为冰箱，第二得分为80分，本次获取的第一图像对应的第一物体类别也为冰箱，第一得分为90分，则将第一图像对应的第一像素信息、第一物体类别、第一得分90分保存至整体环境地图中；如果本次获取的第一图像对应的第一物体类别为沙发，第一得分为80分，但未在整体地图中检测到物体类别为沙发的第二图像，此时直接将第一图像对应的第一像素信息、第一物体类别（沙发）、第一得分80分保存至整体环境地图中。
在一个示例性实施例中,确定模块,还用于通过以下方式确定移动机器人处于所述预设状态:获取所述移动机器人的位姿数据;在所述位姿数据出现异常的情况下,确定所述移动机器人需要定位修正,以确定所述移动机器人处于所述预设状态。
可以理解为,移动机器人实时监测获取的位姿数据,在获取到的位姿数据与预设阈值存在异常的情况下,确定移动机器人的定位需要修正。
在一个示例性实施例中,确定模块,还用于通过以下方式之一确定所述位姿数据出现异常:检测所述移动机器人设置的悬崖传感器获取的数据异常;检测到所述激光测距传感器获取的数据点异常;通过所述移动机器人上设置的惯性导航传感器检测到所述移动机器人的倾斜角度异常;检测到所述移动机器人获取的相邻两帧物体图像的坐标差值异常。
也可以理解为，在检测到悬崖传感器获取的数据异常（如移动机器人被搬动）、检测到激光测距传感器数据异常（如移动机器人采集的数据点过少）、惯性导航单元检测到移动机器人倾斜角度超过第五预设阈值，或移动机器人获取的相邻两帧物体图像的坐标差值大于第六预设阈值的情况下，说明移动机器人处于非正常状态，此时需要对移动机器人进行定位修正。
在一个示例性实施例中,如图4所示,上述装置,还包括,清空模块44,用于在所述一个或多个第一位姿均未通过所述位置验证的情况下,清空所述整体环境地图。
也就是说,所述一个或多个第一位姿均不是移动机器人的当前位姿,即此时对移动机器人的定位并不准确,因此将整体环境地图进行清空,以便再次确定移动机器人的一个或多个第一位姿。
本发明的实施例还提供了一种存储介质,该存储介质包括存储的程序,其中,上述程序运行时执行上述任一项的方法。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的程序代码:
S1,在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;
S2,获取第二栅格地图,其中,所述第二栅格地图用于指示所述移动机器人所在的当前区域;
S3,对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;
S4，对所述一个或多个第一位姿进行位置验证，将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
本发明的实施例还提供了一种电子装置,包括存储器和处理器,该存储器中存储有计算机程序,该处理器被设置为运行计算机程序以执行上述任一项方法实施例中的步骤。
可选地,上述电子装置还可以包括传输设备以及输入输出设备,其中,该传输设备和上述处理器连接,该输入输出设备和上述处理器连接。
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;
S2,获取第二栅格地图,其中,所述第二栅格地图用于指示所述移动机器人所在的当前区域;
S3,对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;
S4,对所述一个或多个第一位姿进行位置验证,将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(Read-Only Memory,简称为ROM)、随机存取存储器(Random Access Memory,简称为RAM)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
可选地,本实施例中的具体示例可以参考上述实施例及可选实施方式中所描述的示例,本实施例在此不再赘述。
本发明的实施例还提供了一种移动机器人,该移动机器人包括:主体、驱动组件、激光测距传感器、视觉传感器以及控制器。驱动组件用于移动机器人在工作区域行走。激光测距传感器用于构建栅格地图,视觉传感器用于采集图像。控制器用于在检测到移动机器人需要进行定位修正的情况下,获取移动机器人的行进区域所对应的整体环境地图;通过激光测距传感器对移动机器人所在的当前区域进行扫描,以获取激光测距传感器构建的第二栅格地图;对第一栅格地图和第二栅格地图进行匹配,以从不同位姿中确定移动机器人在当前区域的一个或多个第一位姿;对一个或多个第一位姿进行位置验证,将一个或多个位姿中通过位置验证的目标位姿作为移动机器人定位修正后的位姿。
可选地,本实施例中的具体示例可以参考上述实施例及可选实施方式中所描述的示例,本实施例在此不再赘述。
显然,本领域的技术人员应该明白,上述的本发明的各模块或各步骤可以用通用的计算装置来实现,它们可以集中在单个的计算装置上,或者分布在多个计算装置所组成的网络上,可选地,它们可以用计算装置可执行的程序代码来实现,从而,可以将它们存储在存储装置中由计算装置来执行,并且在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤,或者将它们分别制作成各个集成电路模块,或者将它们中的多个模块或步骤制作成单个集成电路模块来实现。这样,本发明不限制于任何特定的硬件和软件结合。
以上所述仅为本发明的优选实施例而已,并不用于限制本发明,对于本领域的技术人员来说,本发明可以有各种更改和变化。凡在本发明的原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (14)

  1. 一种移动机器人的定位修正方法,其特征在于:包括:
    在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;
    获取第二栅格地图,其中,所述第二栅格地图用于指示所述移动机器人所在的当前区域;
    对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;
    对所述一个或多个第一位姿进行位置验证,将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
  2. 根据权利要求1所述的移动机器人的定位修正方法,其特征在于:对所述一个或多个第一位姿进行位置验证,包括:
    对于所述一个或多个第一位姿中的任一位姿,指示所述移动机器人按照所述任一位姿移动至所述整体环境地图中的目标路标所对应的位置;
    在所述移动机器人按照所述任一位姿移动到所述位置的情况下,通过所述移动机器人在所述位置对所述目标路标进行拍照,以获取第一图像;
    将所述第一图像与所述整体环境地图中所述目标路标的第二图像进行匹配,并根据匹配结果对所述任一位姿进行位置验证。
  3. 根据权利要求2所述的移动机器人的定位修正方法,其特征在于:根据匹配结果对所述任一位姿进行位置验证,包括:
    在所述匹配结果指示所述第一图像和所述第二图像的相似度超过预设阈值的情况下,确定所述任一位姿通过所述位置验证;
    在所述匹配结果指示所述第一图像和所述第二图像的相似度未超过预设阈值的情况下,确定所述任一位姿未通过所述位置验证。
  4. 根据权利要求2所述的移动机器人的定位修正方法,其特征在于:将所述第一图像与所述整体环境地图中所述目标路标的第二图像进行匹配,包括:
    获取所述第一图像的以下至少之一第一参数信息:第一像素信息,所述第一图像的第一物体类别,以及所述第一物体类别的第一得分,以及获取所述第二图像的以下至少之一第二参数信息:第二像素信息,所述第二图像的第二物体类别,以及所述第二物体类别的第二得分;
    将所述第一参数信息和所述第二参数信息进行匹配，其中，在匹配结果满足预设条件的情况下，确定所述任一位姿通过位置验证，所述预设条件包括以下至少之一：所述第一像素信息和所述第二像素信息的第一相似度大于第一预设阈值，所述第一物体类别和所述第二物体类别的第二相似度大于第二预设阈值，以及所述第一得分和第二得分的差值小于第三预设阈值。
  5. 根据权利要求4所述的移动机器人的定位修正方法,其特征在于:获取所述第一图像的第一物体类别,以及所述第一物体类别的第一得分,以及所述第二图像的第二物体类别,以及所述第二物体类别的第二得分,包括:
    将所述第一图像输入到已训练好的人工智能图像模型中,以获取所述第一图像的第一物体类别,以及所述第一物体类别的第一得分;以及
    将所述第二图像输入到已训练好的人工智能图像模型中，以获取到所述第二图像的第二物体类别，以及所述第二物体类别的第二得分，并将所述第二图像的第二像素信息，所述第二图像的第二物体类别，以及所述第二物体类别的第二得分保存至所述整体环境地图中，在所述整体环境地图中获取所述第二图像的第二像素信息，所述第二图像的第二物体类别，以及所述第二物体类别的第二得分。
  6. 根据权利要求4所述的移动机器人的定位修正方法,其特征在于:获取所述第一图像的以下至少之一第一参数信息:第一像素信息,所述第一图像的第一物体类别,以及所述第一物体类别的第一得分,以及获取所述第二图像的以下至少之一第二参数信息:第二像素信息,所述第二图像的第二物体类别,以及所述第二物体类别的第二得分之后,所述方法还包括:
    在所述第一物体类别和所述第二物体类别指示同一物体类别的情况下,将所述第一得分和所述第二得分中的较高得分更新保存在所述整体环境地图中。
  7. 根据权利要求4所述的移动机器人的定位修正方法,其特征在于:获取移动机器人的行进区域所对应的整体环境地图之后,所述方法还包括:
    在所述整体环境地图中未检测到所述第一物体类别的情况下,将所述第一物体类别保存在所述整体环境地图中;
    在所述第一物体类别和所述第二物体类别指示同一物体类别的情况下,将所述第一得分和所述第二得分中的较高得分更新保存在所述整体环境地图中。
  8. 根据权利要求1所述的移动机器人的定位修正方法,其特征在于:获取移动机器人的行进区域所对应的整体环境地图之前,所述方法还包括:
    通过以下方式确定移动机器人处于所述预设状态:
    获取所述移动机器人的位姿数据;
    在所述位姿数据出现异常的情况下,确定所述移动机器人需要定位修正,以确定所述移动机器人处于所述预设状态。
  9. 根据权利要求8所述的移动机器人的定位修正方法,其特征在于:
    通过以下方式之一确定所述位姿数据出现异常:
    检测所述移动机器人设置的悬崖传感器获取的数据异常;
    检测到所述移动机器人设置的激光测距传感器获取的数据异常;
    通过所述移动机器人上设置的惯性导航传感器检测到所述移动机器人的倾斜角度异常;
    检测到所述移动机器人获取的相邻两帧物体图像的坐标差值异常。
  10. 根据权利要求1至9任一项所述的移动机器人的定位修正方法,其特征在于:对所述一个或多个第一位姿进行位置验证之后,所述方法还包括:
    在所述一个或多个第一位姿均未通过所述位置验证的情况下,清空所述整体环境地图。
  11. 根据权利要求1所述的移动机器人的定位修正方法,其特征在于:获取第二栅格地图,包括:
    通过所述移动机器人上设置的激光测距传感器对所述移动机器人所在的当前区域进行扫描,以获取所述激光测距传感器构建的第二栅格地图。
  12. 一种移动机器人的定位修正装置,其特征在于:所述装置包括:
    第一获取模块,用于在检测到移动机器人处于预设状态的情况下,获取移动机器人的行进区域所对应的整体环境地图,其中,所述整体环境地图包括:第一栅格地图,所述移动机器人在所述第一栅格地图的不同像素点所对应的不同位姿;
    第二获取模块,用于获取第二栅格地图,其中,所述第二栅格地图用于指示所述移动机器人所在的当前区域;
    匹配模块,用于对所述第一栅格地图和所述第二栅格地图进行匹配,以从所述不同位姿中确定所述移动机器人在所述当前区域的一个或多个第一位姿;
    验证模块,用于对所述一个或多个第一位姿进行位置验证,将所述一个或多个位姿中通过位置验证的目标位姿作为所述移动机器人定位修正后的位姿。
  13. 一种计算机可读的存储介质,其特征在于,所述计算机可读的存储介质包括存储的程序,其中,所述程序运行时执行上述权利要求1至11任一项中所述的方法。
  14. 一种电子装置，包括存储器和处理器，其特征在于，所述存储器中存储有计算机程序，所述处理器被设置为通过所述计算机程序执行所述权利要求1至11任一项中所述的方法。
PCT/CN2021/116171 2021-04-19 2021-09-02 移动机器人的定位修正方法和装置、存储介质、电子装置 WO2022222345A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110420056.2 2021-04-19
CN202110420056.2A CN113124902B (zh) 2021-04-19 2021-04-19 移动机器人的定位修正方法和装置、存储介质、电子装置

Publications (1)

Publication Number Publication Date
WO2022222345A1 true WO2022222345A1 (zh) 2022-10-27

Family

ID=76777757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116171 WO2022222345A1 (zh) 2021-04-19 2021-09-02 移动机器人的定位修正方法和装置、存储介质、电子装置

Country Status (2)

Country Link
CN (1) CN113124902B (zh)
WO (1) WO2022222345A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118362113A (zh) * 2024-04-23 2024-07-19 深圳市蓝色极光储能科技有限公司 基于移动机器人的构图方法以及其系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113124902B (zh) * 2021-04-19 2024-05-14 追创科技(苏州)有限公司 移动机器人的定位修正方法和装置、存储介质、电子装置
CN113656418B (zh) * 2021-07-27 2023-08-22 追觅创新科技(苏州)有限公司 语义地图的保存方法和装置、存储介质、电子装置

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105928505A (zh) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 移动机器人的位姿确定方法和设备
CN105953798A (zh) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 移动机器人的位姿确定方法和设备
CN108917759A (zh) * 2018-04-19 2018-11-30 电子科技大学 基于多层次地图匹配的移动机器人位姿纠正算法
CN109506641A (zh) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 移动机器人的位姿丢失检测与重定位系统及机器人
CN111442722A (zh) * 2020-03-26 2020-07-24 达闼科技成都有限公司 定位方法、装置、存储介质及电子设备
CN111982108A (zh) * 2019-05-24 2020-11-24 北京京东尚科信息技术有限公司 移动机器人定位方法、装置、设备及存储介质
CN112179330A (zh) * 2020-09-14 2021-01-05 浙江大华技术股份有限公司 移动设备的位姿确定方法及装置
US20210104064A1 (en) * 2019-10-07 2021-04-08 Lg Electronics Inc. System, apparatus and method for indoor positioning
CN113124902A (zh) * 2021-04-19 2021-07-16 追创科技(苏州)有限公司 移动机器人的定位修正方法和装置、存储介质、电子装置

Also Published As

Publication number Publication date
CN113124902A (zh) 2021-07-16
CN113124902B (zh) 2024-05-14

Similar Documents

Publication Publication Date Title
WO2022222345A1 (zh) 移动机器人的定位修正方法和装置、存储介质、电子装置
EP3974778B1 (en) Method and apparatus for updating working map of mobile robot, and storage medium
CN109813319B (zh) 一种基于slam建图的开环优化方法及系统
CN108021884B (zh) 基于视觉重定位的扫地机断电续扫方法、装置及扫地机
WO2020223974A1 (zh) 更新地图的方法及移动机器人
CN110587597B (zh) 一种基于激光雷达的slam闭环检测方法及检测系统
WO2023045644A1 (zh) 移动机器人的定位方法及装置、存储介质及电子装置
JP7507964B2 (ja) 可動ロボットによる棚の位置姿勢の調整のための方法および装置
WO2021081774A1 (zh) 一种参数优化方法、装置及控制设备、飞行器
CN111814752A (zh) 室内定位实现方法、服务器、智能移动设备、存储介质
WO2023005377A1 (zh) 一种机器人的建图方法及机器人
Iocchi et al. Self-localization in the RoboCup environment
JP2020021372A (ja) 情報処理方法および情報処理システム
CN110069057A (zh) 一种基于机器人的障碍物感测方法
CN113984068A (zh) 定位方法、定位装置和计算机可读存储介质
Pérez et al. Enhanced monte carlo localization with visual place recognition for robust robot localization
CN111656138A (zh) 构建地图及定位方法、客户端、移动机器人及存储介质
US20230004739A1 (en) Human posture determination method and mobile machine using the same
WO2023005379A1 (zh) 语义地图的保存方法和装置、存储介质、电子装置
CN112799389B (zh) 自动行走区域路径规划方法及自动行走设备
WO2020149149A1 (en) Information processing apparatus, information processing method, and program
Panzieri et al. A low cost vision based localization system for mobile robots
CN114935341B (zh) 一种新型slam导航计算视频识别方法及装置
CN112650207A (zh) 机器人的定位校正方法、设备和存储介质
CN113379850B (zh) 移动机器人控制方法、装置、移动机器人及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21937567

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21937567

Country of ref document: EP

Kind code of ref document: A1