WO2022027611A1 - Positioning method and map construction method for mobile robot, and mobile robot - Google Patents

Positioning method and map construction method for mobile robot, and mobile robot

Info

Publication number
WO2022027611A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
data
measurement data
landmark
positioning
Prior art date
Application number
PCT/CN2020/107863
Other languages
French (fr)
Chinese (zh)
Inventor
崔彧玮
Original Assignee
苏州珊口智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州珊口智能科技有限公司
Priority to PCT/CN2020/107863
Priority to CN202080001821.0A
Publication of WO2022027611A1


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/165: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C21/18: Stabilised platforms, e.g. by gyroscope
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/42: Simultaneous measurement of distance and other co-ordinates (radar systems, G01S13/00)
    • G01S13/93: Radar or analogous systems specially adapted for anti-collision purposes
    • G01S15/42: Simultaneous measurement of distance and other co-ordinates (sonar systems, G01S15/00)
    • G01S15/46: Indirect determination of position data (sonar systems, G01S15/00)
    • G01S15/93: Sonar systems specially adapted for anti-collision purposes
    • G01S17/42: Simultaneous measurement of distance and other co-ordinates (lidar systems, G01S17/00)
    • G01S17/48: Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G01S17/93: Lidar systems specially adapted for anti-collision purposes
    • G01S2015/465: Indirect determination of position data by trilateration, i.e. two transducers determine separately the distance to a target, whereby with knowledge of the baseline length, i.e. the distance between the transducers, the position data of the target is determined

Definitions

  • the present application relates to the field of positioning technology, and in particular to a method for positioning a mobile robot, a method for constructing a map, and a mobile robot.
  • A mobile robot is a machine that performs specific tasks automatically. It can be commanded by a person, run pre-programmed routines, or act according to principles and programs formulated with artificial intelligence technology.
  • Owing to their autonomous mobility, mobile robots are widely used in large indoor venues such as airports, railway stations, warehouses, and hotels. For example, commercial cleaning robots, handling/distribution robots, and welcome robots all use the autonomous movement function to perform tasks such as cleaning, transporting, and guiding.
  • However, mobile robots sometimes rely on the environmental information provided by a single detection device, which is not conducive to accurate positioning. For example, in an open area where obstacles are far away, measuring the surrounding environment by laser alone makes it difficult for the mobile robot to determine its exact positional relationship with the obstacles. As another example, in a large area whose roof environment is highly similar from place to place, relying only on the camera device to obtain landmark data makes it difficult for the mobile robot to determine its exact position within that area.
  • In view of this, the purpose of the present application is to provide methods that overcome the positioning and map-construction accuracy problems existing in the above-mentioned related art.
  • A first aspect disclosed in the present application provides a method for a mobile robot to construct a map, including: controlling an image capturing device and a ranging sensing device to synchronously acquire image data and measurement data, respectively, at different positions; using the measurement data obtained at the different positions to analyze the moving state of the mobile robot in the physical space, so as to obtain landmark position data in a physical coordinate system onto which the landmark features in each image data are mapped, and/or positioning position data in the physical coordinate system corresponding to the different positions in the physical space; and recording the obtained landmark position data and/or positioning position data in map data constructed based on the physical coordinate system.
  • A second aspect disclosed in the present application provides a method for positioning a mobile robot, including: controlling an image capturing device and a ranging sensing device to synchronously acquire image data and measurement data, respectively, at a first position of the mobile robot and at a second position different from the first position, wherein the first position is mapped to first positioning position data in the map data; and using the measurement data measured at the first position and the second position to analyze the moving state of the mobile robot in the physical space, so as to determine, when the mobile robot is at the second position, second positioning position data of the second position in the map data.
  • A third aspect disclosed in the present application provides a server, including: an interface device for data communication with a mobile robot; a storage device for storing at least one program; and a processing device, connected to the storage device and the interface device, for executing the at least one program so as to coordinate the storage device and the interface device to implement the method for constructing a map by a mobile robot according to any one of the first aspect disclosed in the present application, or the method for positioning a mobile robot according to any one of the second aspect disclosed in the present application.
  • A fourth aspect disclosed in the present application provides a mobile robot, including: an image capturing device for acquiring image data; a ranging sensing device for acquiring measurement data; a moving device for performing a moving operation; a storage device for storing at least one program; and a processing device, connected to the moving device, the image capturing device, the ranging sensing device, and the storage device, for executing the at least one program so as to execute the method for constructing a map according to any one of the first aspect disclosed in this application.
  • A fifth aspect disclosed in the present application provides a computer-readable storage medium storing at least one computer program which, when run by a processor, controls the device where the storage medium is located to execute the method for constructing a map by a mobile robot according to any one of the first aspect disclosed in the present application, or the method for positioning a mobile robot according to any one of the second aspect disclosed in the present application.
  • In summary, the positioning method for a mobile robot, the map construction method, the server, the mobile robot, and the storage medium of the present application combine the advantage that multiple sensors can obtain richer information about the physical space: they exploit the positioning capability of the image capturing device and the measurement capability of the ranging sensing device, and jointly optimize the errors of the multiple sensors, thereby providing a new method and structure for map construction and positioning and improving the accuracy and reliability of both.
  • FIG. 1 shows a schematic diagram of a method of constructing a map in the present application in one embodiment.
  • Figures 2a and 2b show a brief schematic diagram of a map constructed in this application in one embodiment.
  • FIG. 3 shows a schematic diagram of measurement data obtained by the mobile robot in the present application at two positions, respectively, in one embodiment.
  • FIG. 4 is a schematic diagram of another embodiment of the measurement data obtained by the mobile robot in the present application at two positions respectively.
  • FIG. 5 shows a schematic diagram of the positioning method of the mobile robot in the present application in one embodiment.
  • FIG. 6 is a schematic diagram of a server in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a module structure of a mobile robot in an embodiment.
  • Although the terms first, second, etc. are used herein to describe various positioning position data, information, or parameters, these data or parameters should not be limited by these terms. These terms are only used to distinguish one item of positioning position data or one parameter from another.
  • For example, without departing from the scope of the various described embodiments, first positioning position data may be referred to as second positioning position data, and, similarly, second positioning position data may be referred to as first positioning position data.
  • The first positioning position data and the second positioning position data both describe positioning position data, but unless the context clearly indicates otherwise, they are not the same positioning position data.
  • the word "if” as used herein can be interpreted as "at the time of" or "when".
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". Exceptions to this definition arise only when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
  • the positioning technologies of mobile robots mainly include the following: positioning based on dead reckoning (DR), positioning based on map matching, and positioning based on laser SLAM (Simultaneous Localization and Mapping, SLAM) or visual SLAM, etc.
  • An example of the positioning method based on dead reckoning uses an inertial measurement unit (IMU) and an odometer installed in the wheel set of the robot to measure the acceleration and angular velocity of the mobile robot during movement, and accumulates the increments of these data to derive the position of the mobile robot at a given moment relative to the starting moment, thereby positioning the mobile robot.
  • However, this method suffers from error accumulation. For example, during a large-area operation the wheel set passes over different floor materials (carpet, tile, wooden board, etc.), so the actual distance traveled differs from what the sensors report. These errors accumulate and grow over time, and without additional positioning data to correct them, the positioning of the mobile robot will eventually fail.
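  • As an illustration of the dead-reckoning principle described above (an explanatory sketch, not part of the original disclosure), the following Python fragment integrates hypothetical odometer and gyroscope increments into a planar pose, and shows how a small per-step bias such as wheel slip grows into a noticeable drift; all variable names and sample values are assumptions.

        import math

        def dead_reckon(pose, increments):
            """Integrate (distance, heading_change) increments into a planar pose.

            pose: (x, y, theta) in metres / radians.
            increments: iterable of (delta_s, delta_theta) from odometer / gyroscope.
            """
            x, y, theta = pose
            for delta_s, delta_theta in increments:
                theta += delta_theta                  # accumulate heading from the gyroscope
                x += delta_s * math.cos(theta)        # project the travelled distance
                y += delta_s * math.sin(theta)
            return x, y, theta

        # Hypothetical readings: 100 steps of 5 cm straight ahead.
        true_steps = [(0.05, 0.0)] * 100
        # The same motion sensed with 2% wheel slip: the error accumulates step by step.
        slipped_steps = [(0.05 * 0.98, 0.0)] * 100

        print(dead_reckon((0.0, 0.0, 0.0), true_steps))     # (5.0, 0.0, 0.0)
        print(dead_reckon((0.0, 0.0, 0.0), slipped_steps))  # (4.9, 0.0, 0.0): 10 cm drift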
  • an example of a localization method based on map matching includes using sensors on the mobile robot to detect the surrounding environment, constructing a local map, and matching it with the pre-stored complete map, so as to obtain the current position of the mobile robot in the entire environment.
  • However, this method is limited by the layout of the environment and is only applicable to environments whose sites are relatively simple and whose variations between locations are obvious.
  • Examples of positioning methods based on laser SLAM or visual SLAM use a laser detector or a camera on the mobile robot to detect the surrounding environment and, combined with laser SLAM or visual SLAM technology, measure the distance and orientation data of landmark features and build an environment map corresponding to the measured landmark features, so as to determine the movement trajectory and pose of the mobile robot.
  • In a family residence, the indoor space is small and the environmental similarity between locations is low.
  • There, the landmark features of the space close to the ceiling, such as ceilings, walls, and wardrobes, are easily converted into data through images and ranging so as to achieve positioning.
  • However, the above-mentioned work places have a much wider spatial range. If the above positioning technologies are used there, the objects within the viewing angle of the image capturing device may be far away from the mobile robot and the ranging sensor may not provide accurate distance data, so the map constructed by the mobile robot becomes inconsistent with the real environment.
  • In addition, the mobile robot cannot rely on the configured image capturing device alone to provide a stable source of landmark-feature image data for robot positioning.
  • Moreover, because the mobile robot fails to continuously detect matchable landmark features, its actual path during autonomous movement may not match the planned path.
  • To this end, an embodiment of the first aspect of the present application provides a method for constructing a map, in which errors are jointly optimized using the data provided respectively by the image capturing device and the ranging sensing device, so as to construct a map of the physical space where the mobile robot is located.
  • the method for constructing a map may be executed by a processor in the mobile robot, or may be executed by a server communicating with the mobile robot.
  • The mobile robot refers to an autonomous mobile device with the ability to build a map, including but not limited to one or more of: drones, industrial robots, home companion mobile devices, medical mobile devices, household cleaning robots, commercial cleaning robots, intelligent vehicles, and patrol mobile devices.
  • the physical space refers to the actual three-dimensional space in which the mobile robot is located, and can be described by abstract data constructed in the space coordinate system.
  • the physical space includes, but is not limited to, family residences, public places (eg, offices, shopping malls, hospitals, underground parking lots, and banks), and the like.
  • The physical space usually refers to an indoor space, that is, a space that has boundaries in the directions of length, width, and height. In particular, it includes physical spaces with a large spatial range and high scene repetition, such as shopping malls and waiting halls.
  • the mobile robot is provided with an image capturing device, wherein the image capturing device is a device for providing a two-dimensional image according to a preset pixel resolution.
  • The image capturing device includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system and a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
  • lenses that can be used by the camera or video camera include, but are not limited to, standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses.
  • The image capturing device may be disposed on the mobile robot with its main optical axis oriented between the horizontal plane and the vertical direction toward the ceiling, for example at any angle within the range of 0° to 90° from the vertical direction toward the ceiling.
  • For example, the image capturing device is assembled on the upper half of a commercial cleaning robot, with its main optical axis tilted obliquely upward at a preset angle so as to obtain image data within the corresponding viewing angle range.
  • the images captured by the image capturing device may be one or more of a single image, a continuous image sequence, a non-sequential image sequence, or a video. If the image capturing device captures an image sequence or video, one or more image frames can be extracted from the sequence or video as image data for subsequent processing.
  • the image data reflects the perception of the physical space where the mobile robot is located by the image capturing device.
  • In some cases, the image acquired by the image capturing device may be used directly as the image data; in other cases, the image acquired by the image capturing device may be processed to obtain the image data.
  • For example, when the image acquired by the image capturing device is a single image, the single image captured at the first position and at the second position can be used directly as the image data.
  • an image frame may be extracted from the image sequence or video as image data.
  • In some examples, the mobile robot stores the captured images in a local storage medium.
  • The storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, USB flash drives, removable hard disks, or any other medium that can be used to store the desired program code in the form of instructions or data structures and can be accessed.
  • the mobile robot transmits the captured image to an external device connected in communication for storage, and the communication connection includes a wired or wireless communication connection.
  • the external device may be a server located in the network, and the server includes but is not limited to one or more of a single server, a server cluster, a distributed server group, and a cloud server.
  • the cloud server may be a cloud computing platform provided by a cloud computing provider.
  • The service types of the cloud server include but are not limited to: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
  • The deployment types of the cloud server include but are not limited to: public cloud servers, private cloud servers, hybrid cloud servers, and the like.
  • The public cloud server is, for example, Amazon Elastic Compute Cloud (Amazon EC2), IBM's Blue Cloud, Google's AppEngine, or Windows Azure; the private cloud server is, for example, the Facebook cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, or the Tencent cloud computing platform.
  • the mobile robot is further provided with a ranging sensing device, and the ranging sensing device can measure the distance of the landmark feature in the physical space where the mobile robot is located relative to the mobile robot.
  • the landmark feature refers to a feature that is easy to distinguish from other objects in the physical space where the mobile robot is located.
  • For example, the landmark feature may be a table corner, the contour of a ceiling light on the ceiling, the junction between a wall and the ground, and the like.
  • Examples of the ranging sensing device include devices that provide one-dimensional measurement data, such as laser sensors and ultrasonic sensors, and devices that provide two-dimensional or three-dimensional measurement data, such as ToF sensors, multi-line lidars, millimeter-wave radars, and binocular camera devices.
  • The mobile robot may also be equipped with both of the above kinds of ranging sensing devices.
  • For example, a laser sensor can determine its distance from a landmark feature based on the time difference between emitting a laser beam and receiving its reflection; an ultrasonic sensor can determine the distance based on the echo of the sound wave it emits being bounced back by the landmark feature.
  • A binocular camera device can use the triangulation principle to determine the distance of the mobile robot relative to a landmark feature from the images captured by its two cameras. As another example, the infrared projector of a ToF (Time of Flight) sensor projects infrared light outward; the light is reflected after encountering the measured object and is received by a receiving module, and the depth information of the illuminated object is calculated by recording the time from emission of the infrared light to its reception.
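  • As a simple numerical illustration of the time-of-flight principle mentioned above (an explanatory sketch, not taken from the disclosure), the distance to the reflecting object is half of the round-trip time multiplied by the propagation speed; the timing values below are hypothetical.

        SPEED_OF_LIGHT = 299_792_458.0   # m/s, for laser / infrared ToF sensors
        SPEED_OF_SOUND = 343.0           # m/s in air, for ultrasonic sensors

        def tof_distance(round_trip_time_s, speed):
            """Distance = speed * time / 2, because the signal travels out and back."""
            return speed * round_trip_time_s / 2.0

        # Hypothetical measurements.
        print(tof_distance(20e-9, SPEED_OF_LIGHT))   # ~3.0 m for a 20 ns laser round trip
        print(tof_distance(0.0175, SPEED_OF_SOUND))  # ~3.0 m for a 17.5 ms ultrasonic echo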
  • the range sensing device generates corresponding measurement data by detecting entities in the surrounding environment of the mobile robot.
  • the ranging sensing device is installed on the body side of the mobile robot.
  • The ranging sensing device is used to detect, in the physical space, the environmental data near the edge of the area covered by the image data, or in the area that the image data cannot cover.
  • For example, the ranging sensing device may be disposed, without limitation, at a position 10-80 cm above the ground on the side of a commercial cleaning robot.
  • the ranging sensing device is mounted on one side of the mobile robot in the direction of travel, so that the mobile robot can learn about landmark features in the direction of travel so as to avoid or take other behavioral controls.
  • the ranging sensing device can also be installed at other positions of the mobile robot, as long as the distances relative to each landmark feature in the surrounding physical space can be obtained.
  • There may be one or more types of ranging sensing devices installed on the mobile robot.
  • a laser sensor may be installed on the mobile robot, a laser sensor and a binocular camera device may be installed at the same time, or a laser sensor, a binocular camera device and a ToF sensor may be installed at the same time.
  • the installed quantity of the same sensor can also be configured according to requirements to obtain measurement data in different directions.
  • the measurement data reflects the perception of the physical space where the mobile robot is located by the ranging sensing device.
  • the measurement data reflects the physical quantities with physical meanings in the physical space that can be detected by the ranging sensing device. For example, relative positions of entities in physical space relative to mobile robots, etc.
  • the physical meaning means that the measurement data can provide physical data in basic units such as time unit/length unit/angle unit.
  • the measurement data at least reflects the positional relationship between the mobile robot and the entities in the surrounding environment, wherein the measurement data includes the distances of each landmark feature that can be observed in the physical space where the mobile robot is located relative to the mobile robot and the relative orientation.
  • the data acquired by the ranging sensing device can be directly used as the measurement data; in other cases, the data acquired by the ranging sensing device can also be processed to obtain the measurement data.
  • For example, when the ranging sensing device is a laser sensor, the distance it detects relative to each landmark feature can be used directly as the measurement data; when the ranging sensing device is a binocular camera device, the images obtained by the binocular camera device are processed to obtain the distance relative to each landmark feature, which is used as the measurement data.
  • the measurement data also reflects the mobile robot's perception data of entities in its surroundings.
  • For example, the ranging sensing device is a laser sensor, which emits a light beam toward solid surfaces in the surrounding environment to detect a set of measurement points that reflect the beam, and acquires the corresponding measurement data.
  • the relative position is obtained by 3D reconstruction of the acquired measurement data through predetermined physical parameters.
  • It should be noted that the viewing angle range of the image capturing device and the measurement range of the ranging sensing device may not overlap, or may overlap at most partially. In other words, the execution of the steps in this application does not necessarily depend on landmark features detected by both the image capturing device and the ranging sensing device; even if there is no landmark feature common to the image capturing device and the ranging sensing device, the process of constructing the map in this application is not affected.
  • FIG. 1 shows a schematic diagram of a method of constructing a map in the present application in one embodiment.
  • the image capturing device and the ranging sensing device are controlled to acquire image data and measurement data simultaneously at at least two positions, respectively.
  • the mobile robot travels through a number of locations during its movement in the current physical space.
  • the image capturing device and the ranging sensing device of the mobile robot respectively acquire the image data and the measurement data corresponding to each position at each position.
  • the measurement data measured at at least two locations includes both fixed objects and possibly moving objects.
  • the image data acquired at the same two positions at least include image data of entities in the physical space whose vertical direction is higher than the detection range of the ranging sensing device, and may include image data of corresponding part of the measurement data.
  • the image data and measurement data belonging to the same location are acquired synchronously.
  • The synchronous acquisition may be achieved by having the mobile robot stop at multiple positions along the way to acquire image data and measurement data before continuing to move, or by acquiring the image data and measurement data synchronously while the mobile robot is moving.
  • The synchronous acquisition may be based on an external synchronization signal, according to which the image capturing device and the ranging sensing device synchronously acquire the image data and measurement data corresponding to a position; it may also be based on a synchronization signal within the image capturing device or the ranging sensing device, with which the image capturing device and the ranging sensing device synchronously acquire the image data and measurement data corresponding to the position.
  • For example, the control device in the mobile robot sends a synchronization signal to the image capturing device and the ranging sensing device at certain time intervals, so that the image capturing device and the ranging sensing device acquire image data and measurement data, respectively.
  • As another example, the image capturing device and the ranging sensing device are each provided with a clock module preset with the same clock signal mechanism so that synchronization signals are issued synchronously; upon receiving their respective synchronization signals, the image capturing device and the ranging sensing device perform the steps of acquiring image data and measurement data.
  • the external synchronization signal may also be generated based on the signal in the inertial navigation sensor of the mobile robot.
  • the inertial navigation sensor (IMU) is used to acquire inertial navigation data of the mobile robot.
  • the inertial navigation sensor includes, but is not limited to, one or more of a gyroscope, an odometer, an optical flow meter, and an accelerometer.
  • the inertial navigation data acquired by the inertial navigation sensor includes, but is not limited to, one or more of the speed data, acceleration data, moving distance, rolling circles of the roller, and deflection angle of the roller, etc. of the mobile robot.
  • For example, an IMU is configured in the driving component of the mobile robot (that is, the component, such as the wheel set, that moves the mobile robot forward); this IMU is connected to the lower computer of the mobile robot, and the lower computer is connected to the image capturing device. In addition, an IMU is also arranged in the ranging sensing device, and the ranging sensing device and the image capturing device are connected to the upper computer of the mobile robot.
  • The timestamps of the IMU in the ranging sensing device and the IMU in the driving component of the mobile robot are kept synchronized, so that when the IMU in the driving component generates a synchronization signal to the image capturing device to make it acquire image data, the IMU in the ranging sensing device also generates a synchronization signal at the same time to make the ranging sensing device acquire measurement data.
  • As another example, an IMU is configured in the driving component of the mobile robot (i.e., the component, such as the wheel set, that moves the mobile robot forward); the IMU is connected to the lower computer of the mobile robot, and the lower computer is connected to the image capturing device and the ranging sensing device, respectively.
  • the IMU sends a synchronization signal to the image capturing device and the ranging sensing device when it detects that the wheel set rotates a preset number of turns, so that the image capturing device can obtain image data and the ranging sensing device can obtain measurement data.
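  • One possible software-side counterpart of such synchronized acquisition (a hedged sketch, not the mechanism claimed in this disclosure) is to timestamp every image frame and every range scan against the shared clock and pair the samples whose timestamps agree within a tolerance; the data structures and the tolerance below are assumptions for illustration.

        from bisect import bisect_left

        def pair_by_timestamp(image_stamps, scan_stamps, tolerance_s=0.01):
            """Pair each image timestamp with the closest scan timestamp within a tolerance.

            Both inputs are sorted lists of seconds on a shared clock (e.g. driven by the
            same IMU-derived synchronization signal). Returns (image_idx, scan_idx) pairs.
            """
            pairs = []
            for i, t_img in enumerate(image_stamps):
                j = bisect_left(scan_stamps, t_img)
                # Candidate scans just before and just after the image timestamp.
                candidates = [k for k in (j - 1, j) if 0 <= k < len(scan_stamps)]
                if not candidates:
                    continue
                best = min(candidates, key=lambda k: abs(scan_stamps[k] - t_img))
                if abs(scan_stamps[best] - t_img) <= tolerance_s:
                    pairs.append((i, best))
            return pairs

        # Hypothetical 10 Hz image timestamps and slightly jittered scan timestamps.
        images = [0.0, 0.1, 0.2, 0.3]
        scans = [0.001, 0.102, 0.197, 0.35]
        print(pair_by_timestamp(images, scans))  # [(0, 0), (1, 1), (2, 2)]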
  • The mobile robot analyzes, at least by using the measurement data obtained at different positions, its movement state in the physical space, so as to obtain at least the landmark position data in the physical coordinate system onto which the landmark features in each image data are mapped, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
  • Both the measurement data and the image data obtained in step S110 include data helpful for positioning/building a map within different spatial height ranges of the physical space where the mobile robot is located, such as landmark data, distances/angles, etc.
  • Since the ranging sensing device and the image capturing device move as a whole with the mobile robot, the movement state determined at least from the measurement data obtained at different positions can solve the problem that the landmark features reflected in the image data have no physical scale in the set coordinate system, while using measurement data alone is not conducive to obtaining high-precision positioning.
  • An example of the set coordinate system is a camera coordinate system, or a virtual coordinate system that lacks a mapping relationship, determined based on a physical scale, with the physical coordinate system.
  • the moving state of the mobile robot described by the measurement data and the landmark data corresponding to the landmark features in the image data are used to calculate the landmark position data mapped from the landmark features captured by the image data to the physical coordinate system.
  • If there is no reference reflecting the actual movement of the mobile robot in the physical space, the mobile robot cannot, based on the image data alone, determine the physical distance of its own movement, the physical distance between itself and an entity, or other data reflecting positional relationships in the actual physical space.
  • Therefore, the mobile robot obtains its moving state in the physical space by analyzing the measurement data, which carries physical meaning; then, using the moving state and the preset assembly relationship between the image capturing device and the ranging sensing device on the mobile robot, the mobile robot constructs the correspondence between the landmark data in the image data and landmark position data in the physical coordinate system, thereby determining the landmark position data of the corresponding landmark features in the physical coordinate system.
  • The moving state includes at least one of the following: data related to the positioning of the mobile robot itself, data used to help the mobile robot determine its positioning, or data such as the physical scale of the mapping relationship between the change of the mobile robot's position in the physical space and the change of the pixel positions of the landmark features reflected in the image data.
  • The data related to the positioning of the mobile robot itself includes, but is not limited to: changes in the pose of the mobile robot relative to an entity, changes in position and posture between successive positions of the mobile robot, or information on each position the mobile robot passes through, etc.
  • The data used to help the mobile robot determine its positioning includes, but is not limited to: the relative positional relationship (such as pose change data) between the mobile robot and the landmark data corresponding to the same landmark feature in the measurement data, the landmark positions, etc.
  • the landmark feature described in the measurement data and the landmark feature described in the image data are not necessarily the same landmark feature.
  • the landmark data reflecting the landmark features in the measurement data is referred to as landmark measurement data
  • the landmark data reflecting the landmark features in the image data is referred to as landmark image data.
  • the manner in which the mobile robot analyzes the movement state of the mobile robot in the physical space by at least using the measurement data obtained at different positions includes the following examples:
  • the mobile robot obtains the corresponding movement state by analyzing the measurement data measured at different positions to reflect the change of its physical position.
  • the analysis process reflecting the change of its physical position includes a process of analyzing the movement behavior of the mobile robot by using measurement data with physical meaning.
  • For example, the mobile robot analyzes the values of the measurement data relative to the same entity at the at least two positions, the positional deviation between the measurement data relative to the same entity, and the like, to obtain the movement state, i.e., the position and posture change of the mobile robot caused by the movement.
  • the mobile robot calculates the moving state by using the landmark measurement data corresponding to the common landmark feature among the identified measurement data.
  • the measurement data obtained by the mobile robot at the first position and the second position respectively have at least one common landmark feature.
  • Based on the distances relative to the common landmark features, the moving state of the mobile robot can be calculated.
  • In the process of constructing the map starting from the mobile robot's preset initial position, when the positioning position information of the previous position in the physical coordinate system is known, the mobile robot uses the above example to obtain the movement state between the earlier and later positions, and thereby obtains the positioning position data corresponding to the at least two positions in the physical coordinate system, and/or the landmark position data in the physical coordinate system corresponding to the landmark features in the measurement data, etc. It should be understood that, since the measurement data acquired by the ranging sensing device has actual physical units, the calculated movement state also has actual physical units.
  • the mobile robot obtains the moving state by calculating the coincidence degree of each measurement data obtained at different positions.
  • the single measurement data provides the positional relationship between the entity part in the corresponding measurement plane and the mobile robot in the physical space.
  • When the mobile robot measures the same entity at different positions, the measurement data reflects the changes caused by the pose change of the mobile robot, that is, a change in the measured position of the same entity.
  • For example, the mobile robot can rotate, translate, and scale the measurement data obtained at one position so that it coincides with some of the measurement points in the measurement data obtained at another position.
  • This processing simulates the moving operation performed by the mobile robot, in order to make the corresponding measurement points coincide.
  • In this way, the mobile robot can derive the pose change data, the physical scale, and other components of its moving state between the different positions, and use the pose change data and other moving-state components, together with the physical quantities provided by the coincident measurement data, to determine further data of the moving state, such as the positioning position data corresponding to the at least two positions in the physical coordinate system, and/or the landmark position data in the physical coordinate system corresponding to the landmark measurement data.
  • In other words, the mobile robot optimizes the degree of coincidence between the measurement data obtained at different positions, so as to obtain at least the landmark position data in the physical coordinate system onto which the coincident measurement data meeting the coincidence condition are mapped, and/or the positioning position data in the physical coordinate system corresponding to the at least two positions at which the data were obtained. The mobile robot may regard the coincident part of the measurement data as the landmark data corresponding to landmark features.
  • Examples of the coincidence condition include that the number of iterative optimizations of the coincidence degree reaches a preset iteration threshold, and/or that the gradient value of the coincidence degree is smaller than a preset gradient threshold, and the like. This approach helps calculate the movement state from the physical quantities of the fixed objects reflected in each measurement data as much as possible.
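  • The coincidence-degree optimization described above is closely related to standard point-set registration. The following sketch (an illustrative assumption, not the algorithm disclosed here) aligns two sets of corresponding 2-D measurement points in closed form, returning the rotation and translation that minimize their positional deviation; these quantities correspond to the pose change of the robot between the two positions, and the residual plays the role of the coincidence error.

        import numpy as np

        def align_scans(points_a, points_b):
            """Find rotation R and translation t minimizing ||R @ a_i + t - b_i||^2.

            points_a, points_b: (N, 2) arrays of corresponding landmark measurement
            points observed at the first and the second position. Returns (R, t, residual).
            """
            a = np.asarray(points_a, dtype=float)
            b = np.asarray(points_b, dtype=float)
            ca, cb = a.mean(axis=0), b.mean(axis=0)       # centroids
            H = (a - ca).T @ (b - cb)                     # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
            R = Vt.T @ np.diag([1.0, d]) @ U.T
            t = cb - R @ ca
            residual = np.linalg.norm((a @ R.T + t) - b)  # "coincidence" error
            return R, t, residual

        # Hypothetical landmarks seen from the first position ...
        scan1 = np.array([[2.0, 0.0], [0.0, 3.0], [-1.0, -1.0]])
        # ... and the same landmarks after the robot rotated 10 deg and moved (0.5, 0.2) m.
        th = np.deg2rad(10.0)
        R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        scan2 = scan1 @ R_true.T + np.array([0.5, 0.2])

        R, t, err = align_scans(scan1, scan2)
        print(np.round(t, 3), round(float(err), 6))  # [0.5 0.2] and a residual near 0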
  • FIG. 3 is a schematic diagram of an embodiment of the measurement data obtained by the mobile robot in the present application at two positions respectively.
  • The mobile robot obtains measurement data MData_1 and MData_2 at the first position and the second position, respectively. If there is an entity with a fixed position in the physical space, the measurement data MData_1 and MData_2 contain measurement points corresponding to that entity which can be made to coincide after data processing.
  • As shown in the figure, entity 311 and entity 313 in the measurement data MData_1 and MData_2 can be made to overlap after data processing, while entity 312 cannot coincide with entity 311 and entity 313 at the same time; therefore entity 311 and entity 313 can be regarded as entities with fixed positions.
  • The data processing applied by the mobile robot to the two measurement data MData_1 and MData_2, which includes rotation, translation, and scaling, reflects the relative positional relationship between the first position and the second position of the mobile robot.
  • the mobile robot uses an optimization function constructed according to the two measurement data MData_1 and MData_2 to perform an optimal calculation of the coincidence degree until the coincidence degree of the two measurement data MData_1 and MData_2 meets the preset coincidence condition.
  • Thereby, the pose change data reflecting the relative positional relationship is obtained; under the preset coincidence condition, the coincident measurement data points in the two measurement data are taken as the landmark measurement data, and, based on the obtained pose change data, the landmark position data in the physical coordinate system onto which the landmark measurement data are mapped and/or the positioning position data corresponding to the at least two positions in the physical coordinate system are obtained. It should be noted that the above example is also applicable to the case where the measurement data is two-dimensional data.
  • In some examples, the mobile robot optimizes the degree of coincidence between the landmark measurement data contained in the measurement data obtained at different positions, so as to obtain the movement state of the mobile robot under the coincidence condition.
  • the optimization process is similar to the above example of one-dimensional data.
  • To this end, the method for constructing a map further includes the step of extracting landmark features from each measurement data, so as to analyze the movement state by using the landmark measurement data corresponding to the landmark features in each measurement data.
  • the landmark measurement data includes landmark measurement data corresponding to moving objects and landmark measurement data corresponding to stationary objects.
  • The measurement data obtained by the ranging sensing device includes the distances relative to each feature in the physical space, covering both static features (such as flower pots, coffee tables, and shelves) and moving features (such as people, pets, and moving shopping carts).
  • some features in the physical space are not suitable to be used as landmark features, for example, movable objects such as people and pets in the physical space. If these inappropriate features are marked in the map data, it will affect the subsequent positioning of the mobile robot, and easily lead to large errors. Therefore, it is necessary to filter each data in the measurement data to determine which ones can be used as landmark features, so as to use these extracted landmark features to analyze the moving state and improve the calculation accuracy.
  • the features that meet the requirements are selected as landmark features by means of a threshold.
  • For example, the measurement data obtained at the first position and the measurement data obtained at the second position have five common features a, b, c, d, and e. The numerical differences among the moving states calculated separately from a, b, c, and d are within 1 cm, while the numerical difference between the moving state calculated from e and the moving states calculated from the other features is in the range of 74-75 cm. If the screening threshold is set to 5 cm, then a, b, c, and d can be extracted as landmark features, while e is likely a moving feature and cannot be used as a landmark feature.
  • landmark features may also be extracted directly based on numerical changes in measurement data acquired at each location.
  • The changes of static features across the position data are relatively regular, whereas the changes of moving features differ noticeably from those of the static features. Therefore, a predetermined change threshold can be used to find, among the distances relative to each feature in the measurement data, the features whose values change regularly, and to use them as landmark features.
  • For example, the measurement data obtained at the first position and the measurement data obtained at the second position have five common features a, b, c, d, and e. At the first position, the distances of features a, b, c, d, and e relative to the mobile robot are 2 m, 3.4 m, 4.2 m, 2.8 m, and 5 m, respectively; at the second position, the distances of features a, b, c, d, and e relative to the mobile robot are 2.5 m, 3.8 m, 4.9 m, 3.6 m, and 2 m, respectively. The variations of features a, b, c, d, and e between the two positions are therefore 0.5 m, 0.4 m, 0.7 m, 0.8 m, and 3 m, respectively. With a preset change threshold of 0.5 m, the variations of a, b, c, and d are mutually consistent within the threshold, whereas the variation of e deviates from the others by far more than the threshold; therefore a, b, c, and d can be extracted as landmark features, while e is likely a moving feature and cannot be used as a landmark feature.
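  • A minimal filtering sketch along the lines of the example above (illustrative only; comparing each feature's distance change against the median change of all features is an assumed way of applying the 0.5 m threshold, not a rule stated in the disclosure):

        from statistics import median

        def filter_landmarks(dist_pos1, dist_pos2, change_threshold=0.5):
            """Keep features whose distance change between two positions is consistent
            with the bulk of the features (deviation from the median change within the
            threshold); strongly deviating features are treated as moving objects."""
            changes = {k: abs(dist_pos2[k] - dist_pos1[k]) for k in dist_pos1}
            typical = median(changes.values())
            return [k for k, c in changes.items() if abs(c - typical) <= change_threshold]

        # Distances (in metres) from the example above.
        pos1 = {"a": 2.0, "b": 3.4, "c": 4.2, "d": 2.8, "e": 5.0}
        pos2 = {"a": 2.5, "b": 3.8, "c": 4.9, "d": 3.6, "e": 2.0}

        print(filter_landmarks(pos1, pos2))  # ['a', 'b', 'c', 'd']; 'e' is rejected as moving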
  • In some examples, the coincidence degree is calculated by using the landmark measurement data corresponding to static landmark features in the measurement data, so as to determine the corresponding movement state. For example, the coincidence degree is calculated from the positional deviation between the landmark measurement data corresponding to the same static landmark feature at the different positions; the positional deviation between the landmark measurement data is minimized, and the corresponding movement state is determined when the coincidence-degree condition is met.
  • Alternatively, the coincidence-degree optimization is performed on all the landmark measurement data, including those of moving objects and of fixed objects; when the coincidence degree satisfies the coincidence-degree condition, the coincident landmark measurement data are determined to correspond to the landmark features of fixed objects.
  • FIG. 4 shows the measurement data obtained by the mobile robot in the present application at two positions respectively in another embodiment.
  • FIG. 4 shows the measurement data MData_3 and MData_4 obtained by the mobile robot at the first position and the second position, where the circles represent the landmark measurement data that match between the two measurement data, and the stars represent the landmark measurement data of one measurement data found to be coincident in a given round of coincidence calculation.
  • The mobile robot iteratively optimizes the coincidence until the number of coincident landmark measurement data, represented by the stars across successive coincidence calculations, satisfies the coincidence condition, and takes the movement state obtained in the corresponding round of coincidence calculation. Then, using the obtained movement state and the coincident landmark measurement data, the mobile robot obtains at least the landmark position data in the physical coordinate system onto which the landmark features corresponding to the landmark measurement data are mapped, and/or the positioning position data in the physical coordinate system corresponding to the at least two positions at which the data were acquired.
  • the mobile robot uses the movement state obtained in any of the above examples to determine the landmark position information in the physical coordinate system of the landmark feature corresponding to the landmark image data in the synchronously obtained image data.
  • the image capturing device also acquires image data synchronously at each position.
  • Based on the image data, the movement amount and rotation amount of the mobile robot in a set coordinate system, the relative positional relationship of the mobile robot, and the like are calculated. Assuming that, in the set coordinate system, the initial position of the mobile robot is taken as the coordinate origin, the coordinates of each positioning position of the mobile robot in the set coordinate system, and the coordinates in the set coordinate system of the landmark positions of the landmark features reflected by the image data, can be obtained.
  • Although the mobile robot can calculate, in the set coordinate system, the coordinates of each position it passes through and the coordinates of each landmark image feature, these coordinates lack actual physical units; that is, from its image data alone the mobile robot cannot know the actual physical positions in the physical space to which these coordinates correspond.
  • Therefore, the mobile robot converts the coordinates marked in the set coordinate system into the physical coordinate system by using the movement state with physical units calculated from the measurement data, so as to obtain the landmark position data in the physical coordinate system onto which the landmark features in each image data are mapped, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system, etc.
  • the moving state obtained by the mobile robot using the measurement data reflects the movement of the mobile robot relative to the landmark feature corresponding to the landmark image data in the image data.
  • The mobile robot maps the landmark image features from the set coordinate system to the physical coordinate system, so as to obtain the landmark position data in the physical coordinate system onto which the corresponding landmark features are mapped.
• Taking as an example the case in which the corresponding landmark features in the image data that meet the coincidence condition are mapped to landmark position data in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system are obtained, the execution process of obtaining each position data in the physical coordinate system is described as follows:
• The mobile robot converts the measurement data obtained at the first position and at the second position into the coordinate system of one of the measurement data, and iteratively calculates the coincidence degree until the coincidence degree of the two measurement data in that coordinate system meets the coincidence condition, thereby obtaining the movement state of the mobile robot under the coincidence condition.
• The movement state includes the physical scale between the relative positional relationship of the first position and the second position of the mobile robot and the measured positional relationship between the coincident landmark measurement data, the pose change data of the second position relative to the first position, and the like.
  • the physical scale also corresponds to the conversion relationship between the set coordinate system and the physical coordinate system (ie, the map scale).
• The conversion relationship between the set coordinate system and the physical coordinate system is determined based on the physical scale and the internal and external parameters of the image capturing device, wherein the internal and external parameters include the internal optical parameters and assembly parameters of the image capturing device, and the assembly parameters between the image capturing device and the ranging sensing device, etc.
• After obtaining the conversion relationship between the set coordinate system and the physical coordinate system, the mobile robot calculates the positioning position data, in the physical coordinate system, corresponding to each position of the mobile robot in the set coordinate system, and maps the corresponding landmark features in the set coordinate system to landmark position data in the physical coordinate system.
• The required data can be obtained according to actual requirements. For example, in some cases only the landmark position data of the landmark features mapped to the physical coordinate system need to be obtained; in some cases only the positioning position data, in the physical coordinate system, corresponding to each position in the physical space need to be obtained; and in some cases the landmark position data and the positioning position data need to be obtained at the same time.
• The coincidence condition may be a preset coincidence degree condition, or a locally optimal coincidence degree may be selected, which includes but is not limited to: the average coincidence degree of the common landmark features in the two measurement data, obtained over multiple coincidence iterations, is the highest; the standard deviation of the coincidence degrees of the common landmark features in the two measurement data is the smallest; or the gradient of the coincidence degree reaches a local minimum.
  • the positioning position data may also be determined according to the movement state obtained when the measurement data meets the coincidence condition, or the positioning position data may be determined after error correction according to the above two calculation methods.
• For example, the coordinates of T_wc1 in the set coordinate system are $T_{wc1} = \begin{bmatrix} R_{wc1} & t_{wc1} \\ 0 & 1 \end{bmatrix}$, and the corresponding coordinates in the real physical coordinate system are $\begin{bmatrix} R_{wc1} & s\,t_{wc1} \\ 0 & 1 \end{bmatrix}$; the coordinates of T_wc2 in the set coordinate system are $T_{wc2} = \begin{bmatrix} R_{wc2} & t_{wc2} \\ 0 & 1 \end{bmatrix}$, and the corresponding coordinates in the real physical coordinate system are $\begin{bmatrix} R_{wc2} & s\,t_{wc2} \\ 0 & 1 \end{bmatrix}$, where R_wc1 and R_wc2 both represent the rotation amount, t_wc1 and t_wc2 both represent the translation amount, and s is the scale factor to be determined.
  • the coordinates of each landmark feature in the set coordinate system and the coordinates corresponding to the physical coordinate system can also be obtained in a similar way. This will not be repeated.
• The relative pose between the two moments is $T_{c1c2} = T_{c1w} \cdot T_{wc2}$, where $T_{c1w}$ is the inverse of $T_{wc1}$ and $T_{wc2}$ is the position of the image capturing device in the physical coordinate system at time 2.
• The measurement data at time 2 are mapped into the coordinate system in which the measurement data at time 1 are located, to form simulated measurement data SData (the measurement data at time 1 may equally be mapped to time 2; the principle is the same and is not repeated), and the measurement data acquired by the ranging sensing device at time 1 are taken as the actual measurement data RData. It should be understood that by mapping the measurement data at time 2 to time 1, the positions and angles of the various landmark features at time 2 relative to their positions and angles at time 1 can be simulated theoretically.
• One of the simulated measurement data SData and the actual measurement data RData is adjusted by a scaling coefficient (i.e., the physical scale), so that the coincidence degree of SData and RData reaches the coincidence degree condition, thereby determining the value of the scaling coefficient; this scaling coefficient is the value of s in the physical-coordinate matrices above. Therefore, by substituting s, the coordinates of the mobile robot in the physical coordinate system at time 1 and time 2 (that is, the positioning position data) and the coordinates of each landmark feature in the physical coordinate system (that is, the landmark position data) can be determined.
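• For illustration only, the following Python sketch searches for the scaling coefficient s that brings the simulated measurement data SData into best coincidence with the actual measurement data RData, using the mean nearest-neighbour distance as a stand-in coincidence measure; the names and the grid-search strategy are assumptions, not the patented implementation.

```python
import numpy as np

def coincidence_error(sim, real):
    """Mean nearest-neighbour distance between two 2-D point sets
    (a smaller value means a higher degree of coincidence)."""
    d = np.linalg.norm(sim[:, None, :] - real[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def estimate_scale(sdata, rdata, candidates=np.linspace(0.1, 5.0, 200)):
    """Adjust the simulated data by a scaling coefficient and keep the value of s
    whose coincidence with the actual data is best."""
    errors = [coincidence_error(sdata * s, rdata) for s in candidates]
    return float(candidates[int(np.argmin(errors))])

# Toy data: RData is SData scaled by an unknown factor (~0.8) plus small noise
rng = np.random.default_rng(0)
sdata = rng.uniform(-1, 1, size=(50, 2))
rdata = 0.8 * sdata + rng.normal(0, 0.01, size=(50, 2))
print("estimated scale factor s:", estimate_scale(sdata, rdata))
```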
• The mobile robot analyzes the measurement data and the image data to reflect the change of its physical position, and obtains the corresponding movement state.
  • the present application considers the error in the image capturing device to jointly optimize the error of the image capturing device and the ranging sensing device, thereby improving the accuracy of constructing the map.
• The degree of coincidence between the measurement data acquired at different positions and the degree of coincidence between the acquired image data are jointly optimized, so as to obtain, under coincidence conditions that satisfy the joint optimization, the landmark position data to which the corresponding landmark features in the image data are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
• Each measurement data and each image data are adjusted respectively to optimize the degree of coincidence of the measurement data and the degree of coincidence of the image data; when the error between the movement states obtained from the respective degrees of coincidence meets the error condition, the corresponding movement states are used to determine the landmark position data to which the corresponding landmark features in the image data are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
  • An example of the error condition includes a preset error threshold, or a minimum error value selected based on a preset number of adjustments.
• For example, the measurement data obtained when the mobile robot is at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation; that is, the mapped measurement data determined using the current coincidence degree correspond to a simulated movement state of the mobile robot, and represent the measurement data at the second position inferred from the measurement data at the first position according to that movement state.
• Meanwhile, the movement state of the mobile robot in the set coordinate system (that is, the movement amount and rotation amount in the set coordinate system) is also calculated by comparing the landmark image data corresponding to the common landmark features in the image data captured by the image capturing device at the two positions.
  • the movement state obtained by using the image data and the movement state obtained by the measurement data correspond to the assembly parameters between the image capturing device and the ranging sensing device.
• The error between the two movement states is adjusted iteratively so that the adjusted error meets the error condition, and the two movement states are used to determine the landmark position data of the landmark features in the physical coordinate system, and/or the positioning position data of the first position and the second position in the physical coordinate system.
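• The following Python sketch is a minimal, hypothetical illustration of such an iterative error adjustment: it searches for the scale linking the image-derived movement state to the ranging-derived movement state and stops when a preset error threshold is met or the smallest error within a preset number of adjustments has been kept; all names and the search strategy are assumptions, not the patented procedure.

```python
import numpy as np

def joint_optimize(trans_meas, trans_img, rot_meas, rot_img,
                   err_threshold=1e-3, max_adjust=100):
    """Iteratively adjust the scale applied to the image-derived translation so that
    the error between the two movement states meets the error condition (a preset
    error threshold, or the minimum error found within a preset number of adjustments)."""
    best_s, best_err, s, step = 1.0, float("inf"), 1.0, 0.5
    for _ in range(max_adjust):
        err = np.linalg.norm(trans_meas - s * trans_img) + abs(rot_meas - rot_img)
        if err < best_err:
            best_s, best_err = s, err
        if best_err < err_threshold:
            break
        # probe both directions and move towards the smaller error, shrinking the step
        e_up = np.linalg.norm(trans_meas - (s + step) * trans_img) + abs(rot_meas - rot_img)
        e_dn = np.linalg.norm(trans_meas - (s - step) * trans_img) + abs(rot_meas - rot_img)
        s = s + step if e_up < e_dn else s - step
        step *= 0.8
    return best_s, best_err

# Movement state between the first and second positions:
# translation from the ranging data is metric, translation from the images is scale-free.
t_meas, r_meas = np.array([0.40, 0.10]), 0.05
t_img,  r_img  = np.array([0.80, 0.20]), 0.05
scale, residual = joint_optimize(t_meas, t_img, r_meas, r_img)
print("scale linking the two movement states:", scale, "residual error:", residual)
```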
• In addition, the coincident measurement data are used as the landmark measurement data corresponding to the landmark features, and the landmark position data of each landmark measurement data in the physical coordinate system can also be determined.
• In another example, the degree of coincidence between the measurement data of the two positions is adjusted, and the movement state obtained under the corresponding degree of coincidence determines the degree of coincidence of the landmark image data in the image data; when the degree of coincidence of the adjusted image data meets the coincidence condition, the corresponding movement state is used to determine the landmark position data to which the corresponding landmark features in the image data are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
• For example, the measurement data obtained when the mobile robot is at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation; that is, the mapped measurement data determined using the current coincidence degree correspond to a simulated movement state of the mobile robot, and represent the measurement data at the second position inferred from the measurement data at the first position according to that movement state.
  • the moving state obtained by using the image data and the moving state obtained by the measurement data are corresponding to the assembly parameters between the image capturing device and the ranging sensing device.
  • the degree of coincidence of the matched landmark image data in the image data is corresponding to the assembly parameters between the image capturing device and the ranging sensing device.
• In this way, the errors of the image capturing device and the ranging sensing device can be jointly optimized, so as to utilize the advantages of each sensor while avoiding the accumulation of errors and improving the accuracy of map construction.
  • the range-measuring sensing devices include multiple types, and each range-measuring sensing device synchronously acquires its own measurement data according to its own measurement method, so as to analyze the movement state.
  • the ranging sensing device includes any two or more of the following: a laser ranging sensing device, a binocular camera device, a ToF ranging sensing device, a structured light ranging sensing device, and the like.
  • the respective measurement data can be obtained synchronously based on the respective measurement methods of the ranging sensing devices, and the measurement data of the ranging sensing devices can be integrated to determine the movement of the mobile robot. state.
• In one example, the measurement data from the ranging sensing devices belonging to the same position can be fused, and the fused measurement data corresponding to each position are then optimized to obtain, under the coincidence condition between the measurement data, the landmark position data to which the landmark features are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
• Each ranging sensing device synchronously acquires measurement data at each position according to its own measurement method, and after the measurement data from each ranging sensing device are obtained, the measurement data belonging to the same position are fused so that the optimization processing is performed on the fused measurement data.
• For example, when the ranging sensing device includes a binocular camera device and a laser sensor, at the first position the binocular camera device uses the triangulation principle to determine the distance of the mobile robot relative to each landmark feature from the images captured by its two cameras, so as to obtain the first measurement data acquired by the binocular camera device; at the same time, the laser sensor determines its distance relative to the landmark features at the first position from the time difference between emitting and receiving the laser beam, so as to obtain the first measurement data acquired by the laser sensor. At the second position, the binocular camera device again uses the triangulation principle to determine the distance of the mobile robot relative to each landmark feature from the images captured by its two cameras, so as to obtain the second measurement data acquired by the binocular camera device; at the same time, the laser sensor determines its distance from the landmark features at the second position from the time difference between emitting and receiving the laser beam, so as to obtain the second measurement data acquired by the laser sensor.
• After the first measurement data from the two devices are fused and the second measurement data from the two devices are fused, the coincidence degree optimization processing is performed on the fused first measurement data and the fused second measurement data; the ways of obtaining each position data in the physical coordinate system are similar to those mentioned in the foregoing examples and are not described in detail here.
  • the fusion processing includes: performing landmark analysis on measurement data provided by different ranging sensing devices to obtain landmark measurement data corresponding to common landmark features. For example, the measurement data obtained by each ranging sensing device has a distance relative to the same landmark feature, and the distances detected by each ranging sensing device relative to the same landmark feature are processed by means of averaging or median.
  • the fusion processing further includes: performing interpolation, averaging and other processing on the measurement data provided by different ranging sensing devices according to the respective measured direction angles. For example, the measurement value of a certain voxel position measured by the binocular camera device and the measurement data of the corresponding voxel position measured by the single-point laser ranging sensing device are averaged.
• For another example, when the ranging value D1 at a certain direction angle measured by the single-point laser ranging sensing device corresponds to a direction angle between those of the measured values D21 and D22 of two voxel positions measured by the binocular camera device, the ranging value D1 is averaged with the measured values D21 and D22 respectively to obtain new measured values D21' and D22' for the two voxel positions, which replace the measured values D21 and D22.
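• A minimal sketch of this averaging rule (illustrative values and names only):

```python
def fuse_ranges(d1, d21, d22):
    """Fuse the single-point laser range D1 with the binocular measurements D21 and D22
    whose direction angles bracket it, by simple averaging (one possible fusion rule)."""
    d21_new = (d1 + d21) / 2.0   # replaces D21
    d22_new = (d1 + d22) / 2.0   # replaces D22
    return d21_new, d22_new

# Example values in metres (hypothetical)
print(fuse_ranges(2.00, 1.95, 2.08))   # -> (1.975, 2.04)
```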
• In another example, the coincidence degree optimization processing may be performed on the measurement data obtained from the different ranging sensing devices at the different positions, so as to obtain, under the condition that the coincidence degrees between the measurement data meet the coincidence conditions, the landmark position data to which the landmark features are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
• Each ranging sensing device synchronously acquires measurement data at each position according to its own measurement method; after the measurement data from each ranging sensing device are obtained, the movement state of the mobile robot is analyzed from the respective measurement data, that is, the coincidence degree of the measurement data obtained by each ranging sensing device at the different positions is optimized separately to obtain each movement state under the coincidence condition, and each position data in the physical coordinate system is then obtained in a manner similar to the foregoing, which is not described in detail here.
• In step S130, the obtained landmark position data and positioning position data are recorded in the map data constructed based on the physical coordinate system.
• After the landmark features in each image data are mapped to landmark position data in the physical coordinate system and/or the positioning position data corresponding to the at least two positions in the physical coordinate system are obtained, the obtained landmark position data and/or positioning position data are recorded in the map data constructed based on the physical coordinate system, thereby obtaining a map of the current physical space, which includes each position the mobile robot passes through as well as the position of each landmark feature.
  • the map data is data stored in a storage medium through a database, which can be displayed to the user through grids/unit vectors or the like.
  • each image data captured by the image pickup device at each location is also stored in the map for relocation.
• The relocation method includes: after the map data of the current physical space are constructed for the first time, during subsequent operation in the current physical space the mobile robot can compare the image data captured by the image capturing device at a certain position with the image data stored in the map data, so as to determine its own position in the physical space from the comparison result.
  • the map also stores historical measurement data (eg, point cloud data) acquired by the ranging sensing device at each location for relocation.
• The relocation method includes: after the map data of the current physical space are constructed for the first time, during subsequent operation in the current physical space the mobile robot can compare the image data captured by the image capturing device at a certain position with the image data stored in the map data, and compare the measurement data obtained by the ranging sensing device at that position with the measurement data stored in the map data, so as to determine its own position in the physical space from the comparison results.
• The comparison method for the measurement data includes but is not limited to the Iterative Closest Point (ICP) algorithm, which, by means of a spatial transformation, corresponds the points of the two point sets one by one so as to judge the similarity of the two point sets.
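• For illustration, a minimal 2-D ICP sketch is given below; it pairs each point of one set with its nearest point in the other set, solves the rigid transform by SVD, and reports the residual as a similarity measure. The implementation details (nearest-neighbour pairing, SVD solution, toy data) are assumptions rather than the patented method.

```python
import numpy as np

def icp_2d(src, dst, iters=30):
    """Minimal 2-D Iterative Closest Point: repeatedly pair each source point with its
    nearest destination point, solve the best rigid transform (SVD), and report the
    final mean residual as a similarity measure between the two point sets."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]                 # pair each point with its nearest neighbour
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:                   # guard against a reflection solution
            Vt[-1, :] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step          # accumulate the overall transform
    d_final = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
    residual = float(d_final.min(axis=1).mean())
    return R, t, residual

# Relocation check: compare the current scan with a scan stored in the map data
rng = np.random.default_rng(1)
stored = rng.uniform(0, 5, size=(60, 2))
theta = 0.2
Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
current = stored @ Rt.T + np.array([0.3, -0.1])
R, t, res = icp_2d(current, stored)
print("mean residual after ICP:", res)   # a small residual suggests the same place
```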
• In some examples, when the ranging sensing device includes a binocular camera device, the method further includes the step of storing, in the map data, each image captured by the binocular camera device in correspondence with the data of each positioning position of the mobile robot in the physical coordinate system.
  • each image captured by the binocular camera device at each position is also stored in the map for relocation.
• The relocation method includes: after the map data of the current physical space are constructed for the first time, during subsequent operation in the current physical space the mobile robot can compare the image captured by the binocular camera device at a certain position with the images stored in the map data, so as to determine its own position in the physical space from the comparison result.
  • each image data captured by the image pickup device at each location and each image captured by the binocular camera device are simultaneously stored in the map.
• The mobile robot can compare the image data captured by the image capturing device at a certain position with the image data stored in the map data, and compare the image captured by the binocular camera device with the images stored in the map data, so as to determine its own position in the physical space from the comparison results.
  • the method for constructing a map further includes the step of determining an environment boundary in the map data according to the measurement data measured by the range-finding sensing device.
• Examples of the environment boundary include, but are not limited to, space dividers such as doors, walls, and screens.
  • FIG. 2a and FIG. 2b show a brief schematic diagram of the map constructed in the present application in one embodiment.
  • the position and distance obtained by the ranging sensing device relative to each landmark feature are marked in the map data.
• Since the space divider is a continuous, uninterrupted object in space, when it is sensed by the ranging sensing device the measurement data present continuous or dense lines depending on the type of the ranging sensing device, thus forming the environment boundary 11 shown in FIG. 2a and FIG. 2b.
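• As an illustrative sketch only (assuming a 2-D scan ordered along the sweep direction; the grouping rule and thresholds are assumptions, not the patented criterion), dense runs of consecutive measurement points can be grouped into candidate environment boundaries as follows:

```python
import numpy as np

def boundary_segments(points, gap=0.05, min_pts=20):
    """Group consecutive scan points whose spacing stays below `gap` into dense runs;
    sufficiently long runs are treated as candidate environment boundaries
    (walls, doors, screens and other continuous space dividers)."""
    segments, start = [], 0
    for i in range(1, len(points)):
        if np.linalg.norm(points[i] - points[i - 1]) > gap:   # break in continuity
            if i - start >= min_pts:
                segments.append(points[start:i])
            start = i
    if len(points) - start >= min_pts:
        segments.append(points[start:])
    return segments

# Toy scan: a dense straight "wall" followed by a few isolated points
wall = np.column_stack([np.linspace(0, 2, 80), np.full(80, 1.0)])
clutter = np.array([[3.0, 0.2], [3.6, 0.9], [4.1, 1.7]])
scan = np.vstack([wall, clutter])
print(len(boundary_segments(scan)), "boundary segment(s) found")
```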
  • the horizontal axis and the vertical axis represent the X axis and the Y axis in the physical coordinate system, respectively
  • the discrete point (0,0) in Figure 2a represents the initial position of the mobile robot
• The other discrete points are the positions the mobile robot passes through during its movement and other landmark features in the physical environment.
• Although the above embodiments take the mobile robot acquiring image data and measurement data synchronously at two positions as an example, in practical applications three positions, four positions, etc. may also be used, which is not described in detail here.
• In summary, by synchronously acquiring measurement data reflecting environments of higher complexity, such as the environment at the side of the mobile robot's body, and image data reflecting environments of lower complexity, such as the environment above the mobile robot, the present application achieves wider environmental perception of the physical space; and the manner of constructing a map using both measurement data and image data alleviates problems such as low accuracy and low landmark density that arise when the map is constructed using either kind of data alone, which is helpful for accurate positioning.
  • An embodiment of the second aspect of the present application provides a positioning method to jointly optimize the error through data provided by the image capturing device and the ranging sensing device, respectively, to determine the position of the mobile robot in the physical space.
  • the positioning method may be executed by a processor in the mobile robot, or may be executed by a server communicating with the mobile robot.
  • map data of the current physical space may be pre-stored in the mobile robot or the server, and the current position of the mobile robot in the physical space may be determined based on the map data.
  • the current position of the mobile robot in the physical space can also be determined based on the data in the physical space obtained by the mobile robot at the current position and the data in the physical space obtained at the previous position, and Build map data of the current physical space while moving.
  • FIG. 5 is a schematic diagram of an embodiment of the positioning method of the mobile robot in the present application.
• The image capturing device and the ranging sensing device are controlled to acquire image data and measurement data synchronously at least at a first position and a second position of the mobile robot, respectively; wherein the first position is mapped to first positioning position data in the map data.
  • the mobile robot travels through a number of locations during its movement in the current physical space.
  • the image capturing device and the ranging sensing device of the mobile robot respectively acquire the image data and the measurement data corresponding to each position at each position.
  • the measurement data measured at at least two locations includes both fixed objects and possibly moving objects.
  • the image data acquired at the same two positions at least include image data of entities in the physical space whose vertical direction is higher than the detection range of the ranging sensing device, and may include image data of corresponding part of the measurement data.
  • the image data and measurement data belonging to the same location are acquired synchronously.
• The synchronous acquisition manner includes the mobile robot pausing at multiple positions along the way to acquire image data and measurement data and then continuing to move, so as to achieve synchronous acquisition of the image data and measurement data.
• The first position corresponds to the first positioning position data in the map data.
  • the first position may be a position passed by the mobile robot in the current moving operation.
  • the first position may also be a certain position stored in the map data by the mobile robot during historical movement operations.
• The synchronous acquisition manner may be based on an external synchronization signal so that the image capturing device and the ranging sensing device acquire the image data and measurement data corresponding to the position synchronously; it may also be based on a synchronization signal within the image capturing device or the ranging sensing device so that the two devices acquire the image data and measurement data corresponding to the position synchronously.
• For example, the control device in the mobile robot sends a synchronization signal to the image capturing device and the ranging sensing device at certain time intervals, so that the image capturing device and the ranging sensing device acquire image data and measurement data, respectively.
• For another example, the image capturing device and the ranging sensing device are each provided with a clock module, and the same clock signal mechanism is preset so that the signals are sent out synchronously; upon receiving their respective synchronization signals, the image capturing device and the ranging sensing device perform the steps of acquiring image data and measurement data.
  • the external synchronization signal may also be generated based on signals in the inertial navigation sensor of the mobile robot.
  • the inertial navigation sensor (IMU) is used to acquire inertial navigation data of the mobile robot.
  • the inertial navigation sensor includes, but is not limited to, one or more of a gyroscope, an odometer, an optical flow meter, and an accelerometer.
  • the inertial navigation data acquired by the inertial navigation sensor includes, but is not limited to, one or more of the speed data, acceleration data, moving distance, rolling circles of the roller, and deflection angle of the roller, etc. of the mobile robot.
  • an IMU is configured in the driving component of the mobile robot (that is, the component such as the wheel set for advancing the mobile robot), the IMU is connected with the lower computer of the mobile robot, and the lower computer is connected with the image capturing device; and, in An IMU is also arranged in the ranging sensing device, and the ranging sensing device and the image capturing device are connected with the upper computer of the mobile robot.
  • the time stamps of the IMU in the ranging sensing device and the IMU in the mobile robot driving assembly are kept synchronized, so that the IMU in the mobile robot driving assembly generates a synchronization signal to the image capturing device so that the image capturing device acquires image data , the IMU in the ranging sensing device also generates a synchronization signal at the same time, so that the ranging sensing device acquires measurement data.
  • an IMU is configured in the driving component of the mobile robot (ie, components such as a wheel set for moving the mobile robot forward), and the IMU is connected with the lower computer of the mobile robot, and the lower computer is respectively connected with the image capturing device and the distance measuring device. Induction device connection.
  • the IMU sends a synchronization signal to the image capturing device and the ranging sensing device when it detects that the wheel set rotates a preset number of turns, so that the image capturing device can obtain image data and the ranging sensing device can obtain measurement data.
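• The following Python sketch illustrates, under assumed names and a simplified interface, a trigger of this kind that emits one synchronization event per preset number of wheel turns; it is a sketch of the idea, not the disclosed hardware mechanism.

```python
class WheelSyncTrigger:
    """Illustrative synchronization trigger: when the IMU/odometer reports that the wheel
    set has completed a preset number of turns, emit one synchronization event so that the
    image capturing device and the ranging sensing device acquire data at the same moment."""

    def __init__(self, turns_per_trigger=1.0):
        self.turns_per_trigger = turns_per_trigger
        self._accumulated = 0.0

    def update(self, wheel_turns_delta, capture_image, capture_scan):
        self._accumulated += wheel_turns_delta
        while self._accumulated >= self.turns_per_trigger:
            self._accumulated -= self.turns_per_trigger
            capture_image()   # image capturing device acquires image data
            capture_scan()    # ranging sensing device acquires measurement data

# Usage with stand-in capture callbacks
trigger = WheelSyncTrigger(turns_per_trigger=2.0)
frames, scans = [], []
for delta in [0.7, 0.9, 0.6, 1.9, 0.1]:          # wheel turns reported by the IMU
    trigger.update(delta, lambda: frames.append("img"), lambda: scans.append("scan"))
print(len(frames), len(scans))                    # both devices triggered the same number of times
```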
• In step S220, at least the measurement data respectively measured at the first position and the second position are used to analyze the movement state of the mobile robot in the physical space, so as to determine second positioning position data, in the map data, of the second position at which the mobile robot is located relative to the first position.
  • both the measurement data and the image data acquired in step S210 include data helpful for positioning within different spatial height ranges in the physical space where the mobile robot is located, such as landmark data, distances/angles relative to landmark features, and the like.
• Since the ranging sensing device and the image capturing device move as a whole with the mobile robot, the movement state determined at least from the measurement data obtained at different positions can solve the problem that the landmark features reflected in the image data have no physical scale (for example, the positions of landmarks in the set coordinate system lack scale), while using the measurement data alone is not conducive to obtaining high-precision positioning.
  • An example of the set coordinate system includes a camera coordinate system, or a virtual coordinate system that lacks a mapping relationship determined based on a physical scale with the physical coordinate system.
• In other words, if there is no reference reflecting the actual movement of the mobile robot in the physical space, the mobile robot cannot, from the image data alone, determine data reflecting actual positional relationships in the physical space, such as the physical distance of its own movement or the physical distance between itself and an entity.
• To this end, the mobile robot obtains its movement state in the physical space by analyzing the measurement data, which have physical meaning; using this movement state and the preset assembly parameters between the image capturing device and the ranging sensing device assembled on the mobile robot, the mobile robot establishes the correspondence between the landmark data in the image data and the landmark position data in the physical coordinate system, thereby determining the landmark position data of the corresponding landmark features in the physical coordinate system.
  • the moving state includes at least one of the following: data related to the positioning of the mobile robot itself, data used to help the mobile robot determine positioning, or used to reflect the change of the moving position of the mobile robot in the physical space and the landmark feature reflected in the image data Data such as the physical scale of the mapping relationship between image pixel position changes.
  • the data related to the positioning of the mobile robot itself includes, but is not limited to: changes in the posture of the mobile robot relative to the entity, changes in the posture and posture between the front and rear positions of the mobile robot, or information on each position the mobile robot passes through, etc.
  • the data for helping the mobile robot to determine positioning includes, but is not limited to: the relative positional relationship (such as pose change data, etc.) between the landmark data corresponding to the same landmark feature in the measurement data and the mobile robot, the Landmark locations, etc.
  • the manner in which the mobile robot analyzes the movement state of the mobile robot in the physical space by at least using the measurement data obtained at different positions includes the following examples:
  • the mobile robot obtains the corresponding movement state by analyzing the measurement data measured at different positions to reflect the change of its physical position.
  • the analysis process reflecting the change of its physical position includes a process of analyzing the movement behavior of the mobile robot by using measurement data with physical meaning.
  • the mobile robot analyzes the data values of the measurement data relative to the same entity at at least two positions, and the positional deviation between the measurement data relative to the same entity, etc., to obtain the mobile robot's position and posture change caused by the movement. mobile state.
  • the mobile robot calculates the moving state by using the landmark measurement data corresponding to the common landmark feature among the identified measurement data.
  • the measurement data obtained by the mobile robot at the first position and the second position respectively have at least one common landmark feature.
  • the moving state of the mobile robot can be calculated relative to the distances of the common landmark features.
• For example, when the mobile robot presets the initial movement position as the starting point for positioning, or when the mobile robot knows the positioning position information of the previous position in the physical coordinate system, the movement state between the preceding and following positions obtained in the above example is used to obtain the positioning position data corresponding to the at least two positions in the physical coordinate system.
  • the mobile robot obtains the moving state by calculating the coincidence degree of each measurement data obtained at different positions.
  • the single measurement data provides the positional relationship between the entity part in the corresponding measurement plane and the mobile robot in the physical space.
• When the mobile robot measures the same entity at different positions, the change in the measured position of that entity in the measurement data reflects the pose change of the mobile robot. For example, the mobile robot can rotate, translate, and scale the measurement data obtained at one position so that they coincide with some of the measurement points in the measurement data of another position.
  • the processing process reflects the process of simulating a mobile robot to perform a moving operation in order to make the corresponding measurement points coincide.
• In this way, the mobile robot can simulate moving states such as the pose change data and physical scale of the mobile robot between the different positions, and use these moving states together with the physical quantities provided by the coincident measurement data to determine other data in the movement state, such as the second positioning position data to which the second position is mapped in the map data.
• To this end, the mobile robot optimizes the degree of coincidence between the measurement data obtained at the different positions, so as to obtain the second positioning position data corresponding to the second position in the map data under the coincidence condition between the measurement data.
  • the mobile robot may regard the overlapping measurement data part as the landmark data corresponding to the landmark feature.
  • An example of the coincidence condition includes that the number of iterative optimizations of the coincidence degree reaches a preset threshold of the number of iterations, and/or the gradient value of the coincidence degree is smaller than a preset gradient threshold value, and the like. This method is beneficial to calculate the movement state by reflecting the physical quantity of the fixed object in each measurement data as much as possible.
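• For illustration, the sketch below iteratively optimizes a candidate movement state (rotation, translation and scale) of one measurement data set relative to another, stopping when the iteration-count threshold is reached or the coincidence gradient falls below a preset gradient threshold; the numerical-gradient strategy and all names are assumptions, not the patented optimizer.

```python
import numpy as np

def coincidence(params, src, dst):
    """Coincidence error (mean nearest-neighbour distance) after rotating, translating
    and scaling `src` by the candidate movement state `params`; smaller = better."""
    theta, tx, ty, s = params
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    moved = s * (src @ R.T) + np.array([tx, ty])
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1).mean()

def optimize_coincidence(src, dst, max_iters=200, grad_threshold=1e-4, lr=0.05):
    """Iteratively optimize the simulated movement (rotation, translation, zoom) until the
    iteration-count threshold is reached or the coincidence gradient falls below the
    preset gradient threshold (the coincidence condition mentioned in the text)."""
    p = np.zeros(4); p[3] = 1.0                       # theta, tx, ty, scale
    eps = 1e-4
    for it in range(max_iters):
        grad = np.array([(coincidence(p + eps * e, src, dst) -
                          coincidence(p - eps * e, src, dst)) / (2 * eps)
                         for e in np.eye(4)])
        if np.linalg.norm(grad) < grad_threshold:     # gradient condition met
            break
        p -= lr * grad                                # numerical gradient descent step
    return p, coincidence(p, src, dst), it + 1

rng = np.random.default_rng(2)
mdata_1 = rng.uniform(-1, 1, size=(40, 2))            # measurement data at position 1
theta_true = 0.3
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
mdata_2 = 1.1 * (mdata_1 @ R_true.T) + np.array([0.2, -0.1])   # data at position 2
params, err, iters = optimize_coincidence(mdata_1, mdata_2)
print("candidate (theta, tx, ty, s):", np.round(params, 3), "error:", round(err, 4))
```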
  • FIG. 3 is a schematic diagram of an embodiment of the measurement data obtained by the mobile robot in the present application at two positions respectively.
• The mobile robot obtains measurement data MData_1 and MData_2 at the first position and the second position, respectively; if there is an entity with a fixed position in the physical space, MData_1 and MData_2 contain corresponding measurement points that can be made to coincide after data processing.
• From the measurement points of entity 311, entity 312 and entity 313 in MData_1 and MData_2 in the figure, it can be seen that entity 311 and entity 313 in the two measurement data can be made to coincide after data processing, while entity 312 cannot coincide with entity 311 and entity 313 at the same time; therefore entity 311 and entity 313 can be regarded as entities with fixed positions.
  • the data processing of the two measurement data MData_1 and MData_2 by the mobile robot including rotation, translation, zoom, etc., reflects the relative positional relationship between the first position and the second position of the mobile robot.
  • the mobile robot uses an optimization function constructed according to the two measurement data MData_1 and MData_2 to perform an optimal calculation of the coincidence degree until the coincidence degree of the two measurement data MData_1 and MData_2 meets the preset coincidence condition.
• The pose change data reflecting the relative positional relationship are thus obtained, and under the preset coincidence condition the coincident measurement data points in the two measurement data are used as the landmark measurement data; based on the obtained pose change data, the second positioning position data corresponding to the second position in the map data are obtained under the condition of coincidence between the measurement data.
  • the above example is also applicable to the calculation processing in which the measurement data is two-dimensional data.
• The movement state of the second position relative to the first position can be used, based on the first positioning position data of the first position, to calculate the second positioning position data to which the second position is mapped in the map data.
• In the current moving operation, the mobile robot is located at the second position, and the second position shares common landmark features with the first position; therefore the movement state of the second position relative to the first position can be determined based on the common landmark features, and the second positioning position data corresponding to the second position are determined based on the first positioning position data corresponding to the first position and the movement state.
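• A minimal sketch of this composition (hypothetical names; planar poses assumed) is:

```python
import numpy as np

def compose_pose(pose1, delta):
    """Given the first positioning position data (x, y, heading) and the movement state of
    the second position relative to the first (dx, dy, dtheta expressed in the first pose's
    frame), return the second positioning position data in the map data."""
    x, y, th = pose1
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

first_positioning = (1.0, 2.0, np.pi / 2)     # known from the map data
movement_state = (0.5, 0.0, 0.1)              # derived from the common landmark features
print(compose_pose(first_positioning, movement_state))   # second positioning position data
```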
• When the second position where the mobile robot is currently located is the same as, or very close to, the corresponding first position in the map data, the first positioning position data corresponding to the first position can be determined as the second positioning position data corresponding to the second position.
• The mobile robot optimizes the degree of coincidence between the landmark measurement data in the measurement data obtained at the different positions, so as to obtain the movement state of the mobile robot under the coincidence condition between the measurement data.
  • the optimization process is similar to the above example of one-dimensional data.
• In some embodiments, the positioning method further includes the step of extracting landmark features from each measurement data, so that the landmark measurement data corresponding to the landmark features in each measurement data are used for the analysis that determines the movement state.
  • the landmark measurement data includes landmark measurement data corresponding to moving objects and landmark measurement data corresponding to fixed objects.
  • the measurement data obtained by the ranging sensing device includes its distance relative to each feature in the physical space. It includes some static features (such as flower pots, coffee tables, shelves, etc.) and some moving features (such as people, pets, shopping carts in progress, etc.).
  • some features in the physical space are not suitable to be used as landmark features, for example, movable objects such as people and pets in the physical space. If these inappropriate features are marked in the map data, it will affect the subsequent positioning of the mobile robot, and easily lead to large errors. Therefore, it is necessary to filter each data in the measurement data to determine which ones can be used as landmark features, so as to use these extracted landmark features to analyze the moving state and improve the calculation accuracy.
  • the features that meet the requirements are selected as landmark features by means of a threshold.
• For example, the measurement data obtained at the first position and the measurement data obtained at the second position have five common features a, b, c, d, and e, wherein the numerical differences between the movement states calculated from a, b, c, and d are within 1 cm, while the numerical difference between the movement state calculated from e and the movement states calculated from the other features is in the range of 74-75 cm. If the screening threshold is set to 5 cm, then a, b, c, and d can be extracted as landmark features, while e may be a moving feature and cannot be used as a landmark feature.
  • landmark features may also be extracted directly based on numerical changes in measurement data acquired at each location.
  • the changes of static features in each position data are relatively regular, while the changes of moving features are relatively different from those of other static features. Therefore, a predetermined change threshold can be used to find a feature whose numerical value changes relatively regularly in the measurement data relative to the distance of each feature, and use it as a landmark feature.
• For example, the measurement data obtained at the first position and the measurement data obtained at the second position have five common features a, b, c, d, and e. At the first position, the distances of features a, b, c, d, and e relative to the mobile robot are 2 m, 3.4 m, 4.2 m, 2.8 m, and 5 m respectively; at the second position, the distances of features a, b, c, d, and e relative to the mobile robot are 2.5 m, 3.8 m, 4.9 m, 3.6 m, and 2 m respectively. The variation of feature a between the two positions is therefore 0.5 m, that of feature b is 0.4 m, that of feature c is 0.7 m, that of feature d is 0.8 m, and that of feature e is 3 m. If the preset change threshold is 0.5 m, then a, b, c, and d can be extracted as landmark features, while e may be a moving feature and cannot be used as a landmark feature.
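• The sketch below reproduces this example in Python; note that it interprets the change threshold as a bound on each feature's deviation from the typical (median) variation, which is one reading of "changes relatively regularly" and an assumption rather than the patented rule.

```python
import numpy as np

# Distances (metres) of the five common features relative to the mobile robot
dist_pos1 = {"a": 2.0, "b": 3.4, "c": 4.2, "d": 2.8, "e": 5.0}
dist_pos2 = {"a": 2.5, "b": 3.8, "c": 4.9, "d": 3.6, "e": 2.0}

def extract_landmarks(d1, d2, change_threshold=0.5):
    """Keep features whose change between the two positions stays close to the typical
    ("relatively regular") change; the typical change is taken here as the median,
    which is an interpretation of the text, not the disclosed rule."""
    variation = {k: abs(d2[k] - d1[k]) for k in d1}
    typical = float(np.median(list(variation.values())))
    return [k for k, v in variation.items() if abs(v - typical) <= change_threshold]

print(extract_landmarks(dist_pos1, dist_pos2))   # -> ['a', 'b', 'c', 'd']; 'e' is rejected
```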
• The coincidence degree is calculated using the landmark measurement data corresponding to the static landmark features in the measurement data, so as to determine the corresponding movement state. For example, the coincidence degree is calculated from the positional deviation between the landmark measurement data corresponding to the same static landmark feature at the different positions, so that the positional deviation between the landmark measurement data is minimized, and the corresponding movement state is determined when the coincidence degree condition is met.
  • the coincidence degree optimization is performed on each landmark measurement data including the moving object and the fixed object, and when the coincidence degree satisfies the coincidence degree condition, it is determined that the coincident landmark measurement data corresponds to the landmark feature of the fixed object.
  • FIG. 4 shows the measurement data obtained by the mobile robot in the present application at two positions respectively in another embodiment.
• The measurement data MData_3 and MData_4 are obtained by the mobile robot at the first position and the second position respectively, wherein the circles represent the matching landmark measurement data in the two measurement data, and the stars represent the landmark measurement data in one measurement data that coincide with the other measurement data after the coincidence calculation.
• The mobile robot iteratively optimizes the coincident landmark measurement data so that the number of landmark measurement data represented by the stars in successive coincidence calculations reaches the coincidence condition, and determines the movement state obtained in the corresponding coincidence calculation; using the obtained movement state and the coincident landmark measurement data, the second positioning position data corresponding to the second position in the map data are obtained under the coincidence condition between the measurement data.
• The mobile robot uses the movement state obtained in any of the above examples to determine the landmark position information, in the physical coordinate system, of the landmark features corresponding to the landmark image data in the synchronously obtained image data.
  • the image capturing device also acquires image data synchronously at each position.
• From the image data acquired at each position, the movement amount and rotation amount of the mobile robot in the set coordinate system, the relative positional relationship of the mobile robot, etc. are calculated. Assuming that the initial position of the mobile robot is taken as the coordinate origin of the set coordinate system, the coordinates of each positioning position of the mobile robot in the set coordinate system, and the coordinates in the set coordinate system of the landmark positions of the landmark features reflected by the image data, can be obtained.
• Although the mobile robot can calculate, in its set coordinate system, the coordinates of each position it passes through and the coordinates of each landmark image feature, these coordinates lack actual physical units; that is, the mobile robot cannot know the actual physical quantities corresponding to the coordinates derived from its image data.
• Therefore, the mobile robot converts the coordinates of each position marked in the set coordinate system to the physical coordinate system by using the movement state with physical units calculated from the measurement data, so as to obtain, under the coincidence condition between the measurement data, the second positioning position data corresponding to the second position in the map data, etc.
  • the moving state obtained by the mobile robot using the measurement data reflects the movement of the mobile robot relative to the landmark feature corresponding to the landmark image data in the image data.
• Based on this, the mobile robot maps the landmark image features in the set coordinate system to the physical coordinate system, so as to obtain the landmark position data to which the corresponding landmark features are mapped in the physical coordinate system.
• Taking as an example the case in which the mobile robot performs optimization processing on the degree of coincidence between the measurement data obtained at the different positions and obtains, under the coincidence condition between the measurement data, the second positioning position data corresponding to the second position in the map data, the execution process of obtaining the second positioning position data in the physical coordinate system is described as follows:
• The measurement data obtained by the mobile robot at the first position and at the second position are converted into the coordinate system of one of the measurement data, and the coincidence degree is iteratively calculated until the coincidence degree of the two measurement data in that coordinate system meets the coincidence condition, thereby obtaining the movement state of the mobile robot under the coincidence condition.
• The movement state includes the physical scale between the relative positional relationship of the first position and the second position of the mobile robot and the measured positional relationship between the coincident landmark measurement data, the pose change data of the second position relative to the first position, and the like.
  • the physical scale also corresponds to the conversion relationship between the set coordinate system and the physical coordinate system (ie, the map scale).
• The conversion relationship between the set coordinate system and the physical coordinate system is determined based on the physical scale and the internal and external parameters of the image capturing device, wherein the internal and external parameters include the internal optical parameters and assembly parameters of the image capturing device, and the assembly parameters between the image capturing device and the ranging sensing device, etc.
• After obtaining the conversion relationship between the set coordinate system and the physical coordinate system, the mobile robot calculates the positioning position data, in the physical coordinate system, corresponding to each position of the mobile robot in the set coordinate system.
• The coincidence condition may be a preset coincidence degree condition, or a locally optimal coincidence degree may be selected, which includes but is not limited to: the average coincidence degree of the common landmark features in the two measurement data is the highest; the standard deviation of the coincidence degrees of the common landmark features in the two measurement data is the smallest; or the gradient of the coincidence degree reaches a local minimum.
  • the positioning position data may also be determined according to the movement state obtained when the measurement data meets the coincidence condition, or the positioning position data may be determined after error correction according to the above two calculation methods.
• For example, let T_wc1 and T_wc2 denote the positions of the image capturing device itself, in the set coordinate system, in the current physical space at two successive moments (time 1 and time 2), and let T_wb1 and T_wb2 denote the positions of the ranging sensing device in the set coordinate system at the same moments. The coordinates of T_wc1 in the set coordinate system (in matrix form) are $T_{wc1} = \begin{bmatrix} R_{wc1} & t_{wc1} \\ 0 & 1 \end{bmatrix}$, and the corresponding coordinates in the real physical coordinate system are $\begin{bmatrix} R_{wc1} & s\,t_{wc1} \\ 0 & 1 \end{bmatrix}$; the coordinates of T_wc2 in the set coordinate system are $T_{wc2} = \begin{bmatrix} R_{wc2} & t_{wc2} \\ 0 & 1 \end{bmatrix}$, and the corresponding coordinates in the real physical coordinate system are $\begin{bmatrix} R_{wc2} & s\,t_{wc2} \\ 0 & 1 \end{bmatrix}$, where R_wc1 and R_wc2 both represent the rotation amount, t_wc1 and t_wc2 both represent the translation amount, and s is the scale factor to be determined.
  • the coordinates of each landmark feature in the set coordinate system and the coordinates corresponding to the physical coordinate system can also be obtained in a similar way. This will not be repeated.
• The relative pose between the two moments is $T_{c1c2} = T_{c1w} \cdot T_{wc2}$, where $T_{c1w}$ is the inverse of $T_{wc1}$ and $T_{wc2}$ is the position of the image capturing device in the physical coordinate system at time 2.
• The measurement data at time 2 are mapped into the coordinate system in which the measurement data at time 1 are located, to form simulated measurement data SData (the measurement data at time 1 may equally be mapped to time 2; the principle is the same and is not repeated), and the measurement data acquired by the ranging sensing device at time 1 are taken as the actual measurement data RData. It should be understood that by mapping the measurement data at time 2 to time 1, the positions and angles of the various landmark features at time 2 relative to their positions and angles at time 1 can be simulated theoretically.
• One of the simulated measurement data SData and the actual measurement data RData is adjusted by a scaling coefficient (i.e., the physical scale), so that the coincidence degree of SData and RData reaches the coincidence degree condition, thereby determining the value of the scaling coefficient; this scaling coefficient is the value of s in the physical-coordinate matrices above. Therefore, by substituting s, the second positioning position data of the second position in the map data can be determined.
• The mobile robot analyzes the measurement data and the image data to reflect the change of its physical position, and obtains the corresponding movement state.
  • the present application considers the error in the image pickup device to jointly optimize the error of the image pickup device and the ranging sensing device, thereby improving the accuracy of positioning.
• A joint optimization process is performed on the degree of coincidence between the measurement data acquired at the different positions and the degree of coincidence between the acquired image data, so as to obtain the second positioning position data of the second position in the map data.
• Each measurement data and each image data are adjusted respectively to optimize the degree of coincidence of the measurement data and the degree of coincidence of the image data; when the error between the movement states obtained from the respective degrees of coincidence meets the error condition, the corresponding movement state is used to determine the second positioning position data corresponding to the second position in the map data.
  • An example of the error condition includes a preset error threshold, or a minimum error value selected based on a preset number of adjustments.
• For example, the measurement data obtained when the mobile robot is at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation; that is, the mapped measurement data determined using the current coincidence degree correspond to a simulated movement state of the mobile robot, and represent the measurement data at the second position inferred from the measurement data at the first position according to that movement state.
• Meanwhile, the movement state of the mobile robot in the set coordinate system (that is, the movement amount and rotation amount in the set coordinate system) is also calculated by comparing the landmark image data corresponding to the common landmark features in the image data captured by the image capturing device at the two positions.
  • the movement state obtained by using the image data and the movement state obtained by the measurement data correspond to the assembly parameters between the image capturing device and the ranging sensing device.
• The error between the two movement states is adjusted iteratively so that the adjusted error meets the error condition, and the two movement states are used to determine the landmark position data of the landmark features in the physical coordinate system, and/or the positioning position data of the first position and the second position in the physical coordinate system.
• In addition, the coincident measurement data are used as the landmark measurement data corresponding to the landmark features, and the landmark position data of each landmark measurement data in the physical coordinate system can also be determined.
  • the degree of coincidence between the measurement data at the two positions is adjusted, and the movement state obtained under the corresponding degree of coincidence determines the degree of coincidence of the landmark image data in the image data; when the degree of coincidence of the adjusted image data meets the coincidence condition, the corresponding movement state is used to determine the landmark position data obtained by mapping the corresponding landmark features in the image data into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
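  • A minimal sketch of mapping one landmark observation into the physical coordinate system given a robot pose, assuming planar motion and range/bearing observations (the averaging of the two mappings is a crude illustrative fusion, not the application's optimization):

```python
import numpy as np

def landmark_world_position(pose, rng, bearing):
    """Map one range/bearing observation of a landmark into the physical frame.

    pose = (x, y, theta) of the robot in the physical coordinate system;
    rng and bearing are the landmark's distance and direction angle
    relative to the robot.
    """
    x, y, theta = pose
    return np.array([x + rng * np.cos(theta + bearing),
                     y + rng * np.sin(theta + bearing)])

p1 = landmark_world_position((0.0, 0.0, 0.0), 2.0, np.pi / 4)   # from position 1
p2 = landmark_world_position((1.0, 0.0, 0.0), 1.6, np.pi / 3)   # from position 2
landmark_xy = (p1 + p2) / 2.0
```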
  • likewise, the measurement data obtained when the mobile robot is at the first position is mapped to the second position, and the simulated measurement data corresponding to the second position is obtained by calculating the degree of coincidence; the mapped measurement data determined under the current degree of coincidence corresponds to a simulated movement state of the mobile robot, and represents the measurement data at the second position inferred from the measurement data at the first position according to the corresponding movement state.
  • the movement state obtained from the image data and the movement state obtained from the measurement data are brought into correspondence by means of the assembly parameters between the image capturing device and the ranging sensing device.
  • the degree of coincidence of the matched landmark image data in the image data is likewise related, through the assembly parameters between the image capturing device and the ranging sensing device, to the degree of coincidence of the measurement data.
  • the degree of coincidence obtained from the measurement data and the degree of coincidence obtained from the image data are evaluated to determine whether the evaluation result meets the coincidence conditions; if so, each position is mapped into the physical coordinate system with the corresponding movement state, and if not, the coincidence of the measurement data continues to be adjusted until the coincidence conditions are met.
  • the errors of the image capturing device and the ranging sensing device can be jointly optimized, so that the advantages of each sensor are used while the accumulation of errors is avoided, and the positioning accuracy is improved.
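  • A minimal sketch of fusing the two movement-state estimates and checking the error between them; the weighted average below merely stands in for a real joint optimization (typically a nonlinear least-squares back end), and the weights and threshold are illustrative:

```python
import numpy as np

def joint_motion_estimate(motion_from_scans, motion_from_images,
                          w_scan=0.5, w_img=0.5, error_threshold=0.05):
    """Fuse two estimates of the same movement state (dx, dy, dtheta):
    one from the coincidence of the ranging measurement data, one from
    the coincidence of the landmark image data."""
    scan = np.asarray(motion_from_scans, dtype=float)
    img = np.asarray(motion_from_images, dtype=float)
    error = float(np.linalg.norm(scan - img))
    fused = (w_scan * scan + w_img * img) / (w_scan + w_img)
    return fused, error, error < error_threshold
```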
  • the measurement data from the different ranging sensing devices that belong to the same position can be fused, and the fused measurement data corresponding to each position can then be optimized for the degree of coincidence, so as to obtain results that are consistent across the measurement data.
  • each ranging sensing device synchronously acquires measurement data at each position according to its own measurement method; after the measurement data from each ranging sensing device is obtained, the measurement data from the different ranging sensing devices that belong to the same position are fused, so that the optimization processing is performed on the fused measurement data.
  • for example, the ranging sensing devices include a binocular camera device and a laser sensor.
  • at the first position, the binocular camera device uses the triangulation principle to determine the distance of the mobile robot relative to each landmark feature from the images captured by its two cameras, so as to obtain the first measurement data acquired by the binocular camera device; the laser sensor likewise determines its distance relative to the landmark features at the first position from the time difference between emitting the laser beam and receiving it back, so as to obtain the first measurement data acquired by the laser sensor.
  • the binocular camera device uses the triangulation principle to determine the distance of the mobile robot relative to each landmark feature according to the images captured by its two cameras to obtain the second measurement data obtained by the binocular camera device;
  • the laser sensor also determines its distance from the landmark feature at the second location based on the time difference between the time it emits the laser beam and the time it receives the laser beam to obtain second measurement data acquired by the laser sensor.
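  • A minimal sketch of the two ranging principles just described, with purely illustrative numbers (pinhole stereo model and round-trip time of flight):

```python
def binocular_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation (pinhole model): depth = f * B / disparity."""
    return focal_px * baseline_m / disparity_px

def laser_range(round_trip_time_s: float, c: float = 299_792_458.0) -> float:
    """Time-of-flight ranging: distance = c * t / 2."""
    return c * round_trip_time_s / 2.0

print(binocular_depth(700.0, 0.06, 14.0))   # ~3.0 m
print(laser_range(20e-9))                    # ~3.0 m
```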
  • the coincidence degree optimization processing is performed based on the fused first measurement data and the fused second measurement data; there are various ways to obtain the position data in the physical coordinate system, which will not be described in detail here.
  • the fusion processing includes: performing landmark analysis on the measurement data provided by the different ranging sensing devices to obtain landmark measurement data corresponding to common landmark features. For example, the measurement data obtained by each ranging sensing device contains a distance relative to the same landmark feature, and the distances detected by the ranging sensing devices relative to that same landmark feature are combined by averaging or by taking the median.
  • the fusion processing further includes: performing interpolation, averaging and other processing on the measurement data provided by different ranging sensing devices according to the respective measured direction angles. For example, the measurement value of a certain voxel position measured by the binocular camera device and the measurement data of the corresponding voxel position measured by the single-point laser ranging sensing device are averaged.
  • the ranging value D1 at a certain direction angle measured by the single-point laser ranging sensing device corresponds to a direction angle lying between those of the measured values D21 and D22 of two voxel positions measured by the binocular camera device; the ranging value D1 is then averaged with D21 and with D22 respectively to obtain new measurement values D21' and D22' for the two voxel positions, which replace the measurement values D21 and D22.
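  • A minimal sketch of that averaging step (function and variable names are illustrative):

```python
def fuse_overlapping_ranges(d1: float, d21: float, d22: float):
    """Average a single-point laser range D1 with the two overlapping stereo
    voxel ranges D21 and D22 whose direction angles bracket it, yielding the
    replacement values D21' and D22'."""
    return (d1 + d21) / 2.0, (d1 + d22) / 2.0

print(fuse_overlapping_ranges(2.0, 2.1, 1.9))   # -> (2.05, 1.95)
```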
  • the coincidence degree optimization processing may also be performed directly on the measurement data obtained from the different ranging sensing devices at the different positions, and the second positioning position data of the second position in the map data is obtained under the condition that the degrees of coincidence between the measurement data meet the coincidence conditions.
  • each ranging sensing device synchronously acquires measurement data at each position according to its own measurement method; after the measurement data from each ranging sensing device is obtained, the movement state of the mobile robot is analyzed from each device's measurement data, i.e., the degree of coincidence of the measurement data obtained by each ranging sensing device at the different positions is optimized separately, so as to obtain each movement state under the coincidence condition.
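  • A minimal sketch of running that per-sensor optimization and comparing the resulting movement states, where `optimise_coincidence` is a placeholder for the scan-alignment routine (all names are illustrative):

```python
import numpy as np

def per_sensor_motions(scan_pairs, optimise_coincidence):
    """Optimise the coincidence degree separately for each ranging sensor.

    scan_pairs maps a sensor name to (scan_at_position_1, scan_at_position_2);
    optimise_coincidence returns the movement state (dx, dy, dtheta) that
    best overlays the two scans of one sensor.
    """
    motions = {name: np.asarray(optimise_coincidence(a, b), dtype=float)
               for name, (a, b) in scan_pairs.items()}
    stacked = np.stack(list(motions.values()))
    spread = stacked.max(axis=0) - stacked.min(axis=0)   # crude consistency check
    return motions, spread
```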
  • the present application synchronously acquires measurement data reflecting the highly complex environment at the body side of the mobile robot and image data of environments with lower complexity, such as the space above the mobile robot, thereby realizing wider environmental perception of the physical space; positioning with both the measurement data and the image data solves problems such as the low positioning accuracy and the low density of landmark features that arise when either kind of data is used alone, which is helpful for accurate positioning.
  • the present application also provides a server.
  • the server includes, but is not limited to, a single server, a server cluster, a distributed server cluster, a cloud server, and the like.
  • the server may be a cloud server provided by a cloud provider.
  • the cloud server includes a public cloud (Public Cloud) server and a private cloud (Private Cloud) server, wherein the public or private cloud server includes Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and the like.
  • the private cloud server is, for example, the Alibaba cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and so on.
  • the server can accept image data and measurement data from the mobile robot, so as to construct a map of the current physical space or obtain the position of the mobile robot in the physical coordinate system based on the acquired image data and measurement data.
  • the server is connected in communication with a mobile robot located in a physical space.
  • the physical space refers to the physical space provided for the mobile robot to navigate and move, and the physical space includes but is not limited to any of the following: indoor/outdoor space, road space, flight space, etc.
  • in some embodiments, the mobile robot is a drone, and the physical space corresponds to a flight space; in other embodiments, the mobile robot is a vehicle with an automatic driving function, and the physical space corresponds to a tunnel road where positioning signals cannot be obtained, or a road space where the network signal is weak but navigation is still required; in still other embodiments, the mobile robot is a sweeping robot, and the physical space corresponds to an indoor or outdoor space.
  • the mobile robot is equipped with sensing devices such as a camera device, a movement sensing device, and the like, which provide navigation data for autonomous movement.
  • the server 40 includes an interface device 41 , a storage device 42 , and a processing device 43 .
  • the storage device 42 includes a non-volatile memory, a storage server, and the like, where the non-volatile memory is, for example, a solid-state drive or a USB flash drive.
  • the storage server is used for storing various obtained power consumption related information and power supply related information.
  • the interface device 41 includes a network interface, a data line interface, and the like.
  • the network interfaces include but are not limited to: Ethernet network interface devices, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on short-range communication (WiFi, Bluetooth, etc.), and the like.
  • the data line interface includes but is not limited to: USB interface, RS232, RS485, etc.
  • the interface device is data-connected with the control system, a third-party system, the Internet, and the like.
  • the processing device 43 is connected to the interface device 41 and the storage device 42, and includes at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA) and a multi-core processor.
  • the processing device 43 also includes memory, registers, etc., for temporarily storing data.
  • the interface device 41 is used for data communication with a mobile robot located in a physical space.
  • the storage device 42 is used to store at least one program.
  • the storage device 42 includes, for example, a hard disk disposed on the server and stores the at least one program.
  • the processing device 43 is configured to call the at least one program to coordinate the interface device and the storage device to execute the method for constructing a map mentioned in the example of the first aspect, or to execute the positioning method mentioned in the example of the second aspect.
  • the method for constructing a map is shown in FIG. 1 and the corresponding description, and the positioning method is shown in FIG. 5 and the corresponding description, which will not be repeated here.
  • the map data obtained based on the method for constructing a map can be sent to the mobile robot through the interface device, and can also be stored on the server side so that the map data can be used during positioning, or so that the server side can provide the map data to other devices.
  • when the map data needs to be used for positioning, the storage device can provide the map data to the processing device; when other devices need to call the map data, the storage device can provide the map data to the interface device, so that the interface device can send the map data to those devices.
  • FIG. 7 is a schematic diagram of a module structure of a mobile robot in an embodiment.
  • the mobile robot 50 includes a storage device 54 , a moving device 53 , an image capturing device 51 , a distance sensing device 55 and a processing device 52 .
  • the image pickup device is used to collect image data.
  • the image capturing device is a device for providing a two-dimensional image according to a preset pixel resolution.
  • the image capturing device includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
  • lenses that can be used by the camera or video camera include, but are not limited to, standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses.
  • the image capturing device may be disposed on the mobile robot with its main optical axis between the horizontal plane and the vertical ceiling direction, for example with the vertical ceiling direction taken as 0° and the main optical axis lying within the range of 0±90°.
  • the image capturing device is assembled on the upper half of the commercial cleaning robot, and its main optical axis is a preset angle obliquely upward to obtain image data within a corresponding viewing angle range.
  • the images captured by the image capturing device may be one or more of a single image, a continuous image sequence, a non-sequential image sequence, or a video. If the image capturing device captures an image sequence or video, one or more image frames can be extracted from the sequence or video as image data for subsequent processing.
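  • A minimal sketch of pulling individual frames out of a captured video as candidate image data, assuming OpenCV is available (the sampling step is arbitrary):

```python
import cv2  # OpenCV

def extract_frames(video_path: str, step: int = 30):
    """Return every `step`-th frame of a video as a list of images."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```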
  • the ranging sensing device is used for acquiring measurement data, and the ranging sensing device can measure the distance of the landmark feature in the physical space where the mobile robot is located relative to the mobile robot.
  • the landmark feature refers to a feature that is easy to distinguish from other objects in the physical space where the mobile robot is located.
  • the landmark feature can be a table corner, the contour feature of a ceiling light on the ceiling, the line where a wall meets the ground, and so on.
  • examples of the ranging sensing device include devices that provide one-dimensional measurement data, such as laser sensors and ultrasonic sensors, and devices that provide two-dimensional or three-dimensional measurement data, such as ToF sensors, multi-line lidars, millimeter-wave radars, and binocular camera devices; the ranging sensing device may also include both kinds of devices.
  • for example, a laser sensor can determine its distance from a landmark feature based on the time difference between emitting a laser beam and receiving it back; an ultrasonic sensor can determine the distance of the mobile robot relative to a landmark feature from the vibration signal of the sound wave it emits being bounced back by the landmark feature; a binocular camera device can use the triangulation principle to determine the distance of the mobile robot relative to a landmark feature according to the images captured by its two cameras; and the infrared light projector of a ToF (Time of Flight) sensor projects infrared light outward, the infrared light is reflected after encountering the measured object and is received by the receiving module, and the depth information of the illuminated object is calculated by recording the time from emission of the infrared light to its reception.
  • the range sensing device generates corresponding measurement data by detecting entities in the surrounding environment of the mobile robot.
  • the ranging sensing device is assembled on the body side of the mobile robot.
  • the ranging sensing device is used to detect the edge of the corresponding image data in the physical space or the environmental data in the area that cannot be covered by the image data.
  • the ranging sensing device is arranged at a position between 10-80 cm from the ground on the side of the commercial cleaning robot.
  • the ranging sensing device is mounted on one side of the mobile robot in the direction of travel, so that the mobile robot can learn about landmark features in the direction of travel so as to avoid or take other behavioral controls.
  • the ranging sensing device can also be installed at other positions of the mobile robot, as long as the distances relative to each landmark feature in the surrounding physical space can be obtained.
  • there may be one or more types of ranging sensing devices installed on the mobile robot.
  • a laser sensor may be installed on the mobile robot, a laser sensor and a binocular camera device may be installed at the same time, or a laser sensor, a binocular camera device and a ToF sensor may be installed at the same time.
  • the installed quantity of the same sensor can also be configured according to requirements to obtain measurement data in different directions.
  • the moving device is configured to perform movement operations based on a navigation route.
  • the moving device includes components such as wheels arranged at the bottom of the mobile robot to drive the mobile robot to move.
  • the storage device is used to store at least one program.
  • the storage device includes a nonvolatile memory, a storage server, and the like.
  • the non-volatile memory is, for example, a solid-state drive or a USB flash drive.
  • the processing device is connected with the storage device, the image capturing device, the ranging sensing device, and the moving device, and is used for calling and executing the at least one program so as to coordinate the storage device, the image capturing device, the ranging sensing device, and the moving device to execute the method for constructing a map mentioned in the example of the aforementioned first aspect, or to perform the positioning method mentioned in the example of the aforementioned second aspect.
  • the method for constructing a map is shown in FIG. 1 and the corresponding description, and the positioning method is shown in FIG. 5 and the corresponding description, which will not be repeated here.
  • map data obtained based on the method of constructing a map may be stored in a storage device so that the map data can be used in positioning.
  • the storage device may provide the map data to the processing device when the map data needs to be used for positioning.
  • the mobile robot may further include an interface device, so that map data can be provided to other devices or the acquired image data and measurement data can be sent to other devices through the interface device.
  • the interface device includes a network interface, a data line interface, and the like.
  • the network interfaces include but are not limited to: Ethernet network interface devices, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on short-range communication (WiFi, Bluetooth, etc.), and the like.
  • the data line interface includes but is not limited to: USB interface, RS232, RS485, etc.
  • the interface device is data-connected with the control system, a third-party system, the Internet, and the like.
  • the mobile robot in the present application is equipped with an image capturing device and a ranging sensing device, so that the advantages and disadvantages of each sensor are balanced against each other, and the accuracy and reliability of map construction or positioning are improved by means of multi-sensor fusion.
  • the present application also provides a computer readable and writable storage medium, which stores a computer program.
  • when the computer program is executed, the device where the storage medium is located implements at least one of the embodiments described above for the method for constructing a map, such as any of the embodiments described with reference to FIG. 1; or, when the computer program is executed, the device where the storage medium is located implements at least one of the embodiments described above for the positioning method for a mobile robot, such as any of the embodiments described with reference to FIG. 5.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the computer readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • for example, if the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead intended to be non-transitory, tangible storage media.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • the functions described by the computer programs of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof.
  • when implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code.
  • the steps of the methods or algorithms disclosed herein may be embodied in processor-executable software modules, where the processor-executable software modules may reside on a tangible, non-transitory computer readable and writable storage medium.
  • Tangible, non-transitory computer-readable storage media can be any available media that can be accessed by a computer.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A map construction method and a positioning method for a mobile robot, a server, a mobile robot, and a computer-readable storage medium. The map construction method comprises: controlling an image capturing device and a ranging sensing device to synchronously acquire image data and measurement data, respectively, at at least two positions (S110); analyzing the movement state of the mobile robot in a physical space by using at least the measurement data measured at the different positions, so as to obtain at least first position data of the landmark features in each piece of image data mapped to a physical coordinate system, and/or second position data corresponding to the at least two positions in the physical coordinate system (S120); and recording the obtained first position data and second position data in map data constructed on the basis of the physical coordinate system (S130). The method takes the advantages and disadvantages of the various sensors into consideration and improves the accuracy and reliability of map construction by means of multi-sensor fusion.

Description

Positioning Method for a Mobile Robot, Method for Constructing a Map, and Mobile Robot
Technical Field
The present application relates to the field of positioning technology, and in particular to a positioning method for a mobile robot, a method for constructing a map, and a mobile robot.
Background Art
A mobile robot is a machine that automatically performs specific tasks. It can be commanded by people, run pre-programmed programs, or act according to principles formulated with artificial intelligence technology. Because of their autonomous movement capability, mobile robots are widely used in large indoor venues such as airports, railway stations, warehouses, and hotels. For example, commercial cleaning robots, handling/distribution robots, and welcome robots use the autonomous movement function to perform cleaning, transporting, guiding, and similar tasks.
Affected by external factors such as the complexity of indoor venues, and by internal factors such as the accuracy of the hardware configured on the mobile robot, a mobile robot that relies on the environmental information provided by a single detection device is sometimes unable to position itself accurately. For example, in an open area where obstacles are far away, relying only on laser measurement is not conducive to measuring the surrounding environment, and thus not conducive to determining the exact positional relationship between the robot and the obstacles. For another example, in a large area where the ceiling environment is highly similar, relying only on the camera device to acquire landmark data is not conducive to determining the robot's exact position within that area.
Summary of the Invention
In view of the above-mentioned shortcomings of the related art, the purpose of the present application is to overcome the positioning and map-construction accuracy problems existing in the related art.
To achieve the above purpose and other related purposes, a first aspect of the present application provides a method for a mobile robot to construct a map, including: controlling an image capturing device and a ranging sensing device to synchronously acquire image data and measurement data, respectively, at different positions; analyzing the moving state of the mobile robot in the physical space by using the measurement data acquired at the different positions, so as to obtain landmark position data of the landmark features in each piece of image data mapped into a physical coordinate system, and/or to obtain positioning position data corresponding to the different positions in the physical coordinate system; and recording the obtained landmark position data and/or positioning position data in map data constructed on the basis of the physical coordinate system.
A second aspect of the present application provides a positioning method for a mobile robot, including: controlling an image capturing device and a ranging sensing device to synchronously acquire image data and measurement data, respectively, at a first position of the mobile robot and at a second position different from the first position, where the first position is mapped to first positioning position data in map data; and analyzing the moving state of the mobile robot in the physical space by using the measurement data measured at the first position and the second position, so as to determine second positioning position data of the second position in the map data when the mobile robot is at the second position.
A third aspect of the present application provides a server, including: an interface device for data communication with a mobile robot; a storage device storing at least one program; and a processing device, connected to the storage device and the interface device, for executing the at least one program so as to coordinate the storage device and the interface device to execute and implement the method for constructing a map according to any one of the first aspect of the present application, or the positioning method for a mobile robot according to any one of the second aspect of the present application.
A fourth aspect of the present application provides a mobile robot, including: an image capturing device for acquiring image data; a ranging sensing device for acquiring measurement data; a moving device for performing movement operations; a storage device for storing at least one program; and a processing device, connected to the moving device, the image capturing device, the ranging sensing device, and the storage device, for executing the at least one program so as to perform the method for constructing a map according to any one of the first aspect of the present application, or the positioning method for a mobile robot according to any one of the second aspect of the present application.
A fifth aspect of the present application provides a computer-readable storage medium storing at least one computer program which, when run by a processor, controls the device where the storage medium is located to perform the method for constructing a map according to any one of the first aspect of the present application, or the positioning method for a mobile robot according to any one of the second aspect of the present application.
To sum up, the positioning method for a mobile robot, the method for constructing a map, the server, the mobile robot, and the storage medium of the present application combine the advantage that multiple sensors can acquire more information about the physical space, make use of the positioning capability of the image capturing device and the measurement capability of the ranging sensing device, and optimize the errors of the multiple sensors, thereby providing a new method and structure for the processes of map construction and positioning and improving the accuracy and reliability of both.
Other aspects and advantages of the present application can be readily appreciated by those skilled in the art from the following detailed description. Only exemplary embodiments of the present application are shown and described in the detailed description. As those skilled in the art will recognize, the content of the present application enables those skilled in the art to make changes to the specific embodiments disclosed without departing from the spirit and scope of the invention to which the present application relates. Accordingly, the drawings and the descriptions in the specification of the present application are merely exemplary and not restrictive.
Description of the Drawings
The specific features of the invention to which this application relates are set forth in the appended claims. The features and advantages of the invention can be better understood by reference to the exemplary embodiments described in detail below and to the accompanying drawings, which are briefly described as follows:
FIG. 1 is a schematic diagram of the method for constructing a map in the present application in one embodiment.
FIG. 2a and FIG. 2b are brief schematic diagrams of a map constructed in the present application in one embodiment.
FIG. 3 is a schematic diagram of measurement data acquired by the mobile robot of the present application at two positions, in one embodiment.
FIG. 4 is a schematic diagram of measurement data acquired by the mobile robot of the present application at two positions, in another embodiment.
FIG. 5 is a schematic diagram of the positioning method for the mobile robot of the present application in one embodiment.
FIG. 6 is a schematic diagram of the server of the present application in one embodiment.
FIG. 7 is a schematic diagram of the module structure of a mobile robot in one embodiment.
Detailed Description
The embodiments of the present application are described below by way of specific examples, and those skilled in the art can easily understand other advantages and effects of the present application from the contents disclosed in this specification.
In the following description, reference is made to the accompanying drawings, which describe several embodiments of the present application. It should be understood that other embodiments may also be used, and that changes in module or unit composition, electrical configuration, and operation may be made without departing from the spirit and scope of the present disclosure. The following detailed description should not be considered limiting, and the scope of the embodiments of the present application is defined only by the claims of the granted patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present application.
Although in some instances the terms first, second, etc. are used herein to describe various positioning position data, information, or parameters, such positioning position data or parameters should not be limited by these terms. These terms are only used to distinguish one item of positioning position data or one parameter from another. For example, first positioning position data may be referred to as second positioning position data and, similarly, second positioning position data may be referred to as first positioning position data, without departing from the scope of the various described embodiments. The first positioning position data and the second positioning position data both describe positioning position data, but unless the context clearly indicates otherwise, they are not the same positioning position data. Depending on the context, the word "if" as used herein may be interpreted as "at the time of" or "when".
Also, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprising" and "including" indicate the presence of the stated features, steps, operations, positioning position data, components, items, categories, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, positioning position data, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition arises only when a combination of positioning position data, functions, steps, or operations is inherently mutually exclusive in some way.
At present, the positioning technologies for mobile robots mainly include the following: positioning based on dead reckoning (DR), positioning based on map matching, and positioning based on laser SLAM or visual SLAM (Simultaneous Localization and Mapping).
Positioning based on dead reckoning, for example, uses an inertial measurement unit (IMU) and an odometer installed in the robot's wheel set to measure data such as the acceleration and angular velocity of the mobile robot during movement, and accumulates the increments of these data to derive the relative position of the mobile robot at a given moment with respect to the starting moment, thereby positioning the mobile robot. However, this approach suffers from accumulated error: for example, when the mobile robot works over a large area, the ground materials that its wheels pass over (carpet, tile, wooden boards, etc.) differ, so the actual distance travelled deviates from what the sensors measure, and the accumulated error grows over time. Without additional positioning data to help correct it, positioning of the mobile robot will eventually fail.
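A minimal sketch of such incremental accumulation, assuming planar motion and per-step odometry increments of travelled distance and heading change (values are illustrative); it also shows why the drift is never corrected without external observations:

```python
import math

def dead_reckon(pose, increments):
    """Accumulate odometry increments (dd, dtheta) onto a pose (x, y, theta).

    Every increment's measurement error is folded into the pose and nothing
    here ever corrects it, which is the drift problem described above.
    """
    x, y, theta = pose
    for dd, dtheta in increments:
        theta += dtheta
        x += dd * math.cos(theta)
        y += dd * math.sin(theta)
    return x, y, theta

print(dead_reckon((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)]))
```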
Positioning based on map matching, for example, uses sensors on the mobile robot to detect the surrounding environment and construct a local map, which is then matched against a pre-stored complete map to obtain the current position of the mobile robot within the whole environment. However, this approach is limited by the layout of the environment and is only applicable to environments whose layout is relatively simple and varies noticeably from place to place.
Positioning based on laser SLAM or visual SLAM, for example, uses a laser detector or a camera on the mobile robot to detect the surrounding environment and, combined with laser SLAM or visual SLAM techniques, measures the distance and orientation data of landmark features with the laser and builds an environment map corresponding to the measured landmark features, so as to determine the movement trajectory and pose of the mobile robot. The reason the above positioning technologies are not easily transferred directly to workplaces characterized by a large movement range, complex surroundings, or a highly similar top-view decoration environment usually comes down to the following factors:
On the one hand, household indoor spaces are small and the environmental similarity between locations is low, so landmark features in space regions close to the ceiling, such as the ceiling itself, walls, and wardrobes, are easily converted into data through images and ranging to achieve positioning. The above-mentioned workplaces, by contrast, have a much more open spatial range; if the above positioning technologies are used there, the object space within the viewing-angle range of the same image capturing device is far away from the mobile robot, and the ranging sensor cannot provide accurate distance data, so the map constructed by the mobile robot does not match the real environment.
On the other hand, because of high environmental complexity, such as the mobility of objects around the mobile robot or the occlusion of landmark features, the mobile robot cannot rely solely on the configured image capturing device to provide a stable source of landmark-feature image data for robot positioning; as a result, failure to continuously detect matchable landmark features may cause the actual path of the mobile robot to deviate from the planned path during autonomous movement.
In view of this, an embodiment of the first aspect of the present application provides a method for constructing a map, which jointly optimizes errors by using the data respectively provided by an image capturing device and a ranging sensing device, so as to construct a map of the physical space where the mobile robot is located.
The method for constructing a map may be executed by a processor in the mobile robot, or by a server communicating with the mobile robot.
Here, the mobile robot refers to an autonomous mobile device with the ability to build a map, including but not limited to one or more of: drones, industrial robots, home companion mobile devices, medical mobile devices, household cleaning robots, commercial cleaning robots, intelligent vehicles, and patrol mobile devices.
The physical space refers to the actual three-dimensional space in which the mobile robot is located, and can be described by abstract data constructed in a spatial coordinate system. For example, the physical space includes, but is not limited to, family residences and public places (such as offices, shopping malls, hospitals, underground parking lots, and banks). For a mobile robot, the physical space usually refers to an indoor space, that is, a space bounded in the length, width, and height directions; it particularly includes physical spaces with a large spatial range and a high degree of scene repetition, such as shopping malls and airport waiting halls.
The mobile robot is provided with an image capturing device, which is a device for providing a two-dimensional image at a preset pixel resolution. In some embodiments, the image capturing device includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, and a camera module integrated with an optical system and a CMOS chip. According to actual imaging requirements, the lenses that the camera or video camera can use include, but are not limited to, standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses.
In some embodiments, the image capturing device may be disposed on the mobile robot with its main optical axis between the horizontal plane and the vertical ceiling direction, for example with the vertical ceiling direction taken as 0° and the main optical axis within the range of 0±90°. Taking a commercial cleaning robot as an example of the mobile robot, the image capturing device is assembled on the upper half of the commercial cleaning robot, with its main optical axis tilted obliquely upward at a preset angle, so as to acquire image data within the corresponding viewing-angle range.
The images captured by the image capturing device may be one or more of a single image, a continuous image sequence, a non-continuous image sequence, or a video. If the image capturing device captures an image sequence or a video, one or more image frames can be extracted from the sequence or video as the image data for subsequent processing.
The image data reflects the perception, by the image capturing device, of the physical space in which the mobile robot is located. In some cases, the images acquired by the image capturing device can be used directly as the image data; in other cases, data obtained by processing the images acquired by the image capturing device can be used instead. For example, when the image capturing device acquires single images, the single images captured at the first position and the second position can be used directly as the image data. For another example, when the image capturing device acquires an image sequence or a video, image frames can be extracted from the image sequence or video as the image data.
In some embodiments, the mobile device stores the captured images in a local storage medium. The storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed.
In some embodiments, the mobile robot transmits the captured images to a communicatively connected external device for storage, where the communication connection may be wired or wireless. The external device may be a server located in a network; the server includes but is not limited to one or more of a single server, a server cluster, a distributed server group, and a cloud server. In a specific implementation, the cloud server may be a cloud computing platform provided by a cloud computing provider. In terms of architecture, the types of the cloud server include but are not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). In terms of nature, the types of the cloud server include but are not limited to a public cloud (Public Cloud) server, a private cloud (Private Cloud) server, and a hybrid cloud (Hybrid Cloud) server.
In some embodiments, the public cloud server is, for example, Amazon Elastic Compute Cloud (Amazon EC2), IBM's Blue Cloud, Google's AppEngine, or the Windows Azure service platform; the private cloud server is, for example, the Alibaba cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, or the Tencent cloud computing platform.
The mobile robot is further provided with a ranging sensing device, which can measure the distance, relative to the mobile robot, of landmark features in the physical space where the mobile robot is located.
The landmark feature refers to a feature in the physical space where the mobile robot is located that is easy to distinguish from other objects; for example, a landmark feature may be a table corner, the contour feature of a ceiling light on the ceiling, or the line where a wall meets the ground.
Examples of the ranging sensing device include devices that provide one-dimensional measurement data, such as laser sensors and ultrasonic sensors, and devices that provide two-dimensional or three-dimensional measurement data, such as ToF sensors, multi-line lidars, millimeter-wave radars, and binocular camera devices; the ranging sensing device may also include both kinds of devices. For example, a laser sensor can determine its distance from a landmark feature based on the time difference between emitting a laser beam and receiving it back; an ultrasonic sensor can determine the distance of the mobile robot relative to a landmark feature from the vibration signal of the sound wave it emits being bounced back by the landmark feature; a binocular camera device can use the triangulation principle to determine the distance of the mobile robot relative to a landmark feature according to the images captured by its two cameras; and the infrared light projector of a ToF (Time of Flight) sensor projects infrared light outward, the infrared light is reflected after encountering the measured object and is received by the receiving module, and the depth information of the illuminated object is calculated by recording the time from emission of the infrared light to its reception.
所述测距感应装置通过探测移动机器人周围环境中的实体而产生相应的测量数据。为防 止移动机器人与周围环境中的实体发生如碰撞、缠绕等移动异常情况,或者防止如移动机器人跌落等移动异常情况,测距感应装置装配在移动机器人的体侧。测距感应装置用于探测物理空间中对应图像数据的边缘或图像数据无法覆盖区域的环境数据。以移动机器人是商用清洁机器人为例,测距感应装置可举例包括但不限于配置在商用清洁机器人体侧相距地面10-80cm之间的位置。The range sensing device generates corresponding measurement data by detecting entities in the surrounding environment of the mobile robot. In order to prevent the mobile robot from abnormal movements such as collision and entanglement with entities in the surrounding environment, or to prevent abnormal movements such as the mobile robot falling, the ranging sensing device is installed on the body side of the mobile robot. The ranging sensing device is used to detect the edge of the corresponding image data in the physical space or the environmental data in the area that cannot be covered by the image data. Taking the mobile robot being a commercial cleaning robot as an example, the ranging sensing device may include, but is not limited to, be disposed at a position between 10-80 cm from the ground on the side of the commercial cleaning robot.
在一些实施例中,所述测距感应装置装配在所述移动机器人行进方向上的一侧,以便于移动机器人了解其行进方向上的地标特征从而躲避或采取其他行为控制。当然,在某些情况下所述测距感应装置也可以安装在移动机器人的其他位置上,只要能获取到周围物理空间中的相对各地标特征的距离即可。In some embodiments, the ranging sensing device is mounted on one side of the mobile robot in the direction of travel, so that the mobile robot can learn about landmark features in the direction of travel so as to avoid or take other behavioral controls. Of course, in some cases, the ranging sensing device can also be installed at other positions of the mobile robot, as long as the distances relative to each landmark feature in the surrounding physical space can be obtained.
在一些实施例中,安装在移动机器人上的测距感应装置种类可以为一种或多种。例如,在移动机器人上可仅安装有激光传感器,也可同时安装有激光传感器和双目摄像装置,或者同时安装有激光传感器、双目摄像装置和ToF传感器等。并且,同一种传感器的安装数量也可根据需求来配置,以获取不同方向上的测量数据。In some embodiments, there may be one or more types of ranging sensing devices installed on the mobile robot. For example, only a laser sensor may be installed on the mobile robot, a laser sensor and a binocular camera device may be installed at the same time, or a laser sensor, a binocular camera device and a ToF sensor may be installed at the same time. In addition, the installed quantity of the same sensor can also be configured according to requirements to obtain measurement data in different directions.
所述测量数据反映了测距感应装置对移动机器人所在物理空间的感知情况。换言之,测量数据反映了测距感应装置所能探测到的所述物理空间中具有物理含义的物理量。例如,物理空间中的实体相对于移动机器人的相对位置等。其中,所述物理含义是指所述测量数据能够提供以如时间单位/长度单位/角度单位等基本单位的物理数据。The measurement data reflects the perception of the physical space where the mobile robot is located by the ranging sensing device. In other words, the measurement data reflects the physical quantities with physical meanings in the physical space that can be detected by the ranging sensing device. For example, relative positions of entities in physical space relative to mobile robots, etc. Wherein, the physical meaning means that the measurement data can provide physical data in basic units such as time unit/length unit/angle unit.
The measurement data at least reflects the positional relationship between the mobile robot and the entities in its surroundings; it includes the distance and relative orientation, with respect to the mobile robot, of each landmark feature observable in the physical space where the robot is located. In some cases the data acquired by the ranging sensing device can be used directly as the measurement data; in other cases the acquired data is processed to obtain the measurement data. For example, when the ranging sensing device is a laser sensor, the distance it detects to each landmark feature can be used directly as the measurement data; when the ranging sensing device is a binocular camera device, the images it captures are processed to obtain the distance to each landmark feature, which is then used as the measurement data.
Generally, since there are multiple landmark features in the physical space, the measurement data contains distances and relative orientations with respect to multiple landmark features, and is therefore presented as depth point cloud data.
In some examples, the measurement data also reflects the mobile robot's perception of the entities in its surroundings. For example, when the ranging sensing device is a laser sensor, it emits beams toward the surfaces of surrounding entities to detect the set of measurement points that reflect those beams, and acquires the corresponding measurement data. The acquired measurement data is 3D-reconstructed using predetermined physical parameters to obtain the relative positions.
It should be noted that although both an image capturing device and a ranging sensing device are installed on the mobile robot, the field of view of the image capturing device and the measurement range of the ranging sensing device need not overlap, or may overlap at most partially. In other words, the execution of the steps in this application does not necessarily depend on landmark features detected jointly by the image capturing device and the ranging sensing device; even if the two devices share no common landmark feature, the map construction process of this application is not affected.
In an exemplary embodiment, please refer to FIG. 1, which is a schematic diagram of the map construction method of the present application in one embodiment. As shown in the figure, in step S110, the image capturing device and the ranging sensing device are controlled to synchronously acquire image data and measurement data, respectively, at each of at least two positions.
Here, the mobile robot passes through multiple positions while moving in the current physical space. In order to associate the image data and the measurement data accurately, the image capturing device and the ranging sensing device of the mobile robot acquire, at each position, the image data and the measurement data corresponding to that position.
Take a mobile robot working in an airport, a supermarket or a similar environment as an example. The physical space where the robot operates contains moving objects, such as walking passengers with their suitcases or walking customers with their shopping carts, as well as fixed objects such as fences, shelves and people standing still. The measurement data acquired at the at least two positions therefore contains fixed objects and may also contain moving objects. The image data acquired at the same two positions contains at least image data of entities located, in the vertical direction, above the detection range of the ranging sensing device, and may also contain image data corresponding to part of the measurement data.
In order to improve data accuracy when fusing data from the two independent devices, the image data and the measurement data belonging to the same position are acquired synchronously. Here, synchronous acquisition can be achieved by having the mobile robot stop at each of the positions it passes, acquire the image data and the measurement data, and then continue moving; or by using a synchronization signal so that the image data and the measurement data are acquired simultaneously without stopping.
In possible implementations, the synchronous acquisition may be based on an external synchronization signal, with the image capturing device and the ranging sensing device synchronously acquiring the image data and measurement data corresponding to the position; it may also be based on a synchronization signal inside the image capturing device or the ranging sensing device. For example, while the mobile robot works in the current physical space, the control device of the mobile robot sends a synchronization signal to the image capturing device and the ranging sensing device at certain time intervals, so that they acquire image data and measurement data respectively. As another example, the image capturing device and the ranging sensing device are each provided with a clock module and are preset with the same clock signal mechanism so that signals are issued synchronously; when the image capturing device and the ranging sensing device receive their respective synchronization signals, each performs its step of acquiring image data or measurement data.
In some cases, the external synchronization signal may also be generated based on a signal from an inertial navigation sensor of the mobile robot. The inertial navigation sensor (IMU) is used to acquire inertial navigation data of the mobile robot, and includes, but is not limited to, one or more of a gyroscope, an odometer, an optical flow meter, and an accelerometer. The inertial navigation data acquired by the inertial navigation sensor includes, but is not limited to, one or more of the robot's speed data, acceleration data, travelled distance, number of wheel rotations, wheel deflection angle, and the like.
For example, an IMU is arranged in the drive assembly of the mobile robot (i.e., the components, such as the wheel set, used to propel the robot); this IMU is connected to the lower-level computer of the mobile robot, and the lower-level computer is connected to the image capturing device. An IMU is also arranged in the ranging sensing device, and both the ranging sensing device and the image capturing device are connected to the upper-level computer of the mobile robot. Here, the timestamps of the IMU in the ranging sensing device and the IMU in the drive assembly are kept synchronized, so that the IMU in the drive assembly issues a synchronization signal to the image capturing device for acquiring image data while the IMU in the ranging sensing device simultaneously issues a synchronization signal for acquiring measurement data.
As another example, an IMU is arranged in the drive assembly of the mobile robot (i.e., the components, such as the wheel set, used to propel the robot) and is connected to the lower-level computer, while the lower-level computer is connected to the image capturing device and the ranging sensing device respectively. Each time the IMU detects that the wheel set has rotated a preset number of turns, it issues a synchronization signal to the image capturing device and the ranging sensing device, so that the image capturing device acquires image data and the ranging sensing device acquires measurement data.
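A minimal sketch of such trigger-based synchronization is given below; the `imu.wheel_ticks()`, `camera.capture()` and `range_sensor.scan()` interfaces are hypothetical placeholders, and the loop only illustrates how one shared trigger pairs image data with measurement data, not a real device driver.

```python
import time
from dataclasses import dataclass

@dataclass
class SyncedSample:
    timestamp: float
    image_data: object        # frame returned by the (hypothetical) camera
    measurement_data: object  # scan returned by the (hypothetical) range sensor

def acquire_on_wheel_trigger(imu, camera, range_sensor,
                             ticks_per_trigger: int, num_samples: int):
    """Collect image/measurement pairs each time the wheel set has turned a
    preset number of encoder ticks, mimicking an IMU-issued sync signal."""
    samples = []
    last_trigger_ticks = imu.wheel_ticks()
    while len(samples) < num_samples:
        if imu.wheel_ticks() - last_trigger_ticks >= ticks_per_trigger:
            stamp = time.time()
            samples.append(SyncedSample(stamp, camera.capture(), range_sensor.scan()))
            last_trigger_ticks = imu.wheel_ticks()
    return samples
```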
For ease of description, the following embodiments take as an example how the mobile robot constructs a map from two positions (a first position and a second position).
Continuing to refer to FIG. 1, in step S120 the mobile robot uses at least the measurement data acquired at the different positions to analyze its movement state in the physical space, so as to obtain at least the landmark position data to which the landmark features in each image data are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
Here, both the measurement data and the image data acquired in step S110 contain data that is useful for positioning or map construction within different height ranges of the physical space where the mobile robot is located, such as landmark data and distances/angles relative to landmark features. Moreover, since the ranging sensing device and the image capturing device move as a whole with the mobile robot, the movement state determined at least from the measurement data acquired at different positions can resolve two problems: the landmark features reflected in the image data are only located in a set coordinate system that lacks a physical scale (such as a map scale), and relying on the measurement data alone is not conducive to high-precision positioning. Examples of the set coordinate system include a camera coordinate system, or a virtual coordinate system that lacks a mapping to the physical coordinate system determined by a physical scale. Therefore, the movement state of the mobile robot described by the measurement data and the landmark data of the corresponding landmark features in the image data are used to calculate the landmark position data to which each landmark feature captured in the image data is mapped in the physical coordinate system.
Specifically, without a reference that reflects the robot's actual movement in the physical space, the mobile robot cannot determine from the image data alone quantities that reflect actual positional relationships in the physical space, such as the physical distance it has moved or its physical distance to a given entity. For this purpose, the mobile robot analyzes the measurement data, which carries physical meaning, to obtain its movement state in the physical space; using this movement state together with the preset intrinsic and extrinsic parameters of the image capturing device and the ranging device when mounted on the robot and during image acquisition, the robot constructs the correspondence between the landmark data in the image data and the landmark position data in the physical coordinate system, thereby determining the landmark position data of the corresponding landmark features in the physical coordinate system.
The movement state includes at least one of the following: data related to the positioning of the mobile robot itself, data used to help the mobile robot determine its positioning, or data such as the physical scale that reflects the mapping between changes of the robot's position in the physical space and changes of the pixel positions of landmark features in the image data. The data related to the robot's own positioning includes, but is not limited to, the robot's pose change relative to an entity, the pose change between its previous and current positions, or information on each position the robot passes through. The data used to help the robot determine its positioning includes, but is not limited to, the relative positional relationship (such as pose change data) between the robot and the landmark data corresponding to the same landmark feature in the measurement data, and the landmark positions of the landmark features.
It should be noted that a landmark feature described in the measurement data and a landmark feature described in the image data are not necessarily the same landmark feature. For ease of description, landmark data reflecting landmark features in the measurement data is hereinafter referred to as landmark measurement data, and landmark data reflecting landmark features in the image data is referred to as landmark image data.
The ways in which the mobile robot analyzes its movement state in the physical space using at least the measurement data acquired at different positions include the following examples:
In some examples, the mobile robot obtains the corresponding movement state by analyzing how the measurement data obtained at different positions reflect changes in its physical position. This analysis includes analyzing the robot's movement behavior using measurement data that carries physical meaning. By analyzing the values of the measurement data relative to the same entity at at least two positions, and the positional deviation between those measurements of the same entity, the robot obtains the movement state resulting from its change of pose.
To improve the measurement accuracy of the movement state, in some specific examples the mobile robot calculates the movement state using the landmark measurement data, identified in each set of measurement data, that corresponds to a common landmark feature.
Taking the measurement data acquired by the mobile robot at a first position and a second position as an example, the measurement data acquired at the first position and at the second position share at least one common landmark feature; from the robot's distances to that common landmark feature at the two positions, the movement state of the mobile robot can be calculated.
In the process in which the mobile robot constructs a map starting from a preset initial movement position, given the known positioning position information of the previous position in the physical coordinate system, the robot uses the movement state between successive positions obtained as in the above example to derive the positioning position data corresponding to the at least two positions in the physical coordinate system, and/or the landmark position data, in the physical coordinate system, of the landmark features corresponding to the measurement data. It should be understood that, since the measurement data acquired by the ranging sensing device carries actual physical units, the calculated movement state also carries actual physical units.
In other specific examples, the mobile robot obtains the movement state by computing the degree of coincidence between the sets of measurement data acquired at different positions. A single set of measurement data provides the positional relationship between the mobile robot and the portion of each entity lying in the corresponding measurement plane of the physical space; when the robot measures the same entity from different positions, the measured position of that entity in the data changes as a result of the robot's change of pose. For example, the mobile robot can rotate, translate, and scale the measurement data acquired at one position to obtain measurement data that coincides with part of the measurement points in the data acquired at another position; this rotation, translation and scaling reflects a simulation of the robot's own movement that would bring the corresponding measurement points into coincidence. By performing this coincidence optimization, the mobile robot can therefore infer movement states such as the pose change between positions and the physical scale, and then, using the pose change data and the physical quantities provided by the coinciding measurement data, determine the remaining data of the movement state, such as the positioning position data corresponding to the at least two positions in the physical coordinate system and/or the landmark position data, in the physical coordinate system, of the landmark measurement data in the measurement data.
Since the measurement data contains both physical quantities of fixed objects and physical quantities of moving objects measured by the ranging sensing device at the corresponding positions, in an exemplary embodiment the mobile robot optimizes the degree of coincidence between the measurement data acquired at different positions, and obtains, under a satisfied coincidence condition, the landmark position data in the physical coordinate system to which at least the coinciding measurement data is mapped, and/or the positioning position data, in the physical coordinate system, of the at least two positions at which the robot acquired the data. The mobile robot may regard the coinciding portions of the measurement data as landmark data corresponding to landmark features. Examples of the coincidence condition include the number of iterative optimizations of the degree of coincidence reaching a preset iteration threshold, and/or the gradient value of the degree of coincidence being smaller than a preset gradient threshold. This approach helps the movement state to be calculated, as far as possible, from the physical quantities in the measurement data that reflect fixed objects.
Taking one-dimensional measurement data as an example, please refer to FIG. 3, which is a schematic diagram, in one embodiment, of the measurement data acquired by the mobile robot of the present application at two positions. As shown in the figure, the mobile robot acquires measurement data MData_1 and MData_2 at the first position and the second position respectively; if an entity with a fixed position exists in the physical space, then MData_1 and MData_2 contain measurement points corresponding to that entity which can be brought into coincidence after data processing. From the measurement points of entity 311, entity 312 and entity 313 in MData_1 and MData_2 in the figure, it can be seen that entity 311 and entity 313 can both be brought into coincidence after data processing, whereas entity 312 cannot be made to coincide simultaneously with entity 311 and entity 313; entity 311 and entity 313 can therefore be regarded as entities with fixed positions. The data processing that the robot applies to MData_1 and MData_2, including rotation, translation and scaling, reflects the relative positional relationship between the robot's first and second positions. The mobile robot therefore performs coincidence optimization using an optimization function constructed from MData_1 and MData_2, until the degree of coincidence of the two data sets satisfies a preset coincidence condition. This yields pose change data reflecting the relative positional relationship; under that preset coincidence condition the coinciding measurement points of the two data sets are taken as landmark measurement data, and, based on the obtained pose change data, at least the landmark position data in the physical coordinate system to which the landmark measurement data is mapped, and/or the positioning position data of the at least two positions in the physical coordinate system, are obtained. It should be noted that the above example also applies to the processing of two-dimensional measurement data.
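As a rough sketch of such a coincidence (registration) computation, the similarity transform (rotation, translation and scale) that best overlays two sets of already-matched 2D measurement points can be estimated in closed form; the `estimate_similarity` helper, the point values and the simulated pose change below are illustrative assumptions and not part of the original disclosure.

```python
import numpy as np

def estimate_similarity(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform (Umeyama-style) so that
    dst is approximately s * R @ src + t, for matched N x 2 point sets."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid reflections
        S[-1, -1] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (src_c ** 2).sum(axis=1).mean()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Matched landmark measurement points seen from the two positions (illustrative).
pts_pos1 = np.array([[2.0, 0.5], [3.1, 1.2], [4.0, -0.4], [2.6, 2.2]])
theta = np.deg2rad(15)                       # simulated pose change
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts_pos2 = (R_true @ pts_pos1.T).T + np.array([0.8, -0.3])

s, R, t = estimate_similarity(pts_pos1, pts_pos2)
print(s)        # close to 1.0: range data already carries metric units
print(R, t)     # recovered rotation and translation between the two positions
```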
To improve computational efficiency, in yet another exemplary embodiment the mobile robot optimizes the degree of coincidence between the landmark measurement data within the measurement data acquired at different positions, and obtains the movement state of the mobile robot under a satisfied coincidence condition.
The optimization process is similar to the one-dimensional example described above. The difference is that, in an exemplary embodiment, the map construction method further includes a step of extracting landmark features from each set of measurement data, so that the movement state is analyzed using the landmark measurement data corresponding to those landmark features. The landmark measurement data includes landmark measurement data corresponding to moving objects as well as landmark measurement data corresponding to fixed objects. For example, the measurement data acquired by the ranging sensing device includes the distances to the various features in the physical space, including stationary features (such as flower pots, coffee tables and shelves) as well as moving features (such as people, pets and shopping carts being pushed).
It should be understood that in some scenarios certain features of the physical space are not suitable as landmark features, for example movable objects such as people and pets. Marking such unsuitable features in the map data would affect the subsequent positioning of the mobile robot and easily introduce large errors. It is therefore necessary to screen the measurement data to determine which features can serve as landmark features, so that the movement state is analyzed using the extracted landmark features and the calculation accuracy is improved.
In one embodiment, features that meet the requirements are selected as landmark features by means of a threshold. For example, suppose the measurement data acquired at the first position and at the second position share five common features a, b, c, d and e; the movement states calculated from a, b, c and d differ from one another by no more than 1 cm, whereas the movement state calculated from e differs from those calculated from the other features by 74 to 75 cm. If the screening threshold is set at 5 cm, then a, b, c and d can be extracted as landmark features, whereas e is likely a moving feature and cannot serve as a landmark feature.
In another embodiment, landmark features may also be extracted directly from the numerical changes in the measurement data acquired at the different positions. Specifically, in measurement data acquired at different positions, stationary features change regularly from position to position, whereas moving features change quite differently from the other, stationary, features. A preset change threshold can therefore be used to identify, among the distances to the features in the measurement data, those features whose values change regularly, and to use them as landmark features. For example, suppose the measurement data acquired at the first position and at the second position share five common features a, b, c, d and e. At the first position the distances of features a, b, c, d and e from the mobile robot are 2 m, 3.4 m, 4.2 m, 2.8 m and 5 m respectively; at the second position they are 2.5 m, 3.8 m, 4.9 m, 3.6 m and 2 m respectively. The changes between the two positions are therefore 0.5 m for a, 0.4 m for b, 0.7 m for c, 0.8 m for d, and 3 m for e. Assuming the preset change threshold is 0.5 m, the changes of a, b, c and d stay within that threshold of one another and thus change regularly, so a, b, c and d can be extracted as landmark features, whereas the change of e deviates far beyond the threshold, meaning e is likely a moving feature and cannot serve as a landmark feature.
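A small sketch of this kind of screening is given below, using the illustrative distances from the example above; the deviation-from-median criterion is one plausible reading of the "regular change" test and is an assumption, not the only possible implementation.

```python
def select_landmarks(dist_pos1: dict, dist_pos2: dict, threshold_m: float = 0.5):
    """Keep features whose distance change between the two positions stays
    close to the typical (median) change; outliers are treated as moving."""
    changes = {k: abs(dist_pos2[k] - dist_pos1[k]) for k in dist_pos1}
    ordered = sorted(changes.values())
    median_change = ordered[len(ordered) // 2]
    return [k for k, c in changes.items() if abs(c - median_change) <= threshold_m]

# Distances (in meters) to the five common features from the example above.
pos1 = {"a": 2.0, "b": 3.4, "c": 4.2, "d": 2.8, "e": 5.0}
pos2 = {"a": 2.5, "b": 3.8, "c": 4.9, "d": 3.6, "e": 2.0}
print(select_landmarks(pos1, pos2))   # ['a', 'b', 'c', 'd']; 'e' is filtered out
```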
Once the static landmark features have been determined by either of the two embodiments above, the coincidence degree is computed using the landmark measurement data corresponding to those static landmark features, in order to determine the corresponding movement state. For example, the coincidence degree is computed from the positional deviations between the landmark measurement data corresponding to the same static landmark feature at different positions, so that the positional deviations between the landmark measurement data are minimized, and the corresponding movement state is determined when the coincidence-degree condition is satisfied.
In still other embodiments, coincidence optimization is performed on landmark measurement data that includes both moving and fixed objects; when the degree of coincidence satisfies the coincidence-degree condition, the coinciding landmark measurement data is deemed to correspond to the landmark features of fixed objects.
The coincidence optimization mentioned in any of the above embodiments can be illustrated as follows. Please refer to FIG. 4, which is a schematic diagram, in another embodiment, of the measurement data acquired by the mobile robot of the present application at two positions. As shown in the figure, in the measurement data MData_3 and MData_4 acquired by the mobile robot at the first position and the second position, the circles represent landmark measurement data matched between the two data sets, and the stars represent landmark measurement data that coincide in one round of coincidence computation. By iteratively optimizing the landmark measurement data that can be brought into coincidence, until the change in the number of coinciding landmark measurement data (the stars) across successive rounds of coincidence computation satisfies the coincidence condition, the mobile robot determines the movement state obtained in that round; using the obtained movement state and the coinciding landmark measurement data, it obtains at least the landmark position data in the physical coordinate system to which the landmark features corresponding to the landmark measurement data are mapped, and/or the positioning position data, in the physical coordinate system, of the at least two positions at which the data was acquired.
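A compact sketch of such an iteration is shown below, reusing the `estimate_similarity` helper from the earlier sketch; matches whose residual is small after the current alignment play the role of the "stars", and the loop stops when their count stabilizes. The residual threshold and iteration cap are illustrative choices, not values from the original disclosure.

```python
import numpy as np

def iterative_coincidence(src, dst, residual_thresh=0.05, max_iters=20):
    """Alternate between fitting a similarity transform on the currently
    coinciding matches and re-selecting matches by residual, until the number
    of coinciding points stops changing (the coincidence condition)."""
    inliers = np.arange(len(src))
    prev_count = -1
    for _ in range(max_iters):
        s, R, t = estimate_similarity(src[inliers], dst[inliers])
        residuals = np.linalg.norm((s * (R @ src.T)).T + t - dst, axis=1)
        inliers = np.where(residuals < residual_thresh)[0]
        if len(inliers) == prev_count:
            break
        prev_count = len(inliers)
    return s, R, t, inliers
```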
Using the movement state obtained in any of the above examples, the mobile robot further determines the landmark position information, in the physical coordinate system, of the landmark features corresponding to the landmark image data in the synchronously acquired image data.
While the mobile robot moves, the image capturing device also synchronously acquires image data at each position. By comparing the landmark image data corresponding to common landmark features in the image data captured at the different positions, the robot calculates its movement state in the set coordinate system, such as its amount of translation and rotation in the set coordinate system, or the relative positional relationship between the landmark features and the robot in the set coordinate system. Assuming the robot's initial position is taken as the coordinate origin of the set coordinate system, the coordinates of each positioning position of the robot in the set coordinate system, and the coordinates in the set coordinate system of the landmark features reflected by the image data, can then be obtained.
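For instance, the rotation and the direction of translation between two views can be recovered from matched landmark pixel coordinates via the essential matrix; the sketch below uses OpenCV as one possible tool, and the intrinsic parameters are assumed placeholders. Note that the translation comes back only as a unit vector, which is exactly the missing physical scale discussed next.

```python
import numpy as np
import cv2

def relative_pose_from_matches(pts_img1: np.ndarray, pts_img2: np.ndarray,
                               K: np.ndarray):
    """Recover the rotation and the translation *direction* of the camera
    between two views from matched landmark pixel coordinates (N x 2 arrays)."""
    E, inlier_mask = cv2.findEssentialMat(pts_img1, pts_img2, K,
                                          method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_img1, pts_img2, K, mask=inlier_mask)
    return R, t   # t is a unit vector: its metric length is still unknown

# Assumed pinhole intrinsics (focal length 700 px, principal point 320/240).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
```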
Although the mobile robot can calculate, in its set coordinate system, the coordinates of each position it passes through and the coordinates of each landmark image feature, these coordinates lack actual physical units; that is, the robot cannot know the mapping between the position, in the set coordinate system, of a pixel in its image data and of the corresponding entity measurement point in the physical space, and their positions in the actual physical coordinate system.
The mobile robot uses the movement state, carrying physical units, calculated from the measurement data to convert the coordinates of the positions marked in the set coordinate system into the physical coordinate system, so as to obtain the landmark position data to which the landmark features in each image data are mapped in the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system. Since the ranging sensing device and the image capturing device move as a whole with the mobile robot, the movement state obtained from the measurement data also reflects the robot's movement relative to the landmark features corresponding to the landmark image data in the image data. Here, based on the mapping relationship between the set coordinate system and the physical coordinate system determined by means of the movement state, the robot maps the landmark image features from the set coordinate system into the physical coordinate system to obtain the landmark position data of the corresponding landmark features in the physical coordinate system.
The execution process of obtaining the various position data in the physical coordinate system is described below, taking as an example the case in which the mobile robot optimizes the degree of coincidence between the measurement data acquired at different positions and, under a satisfied coincidence condition, obtains the landmark position data in the physical coordinate system to which the corresponding landmark features in the image data are mapped, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system:
The mobile robot converts the measurement data acquired at the first position and at the second position into the coordinate system of one of those measurement data sets, and iteratively computes the degree of coincidence until the coincidence of the two data sets in that coordinate system satisfies the coincidence condition, thereby obtaining the movement state of the robot under that condition. The movement state includes the physical scale between the relative positional relationship of the robot between the first and second positions and the measured positional relationship between the coinciding landmark measurement data, as well as the pose change data of the robot at the second position relative to the first position. Because the robot moves as a whole, this physical scale also corresponds to the conversion relationship between the set coordinate system and the physical coordinate system (i.e., the map scale). For example, the conversion relationship between the set coordinate system and the physical coordinate system is determined based on this physical scale and the intrinsic and extrinsic parameters of the image capturing device, where the intrinsic and extrinsic parameters include the internal optical parameters and mounting parameters of the image capturing device, as well as the mounting parameters between the image capturing device and the ranging sensing device.
After obtaining the conversion relationship between the set coordinate system and the physical coordinate system, the mobile robot calculates the positioning position data, in the physical coordinate system, corresponding to each of its positions in the set coordinate system, and the landmark position data in the physical coordinate system to which the landmark features in the set coordinate system are mapped. In practice the required data can of course be obtained as needed: in some cases only the landmark position data to which the landmark features are mapped in the physical coordinate system is needed; in other cases only the positioning position data, in the physical coordinate system, of the robot's positions in the physical space is needed; and in still other cases both the landmark position data and the positioning position data are needed.
It should be added that the coincidence condition may be a preset coincidence-degree condition, or the selection of a locally optimal degree of coincidence, including but not limited to: the average coincidence degree of the common landmark features in the two measurement data sets, obtained over multiple coincidence iterations, being highest; the standard deviation of the coincidence degrees of the common landmark features in the two data sets being smallest; or the gradient of the coincidence degree reaching a local minimum.
It should be noted that the positioning position data may also be determined from the movement state obtained when the measurement data satisfies the coincidence-degree condition, or determined after error correction based on the two calculation approaches described above.
A specific example is given below to explain how the degree of coincidence between the measurement data acquired at different positions is optimized to obtain, under a satisfied coincidence condition, the landmark position data in the physical coordinate system to which the landmark features are mapped and the positioning position data of the at least two positions in the physical coordinate system. It should be noted that this example merely serves to explain the present application and does not limit it.
Assume that, in the set coordinate system, the positions of the image capturing device itself in the current physical space at two successive moments (moment 1 and moment 2) are T_wc1 and T_wc2, and the positions of the ranging sensing device in the set coordinate system are T_wb1 and T_wb2. Using the mounting parameter between the image capturing device and the ranging sensing device, i.e., the position T_bc of the ranging sensing device relative to the image capturing device, it follows that: T_wc1 = T_wb1 × T_bc and T_wc2 = T_wb2 × T_bc.
Assume that the scale factor between the distance unit of the set coordinate system and the actual physical coordinate system is s. The coordinates of T_wc1 in the set coordinate system, written in matrix form, are:

$$T_{wc1}=\begin{bmatrix} R_{wc1} & t_{wc1} \\ 0 & 1 \end{bmatrix}$$

and the corresponding coordinates in the real physical coordinate system are:

$$T'_{wc1}=\begin{bmatrix} R_{wc1} & s\,t_{wc1} \\ 0 & 1 \end{bmatrix}$$

The coordinates of T_wc2 in the set coordinate system are:

$$T_{wc2}=\begin{bmatrix} R_{wc2} & t_{wc2} \\ 0 & 1 \end{bmatrix}$$

and the corresponding coordinates in the real physical coordinate system are:

$$T'_{wc2}=\begin{bmatrix} R_{wc2} & s\,t_{wc2} \\ 0 & 1 \end{bmatrix}$$

where R_wc1 and R_wc2 both denote rotations, and t_wc1 and t_wc2 both denote translations.
Similarly, from the distances of the landmark features relative to the mobile robot in the set coordinate system, the coordinates of the landmark features in the set coordinate system, and the corresponding coordinates in the physical coordinate system, can be obtained in an analogous way, which is not repeated here.
Using the matrix T'_wc1, the relative position of the image capturing device between moment 1 and moment 2 can be solved as T_c1c2 = T_c1w · T_wc2, where:

$$T_{c1w}=\left(T'_{wc1}\right)^{-1}$$

and T_wc2 is the position of the image capturing device in the physical coordinate system at moment 2.
Using T_c1c2 = T_c1w · T_wc2, the measurement data at moment 2 is mapped into the coordinate system of the measurement data at moment 1 to form simulated measurement data SData (alternatively, the measurement data at moment 1 could be mapped to moment 2; the principle is the same and is not repeated), and the measurement data acquired by the ranging sensing device at moment 1 is taken as the actual measurement data RData. It should be understood that by mapping the measurement data from moment 2 into moment 1, the positions and angles at moment 1 corresponding to the positions and angles relative to the landmark features at moment 2 can be simulated theoretically.
Here, one of the simulated measurement data SData and the actual measurement data RData is adjusted by a scale factor (i.e., the physical scale) until the degree of coincidence between SData and RData satisfies the coincidence-degree condition, thereby determining the value of the scale factor, namely the value of s appearing in

$$T'_{wc1}=\begin{bmatrix} R_{wc1} & s\,t_{wc1} \\ 0 & 1 \end{bmatrix} \quad\text{and}\quad T'_{wc2}=\begin{bmatrix} R_{wc2} & s\,t_{wc2} \\ 0 & 1 \end{bmatrix}.$$

By substituting s, the coordinates of the mobile robot in the physical coordinate system at moment 1 and moment 2 (i.e., the positioning position data) and the coordinates of the landmark features in the physical coordinate system (i.e., the landmark position data) can then be determined.
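A rough numerical sketch of this scale search is given below, under the simplifying assumption that SData and RData are already expressed as matched point sets in the same frame so that only the scalar s remains to be fitted; the closed-form least-squares value is one simple way to maximize their coincidence, and all numeric values are illustrative.

```python
import numpy as np

def fit_scale(sdata: np.ndarray, rdata: np.ndarray) -> float:
    """Least-squares scalar s minimizing ||s * sdata - rdata||^2 over
    matched measurement points (N x 2 arrays)."""
    return float((sdata * rdata).sum()) / float((sdata * sdata).sum())

# Simulated data expressed in scale-free (set-coordinate) units versus the
# same points measured metrically by the ranging sensing device.
sdata = np.array([[1.0, 0.2], [0.4, 1.1], [1.3, 0.9]])
rdata = 0.37 * sdata + np.random.normal(0.0, 0.002, sdata.shape)  # noisy metric data

print(fit_scale(sdata, rdata))   # close to 0.37: the physical scale linking the frames
```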
In other examples, to prevent external factors during movement, such as bumps or the robot being picked up, from disturbing the data acquired by the ranging sensing device and the image capturing device, the mobile robot obtains the corresponding movement state by analyzing how both the measurement data and the image data obtained at different positions reflect changes in its physical position.
In an exemplary embodiment, the present application takes the error of the image capturing device into account so as to jointly optimize the errors of the image capturing device and the ranging sensing device, thereby improving the accuracy of map construction.
Here, the degree of coincidence between the measurement data acquired at different positions and the degree of coincidence between the captured image data are jointly optimized, so as to obtain, under a coincidence condition satisfied by the joint optimization, the landmark position data in the physical coordinate system to which the corresponding landmark features in the image data are mapped, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
In one embodiment, the measurement data and the image data are each adjusted to optimize the degree of coincidence of the measurement data and the degree of coincidence of the image data respectively; when the error between the movement states obtained from the respective degrees of coincidence satisfies an error condition, the corresponding movement state is used to determine the landmark position data in the physical coordinate system to which the corresponding landmark features in the image data are mapped, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system. Examples of the error condition include a preset error threshold, or selecting the minimum error over a preset number of adjustments.
Taking as an example the mobile robot synchronously acquiring image data and measurement data at a first position and a second position, the measurement data acquired at the first position is mapped to the second position, and by computing the degree of coincidence a simulated version of the measurement data at the second position is obtained; that is, the mapped measurement data determined with the current degree of coincidence corresponds to a simulated movement state of the robot, and it represents the measurement data that would be expected at the second position if the data from the first position were propagated according to that movement state.
At the same time, the image capturing device also calculates the movement state of the robot in the set coordinate system (i.e., the translation and rotation in the set coordinate system) by comparing the landmark image data corresponding to common landmark features in the image data captured at the two positions. The movement state obtained from the image data and the movement state obtained from the measurement data are related through the mounting parameters between the image capturing device and the ranging sensing device.
Using this correspondence, the error between the two movement states is adjusted iteratively until the adjusted error satisfies an error condition; the two movement states are then used to determine the landmark position data of the landmark features in the physical coordinate system, and/or the positioning position data of the first and second positions in the physical coordinate system. When the coinciding measurement data is taken as the landmark measurement data of the corresponding landmark features, the landmark position data of each piece of landmark measurement data in the physical coordinate system can also be determined.
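One heavily simplified way to picture this joint adjustment: the camera provides a scale-free translation, the range registration provides a metric translation, and a shared scale is tuned so the two disagree as little as possible. The scalar-only sketch below is an illustrative stand-in for the full pose optimization; the learning rate, iteration count and numeric values are assumptions.

```python
import numpy as np

def joint_scale_refinement(t_camera_scalefree: np.ndarray,
                           t_range_metric: np.ndarray,
                           s_init: float = 1.0, lr: float = 0.1,
                           max_iters: int = 200, tol: float = 1e-8) -> float:
    """Gradient descent on the squared discrepancy between the camera-derived
    translation (scaled by s) and the metric translation from the range data."""
    s = s_init
    for _ in range(max_iters):
        diff = s * t_camera_scalefree - t_range_metric
        grad = 2.0 * float(diff @ t_camera_scalefree)
        if abs(grad) < tol:
            break
        s -= lr * grad
    return s

t_cam = np.array([0.8, 0.1])    # translation in the set coordinate system
t_rng = np.array([0.32, 0.04])  # metric translation from range registration
print(joint_scale_refinement(t_cam, t_rng))   # converges near 0.4
```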
In another embodiment, the degree of coincidence between the measurement data at the two positions is adjusted, and the movement state obtained at the corresponding degree of coincidence determines the degree of coincidence of the landmark image data in the image data; when the adjusted degree of coincidence between the measurement data and the resulting degree of coincidence between the image data satisfy the coincidence condition, the corresponding movement state is used to determine the landmark position data in the physical coordinate system to which the corresponding landmark features in the image data are mapped, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
Again taking as an example the mobile robot synchronously acquiring image data and measurement data at the first and second positions, the measurement data acquired at the first position is mapped to the second position, and by computing the degree of coincidence a simulated version of the measurement data at the second position is obtained; that is, the mapped measurement data determined with the current degree of coincidence corresponds to a simulated movement state of the robot, and represents the measurement data that would be expected at the second position if the data from the first position were propagated according to that movement state.
Since the movement state obtained from the image data and the movement state obtained from the measurement data are related through the mounting parameters between the image capturing device and the ranging sensing device, the degree of coincidence of the matching landmark image data in the image data acquired at the two positions is computed under the movement state obtained from the measurement data.
The degree of coincidence obtained from the measurement data and the degree of coincidence obtained from the image data are then evaluated together to determine whether the evaluation result satisfies the coincidence condition. If so, the landmark features and positions are mapped into the physical coordinate system using the corresponding movement state; if not, the degree of coincidence of the measurement data continues to be adjusted until the coincidence condition is satisfied.
Through the optimization approach of this embodiment, the errors of the image capturing device and the ranging sensing device can be jointly optimized, so that the advantages of each sensor are exploited while error accumulation is avoided and the accuracy of map construction is improved.
In an exemplary embodiment, the ranging sensing device includes multiple kinds of devices, each of which synchronously acquires its own measurement data according to its own measurement method, for use in analyzing the movement state. For example, the ranging sensing device includes any two or more of the following: a laser ranging sensing device, a binocular camera device, a ToF ranging sensing device, a structured-light ranging sensing device, and the like.
Specifically, when the mobile robot carries multiple kinds of ranging sensing devices, the respective measurement data can be acquired synchronously according to each device's own measurement method, and the measurement data of all the ranging sensing devices can be combined to determine the movement state of the mobile robot.
In one implementation, the measurement data from the various ranging sensing devices belonging to the same position can be fused, and the fused measurement data corresponding to the different positions can then be optimized against one another, so as to obtain, under a coincidence condition satisfied between the measurement data, the position data in the physical coordinate system to which the landmark features are mapped, and/or the position data corresponding to the at least two positions in the physical coordinate system.
Here, each ranging sensing device synchronously acquires measurement data at each position according to its own measurement manner. After the measurement data from the ranging sensing devices are obtained, the measurement data from the ranging sensing devices that belong to the same position are fused, so that the optimization is performed on the basis of the fused measurement data. For example, when the ranging sensing devices include a binocular camera device and a laser sensor, at the first position the binocular camera device determines, from the images captured by its two cameras and by the triangulation principle, the distances of the mobile robot relative to the landmark features so as to obtain first measurement data acquired by the binocular camera device; meanwhile, at the first position the laser sensor also determines its distances relative to the landmark features from the time difference between emitting and receiving the laser beam, so as to obtain first measurement data acquired by the laser sensor. At the second position, the binocular camera device again determines, from the images captured by its two cameras and by the triangulation principle, the distances of the mobile robot relative to the landmark features so as to obtain second measurement data acquired by the binocular camera device; meanwhile, at the second position the laser sensor also determines its distances relative to the landmark features from the time difference between emitting and receiving the laser beam, so as to obtain second measurement data acquired by the laser sensor. After the respective first measurement data are fused and the respective second measurement data are fused, coincidence-degree optimization is performed on the basis of the fused first measurement data and the fused second measurement data; similarly to the various manners mentioned in the foregoing examples, the position data in the physical coordinate system are thereby obtained, which will not be detailed again here.
The fusion processing includes: performing landmark analysis on the measurement data provided by the different ranging sensing devices to obtain landmark measurement data corresponding to common landmark features. For example, the measurement data acquired by the ranging sensing devices each contain a distance relative to the same landmark feature, and the distances detected by the ranging sensing devices relative to that landmark feature are processed by taking their mean or median. The fusion processing further includes: interpolating, averaging or otherwise processing the measurement data provided by the different ranging sensing devices according to the direction angles at which they were measured. For example, the value measured by the binocular camera device at a certain voxel position and the value measured by the single-point laser ranging sensing device at the corresponding voxel position are averaged. As another example, if the ranging value D1 measured by the single-point laser ranging sensing device at a certain direction angle corresponds to a direction angle lying between two voxel positions whose values D21 and D22 were measured by the binocular camera device, then D1 is averaged with D21 and with D22 respectively to obtain new values D21' and D22' for the two voxel positions, which replace D21 and D22. As yet another example, if the ranging value D1 measured by the single-point laser ranging sensing device at a certain direction angle corresponds to a direction angle lying between two voxel positions whose values D21 and D22 were measured by the binocular camera device, then in the two-dimensional distance-point-array data measured by the binocular camera device a column C21 is inserted between the columns C1 and C2 in which D21 and D22 are located, and the values of the other voxel positions in the added column C21 are interpolated from the means of the values in the corresponding rows of columns C1 and C2.
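One way such per-position fusion could look is sketched below, under the assumption that each sensor reports one distance per recognized landmark; the dictionary layout, the function name and the use of a plain mean are illustrative choices, not the disclosed implementation.

    import numpy as np

    def fuse_landmark_distances(per_sensor):
        """per_sensor: dict sensor_name -> dict landmark_id -> distance in meters.
        Returns dict landmark_id -> fused distance (mean over the sensors that observed it)."""
        ids = set()
        for readings in per_sensor.values():
            ids |= readings.keys()
        return {lid: float(np.mean([r[lid] for r in per_sensor.values() if lid in r]))
                for lid in ids}

    # usage: fuse binocular and laser readings taken at the same position
    fused = fuse_landmark_distances({
        "binocular": {"a": 2.05, "b": 3.42},
        "laser":     {"a": 1.98, "b": 3.37, "c": 4.21},
    })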
In another implementation, the measurement data acquired from the different ranging sensing devices at the different positions may each be subjected to coincidence-degree optimization separately, and, when the coincidence degrees among the measurement data satisfy the coincidence condition, the position data obtained by mapping the landmark features into the physical coordinate system and/or the position data corresponding to the at least two positions in the physical coordinate system are obtained.
Here, each ranging sensing device synchronously acquires measurement data at each position according to its own measurement manner. After the measurement data from the ranging sensing devices are obtained, the movement state of the mobile robot is analyzed from each set of measurement data separately, and the coincidence degrees of the measurement data acquired by each ranging sensing device at the different positions are optimized separately so as to obtain the movement states that satisfy the coincidence condition. The movement states are then processed, for example by taking their average or median, to obtain a movement state that can be used to map the landmark feature data in the image data into the physical coordinate system, and the position data in the physical coordinate system are obtained in the manner provided in any of the foregoing examples, which will not be detailed again here.
Referring again to FIG. 1, in step S130, the obtained landmark position data and positioning position data are recorded in map data constructed on the basis of the physical coordinate system.
Here, after steps S110 and S120 have been performed, the landmark position data obtained by mapping the landmark features in the image data into the physical coordinate system and/or the positioning position data corresponding to the at least two positions in the physical coordinate system have been determined, and the obtained landmark position data and/or positioning position data are recorded in the map data constructed on the basis of the physical coordinate system, thereby obtaining a map of the current physical space; the map includes each position the mobile robot has passed through as well as the position of each landmark feature. Here, the map data are data stored in a storage medium through a database, and can be presented to the user in the form of a grid, unit vectors, or the like.
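One possible, purely illustrative in-memory layout for such map data is sketched below; the field names and types are assumptions, since the disclosure only requires that landmark positions, the robot's positioning positions and optional raw records be stored.

    from dataclasses import dataclass, field

    @dataclass
    class MapData:
        # landmark_id -> (x, y) position in the physical coordinate system, in meters
        landmarks: dict = field(default_factory=dict)
        # ordered list of robot positioning positions (x, y, heading) along its route
        robot_poses: list = field(default_factory=list)
        # optional per-pose raw records kept for relocalization (images, point clouds)
        observations: dict = field(default_factory=dict)

    m = MapData()
    m.robot_poses.append((0.0, 0.0, 0.0))     # initial position taken as the origin
    m.landmarks["corner_1"] = (2.4, 1.1)      # a landmark feature mapped into physical coordinates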
In some cases, the map also stores the image data captured by the image capturing device at each position, for use in relocalization. The relocalization method includes: after the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, may compare the image data captured by the image capturing device at a certain position with the image data stored in the map data, and thereby determine its own position in the physical space from the comparison result.
In other cases, the map also stores historical measurement data (for example point cloud data) acquired by the ranging sensing device at each position, for use in relocalization. The relocalization method includes: after the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, may compare the image data captured by the image capturing device at a certain position with the image data stored in the map data, and compare the measurement data acquired by the ranging sensing device at that position with the measurement data stored in the map data, and thereby determine its own position in the physical space from the comparison results.
The method of comparing measurement data includes, but is not limited to, the Iterative Closest Point (ICP) algorithm: the current measurement data and the measurement data stored in the map data to be compared are treated as two point sets, and the points of the two point sets are brought into one-to-one correspondence through a spatial transformation, so as to judge the degree of similarity between the two point sets.
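A minimal point-to-point ICP sketch for two-dimensional point sets is given below for illustration; it uses nearest-neighbour matching and an SVD-based rigid alignment and is not necessarily the variant used in practice.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_2d(src, dst, iters=30):
        """Align src (Nx2) to dst (Mx2); returns rotation R, translation t and the mean residual."""
        R, t = np.eye(2), np.zeros(2)
        cur = src.copy()
        tree = cKDTree(dst)
        for _ in range(iters):
            dist, idx = tree.query(cur)            # nearest-neighbour correspondences
            matched = dst[idx]
            mu_s, mu_d = cur.mean(0), matched.mean(0)
            H = (cur - mu_s).T @ (matched - mu_d)  # cross-covariance of the centred point sets
            U, _, Vt = np.linalg.svd(H)
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:          # keep a proper rotation (no reflection)
                Vt[-1] *= -1
                R_step = Vt.T @ U.T
            t_step = mu_d - R_step @ mu_s
            cur = cur @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t, float(dist.mean())            # a small residual indicates similar point sets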
In an exemplary embodiment, when the ranging sensing device includes a binocular camera device, the method further includes the step of storing, in the map data, the images captured by the binocular camera device in correspondence with the positioning position data of the mobile robot in the physical coordinate system.
Here, the map also stores the images captured by the binocular camera device at each position, for use in relocalization.
The relocalization method includes: after the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, may compare an image captured by the binocular camera device at a certain position with the images stored in the map data, and thereby determine its own position in the physical space from the comparison result.
In some cases, the map simultaneously stores the image data captured by the image capturing device at each position and the images captured by the binocular camera device. After the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, may compare the image data captured by the image capturing device at a certain position with the image data stored in the map data, and compare the images captured by the binocular camera device with the images stored in the map data, and thereby determine its own position in the physical space from the comparison results.
In an exemplary embodiment, the method of constructing the map further includes the step of determining an environment boundary in the map data according to the measurement data measured by the ranging sensing device. Examples of the environment boundary include, but are not limited to, space dividers such as doors, walls, and screens.
Referring to FIG. 2a and FIG. 2b, which are brief schematic diagrams of a map constructed in the present application in one implementation: as shown, during the movement of the mobile robot in the current physical space, the positions and distances relative to the landmark features acquired by the ranging sensing device are marked in the map data. Since a space divider is a spatially continuous, uninterrupted object, the measurement data obtained while the ranging sensing device senses it also form continuous or dense lines, depending on the type of ranging sensing device, thereby forming the environment boundary 11 shown in FIG. 2a and FIG. 2b. In FIG. 2a and FIG. 2b, the horizontal and vertical axes represent the X axis and the Y axis of the physical coordinate system respectively; the discrete point (0, 0) in FIG. 2a represents the initial position of the mobile robot, and the other discrete points in FIG. 2b are the positions the mobile robot passed through during its movement as well as other landmark features in the physical environment.
It should be noted that, although the foregoing embodiments are described by taking the example in which the mobile robot synchronously acquires image data and measurement data at two positions, in practical applications three positions, four positions, and so on may also be used, which will not be enumerated here.
Aiming at the problem that a mobile robot is difficult to position in a physical space that is large in extent and whose furnishing environment exhibits a certain degree of similarity, the present application synchronously acquires measurement data reflecting the more complex environment beside the mobile robot and image data covering the less complex environment above the mobile robot, thereby achieving a broader perception of the environment in the physical space; and, by constructing the map from both the measurement data and the image data, it solves the problems that a map constructed from either kind of data alone has low accuracy and a low density of landmarks useful for accurate positioning.
An embodiment of the second aspect of the present application provides a positioning method, in which the errors are jointly optimized through the data respectively provided by the image capturing device and the ranging sensing device, so as to determine the position of the mobile robot in the physical space.
The positioning method may be executed by a processor in the mobile robot, or by a server communicating with the mobile robot.
Map data of the current physical space may be pre-stored in the mobile robot or the server, in which case the current position of the mobile robot in the physical space can be determined on the basis of the map data. Alternatively, when no map data are pre-stored, the current position of the mobile robot in the physical space can also be determined on the basis of the data of the physical space acquired by the mobile robot at the current position and the data of the physical space acquired at the previous position, and map data of the current physical space are constructed during the movement.
Referring to FIG. 5, which is a schematic diagram of the positioning method of the mobile robot of the present application in one implementation: as shown, in step S210, the image capturing device and the ranging sensing device are controlled to synchronously acquire image data and measurement data at least at a first position and a second position of the mobile robot respectively, wherein the first position is mapped to first positioning position data in the map data.
Here, the mobile robot passes through multiple positions during its movement in the current physical space. In order to associate the image data and the measurement data accurately, the image capturing device and the ranging sensing device of the mobile robot acquire, at each position, the image data and the measurement data corresponding to that position.
Taking as an example a mobile robot working in an airport, a supermarket or a similar environment, the physical space in which the mobile robot is located contains moving objects such as walking passengers and their suitcases, or walking customers and their shopping carts, as well as fixed objects such as fences, shelves and standing people; therefore, the measurement data measured at the at least two positions contain fixed objects and possibly also moving objects. The image data acquired at the same two positions contain at least image data of entities in the physical space located vertically above the detection range of the ranging sensing device, and may also contain image data corresponding to part of the measurement data.
In order to improve the accuracy of the data when the data from the two independent devices are used in fusion, the image data and the measurement data belonging to the same position are acquired synchronously. Here, the synchronous acquisition may be achieved by having the mobile robot stop at each of the positions it passes through, acquire the image data and the measurement data, and then continue moving; or by using a synchronization signal so that the image data and the measurement data are acquired synchronously without stopping.
The first position is mapped to the first positioning position data in the map data. The first position may be a position the mobile robot passes through in the current movement operation; or, when map data of the current physical space are pre-stored, the first position may also be a position stored in the map data during a historical movement operation of the mobile robot.
In a possible implementation, the synchronous acquisition may be based on an external synchronization signal, whereby the image capturing device and the ranging sensing device synchronously acquire the image data and the measurement data corresponding to the position; or it may be based on a synchronization signal within the image capturing device or the ranging sensing device, whereby the image capturing device and the ranging sensing device synchronously acquire the image data and the measurement data corresponding to the position. For example, while the mobile robot is working in the current physical space, the control device in the mobile robot sends a synchronization signal to the image capturing device and the ranging sensing device at certain time intervals, so that the image capturing device and the ranging sensing device acquire the image data and the measurement data respectively. As another example, the image capturing device and the ranging sensing device are each provided with a clock module and preset with the same clock-signal mechanism so as to issue signals synchronously; when the image capturing device and the ranging sensing device each receive their respective synchronization signal, they respectively execute the steps of acquiring the image data and the measurement data.
In some cases, the external synchronization signal may also be generated on the basis of a signal from an inertial navigation sensor of the mobile robot. The inertial navigation sensor (IMU) is used to acquire inertial navigation data of the mobile robot. The inertial navigation sensor includes, but is not limited to, one or more of a gyroscope, an odometer, an optical flow meter, and an accelerometer. The inertial navigation data acquired by the inertial navigation sensor include, but are not limited to, one or more of the speed data, acceleration data, distance moved, number of wheel revolutions, and wheel deflection angle of the mobile robot.
For example, an IMU is arranged in the drive assembly of the mobile robot (i.e. the assembly, such as the wheel set, used to propel the mobile robot); this IMU is connected to the lower-level computer of the mobile robot, and the lower-level computer is connected to the image capturing device; in addition, an IMU is also arranged in the ranging sensing device, and the ranging sensing device and the image capturing device are connected to the upper-level computer of the mobile robot. Here, the timestamps of the IMU in the ranging sensing device and of the IMU in the drive assembly of the mobile robot are kept synchronized, so that the IMU in the drive assembly generates a synchronization signal to the image capturing device to cause it to acquire image data, while the IMU in the ranging sensing device simultaneously generates a synchronization signal to cause the ranging sensing device to acquire measurement data.
As another example, an IMU is arranged in the drive assembly of the mobile robot (i.e. the assembly, such as the wheel set, used to propel the mobile robot); this IMU is connected to the lower-level computer of the mobile robot, and the lower-level computer is connected to the image capturing device and to the ranging sensing device respectively. Whenever the IMU detects that the wheel set has rotated a preset number of revolutions, it sends a synchronization signal to the image capturing device and the ranging sensing device, whereupon the image capturing device acquires image data and the ranging sensing device acquires measurement data.
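A sketch of such revolution-triggered synchronous acquisition might look as follows; the class name, callback names and encoder interface are assumptions, and a real system would attach hardware capture hooks instead of the print placeholders.

    class SyncTrigger:
        """Fires both capture callbacks each time the wheel completes a preset number of revolutions."""
        def __init__(self, revs_per_trigger, capture_image, capture_scan):
            self.revs_per_trigger = revs_per_trigger
            self.capture_image = capture_image      # e.g. camera driver hook
            self.capture_scan = capture_scan        # e.g. rangefinder driver hook
            self._accumulated = 0.0

        def on_encoder_update(self, revolutions_since_last):
            self._accumulated += revolutions_since_last
            while self._accumulated >= self.revs_per_trigger:
                self._accumulated -= self.revs_per_trigger
                self.capture_image()                # both devices sample at the same instant
                self.capture_scan()

    # usage
    trigger = SyncTrigger(0.5, lambda: print("image"), lambda: print("scan"))
    trigger.on_encoder_update(0.6)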
For ease of description, the following embodiments will take as an example how the mobile robot is positioned at two positions (i.e. a first position and a second position).
Referring again to FIG. 5, in step S220, at least the measurement data respectively measured at the first position and the second position are used to analyze the movement state of the mobile robot in the physical space, so as to determine, when the mobile robot is at the second position relative to the first position, second positioning position data of the second position in the map data.
Here, the measurement data and the image data acquired in step S210 both contain data useful for positioning within different height ranges of the physical space in which the mobile robot is located, such as landmark data and distances/angles relative to landmark features. In addition, since the ranging sensing device and the image capturing device move as a whole with the mobile robot, the movement state determined at least from the measurement data acquired at the different positions can solve the problems that the landmark features reflected in the image data only have landmark positions in a set coordinate system that lacks a physical scale (such as a map scale), and that using the measurement data alone is not conducive to obtaining high-precision positioning. Examples of the set coordinate system include a camera coordinate system, or a virtual coordinate system that lacks a mapping relationship, determined on the basis of a physical scale, with the physical coordinate system. Accordingly, the movement state of the mobile robot described by the measurement data and the landmark data corresponding to the landmark features in the image data are used to calculate the landmark position data obtained by mapping the landmark features captured by the image data into the physical coordinate system.
Specifically, without a reference reflecting the real movement of the mobile robot in the physical space, the mobile robot cannot determine, from the image data alone, data reflecting positional relationships in the actual physical space, such as the physical distance it has moved or the physical distance between itself and an entity. For this reason, the mobile robot analyzes the measurement data, which have physical meaning, to obtain its movement state in the physical space; using this movement state together with the preset internal and external parameters of the image capturing device and the ranging device with respect to their assembly on the mobile robot and to image acquisition, the mobile robot constructs the correspondence between the landmark data in the image data and the landmark position data in the physical coordinate system, thereby determining the landmark position data of the corresponding landmark features in the physical coordinate system.
The movement state includes at least one of the following: data related to the positioning of the mobile robot itself, data used to help the mobile robot determine its positioning, or data such as a physical scale reflecting the mapping relationship between changes in the moving position of the mobile robot in the physical space and changes in the image pixel positions of the landmark features reflected in the image data. The data related to the positioning of the mobile robot itself include, but are not limited to: changes in the pose of the mobile robot relative to an entity, changes in pose between successive positions of the mobile robot, or information on the positions the mobile robot passes through; the data used to help the mobile robot determine its positioning include, but are not limited to: the relative positional relationship (such as pose-change data) between the mobile robot and the landmark data corresponding to the same landmark feature in the measurement data, the landmark positions of the landmark features, and the like.
The manners in which the mobile robot analyzes its movement state in the physical space at least from the measurement data acquired at different positions include the following examples:
In some examples, the mobile robot obtains the corresponding movement state by analyzing the measurement data measured at different positions in a manner that reflects the change in its physical position. The analysis reflecting the change in its physical position includes analyzing the movement behaviour of the mobile robot using measurement data that have physical meaning. By analyzing the values of the measurement data relative to the same entity at at least two positions, the positional deviations between the measurement data relative to the same entity, and so on, the mobile robot obtains the movement state produced by the change in its moving pose.
To improve the accuracy with which the movement state is measured, in some specific examples the mobile robot calculates the movement state using the identified landmark measurement data corresponding to common landmark features in the measurement data.
Taking as an example the case where the mobile robot acquires measurement data at a first position and a second position, the measurement data respectively acquired by the mobile robot at the first position and at the second position share at least one common landmark feature, and the movement state of the mobile robot can be calculated from the distances of the mobile robot relative to the common landmark feature at the two positions.
In the process in which the mobile robot starts positioning from a preset initial movement position, the mobile robot, knowing the positioning position information of the previous position in the physical coordinate system, further uses the movement state between the successive positions obtained in the above examples to obtain the positioning position data corresponding to the at least two positions in the physical coordinate system. It should be understood that, since the measurement data acquired by the ranging sensing device are data with actual physical units, the calculated movement state also has actual physical units.
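A sketch of this propagation step is given below, assuming the movement state is expressed as a planar displacement and heading change in the robot frame of the previous position; the function name and tuple layout are illustrative.

    import math

    def propagate_pose(prev_pose, movement_state):
        """prev_pose: (x, y, heading) in the physical coordinate system.
        movement_state: (dx, dy, dtheta) expressed in the robot frame at prev_pose."""
        x, y, th = prev_pose
        dx, dy, dth = movement_state
        # rotate the robot-frame displacement into the physical coordinate system
        gx = x + dx * math.cos(th) - dy * math.sin(th)
        gy = y + dx * math.sin(th) + dy * math.cos(th)
        return (gx, gy, th + dth)

    # usage: second positioning position from the first one plus the estimated movement state
    second_pose = propagate_pose((1.0, 2.0, 0.0), (0.5, 0.0, math.pi / 6))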
In other specific examples, the mobile robot obtains the movement state by calculating the degree of coincidence between the measurement data acquired at different positions. A single set of measurement data provides the positional relationship between the mobile robot and the entity portions lying in the corresponding measurement plane of the physical space; when the mobile robot measures the same entity from different positions, the data reflect the change in the measured position of that entity within the measurement data caused by the change in the pose of the mobile robot. For example, the mobile robot may rotate, translate, scale or otherwise process the measurement data acquired at one of the positions to obtain measurement data that coincide with some of the measurement points in the measurement data of the other position; this rotation, translation and scaling reflects a process of simulating the movement operation performed by the mobile robot so as to make the corresponding measurement points coincide. Therefore, by performing the coincidence-degree optimization, the mobile robot can simulate movement states such as the pose-change data between the different positions and the physical scale, and can use some of these movement states, such as the pose-change data, together with the physical quantities provided by the coinciding measurement data, to determine other data of the movement state, such as the second positioning position data to which the second position is mapped in the map data.
Since the measurement data contain both physical quantities corresponding to fixed objects and physical quantities corresponding to moving objects measured by the ranging sensing device at the corresponding positions, in an exemplary embodiment the mobile robot optimizes the degree of coincidence between the measurement data acquired at different positions, so as to obtain the second positioning position data corresponding to the second position in the map data under the coincidence condition among the measurement data. The mobile robot may regard the coinciding portions of the measurement data as landmark data corresponding to landmark features. Examples of the coincidence condition include the number of iterative optimizations of the coincidence degree reaching a preset iteration-count threshold, and/or the gradient value of the coincidence degree being smaller than a preset gradient threshold, and the like. This manner helps to calculate the movement state, as far as possible, from the physical quantities in the measurement data that reflect fixed objects.
Taking one-dimensional measurement data as an example, refer to FIG. 3, which is a schematic diagram of the measurement data acquired by the mobile robot of the present application at two positions respectively, in one implementation. As shown, the mobile robot acquires measurement data MData_1 and MData_2 at the first position and the second position respectively; if an entity with a fixed position exists in the physical space, the measurement data MData_1 and MData_2 contain measurement points corresponding to that entity which can be made to coincide after data processing. From the measurement points of entity 311, entity 312 and entity 313 in the measurement data MData_1 and MData_2 in the figure, it can be seen that entity 311 and entity 313 in the two sets of measurement data can both be made to coincide after data processing, while entity 312 cannot coincide simultaneously with entity 311 and entity 313; entity 311 and entity 313 can therefore be regarded as entities with fixed positions. The data processing applied by the mobile robot to the two sets of measurement data MData_1 and MData_2, including rotation, translation and scaling, reflects the relative positional relationship between the first position and the second position of the mobile robot. For this purpose, the mobile robot performs the coincidence-degree optimization calculation using an optimization function constructed from the two sets of measurement data MData_1 and MData_2 until the degree of coincidence of MData_1 and MData_2 satisfies the preset coincidence condition. Pose-change data reflecting the relative positional relationship are thereby obtained, the coinciding measurement data points of the two sets of measurement data under the preset coincidence condition are taken as landmark measurement data, and, on the basis of the obtained pose-change data, the second positioning position data corresponding to the second position in the map data are obtained under the coincidence condition among the measurement data. It should be noted that the above example also applies to the calculation processing in which the measurement data are two-dimensional.
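The rotation/translation/scaling optimization described above could be sketched as follows for 2D points with known correspondences; the squared-distance cost and the stopping thresholds standing in for the iteration-count and gradient coincidence conditions are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def coincidence_cost(params, pts1, pts2):
        # params = (theta, tx, ty, s): rotation, translation and scale applied to pts1
        theta, tx, ty, s = params
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        mapped = s * (pts1 @ R.T) + np.array([tx, ty])
        return np.mean(np.sum((mapped - pts2) ** 2, axis=1))   # lower cost = higher coincidence

    def estimate_alignment(pts1, pts2):
        # maxiter and gtol stand in for the iteration-count and gradient coincidence conditions
        res = minimize(coincidence_cost, x0=np.array([0.0, 0.0, 0.0, 1.0]),
                       args=(pts1, pts2), method="L-BFGS-B",
                       options={"maxiter": 100, "gtol": 1e-6})
        return res.x, res.fun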
In another implementation, when map data of the current physical space are pre-stored and the first position corresponds to the first positioning position data in the map data, the second positioning position data to which the second position is mapped in the map data can be calculated, on the basis of the first positioning position data, from the movement state of the second position relative to the first position. For example, when map data of the current physical space have been obtained during the historical operation of the mobile robot through the map construction method described in the embodiments of the first aspect of the present application, then, in the current movement operation, the mobile robot is located at the second position, which shares common landmark features with the first position; the movement state of the second position relative to the first position can therefore be determined on the basis of the common landmark features, and the second positioning position data corresponding to the second position are determined on the basis of the first positioning position data corresponding to the first position and the movement state.
Of course, in some cases, since the mobile robot, when constructing the map data, has already acquired and stored the image data and measurement data corresponding to multiple positions in the physical space, the second position at which the mobile robot is currently located may be the same as, or very close to, the corresponding first position in the map data. In this case, by comparing the image data and measurement data acquired at the second position with the image data and measurement data corresponding to the first position stored in the map data, the first positioning position data corresponding to the first position can be determined as the second positioning position data corresponding to the second position.
To improve computational efficiency, in yet another exemplary embodiment, the mobile robot optimizes the degree of coincidence between the landmark measurement data in the measurement data acquired at different positions, so as to obtain the movement state of the mobile robot under the coincidence condition among the measurement data.
The optimization process is similar to the above example for one-dimensional data. The difference is that, in an exemplary embodiment, the positioning method further includes the step of extracting landmark features from the measurement data, so that the movement state is analyzed using the landmark measurement data corresponding to the landmark features in the measurement data. The landmark measurement data include landmark measurement data corresponding to moving objects as well as landmark measurement data corresponding to fixed objects. For example, the measurement data acquired by the ranging sensing device include its distances relative to the features in the physical space, which include some stationary features (such as flower pots, coffee tables and shelves) and some moving features (such as people, pets and shopping carts being pushed).
It should be understood that, in some scenarios, some features in the physical space are not suitable to be used as landmark features, for example movable objects such as people and pets. If such unsuitable features are marked in the map data, the subsequent positioning of the mobile robot will be affected and large errors will easily result. Therefore, the data in the measurement data need to be screened to determine which of them can be used as landmark features, so that the extracted landmark features can be used to analyze the movement state and improve the calculation accuracy.
In one implementation, features that meet the requirements are screened out as landmark features by means of a threshold. For example, the measurement data acquired at the first position and the measurement data acquired at the second position share five common features a, b, c, d and e. The values of the movement states calculated from a, b, c and d differ from one another by less than 1 cm, whereas the movement state calculated from e differs from those calculated from the other features by 74–75 cm. If the screening threshold is set to 5 cm, then a, b, c and d can be extracted as landmark features, while e is likely a moving feature and cannot be used as a landmark feature.
In another implementation, landmark features may also be extracted directly on the basis of the variation of the values in the measurement data acquired at the various positions. Specifically, among the measurement data acquired at different positions, the values for stationary features change regularly from position to position, whereas the values for moving features change quite differently from those of the other, stationary features. Therefore, by means of a preset variation threshold, the features whose distance values change relatively regularly can be found among the distances to the features in the measurement data and used as landmark features. For example, the measurement data acquired at the first position and at the second position share five common features a, b, c, d and e. At the first position, the distance of feature a from the mobile robot is 2 m, of feature b 3.4 m, of feature c 4.2 m, of feature d 2.8 m, and of feature e 5 m; at the second position, the distance of feature a from the mobile robot is 2.5 m, of feature b 3.8 m, of feature c 4.9 m, of feature d 3.6 m, and of feature e 2 m. It can thus be seen that the change of feature a between the two positions is 0.5 m, of feature b 0.4 m, of feature c 0.7 m, of feature d 0.8 m, and of feature e 3 m. Assuming a preset variation threshold of 0.5 m, a, b, c and d can be extracted as landmark features, while e is likely a moving feature and cannot be used as a landmark feature. A sketch reproducing this screening follows below.
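In the sketch, interpreting the variation threshold as the allowed deviation of each feature's change from the median change of all common features is an assumption made so that the example above yields the stated outcome.

    import numpy as np

    def select_landmarks(dist_pos1, dist_pos2, threshold=0.5):
        """dist_pos*: dict feature_id -> distance to the robot in meters.
        Keeps features whose change between positions stays close to the typical (median) change."""
        common = sorted(dist_pos1.keys() & dist_pos2.keys())
        change = {f: abs(dist_pos2[f] - dist_pos1[f]) for f in common}
        typical = float(np.median(list(change.values())))
        return [f for f in common if abs(change[f] - typical) <= threshold]

    pos1 = {"a": 2.0, "b": 3.4, "c": 4.2, "d": 2.8, "e": 5.0}
    pos2 = {"a": 2.5, "b": 3.8, "c": 4.9, "d": 3.6, "e": 2.0}
    print(select_landmarks(pos1, pos2))   # ['a', 'b', 'c', 'd']; 'e' is rejected as a moving feature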
When the static landmark features determined in either of the above two implementations have been determined, the coincidence degree is calculated using the landmark measurement data corresponding to the static landmark features in the measurement data, so as to determine the corresponding movement state. For example, the coincidence degree is calculated from the positional deviations between the landmark measurement data corresponding to the same static landmark feature at the different positions, so that the positional deviations between the landmark measurement data are minimized, and the corresponding movement state is determined once the coincidence-degree condition is found to be satisfied.
In still other implementations, coincidence-degree optimization is performed on the landmark measurement data including both moving objects and fixed objects; when the coincidence degree satisfies the coincidence-degree condition, the coinciding landmark measurement data are deemed to correspond to landmark features of fixed objects.
The coincidence-degree optimization mentioned in any of the above implementations can be exemplified as follows. Refer to FIG. 4, which is a schematic diagram of the measurement data acquired by the mobile robot of the present application at two positions respectively, in another implementation. As shown, the mobile robot acquires measurement data MData_3 and MData_4 at the first position and the second position; the circles represent landmark measurement data matched between the two sets of measurement data, and the stars represent landmark measurement data of the two sets that coincide in one coincidence calculation. The mobile robot iteratively optimizes the landmark measurement data that can be made to coincide until the change in the number of landmark measurement data represented by stars over successive coincidence calculations reaches the coincidence condition, and then determines the movement state obtained in the corresponding coincidence calculation; using the obtained movement state and the coinciding landmark measurement data, the second positioning position data corresponding to the second position in the map data are obtained under the coincidence condition among the measurement data.
Using the movement state obtained in any of the above examples, the mobile robot further determines the landmark position information, in the physical coordinate system, of the landmark features corresponding to the landmark image data in the synchronously acquired image data.
During the movement of the mobile robot, the image capturing device also synchronously acquires image data at each position. By comparing the landmark image data corresponding to common landmark features in the image data captured at the various positions, the amount of movement and the amount of rotation of the mobile robot in the set coordinate system are calculated, as well as, for example, the relative positional relationships between the landmark features and the mobile robot in the set coordinate system. Assuming that, in the set coordinate system, the initial position of the mobile robot is taken as the coordinate origin, the coordinates of each positioning position of the mobile robot in the set coordinate system, as well as the landmark position coordinates, in the set coordinate system, of the landmark features reflected in the image data, can be obtained.
Although the mobile robot can calculate, in its set coordinate system, the coordinates of the positions it passes through and the coordinates of the landmark image features, these coordinates lack actual physical units; that is, the mobile robot cannot know the mapping relationship between the position, in the set coordinate system, of a pixel in its image data and of the corresponding entity measurement point in the physical space, and their positions in the actual physical coordinate system.
Using the movement state with physical units calculated from the measurement data, the mobile robot converts the position coordinates marked in the set coordinate system into the physical coordinate system, so as to obtain, under the coincidence condition among the measurement data, the second positioning position data corresponding to the second position in the map data, and so on. Since the ranging sensing device and the image capturing device move as a whole with the mobile robot, the movement state obtained by the mobile robot from the measurement data reflects the movement of the mobile robot relative to the landmark features corresponding to the landmark image data in the image data. Here, on the basis of the mapping relationship between the set coordinate system and the physical coordinate system determined by means of the movement state, the mobile robot maps the landmark image features in the set coordinate system into the physical coordinate system, so as to obtain the landmark position data of the corresponding landmark features in the physical coordinate system.
Taking as an example the case where the mobile robot optimizes the degree of coincidence between the measurement data acquired at different positions to obtain, under the coincidence condition among the measurement data, the second positioning position data corresponding to the second position in the map data, the process of obtaining the second positioning position data in the physical coordinate system is described as follows:
The measurement data respectively acquired by the mobile robot at the first position and at the second position are converted into the coordinate system of one of the sets of measurement data, and the coincidence degree is calculated iteratively so that the coincidence degree of the two sets of measurement data in that coordinate system satisfies the coincidence condition. The movement state of the mobile robot under that coincidence condition is thereby obtained. The movement state includes the physical scale between the relative positional relationship of the mobile robot between the first position and the second position and the measured positional relationship between the coinciding landmark measurement data, the pose-change data of the mobile robot at the second position relative to the first position, and so on. Under the effect of the overall movement of the mobile robot, this physical scale also corresponds to the conversion relationship between the set coordinate system and the physical coordinate system (i.e. the map scale). For example, the conversion relationship between the set coordinate system and the physical coordinate system is determined on the basis of the physical scale and the internal and external parameters of the image capturing device, where the internal and external parameters include the internal optical parameters and assembly parameters of the image capturing device, as well as the assembly parameters between the image capturing device and the ranging sensing device, and the like.
After the conversion relationship between the set coordinate system and the physical coordinate system has been obtained, the mobile robot calculates the positioning position data, in the physical coordinate system, corresponding to each position of the mobile robot expressed in the set coordinate system.
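A sketch of recovering the scale factor and converting set-coordinate positions into the physical coordinate system is given below, under the simplifying assumption that the two coordinate systems differ only by scale and a translation of the origin; a full implementation would also apply the internal and external parameters mentioned above.

    import numpy as np

    def estimate_scale(displacement_set, displacement_physical):
        """Ratio between a displacement measured in physical units (from the rangefinder)
        and the same displacement expressed in the unitless set coordinate system."""
        return np.linalg.norm(displacement_physical) / np.linalg.norm(displacement_set)

    def to_physical(points_set, s, origin_physical=np.zeros(2)):
        """Map set-coordinate points (Nx2) into the physical coordinate system."""
        return np.asarray(points_set) * s + origin_physical

    s = estimate_scale(np.array([1.0, 0.0]), np.array([0.42, 0.0]))   # 1 set unit = 0.42 m
    landmarks_physical = to_physical([[2.0, 3.0], [5.0, 1.0]], s)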
It should be added that the coincidence condition may be a preset coincidence-degree condition, or the selection of a locally optimal coincidence degree, including but not limited to: the average coincidence degree of the common landmark features in the two sets of measurement data being highest, the standard deviation of the coincidence degrees of the common landmark features in the two sets of measurement data being smallest, the gradient of the coincidence degree being a local minimum, and the like.
需要说明的是,所述定位位置数据还可以依据测量数据在符合重合度条件时所得到的移动状态而确定,或者所述定位位置数据依据上述两种计算方式进行误差纠偏后确定。It should be noted that the positioning position data may also be determined according to the movement state obtained when the measurement data meets the coincidence condition, or the positioning position data may be determined after error correction according to the above two calculation methods.
以下将通过一具体的示例说明如何对不同位置所获取的测量数据之间的重合度进行优化处理,得到在符合重合条件下的第二位置在所述地图数据中的第二定位位置数据。需要说明的是,本示例仅用于解释本申请而非对本申请的限制。The following will use a specific example to describe how to optimize the degree of coincidence between the measurement data obtained at different locations, so as to obtain the second positioning location data of the second location in the map data under the coincidence condition. It should be noted that this example is only used to explain the present application and not to limit the present application.
Assume that, in the set coordinate system, the poses of the image capturing device itself in the current physical space at two successive times (time 1 and time 2) are T_wc1 and T_wc2, and that the poses of the ranging sensing device in the set coordinate system are T_wb1 and T_wb2. Using the installation parameters between the image capturing device and the ranging sensing device, i.e., the pose T_bc of the ranging sensing device relative to the image capturing device, it follows that T_wc1 = T_wb1·T_bc and T_wc2 = T_wb2·T_bc.
Assume that the scale factor of the conversion relationship between the distance unit of the set coordinate system and the actual physical coordinate system is s. The pose T_wc1 in the set coordinate system, written in matrix form, is
$$T_{wc1}=\begin{bmatrix} R_{wc1} & t_{wc1} \\ 0 & 1 \end{bmatrix},$$
and the corresponding pose in the real physical coordinate system is
$$T'_{wc1}=\begin{bmatrix} R_{wc1} & s\,t_{wc1} \\ 0 & 1 \end{bmatrix}.$$
Likewise, the pose T_wc2 in the set coordinate system is
$$T_{wc2}=\begin{bmatrix} R_{wc2} & t_{wc2} \\ 0 & 1 \end{bmatrix},$$
and the corresponding pose in the real physical coordinate system is
$$T'_{wc2}=\begin{bmatrix} R_{wc2} & s\,t_{wc2} \\ 0 & 1 \end{bmatrix},$$
where R_wc1 and R_wc2 denote rotations and t_wc1 and t_wc2 denote translations.
Similarly, from the distances of the landmark features relative to the mobile robot in the set coordinate system, the coordinates of each landmark feature in the set coordinate system, and the corresponding coordinates in the physical coordinate system, can be obtained in the same way; this is not repeated here.
Using the matrix T'_wc1, the relative pose of the image capturing device between time 1 and time 2 can be solved as T_c1c2 = T_c1w·T_wc2, where
$$T_{c1w}=\left(T'_{wc1}\right)^{-1}=\begin{bmatrix} R_{wc1}^{\top} & -s\,R_{wc1}^{\top}t_{wc1} \\ 0 & 1 \end{bmatrix}$$
and T_wc2 is the pose of the image capturing device in the physical coordinate system at time 2.
Using T_c1c2 = T_c1w·T_wc2, the measurement data at time 2 is mapped into the coordinate system of the measurement data at time 1 to form the simulated measurement data SData (the measurement data at time 1 could equally be mapped to time 2; the principle is the same and is not repeated), and the measurement data acquired by the ranging sensing device at time 1 is taken as the actual measurement data RData. It should be understood that, by mapping the measurement data at time 2 into time 1, the positions and angles relative to the landmark features observed at time 2 can be simulated, in theory, as they would appear at time 1.
Here, one of the simulated measurement data SData and the actual measurement data RData is adjusted by the scale factor (i.e., the physical scale) so that the degree of coincidence between SData and RData satisfies the coincidence condition, thereby determining the value of the scale factor, namely the value of s appearing in the matrices T'_wc1 and T'_wc2 above. By substituting this value of s, the second positioning position data of the second position in the map data can be determined.
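To make the role of the scale factor concrete, the Python sketch below exercises the above relationships numerically in a planar simplification: each candidate scale s is applied to the translations of the two camera poses, the resulting relative pose maps the time-2 scan into the time-1 frame (SData), and the s that makes SData coincide best with the actual data RData is kept. The function names (se2, estimate_scale), the candidate range, and the overlap radius are assumptions for illustration, not the implementation of the present application.

```python
import numpy as np

def se2(R, t):
    """Assemble a 3x3 homogeneous transform from a 2x2 rotation and a translation."""
    T = np.eye(3)
    T[:2, :2] = R
    T[:2, 2] = t
    return T

def relative_pose(R1, t1, R2, t2, s):
    """T_c1c2 = (T'_wc1)^-1 @ T'_wc2, with both translations scaled by s."""
    return np.linalg.inv(se2(R1, s * np.asarray(t1))) @ se2(R2, s * np.asarray(t2))

def mismatch(scan2, rdata, T_c1c2, radius=0.05):
    """1 - coincidence between the mapped time-2 scan (SData) and RData."""
    sdata = (np.c_[scan2, np.ones(len(scan2))] @ T_c1c2.T)[:, :2]
    d = np.linalg.norm(sdata[:, None, :] - rdata[None, :, :], axis=2).min(axis=1)
    return 1.0 - float(np.mean(d < radius))

def estimate_scale(R1, t1, R2, t2, scan2, rdata):
    """Pick the scale whose induced relative pose makes SData coincide best with RData."""
    candidates = np.linspace(0.1, 10.0, 200)
    scores = [mismatch(scan2, rdata, relative_pose(R1, t1, R2, t2, s)) for s in candidates]
    return float(candidates[int(np.argmin(scores))])
```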
In other examples, to prevent external factors such as bumping or being picked up during movement from interfering with the data acquired by the ranging sensing device and the image capturing device, the mobile robot analyzes the measurement data and the image data obtained at different positions for changes that reflect its physical position, and thereby obtains the corresponding movement state.
在一个示例性的实施例中,本申请考虑图像摄取装置中的误差,以联合优化图像摄取装置和测距感应装置的误差,从而提高定位的精度。In an exemplary embodiment, the present application considers the error in the image pickup device to jointly optimize the error of the image pickup device and the ranging sensing device, thereby improving the accuracy of positioning.
Here, the degree of coincidence between the measurement data acquired at different positions and the degree of coincidence between the captured image data are jointly optimized, so as to obtain the second positioning position data of the second position in the map data under the coincidence condition of the joint optimization.
In one implementation, the measurement data and the image data are each adjusted to optimize the degree of coincidence of the measurement data and the degree of coincidence of the image data respectively; when the error between the movement states obtained from the respective degrees of coincidence satisfies an error condition, the corresponding movement state is used to determine the second positioning position data corresponding to the second position in the map data. Examples of the error condition include a preset error threshold, or selecting the minimum error over a preset number of adjustments.
Taking as an example a mobile robot that synchronously acquires image data and measurement data at the first position and at the second position, the measurement data acquired at the first position is mapped to the second position, and a simulated measurement data set corresponding to the second position is obtained by computing the degree of coincidence; that is, the mapped measurement data determined from the current degree of coincidence corresponds to a simulated movement state of the mobile robot, and represents the measurement data at the second position that would be inferred from the measurement data at the first position under that movement state.
At the same time, by comparing the landmark image data corresponding to common landmark features in the image data captured at the two positions, the image capturing device also computes the movement state of the mobile robot in the set coordinate system (i.e., the amount of translation and rotation in the set coordinate system). The movement state obtained from the image data and the movement state obtained from the measurement data are related through the assembly parameters between the image capturing device and the ranging sensing device.
Using this correspondence, the error between the two movement states is adjusted iteratively; once the adjusted error satisfies an error condition, the two movement states are used to determine the landmark position data of the landmark features in the physical coordinate system, and/or the positioning position data of the first position and the second position in the physical coordinate system. If the coinciding measurement data are taken as the landmark measurement data of the corresponding landmark features, the landmark position data of each landmark measurement in the physical coordinate system can also be determined.
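A highly simplified sketch of this alternating error reduction is given below in Python. The range-based and image-based movement states are treated as planar poses (dx, dy, dtheta); the callables refine_range and refine_image stand in for re-running each sensor's own coincidence optimization with the other estimate as a prior, and their internals are deliberately left abstract. All names and the stopping threshold are hypothetical; the sketch illustrates only the error-condition loop, not the optimizer actually used in the present application.

```python
import numpy as np

def pose_error(p1, p2):
    """Discrepancy between two planar movement states (dx, dy, dtheta),
    with the angle difference wrapped into (-pi, pi]."""
    diff = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    diff[2] = (diff[2] + np.pi) % (2 * np.pi) - np.pi
    return float(np.linalg.norm(diff))

def fuse_poses(p1, p2):
    """Mid-point of two planar poses, averaging the angle on the circle."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    theta = np.arctan2((np.sin(p1[2]) + np.sin(p2[2])) / 2.0,
                       (np.cos(p1[2]) + np.cos(p2[2])) / 2.0)
    return np.array([(p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0, theta])

def joint_refine(range_pose, image_pose, refine_range, refine_image,
                 max_iters=20, error_threshold=1e-3):
    """Alternately re-optimize each sensor's movement-state estimate, using the
    other estimate as a prior, until the two agree (the error condition) or the
    iteration budget is exhausted."""
    for _ in range(max_iters):
        if pose_error(range_pose, image_pose) < error_threshold:
            break
        range_pose = refine_range(range_pose, image_pose)
        image_pose = refine_image(image_pose, range_pose)
    return fuse_poses(range_pose, image_pose), pose_error(range_pose, image_pose)
```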
In another implementation, the degree of coincidence between the measurement data of the two positions is adjusted, and the movement state obtained under the corresponding degree of coincidence is used to determine the degree of coincidence of the landmark image data in the image data; when the degree of coincidence between the adjusted measurement data and the degree of coincidence between the adjusted image data satisfy the coincidence condition, the corresponding movement state is used to determine the landmark position data obtained by mapping the corresponding landmark features in the image data into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
Again taking as an example a mobile robot that synchronously acquires image data and measurement data at the first position and at the second position, the measurement data acquired at the first position is mapped to the second position, and a simulated measurement data set corresponding to the second position is obtained by computing the degree of coincidence; that is, the mapped measurement data determined from the current degree of coincidence corresponds to a simulated movement state of the mobile robot, and represents the measurement data at the second position that would be inferred from the measurement data at the first position under that movement state.
Since the movement state obtained from the image data and the movement state obtained from the measurement data are related through the assembly parameters between the image capturing device and the ranging sensing device, the degree of coincidence of the matching landmark image data in the image data acquired at the two positions is computed under the movement state obtained from the measurement data.
The degree of coincidence obtained from the measurement data and the degree of coincidence obtained from the image data are then evaluated to determine whether the evaluation result satisfies the coincidence condition. If so, each position is mapped into the physical coordinate system using the corresponding movement state; if not, the degree of coincidence of the measurement data continues to be adjusted until the coincidence condition is satisfied.
通过本实施例中的优化方式,可将图像摄取装置和测距感应装置的误差联合优化,从而利用各传感器的优势同时避免了误差的累积,提高了定位的精度。Through the optimization method in this embodiment, the errors of the image capturing device and the ranging sensing device can be jointly optimized, so that the advantages of each sensor are used while the accumulation of errors is avoided, and the positioning accuracy is improved.
In one implementation, the measurement data from the different ranging sensing devices that belong to the same position may be fused, and optimization may then be performed between the fused measurement data of the different positions, so as to obtain the second positioning position data of the second position in the map data under the coincidence condition between the measurement data.
Here, each ranging sensing device synchronously acquires measurement data at each position according to its own measurement method; after the measurement data from the ranging sensing devices have been obtained, the measurement data from the different devices that belong to the same position are fused, and the optimization is performed on the fused measurement data. For example, when the ranging sensing devices include a binocular camera device and a laser sensor, at the first position the binocular camera device determines the distance of the mobile robot relative to each landmark feature by triangulation from the images captured by its two cameras, yielding the first measurement data of the binocular camera device; at the same time, the laser sensor determines its distance to the landmark features at the first position from the time difference between emitting and receiving the laser beam, yielding the first measurement data of the laser sensor. At the second position, the binocular camera device again determines the distance of the mobile robot relative to each landmark feature by triangulation from the images captured by its two cameras to obtain its second measurement data, and the laser sensor again determines its distance to the landmark features from the time difference between emitting and receiving the laser beam to obtain its second measurement data. After the first measurement data sets have been fused and the second measurement data sets have been fused, the coincidence-degree optimization is performed on the fused first measurement data and the fused second measurement data; the position data in the physical coordinate system are then obtained in any of the ways mentioned in the preceding examples, which are not detailed again here.
The fusion processing includes performing landmark analysis on the measurement data provided by the different ranging sensing devices to obtain landmark measurement data corresponding to common landmark features. For example, where the measurement data acquired by the ranging sensing devices contain distances to the same landmark feature, the distances detected by the devices relative to that landmark feature are combined by taking their mean or median. The fusion processing also includes interpolating or averaging the measurement data provided by the different ranging sensing devices according to the direction angles at which they were measured. For example, the value measured by the binocular camera device at a given voxel position is averaged with the value measured by a single-point laser ranging device at the corresponding voxel position. As another example, if a range value D1 measured by the single-point laser ranging device at a certain direction angle corresponds to a direction between two voxel positions whose values D21 and D22 were measured by the binocular camera device, then D1 is averaged with D21 and with D22 respectively to obtain new values D21' and D22' for the two voxel positions, which replace D21 and D22. As yet another example, if a range value D1 measured by the single-point laser ranging device at a certain direction angle corresponds to a direction between two voxel positions whose values D21 and D22 were measured by the binocular camera device, a new column C21 is inserted between the columns C1 and C2 of the two-dimensional distance grid measured by the binocular camera device that contain D21 and D22, and the values of the remaining voxel positions in the inserted column C21 are interpolated from the row-wise means of the values in columns C1 and C2.
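The following Python fragment sketches these fusion operations in their simplest form: mean fusion of distances to a common landmark, blending a single-point laser reading into the two neighbouring stereo readings (the D1/D21/D22 case above), and inserting an interpolated column between two columns of a two-dimensional depth grid (the C1/C21/C2 case above). The function names, the use of the mean rather than the median, and the grid layout are assumptions made only for illustration.

```python
import numpy as np

def fuse_landmark_ranges(ranges_by_landmark):
    """Fuse distances to the same landmark reported by several ranging sensors,
    here by taking the mean (a median could be used in the same way)."""
    return {lm: float(np.mean(d)) for lm, d in ranges_by_landmark.items()}

def blend_single_point_into_stereo(d1, d21, d22):
    """Average a single-point laser reading d1, whose bearing falls between two
    stereo-derived readings, with each of those readings (giving D21' and D22')."""
    return (d1 + d21) / 2.0, (d1 + d22) / 2.0

def insert_interpolated_column(depth_grid, col_left, col_right):
    """Insert a new column between adjacent columns col_left and col_right of a
    2-D depth grid; its values are the row-wise means of the two neighbours."""
    new_col = (depth_grid[:, col_left] + depth_grid[:, col_right]) / 2.0
    return np.insert(depth_grid, col_right, new_col, axis=1)
```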
In another implementation, the measurement data acquired at different positions from the different ranging sensing devices may each be subjected to coincidence-degree optimization separately, and the second positioning position data of the second position in the map data is obtained when each of the degrees of coincidence between the measurement data satisfies the coincidence condition.
Here, each ranging sensing device synchronously acquires measurement data at each position according to its own measurement method. After the measurement data from the ranging sensing devices have been obtained, the movement state of the mobile robot is analyzed from each device's measurement data separately, and the degree of coincidence of the measurement data acquired by each ranging sensing device at the different positions is optimized separately to obtain each device's movement state under the coincidence condition. These movement states are then combined, for example by taking their mean or median, to obtain a movement state that can be used to map the landmark feature data in the image data into the physical coordinate system; the position data in the physical coordinate system are then obtained in the manner of any of the preceding examples and are not repeated here.
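A small illustration of combining the per-sensor movement states is given below; it averages planar movement states (dx, dy, dtheta), using a circular mean for the rotation angle. Representing the movement state as a planar pose is an assumption made only for the example.

```python
import numpy as np

def average_movement_states(states):
    """Combine per-sensor movement states (dx, dy, dtheta): average the
    translations and take a circular mean of the rotation angles."""
    states = np.asarray(states, dtype=float)
    mean_xy = states[:, :2].mean(axis=0)
    mean_theta = np.arctan2(np.sin(states[:, 2]).mean(), np.cos(states[:, 2]).mean())
    return np.array([mean_xy[0], mean_xy[1], mean_theta])
```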
Addressing the difficulty of localizing a mobile robot in physical spaces that are large and whose furnishings look similar from place to place, the present application synchronously acquires measurement data that reflects the more complex environment at the sides of the mobile robot and image data that covers the less complex environment above the mobile robot, thereby achieving a broader perception of the physical space; and by localizing with both the measurement data and the image data, it overcomes problems such as the low positioning accuracy obtained from either kind of data alone and the low density of landmarks that support accurate positioning.
本申请还提供一种服务端。The present application also provides a server.
所述服务端包括但不限于单台服务器、服务器集群、分布式服务器群、云服务端等。在此,根据实际设计,所述服务端由云提供商所提供的云服务端提供。其中,所述云服务端包括公共云(Public Cloud)服务端与私有云(Private Cloud)服务端,其中,所述公共或私有云服务端包括Software-as-a-Service(软件即服务,SaaS)、Platform-as-a-Service(平台即服务,PaaS)及Infrastructure-as-a-Service(基础设施即服务,IaaS)等。所述私有云服务端例如阿里云计算服务平台、亚马逊(Amazon)云计算服务平台、百度云计算平台、腾讯云计算平台等等。The server includes, but is not limited to, a single server, a server cluster, a distributed server cluster, a cloud server, and the like. Here, according to the actual design, the server is provided by a cloud server provided by a cloud provider. Wherein, the cloud server includes a public cloud (Public Cloud) server and a private cloud (Private Cloud) server, wherein the public or private cloud server includes Software-as-a-Service (Software as a Service, SaaS ), Platform-as-a-Service (Platform as a Service, PaaS) and Infrastructure-as-a-Service (Infrastructure as a Service, IaaS), etc. The private cloud service end is, for example, Alibaba cloud computing service platform, Amazon (Amazon) cloud computing service platform, Baidu cloud computing platform, Tencent cloud computing platform and so on.
所述服务端可接受来自移动机器人的图像数据和测量数据,从而基于所获取的图像数据和测量数据构建当前物理空间的地图或得到移动机器人在物理坐标系中的位置。The server can accept image data and measurement data from the mobile robot, so as to construct a map of the current physical space or obtain the position of the mobile robot in the physical coordinate system based on the acquired image data and measurement data.
The server is communicatively connected to a mobile robot located in a physical space. The physical space is the physical space in which the mobile robot navigates and moves, and includes but is not limited to any of the following: an indoor/outdoor space, a road space, a flight space, and the like. For example, in some embodiments the mobile robot is a drone and the physical space is a flight space; in other embodiments the mobile robot is a vehicle with an automatic driving function and the physical space is a tunnel road where positioning is unavailable, or a road space where the network signal is weak but navigation is required; in still other embodiments the mobile robot is a cleaning robot and the physical space is an indoor or outdoor space. The mobile robot is equipped with sensing devices, such as a camera device and motion sensing devices, that provide navigation data for autonomous movement.
请参阅图6,其显示为本申请中服务端在一实施方式中的示意图,所述服务端40包括接口装置41、存储装置42、以及处理装置43。其中,存储装置42包含非易失性存储器、存储服务器等。其中,所述非易失性存储器举例为固态硬盘或U盘等。所述存储服务器用于存储所获取的各种用电相关信息和供电相关信息。接口装置41包括网络接口、数据线接口等。其中所述网络接口包括但不限于:以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙等)的网络接口装置等。所述数据线接口包括但不限于:USB接口、RS232、RS485等。所述接口装置与所述控制系统、第三方系统、互联网等数据连接。处理装置43连接接口装置41和存储装置42,其包含:CPU或集成有CPU的芯片、可编程逻辑器件(FPGA)和多核处理器中的至少一种。处理装置43还包括内存、寄存器等用于临时存储数据的存储器。Please refer to FIG. 6 , which is a schematic diagram of the server in an embodiment of the present application. The server 40 includes an interface device 41 , a storage device 42 , and a processing device 43 . The storage device 42 includes a non-volatile memory, a storage server, and the like. Wherein, the non-volatile memory is, for example, a solid state disk or a U disk. The storage server is used for storing various obtained power consumption related information and power supply related information. The interface device 41 includes a network interface, a data line interface, and the like. The network interfaces include but are not limited to: Ethernet network interface devices, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on short-range communication (WiFi, Bluetooth, etc.), and the like. The data line interface includes but is not limited to: USB interface, RS232, RS485, etc. The interface device is data-connected with the control system, a third-party system, the Internet, and the like. The processing device 43 is connected to the interface device 41 and the storage device 42, and includes at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA) and a multi-core processor. The processing device 43 also includes memory, registers, etc., for temporarily storing data.
所述接口装置41用于与位于一物理空间中的移动机器人进行数据通信。The interface device 41 is used for data communication with a mobile robot located in a physical space.
所述存储装置42用以存储至少一个程序。在此,所述存储装置42举例包括设置在服务端的硬盘并储存有所述至少一种程序。The storage device 42 is used to store at least one program. Here, the storage device 42 includes, for example, a hard disk disposed on the server and stores the at least one program.
The processing device 43 is configured to invoke the at least one program to coordinate the interface device and the storage device in executing the map construction method mentioned in the examples of the first aspect, or the positioning method mentioned in the examples of the second aspect. The map construction method is as shown in FIG. 1 and the corresponding description, and the positioning method is as shown in FIG. 5 and the corresponding description; they are not repeated here.
In addition, the map data obtained by the map construction method may be sent to the mobile robot through the interface device, or may be stored on the server so that the map data can be used for positioning or provided to other devices through the server. When the server needs to use the map data for positioning, the storage device can provide the map data to the processing device; when another device needs the map data, the storage device can provide the map data to the interface device so that the interface device can send it to that device.
在此,提供一种移动机器人,请参阅图7,其显示为移动机器人在一实施方式中的模块结构示意图。如图所示,所述移动机器人50包括存储装置54、移动装置53、图像摄取装置51、测距感应装置55和处理装置52。Here, a mobile robot is provided, please refer to FIG. 7 , which is a schematic diagram of a module structure of a mobile robot in an embodiment. As shown in the figure, the mobile robot 50 includes a storage device 54 , a moving device 53 , an image capturing device 51 , a distance sensing device 55 and a processing device 52 .
所述图像摄取装置用于采集图像数据。其中,所述图像摄取装置为用于按照预设像素分辨率提供二维图像的装置。在一些实施例中,所述图像摄取装置包括但不限于:照相机、视频摄像机、集成有光学系统或CCD芯片的摄像模块、以及集成有光学系统和CMOS芯片的摄像模块等。根据实际成像的需求,所述摄像机或视频摄像机可以采用的镜头包括但不限于:标准镜头、远摄镜头、鱼眼镜头、以及广角镜头等。The image pickup device is used to collect image data. Wherein, the image capturing device is a device for providing a two-dimensional image according to a preset pixel resolution. In some embodiments, the image capturing device includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like. According to actual imaging requirements, lenses that can be used by the camera or video camera include, but are not limited to, standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses.
In some embodiments, the image capturing device may be arranged on the mobile robot with its main optical axis oriented between the horizontal plane and the vertical direction toward the ceiling, for example within 0±90° of the vertical-toward-ceiling direction taken as 0°. Taking a commercial cleaning robot as an example of the mobile robot, the image capturing device is mounted on the upper half of the robot with its main optical axis tilted upward at a preset angle, so as to acquire image data within the corresponding field of view.
所述图像摄取装置摄取的图像可以是单张图像、连续图像序列、非连续图像序列、或者视频等中的一种或多种。若图像摄取装置摄取的是图像序列或视频,则可以通过在该序列或视频中提取一或多个图像帧,作为后续进行处理的图像数据。The images captured by the image capturing device may be one or more of a single image, a continuous image sequence, a non-sequential image sequence, or a video. If the image capturing device captures an image sequence or video, one or more image frames can be extracted from the sequence or video as image data for subsequent processing.
所述测距感应装置用以获取测量数据,所述测距感应装置可对移动机器人所在的物理空间中的地标特征相对移动机器人的距离进行测量。The ranging sensing device is used for acquiring measurement data, and the ranging sensing device can measure the distance of the landmark feature in the physical space where the mobile robot is located relative to the mobile robot.
The landmark features are features in the physical space where the mobile robot is located that are easy to distinguish from other objects; for example, a landmark feature may be the corner of a table, the contour of a ceiling lamp, or the line where a wall meets the floor. Examples of the ranging sensing device include devices that provide one-dimensional measurement data, such as laser sensors and ultrasonic sensors, devices that provide two- or three-dimensional measurement data, such as ToF sensors, multi-line lidar, millimeter-wave radar and binocular camera devices, or a combination of both kinds. For example, a laser sensor can determine its distance to a landmark feature from the time difference between emitting and receiving the laser beam; an ultrasonic sensor can determine the distance of the mobile robot to a landmark feature from the vibration signal of its emitted sound wave reflected back by the feature; a binocular camera device can determine the distance of the mobile robot to a landmark feature by triangulation from the images captured by its two cameras; and the infrared projector of a ToF (Time of Flight) sensor projects infrared light outward, the light is reflected by the measured object and received by the receiving module, and the depth information of the illuminated object is computed by recording the time from emission to reception.
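The distance principles listed above reduce to short formulas: a round-trip time-of-flight measurement gives d = v·t/2, and binocular triangulation gives depth Z = f·B/disparity. The Python sketch below states both; the constant values and function names are illustrative only.

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s, for laser and ToF sensors
SPEED_OF_SOUND = 343.0           # m/s, approximate value in air, for ultrasonic sensors

def time_of_flight_distance(round_trip_time_s, speed=SPEED_OF_LIGHT):
    """Distance from a time-of-flight reading; the signal travels out and back."""
    return speed * round_trip_time_s / 2.0

def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth from binocular triangulation: Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px
```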
所述测距感应装置通过探测移动机器人周围环境中的实体而产生相应的测量数据。为防止移动机器人与周围环境中的实体发生如碰撞、缠绕等移动异常情况,或者防止如移动机器人跌落等移动异常情况,测距感应装置装配在移动机器人的体侧。测距感应装置用于探测物理空间中对应图像数据的边缘或图像数据无法覆盖区域的环境数据。以移动机器人是商用清洁机器人为例,测距感应装置配置在商用清洁机器人体侧相距地面10-80cm之间的位置。The range sensing device generates corresponding measurement data by detecting entities in the surrounding environment of the mobile robot. In order to prevent the mobile robot from abnormal movements such as collision and entanglement with the entities in the surrounding environment, or to prevent abnormal movements such as the mobile robot from falling, the ranging sensing device is assembled on the body side of the mobile robot. The ranging sensing device is used to detect the edge of the corresponding image data in the physical space or the environmental data in the area that cannot be covered by the image data. Taking the mobile robot as a commercial cleaning robot as an example, the ranging sensing device is arranged at a position between 10-80 cm from the ground on the side of the commercial cleaning robot.
在一些实施例中,所述测距感应装置装配在所述移动机器人行进方向上的一侧,以便于移动机器人了解其行进方向上的地标特征从而躲避或采取其他行为控制。当然,在某些情况下所述测距感应装置也可以安装在移动机器人的其他位置上,只要能获取到周围物理空间中的相对各地标特征的距离即可。In some embodiments, the ranging sensing device is mounted on one side of the mobile robot in the direction of travel, so that the mobile robot can learn about landmark features in the direction of travel so as to avoid or take other behavioral controls. Of course, in some cases, the ranging sensing device can also be installed at other positions of the mobile robot, as long as the distances relative to each landmark feature in the surrounding physical space can be obtained.
在一些实施例中,安装在移动机器人上的测距感应装置种类可以为一种或多种。例如,在移动机器人上可仅安装有激光传感器,也可同时安装有激光传感器和双目摄像装置,或者同时安装有激光传感器、双目摄像装置和ToF传感器等。并且,同一种传感器的安装数量也可根据需求来配置,以获取不同方向上的测量数据。In some embodiments, there may be one or more types of ranging sensing devices installed on the mobile robot. For example, only a laser sensor may be installed on the mobile robot, a laser sensor and a binocular camera device may be installed at the same time, or a laser sensor, a binocular camera device and a ToF sensor may be installed at the same time. In addition, the installed quantity of the same sensor can also be configured according to requirements to obtain measurement data in different directions.
所述移动装置用于基于导航路线执行移动操作。所述移动装置包括设置在移动机器人底部的轮组等驱动移动机器人移动的部件。The mobile device is configured to perform a movement operation based on a navigation route. The moving device includes components such as wheels arranged at the bottom of the mobile robot to drive the mobile robot to move.
所述存储装置用于存储至少一个程序。存储装置包含非易失性存储器、存储服务器等。其中,所述非易失性存储器举例为固态硬盘或U盘等。The storage device is used to store at least one program. The storage device includes a nonvolatile memory, a storage server, and the like. Wherein, the non-volatile memory is, for example, a solid state disk or a U disk.
The processing device is connected to the storage device, the image capturing device, the ranging sensing device and the moving device, and is configured to invoke and execute the at least one program so as to coordinate the storage device, the image capturing device, the ranging sensing device and the moving device in executing the map construction method mentioned in the examples of the first aspect, or the positioning method mentioned in the examples of the second aspect. The map construction method is as shown in FIG. 1 and the corresponding description, and the positioning method is as shown in FIG. 5 and the corresponding description; they are not repeated here.
此外,在基于所述构建地图的方法得到的地图数据可被保存在存储装置中,以便在定位时使用所述地图数据。当需要使用地图数据以定位时,所述存储装置可将地图数据提供给处理装置。Furthermore, map data obtained based on the method of constructing a map may be stored in a storage device so that the map data can be used in positioning. The storage device may provide the map data to the processing device when the map data needs to be used for positioning.
另外,所述移动机器人还可包括一接口装置,从而可通过接口装置向其他设备提供地图数据或将获取的图像数据和测量数据发送给其他设备。In addition, the mobile robot may further include an interface device, so that map data can be provided to other devices or the acquired image data and measurement data can be sent to other devices through the interface device.
其中,所述接口装置包括网络接口、数据线接口等。其中所述网络接口包括但不限于:以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙等)的网络接口装置等。所述数据线接口包括但不限于:USB接口、RS232、RS485等。所述接口装置与所述控制系统、第三方系统、互联网等数据连接。Wherein, the interface device includes a network interface, a data line interface, and the like. The network interfaces include but are not limited to: Ethernet network interface devices, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on short-range communication (WiFi, Bluetooth, etc.), and the like. The data line interface includes but is not limited to: USB interface, RS232, RS485, etc. The interface device is data-connected with the control system, a third-party system, the Internet, and the like.
The mobile robot of the present application is equipped with both an image capturing device and a ranging sensing device, so that the strengths and weaknesses of the sensors complement each other, and the accuracy and reliability of map construction or positioning are improved through multi-sensor fusion.
The present application further provides a computer-readable and writable storage medium storing a computer program. When the computer program is executed, the device on which the storage medium resides implements at least one of the embodiments described above for the map construction method, such as any of the embodiments corresponding to FIG. 1; or, when the computer program is executed, the device on which the storage medium resides implements at least one of the embodiments described above for the positioning method of the mobile robot, such as any of the embodiments corresponding to FIG. 5.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
In the embodiments provided in the present application, the computer-readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may properly be termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio and microwave are included in the definition of the medium. It should be understood, however, that computer-readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used in this application, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
在一个或多个示例性方面,本申请所述方法的计算机程序所描述的功能可以用硬件、软件、固件或其任意组合的方式来实现。当用软件实现时,可以将这些功能作为一个或多个指令或代码存储或传送到计算机可读介质上。本申请所公开的方法或算法的步骤可以用处理器可执行软件模块来体现,其中处理器可执行软件模块可以位于有形、非临时性计算机可读写存储介质上。有形、非临时性计算机可读写存储介质可以是计算机能够存取的任何可用介质。In one or more exemplary aspects, the functions described by the computer programs of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of the methods or algorithms disclosed herein may be embodied in processor-executable software modules, where the processor-executable software modules may reside on a tangible, non-transitory computer readable and writable storage medium. Tangible, non-transitory computer-readable storage media can be any available media that can be accessed by a computer.
The flowcharts and block diagrams in the accompanying drawings of the present application illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。The above-mentioned embodiments merely illustrate the principles and effects of the present application, but are not intended to limit the present application. Anyone skilled in the art can make modifications or changes to the above embodiments without departing from the spirit and scope of the present application. Therefore, all equivalent modifications or changes made by those with ordinary knowledge in the technical field without departing from the spirit and technical idea disclosed in this application should still be covered by the claims of this application.

Claims (21)

  1. 一种移动机器人构建地图的方法,其特征在于,包括:A method for building a map for a mobile robot, comprising:
    控制图像摄取装置和测距感应装置在不同的位置分别同步获取图像数据和测量数据;Control the image capturing device and the ranging sensing device to obtain image data and measurement data synchronously at different positions;
    Analyze the movement state of the mobile robot in the physical space by using the measurement data acquired at the different positions, so as to obtain landmark position data of the landmark features in the image data mapped into a physical coordinate system, and/or to obtain positioning position data corresponding to the different positions in the physical coordinate system;
    将所得到的各地标位置数据和/或定位位置数据记录在基于所述物理坐标系而构建的地图数据中。The obtained landmark position data and/or positioning position data are recorded in map data constructed based on the physical coordinate system.
  2. 根据权利要求1所述的移动机器人构建地图的方法,其特征在于,还包括根据所述测距感应装置所测量的测量数据确定所述地图数据中的环境边界的步骤。The method for constructing a map for a mobile robot according to claim 1, further comprising the step of determining an environment boundary in the map data according to the measurement data measured by the range-finding sensing device.
  3. The method for constructing a map for a mobile robot according to claim 1, wherein, when the ranging sensing device comprises a binocular camera device, the method further comprises the step of saving the images captured by the binocular camera device in the map data in correspondence with the positioning position data of the mobile robot in the physical coordinate system.
  4. The method for constructing a map for a mobile robot according to claim 1, wherein analyzing the movement state of the mobile robot in the physical space by using the measurement data acquired at different positions, so as to obtain landmark position data of the landmark features in the image data mapped into the physical coordinate system and/or to obtain positioning position data corresponding to the different positions in the physical coordinate system, comprises:
    The degree of coincidence between the measurement data acquired at the different positions is optimized, to obtain, under the coincidence condition, landmark position data of the landmark features in the image data mapped into the physical coordinate system, and/or positioning position data corresponding to the different positions in the physical coordinate system.
  5. The method for constructing a map for a mobile robot according to claim 1, wherein analyzing the movement state of the mobile robot in the physical space by using the measurement data acquired at different positions, so as to obtain landmark position data of the landmark features in the image data mapped into the physical coordinate system and/or to obtain positioning position data corresponding to the different positions in the physical coordinate system, comprises:
    The degree of coincidence between the measurement data acquired at the different positions and the degree of coincidence between the captured image data are jointly optimized, to obtain, under the coincidence condition of the joint optimization, landmark position data of the landmark features in the image data mapped into the physical coordinate system, and/or positioning position data corresponding to the different positions in the physical coordinate system.
  6. The method for constructing a map for a mobile robot according to claim 1, 4 or 5, wherein controlling the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at different positions comprises: controlling the image capturing device and a plurality of ranging sensing devices to synchronously acquire image data and measurement data at the different positions.
  7. The method for constructing a map for a mobile robot according to claim 1, wherein analyzing the movement state of the mobile robot in the physical space by using the measurement data acquired at different positions comprises: extracting landmark features from the measurement data acquired at the different positions, and analyzing the movement state of the mobile robot in the physical space by using the extracted landmark features.
  8. The method for constructing a map for a mobile robot according to claim 1, wherein controlling the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at different positions comprises: controlling the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at the different positions, wherein a synchronization signal is obtained from outside or inside the image capturing device and the ranging sensing device in the mobile robot.
  9. The method for constructing a map for a mobile robot according to claim 8, wherein the synchronization signal being obtained from outside the image capturing device and the ranging sensing device in the mobile robot comprises: the synchronization signal being obtained from an inertial navigation sensor external to the image capturing device and the ranging sensing device in the mobile robot.
  10. 根据权利要求1所述的移动机器人构建地图的方法,其特征在于,所述图像摄取装置的视角范围和测距感应装置的测量范围之间部分重叠或者不重叠。The method for constructing a map for a mobile robot according to claim 1, wherein the viewing angle range of the image capturing device and the measurement range of the ranging sensing device partially overlap or do not overlap.
  11. 一种移动机器人的定位方法,其特征在于,包括:A method for positioning a mobile robot, comprising:
    Control the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at a first position of the mobile robot and at a second position different from the first position, respectively; wherein the first position is mapped to first positioning position data in map data;
    Analyze the movement state of the mobile robot in the physical space by using the measurement data measured at the first position and at the second position, so as to determine, when the mobile robot is at the second position, second positioning position data of the second position in the map data.
  12. The method for positioning a mobile robot according to claim 11, wherein analyzing the movement state of the mobile robot in the physical space by using the measurement data measured at the first position and at the second position, so as to determine, when the mobile robot is at the second position, second positioning position data of the second position in the map data, comprises:
    将在不同位置所获取的测量数据之间的重合度进行优化处理,得到在符合各测量数据之间的重合条件下所述第二位置在所述地图数据中对应的第二定位位置数据。The degree of coincidence between the measurement data acquired at different positions is optimized to obtain second positioning position data corresponding to the second position in the map data under the condition of coincidence between the measurement data.
  13. The method for positioning a mobile robot according to claim 11, wherein analyzing the movement state of the mobile robot in the physical space by using the measurement data measured at the first position and at the second position, so as to determine, when the mobile robot is at the second position, second positioning position data of the second position in the map data, comprises:
    The degree of coincidence between the measurement data acquired at the different positions and the degree of coincidence between the captured image data are jointly optimized, to obtain, under the coincidence condition of the joint optimization, the second positioning position data of the second position in the map data.
  14. The method for positioning a mobile robot according to any one of claims 11 to 13, wherein controlling the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at the first position of the mobile robot and at the second position different from the first position comprises: controlling the image capturing device and a plurality of ranging sensing devices to synchronously acquire image data and measurement data at the first position of the mobile robot and at the second position different from the first position.
  15. The method for positioning a mobile robot according to claim 11, wherein analyzing the movement state of the mobile robot in the physical space by using the measurement data measured at the first position and at the second position comprises: extracting landmark features from the measurement data acquired at the first position and at the second position, and analyzing the movement state of the mobile robot in the physical space by using the extracted landmark features.
  16. The method for positioning a mobile robot according to claim 11, wherein controlling the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at the first position of the mobile robot and at the second position different from the first position comprises: controlling the image capturing device and the ranging sensing device to synchronously acquire image data and measurement data at the first position of the mobile robot and at the second position different from the first position, wherein a synchronization signal is obtained from outside or inside the image capturing device and the ranging sensing device in the mobile robot.
  17. The method for positioning a mobile robot according to claim 16, wherein the synchronization signal being obtained from outside the image capturing device and the ranging sensing device in the mobile robot comprises: the synchronization signal being obtained from an inertial navigation sensor external to the image capturing device and the ranging sensing device in the mobile robot.
  18. 根据权利要求11所述的移动机器人的定位方法,其特征在于,所述图像摄取装置的视角范围和测距感应装置的测量范围之间部分重叠或者不重叠。The positioning method of a mobile robot according to claim 11, wherein the viewing angle range of the image capturing device and the measurement range of the distance measuring sensing device partially overlap or do not overlap.
  19. A server, comprising:
    an interface device for data communication with a mobile robot;
    a storage device storing at least one program;
    a processing device, connected to the storage device and the interface device, configured to execute the at least one program so as to coordinate the storage device and the interface device to carry out the map construction method for a mobile robot according to any one of claims 1 to 10, or the positioning method for a mobile robot according to any one of claims 11 to 18.
  20. A mobile robot, comprising:
    an image capturing device for capturing image data;
    a ranging sensing device for acquiring measurement data;
    a moving device for performing movement operations;
    a storage device for storing at least one program;
    a processing device, connected to the moving device, the image capturing device, the ranging sensing device and the storage device, configured to execute the at least one program so as to carry out the map construction method for a mobile robot according to any one of claims 1 to 10, or the positioning method for a mobile robot according to any one of claims 11 to 18.
  21. A computer-readable storage medium storing at least one computer program which, when run by a processor, controls the device on which the storage medium resides to carry out the map construction method for a mobile robot according to any one of claims 1 to 10, or the positioning method for a mobile robot according to any one of claims 11 to 18.
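
The joint-optimization step recited at the end of claim 13 can be pictured as a single least-squares problem in which the coincidence of the range measurements and the coincidence of matched image features acquired at the two positions are penalised together. The sketch below is only illustrative and is not taken from the application: it assumes a simplified 2D model, assumes the visual features have already been matched and expressed in map coordinates, and uses hypothetical helper names (transform, scan_residuals, feature_residuals) together with SciPy's least_squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

def transform(points, pose):
    """Apply a 2D rigid transform pose=(x, y, theta) to an N x 2 point array."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T + np.array([x, y])

def inverse_transform(points, pose):
    """Express points given in the first frame in the frame of a robot at `pose`."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return (points - np.array([x, y])) @ rot

def scan_residuals(pose, scan_a, scan_b):
    """Coincidence of the range measurements: distance from every point of the
    second scan, moved by `pose`, to its nearest neighbour in the first scan."""
    moved = transform(scan_b, pose)
    dists = np.linalg.norm(scan_a[:, None, :] - moved[None, :, :], axis=2)
    return dists.min(axis=0)

def feature_residuals(pose, feats_a, feats_b):
    """Coincidence of the image data: matched visual features (assumed already
    triangulated into 2D map coordinates) should land on each other."""
    return (transform(feats_b, pose) - feats_a).ravel()

def joint_residuals(pose, scan_a, scan_b, feats_a, feats_b, w_img=2.0):
    """Stack both coincidence terms so a single pose is optimised jointly."""
    return np.concatenate([
        scan_residuals(pose, scan_a, scan_b),
        w_img * feature_residuals(pose, feats_a, feats_b),
    ])

# Synthetic data: the second position is offset by (0.5, 0.2) and rotated 0.1 rad.
rng = np.random.default_rng(0)
scan_a = rng.uniform(-3.0, 3.0, size=(80, 2))
feats_a = rng.uniform(-3.0, 3.0, size=(10, 2))
true_pose = np.array([0.5, 0.2, 0.1])
scan_b = inverse_transform(scan_a, true_pose) + rng.normal(0.0, 0.01, scan_a.shape)
feats_b = inverse_transform(feats_a, true_pose) + rng.normal(0.0, 0.01, feats_a.shape)

result = least_squares(joint_residuals, x0=np.zeros(3),
                       args=(scan_a, scan_b, feats_a, feats_b))
print("jointly optimised relative pose:", result.x)  # ~ (0.5, 0.2, 0.1)
```

The weight w_img is one simple way to express how strongly the image-data coincidence should influence the jointly optimized positioning result relative to the measurement-data coincidence.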
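Claim 15 analyses the robot's movement state from landmark features extracted from the measurement data at the two positions. The following sketch is likewise illustrative rather than the claimed implementation: cluster centroids stand in for real landmark features, data association between the two positions is assumed to be already solved, and the relative motion is recovered with a closed-form 2D rigid alignment (Kabsch/SVD).

```python
import numpy as np

def extract_landmarks(scan, cell=0.5):
    """Toy landmark extraction from range measurements: group points on a coarse
    grid and keep one centroid per cell.  A real system would extract corners,
    line intersections or other stable landmark features instead."""
    cells = {}
    for key, point in zip(map(tuple, np.floor(scan / cell).astype(int)), scan):
        cells.setdefault(key, []).append(point)
    return np.array([np.mean(pts, axis=0) for pts in cells.values()])

def relative_motion(marks_a, marks_b):
    """Closed-form 2D rigid alignment (Kabsch/SVD) between matched landmark sets:
    returns the rotation angle and translation mapping the second-position
    landmarks onto the first-position ones."""
    ca, cb = marks_a.mean(axis=0), marks_b.mean(axis=0)
    cov = (marks_b - cb).T @ (marks_a - ca)
    u, _, vt = np.linalg.svd(cov)
    rot = vt.T @ u.T
    if np.linalg.det(rot) < 0:      # guard against a reflection solution
        vt[-1] *= -1.0
        rot = vt.T @ u.T
    t = ca - rot @ cb
    return np.arctan2(rot[1, 0], rot[0, 0]), t

# Synthetic check: landmarks seen from a robot that moved (0.8, -0.3) and rotated 0.2 rad.
rng = np.random.default_rng(1)
marks_a = rng.uniform(-4.0, 4.0, size=(6, 2))
theta_true, t_true = 0.2, np.array([0.8, -0.3])
c, s = np.cos(theta_true), np.sin(theta_true)
rot_true = np.array([[c, -s], [s, c]])
marks_b = (marks_a - t_true) @ rot_true      # same landmarks, second-position frame

theta_est, t_est = relative_motion(marks_a, marks_b)
print("movement state estimate:", theta_est, t_est)   # ~ 0.2, (0.8, -0.3)
```

In practice the extracted landmark features would first be matched, for example by descriptors or by proximity to landmark features already stored in the map data, before the alignment step is applied.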
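Claims 14, 16 and 17 concern acquiring the image data and measurement data synchronously, with the synchronization signal coming from inside or outside the two devices (for example from an inertial navigation sensor). A hedged sketch of one common realisation, again not taken from the application: both streams are stamped against the shared signal's clock and paired by nearest timestamp.

```python
import numpy as np

def pair_by_timestamp(image_stamps, scan_stamps, tol=0.01):
    """Pair every image with the range scan whose timestamp (taken from the
    shared synchronisation source, e.g. an inertial sensor clock) is closest,
    dropping pairs whose offset exceeds `tol` seconds."""
    scan_stamps = np.asarray(scan_stamps)
    pairs = []
    for i, t_img in enumerate(image_stamps):
        j = int(np.argmin(np.abs(scan_stamps - t_img)))
        if abs(scan_stamps[j] - t_img) <= tol:
            pairs.append((i, j))
    return pairs

# A 30 Hz camera and a 10 Hz range sensor stamped against the same clock:
image_stamps = [k / 30.0 for k in range(30)]
scan_stamps = [k / 10.0 for k in range(10)]
print(pair_by_timestamp(image_stamps, scan_stamps))
# every third image pairs with a scan; the rest fall outside the tolerance
```

The tolerance tol reflects how closely in time an image and a measurement must coincide for the pair to count as synchronously acquired at the same position.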
PCT/CN2020/107863 2020-08-07 2020-08-07 Positioning method and map construction method for mobile robot, and mobile robot WO2022027611A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/107863 WO2022027611A1 (en) 2020-08-07 2020-08-07 Positioning method and map construction method for mobile robot, and mobile robot
CN202080001821.0A CN112041634A (en) 2020-08-07 2020-08-07 Mobile robot positioning method, map building method and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/107863 WO2022027611A1 (en) 2020-08-07 2020-08-07 Positioning method and map construction method for mobile robot, and mobile robot

Publications (1)

Publication Number Publication Date
WO2022027611A1 (en)

Family

ID=73572857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/107863 WO2022027611A1 (en) 2020-08-07 2020-08-07 Positioning method and map construction method for mobile robot, and mobile robot

Country Status (2)

Country Link
CN (1) CN112041634A (en)
WO (1) WO2022027611A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113391318B (en) * 2021-06-10 2022-05-17 上海大学 Mobile robot positioning method and system
CN113960999B (en) * 2021-07-30 2024-05-07 珠海一微半导体股份有限公司 Repositioning method, repositioning system and repositioning chip for mobile robot

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130158865A1 (en) * 2011-12-15 2013-06-20 Electronics And Telecommunications Research Institute Method and apparatus for estimating position of moving object
CN107167148A (en) * 2017-05-24 2017-09-15 安科机器人有限公司 Synchronous superposition method and apparatus
CN109643127A (en) * 2018-11-19 2019-04-16 珊口(深圳)智能科技有限公司 Construct map, positioning, navigation, control method and system, mobile robot
CN109634279A (en) * 2018-12-17 2019-04-16 武汉科技大学 Object positioning method based on laser radar and monocular vision
CN110211228A (en) * 2019-04-30 2019-09-06 北京云迹科技有限公司 For building the data processing method and device of figure
JP2019185213A (en) * 2018-04-04 2019-10-24 国立大学法人九州工業大学 Autonomous movement robot and control method thereof
CN110706248A (en) * 2019-08-20 2020-01-17 广东工业大学 Visual perception mapping algorithm based on SLAM and mobile robot
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion
CN111427061A (en) * 2020-06-15 2020-07-17 北京云迹科技有限公司 Robot mapping method and device, robot and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538781B (en) * 2011-12-14 2014-12-17 浙江大学 Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method
CA2950791C (en) * 2013-08-19 2019-04-16 State Grid Corporation Of China Binocular visual navigation system and method based on power robot
KR102135560B1 (en) * 2018-05-16 2020-07-20 주식회사 유진로봇 Moving Object and Hybrid Sensor with Camera and Lidar
CN108981672A (en) * 2018-07-19 2018-12-11 华南师范大学 Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN109431381B (en) * 2018-10-29 2022-06-07 北京石头创新科技有限公司 Robot positioning method and device, electronic device and storage medium
CN109752725A (en) * 2019-01-14 2019-05-14 天合光能股份有限公司 Low-speed commercial robot, positioning and navigation method and positioning and navigation system
CN111427360B (en) * 2020-04-20 2023-05-05 珠海一微半导体股份有限公司 Map construction method based on landmark positioning, robot and robot navigation system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314700A (en) * 2022-10-13 2022-11-08 潍坊歌尔电子有限公司 Position detection method for control device, positioning system, and readable storage medium
CN115314700B (en) * 2022-10-13 2023-01-24 潍坊歌尔电子有限公司 Position detection method for control device, positioning system, and readable storage medium

Also Published As

Publication number Publication date
CN112041634A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
US11927450B2 (en) Methods for finding the perimeter of a place using observed coordinates
US11204247B2 (en) Method for updating a map and mobile robot
WO2022027611A1 (en) Positioning method and map construction method for mobile robot, and mobile robot
US10705535B2 (en) Systems and methods for performing simultaneous localization and mapping using machine vision systems
US9329598B2 (en) Simultaneous localization and mapping for a mobile robot
CN112867424B (en) Navigation and cleaning area dividing method and system, and moving and cleaning robot
US9400501B2 (en) Simultaneous localization and mapping for a mobile robot
US20190122386A1 (en) Lidar to camera calibration for generating high definition maps
CN102622762B (en) Real-time camera tracking using depth maps
CN110275538A (en) Intelligent cruise vehicle navigation method and system
KR102169283B1 (en) Method of updating map using autonomous robot and system implementing the same
WO2021146862A1 (en) Indoor positioning method for mobile device, mobile device and control system
WO2022160790A1 (en) Three-dimensional map construction method and apparatus
CN111220148A (en) Mobile robot positioning method, system and device and mobile robot
KR101319525B1 (en) System for providing location information of target using mobile robot
Pirker et al. GPSlam: Marrying Sparse Geometric and Dense Probabilistic Visual Mapping.
CN111609853A (en) Three-dimensional map construction method, sweeping robot and electronic equipment
CN113607166B (en) Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
US20240069203A1 (en) Global optimization methods for mobile coordinate scanners
Runceanu et al. Indoor point cloud segmentation for automatic object interpretation
Biström Comparative analysis of properties of LiDAR-based point clouds versus camera-based point clouds for 3D reconstruction using SLAM algorithms
US20220329737A1 (en) 3d polygon scanner
Karam et al. Microdrone-Based Indoor Mapping with Graph SLAM. Drones 2022, 6, 352
Lin et al. An integrated 3D mapping approach based on RGB-D for a multi-robot system
WANG 2D Mapping Solutionsfor Low Cost Mobile Robot

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20948884; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 20948884; Country of ref document: EP; Kind code of ref document: A1)