CN112041634A - Mobile robot positioning method, map building method and mobile robot


Info

Publication number: CN112041634A
Application number: CN202080001821.0A
Authority: CN (China)
Prior art keywords: data, mobile robot, measurement data, landmark, image
Other languages: Chinese (zh)
Inventor: not disclosed at the inventor's request
Current and original assignee: Suzhou Shankou Intelligent Technology Co Ltd
Legal status: Pending
Application filed by Suzhou Shankou Intelligent Technology Co Ltd
Publication of CN112041634A

Classifications

    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C 21/165: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C 21/18: Stabilised platforms, e.g. by gyroscope
    • G01S 13/42: Radar systems; simultaneous measurement of distance and other co-ordinates
    • G01S 13/93: Radar or analogous systems specially adapted for anti-collision purposes
    • G01S 15/42: Sonar systems; simultaneous measurement of distance and other co-ordinates
    • G01S 15/46: Sonar systems; indirect determination of position data
    • G01S 15/93: Sonar systems specially adapted for anti-collision purposes
    • G01S 17/42: Lidar systems; simultaneous measurement of distance and other co-ordinates
    • G01S 17/48: Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G01S 17/93: Lidar systems specially adapted for anti-collision purposes
    • G01S 2015/465: Indirect determination of position data by trilateration, i.e. two transducers separately determine the distance to a target and, with the known baseline length between the transducers, the position of the target is determined

Abstract

The application discloses a positioning method of a mobile robot, a map construction method and a mobile robot. The method includes: controlling the image pickup device and the ranging sensing device to synchronously acquire image data and measurement data at least at two positions, respectively; analyzing the moving state of the mobile robot in a physical space by using at least the measurement data acquired at the different positions, to obtain at least landmark position data in which the landmark features in each image data are mapped to a physical coordinate system, and/or positioning position data corresponding to the at least two positions in the physical coordinate system; and recording the resulting landmark position data and/or positioning position data in map data constructed based on the physical coordinate system. By fusing multiple sensors, the method balances the strengths and weaknesses of each sensor and improves the precision and reliability of mapping.

Description

Mobile robot positioning method, map building method and mobile robot
Technical Field
The application relates to the technical field of positioning, in particular to a positioning method of a mobile robot, a map building method and the mobile robot.
Background
A mobile robot is a machine that automatically executes specific work: it can accept human commands, run pre-arranged programs, and act according to policies formulated with artificial intelligence technology. With its autonomous movement capability, the mobile robot is applied in large-scale indoor settings such as airports, railway stations, warehouses, and hotels. For example, commercial cleaning robots, carrying/delivery robots, and guest-greeting robots use the autonomous movement function to realize functions such as cleaning, transport, and guest greeting.
Affected by external factors such as the complexity of an indoor site, and internal factors such as the accuracy of its own hardware, a mobile robot cannot reliably achieve accurate positioning by depending on the environment information provided by a single detection device. For example, in an open area where obstacles are far away, the mobile robot cannot reliably measure the surrounding environment by laser measurement alone, and thus cannot determine an accurate positional relationship between itself and the obstacles. As another example, in a large area whose ceiling environment is highly self-similar, the mobile robot cannot reliably determine which area it is located in by using only camera-acquired landmark data.
Disclosure of Invention
In view of the above-mentioned drawbacks of the related art, it is an object of the present application to provide a method that overcomes the positioning and mapping accuracy problems of the related art.
To achieve the above and other related objects, a first aspect of the disclosure provides a method for a mobile robot to construct a map, including: controlling the image pickup device and the ranging sensing device to respectively and synchronously acquire image data and measurement data at different positions; analyzing the moving state of the mobile robot in a physical space by using the measurement data acquired at different positions to obtain landmark position data of which the landmark features in each image data are mapped to a physical coordinate system and/or obtain corresponding positioning position data of the different positions in the physical coordinate system; the resulting respective landmark position data and/or positioning position data are recorded in map data constructed based on the physical coordinate system.
A second aspect of the present disclosure provides a positioning method of a mobile robot, including: controlling an image pickup device and a ranging sensing device to respectively and synchronously acquire image data and measurement data at a first position and a second position different from the first position of the mobile robot; wherein the first location is mapped to first location data in the map data; and analyzing the moving state of the mobile robot in the physical space by using the measurement data respectively measured at the first position and the second position to determine second positioning position data of the second position in the map data when the mobile robot is at the second position.
A third aspect of the disclosure provides a server, including: an interface device for performing data communication with the mobile robot; a storage device storing at least one program; and the processing device is connected with the storage device and the interface device and used for executing the at least one program so as to coordinate the storage device and the interface device to execute and realize the method for constructing the map by the mobile robot according to the first aspect of the disclosure or the positioning method for the mobile robot according to the second aspect of the disclosure.
A fourth aspect of the present disclosure provides a mobile robot including: the image shooting device is used for collecting image data; the distance measurement sensing device is used for acquiring measurement data; a moving means for performing a moving operation; a storage device to store at least one program; and the processing device is connected with the mobile device, the image shooting device, the ranging sensing device and the storage device and is used for executing the at least one program so as to execute the method for constructing the map by the mobile robot in the first aspect of the disclosure or the positioning method for the mobile robot in the second aspect of the disclosure.
A fifth aspect of the present disclosure provides a computer-readable storage medium storing at least one computer program, which when executed by a processor controls an apparatus on which the storage medium is located to perform a method for constructing a map of a mobile robot according to any one of the first aspect of the present disclosure or a method for positioning a mobile robot according to any one of the second aspect of the present disclosure.
In summary, the positioning method of the mobile robot, the map building method, the server, the mobile robot and the storage medium of the present application combine the advantage that multiple sensors can obtain richer information about the physical space: they exploit the positioning capability of the image capturing device and the measuring capability of the ranging sensing device, and jointly optimize the errors across the multiple sensors, thereby providing a new method and structure for the map building and positioning processes and improving the accuracy and reliability of mapping and positioning.
Other aspects and advantages of the present application will be readily apparent to those skilled in the art from the following detailed description. Only exemplary embodiments of the present application are shown and described in the following detailed description. As those skilled in the art will recognize, the present disclosure enables changes to be made to the specific embodiments disclosed without departing from the spirit and scope of the application. Accordingly, the drawings and descriptions in the specification of the present application are illustrative only and not limiting.
Drawings
The specific features of the invention to which this application relates are set forth in the appended claims. The features and advantages of the invention to which this application relates will be better understood by reference to the exemplary embodiments described in detail below and the accompanying drawings. The brief description of the drawings is as follows:
fig. 1 is a schematic diagram of a method for constructing a map according to an embodiment of the present disclosure.
Fig. 2a and 2b are schematic diagrams of a map constructed in the present application according to an embodiment.
Fig. 3 is a schematic diagram of measurement data acquired by the mobile robot at two positions in one embodiment.
Fig. 4 is a schematic diagram of measurement data acquired by the mobile robot at two positions in another embodiment.
Fig. 5 is a schematic diagram of a positioning method of a mobile robot according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a server according to an embodiment of the present application.
Fig. 7 is a schematic block diagram of a mobile robot according to an embodiment.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that changes in the module or unit composition, electrical, and operation may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Although the terms first, second, etc. are used herein in some instances to describe various positioning position data, information, or parameters, these positioning position data or parameters should not be limited by these terms. These terms are only used to distinguish one set of positioning position data or parameters from another. For example, the first positioning position data may be referred to as second positioning position data, and similarly, the second positioning position data may be referred to as first positioning position data, without departing from the scope of the various described embodiments. The first positioning position data and the second positioning position data both describe positioning position data, but they are not the same positioning position data unless the context clearly indicates otherwise. Depending on context, for example, the word "if" as used herein may be interpreted as "when" or "upon".
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
The current positioning technologies for mobile robots mainly include: positioning based on dead reckoning (DR), positioning based on map matching, and positioning based on laser SLAM or visual SLAM (Simultaneous Localization and Mapping), among others.
The dead reckoning-based positioning method uses an Inertial Measurement Unit (IMU) and an odometer installed in the robot wheel set to measure data such as the acceleration and angular velocity of the mobile robot during movement, and accumulates the increments of these data to deduce the position of the mobile robot at a given time relative to the starting time, thereby realizing positioning. However, this method suffers from accumulated error: for example, when the mobile robot works for a long time over a large site, the distance it actually moves differs from what each sensor senses because the wheel set passes over different floor materials (carpet, tile, wood board, etc.), and the accumulated error grows over time. Without additional positioning data to help correct it, the positioning of the mobile robot will eventually fail.
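For illustration only, a minimal dead-reckoning sketch in Python; the increment format and names are illustrative assumptions, not part of the original disclosure, and the sketch also shows why uncorrected per-step errors accumulate:

```python
import math

def dead_reckon(pose, odometry_increments):
    """Accumulate (distance, heading-change) increments into a 2D pose.

    pose: (x, y, theta) in meters/radians.
    odometry_increments: iterable of (delta_dist, delta_theta) tuples,
    e.g. derived from wheel-encoder ticks and gyroscope readings.
    """
    x, y, theta = pose
    for delta_dist, delta_theta in odometry_increments:
        theta += delta_theta
        x += delta_dist * math.cos(theta)
        y += delta_dist * math.sin(theta)
        # Any per-step error (e.g. wheel slip on carpet vs. tile) is
        # accumulated here too, which is why drift grows without correction.
    return (x, y, theta)
```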
Secondly, an example of the positioning method based on map matching includes using a sensor on the mobile robot to detect the surrounding environment, building a local map, and matching the local map against a complete, pre-stored map to obtain the position of the mobile robot in the whole environment at the current moment. However, this approach is constrained by the environment layout and is only applicable to relatively simple environments that change little.
Furthermore, examples of the positioning method based on laser SLAM or visual SLAM include detecting the surrounding environment with a laser detector or camera on the mobile robot, measuring distance and orientation data of landmark features with the laser in combination with laser SLAM or visual SLAM technology, and constructing an environment map from the measured landmark features to determine the movement track and pose of the mobile robot. This positioning technology is difficult to apply directly in workplaces characterized by a large movement range, a complex surrounding environment, and a highly similar overhead decoration environment, for reasons that generally include the following:
On one hand, home indoor spaces are small and the environments at different positions have low similarity to one another, so landmark features in regions close to the ceiling, such as the ceiling itself, walls, and wardrobes, are easily converted into data through imaging and distance measurement so as to realize positioning. A workplace has a much wider spatial range, so if this positioning technology is used, the objects within the viewing-angle range of the image capturing device are far away from the mobile robot while the ranging sensor cannot provide accurate distance data, so that the map constructed by the mobile robot is inconsistent with the real environment.
On the other hand, due to the problems of high environmental complexity such as flowability of objects around the mobile robot or blocking of landmark features, the mobile robot is not enough to rely on the configured image capturing device to provide a stable image data source of the landmark features for robot positioning, so that the situations that the actual path of the mobile robot is inconsistent with the planned path during autonomous movement due to the failure of continuous detection of the matchable landmark features may occur.
In view of this, in an embodiment of the first aspect of the present application, a method for constructing a map is provided, so as to construct a map of a physical space in which a mobile robot is located by jointly optimizing errors through data provided by an image capturing device and a ranging sensing device, respectively.
The method for constructing the map can be executed by a processor in the mobile robot, or by a server that communicates with the mobile robot.
Wherein the mobile robot refers to an autonomous mobile device with map building capability, including but not limited to: one or more of an unmanned aerial vehicle, an industrial robot, a home-companion mobile device, a medical mobile device, a home cleaning robot, a commercial cleaning robot, a smart vehicle, a patrol mobile device, and the like.
The physical space refers to the actual three-dimensional space in which the mobile robot is located, which can be described by abstract data constructed in a spatial coordinate system. The physical space includes, but is not limited to, a home residence, public places (e.g., an office, a mall, a hospital, an underground parking lot, and a bank), and the like. For a mobile robot, the physical space generally refers to an indoor space, i.e., a space with boundaries in the length, width, and height directions. It particularly includes physical spaces characterized by a large spatial range and high scene repetitiveness, such as shopping malls and waiting halls.
The mobile robot is provided with an image pickup device, wherein the image pickup device is a device for providing a two-dimensional image according to a preset pixel resolution. In some embodiments, the image capture device includes, but is not limited to: cameras, video cameras, camera modules integrated with optical systems or CCD chips, and camera modules integrated with optical systems and CMOS chips, and the like. According to the requirement of actual imaging, the lens that can be adopted by the camera or video camera includes but is not limited to: standard lenses, telephoto lenses, fisheye lenses, wide-angle lenses, and the like.
In some embodiments, the image capturing device may be disposed on the mobile robot with its main optical axis oriented between the horizontal plane and the vertical (ceiling) direction, for example at an angle within the range of 0° to 90° from the vertical ceiling direction. Taking a commercial cleaning robot as an example, the image capturing device is mounted on the upper half of the robot with its main optical axis inclined upward at a preset angle, so as to acquire image data within the corresponding viewing-angle range.
The image captured by the image capturing device can be one or more of a single image, a continuous image sequence, a discontinuous image sequence, a video, and the like. If the image capturing device captures an image sequence or a video, one or more image frames can be extracted from the sequence or the video as image data for subsequent processing.
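For illustration, frame extraction of this kind might be sketched with OpenCV as below; the library choice and sampling interval are assumptions, not part of the patent:

```python
import cv2

def extract_frames(video_path, every_n=10):
    """Pull every n-th frame from a video to use as image data."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read failure
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```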
Wherein the image data reflects the perception situation of the image pickup device to the physical space where the mobile robot is located. In some cases, an image acquired by the image pickup apparatus may be directly taken as image data; in other cases, the data obtained by processing the image acquired by the image pickup device may be used. For example, when the image acquired by the image pickup apparatus is a single image, the single images taken at the first position and the second position may be directly taken as image data. For another example, when the image acquired by the image capturing apparatus is an image sequence or a video, the image frame may be extracted as image data from the image sequence or the video.
In some embodiments, the mobile robot stores the captured image in a local storage medium. The storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB disk, a removable hard disk, or any other accessible medium that can be used to store the desired program code in the form of instructions or data structures.
In some embodiments, the mobile robot transmits the captured image to an external device for storage, wherein the communication connection comprises a wired or wireless communication connection. The external device may be a server located in a network, where the server includes but is not limited to one or more of a single server, a server cluster, a distributed server cluster, a cloud server, and the like. In a specific implementation, the cloud server may be a cloud computing platform provided by a cloud computing provider. Based on the architecture of the cloud server, the types of the cloud server include but are not limited to: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). Based on the nature of the cloud server, the types of the cloud server include, but are not limited to: public Cloud (Public Cloud) server, Private Cloud (Private Cloud) server, Hybrid Cloud (Hybrid Cloud) server, and the like.
In some embodiments, the public cloud server is, for example, Amazon's Elastic Compute Cloud (Amazon EC2), IBM's Blue Cloud, Google's App Engine, or the Windows Azure service platform; the private cloud server is, for example, the Alibaba Cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and the like.
The mobile robot is also provided with a distance measurement sensing device, and the distance measurement sensing device can measure the distance of the landmark features in the physical space where the mobile robot is located relative to the mobile robot.
The landmark feature refers to a feature that is easily distinguished from other objects in a physical space where the mobile robot is located, and for example, the landmark feature may be a table corner, an outline feature of a ceiling lamp on a ceiling, a connecting line between a wall and a floor, and the like.
The ranging sensing device includes, for example, a device that provides one-dimensional measurement data, such as a laser sensor or an ultrasonic sensor; a device that provides two-dimensional or three-dimensional measurement data, such as a ToF sensor, a multi-line laser radar, a millimeter-wave radar, or a binocular imaging device; or both kinds together. For example, the laser sensor may determine its range relative to a landmark feature from the difference between the time it emits a laser beam and the time it receives the reflection; the ultrasonic sensor may determine the distance of the mobile robot relative to a landmark feature from the vibration signal of the emitted sound wave reflected back by the landmark feature; the binocular camera device can determine the distance of the mobile robot relative to landmark features by the principle of triangulation from the images captured by its two cameras; and the infrared light projector of a ToF (Time of Flight) sensor projects infrared light outward, the light is reflected on encountering the measured object and received by a receiving module, and the depth information of the illuminated object is calculated by recording the time from emission to reception of the infrared light.
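As a concrete illustration of the time-of-flight principle just described, a minimal sketch (function and variable names are illustrative, not from the patent):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(t_emit, t_receive):
    """Distance from the round-trip time of an emitted light pulse.

    The pulse travels to the object and back, so the one-way
    distance is half the round-trip path.
    """
    round_trip = t_receive - t_emit
    return SPEED_OF_LIGHT * round_trip / 2.0

# e.g. a 20 ns round trip corresponds to roughly 3 m
print(tof_distance(0.0, 20e-9))  # ~2.998 m
```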
The ranging sensing device generates corresponding measurement data by detecting entities in the environment surrounding the mobile robot. To prevent the mobile robot from abnormal movement such as collision with, or entanglement by, entities in the surrounding environment, or abnormal movement such as falling, the ranging sensing device is mounted on the body side of the mobile robot. The ranging sensing device detects environment data at the edges of the area corresponding to the image data in physical space, or in areas that the image data cannot cover. Taking a commercial cleaning robot as an example, the ranging sensing device may include, but is not limited to, a device disposed on the robot's side at a height of 10 to 80 cm above the ground.
In some embodiments, the ranging sensing device is mounted on the side of the mobile robot facing the direction of travel, so that the mobile robot can learn about landmark features in its direction of travel and avoid them or apply other behavior controls. Of course, in some cases, the ranging sensing device may be installed at other positions on the mobile robot, as long as the distance to each landmark feature in the surrounding physical space can be obtained.
In some embodiments, the range finding sensing device mounted on the mobile robot may be one or more in kind. For example, only the laser sensor may be mounted on the mobile robot, the laser sensor and the binocular imaging device may be mounted at the same time, or the laser sensor, the binocular imaging device, the ToF sensor, and the like may be mounted at the same time. And the installation quantity of the same sensor can be configured according to requirements so as to acquire measurement data in different directions.
The measurement data reflects how the ranging sensing device perceives the physical space in which the mobile robot is located. In other words, the measurement data reflects physical quantities with physical meaning that the ranging sensing device can detect in that space, e.g., the relative position of an entity in physical space with respect to the mobile robot. Here, physical meaning means that the measurement data can provide physical data in basic units such as units of time, length, or angle.
The measurement data at least reflects the position relation between the mobile robot and the entity in the surrounding environment, wherein the measurement data comprises the distance and the relative orientation of each landmark feature observed in the physical space of the mobile robot relative to the mobile robot. In some cases, the data acquired by the ranging sensing device may be directly used as measurement data; in other cases, the data obtained by the ranging sensing device may be processed to obtain the measurement data. For example, when the ranging sensing device is a laser sensor, the distance detected by the laser sensor relative to each landmark feature can be directly used as measurement data; for another example, when the distance measurement sensing device is a binocular camera device, the distance of each landmark feature is obtained and used as measurement data after processing the image acquired by the binocular camera device.
Typically, because there are multiple landmark features in physical space, the measurement data contains distances and relative orientations with respect to multiple landmark features, thereby forming depth point cloud data.
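For illustration, a minimal sketch converting such range/bearing measurements into a point cloud in the robot-centred frame (the tuple layout is an assumption):

```python
import math

def to_point_cloud(measurements):
    """Convert (distance, bearing) pairs into 2D points in the
    robot-centred frame; bearing 0 rad = robot's forward axis."""
    return [(d * math.cos(a), d * math.sin(a)) for d, a in measurements]

# Three landmark returns: 2 m ahead, 1.5 m to the left, 3 m behind-right
cloud = to_point_cloud([(2.0, 0.0), (1.5, math.pi / 2), (3.0, -2.5)])
```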
In some examples, the measurement data also reflects the mobile robot's perception of entities in its surroundings. For example, the ranging sensing device is a laser sensor that emits a light beam toward physical surfaces in the surrounding environment, detects the set of measurement points that reflect the beam, and thereby acquires the corresponding measurement data. The acquired measurement data is reconstructed in 3D by means of predetermined physical parameters to obtain the relative positions.
It should be noted that although the image capturing device and the ranging sensing device are both installed on the mobile robot, the viewing-angle range of the image capturing device and the measurement range of the ranging sensing device may not overlap, or may at most partially overlap. In other words, the execution of the steps in the present application does not necessarily depend on a landmark feature detected jointly by the image capturing device and the ranging sensing device; even if no landmark feature is commonly detected by both, the map construction process of the present application is unaffected.
In an exemplary embodiment, please refer to fig. 1, which is a schematic diagram of a map construction method according to an embodiment of the present application. As shown, in step S110, the image capturing device and the range finding sensing device are controlled to synchronously acquire image data and measurement data at least two positions, respectively.
Here, the mobile robot passes through a plurality of positions during the movement in the current physical space. In order to accurately correlate the image data and the measurement data, the image pickup device and the range finding sensing device of the mobile robot acquire the image data and the measurement data at respective positions, respectively.
Taking the working environment of a mobile robot in an airport, a supermarket, or the like as an example: because the physical space in which the mobile robot is located contains moving objects, such as walking passengers with their luggage or walking customers with their shopping carts, as well as fixed objects such as fences, shelves, and standing people, the measurement data measured at the at least two positions includes both fixed objects and moving objects. The image data acquired at the same two positions at least includes image data of entities in physical space vertically above the detection range of the ranging sensing device, and possibly image data of the parts corresponding to the measurement data.
To improve data accuracy when fusing data from two independent devices, the image data and measurement data belonging to the same position are acquired synchronously. The synchronous acquisition modes include: the mobile robot stops at each of several positions it passes, acquires image data and measurement data, and then continues moving; or, using a synchronization signal, the image data and measurement data are acquired synchronously without stopping.
In a possible embodiment, the synchronous acquisition may be based on an external synchronous signal, and the image data and the measurement data corresponding to the position are synchronously acquired by the image capturing device and the ranging sensing device; the image data and the measurement data corresponding to the position can be synchronously acquired by the image pickup device and the ranging sensing device based on the synchronous signal in the image pickup device or the ranging sensing device. For example, in the working process of the mobile robot in the current physical space, the control device in the mobile robot sends out a synchronization signal to the image capturing device and the ranging sensing device at a certain time interval, so that the image capturing device and the ranging sensing device respectively acquire image data and measurement data. For another example, the image capturing device and the distance measurement sensing device are provided with clock modules and preset with the same clock signal mechanism to synchronously send out signals, and when the image capturing device and the distance measurement sensing device respectively receive respective synchronous signals, the steps of obtaining image data and measurement data are respectively executed.
In some cases, the external synchronization signal may also be generated based on a signal in an inertial navigation sensor of the mobile robot. The inertial navigation sensor (IMU) is used for acquiring inertial navigation data of the mobile robot. The inertial navigation sensors include, but are not limited to, one or more of a gyroscope, odometer, optical flow meter, and accelerometer, among others. The inertial navigation data acquired by the inertial navigation sensor includes, but is not limited to, one or more of speed data, acceleration data, moving distance, rolling number of the roller, deflection angle of the roller, and the like of the mobile robot.
For example, an IMU is disposed in a drive assembly of the mobile robot (i.e., the assembly that moves the robot forward, such as a wheel set) and connected to the robot's lower computer, which is connected to the image capturing device; the ranging sensing device is also provided with an IMU, and the ranging sensing device and the image capturing device are connected to the robot's upper computer. The timestamps of the IMU in the ranging sensing device and the IMU in the drive assembly are kept synchronized, so that the IMU in the drive assembly generates a synchronization signal to the image capturing device to make it acquire image data, while the IMU in the ranging sensing device likewise generates a synchronization signal to make the ranging sensing device acquire measurement data.
For another example, an IMU is disposed in a driving component (i.e., a component for moving the mobile robot forward, such as a wheel set) of the mobile robot, and the IMU is connected to a lower computer of the mobile robot, and the lower computer is connected to the image capturing device and the distance measuring sensing device, respectively. The IMU sends a synchronous signal to the image shooting device and the distance measurement sensing device when detecting that the wheel set rotates for a preset number of turns, so that the image shooting device obtains image data, and the distance measurement sensing device obtains measurement data.
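However the synchronization signal is generated, pairing the two data streams can be expressed as matching nearest timestamps; a minimal sketch, under the assumption that both devices stamp their outputs against a shared clock (all names and the 20 ms tolerance are illustrative):

```python
def pair_by_timestamp(images, scans, max_skew=0.02):
    """Pair each image with the measurement frame closest in time.

    images, scans: lists of (timestamp_seconds, payload) sorted by time.
    max_skew: largest tolerated offset for a valid pair.
    """
    if not scans:
        return []
    pairs = []
    j = 0
    for t_img, img in images:
        # Advance while the next scan is at least as close to t_img.
        while j + 1 < len(scans) and \
                abs(scans[j + 1][0] - t_img) <= abs(scans[j][0] - t_img):
            j += 1
        t_scan, scan = scans[j]
        if abs(t_scan - t_img) <= max_skew:
            pairs.append((img, scan))
    return pairs
```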
For convenience of description, how the mobile robot constructs a map at two locations (i.e., a first location and a second location) will be exemplified in the following embodiments.
With reference to fig. 1, in step S120, the mobile robot analyzes the moving state of the mobile robot in the physical space by using at least the measurement data obtained at different positions, so as to obtain at least landmark position data in which the landmark features in each image data are mapped to the physical coordinate system, and/or obtain corresponding positioning position data of the at least two positions in the physical coordinate system.
Here, the measurement data and the image data acquired in step S110 each contain data useful for positioning and map construction in different spatial height ranges of the physical space in which the mobile robot is located, such as landmark data and the distance/angle relative to a landmark feature. In addition, since the ranging sensing device and the image capturing device move with the mobile robot as a whole, the movement state determined at least from the measurement data acquired at different positions can solve two problems: that the landmark features reflected in the image data only have landmark positions in a set coordinate system without physical scale, and that high-precision positioning cannot be obtained from the measurement data alone. The set coordinate system includes, for example, a camera coordinate system, or a virtual coordinate system that lacks a mapping relationship to the physical coordinate system determined by physical dimensions. Therefore, the movement state of the mobile robot described by the measurement data, together with the landmark data of the corresponding landmark features in the image data, is used to calculate the landmark position data that maps each landmark feature captured in the image data into the physical coordinate system.
Specifically, in the absence of reference data reflecting the actual movement of the mobile robot in physical space, the mobile robot cannot determine from the image data quantities that reflect positional relationships in the actual physical space, such as the physical distance it has moved or the physical distance between itself and a given entity. For this purpose, the mobile robot obtains its movement state in physical space by analyzing the measurement data, which has physical meaning; then, using this movement state together with the preset intrinsic and extrinsic parameters of the image capturing device and the ranging device, set at assembly on the mobile robot and during image acquisition, the mobile robot establishes the correspondence between landmark data in the image data and landmark position data in the physical coordinate system, thereby determining the landmark position data of the corresponding landmark features in the physical coordinate system.
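As a simplified illustration of this correspondence, the 2D sketch below triangulates a landmark seen from two positions whose metric separation is known from the movement state; extracting the bearings from pixel coordinates via the camera parameters is assumed rather than shown, and all names are illustrative:

```python
import math

def triangulate_2d(p1, p2, bearing1, bearing2):
    """Intersect two bearing rays to locate a landmark in the plane.

    p1, p2: robot positions (x, y) in the physical coordinate system,
    obtained from the metric movement state.
    bearing1, bearing2: absolute bearings (radians) to the landmark,
    derived from its pixel position and the camera parameters.
    """
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve p1 + s*d1 = p2 + t*d2 for s by 2x2 elimination.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; landmark not triangulable")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    s = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])
```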
The moving state comprises at least one of: data related to the mobile robot's own positioning, data used to help the mobile robot determine its positioning, data reflecting the mapping relationship between the change of the robot's movement position in physical space and the change of the pixel positions of landmark features in the image data, and the like. The data related to the mobile robot's own positioning includes, but is not limited to: the pose change of the mobile robot relative to an entity, the pose change between its successive positions, or the position information of each path the mobile robot passes through, etc.; the data for helping the mobile robot determine its positioning includes, but is not limited to: the relative positional relationship (e.g., pose change data) between the mobile robot and the landmark data corresponding to the same landmark feature in the measurement data, the landmark position of the landmark feature, and the like.
It should be noted that the landmark feature described in the measurement data and the landmark feature described in the image data are not necessarily the same landmark feature. Subsequently, for convenience of description, the landmark data in which the landmark feature is reflected in the measurement data is referred to as landmark measurement data, and the landmark data in which the landmark feature is reflected in the image data is referred to as landmark image data.
Wherein the manner in which the mobile robot analyzes the moving state of the mobile robot in the physical space using at least the measurement data acquired at different positions includes the following examples:
In some examples, the mobile robot obtains the corresponding movement state by analyzing the measurement data measured at different positions so as to reflect the change of its physical position. The analysis reflecting the change of physical position includes analyzing the movement behavior of the mobile robot using the measurement data, which has physical meaning. The mobile robot obtains the movement state produced by its change of pose by analyzing, for example, the data values of the measurement data relative to the same entity at the at least two positions, and the positional deviation between the measurement data relative to the same entity.
To improve the accuracy of the measured movement state, in some specific examples the mobile robot calculates the movement state using the landmark measurement data corresponding to common landmark features identified in the measurement data.
Taking the case where the mobile robot acquires measurement data at a first position and a second position as an example: the measurement data acquired at the first and second positions share at least one common landmark feature, and the moving state of the mobile robot can be calculated from the distances between the mobile robot and the common landmark feature at the two positions.
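As an illustration of this common-landmark calculation, a minimal 2D sketch; it assumes the measurement data provides both range and bearing to the landmark (as described earlier) and that absolute headings are available, e.g. from a gyroscope, which is an added assumption:

```python
import math

def displacement_from_landmark(obs1, obs2, heading1, heading2):
    """Translation between two robot positions from range/bearing
    observations of one common, stationary landmark.

    obs1, obs2: (range_m, bearing_rad) to the landmark, bearings in the
    robot frame. heading1, heading2: absolute robot headings (radians).
    """
    r1, b1 = obs1
    r2, b2 = obs2
    # Landmark offset from each position, rotated into the world frame.
    lx1, ly1 = r1 * math.cos(heading1 + b1), r1 * math.sin(heading1 + b1)
    lx2, ly2 = r2 * math.cos(heading2 + b2), r2 * math.sin(heading2 + b2)
    # Same landmark seen twice: pos2 - pos1 = offset1 - offset2.
    return (lx1 - lx2, ly1 - ly2)
```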
In the process in which the mobile robot constructs the map with a preset initial movement position as the starting point, and given that the robot knows the positioning position information of its previous position in the physical coordinate system, the movement states between successive positions obtained as in the above example are used to obtain the positioning position data corresponding to the at least two positions in the physical coordinate system, and/or the landmark position data in the physical coordinate system corresponding to the landmark features in the measurement data, and the like. It should be understood that, since the measurement data acquired by the ranging sensing device has actual physical units, the calculated movement state also has actual physical units.
In other specific examples, the mobile robot obtains the movement state by performing a coincidence calculation on the measurement data acquired at different positions. A single set of measurement data provides the positional relationship between the mobile robot and the entity portions in the corresponding measurement plane of physical space; when the mobile robot measures the same entity at different positions, the measurement data reflects how the measured position of that entity changes as the robot's pose changes. For example, the mobile robot may rotate, translate, and scale the measurement data acquired at one position to make it coincide with some measurement points in the measurement data at another position; these rotation, translation, and scaling operations simulate the moving operation the robot would perform to bring the corresponding measurement points into coincidence. In this way, by executing the coincidence optimization operation, the mobile robot can simulate movement states such as the pose change between different positions and physical dimensions, and use some of these movement states, such as the pose change data, together with the physical quantities provided by the coincident measurement data, to determine other data in the movement state, such as the positioning position data of the at least two positions in the physical coordinate system and/or the landmark position data in the physical coordinate system corresponding to the landmark measurement data.
To this end, in an exemplary embodiment, the mobile robot optimizes the degree of coincidence between the measurement data acquired at different positions, to obtain landmark position data mapping at least the coincident measurement data meeting the coincidence condition to the physical coordinate system, and/or the positioning position data in the physical coordinate system of the at least two positions at which the robot acquired the data. The mobile robot can regard the coincident portion of the measurement data as landmark data of the corresponding landmark features. The coincidence condition includes, for example, that the number of iterative optimizations of the coincidence degree reaches a preset iteration threshold, and/or that the gradient value of the coincidence degree is smaller than a preset gradient threshold. This method helps calculate the movement state from the physical quantities reflecting the fixed objects in each set of measurement data as much as possible.
Taking one-dimensional measurement data as an example, please refer to fig. 3, which is a schematic diagram of the measurement data acquired by the mobile robot at two positions in one embodiment. As shown in the figure, the mobile robot acquires measurement data MData_1 and MData_2 at a first position and a second position, respectively. If an entity with a fixed position exists in the physical space, MData_1 and MData_2 contain measurement points corresponding to that entity which can be made to coincide after data processing. As can be seen from the measurement points of the entities 311, 312 and 313 in MData_1 and MData_2, the entities 311 and 313 in the two sets of measurement data can be made to coincide after data processing, while the entity 312 cannot be made to coincide simultaneously with 311 and 313; therefore, the entities 311 and 313 can be regarded as fixed entities. The data processing that the mobile robot performs on MData_1 and MData_2, including rotation, translation, scaling and the like, reflects the relative positional relationship of the mobile robot between the first position and the second position. To this end, the mobile robot performs a coincidence optimization calculation using an optimization function constructed from MData_1 and MData_2 until the coincidence of the two sets of measurement data meets a preset coincidence condition. Pose change data reflecting the relative positional relationship is thus obtained; the coincident measurement points in the two sets of measurement data under the preset coincidence condition are taken as landmark measurement data, and, based on the obtained pose change data, landmark position data mapping at least the landmark measurement data to the physical coordinate system and/or positioning position data of the at least two positions in the physical coordinate system are obtained. It should be noted that the above example also applies to the case where the measurement data is two-dimensional.
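For illustration, the sketch below aligns two small 2D point sets by brute-force search over rigid transforms, minimizing nearest-point distances (i.e., maximizing coincidence). A practical system would use an iterative scheme such as ICP, and the names, search ranges, and omission of scaling here are illustrative choices:

```python
import math

def transform(points, tx, ty, theta):
    """Rigidly transform 2D points: rotate by theta, then translate."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def coincidence_cost(scan_a, scan_b):
    """Sum over scan_a of squared distance to the nearest point of scan_b."""
    return sum(min((ax - bx) ** 2 + (ay - by) ** 2 for bx, by in scan_b)
               for ax, ay in scan_a)

def align(scan_a, scan_b, steps=11, span=1.0):
    """Brute-force search over (tx, ty, theta) for the best coincidence.

    Exhaustive and O(steps^3 * |a| * |b|): fine for a sketch, far too
    slow for real scans, where an iterative optimizer would be used.
    """
    best = (float("inf"), (0.0, 0.0, 0.0))
    grid = [span * (i / (steps - 1) - 0.5) * 2 for i in range(steps)]
    for tx in grid:
        for ty in grid:
            for theta in [g * math.pi / 4 for g in grid]:
                cost = coincidence_cost(transform(scan_a, tx, ty, theta), scan_b)
                best = min(best, (cost, (tx, ty, theta)))
    return best[1]  # pose change (tx, ty, theta) with the lowest cost
```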
In order to improve calculation efficiency, in a further exemplary embodiment the mobile robot optimizes the degree of coincidence between the landmark measurement data in the measurement data acquired at different positions, so as to obtain the movement state of the mobile robot under the coincidence condition.
The optimization process is similar to the one-dimensional example described above. In an exemplary embodiment, the method for constructing a map further includes a step of extracting landmark features from each set of measurement data, so as to analyze the movement state using the landmark measurement data corresponding to those landmark features. The landmark measurement data includes landmark measurement data corresponding to moving objects and landmark measurement data corresponding to fixed objects. For example, the measurement data acquired by the ranging sensing device includes the distance to each feature in physical space, covering both stationary features (e.g., flowerpots, end tables, shelves) and active features (e.g., people, pets, shopping carts being pushed).
It should be appreciated that in some scenarios certain features in the physical space are unsuitable as landmark features, such as movable objects like people and pets. Marking such features in the map data affects the subsequent positioning of the mobile robot and can lead to large errors. It is therefore necessary to screen each set of measurement data to determine which parts of it can serve as landmark features, so that the movement state is analyzed using only the extracted landmark features, thereby improving calculation accuracy.
In one embodiment, qualified features are screened out as landmark features by means of a threshold. For example, suppose the measurement data acquired at the first position and at the second position share five common features a, b, c, d and e, the movement states calculated from a, b, c and d differ from one another by less than 1 cm, and the movement state calculated from e differs from those calculated from the other features by 74-75 cm. If the screening threshold is set at 5 cm, then a, b, c and d can be extracted as landmark features, while e is likely an active feature and cannot be used as a landmark feature.
In another embodiment, landmark features may also be extracted directly from the numerical variation of the measurement data acquired at the positions. Specifically, across measurement data acquired at different positions, the static features vary in a relatively regular way, since the variation they show is produced solely by the robot's own movement, whereas an active feature varies markedly relative to the variation of the static features. Features whose distance values vary regularly can therefore be found by means of a preset variation threshold and used as landmark features; that is, the threshold bounds how far each feature's change may deviate from the common change shared by the static features. For example, the measurement data acquired at the first position and at the second position share five common features a, b, c, d and e. At the first position, the distances of features a, b, c, d and e relative to the mobile robot are 2 m, 3.4 m, 4.2 m, 2.8 m and 5 m, respectively; at the second position they are 2.5 m, 3.8 m, 4.9 m, 3.6 m and 2 m. The amounts of change between the two positions are thus 0.5 m, 0.4 m, 0.7 m, 0.8 m and 3 m for a, b, c, d and e, respectively. Assuming a preset variation threshold of 0.5 m, the changes of a, b, c and d all lie within 0.5 m of the typical change, so they can be extracted as landmark features, while e, whose change departs from the rest by far more than the threshold, is likely an active feature and cannot be used as a landmark feature.
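Note that the raw changes of c (0.7 m) and d (0.8 m) also exceed 0.5 m, so for the stated result to hold the threshold is best read as bounding each feature's deviation from the consensus change of the static features, which all shift by a comparable amount because the robot itself moved. A minimal sketch under that reading (all names are illustrative):

```python
import statistics

dist_pos1 = {"a": 2.0, "b": 3.4, "c": 4.2, "d": 2.8, "e": 5.0}
dist_pos2 = {"a": 2.5, "b": 3.8, "c": 4.9, "d": 3.6, "e": 2.0}
threshold = 0.5   # preset variation threshold, metres

change = {f: abs(dist_pos2[f] - dist_pos1[f]) for f in dist_pos1}
typical = statistics.median(change.values())   # consensus change of the static features
landmarks = [f for f, c in change.items() if abs(c - typical) <= threshold]
print(landmarks)   # ['a', 'b', 'c', 'd']; e deviates by |3.0 - 0.7| = 2.3 m and is rejected
```

The movement-state-difference screening of the preceding paragraph follows the same pattern, with the per-feature movement states in place of the raw distance changes.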
With the static landmark features determined in either of the two embodiments above, the coincidence calculation is performed using the landmark measurement data corresponding to those static landmark features in the measurement data, so as to determine the corresponding movement state. For example, the coincidence calculation is carried out on the positional deviation between the landmark measurement data corresponding to the same static landmark feature at different positions, so that this positional deviation is minimized; when the coincidence condition is judged to be met, the corresponding movement state is determined.
In still other embodiments, the landmark measurement data covering both moving objects and fixed objects are jointly optimized for coincidence, and when the coincidence degree satisfies the coincidence condition, the coincident landmark measurement data are determined to correspond to landmark features of fixed objects.
The coincidence optimization mentioned in any of the above embodiments can be illustrated as follows. Please refer to fig. 4, a schematic diagram of measurement data acquired by the mobile robot at two positions in another embodiment. As shown in the figure, MData_3 and MData_4 are the measurement data acquired by the mobile robot at the first and second positions, where a circle represents landmark measurement data matched between the two sets, and a star represents landmark measurement data that coincides at one coincidence calculation. The mobile robot iteratively optimizes the coincidable landmark measurement data until the change in the number of coinciding (star) landmark measurement data relative to the previous coincidence calculation meets the coincidence condition, and then takes the movement state obtained at the corresponding coincidence calculation; using the obtained movement state and the coincident landmark measurement data, it obtains landmark feature mapping data mapping at least the landmark measurement data into the physical coordinate system, and/or positioning position data corresponding, in the physical coordinate system, to the at least two positions at which the mobile robot acquired data.
With the movement state obtained in any of the above examples, the mobile robot further determines, in the physical coordinate system, the landmark position information of the landmark features corresponding to the landmark image data in the synchronously acquired image data.
While the mobile robot moves, the image pickup device also synchronously acquires image data at each position. By comparing the landmark image data corresponding to common landmark features in the image data captured at the positions, the moving amount and rotating amount of the mobile robot in the set coordinate system are calculated, as is, for example, the relative positional relationship between the landmark features and the mobile robot in the set coordinate system. If the initial position of the mobile robot is taken as the coordinate origin of the set coordinate system, the coordinates of each positioning position of the mobile robot in the set coordinate system, and the coordinates in the set coordinate system of each landmark feature reflected by the image data, can be obtained.
Although the mobile robot can thus calculate the coordinates of each position it passes through in the set coordinate system and the coordinates of each landmark image feature, these coordinates lack actual physical units; that is, the mobile robot cannot know the mapping relationship between the position of a pixel point of the image data in the set coordinate system and the position of the corresponding entity measurement point in the actual physical coordinate system.
The mobile robot therefore converts each position coordinate marked in the set coordinate system into the physical coordinate system by means of the movement state, which is calculated from the measurement data and carries physical units, to obtain the landmark position data mapping the landmark features of each image datum into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system. Because the ranging sensing device and the image pickup device move with the mobile robot as a whole, the movement state obtained from the measurement data also reflects how the mobile robot moved relative to the landmark features corresponding to the landmark image data in the image data. On the basis of the mapping relationship between the set coordinate system and the physical coordinate system determined by the movement state, the mobile robot maps the landmark image features from the set coordinate system into the physical coordinate system, obtaining the landmark position data of the corresponding landmark features in the physical coordinate system.
Taking as an example the execution process in which the mobile robot optimizes the coincidence degree between the measurement data acquired at different positions to obtain, under the coincidence condition, the landmark position data mapping the corresponding landmark features of the image data into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system:
The mobile robot converts the measurement data acquired at the first and second positions into the coordinate system of one of the two sets of measurement data and iteratively calculates the coincidence degree until the coincidence degree of the two sets of measurement data in that coordinate system meets the coincidence condition, thereby obtaining the movement state of the mobile robot under the coincidence condition. The movement state includes the relative positional relationship of the mobile robot between the first and second positions, the physical scale derived from the measured positional relationship between the coincident landmark measurement data, the pose change data of the mobile robot at the second position relative to the first position, and the like. Since the devices move with the mobile robot as a whole, this physical scale also corresponds to the conversion relationship between the set coordinate system and the physical coordinate system (i.e., the map scale). For example, the conversion relationship between the set coordinate system and the physical coordinate system is determined from the physical scale together with the internal and external parameters of the image pickup device, where the internal and external parameters include the internal optical and assembly parameters of the image pickup device, the assembly parameters between the image pickup device and the ranging sensing device, and the like.
After the conversion relationship between the set coordinate system and the physical coordinate system is obtained, the mobile robot calculates the positioning position data, in the physical coordinate system, corresponding to each of its positions in the set coordinate system, and the landmark position data mapping each landmark feature of the set coordinate system into the physical coordinate system. Of course, in practical applications the required data can be obtained as needed: in some cases only the landmark position data mapping the landmark features into the physical coordinate system are needed; in other cases only the positioning position data, in the physical coordinate system, of each position of the mobile robot in the physical space are needed; in still other cases both the landmark position data and the positioning position data are needed.
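As one direct illustration of how such a conversion relationship can be obtained (an assumption for illustration, not necessarily the exact procedure of the present application), the map scale can be taken as the ratio between the displacement of the robot measured in physical units from the ranging data and the same displacement expressed in the set coordinate system:

```python
import numpy as np

# displacement between the first and second positions, from the two sources
delta_physical = np.array([0.50, 0.20])   # metres, derived from the measurement data
delta_visual = np.array([5.0, 2.0])       # unitless, from the set (visual) coordinate system

s = np.linalg.norm(delta_physical) / np.linalg.norm(delta_visual)   # map scale, 0.1 here

# map a landmark position from the set coordinate system into physical coordinates
landmark_visual = np.array([12.0, -3.0])
landmark_physical = s * landmark_visual
print(s, landmark_physical)   # 0.1 [ 1.2 -0.3]
```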
It should be added that the coincidence condition may be a preset coincidence threshold, or the selection of a locally optimal coincidence, including but not limited to: the mean coincidence degree of the common landmark features in the two sets of measurement data over multiple superposition iterations being highest, the standard deviation of that coincidence degree being smallest, the gradient of the coincidence degree reaching a local minimum, and the like.
It should also be noted that the positioning position data may be determined from the movement state obtained when the measurement data meet the coincidence condition, or determined after error correction between the two calculation methods.
A specific example is described below of how the coincidence degree between measurement data acquired at different positions is optimized to obtain the landmark position data mapping the landmark features into the physical coordinate system under the coincidence condition, together with the positioning position data corresponding to the at least two positions in the physical coordinate system. It should be noted that this example merely explains the present application and does not limit it.
Let the positions of the image pickup device itself in the set coordinate system at two successive times (time 1 and time 2) in the current physical space be T_wc1 and T_wc2, and let the positions of the ranging sensing device in the set coordinate system be T_wb1 and T_wb2. Using the assembly parameters between the image pickup device and the ranging sensing device, i.e., the position T_bc of the ranging sensing device relative to the image pickup device, it can be derived that:

T_wc1 = T_wb1 · T_bc,  T_wc2 = T_wb2 · T_bc.

Assume the scale factor between the distance unit of the set coordinate system and that of the actual physical coordinate system is s. Expressed in matrix form, the coordinates of T_wc1 in the set coordinate system are

T_wc1 = [ R_wc1  t_wc1 ; 0  1 ],

and its coordinates in the real physical coordinate system are then

T'_wc1 = [ R_wc1  s·t_wc1 ; 0  1 ].

The coordinates of T_wc2 in the set coordinate system are

T_wc2 = [ R_wc2  t_wc2 ; 0  1 ],

and its coordinates in the real physical coordinate system are then

T'_wc2 = [ R_wc2  s·t_wc2 ; 0  1 ],

where R_wc1 and R_wc2 denote rotation amounts, and t_wc1 and t_wc2 denote translation amounts.

Similarly, the coordinates of each landmark feature in the set coordinate system, and the corresponding coordinates in the physical coordinate system, may be obtained in a like manner from the distance between each landmark feature and the mobile robot in the set coordinate system, which is not repeated here.

From T'_wc1 the matrix T_c1w = (T'_wc1)^(-1) is determined, giving the relative position of the image pickup device between time 1 and time 2 as T_c1c2 = T_c1w · T'_wc2, where

T_c1w = (T'_wc1)^(-1) = [ R_wc1^T  -R_wc1^T · s·t_wc1 ; 0  1 ],

and T'_wc2 is the position of the image pickup device in the physical coordinate system at time 2.

Through T_c1c2 = T_c1w · T'_wc2, the measurement data at time 2 are mapped into the coordinate system of the measurement data at time 1 to form simulated measurement data SData (the measurement data at time 1 may equally be mapped to time 2; the principle is the same and is not repeated here), and the measurement data obtained by the ranging sensing device at time 1 are taken as actual measurement data RData. It should be understood that, by mapping the measurement data at time 2 into time 1, the position and angle at time 2 relative to the position and angle of each landmark feature at time 1 can be modeled theoretically.

Here, one of the simulated measurement data SData and the actual measurement data RData is adjusted through the scale factor (i.e., the physical scale) until the coincidence degree between SData and RData meets the coincidence condition, thereby determining the value of s appearing in T'_wc1 = [ R_wc1  s·t_wc1 ; 0  1 ] and T'_wc2 = [ R_wc2  s·t_wc2 ; 0  1 ]. Substituting this value of s back in, the coordinates of the mobile robot in the physical coordinate system at time 1 and time 2 (i.e., the positioning position data) and the coordinates of each landmark feature in the physical coordinate system (i.e., the landmark position data) can be determined.
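The chain of formulas above can be run numerically; the sketch below uses a 2-D analogue (3 x 3 homogeneous matrices instead of the 4 x 4 case) with illustrative values that are not taken from the present application:

```python
import numpy as np

def pose(theta, t):
    """2-D homogeneous pose [R t; 0 1]."""
    c, s_ = np.cos(theta), np.sin(theta)
    return np.array([[c, -s_, t[0]],
                     [s_,  c,  t[1]],
                     [0.,  0.,  1.]])

def rescale(T, s):
    """Apply the scale factor s to the translation part only: [R, s*t; 0 1]."""
    Tp = T.copy()
    Tp[:2, 2] *= s
    return Tp

s = 0.1                              # scale factor determined from the ranging data
T_wc1 = pose(0.0, [5.0, 2.0])        # camera pose at time 1, set coordinate system
T_wc2 = pose(0.3, [8.0, 4.0])        # camera pose at time 2, set coordinate system

T_wc1_phys = rescale(T_wc1, s)       # T'_wc1
T_wc2_phys = rescale(T_wc2, s)       # T'_wc2
T_c1c2 = np.linalg.inv(T_wc1_phys) @ T_wc2_phys   # relative pose between the two times
print(T_c1c2)
```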
In other examples, in order to prevent external factors such as jolting or being picked up during movement from interfering with the acquisition of data by the ranging sensing device and the image pickup device, the mobile robot obtains the corresponding movement state by analyzing both the measurement data and the image data measured at different positions, so as to reflect the change of their physical positions.
In an exemplary embodiment, the present application also takes the error of the image pickup device into account, jointly optimizing the errors of the image pickup device and the ranging sensing device and thereby improving the accuracy of the constructed map.
Specifically, joint optimization is performed on the coincidence degree between the measurement data acquired at different positions and on the coincidence degree between the acquired image data, to obtain, under the coincidence condition of the joint optimization, the landmark position data mapping the corresponding landmark features of the image data into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
In one embodiment, the measurement data and the image data are adjusted separately so as to optimize their respective coincidence degrees; when the error between the movement states obtained from the respective coincidence degrees meets an error condition, the corresponding movement states are used to determine the landmark position data mapping the corresponding landmark features of the image data into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system. The error condition includes, for example, a preset error threshold, or selecting the minimum error value over a preset number of adjustments.
Taking as an example a mobile robot that synchronously acquires image data and measurement data at a first position and a second position: the measurement data acquired at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation. In other words, the mapped measurement data determined at the current coincidence degree correspond to a simulated movement state of the mobile robot, and represent the measurement data that would be observed at the second position if the measurement data at the first position were propagated according to that movement state.
Meanwhile, the movement state of the mobile robot in the set coordinate system (i.e., the moving amount and rotating amount in that system) is also calculated from the image pickup device by comparing the landmark image data corresponding to common landmark features in the image data captured at the two positions. The movement state obtained from the image data and the movement state obtained from the measurement data correspond to each other through the assembly parameters between the image pickup device and the ranging sensing device.
The two movement states are then used to determine the landmark position data of the landmark features in the physical coordinate system, and/or the positioning position data of the first and second positions in the physical coordinate system. When the coincident measurement data are used as the landmark measurement data of the corresponding landmark features, the landmark position data of each landmark measurement datum in the physical coordinate system can also be determined.
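A compact numerical illustration of this error condition, under assumed 1-D data and an assumed error threshold (neither is from the present application):

```python
import numpy as np

scan_pts_1 = np.array([1.0, 2.0, 4.0])        # 1-D ranging data at the first position
scan_pts_2 = scan_pts_1 - 0.52                # the same entities seen at the second position
img_feats_1 = np.array([10.0, 20.0, 40.0])    # feature coordinates, set coordinate system
img_feats_2 = img_feats_1 - 5.0
s = 0.1                                       # map scale (see the preceding examples)

motion_from_scans = np.mean(scan_pts_1 - scan_pts_2)         # 0.52 m
motion_from_images = s * np.mean(img_feats_1 - img_feats_2)  # 0.50 m

error_threshold = 0.05   # preset error condition, metres
if abs(motion_from_scans - motion_from_images) < error_threshold:
    movement_state = 0.5 * (motion_from_scans + motion_from_images)   # fuse the two
    print("accepted movement state:", movement_state)
else:
    print("keep adjusting the coincidence degrees")
```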
In another embodiment, the coincidence degree between the measurement data of the two positions is adjusted, and the movement state obtained at the corresponding coincidence degree in turn determines the coincidence degree of the landmark image data within the image data; when the coincidence degree between the adjusted measurement data and the coincidence degree between the image data both meet the coincidence condition, the corresponding movement state is used to determine the landmark position data mapping the corresponding landmark features of the image data into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
Still taking the example of a mobile robot that synchronously acquires image data and measurement data at a first and a second position: the measurement data acquired at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation; that is, the mapped measurement data determined at the current coincidence degree correspond to a simulated movement state and represent the measurement data at the second position as propagated from the first position according to that movement state.
The movement state obtained from the image data corresponds, through the assembly parameters between the image pickup device and the ranging sensing device, to the movement state obtained from the measurement data; under the movement state obtained from the measurement data, the coincidence degree of the matched landmark image data in the image data acquired at the two positions is calculated.
The coincidence degree obtained from the measurement data and the coincidence degree obtained from the image data are then evaluated to determine whether the evaluation result meets the coincidence condition. If so, each landmark feature and each position are mapped into the physical coordinate system under the corresponding movement state; if not, the coincidence degree of the measurement data is adjusted further until the coincidence condition is met.
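The coupled adjustment of this second embodiment can likewise be sketched by minimizing a joint cost over candidate movement states; the data and the equal weighting of the two terms below are illustrative assumptions:

```python
import numpy as np

scan_1 = np.array([1.0, 2.0, 4.0]); scan_2 = scan_1 - 0.52     # ranging data
img_1 = np.array([10.0, 20.0, 40.0]); img_2 = img_1 - 5.0      # image features
s = 0.1                                                        # map scale

def joint_cost(dx):
    scan_cost = np.mean((scan_2 + dx - scan_1) ** 2)           # measurement-data coincidence
    image_cost = np.mean((s * img_2 + dx - s * img_1) ** 2)    # image-data coincidence
    return scan_cost + image_cost

candidates = np.linspace(0.0, 1.0, 1001)
dx_best = candidates[np.argmin([joint_cost(d) for d in candidates])]
print(dx_best)   # 0.51: the jointly optimized movement state, between 0.52 and 0.50
```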
Through the optimization described in these embodiments, the errors of the image pickup device and the ranging sensing device can be jointly optimized, so that the strengths of each sensor are exploited, error accumulation is avoided, and map-building accuracy is improved.
In an exemplary embodiment, there are a plurality of types of ranging sensing devices, and each of them synchronously obtains its own measurement data according to its own measurement mode, for use in analyzing the movement state. For example, the ranging sensing devices include any two or more of the following: a laser ranging sensing device, a binocular camera device, a ToF ranging sensing device, a structured-light ranging sensing device, and the like.
Specifically, when the mobile robot has a plurality of ranging sensing devices, the measurement data may be obtained synchronously based on the respective measurement modes of the devices, and the measurement data of all the ranging sensing devices may be integrated to determine the movement state of the mobile robot.
In an embodiment, the measurement data from the individual ranging sensing devices belonging to the same position may first be fused, and the optimization may then be performed between the fused measurement data corresponding to the positions, so as to obtain the position data mapping the landmark features into the physical coordinate system under the coincidence condition of the measurement data, and/or the position data corresponding to the at least two positions in the physical coordinate system.
Here, each ranging sensing device synchronously acquires measurement data at each position according to its own measurement mode; after the measurement data from each device are obtained, those belonging to the same position are fused, and the optimization is performed on the fused measurement data. For example, suppose the ranging sensing devices comprise a binocular camera device and a laser sensor. At the first position, the binocular camera device determines the distance of the mobile robot relative to each landmark feature by triangulation over the images captured by its two cameras, yielding the first measurement data of the binocular camera device; at the same time, the laser sensor determines its distance to the landmark features at the first position from the difference between the emission time and the reception time of its laser beam, yielding the first measurement data of the laser sensor. The second measurement data of both devices are acquired at the second position in the same way. After the first measurement data have been fused and the second measurement data have been fused, the coincidence optimization is performed on the fused first measurement data and the fused second measurement data, and each item of position data in the physical coordinate system is obtained in a manner similar to the ways mentioned in the previous examples, which is not repeated here.
The fusion processing includes: performing landmark analysis on the measurement data provided by the different ranging sensing devices to obtain the landmark measurement data corresponding to common landmark features. For example, the measurement data acquired by each ranging sensing device contain a distance to the same landmark feature, and the distances to that landmark feature detected by the different devices are processed by averaging, taking the median, or the like. The fusion processing further includes: processing the measurement data provided by the different ranging sensing devices by interpolation, averaging and the like according to their respective measured direction angles. For example, the measured value of a voxel position measured by the binocular camera device is averaged with the measurement datum of the corresponding voxel position measured by the single-point laser ranging sensing device. For another example, if the distance measurement value D1 at a certain direction angle measured by the single-point laser ranging sensing device corresponds to a direction angle between the measurement values D21 and D22 of two voxel positions measured by the binocular camera device, D1 is averaged with D21 and with D22 respectively to obtain new measurement values D21' and D22' for the two voxel positions, which replace D21 and D22. For yet another example, in the same situation, a column C21 may be inserted between the columns C1 and C2 holding D21 and D22 in the two-dimensional distance lattice data measured by the binocular camera device, and the values at the other voxel positions of column C21 may be interpolated as the average of the measurement values of columns C1 and C2.
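The averaging steps above amount to very small computations; a sketch reproducing the D1/D21/D22 example with assumed numbers:

```python
import numpy as np

# distances to the same landmark feature reported by the two ranging devices
d_binocular = 3.42   # metres, from triangulation over the stereo images
d_laser = 3.38       # metres, from the laser time of flight
d_fused = np.mean([d_binocular, d_laser])          # 3.40 m

# the direction-angle example: D1 falls between the voxel measurements D21 and D22
D1, D21, D22 = 3.0, 3.2, 2.9
D21_new, D22_new = np.mean([D1, D21]), np.mean([D1, D22])
print(d_fused, D21_new, D22_new)   # 3.4 3.1 2.95
```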
In another embodiment, coincidence optimization may be performed across the measurement data from the different ranging sensing devices acquired at different positions, and when the coincidence degree between the measurement data meets the coincidence condition, the position data mapping the landmark features into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system, are obtained.
Here, each ranging sensing device synchronously acquires measurement data at each position according to its own measurement mode. After the measurement data from each device are obtained, the movement state of the mobile robot is analyzed from each device's data separately: the coincidence degrees of the measurement data acquired by each ranging sensing device at the different positions are optimized individually to obtain the movement states meeting the coincidence condition. These movement states are then combined by averaging, taking the median, or other data processing, into the movement state used to map the landmark feature data of the image data into the physical coordinate system, and each item of position data in the physical coordinate system is obtained in the manner of any of the foregoing examples, which is not repeated here.
With continued reference to fig. 1, in step S130, the obtained landmark position data and positioning position data are recorded in map data constructed based on the physical coordinate system.
Here, after steps S110 and S120 are performed, the landmark position data mapping the landmark feature data of each image datum into the physical coordinate system, and/or the positioning position data corresponding to the at least two positions in the physical coordinate system, are determined, and the obtained landmark position data and/or positioning position data are recorded in map data constructed on the basis of the physical coordinate system. This yields a map of the current physical space that contains the positions the mobile robot has passed through and the positions of the landmark features. The map data are stored in a storage medium through a database, and may be presented to the user in grid/unit-vector form or the like.
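As a minimal illustration of such a record (the present application only specifies storage through a database in grid/unit-vector form, so the structure below is entirely an assumption):

```python
map_data = {
    "landmarks": {},    # landmark identifier -> (x, y) in the physical coordinate system
    "trajectory": [],   # positioning position data of the mobile robot
}

def record(landmark_positions, robot_position):
    """Record landmark position data and positioning position data into the map."""
    map_data["landmarks"].update(landmark_positions)
    map_data["trajectory"].append(robot_position)

record({"landmark_7": (1.2, -0.3)}, (0.51, 0.02))
print(map_data)
```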
In some cases, the map also stores the image data captured by the image pickup device at each position, for use in relocation. Relocation proceeds as follows: after the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, compares the image data captured by the image pickup device at a certain position against the image data stored in the map data, and determines its position in the physical space from the comparison result.
In other cases, the historical measurement data (e.g., point cloud data) acquired by the ranging sensing device at each position are also stored in the map for relocation. Relocation then compares both the image data captured at a certain position against the image data stored in the map data, and the measurement data obtained by the ranging sensing device at that position against the measurement data stored in the map data, and the mobile robot determines its position in the physical space from the comparison results.
The comparison of measurement data includes, but is not limited to, the Iterative Closest Point (ICP) algorithm: the current measurement data and the stored measurement data to be compared are treated as two point sets, the points of the two sets are placed in one-to-one correspondence through a three-dimensional spatial transformation, and the degree of similarity of the two point sets is judged accordingly.
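For illustration, a minimal 2-D variant of ICP is sketched below (the present application applies it in three-dimensional space; the structure is the same). All names are illustrative:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2-D ICP sketch: repeatedly match each point of `src` to its
    nearest neighbour in `dst` and solve the rigid transform (Kabsch) that
    best aligns the matched pairs."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        nn = np.linalg.norm(moved[:, None] - dst[None, :], axis=2).argmin(axis=1)
        p, q = moved, dst[nn]
        pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:        # keep a proper rotation, no reflection
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = q.mean(axis=0) - R_step @ p.mean(axis=0)
        R, t = R_step @ R, R_step @ t + t_step
    moved = src @ R.T + t
    nn = np.linalg.norm(moved[:, None] - dst[None, :], axis=2).argmin(axis=1)
    residual = np.linalg.norm(moved - dst[nn], axis=1).mean()
    return R, t, residual   # a small residual indicates highly similar point sets

theta = 0.1
Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
cloud = np.random.rand(50, 2)
R, t, res = icp_2d(cloud, cloud @ Rt.T + [0.2, -0.1])
print(res)   # close to 0 when the stored and current scans depict the same place
```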
In an exemplary embodiment, when the ranging sensing devices include a binocular camera device, the method further includes a step of storing the images captured by the binocular camera device in the map data, in correspondence with the positioning position data of the mobile robot in the physical coordinate system.
Here, the map also holds the images taken by the binocular camera device at each position, for use in relocation.

Relocation proceeds as follows: after the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, compares the image captured by the binocular camera device at a certain position against the images stored in the map data, and determines its position in the physical space from the comparison result.

In some cases, the map stores both the image data captured by the image pickup device at each position and the images captured by the binocular camera device. After the map data of the current physical space have been constructed for the first time, the mobile robot, while working in the current physical space, compares the image data captured by the image pickup device at a certain position against the image data stored in the map data, and the image captured by the binocular camera device against the images stored in the map data, and determines its position in the physical space from the comparison results.
In an exemplary embodiment, the method of constructing a map further comprises a step of determining environmental boundaries in the map data from the measurement data measured by the ranging sensing device. The environmental boundaries include, but are not limited to, space dividers such as doors, walls and screens.
Please refer to fig. 2a and 2b, which are schematic diagrams of a map constructed in the present application according to an embodiment. As shown in the figures, while moving through the current physical space the mobile robot marks in the map data the positions of, and distances to, the landmark features acquired by the ranging sensing device. Since a space divider is a continuous, uninterrupted object in space, the measurement data sensed from it likewise form continuous or dense lines, depending on the type of ranging sensing device, producing the environmental boundary 11 shown in fig. 2a and 2b. In fig. 2a and 2b, the horizontal and vertical axes represent the X axis and Y axis of the physical coordinate system, the discrete point (0,0) in fig. 2a represents the initial position of the mobile robot, and the other discrete points in fig. 2b represent the positions traversed by the mobile robot during movement and other landmark features in the physical environment.
It should be noted that, although the embodiments are described with the mobile robot synchronously acquiring image data and measurement data at two positions, in practical applications three positions, four positions and so on may equally be used, which is not repeated here.
To address the difficulty of positioning a mobile robot in a large physical space whose decoration presents a degree of self-similarity, wider environmental perception within the physical space is achieved by synchronously acquiring measurement data, which reflect the higher environmental complexity at the sides of the mobile robot, and image data, which cover the lower environmental complexity above it. Constructing the map from both the measurement data and the image data then overcomes the low accuracy, and the sparsity of landmarks conducive to accurate positioning, that result from building a map from either kind of data alone.
In an embodiment of the second aspect of the present application, a positioning method is provided that determines the position of the mobile robot in the physical space by jointly optimizing the errors of the data provided by the image pickup device and by the ranging sensing device.
The positioning method may be executed by a processor in the mobile robot, or may be executed by a server in communication with the mobile robot.
The mobile robot or the server may have map data of the current physical space prestored, in which case the current position of the mobile robot in the physical space may be determined on the basis of the map data. Alternatively, when no map data are prestored, the current position of the mobile robot in the physical space may be determined from the data of the physical space acquired at its current position and at a previous position, and the map data of the current physical space may be constructed during movement.
Please refer to fig. 5, which is a schematic diagram of a positioning method of a mobile robot according to an embodiment of the present application. As shown in the figure, in step S210, the image pickup device and the ranging sensing device are controlled to synchronously acquire image data and measurement data at least at a first position and a second position of the mobile robot, respectively, wherein the first position is mapped to first positioning position data in the map data.
Here, the mobile robot passes through a plurality of positions while moving in the current physical space. In order to accurately correlate the image data and the measurement data, the image pickup device and the ranging sensing device of the mobile robot acquire image data and measurement data at each of these positions.
Taking the working environment of a mobile robot in an airport or a supermarket as an example: the physical space contains moving objects, such as walking passengers with their luggage or walking customers with their shopping carts, as well as fixed objects, such as fences, shelves and standing people, so the measurement data measured at the at least two positions cover both fixed and moving objects. The image data acquired at the same two positions at least comprise image data of the entities in the physical space vertically above the detection range of the ranging sensing device, and possibly also image data of the parts corresponding to the measurement data.
In order to improve data accuracy when fusing data from two independent devices, the image data and measurement data belonging to the same position are acquired synchronously. Synchronous acquisition may mean that the mobile robot stops at several of the positions it passes to acquire image data and measurement data before continuing to move, or that the image data and measurement data are acquired synchronously without stopping by means of a synchronization signal.
The first position is mapped to the first positioning position data in the map data. The first position may be a position the mobile robot passes during the current moving operation; alternatively, when map data of the current physical space are prestored, the first position may be a position of the mobile robot stored in the map data during a historical moving operation.
In a possible embodiment, the synchronization may be based on an external synchronization signal, with which the image pickup device and the ranging sensing device synchronously acquire the image data and measurement data corresponding to a position; or the image data and measurement data corresponding to a position may be acquired synchronously based on a synchronization signal inside the image pickup device or the ranging sensing device. For example, while the mobile robot works in the current physical space, the control device of the mobile robot sends a synchronization signal to the image pickup device and the ranging sensing device at certain time intervals, so that they acquire image data and measurement data respectively. As another example, the image pickup device and the ranging sensing device are each provided with a clock module preset with the same clock signal mechanism so as to emit signals synchronously, and when each device receives its synchronization signal, it performs its step of acquiring image data or measurement data.
In some cases, the external synchronization signal may also be generated from a signal of an inertial navigation sensor of the mobile robot. The inertial navigation sensor (IMU) acquires inertial navigation data of the mobile robot, and includes, but is not limited to, one or more of a gyroscope, an odometer, an optical flow meter and an accelerometer. The inertial navigation data acquired by it include, but are not limited to, one or more of the speed data, acceleration data, moving distance, number of wheel revolutions, wheel deflection angle, and the like, of the mobile robot.
For example, an IMU is disposed in the driving assembly of the mobile robot (i.e., the assembly, such as a wheel set, that moves the robot forward) and connected to the lower computer of the mobile robot, and the lower computer is connected to the image pickup device; the ranging sensing device is also provided with an IMU, and the ranging sensing device and the image pickup device are connected to the upper computer of the mobile robot. The timestamps of the IMU in the ranging sensing device and of the IMU in the driving assembly are kept synchronized, so that the IMU in the driving assembly generates a synchronization signal that makes the image pickup device acquire image data, while the IMU in the ranging sensing device generates a synchronization signal that makes the ranging sensing device acquire measurement data.
As another example, an IMU disposed in the driving assembly of the mobile robot is connected to the lower computer of the mobile robot, and the lower computer is connected to the image pickup device and to the ranging sensing device. When the IMU detects that the wheel set has rotated a preset number of turns, it sends a synchronization signal to the image pickup device and the ranging sensing device, so that the image pickup device acquires image data and the ranging sensing device acquires measurement data.
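Whatever the signal source, the practical effect is that each image frame is used together with the measurement frame closest to it in time. A minimal illustrative pairing sketch; the timestamps and the tolerance are assumptions:

```python
def pair_frames(image_stamps, scan_stamps, tol=0.01):
    """Pair each image timestamp with the nearest scan timestamp (seconds)."""
    pairs = []
    for ti in image_stamps:
        tj = min(scan_stamps, key=lambda t: abs(t - ti))
        if abs(tj - ti) <= tol:   # keep only genuinely synchronous frames
            pairs.append((ti, tj))
    return pairs

print(pair_frames([0.000, 0.100, 0.200], [0.002, 0.104, 0.251]))
# [(0.0, 0.002), (0.1, 0.104)]; the third frame is dropped as unsynchronized
```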
For convenience of description, how the mobile robot is positioned at two positions (i.e., a first position and a second position) will be exemplified in the following embodiments.
With continued reference to fig. 5, in step S220, the movement state of the mobile robot in the physical space is analyzed using at least the measurement data at the first and second positions respectively, so as to determine the second positioning position data, in the map data, of the second position when the mobile robot is at the second position relative to the first position.
Here, the measurement data and the image data acquired in step S210 each contain data helpful for positioning within different spatial height ranges of the physical space in which the mobile robot is located, such as landmark data and distances/angles relative to landmark features. Moreover, since the ranging sensing device and the image pickup device move with the mobile robot as a whole, the movement state determined at least from the measurement data acquired at different positions remedies two problems: the landmark features reflected in the image data are located at landmark positions in a set coordinate system lacking physical scale, and the measurement data alone do not yield high-precision positioning. The set coordinate system is, for example, a camera coordinate system, or a virtual coordinate system lacking a mapping relationship, determined by physical scale, to the physical coordinate system. Therefore, the movement state of the mobile robot described by the measurement data and the landmark data of the corresponding landmark features in the image data are used to calculate the landmark position data mapping each landmark feature captured in the image data into the physical coordinate system.
Specifically, without referable data reflecting its actual movement in physical space, the mobile robot cannot determine from the image data alone the physical distance it has moved, the physical distance between itself and a given entity, or other quantities reflecting positional relationships in the actual physical space. The mobile robot therefore obtains its movement state in the physical space by analyzing the measurement data, which carry physical meaning; using this movement state together with the preset internal and external parameters of the image pickup device and the ranging device, as assembled on the mobile robot and applied during image acquisition, the mobile robot constructs the correspondence between the landmark data in the image data and the landmark position data in the physical coordinate system, thereby determining the landmark position data of the corresponding landmark features in the physical coordinate system.
The movement state comprises at least one of: data related to the robot's own positioning, data helping the robot determine its positioning, data reflecting the mapping relationship between the change of the robot's moving position in physical space and the change of the image-pixel positions of landmark features in the image data, and the like. The data related to the robot's own positioning include, but are not limited to: the pose change of the robot relative to an entity, the pose change between its earlier and later positions, or the position information of each path the robot passes. The data helping the robot determine its positioning include, but are not limited to: the relative positional relationship (such as pose change data) between the mobile robot and the landmark data corresponding to the same landmark feature in the measurement data, the landmark position of that landmark feature, and the like.
The manner in which the mobile robot analyzes its movement state in the physical space using at least the measurement data acquired at different positions includes the following examples:
In some examples, the mobile robot obtains the corresponding movement state by analyzing the measurement data measured at different positions so as to reflect the change of its physical position. The analysis reflecting the change of physical position includes analyzing the robot's movement behavior using the measurement data, which carry physical meaning. The mobile robot obtains the movement state produced by its change of pose by analyzing the data values of the measurement data relative to the same entity at the at least two positions, the positional deviation between the measurement data relative to the same entity, and the like.
To improve the accuracy of the measured movement state, in some specific examples the mobile robot calculates the movement state using the landmark measurement data corresponding to the common landmark features identified in the measurement data.
Taking the acquisition of measurement data at a first and a second position as an example: the measurement data acquired by the mobile robot at the two positions share at least one common landmark feature, and the movement state of the mobile robot can be calculated from the distances between the robot at the two positions and that common landmark feature.
In a process where the mobile robot presets its initial moving position as the starting-point positioning, and where the positioning position information of the previous position in the physical coordinate system is known, the movement state between the earlier and later positions obtained in this example is used to obtain the positioning position data corresponding to the at least two positions in the physical coordinate system. It should be understood that, since the measurement data acquired by the ranging sensing device carry actual physical units, the calculated movement state also carries actual physical units.
In other specific examples, the mobile robot obtains the movement state by performing coincidence calculation on the measurement data acquired at different positions. A single set of measurement data provides the positional relationship between the mobile robot and the entity portions in the corresponding measurement plane of the physical space; when the same entity is measured at different positions, the change of its measured position within the measurement data reflects the change of the robot's pose. For example, the mobile robot may rotate, translate and scale the measurement data acquired at one of the positions to obtain measurement data coinciding with some measurement points of the data at the other position; these rotation, translation and scaling operations simulate the moving operation the robot would perform to bring the corresponding measurement points into coincidence. The mobile robot can thus simulate movement states such as the pose change between the different positions and the physical scale by performing the coincidence optimization, and determine the remaining movement-state data, such as the second positioning position data to which the second position is mapped in the map data, using some of the movement states (e.g., the pose change data) and the physical quantities provided by the coincident measurement data.
Since the measurement data contain both the physical quantities of fixed objects and those of moving objects measured by the ranging sensing device at the corresponding positions, in an exemplary embodiment the mobile robot optimizes the coincidence degree between the measurement data acquired at different positions to obtain, under the coincidence condition between the measurement data, the second positioning position data corresponding to the second position in the map data. The mobile robot may treat the coincident portion of the measurement data as the landmark data of the corresponding landmark feature. The coincidence condition includes, for example, that the number of coincidence-degree optimization iterations reaches a preset iteration threshold, and/or that the gradient value of the coincidence degree is smaller than a preset gradient threshold. This approach favors calculating the movement state from the physical quantities of the fixed objects reflected in each set of measurement data as far as possible.
Taking one-dimensional measurement data as an example, please refer again to fig. 3, the schematic diagram of measurement data acquired by the mobile robot at two positions in one embodiment. As shown in the figure, the mobile robot acquires measurement data MData_1 and MData_2 at a first position and a second position, respectively. If an entity has a fixed position in the physical space, MData_1 and MData_2 each contain measurement points corresponding to that entity, and those points can be made to coincide after data processing. As seen from the measurement points of the entities 311, 312 and 313 in MData_1 and MData_2, the entities 311 and 313 in the two sets of measurement data can be made to coincide after data processing, while the entity 312 cannot be made to coincide simultaneously with the entities 311 and 313; the entities 311 and 313 can therefore be regarded as fixed entities. The data processing that the mobile robot applies to MData_1 and MData_2, including rotation, translation, scaling and the like, itself reflects the relative positional relationship of the mobile robot between the first position and the second position. To this end, the mobile robot performs a coincidence optimization calculation using an optimization function constructed from MData_1 and MData_2 until their coincidence degree meets a preset coincidence condition. It thereby obtains pose change data reflecting the relative positional relationship, takes the coincident measurement points in the two sets of measurement data under the preset coincidence condition as landmark measurement data, and, based on the obtained pose change data, obtains the second positioning position data corresponding to the second position in the map data under the coincidence condition of the measurement data. It should be noted that the above example applies equally when the measurement data is two-dimensional.
In another embodiment, when map data of the current physical space are prestored and the first position corresponds to the first positioning position data in the map data, the second positioning position data to which the second position is mapped in the map data can be calculated from the first positioning position data according to the movement state of the second position relative to the first position. For example, after the map data of the current physical space were obtained during a historical working run by the map-construction method of the embodiments of the first aspect of the present application (i.e., the embodiments corresponding to fig. 1 to 5), the mobile robot in the current moving operation is at the second position and observes there a landmark feature it shares with the first position; the movement state of the second position relative to the first position can then be determined from the common landmark feature, and the second positioning position data determined from the first positioning position data and that movement state.
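Composing the first positioning position data with the movement state is ordinary pose composition; a 2-D sketch with assumed values:

```python
import numpy as np

first_position = np.array([2.0, 1.0])   # first positioning position data, map frame
first_heading = np.deg2rad(30)          # robot heading at the first position

# movement state of the second position relative to the first, in the robot frame
delta = np.array([0.5, 0.0])            # 0.5 m straight ahead
delta_heading = np.deg2rad(10)

c, s = np.cos(first_heading), np.sin(first_heading)
R = np.array([[c, -s], [s, c]])
second_position = first_position + R @ delta      # second positioning position data
second_heading = first_heading + delta_heading
print(second_position)   # approximately [2.433 1.25]
```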
Of course, in some cases, since the mobile robot acquired and saved image data and measurement data for a number of positions in the physical space when the map data were constructed, the second position at which it currently finds itself may be the same as, or very close to, a first position recorded in the map data. In that case, by comparing the image data and measurement data acquired at the second position against the image data and measurement data stored in the map data for the first position, the first positioning position data of the first position can be adopted as the second positioning position data of the second position.
In order to improve calculation efficiency, in a further exemplary embodiment, the mobile robot optimizes the coincidence degree between the landmark measurement data within the measurement data acquired at different positions, so as to obtain the movement state of the mobile robot under the coincidence condition between the measurement data.
The optimization process is similar to the one-dimensional example described above. In an exemplary embodiment, the positioning method further includes a step of extracting landmark features from each measurement data, so that the movement state can be analyzed using the landmark measurement data corresponding to those features. The landmark measurement data may correspond to moving objects or to fixed objects. For example, the measurement data acquired by the ranging sensing device contain the distances to the features in the physical space, which include both static features (e.g., flowerpots, end tables, shelves) and active features (e.g., people, pets, shopping carts being pushed).
It should be appreciated that in some scenarios, certain features in the physical space are not suitable as landmark features, such as movable objects like people and pets. Marking these unsuitable features in the map data affects the positioning of subsequent mobile robots and can lead to large errors. It is therefore necessary to screen the measurement data to determine which of them can serve as landmark features, so that the movement state is analyzed using only the extracted landmark features, thereby improving calculation accuracy.
In one embodiment, qualified features are screened out as landmark features by means of a threshold. For example, suppose the measurement data acquired at the first position and at the second position share five features a, b, c, d and e, where the movement states calculated from a, b, c and d differ from one another by less than 1 cm, while the movement state calculated from e differs from the others by 74-75 cm. If the filtering threshold is set at 5 cm, a, b, c and d can be extracted as landmark features, whereas e, which is probably an active feature, cannot.
In another embodiment, the landmark features may also be extracted directly from the numerical variation of the measurement data acquired at the positions. Specifically, across measurement data acquired at different positions, static features change in a relatively regular way, whereas an active feature changes strongly relative to the static features. Features whose distance values change regularly can therefore be identified by a preset variation threshold and used as landmark features. For example, suppose the measurement data acquired at the first and second positions share five features a, b, c, d and e. At the first position, the distances of a, b, c, d and e from the mobile robot are 2 m, 3.4 m, 4.2 m, 2.8 m and 5 m; at the second position they are 2.5 m, 3.8 m, 4.9 m, 3.6 m and 2 m. The changes between the two positions are therefore 0.5 m, 0.4 m, 0.7 m, 0.8 m and 3 m respectively. With a preset variation threshold of 0.5 m applied to how much each change deviates from the typical change of the group (here about 0.4-0.8 m), a, b, c and d can be extracted as landmark features, whereas e, whose change deviates by more than 2 m, is probably an active feature and cannot.
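A possible reading of this screening (assumed here: the threshold is applied to how far each feature's distance change deviates from the group's typical change, since the robot's own motion shifts all static features by comparable amounts) can be sketched as follows, reproducing the numbers of the example:

```python
import numpy as np

def screen_landmarks(dist_pos1, dist_pos2, threshold=0.5):
    """Keep features whose distance change between the two positions is
    'regular', i.e. deviates from the median change by less than threshold."""
    change = np.abs(dist_pos2 - dist_pos1)
    deviation = np.abs(change - np.median(change))
    return deviation < threshold         # mask of landmark features

d1 = np.array([2.0, 3.4, 4.2, 2.8, 5.0])   # distances of a..e at position 1
d2 = np.array([2.5, 3.8, 4.9, 3.6, 2.0])   # distances of a..e at position 2
print(screen_landmarks(d1, d2))            # [True True True True False]
```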
After the static landmark features have been determined by either of the two embodiments above, the landmark measurement data corresponding to those static landmark features are used for the coincidence degree calculation that determines the corresponding movement state. For example, the coincidence calculation minimizes the position deviation between the landmark measurement data corresponding to the same static landmark feature at different positions, and the corresponding movement state is determined once the coincidence condition is met.
In still other embodiments, the landmark measurement data of both moving objects and fixed objects are jointly optimized for coincidence, and when the coincidence degree satisfies the coincidence condition, the coincident landmark measurement data are taken to correspond to landmark features of fixed objects.
The coincidence optimization mentioned in any of the above embodiments can be exemplified as follows. Please refer to fig. 4, which is a schematic diagram of measurement data acquired by the mobile robot at two positions according to another embodiment. The figure shows the measurement data MData_3 and MData_4 acquired at the first and second positions, where a circle represents landmark measurement data that match between the two measurement data, and a star represents landmark measurement data that coincide at one coincidence calculation step. The mobile robot iteratively optimizes the coincidable landmark measurement data until the change in the number of coincident points (the stars) relative to the previous coincidence calculation satisfies the coincidence condition, and then takes the movement state obtained at that calculation step; using this movement state and the coincident landmark measurement data, the second positioning position data corresponding to the second position in the map data are obtained under the condition that the coincidence condition between the measurement data is met.
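The iterative optimization described for fig. 4 can be sketched as an ICP-style loop (an illustrative analogue, not the patent's algorithm): at each step the matched points are re-associated, the translation is re-estimated from the currently coincident pairs, and iteration stops once the number of coincident points (the stars) no longer changes:

```python
import numpy as np

def iterative_coincidence(src, dst, iters=50, tol=0.05):
    """Translate src (N x 2 points) toward dst until the count of
    coincident points stops changing between iterations."""
    shift = np.zeros(2)
    prev_count = -1
    for _ in range(iters):
        moved = src + shift
        # nearest neighbour in dst for every point of the moved src
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        coincident = d[np.arange(len(src)), nn] < tol
        if coincident.sum() == prev_count:   # coincidence condition met
            break
        prev_count = coincident.sum()
        # re-estimate the translation from the best available pairs
        pairs = coincident if coincident.any() else np.ones(len(src), bool)
        shift += (dst[nn[pairs]] - moved[pairs]).mean(axis=0)
    return shift, coincident                 # movement state, landmark mask
```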
Using the movement state obtained by any of the above examples, the mobile robot further determines, in the physical coordinate system, the landmark position information of the landmark features corresponding to the landmark image data in the synchronously acquired image data.
While the mobile robot moves, the image pickup device synchronously acquires image data at each position. By comparing the landmark image data corresponding to common landmark features in the images captured at the various positions, the moving amount and rotating amount of the mobile robot in the set coordinate system are calculated, as is, for example, the relative positional relationship between the landmark features and the mobile robot in the set coordinate system. If the initial position of the mobile robot is taken as the origin of the set coordinate system, the coordinates of each positioning position of the mobile robot, and the coordinates of each landmark feature reflected by the image data, can be obtained in the set coordinate system.
Although the mobile robot can calculate the coordinates of each position it passes through and the coordinates of each landmark image feature in the set coordinate system, these coordinates lack actual physical units; in other words, the mobile robot does not know the mapping between the position of a pixel (or of the entity measurement point it corresponds to) in the set coordinate system and its position in the actual physical coordinate system.
The mobile robot therefore converts each position coordinate marked in the set coordinate system into the physical coordinate system using the movement state, which carries physical units because it is calculated from the measurement data, thereby obtaining the second positioning position data corresponding to the second position in the map data under the coincidence condition between the measurement data. Since the ranging sensing device and the image pickup device move integrally with the mobile robot, the movement state obtained from the measurement data also reflects how the mobile robot moves relative to the landmark features corresponding to the landmark image data. The mobile robot then maps the landmark image features from the set coordinate system into the physical coordinate system based on the mapping between the two coordinate systems determined by the movement state, obtaining the landmark position data of the corresponding landmark features in the physical coordinate system.
Taking as an example the case in which the mobile robot optimizes the coincidence degree between measurement data acquired at different positions to obtain the second positioning position data corresponding to the second position in the map data under the coincidence condition, the process of obtaining the second positioning position data in the physical coordinate system is described as follows:
The measurement data acquired by the mobile robot at the first and second positions are converted into the coordinate system of one of the measurement data, and the coincidence degree is calculated iteratively until the coincidence degree of the two measurement data in that coordinate system meets the coincidence condition. The moving state of the mobile robot under the coincidence condition is thereby obtained. The moving state includes the relative positional relationship of the mobile robot between the first and second positions, a physical scale derived from the measured positional relationship between the coincident landmark measurement data, pose change data of the mobile robot at the second position relative to the first position, and the like. Because the devices move integrally with the mobile robot, the physical scale also corresponds to the conversion relationship between the set coordinate system and the physical coordinate system (i.e., the map scale). For example, the conversion relationship is determined from the physical scale together with the internal and external parameters of the image pickup device, where these parameters include the internal optical parameters and assembly parameters of the image pickup device, the assembly parameters between the image pickup device and the ranging sensing device, and the like.
After the conversion relationship between the set coordinate system and the physical coordinate system is obtained, the mobile robot converts each of its positions in the set coordinate system into positioning position data in the physical coordinate system.
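For the position conversion itself, once the map scale is known, mapping a position from the set coordinate system into the physical coordinate system is just a multiplication by that scale (a sketch with an assumed scale value):

```python
import numpy as np

def to_physical(positions_set, scale_s):
    """Map positions from the unitless set coordinate system into the
    physical coordinate system using the map scale s."""
    return scale_s * np.asarray(positions_set, dtype=float)

print(to_physical([(1.0, 2.0), (3.0, 0.5)], 0.8))  # now in metres
```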
It should be added that the coincidence condition may be a preset condition or a locally optimal coincidence, including but not limited to: the average coincidence degree of the common landmark features in the two measurement data is highest, the standard deviation of the coincidence degrees is smallest, the gradient of the coincidence degree is locally smallest, and the like.
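One of these locally optimal criteria, the locally smallest gradient of the coincidence degree, could be selected as in the following sketch (hypothetical; the coincidence curve would come from the iterative calculation above):

```python
import numpy as np

def pick_flattest_step(coincidence_per_step):
    """Return the iteration index where the coincidence degree changes
    least, i.e. where the gradient of the curve is locally smallest."""
    grad = np.gradient(np.asarray(coincidence_per_step, dtype=float))
    return int(np.argmin(np.abs(grad)))

print(pick_flattest_step([0.2, 0.5, 0.7, 0.8, 0.81, 0.81]))  # -> 5
```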
It should also be noted that the positioning position data may be determined from the movement state obtained when the measurement data meet the coincidence condition, or determined after error correction is performed between the two calculation methods.
A specific example of how to optimize the coincidence degree between the measurement data acquired at different positions to obtain the second positioning position data of the second position in the map data in accordance with the coincidence condition will be described below. It should be noted that the present example is only used for explaining the present application and not limiting the present application.
Let the poses of the image pickup device in the set coordinate system at two successive times (time 1 and time 2) in the current physical space be $T_{wc1}$ and $T_{wc2}$, and let the poses of the ranging sensing device in the set coordinate system be $T_{wb1}$ and $T_{wb2}$. Using the assembly parameters between the image pickup device and the ranging sensing device, i.e., the pose $T_{bc}$ of the ranging sensing device relative to the image pickup device, it follows that $T_{wc1} = T_{wb1} T_{bc}$ and $T_{wc2} = T_{wb2} T_{bc}$.

Assuming that the conversion between the distance unit of the set coordinate system and the actual physical coordinate system has a scale factor $s$, the pose $T_{wc1}$ expressed in matrix form in the set coordinate system is

$$T_{wc1} = \begin{bmatrix} R_{wc1} & t_{wc1} \\ 0 & 1 \end{bmatrix},$$

and its coordinates in the real physical coordinate system are

$$T'_{wc1} = \begin{bmatrix} R_{wc1} & s\, t_{wc1} \\ 0 & 1 \end{bmatrix}.$$

Likewise, $T_{wc2}$ in the set coordinate system is

$$T_{wc2} = \begin{bmatrix} R_{wc2} & t_{wc2} \\ 0 & 1 \end{bmatrix},$$

and in the real physical coordinate system

$$T'_{wc2} = \begin{bmatrix} R_{wc2} & s\, t_{wc2} \\ 0 & 1 \end{bmatrix},$$

where $R_{wc1}$ and $R_{wc2}$ denote rotation and $t_{wc1}$ and $t_{wc2}$ denote translation.
Similarly, the coordinates of each landmark feature in the set coordinate system and the coordinates corresponding to the physical coordinate system may also be obtained in a similar manner according to the distance between each landmark feature in the set coordinate system and the mobile robot, which is not described herein again.
From $T'_{wc1}$, the relative pose of the image pickup device between time 1 and time 2 is determined as $T_{c1c2} = T_{c1w} T'_{wc2}$, where

$$T_{c1w} = (T'_{wc1})^{-1}$$

and $T'_{wc2}$ is the pose of the image pickup device in the physical coordinate system at time 2.

By $T_{c1c2}$, the measurement data at time 2 are mapped into the coordinate system of the measurement data at time 1 to form simulated measurement data SData (the measurement data at time 1 could equally be mapped to time 2; the principle is the same and is not repeated here), and the measurement data obtained by the ranging sensing device at time 1 are taken as actual measurement data RData. It should be understood that mapping the measurement data of time 2 into time 1 theoretically simulates the position and angle of each landmark feature at time 2 relative to its position and angle at time 1.

Here, one of the simulated measurement data SData and the actual measurement data RData is adjusted by the scale factor (i.e., the physical scale) until the coincidence degree between SData and RData meets the coincidence condition, thereby determining the value of $s$ in

$$T'_{wc1} = \begin{bmatrix} R_{wc1} & s\, t_{wc1} \\ 0 & 1 \end{bmatrix} \quad\text{and}\quad T'_{wc2} = \begin{bmatrix} R_{wc2} & s\, t_{wc2} \\ 0 & 1 \end{bmatrix}.$$

By substituting $s$, the second positioning position data of the second position in the map data can be determined.
In other examples, in order to prevent external factors during movement, such as jolting or the robot being picked up, from interfering with the acquisition of data by the ranging sensing device and the image pickup device, the mobile robot obtains the corresponding movement state by analyzing both the measurement data and the image data acquired at different positions, so as to reflect the change in its physical position.
In an exemplary embodiment, the present application takes the errors of the image pickup device into account and jointly optimizes the errors of the image pickup device and the ranging sensing device, thereby improving positioning accuracy.
Here, joint optimization is performed on the coincidence degree between the measurement data acquired at different positions and the coincidence degree between the acquired image data, so as to obtain the second positioning position data of the second position in the map data under the coincidence condition of the joint optimization.
In one embodiment, the measurement data and the image data are adjusted separately to optimize the coincidence degree of the measurement data and that of the image data; when the error between the movement states obtained from the respective coincidence degrees meets the error condition, the corresponding movement state is used to determine the second positioning position data of the second position in the map data. The error condition is, for example, a preset error threshold, or the minimum error value found within a preset number of adjustments.
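The error condition between the two movement states might be checked as in this sketch (poses as x, y and heading; the weighting of the angular term is a hypothetical choice):

```python
import numpy as np

def movement_error(state_meas, state_img):
    """Error between the movement states obtained from measurement data
    and from image data: translation distance plus weighted angle gap."""
    (x1, y1, a1), (x2, y2, a2) = state_meas, state_img
    trans_err = np.hypot(x1 - x2, y1 - y2)
    ang_err = abs((a1 - a2 + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-pi, pi]
    return trans_err + 0.1 * ang_err

def accept_joint(state_meas, state_img, err_threshold=0.05):
    """Error condition: accept the states when their discrepancy is small."""
    return movement_error(state_meas, state_img) < err_threshold
```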
Taking the example in which the mobile robot synchronously acquires image data and measurement data at a first position and a second position, the measurement data acquired at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation. That is, the mapped measurement data determined at the current coincidence degree correspond to a simulated movement state of the mobile robot, and represent the measurement data that would be observed at the second position if the measurement data of the first position were propagated according to that movement state.
Meanwhile, by comparing the landmark image data corresponding to common landmark features in the images captured at the two positions, the moving state of the mobile robot in the set coordinate system (i.e., its moving amount and rotating amount in that coordinate system) is also calculated. The moving state obtained from the image data and the moving state obtained from the measurement data correspond to each other through the assembly parameters between the image pickup device and the ranging sensing device.
The two movement states are then used to determine the landmark position data of the landmark features in the physical coordinate system and/or the positioning position data of the first and second positions in the physical coordinate system. When coincident measurement data are used as the landmark measurement data of corresponding landmark features, the landmark position data of each landmark measurement datum in the physical coordinate system can also be determined.
In another embodiment, the coincidence degree between the measurement data of the two positions is adjusted, and the movement state obtained at the corresponding coincidence degree determines the coincidence degree of the landmark image data in the image data; when both the coincidence degree between the adjusted measurement data and the coincidence degree between the image data meet the coincidence condition, the corresponding movement state is used to determine the landmark position data to which the landmark features in the image data are mapped in the physical coordinate system and/or the positioning position data corresponding to the at least two positions in the physical coordinate system.
Still taking the example in which the mobile robot synchronously acquires image data and measurement data at the first and second positions, the measurement data acquired at the first position are mapped to the second position, and simulated measurement data corresponding to the second position are obtained through the coincidence calculation; that is, the mapped measurement data determined at the current coincidence degree correspond to a simulated movement state and represent the measurement data expected at the second position when the measurement data of the first position are propagated according to that movement state.
Since the moving state obtained from the image data and the moving state obtained from the measurement data correspond to each other through the assembly parameters between the image pickup device and the ranging sensing device, the coincidence degree of the matched landmark image data in the images acquired at the two positions is calculated under the moving state obtained from the measurement data.
The coincidence degree obtained from the measurement data and the coincidence degree obtained from the image data are then evaluated to determine whether the evaluation result meets the coincidence condition; if so, each position is mapped into the physical coordinate system with the corresponding movement state; if not, the coincidence degree of the measurement data is adjusted further until the coincidence condition is met.
Through the optimization in this embodiment, the errors of the image pickup device and the ranging sensing device are jointly optimized, so that the strengths of both sensors are exploited, error accumulation is avoided, and positioning precision is improved.
In one embodiment, the measurement data from the various ranging sensing devices belonging to the same position may first be fused, and the fused measurement data corresponding to each position are then optimized, so as to obtain the second positioning position data of the second position in the map data when the fused measurement data meet the coincidence condition.
Here, each ranging sensing device synchronously acquires measurement data at each position by its own measurement method; after the measurement data from all the devices are obtained, those belonging to the same position are fused, and the optimization is then carried out on the fused data. For example, suppose the ranging sensing devices comprise a binocular camera device and a laser sensor. At the first position, the binocular camera device determines the distance of the mobile robot to each landmark feature by triangulation from the images captured by its two cameras, yielding the first measurement data of the binocular camera device; at the same time, the laser sensor determines its distance to the landmark features from the difference between the time it emits a laser beam and the time it receives the reflection, yielding the first measurement data of the laser sensor. The second measurement data of both devices are acquired at the second position in the same way. After the first measurement data have been fused and the second measurement data have been fused, coincidence degree optimization is performed on the fused first and second measurement data, and each position datum in the physical coordinate system is obtained in a manner similar to the ways described in the preceding examples, which are not repeated here.
The fusion processing includes performing landmark analysis on the measurement data provided by the different ranging sensing devices to obtain the landmark measurement data corresponding to common landmark features. For example, the measurement data acquired by each ranging sensing device contain a distance to the same landmark feature, and the distances detected by the different devices are combined by averaging, taking the median, or similar processing. The fusion processing further includes combining, by interpolation, averaging and the like, the measurement data provided by the different ranging sensing devices according to their respective measured direction angles. For example, the value measured by the binocular camera device at a certain voxel position is averaged with the value measured at the corresponding voxel position by a single-point laser ranging device. For another example, if the distance value D1 measured by the single-point laser ranging device at a certain direction angle falls between the direction angles of the values D21 and D22 measured by the binocular camera device at two voxel positions, D1 is averaged with D21 and D22 respectively to obtain new values D21' and D22' for the two voxel positions, which replace D21 and D22. For yet another example, in the same situation, a column C21 may be inserted between the columns C1 and C2 that contain D21 and D22 in the two-dimensional distance lattice data measured by the binocular camera device, and the values at the other voxel positions of column C21 may be interpolated from the averages of the corresponding values in columns C1 and C2.
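The direction-angle averaging described above (the D1/D21/D22 example) can be sketched as follows; the device names, angles and pairing tolerance are hypothetical:

```python
import numpy as np

def fuse_ranges(angles_a, dists_a, angles_b, dists_b, max_gap=np.deg2rad(2)):
    """Average measurements of two devices that point in (nearly) the same
    direction; keep the rest unchanged (a simple fusion sketch)."""
    fused = dists_a.copy()
    for i, ang in enumerate(angles_a):
        j = np.argmin(np.abs(angles_b - ang))
        if abs(angles_b[j] - ang) < max_gap:
            fused[i] = 0.5 * (dists_a[i] + dists_b[j])   # mean of both devices
    return fused

# the laser reading D1 at 30 deg is averaged into both nearby binocular readings
angles_bino = np.deg2rad(np.array([29.0, 31.0]))
dists_bino = np.array([2.2, 2.4])             # D21, D22
angles_laser = np.deg2rad(np.array([30.0]))
dists_laser = np.array([2.0])                 # D1
print(fuse_ranges(angles_bino, dists_bino, angles_laser, dists_laser))  # [2.1 2.2]
```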
In another embodiment, the measurement data acquired at different positions by the different ranging sensing devices may each be optimized for coincidence, and the second positioning position data of the second position in the map data are obtained when the coincidence degrees between the measurement data meet the coincidence condition.
Each ranging sensing device synchronously acquires measurement data at each position by its own measurement method. After the measurement data from all the devices are obtained, the movement state of the mobile robot is analyzed from each device's data separately, and the coincidence degrees of the measurement data acquired by each device at the different positions are optimized to obtain the movement states that meet the coincidence condition. These movement states are then combined by averaging, taking the median or similar processing to obtain the movement state used to map the landmark feature data in the image data into the physical coordinate system, and each position datum in the physical coordinate system is obtained in the manner of any of the above examples, which is not repeated here.
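Averaging the movement states from several ranging sensing devices could look like this sketch (planar poses; headings are averaged circularly to avoid wrap-around artefacts):

```python
import numpy as np

def average_states(states):
    """Fuse movement states (x, y, heading) from several ranging devices:
    translations by the arithmetic mean, headings by the circular mean."""
    xy = np.mean([s[:2] for s in states], axis=0)
    heading = np.arctan2(np.mean([np.sin(s[2]) for s in states]),
                         np.mean([np.cos(s[2]) for s in states]))
    return float(xy[0]), float(xy[1]), float(heading)

print(average_states([(1.0, 0.5, 0.10), (1.1, 0.4, 0.14)]))  # ≈ (1.05, 0.45, 0.12)
```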
To address the difficulty of positioning a mobile robot in a large physical space whose decoration or similar characteristics make many locations look alike, a wider environmental perception is achieved by synchronously acquiring measurement data that reflect the higher environmental complexity around the sides of the mobile robot and image data that capture the lower environmental complexity above it; positioning with both the measurement data and the image data overcomes the low positioning precision, caused by the sparseness of landmarks useful for accurate positioning, that results from using either kind of data alone.
The application also provides a server.
The server includes, but is not limited to, a single server, a server cluster, a distributed server cluster, a cloud server, and the like. Depending on the actual design, the server may be a cloud server provided by a cloud provider. The cloud server includes a Public Cloud server and a Private Cloud server, where the public or private cloud server provides Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and the like. Examples of cloud computing platforms include the Alibaba Cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and the like.
The server can receive the image data and the measurement data from the mobile robot, so that a map of the current physical space is constructed or the position of the mobile robot in a physical coordinate system is obtained based on the acquired image data and the measurement data.
The server is in communication connection with the mobile robot located in a physical space. Wherein the physical space represents a physical space provided for the mobile robot to perform the navigation movement, and the physical space includes but is not limited to any one of the following: indoor/outdoor spaces, road spaces, flight spaces, etc. For example, in some embodiments, the mobile robot is a drone, and the physical space corresponds to a flight space; in other embodiments, the mobile robot is a vehicle with an automatic driving function, and the physical space corresponds to a tunnel road where positioning cannot be obtained or a road space where network signals are weak and navigation is needed; in still other embodiments, the mobile robot is a sweeping robot, and the physical space corresponds to an indoor or outdoor space. The mobile robot is provided with a sensing device for providing navigation data for autonomous movement, such as a camera device and a movement sensing device.
Referring to fig. 6, which is a schematic diagram of a server in one embodiment of the present application, the server 40 includes an interface device 41, a storage device 42, and a processing device 43. The storage device 42 includes a nonvolatile memory, a storage server, and the like. The nonvolatile memory is, for example, a solid state disk or a USB flash drive. The storage server is used to store the acquired data. The interface device 41 includes a network interface, a data line interface, and the like, where the network interfaces include but are not limited to: Ethernet network interface devices, mobile network (3G, 4G, 5G, etc.) interface devices, near field communication (WiFi, Bluetooth, etc.) interface devices, and the like; the data line interfaces include but are not limited to: USB interfaces, RS232, RS485, and the like. The interface device exchanges data with the control system, third-party systems, the Internet, and the like. The processing device 43 is connected to the interface device 41 and the storage device 42 and includes, for example, a CPU or a chip integrating a CPU, a programmable logic device (FPGA), or a multi-core processor. The processing device 43 further includes memories, registers, and the like for temporarily storing data.
The interface device 41 is used for data communication with a mobile robot located in a physical space.
The storage device 42 is used to store at least one program. Here, the storage device 42 includes, for example, a hard disk provided at a server and stores the at least one program.
The processing means 43 is adapted to invoke the at least one program to coordinate the interface means and the storage means to perform the method of constructing a map as mentioned in the examples of the first aspect or to perform the positioning method as mentioned in the examples of the second aspect. The method for constructing the map is shown in fig. 1 and the corresponding description, and the positioning method is shown in fig. 5 and the corresponding description, which are not repeated here.
In addition, the map data obtained based on the map construction method can be sent to the mobile robot through an interface device, and can also be stored in a server so as to be used in positioning, or the map data can be provided to other equipment through the server. When the server needs to use the map data for positioning, the storage device can provide the map data to the processing device; when the other device needs to call the map data, the storage means may provide the map data to the interface means so that the interface means transmits the map data to the other device.
Please refer to fig. 7, which is a block diagram of a mobile robot according to an embodiment. As shown, the mobile robot 50 includes a storage device 54, a moving device 53, an image capturing device 51, a distance measuring sensing device 55, and a processing device 52.
The image shooting device is used for collecting image data. Wherein the image capturing apparatus is an apparatus for providing a two-dimensional image at a preset pixel resolution. In some embodiments, the image capture device includes, but is not limited to: cameras, video cameras, camera modules integrated with optical systems or CCD chips, and camera modules integrated with optical systems and CMOS chips, and the like. According to the requirement of actual imaging, the lens that can be adopted by the camera or video camera includes but is not limited to: standard lenses, telephoto lenses, fisheye lenses, wide-angle lenses, and the like.
In some embodiments, the image capturing device may be disposed on the mobile robot with its main optical axis oriented between the horizontal plane and the vertical (ceiling) direction, for example within the range of 0° to ±90° from the vertical ceiling direction. Taking a commercial cleaning robot as an example, the image capturing device is mounted on the upper half of the robot with its main optical axis inclined upward at a preset angle so as to acquire image data within the corresponding viewing angle range.
The image taken by the image taking device can be one or more of a single image, a continuous image sequence, a discontinuous image sequence, a video and the like. If the image capturing device captures an image sequence or a video, one or more image frames can be extracted from the sequence or the video to be used as image data for subsequent processing.
The distance measurement sensing device is used for acquiring measurement data, and can measure the distance between the landmark features in the physical space where the mobile robot is located and the mobile robot.
The landmark feature refers to a feature that is easily distinguished from other objects in a physical space where the mobile robot is located, and for example, the landmark feature may be a table corner, an outline feature of a ceiling lamp on a ceiling, a connecting line between a wall and a floor, and the like. The ranging sensing device includes, for example, a ranging sensing device that provides one-dimensional measurement data such as a laser sensor or an ultrasonic sensor, or a ranging sensing device that provides two-dimensional or three-dimensional measurement data such as a ToF sensor, a multi-line laser radar, a millimeter wave radar, or a binocular imaging device, or includes both of the above two ranging sensing devices. For example, the laser sensor may determine its range relative to a landmark feature based on the difference between the time it emits the laser beam and the time it receives the laser beam; for another example, the ultrasonic sensor may determine the distance of the mobile robot relative to the landmark feature according to the vibration signal of the emitted sound wave reflected back by the landmark feature; for another example, the binocular camera device can determine the distance of the mobile robot relative to the landmark features by utilizing the principle of triangulation according to images shot by the two cameras; for example, an infrared light projector of a ToF (Time of flight) sensor projects infrared light outwards, the infrared light is reflected after encountering a measured object and is received by a receiving module, and depth information of the irradiated object is calculated by recording the Time from the emitting to the receiving of the infrared light.
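The ranging principles listed above reduce to simple formulas under the usual idealized assumptions; the sketch below (hypothetical values) shows the ToF round-trip relation and binocular triangulation:

```python
C = 299_792_458.0          # speed of light, m/s

def tof_distance(round_trip_seconds):
    """ToF: light travels to the object and back, so halve the path."""
    return C * round_trip_seconds / 2

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Binocular triangulation: depth = f * b / d (pinhole camera model)."""
    return focal_px * baseline_m / disparity_px

print(tof_distance(20e-9))          # ~3.0 m for a 20 ns round trip
print(stereo_depth(700, 0.06, 14))  # 3.0 m
```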
The ranging sensing device generates corresponding measurement data by detecting entities in the environment surrounding the mobile robot. To prevent abnormal movement such as collision with or entanglement in surrounding entities, or falling, the ranging sensing device is mounted on the body side of the mobile robot, where it detects environment data at the edge of, or in areas not covered by, the field of view of the image data. Taking a commercial cleaning robot as an example, the ranging sensing device is arranged on the side of the robot at a height of 10-80 cm above the ground.
In some embodiments, the range finding sensing device is mounted on one side of the mobile robot in the direction of travel so that the mobile robot can learn about landmark features in its direction of travel to avoid or take other behavioral controls. Of course, in some cases, the distance measuring and sensing device may be installed at other positions of the mobile robot as long as the distance to each landmark feature in the surrounding physical space can be obtained.
In some embodiments, the range finding sensing device mounted on the mobile robot may be one or more in kind. For example, only the laser sensor may be mounted on the mobile robot, the laser sensor and the binocular imaging device may be mounted at the same time, or the laser sensor, the binocular imaging device, the ToF sensor, and the like may be mounted at the same time. And the installation quantity of the same sensor can be configured according to requirements so as to acquire measurement data in different directions.
The mobile device is used for performing a movement operation based on a navigation route. The mobile device comprises a wheel set and other components which are arranged at the bottom of the mobile robot and drive the mobile robot to move.
The storage device is used for storing at least one program. The storage device includes a nonvolatile memory, a storage server, and the like. The nonvolatile memory is, for example, a solid state disk or a usb disk.
The processing device is connected to the storage device, the image capturing device, the range sensing device, and the mobile device, and is configured to invoke and execute the at least one program to coordinate the storage device, the image capturing device, the range sensing device, and the mobile device to execute the method for constructing a map as mentioned in the foregoing example of the first aspect or execute the positioning method as mentioned in the foregoing example of the second aspect. The method for constructing the map is shown in fig. 1 and the corresponding description, and the positioning method is shown in fig. 5 and the corresponding description, which are not repeated here.
Furthermore, map data obtained based on the map construction method may be stored in a storage device so as to be used in positioning. The storage device may provide the map data to the processing device when it is desired to use the map data for positioning.
In addition, the mobile robot may further include an interface device so that map data may be provided to other devices or acquired image data and measurement data may be transmitted to other devices through the interface device.
Wherein, the interface device comprises a network interface, a data line interface and the like. Wherein the network interfaces include, but are not limited to: network interface devices based on ethernet, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on near field communication (WiFi, bluetooth, etc.), and the like. The data line interface includes, but is not limited to: USB interface, RS232, RS485, etc. The interface device is connected with data such as the control system, a third party system, the Internet and the like.
The mobile robot of the present application is equipped with both an image capturing device and a ranging sensing device, so that the advantages and shortcomings of each sensor are combined and offset; the multi-sensor fusion approach improves the accuracy and reliability of map construction and positioning.
The present application further provides a computer-readable and writable storage medium, in which a computer program is stored, where the computer program is executed on a device in which the storage medium is located, to implement at least one embodiment described above for the method for constructing a map, such as the embodiment described in any one of the embodiments corresponding to fig. 1; or the computer program, when executed, implements at least one of the embodiments described above for the positioning method for a mobile robot, such as the embodiment described in any of the embodiments corresponding to fig. 5.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application.
In the embodiments provided herein, the computer-readable and writable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable-writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be non-transitory, tangible storage media. Disk and disc, as used in this application, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described in the computer program of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may be located on a tangible, non-transitory computer-readable and/or writable storage medium. Tangible, non-transitory computer readable and writable storage media may be any available media that can be accessed by a computer.
The flowcharts and block diagrams in the figures described above of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (21)

1. A method for a mobile robot to construct a map, comprising:
controlling the image pickup device and the ranging sensing device to respectively and synchronously acquire image data and measurement data at different positions;
analyzing the moving state of the mobile robot in a physical space by using the measurement data acquired at different positions to obtain landmark position data of which the landmark features in each image data are mapped to a physical coordinate system and/or obtain corresponding positioning position data of the different positions in the physical coordinate system;
the resulting respective landmark position data and/or positioning position data are recorded in map data constructed based on the physical coordinate system.
2. The method for mapping by a mobile robot according to claim 1, further comprising the step of determining the environmental boundary in the map data from the measurement data measured by the range finding sensing device.
3. The method for constructing a map by a mobile robot according to claim 1, wherein when the range finding sensing device includes a binocular camera, further comprising the step of storing images photographed by the binocular camera in the map data corresponding to the respective positioning position data of the mobile robot in the physical coordinate system.
4. The method for constructing the map by the mobile robot according to claim 1, wherein the analyzing the moving state of the mobile robot in the physical space by using the measurement data acquired at different positions to obtain the landmark position data of the landmark feature mapped to the physical coordinate system in each image data and/or to obtain the corresponding positioning position data of the different positions in the physical coordinate system comprises:
and optimizing the coincidence degree between the measurement data acquired at different positions to obtain landmark position data in which the landmark features in the image data conforming to the coincidence condition are mapped to the physical coordinate system and/or obtain corresponding positioning position data of the different positions conforming to the coincidence condition in the physical coordinate system.
5. The method for constructing the map by the mobile robot according to claim 1, wherein the analyzing the moving state of the mobile robot in the physical space by using the measurement data acquired at different positions to obtain the landmark position data of the landmark feature mapped to the physical coordinate system in each image data and/or to obtain the corresponding positioning position data of the different positions in the physical coordinate system comprises:
and performing joint optimization processing on the contact ratio between the acquired measurement data at different positions and the contact ratio between the acquired image data to obtain landmark position data in which the landmark features in the image data are mapped to a physical coordinate system under the condition of the contact ratio conforming to the joint optimization, and/or positioning position data corresponding to the different positions in the physical coordinate system.
6. The method for constructing a map by a mobile robot according to claim 1, 4 or 5, wherein the controlling the image pickup device and the ranging sensing device to synchronously acquire the image data and the measurement data at different positions respectively comprises: controlling the image pickup device and multiple kinds of ranging sensing devices to synchronously acquire image data and measurement data at different positions respectively.
7. The method for constructing a map by a mobile robot according to claim 1, wherein the analyzing the moving state of the mobile robot in the physical space by using the measurement data acquired at different positions comprises: the landmark features in the measurement data acquired at different positions are extracted, and the movement state of the mobile robot in the physical space is analyzed using the extracted landmark features.
8. The method for constructing the map by the mobile robot according to claim 1, wherein the controlling the image pickup device and the ranging sensing device to synchronously acquire the image data and the measurement data at different positions respectively comprises: and controlling the image pickup device and the ranging sensing device to synchronously acquire image data and measurement data at different positions respectively, wherein synchronous signals are acquired from the outside or the inside of the image pickup device and the ranging sensing device in the mobile robot.
9. The method for constructing the map by the mobile robot according to claim 8, wherein the synchronization signal is obtained from the outside of the image capturing device and the ranging sensing device in the mobile robot, and comprises: the synchronization signal is obtained from inertial navigation sensors outside the image pickup device and the ranging sensing device in the mobile robot.
10. The method for constructing a map by a mobile robot according to claim 1, wherein the view angle range of the image capturing device and the measurement range of the range finding sensing device are partially overlapped or not overlapped.
11. A method of positioning a mobile robot, comprising:
controlling an image pickup device and a ranging sensing device to respectively and synchronously acquire image data and measurement data at a first position and a second position different from the first position of the mobile robot; wherein the first location is mapped to first location data in the map data;
and analyzing the moving state of the mobile robot in the physical space by using the measurement data respectively measured at the first position and the second position to determine second positioning position data of the second position in the map data when the mobile robot is at the second position.
12. The method according to claim 11, wherein the analyzing a moving state of the mobile robot in the physical space using the measurement data measured at the first location and the second location, respectively, to determine second positioning location data of the second location in the map data when the mobile robot is at the second location comprises:
and optimizing the coincidence degree between the measurement data acquired at different positions to obtain corresponding second positioning position data of the second position in the map data under the coincidence condition of the measurement data.
13. The method according to claim 11, wherein the analyzing a moving state of the mobile robot in the physical space using the measurement data measured at the first location and the second location, respectively, to determine second positioning location data of the second location in the map data when the mobile robot is at the second location comprises:
and carrying out joint optimization processing on the contact ratio between the measurement data acquired at different positions and the contact ratio between the acquired image data to obtain second positioning position data of the second position in the map data under the condition of joint optimization contact ratio.
14. The method according to any one of claims 11 to 13, wherein the step of controlling the image capturing device and the distance measuring sensing device to synchronously acquire the image data and the measurement data at a first position and a second position different from the first position of the mobile robot comprises: controlling the image capturing device and multiple kinds of ranging sensing devices to synchronously acquire image data and measurement data at a first position and at a second position different from the first position of the mobile robot, respectively.
15. The method according to claim 11, wherein the analyzing the moving state of the mobile robot in the physical space using the measurement data measured at the first and second positions, respectively, comprises: landmark features in the measurement data acquired at the first and second positions are extracted, and the movement state of the mobile robot in the physical space is analyzed using the extracted landmark features.
16. The method as claimed in claim 11, wherein the controlling the image capturing device and the ranging sensing device to synchronously acquire the image data and the measurement data at a first position and a second position different from the first position of the mobile robot, respectively, comprises: and controlling the image pickup device and the ranging sensing device to respectively and synchronously acquire image data and measurement data at a first position and a second position different from the first position of the mobile robot, wherein synchronous signals are acquired from the outside or the inside of the image pickup device and the ranging sensing device in the mobile robot.
17. The method according to claim 16, wherein the synchronization signal is obtained from outside of the image capturing device and the range finding sensing device in the mobile robot, and comprises: the synchronization signal is obtained from inertial navigation sensors outside the image pickup device and the ranging sensing device in the mobile robot.
18. The method according to claim 11, wherein the range of view of the image capturing device and the range of measurement of the range finding sensing device are partially overlapped or not overlapped.
19. A server, comprising:
an interface device for performing data communication with the mobile robot;
a storage device storing at least one program;
a processing device, connected to the storage device and the interface device, and configured to execute the at least one program so as to coordinate the storage device and the interface device in executing the method for constructing a map by a mobile robot according to any one of claims 1 to 10, or the method for positioning a mobile robot according to any one of claims 11 to 18.
20. A mobile robot, comprising:
an image capturing device for acquiring image data;
a ranging sensing device for acquiring measurement data;
a moving device for performing a moving operation;
a storage device storing at least one program;
a processing device, connected to the moving device, the image capturing device, the ranging sensing device and the storage device, and configured to execute the at least one program to perform the method for constructing a map by a mobile robot according to any one of claims 1 to 10, or the method for positioning a mobile robot according to any one of claims 11 to 18.
21. A computer-readable storage medium storing at least one computer program which, when executed by a processor, controls the device in which the storage medium is located to perform the method for constructing a map by a mobile robot according to any one of claims 1 to 10, or the method for positioning a mobile robot according to any one of claims 11 to 18.
CN202080001821.0A 2020-08-07 2020-08-07 Mobile robot positioning method, map building method and mobile robot Pending CN112041634A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/107863 WO2022027611A1 (en) 2020-08-07 2020-08-07 Positioning method and map construction method for mobile robot, and mobile robot

Publications (1)

Publication Number Publication Date
CN112041634A (en) 2020-12-04

Family

ID=73572857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080001821.0A Pending CN112041634A (en) 2020-08-07 2020-08-07 Mobile robot positioning method, map building method and mobile robot

Country Status (2)

Country Link
CN (1) CN112041634A (en)
WO (1) WO2022027611A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314700B (en) * 2022-10-13 2023-01-24 Weifang Goertek Electronics Co., Ltd. Position detection method for control device, positioning system, and readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130068249A (en) * 2011-12-15 2013-06-26 Electronics and Telecommunications Research Institute Apparatus and method for evaluating the strength of ties
CN107167148A (en) * 2017-05-24 2017-09-15 Anke Robot Co., Ltd. Simultaneous localization and mapping method and apparatus
JP7112066B2 (en) * 2018-04-04 2022-08-03 Kyushu Institute of Technology Autonomous mobile robot and its control method
CN109634279B (en) * 2018-12-17 2022-08-12 Qu Weixin Object positioning method based on laser radar and monocular vision
CN110211228A (en) * 2019-04-30 2019-09-06 Beijing Yunji Technology Co., Ltd. Data processing method and device for map construction
CN110706248B (en) * 2019-08-20 2024-03-12 Guangdong University of Technology Visual perception mapping method based on SLAM, and mobile robot
CN111089585A (en) * 2019-12-30 2020-05-01 Harbin University of Science and Technology Mapping and positioning method based on sensor information fusion
CN111427061A (en) * 2020-06-15 2020-07-17 Beijing Yunji Technology Co., Ltd. Robot mapping method and device, robot, and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538781A (en) * 2011-12-14 2012-07-04 Zhejiang University Mobile robot motion attitude estimation method based on fusion of machine vision and inertial navigation
WO2015024407A1 (en) * 2013-08-19 2015-02-26 State Grid Corporation of China Binocular vision navigation system and method based on power robot
KR20190131402A (en) * 2018-05-16 2019-11-26 Yujin Robot Co., Ltd. Moving Object and Hybrid Sensor with Camera and Lidar
CN108981672A (en) * 2018-07-19 2018-12-11 South China Normal University Real-time hatch door positioning method based on a monocular robot combined with a ranging sensor
CN109431381A (en) * 2018-10-29 2019-03-08 Beijing Rockrobo Technology Co., Ltd. Robot positioning method and device, electronic equipment, and storage medium
CN109643127A (en) * 2018-11-19 2019-04-16 Shankou (Shenzhen) Intelligent Technology Co., Ltd. Map construction, positioning, navigation and control methods and systems, and mobile robot
CN109752725A (en) * 2019-01-14 2019-05-14 Trina Solar Co., Ltd. Low-speed commercial robot, positioning and navigation method, and positioning and navigation system
CN111427360A (en) * 2020-04-20 2020-07-17 Zhuhai Amicro Semiconductor Co., Ltd. Map construction method based on landmark positioning, robot and robot navigation system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113391318A (en) * 2021-06-10 2021-09-14 Shanghai University Mobile robot positioning method and system
CN113391318B (en) * 2021-06-10 2022-05-17 Shanghai University Mobile robot positioning method and system
CN113960999A (en) * 2021-07-30 2022-01-21 Zhuhai Amicro Semiconductor Co., Ltd. Mobile robot repositioning method, system and chip

Also Published As

Publication number Publication date
WO2022027611A1 (en) 2022-02-10

Similar Documents

Publication Publication Date Title
US11276191B2 (en) Estimating dimensions for an enclosed space using a multi-directional camera
KR102403504B1 (en) Mobile Robot And Method Thereof
US10006772B2 (en) Map production method, mobile robot, and map production system
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
AU2011352997B2 (en) Mobile human interface robot
CN102622762B (en) Real-time camera tracking using depth maps
CN112041634A (en) Mobile robot positioning method, map building method and mobile robot
Fruh et al. Fast 3D model generation in urban environments
Acharya et al. BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model
JP6676814B2 (en) Object detection based on lidar intensity
CN111220148A (en) Mobile robot positioning method, system and device and mobile robot
RU2572637C2 (en) Parallel or serial reconstructions in online and offline modes for 3d measurements of rooms
WO2021146862A1 (en) Indoor positioning method for mobile device, mobile device and control system
CN110597265A (en) Recharging method and device for sweeping robot
KR20230137395A (en) 3D map construction method and device
Pirker et al. GPSlam: Marrying Sparse Geometric and Dense Probabilistic Visual Mapping.
US20200064481A1 (en) Autonomous mobile device, control method and storage medium
US20230401748A1 (en) Apparatus and methods to calibrate a stereo camera pair
US11561553B1 (en) System and method of providing a multi-modal localization for an object
Hu et al. A small and lightweight autonomous laser mapping system without GPS
Runceanu et al. Indoor point cloud segmentation for automatic object interpretation
KR102481615B1 (en) Method and system for collecting data
Rydell et al. Chameleon v2: Improved imaging-inertial indoor navigation
AU2015202200A1 (en) Mobile human interface robot
Li et al. A Mirror Detection Method in the Indoor Environment Using a Laser Sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination