WO2021146862A1 - Indoor positioning method for mobile device, mobile device, and control system - Google Patents

Indoor positioning method for mobile device, mobile device, and control system

Info

Publication number
WO2021146862A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
mobile device
depth
mobile
detection device
Prior art date
Application number
PCT/CN2020/073307
Other languages
English (en)
Chinese (zh)
Inventor
崔彧玮
李巍
张哲霄
Original Assignee
珊口(深圳)智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 珊口(深圳)智能科技有限公司
Priority to CN202080003090.3A (CN112204345A)
Priority to PCT/CN2020/073307
Publication of WO2021146862A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures
    • G01C 11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C 11/08 Interpretation of pictures by comparison of two or more pictures of the same area, the pictures not being supported in the same relative position as when they were taken
    • G01C 21/10 Navigation by using measurements of speed or acceleration
    • G01C 21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Inertial navigation combined with non-inertial navigation instruments
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by matching or filtering
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes

Definitions

  • This application relates to the field of positioning technology, and in particular to an indoor positioning method of a mobile device, a mobile device, and a control system.
  • Mobile robots are mechanical devices that perform tasks automatically. They can accept human commands, run pre-programmed routines, or act according to principles formulated with artificial intelligence technology. Such mobile robots can be used indoors or outdoors, in industry or at home; they can replace security patrols, clean floors in place of people, and also serve as family companions or office assistants.
  • The mobile robots used in different fields move in different ways; for example, a mobile robot may move on wheels, by walking, or on tracks. As the locomotion technology of mobile robots is updated and iterated, the movement information provided by sensors is used for Simultaneous Localization and Mapping (SLAM) in order to give mobile robots more accurate navigation capabilities, so that they can move autonomously more effectively.
  • However, the distance that the rollers can travel on floors of different materials is not the same, and when the mobile robot gets stuck or the rollers slip, errors are introduced into the movement information, so the positioning of the cleaning robot may deviate greatly.
  • In view of this, the purpose of this application is to provide an indoor positioning method for a mobile device, a mobile device, and a control system to solve the problem of inaccurate positioning of mobile devices in the prior art.
  • The first aspect of the present application discloses an indoor positioning method for a mobile device.
  • The mobile device includes a depth detection device and a camera device.
  • The camera device captures an image of the indoor environment.
  • The depth detection device provides depth information corresponding to at least one pixel unit of the image in the indoor space, where the depth information is used to determine a spatial scale parameter.
  • The indoor positioning method includes: acquiring a first image and a second image respectively captured by the camera device at different positions, wherein a pixel unit of the second image contains a second image feature that matches a first image feature in the first image, and the first image feature and the second image feature form an image feature pair; and determining, according to the current spatial scale parameter and the pixel position offset of the image feature pair between the first image and the second image, position change information of the mobile device between the different positions.
  • In certain embodiments, the current spatial scale parameter is a proportional relationship determined from the depth information, detected by the depth detection device, of the image feature in the corresponding pixel unit and the depth information of the corresponding image feature in the map coordinate system of the camera device.
  • In certain embodiments, the current spatial scale parameter is set according to the depth information corresponding to the pixel unit in the first image or the second image; or the current spatial scale parameter is obtained from historical depth information of the depth detection device or from historical data of the spatial scale parameter.
  • the relative positional relationship between the camera device and the depth detection device is preset.
  • The earlier of the first image and the second image is taken by the camera device during the movement of the mobile device; or the earlier image comes from a map database of the indoor space.
  • The map database includes landmark information, where the landmark information includes the earlier image and the matched image features located in it.
  • the positioning method further includes: determining the position information of the mobile device in the indoor space according to the position change information and a preset map database of the indoor space .
  • The positioning method further includes the following step: updating the map database of the indoor space based on the determined position change information and at least one of the following: the first image, the second image, and the depth information.
  • the positioning method further includes the step of updating the position relationship between the mobile device and a preset navigation route based on the determined position change information.
  • The positioning method further includes: during the continuous movement of the mobile device, identifying, according to the depth information sequence provided by the depth detection device at different positions, the physical object with suspended space to which the determined position change information corresponds.
  • the physical object includes at least one of the following: doors and furniture.
  • The mobile device further includes an inertial navigation detection device, and the indoor positioning method further includes the step of using the determined position change information to correct the inertial navigation data provided by the inertial navigation detection device, as sketched below.
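  • As an illustrative sketch of the correction step described above (the application does not prescribe a specific fusion scheme; the pose convention, blending weight, and function names below are assumptions for illustration):

```python
# Illustrative sketch only: the application does not prescribe a fusion scheme.
# Assumed convention: poses are (x, y, heading) tuples in the world coordinate system.
import math

def correct_inertial_pose(imu_pose, last_pose, visual_delta, weight=0.8):
    """Blend an IMU dead-reckoned pose with the vision-derived position change.

    imu_pose     : pose integrated from the inertial navigation data (may drift or slip).
    last_pose    : last confirmed pose of the mobile device.
    visual_delta : (dx, dy, dtheta) position change determined by the indoor
                   positioning method from the matched image feature pairs.
    weight       : trust placed in the visual estimate (assumed value).
    """
    vx = last_pose[0] + visual_delta[0]
    vy = last_pose[1] + visual_delta[1]
    vheading = last_pose[2] + visual_delta[2]
    # Weighted blend: when the rollers slip, the visual estimate dominates.
    x = weight * vx + (1.0 - weight) * imu_pose[0]
    y = weight * vy + (1.0 - weight) * imu_pose[1]
    heading = math.atan2(
        weight * math.sin(vheading) + (1.0 - weight) * math.sin(imu_pose[2]),
        weight * math.cos(vheading) + (1.0 - weight) * math.cos(imu_pose[2]),
    )
    return (x, y, heading)
```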
  • A second aspect of the present application provides a mobile device, including: a camera device for capturing images; a depth detection device for detecting depth information; and a processing device connected to the depth detection device and the camera device for executing the indoor positioning method described in any one of the first aspect.
  • the distance between the camera device and the depth detection device is not greater than 3 cm.
  • The axis of the depth detection device is parallel to the optical axis of the camera device, or the angle between the axis and the optical axis is in the range from 0 degrees to the maximum value of the field of view of the camera device, or the angle between the axis and the optical axis is in the range from 0 degrees to the minimum value of the field of view of the camera device.
  • the depth detection device includes a laser distance measuring device, or a single-point ToF sensor, or an ultrasonic sensing device.
  • the mobile device includes any one of the following: a mobile robot and an intelligent terminal.
  • the mobile robot is a cleaning robot; correspondingly, the cleaning robot further includes a cleaning device for performing cleaning operations.
  • A third aspect of the present application provides a control system for a mobile device, including: an interface device for connecting a depth detection device and a camera device in the mobile device, wherein there is a preset angle relationship between the axis of the depth detection device and the optical axis of the camera device, so that the depth information measured by the depth detection device corresponds to a pixel unit in the image captured by the camera device; a storage device for storing at least one program; and a processing device, connected to the interface device and the storage device, for calling and executing the at least one program to coordinate the storage device, the depth detection device, and the camera device to execute and implement the indoor positioning method described in any one of the first aspect.
  • A fourth aspect of the present application provides a control system for a mobile device, including: a camera device for capturing an image of the indoor environment; a depth detection device for providing depth information corresponding to at least one pixel unit of the image in the indoor space, wherein there is a preset angle relationship between the axis of the depth detection device and the optical axis of the camera device, so that the depth information measured by the depth detection device corresponds to the pixel unit in the image captured by the camera device; an interface device, connected to the depth detection device and the camera device; a storage device for storing at least one program; and a processing device, connected to the interface device and the storage device, for calling and executing the at least one program to coordinate the storage device, the depth detection device, and the camera device to execute and implement the indoor positioning method described in any one of the first aspect.
  • a fifth aspect of the present application provides a computer-readable storage medium, characterized in that it stores at least one program, and the at least one program executes and implements the indoor positioning method according to any one of the first aspects when called.
  • In summary, the indoor positioning method for a mobile device, the mobile device, and the control system of this application capture at least two images through the camera device, match the image features in the two images, and perform indoor positioning of the mobile device according to the matched image features, which improves the accuracy of positioning.
  • FIG. 1A shows a schematic diagram of determining depth information of the sweeping robot in a scene of this application.
  • Figure 1B shows a schematic diagram of the service robot determining depth information in a scenario of this application.
  • FIG. 1C shows a schematic diagram of the intelligent terminal determining depth information in a scenario of this application.
  • FIG. 2 shows a schematic flowchart of an embodiment of an indoor positioning method for a mobile device provided by this application.
  • Figure 3 shows a schematic diagram of image feature matching for this application.
  • FIG. 4 is a schematic diagram showing the image characteristics of the image captured in FIG. 1A in the image pixel coordinate system.
  • FIG. 5 shows a schematic diagram of a scene in an embodiment of an indoor positioning method for a mobile device provided by this application.
  • FIG. 6 shows a schematic structural diagram of an embodiment of the mobile device of this application.
  • FIG. 7 shows a schematic structural diagram of a control system of a mobile device of this application in an embodiment.
  • FIG. 8 shows a schematic structural diagram of a control system of another mobile device of this application in an embodiment.
  • Figure 9 shows the mapping relationship between the map coordinate system and the pixel point p' in the image Image1 captured by the camera device.
  • Although the terms first, second, etc. are used herein to describe various elements or parameters in some examples, these elements or parameters should not be limited by these terms. These terms are only used to distinguish one element or parameter from another element or parameter.
  • For example, the first image may be referred to as the second image, and similarly, the second image may be referred to as the first image, without departing from the scope of the various described embodiments.
  • The first image and the second image both describe an image, but unless the context clearly indicates otherwise, they are not the same image. The same applies to the first image feature and the second image feature.
  • Head-mounted devices can use Augmented Reality (AR) technology to provide an immersive field-of-view experience in AR games, AR decoration, somatosensory games, or holographic image interaction through environmental images and depth information provided by depth cameras. How to obtain the position change information of the operator's limbs or of the operated equipment is crucial to whether a more realistic immersive field-of-view experience can be provided to the operator.
  • In some practices, the mobile robot uses an inertial measurement unit (IMU) for positioning and motion control.
  • The inertial measurement unit includes a gyroscope, an accelerometer, and the like, and measures the angular velocity and acceleration of the mobile robot to obtain its pose.
  • However, positioning in this way is subject to error. For example, when the walking wheels of the mobile robot slip or travel on floors of different materials, the data provided by the inertial measurement unit differ from the actual movement.
  • the present application provides an indoor positioning method for a mobile device, which uses the image captured by the camera device and the depth information provided by the depth detection device to accurately obtain the position of the mobile device and improve the positioning accuracy of the mobile device.
  • The mobile device refers to a device that can be displaced in a physical space.
  • The displacement refers to the offset between the location of the mobile device at a previous time and its location at a later time; the offset has a direction, which points from the location at the previous time to the location at the later time.
  • the displacement is used to describe the change of the position of the mobile device, or to describe the movement track or mode of the mobile device.
  • the motion mode of the mobile device includes, but is not limited to, linear motion, zigzag motion, curved motion, and the like.
  • the mobile device is controlled and displaced by its own control system.
  • the mobile device includes, but is not limited to: drones, industrial robots, home companion mobile devices, medical mobile devices, and cleaning devices.
  • the mobile device is externally driven to cause displacement.
  • For example, the mobile device can be worn on the human body and be displaced by human activity, or the mobile device can be installed on a device capable of autonomous movement (for example, a vehicle) and be displaced by the movement of that device.
  • The mobile device includes, but is not limited to, one or more of: wearable electronic devices such as a Head Mounted Display (HMD), smart glasses, and bracelets, as well as smart phones, tablet computers, notebook computers, and other intelligent terminals.
  • the positioning method provided in the embodiment of the present application is suitable for indoor space.
  • The indoor space is a physical space with boundaries, such as a family residence or a public place (for example, an office space, shopping mall, hospital, parking lot, or bank).
  • the indoor space may further include several room dividers.
  • the room divider refers to a facade used to form a space boundary in a physical space, such as a wall, partition, floor-to-ceiling window, and ceiling.
  • the coordinate system established according to the indoor space is the world coordinate system.
  • the origin, x-axis, and y-axis of the world coordinate system can be set on the ground, and the height direction of the indoor space can be used as the z-axis.
  • the unit of the world coordinate system is meter (m); of course, it is not limited to this, for example, according to actual conditions, the unit of the world coordinate system can also be decimeter (dm), centimeter (cm), millimeter (mm) etc. It should be understood that this world coordinate system is only an example, and is not limited to this.
  • the mobile device includes a camera device for capturing an image of the indoor space.
  • the camera device includes, but is not limited to: a camera, a video camera, a camera module integrated with an optical system or a CCD chip, and a camera module integrated with an optical system and a CMOS chip.
  • the lenses that can be used by the camera or video camera include, but are not limited to: standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses.
  • The embodiments of the present application take a camera as an example for description. However, those skilled in the art should understand that this example does not limit the scope of the specific embodiments.
  • the camera device may capture one or more of a single image, a continuous image sequence, a non-continuous image sequence, or a video.
  • the mobile device stores the captured image in a local storage medium, or transmits it to an external device connected by communication for storage.
  • the communication connection includes a wired or wireless communication connection.
  • The storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, USB flash drives, removable hard disks, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed.
  • The external device may be a server located in a network, and the server includes, but is not limited to, one or more of a single server, a server cluster, a distributed server group, and a cloud server.
  • the cloud server may be a cloud computing platform provided by a cloud computing provider.
  • The service types of the cloud server include, but are not limited to: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
  • the types of the cloud server include but are not limited to: public cloud (Public Cloud) server, private cloud (Private Cloud) server, and hybrid cloud (Hybrid Cloud) server, etc.
  • The public cloud server is, for example, Amazon's Elastic Compute Cloud (Amazon EC2), IBM's Blue Cloud, Google's App Engine, or Microsoft's Windows Azure service platform; the private cloud server is, for example, Facebook's cloud computing service platform, Amazon's cloud computing service platform, Baidu's cloud computing platform, or Tencent's cloud computing platform.
  • the location where the camera device is installed on the mobile device may be determined according to the type and/or application scenario of the mobile device.
  • For example, when the mobile device is a mobile robot, the camera device may be provided on the top surface of the mobile robot (for example, the central area of the top surface, the front end of the top surface relative to the central area, or the rear end of the top surface relative to the central area), on a side surface, or at the junction of the top surface and a side surface, to capture images of the working environment of the mobile device for subsequent processing such as object recognition, map construction, real-time positioning, or virtual simulation.
  • When the mobile device is a smart terminal, the camera device may be provided on an outer surface of the smart terminal, for example, in the edge area above the display side, in the central area above the display side, in the edge area on the back of the display side, or in the central area on the back, etc.
  • the camera device may also be telescopically arranged inside the smart terminal, and extend out of the surface of the smart terminal when an image needs to be taken, and so on.
  • The number of camera devices can also be set according to actual needs; in some embodiments, the camera device can also be movable, for example, the direction of its optical axis can be adjusted and images can be captured at a position reached by the movement.
  • the angle of view of the camera device is determined by the parameters of the camera device.
  • The parameters of the camera device include internal parameters, external parameters, and distortion parameters. The internal parameters include, but are not limited to, one or more of the focal length, the physical size corresponding to each pixel, and the pixel center; the external parameters include, but are not limited to, one or more of the position, rotation direction, and translation matrix of the camera on the mobile device.
  • the range of the angle of view of the camera device includes, but is not limited to: 10 degrees to 120 degrees.
  • the field of view angle is 10 degrees, 20 degrees, 30 degrees, 40 degrees, 50 degrees, 60 degrees, 70 degrees, 80 degrees, 90 degrees, 100 degrees, 110 degrees, and 120 degrees.
  • The above values are only examples and do not limit the precision of the field of view angle to steps of 10 degrees; according to actual design requirements, the precision of the field of view angle may be higher, such as 1 degree, 0.1 degree, 0.01 degree, or finer.
  • The orientation range of the optical axis of the camera device includes, but is not limited to: an angle of 0°±30° relative to the height direction of the indoor space, or an angle of 60°~120° relative to the ground.
  • For example, the angle of the optical axis of the camera device relative to the vertical may be -30 degrees, -29 degrees, -28 degrees, -27 degrees, ..., -2 degrees, -1 degree, 0 degrees, 1 degree, 2 degrees, ..., 29 degrees, or 30 degrees.
  • For another example, the included angle of the optical axis of the camera device with respect to the ground plane may be 60 degrees, 61 degrees, 62 degrees, ..., 89 degrees, 90 degrees, 91 degrees, ..., 119 degrees, or 120 degrees.
  • The included angle between the optical axis of the above-mentioned camera device and the vertical or horizontal is only an example, and the precision of the included angle is not limited to steps of 1 degree; according to actual design requirements, the precision of the included angle can be higher, such as 0.1 degree, 0.01 degree, or finer.
  • the optical axis of the camera device is determined by the orientation of the camera device.
  • the orientation of the camera device may be preset, for example, the orientation is a fixed angle set according to the structure of the mobile device.
  • the orientation of the camera device can also be manually adjusted according to actual needs, or adjusted by the control system of the mobile device. For example, the camera device can adjust the horizontal and pitch angles by means of a pan/tilt.
  • The world coordinate system is transformed by a similarity transformation to form a map coordinate system.
  • The similarity transformation includes scale transformation, rotation transformation, and translation transformation.
  • the unit of the map coordinate system may be a custom voxel unit, and the voxel unit may be related or unrelated to the length unit. Among them, the unit of length is, for example, meters, decimeters, centimeters, millimeters, etc.
  • the map coordinate system may be a coordinate system in a virtual space, or a map coordinate system constructed by a mobile device for the physical space.
  • each pixel in the image taken by the camera device is mapped to the virtual three-dimensional space of the map coordinate system.
  • For example, the map coordinate system coincides with the camera coordinate system at the initial position of the mobile device, and by analyzing features in images taken at preset intervals with preset voxel units, a map of the virtual space corresponding to the physical space where the mobile device is located is constructed.
  • Figure 9 is shown as a schematic diagram of map coordinates describing the position relationship of the mobile device relative to the measurement point M on the lamp when the mobile device is at position O and position A respectively in the map coordinate system, where the coordinate system XYZ is the map coordinate system, O is the origin of the map coordinate system.
  • The mobile device takes, at a preset time interval, two images containing the same real measurement point M' (not shown) of the lamp in the physical space.
  • The distance moved by the mobile device during the preset time interval corresponds to the unit distance D in the map coordinate system, and the matched image features feature1 and feature2, corresponding to the measurement point M', in the two pictures Pic1 and Pic2 are used to determine the coordinates (X_M, Y_M, Z_M) of the position M in the map coordinate system.
  • For example, a plurality of matched image feature pairs in the two pictures Pic1 and Pic2 can be used to construct a transformation matrix equation to obtain the pose change.
  • The matched image features of the two images can be used to construct a map of the physical space in the virtual map coordinate system; the difference from a map constructed by physical measurement is that its unit distance D has nothing to do with the actual moving distance of the mobile device.
  • Take as an example that the map coordinate system is an indoor space coordinate system constructed by the mobile device using measurements during a previous movement. The mobile device matches the currently captured image with an image captured during the previous movement, and uses the coordinates of the location where that earlier image was taken to determine the position coordinates, in the map coordinate system, of each pixel in the currently captured image.
  • the coordinates of the corresponding points in the world coordinate system and the map coordinate system can be converted and calculated according to the internal parameters and external parameters of the camera device.
  • the step of calibrating the camera device may also be included.
  • the calibration is to determine the position of the pixel point where the calibration point is mapped to the image by photographing the geometric pattern of the calibration point in the measurement space.
  • the calibration method includes but is not limited to one or more of the traditional camera calibration method, the active vision camera calibration method, and the camera self-calibration method.
  • The actual object is mapped to a pixel point or a set of pixel points in the image, and each pixel point corresponds to voxel point coordinates (x, y, z) in the map coordinate system.
  • If a physical length on the actual object corresponding to a voxel point is known or measurable in the indoor space, a known proportional relationship can be established between that physical length and the coordinate value on an axis of the voxel point coordinates (x, y, z) (for example, when the physical length is parallel to or consistent with the axis direction, or when the component of the physical length on the axis can be calculated). Since the ratio between x, y, and z is not changed by the mapping, the proportional relationship can be extended to the spatial scale parameter for coordinate conversion between the map coordinate system and the world coordinate system of the indoor space.
  • the pixel points may be selected from image features to facilitate identification.
  • the mobile equipment further includes a depth detection device.
  • the depth detection device includes, but is not limited to: one or more of an infrared distance measuring device, a laser distance measuring device, and an ultrasonic sensing device.
  • the infrared distance measuring device continuously emits modulated infrared light to form a reflection after irradiating an object, and then receives the reflected light, and calculates the depth information by calculating the time difference or phase difference between infrared light emission and reception .
  • the depth detection device may be a single-point ToF (Time of Flight) sensor, and the number of ToF sensors may be one or more.
  • the number of ToF sensors is one, which is arranged on one side of the camera device.
  • the number of ToF sensors is two, which are respectively symmetrically arranged on opposite sides of the camera device.
  • the laser distance measuring device has a laser signal transmitter and a laser signal receiver.
  • The laser signal transmitter emits a beam of laser light, which is reflected by an object; the reflected laser light is received by the laser signal receiver, and the depth information is calculated accordingly.
  • The number of laser distance measuring devices may be one or more; for example, there may be four, six, or eight, arranged symmetrically on opposite sides of the camera device.
  • The ultrasonic sensing device emits ultrasonic waves in a certain direction and starts timing at the moment of emission. The ultrasonic waves are reflected when they encounter an obstacle while propagating in the air; the ultrasonic sensing device receives the reflected waves and stops timing, and the depth information is calculated from the time recorded by the timer.
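  • All of the above ranging principles reduce depth to a round-trip travel time of the emitted signal. A minimal sketch of that calculation follows; the function name and example timing value are illustrative assumptions, not taken from this application:

```python
# Depth from round-trip time of flight: distance = speed * time / 2.
# Function name and the example timing value are illustrative assumptions.

SPEED_OF_LIGHT_M_S = 299_792_458.0   # for laser / single-point ToF sensors
SPEED_OF_SOUND_M_S = 343.0           # for ultrasonic sensing in air (~20 degrees C)

def depth_from_tof(round_trip_seconds, speed_m_s):
    """Return the one-way distance to the reflecting object in meters."""
    return speed_m_s * round_trip_seconds / 2.0

# Example: an ultrasonic echo received 14.6 ms after emission corresponds to
# a depth of roughly 2.5 m to the ceiling or wall.
depth = depth_from_tof(0.0146, SPEED_OF_SOUND_M_S)
```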
  • the depth detection device provides depth information corresponding to at least one pixel unit in the image in the indoor space.
  • the image block imaged by the pixel unit is obtained by light reflection from an entity (or called an object) in the indoor space.
  • the depth information of the corresponding pixel unit provided by the depth detection device is used to determine the distance from the mobile device to the corresponding entity.
  • the distance is expressed in units of length.
  • Objects that the depth detection device can detect include opaque objects such as ceilings, furniture, door frames, and tables; the objects that can be detected by the depth detection device may also include transparent or translucent objects such as glass.
  • the position of the pixel unit in the image is fixed in advance according to the angle of the imaging device and the depth detection device.
  • the detection position of the depth detection device can be calibrated so that the detection position of the depth detection device is in the central area of the field of view of the camera device, and correspondingly, the pixel unit is located in the central area of the captured image.
  • the pixel unit can also be located in other areas of the image through calibration.
  • the size of the range of the pixel unit may be determined according to the measurement range of the depth detection device.
  • The orientation range of the axis of the depth detection device for emission and reception includes, but is not limited to: within ±5 degrees of the vertical.
  • For example, the angle between the axis of the depth detection device and the vertical is 0 degrees.
  • For another example, the angle between the axis of the depth detection device and the vertical is -5 degrees, -4 degrees, -3 degrees, -2 degrees, -1 degree, 1 degree, 2 degrees, 3 degrees, 4 degrees, or 5 degrees.
  • The included angle between the axis of the depth detection device and the vertical or horizontal is only an example, and the precision of the included angle is not limited to steps of 1 degree; according to the actual design requirements of the mobile device, the precision of the included angle can be higher, such as 0.1 degree, 0.01 degree, or finer.
  • The pixel unit may roughly occupy a square pixel area of 3*3 pixels, 4*4 pixels, 5*5 pixels, 6*6 pixels, 7*7 pixels, or 8*8 pixels; or the pixel unit may roughly occupy a rectangular pixel area such as 3*4 pixels, 4*5 pixels, or 5*7 pixels.
  • The beams emitted by infrared distance measuring devices, ultrasonic sensing devices, and the like can be concentrated as much as possible to make their landing area smaller; on the one hand, the depth information obtained for the calibration point is more accurate, and on the other hand, a single-point ToF sensor, for example, is much cheaper than other types of depth detection devices.
  • the depth detection device can simultaneously detect the depth information between the mobile device and multiple physical locations in the indoor space. Using multiple depth information to determine space size parameters can improve positioning accuracy.
  • the relative positional relationship between the camera device and the depth detection device may be preset.
  • the depth detection device may be one, which is arranged on one side of the camera device, and the measured depth information corresponds to one pixel unit in the image taken by the camera device.
  • There may also be multiple depth detection devices, for example four arranged symmetrically around the camera device, and the four pieces of depth information measured correspond to four pixel units in the image taken by the camera device.
  • the image features include, but are not limited to: shape features, grayscale features, and the like.
  • shape features include, but are not limited to, one or more of corner features, edge features, straight line features, and curve features.
  • The grayscale features include, but are not limited to, one or more of: grayscale jump features, grayscale values higher or lower than a grayscale threshold, and the size of an area in an image frame that contains a preset grayscale range, etc.
  • For example, when the detection position is on the ceiling, the obtained depth information may be the distance from the depth detection device to the ceiling; when the depth detection device is a single-point ToF sensor pointing vertically at the ceiling, the obtained depth information is the vertical height from the depth detection device to the ceiling.
  • the detection position is located on an object, such as a room divider in an indoor space or other indoor objects, and the obtained depth information is the distance information from the depth detection device to the surface of the object.
  • the depth information relative to the indoor object corresponding to the image feature can be obtained by the depth detection device.
  • The spatial scale parameter is determined according to the depth information, detected by the depth detection device, of the image feature and the position information of the corresponding image feature in the map coordinate system.
  • The spatial scale parameter is used to determine the proportional relationship between the height information corresponding to the image feature in the indoor space and the corresponding depth information in the map coordinate system.
  • FIG. 1A shows a schematic diagram of determining depth information of the sweeping robot in a scene of this application.
  • For example, a corner point of a square object such as a ceiling light is detected as the image feature p0, and the coordinates of the image feature in the map coordinate system are p0(x0, y0, z0). The x, y, and z axes of the map coordinate system can be parallel to the X, Y, and Z axes of the world coordinate system, respectively, so the current spatial scale parameter can be obtained and the image feature p0 can be mapped to the world coordinate system.
  • the mobile device can also be a service robot with a certain height.
  • Figure 1B shows a schematic diagram of the service robot determining depth information in a scenario of this application.
  • A corner point on the ceiling light corresponds to an image feature p1 in the image, and the coordinates of the image feature in the map coordinate system are p1(x1, y1, z1).
  • The depth information detected by the service robot 11 is the distance H1 between the detection position and the depth detection device on top of the service robot; therefore, the depth information may be taken as the sum of the distance H1 and the height H2 of the service robot itself.
  • The current spatial scale parameter can then be obtained from the proportional relationship between the depth information (i.e., H1+H2) and the height information z1 of the image feature p1.
  • the mobile device may be a smart terminal (for example, AR glasses).
  • FIG. 1C shows a schematic diagram of the intelligent terminal determining depth information in a scenario of this application.
  • the depth detection device points horizontally to a room divider (such as a wall).
  • A corner point on a mural on the wall corresponds to an image feature p2 in the image, and the coordinate of the corresponding position in the world coordinate system is P2(X2, Y2, Z2).
  • the depth information detected by the smart terminal 12 is the distance H3 between the detection position and the depth detection device on the smart terminal 12.
  • A conversion relationship can be constructed between the map coordinate system (for example, the x, y, and z axes shown by the dashed arrows in FIGS. 1A, 1B, and 1C) and the world coordinate system (the X, Y, and Z axes shown by the solid arrows in FIGS. 1A, 1B, and 1C). Thereby, the coordinate position in the map coordinate system corresponding to the image feature is mapped to the coordinate position in the world coordinate system, that is, the coordinates of the image feature in the world coordinate system are obtained.
  • When the axis of the depth detection device is deflected from the height direction, the height information corresponding to each point on the object can also be obtained through calculation based on the detected depth information and the deflection angle; given the predetermined positional data between the depth detection device and the camera device (for example, one or more of the angle between their axes and the angles between each axis and the height direction of the indoor space), the spatial scale parameter can likewise be calculated, which is not expanded upon here.
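  • The scenarios of FIGS. 1A to 1C all reduce to the same proportional relationship between the measured physical depth (H0, H1 plus H2, or H3) and the corresponding coordinate value of the image feature in the map coordinate system. A minimal sketch follows; whether the ratio or its reciprocal is stored is an implementation choice, and the function name and example values below are assumptions:

```python
def current_spatial_scale(depth_info_m, feature_map_coord, device_height_m=0.0):
    """Proportional relationship between physical depth and map-coordinate value.

    depth_info_m     : depth measured by the depth detection device, in meters
                       (e.g. H0 for the sweeping robot, H1 for the service robot,
                       H3 for the smart terminal).
    feature_map_coord: coordinate of the matched image feature along the measured
                       axis in the map coordinate system (e.g. z0 or z1), in voxel units.
    device_height_m  : height of the device itself, added when the sensor sits on
                       top of the device (H2 in FIG. 1B); 0 otherwise.
    """
    physical_depth = depth_info_m + device_height_m
    # meters per voxel unit of the map coordinate system
    return physical_depth / feature_map_coord

# FIG. 1B style example with illustrative values: scale = (H1 + H2) / z1
scale = current_spatial_scale(depth_info_m=1.9, feature_map_coord=3.8, device_height_m=0.6)
```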
  • the angular relationship between the optical axis of the imaging device and the axis of the depth detection device may also be preset, for example, the optical axis of the imaging device and the axis of the depth detection device are parallel.
  • the included angle between the depth detection device and the optical axis of the imaging device is in a range from 0 degrees to the maximum value of the angle of view of the imaging device.
  • For example, the angle between the depth detection device and the optical axis of the camera device may be 0 degrees, 1 degree, 2 degrees, ..., 58 degrees, 59 degrees, 60 degrees, ..., 118 degrees, 119 degrees, or 120 degrees. It should be understood that the included angle is only an example and does not limit its scope.
  • the angle between the depth detection device and the optical axis of the imaging device is in a range from 0 degree to the minimum value of the angle of view of the imaging device.
  • For example, the angle between the depth detection device and the optical axis of the camera device may be 0 degrees, 1 degree, 2 degrees, ..., 13 degrees, 14 degrees, 15 degrees, ..., 28 degrees, 29 degrees, or 30 degrees.
  • In other embodiments, the angular relationship between the camera device and the depth detection device may also be such that the optical axis of the camera device is parallel to the vertical while the axis of the depth detection device forms a certain angle with the vertical; such examples are not exhaustively enumerated here.
  • FIG. 2 shows a schematic flowchart of an indoor positioning method for a mobile device provided in this application in an embodiment.
  • the indoor positioning method includes:
  • S201: Acquire a first image and a second image respectively captured by the camera device at different positions; wherein a pixel unit of the second image contains a second image feature that matches a first image feature in the first image, and the first image feature and the second image feature form an image feature pair.
  • what the camera device captures may be one or more of a single frame image, a continuous image frame sequence, a non-continuous image frame sequence, or a video.
  • the first image and the second image may be two frames of images taken separately.
  • the first image and the second image may be two frames of images arbitrarily extracted from a captured video or image sequence.
  • Here, first and second do not indicate the order of image capture but are only used to distinguish the two images; the previous image refers to whichever of the first image and the second image was captured earlier.
  • the mobile device may directly obtain a captured image from the camera device as the first image or the second image, or may obtain a historically captured image from a storage medium as the first image or the second image.
  • the camera device in the mobile device caches the captured indoor image in a storage medium in a preset format and obtains it by the control system of the mobile device.
  • the previous image in the first image and the second image is taken by a camera during the movement of the mobile device.
  • the mobile device captures the first image C1 at the location A at time t1 and captures the second image C2 at the location B at time t2 during the movement.
  • the mobile device captures the second image C2 at position A at time t1, and captures the first image C1 at position B at time t2, and so on.
  • Alternatively, the mobile device captures the first image C1 during the movement and extracts a previously captured image from the map database stored in the storage medium as the second image C2; or the mobile device captures the second image C2 during the movement and extracts a previously captured image from the map database stored in the storage medium as the first image C1, and so on.
  • In these cases, the previous image comes from a map database of the indoor space, and the subsequent image is obtained by the camera device of the mobile device.
  • the map database includes landmark information.
  • the landmark information refers to features that can be easily distinguished from other objects in the environment.
  • the landmark information may be a table corner, a contour feature of a ceiling light on a ceiling, a line between a wall and the ground, and the like.
  • The landmark information includes, but is not limited to: the previous image among the first image and the second image, map data of the physical space where the mobile device was located when an image feature was captured, and image features that have been matched.
  • the landmark information also has coordinate information.
  • the landmark information has the location of the mobile device when the landmark is captured, that is, the coordinates of the mobile device in the world coordinate system.
  • the landmark information when the landmark information is an image feature, the landmark information has the position of the image feature in the corresponding image.
  • the map database may be stored in a storage medium together with the image of the indoor environment acquired by the mobile device through the camera device, or uploaded to the server or the cloud for storage.
  • an image feature pair is formed between the first image feature and the second image feature.
  • The first image feature matching the second image feature means that the first image feature is similar to the second image feature.
  • For example, the distance (such as the Euclidean distance or cosine distance) between the feature vector of the first image feature and the feature vector of the second image feature is calculated; the smaller the distance, the better the two image features match.
  • The image matching algorithm includes, but is not limited to, one or more of SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), FLANN (Fast Library for Approximate Nearest Neighbors), and the like.
  • There may be multiple matched image features. To this end, image features that can be matched are searched among the recognized image features according to the positions, in the image, of the pixel units corresponding to the recognized image features.
  • the matched image feature in different images may belong to the same object or part.
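  • As an illustrative sketch of the feature extraction and matching step (the application only names SIFT, ORB, and FLANN as candidate algorithms; the use of OpenCV and the specific parameters below are assumptions):

```python
# Illustrative sketch using OpenCV; the application does not prescribe a library
# or parameters, only the general feature-extraction-and-matching approach.
import cv2

def match_image_features(first_image_gray, second_image_gray, max_pairs=50):
    """Return matched (first_feature_xy, second_feature_xy) pixel coordinates."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(first_image_gray, None)
    kp2, des2 = orb.detectAndCompute(second_image_gray, None)
    if des1 is None or des2 is None:
        return []
    # Hamming distance for ORB binary descriptors; the smaller the distance,
    # the better the two image features match.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pairs = []
    for m in matches[:max_pairs]:
        p1 = kp1[m.queryIdx].pt   # first image feature (pixel coordinates)
        p2 = kp2[m.trainIdx].pt   # matching second image feature
        pairs.append((p1, p2))
    return pairs
```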
  • Figure 3 shows a schematic diagram of image feature matching in this application.
  • The first image C1 includes image features a1 and a2, and the second image C2 includes image features b1, b2, and b3; the image features a1 and b1 form a matched image feature pair, and the image features a2 and b3 form a matched image feature pair.
  • It is determined that the image feature a1 in the first image C1 is located to the left of the image feature a2 with a spacing of d1 pixels; at the same time, it is determined that the image feature b1 in the second image C2 is located to the left of the image feature b3 with a spacing of d1' pixels, and that the image feature b2 is located to the right of the image feature b3 with a spacing of d2' pixels.
  • By matching the pixel spacings of the features a1 and a2 with those in the second image, the image feature a1 in the first image C1 is matched with the image feature b1 in the second image C2, and the image feature a2 is matched with the image feature b3.
  • the image feature pair that matches the first image and the second image is determined.
  • the position of the mobile device can be obtained according to the displacement change in the image pixel coordinate system
  • the posture can be obtained according to the angle change in the image pixel coordinate system.
  • The pixel position offsets of multiple image features between the two images, or the physical position offsets of multiple image features in the physical space, can be determined according to the corresponding relationship, and either kind of position offset information can be used to calculate the relative position and posture of the mobile device at the time t2 when the second image C2 is captured with respect to the time t1 when the first image C1 is captured.
  • For example, the position and posture change of the mobile device from the time t1 of capturing the first image C1 to the time t2 of capturing the second image C2 is obtained as follows: it has moved on the ground by a distance d and rotated to the left by an angle θ; equivalently, the mobile device moves d·sinθ and d·cosθ along the X and Y coordinate axis directions on the ground, respectively.
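  • As a worked instance of the decomposition above (the numeric values are purely illustrative):

```python
import math

# Decompose an on-ground displacement of length d with a left rotation of theta
# into X and Y components, as stated above: dx = d*sin(theta), dy = d*cos(theta).
d = 0.5                      # meters moved on the ground (illustrative value)
theta = math.radians(30.0)   # rotation angle (illustrative value)
dx = d * math.sin(theta)     # component along the X coordinate axis: 0.25 m
dy = d * math.cos(theta)     # component along the Y coordinate axis: ~0.433 m
```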
  • S202: Determine the position change information of the mobile device between the different positions according to the current spatial scale parameter and the pixel position offsets of the matched image features between the first image and the second image.
  • Here, the current spatial scale parameter is a proportional relationship determined from the depth information, detected by the depth detection device at the current moment, of the image feature in the corresponding pixel unit and the depth information of the corresponding image feature in the map coordinate system of the camera device.
  • the current spatial scale parameter is set according to the depth information corresponding to the pixel unit in the first image or the second image.
  • In other words, the depth detection device can obtain depth information on demand when positioning is required; for example, it may be the depth information of the feature corresponding to the pixel unit in either the first image or the second image obtained by the mobile device. Alternatively, the depth information of the feature corresponding to the pixel unit in the first image and the depth information of the feature corresponding to the pixel unit in the second image may both be obtained, and the average of the two may be used as the final depth information.
  • the current spatial scale parameter may also be obtained based on historical depth information of the depth detection device.
  • the depth information may be randomly selected by the mobile device from a plurality of historical depth information previously acquired and stored.
  • Alternatively, the depth information may be selected, from a plurality of pieces of historical depth information previously acquired and stored by the mobile device, as the most recent one in chronological order.
  • the depth information may also be an average value or a weighted sum of multiple pieces of historical depth information previously acquired and stored by the mobile device.
  • Thereby, the position change between the current position and the position of the mobile device when the first image was captured is obtained.
  • the pixel position offset refers to the offset distance and angle of the second image feature relative to the first image feature, or the offset distance and angle of the first image feature relative to the second image feature.
  • the position change information reflects the relative position and posture change of the mobile device from the previous position to the current position.
  • the position change information includes the displacement distance, movement direction, etc. of the mobile device.
  • a corner point on the top light corresponds to an image feature p0 on the image, and the coordinates of the image feature in the map coordinate system are p0 (x0, y0, z0).
  • the mobile robot 10 detects that the depth information of the image feature obtained by the depth detection device (for example, a ToF sensor, not shown) is H0.
  • The camera device of the mobile robot 10 is set such that its optical axis is along the height direction of the indoor space, and the depth information H0 is used as the current spatial height information H between the mobile robot 10 and the object corresponding to the image feature of the pixel unit in the currently captured image. Therefore, the current spatial scale parameter is the ratio between the height-direction coordinate z0 of the image feature p0 in the map coordinate system (assumed consistent with the Z axis) and the depth information H0.
  • The mobile robot 10 may be as shown in the figure: when it is placed on the ground, its upper surface is a plane parallel to the ceiling, and the camera device is arranged vertically upward on the upper surface of the mobile robot 10, so that its axis is also consistent with the height direction of the indoor space. Of course, this is only an example; in other embodiments, the upper surface of the mobile device may not be a plane, and this embodiment is not limited thereto.
  • the mobile device captures the first image through the camera at the previous position w1; after moving d meters in the direction indicated by the dashed arrow, the mobile device captures the second image at the current position w2.
  • FIG. 4 shows a schematic diagram of the image characteristics of the image captured in FIG. 1A in the image pixel coordinate system.
  • The coordinates of the first image feature in the first image captured by the mobile device are P1(u1, v1), and the coordinates of the matching second image feature in the captured second image are P2(u2, v2).
  • The pixel position offset of the matched first image feature P1 and second image feature P2 can thus be calculated; the pixel position offset vector shows that, affected by the movement of the mobile device, the first image feature P1 has moved by m pixels in the direction of the second image feature P2 (the direction shown by the dashed arrow).
  • the position of the same image feature in the first image and the position change in the second image reflect the position and posture change during the movement of the mobile device.
  • After obtaining the first image feature P1 and the second image feature P2 corresponding to the local part Q of the object, and the pixel position offset between P2 and P1, the position change information of the mobile device when the second image is taken, relative to when the first image was taken, can be calculated according to the current spatial scale parameter.
  • In other words, the mobile device obtains its own positional relationship relative to the object part Q, and uses the matched pixel position relationship between the first image feature P1 and the second image feature P2 to determine the positional relationship between the current position of the mobile device and the shooting position when the first image was taken, including the pose relationship and the displacement relationship.
  • In some embodiments, the mobile device uses the current spatial scale parameter together with multiple image feature pairs matched between the first image and the second image, and uses each image feature pair to determine the positional relationship between the current position of the mobile device and the previous shooting position at the time of the first image, thereby improving positioning accuracy. For example, each image feature pair is used to calculate a positional relationship, and the positional relationship between the current position and the previous shooting position is determined by a weighted sum, that is, the position change information is obtained.
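  • A minimal sketch of step S202 under a simplifying assumption (upward-facing camera and a flat ceiling at physical height H, as in FIG. 1A, with a pinhole model of focal length f in pixels); the conversion factor, averaging, and function names are illustrative assumptions rather than the exact formulation of this application:

```python
# Assumed simplification: with an upward-facing camera, a flat ceiling at physical
# height H, and focal length f in pixels, a pixel offset (du, dv) corresponds to a
# ground displacement of roughly (du * H / f, dv * H / f).

def position_change_from_pairs(feature_pairs, ceiling_height_m, focal_length_px):
    """Average the per-pair displacement estimates over all matched feature pairs.

    feature_pairs: list of ((u1, v1), (u2, v2)) matched pixel coordinates of the
                   first-image feature and the second-image feature, respectively.
    """
    if not feature_pairs:
        return (0.0, 0.0)
    meters_per_pixel = ceiling_height_m / focal_length_px
    dx_sum = 0.0
    dy_sum = 0.0
    for (u1, v1), (u2, v2) in feature_pairs:
        # The device moves opposite to the apparent motion of the ceiling
        # features in the image.
        dx_sum += -(u2 - u1) * meters_per_pixel
        dy_sum += -(v2 - v1) * meters_per_pixel
    n = len(feature_pairs)
    return (dx_sum / n, dy_sum / n)
```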
  • the above method for determining the current spatial scale parameter is only an example; the parameter may also be a weighted value obtained by the mobile device from multiple measurements during the movement. For example, during the movement, the mobile device collects the depth information corresponding to consecutively captured images and performs a weighted average to obtain the depth information of the image features in the pixel unit of one image in the sequence, and thereby determines the current spatial scale parameter, as sketched below.
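  • A minimal sketch of such a weighted average, assuming illustrative readings and weights:

```python
def weighted_depth(depth_samples_m, weights=None):
    """Weighted average of the depth readings collected while shooting a
    sequence of images; the result can then be used as H0 when deriving the
    current spatial scale parameter."""
    if weights is None:
        weights = [1.0] * len(depth_samples_m)
    total = sum(weights)
    return sum(d * w for d, w in zip(depth_samples_m, weights)) / total

# e.g. weighting more recent readings more heavily (weights are illustrative)
h0 = weighted_depth([3.02, 2.99, 3.01], weights=[1, 2, 3])
```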
  • the smart terminal may be a mobile phone, a head-mounted device, smart glasses, etc.
  • the camera device provides the first image and the second image during the movement, and the pixel unit in the second image contains image features that match the first image. According to the depth information, provided by the depth detection device, of the image features in the pixel unit of the corresponding second image, the smart terminal can align the coordinates of the key parts of the virtual object displayed on the screen with the coordinates in the physical space, thereby determining the plane reference of the virtual object in the real scene and the correspondence between the coordinates of the key parts of the virtual object and the coordinates in the physical space.
  • when the player operates the smart terminal, he or she can interact with virtual objects in real space.
  • when the previous image comes from a map database pre-stored by the mobile device, using the determined location change information, the mobile device can also perform the following step of the indoor positioning method: determining the location information of the mobile device in the indoor space according to the location change information and the preset map database of the indoor space. Therefore, by acquiring the location change information of the mobile device, the current location information of the mobile device is determined, thereby realizing real-time indoor positioning of the mobile device.
  • the mobile robot takes a first image C1 at position A at time t1, and matches the landmark information in the first image C1 against the landmark information stored in the map database to determine the position on the map corresponding to position A of the mobile robot in the real world.
  • the mobile robot captures a second image C2 at position B at time t2, and using the indoor positioning method, it is determined that the position change of the mobile robot relative to position A when the mobile robot is at position B is: moved D distance in the southeast direction .
  • the landmark information in the second image C2 is matched with the landmark information stored in the map database, so as to obtain the position on the map corresponding to the position B of the mobile robot in the real world. In this way, not only the position information of the mobile robot in the real world can be obtained, but also the position change information of the mobile robot on the map and the map position at time t1 and time t2 respectively.
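  • For illustration only, the sketch below shows one plausible way to combine such landmark matching with the position change information; the nearest-neighbour matching, the descriptor format, and the example numbers are assumptions of this sketch, not the method defined by this application.

```python
import numpy as np

def locate_on_map(landmark_descriptor, map_landmarks, position_change_xy):
    """Match an observed landmark descriptor against the landmark descriptors
    stored in the map database (nearest neighbour in descriptor space), then
    add the estimated position change to the stored map position of the
    matched fix to obtain the current map position.

    map_landmarks -- list of (descriptor, map_xy) tuples from the map database
    """
    query = np.asarray(landmark_descriptor, dtype=float)
    distances = [np.linalg.norm(query - np.asarray(desc, dtype=float))
                 for desc, _ in map_landmarks]
    best = int(np.argmin(distances))
    matched_xy = np.asarray(map_landmarks[best][1], dtype=float)
    return matched_xy + np.asarray(position_change_xy, dtype=float)

# Hypothetical database with two landmarks; the device moved (0.4, -0.3) m
# since the image matching the first landmark was taken.
db = [([0.1, 0.9, 0.3], [2.0, 5.0]), ([0.8, 0.2, 0.5], [7.5, 1.0])]
current_xy = locate_on_map([0.12, 0.88, 0.31], db, [0.4, -0.3])
```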
  • the mobile device may also construct a corresponding relationship between the image pixel coordinate system and the world coordinate system according to the position change information, thereby constructing a map of the indoor space.
  • the constructed map includes, but is not limited to: a grid map, a topological map, and so on.
  • the sweeping robot constructs a feature map according to the captured image of the landmark and its coordinate information during the cleaning movement.
  • the features included as landmark information are usually fixed, but in practical applications, the features included as landmark information are not necessarily so.
  • the feature recorded as landmark information is the outline feature of the lamp, and the corresponding feature disappears when the lamp is replaced. When the mobile device needs to use this feature for positioning, it will not be able to find the feature.
  • the indoor positioning method further includes: updating the map database of the indoor space based on the determined position change information and one or more of the first image, the second image, and the depth information. For example, when an image feature that has not yet been stored by the mobile device is found at a similar or identical position and posture, the latest image feature of the first image and/or the second image can be correspondingly added to the map database. For another example, when a stored image feature at a similar or identical position and posture can no longer be matched with the newly extracted image features, the redundant feature saved in the map database is deleted. A sketch of such an update step is given below.
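  • The following is a hypothetical sketch of the update described above; the dictionary structure, the pose quantisation key, and the matching threshold are all assumptions of this illustration.

```python
import numpy as np

def update_map_database(map_db, pose_key, observed_features, match_thresh=0.2):
    """For the entry identified by a (quantised) position/posture key: add
    newly observed feature descriptors that are not yet stored, and prune
    stored descriptors that can no longer be matched with the observations.

    map_db            -- dict mapping pose_key -> list of feature descriptors
    observed_features -- descriptors extracted from the first/second image
    """
    stored = map_db.setdefault(pose_key, [])

    def matched(feat, pool):
        return any(np.linalg.norm(np.asarray(feat, float) - np.asarray(g, float))
                   < match_thresh for g in pool)

    # supplement: save the latest features that were not stored before
    for feat in observed_features:
        if not matched(feat, stored):
            stored.append(list(feat))
    # prune: delete redundant stored features that match no new observation
    map_db[pose_key] = [g for g in stored if matched(g, observed_features)]
    return map_db

# Hypothetical usage with a pose key quantised elsewhere (e.g. 0.1 m / 10 deg).
db = {}
update_map_database(db, (12, 7, 3), [[0.1, 0.9], [0.8, 0.2]])
```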
  • the threshold may be a fixed value or set based on the number of features corresponding to the marked position on the map. For example, when the number of newly matched features found based on similar or identical positions and poses is more than the number of features stored in the storage medium at the corresponding location, the new features can be added to the already constructed landmark information.
  • the map database can also be updated based on image features.
  • the mobile device can correct the constructed map of the indoor space Or update.
  • for example, the position change information obtained by the mobile device through the inertial navigation detection device is D1, and the position change information obtained after processing the data from the camera device and the depth detection device is D2; D2 can then be used to check, correct, or replace D1.
  • the mobile device further includes an inertial navigation detection device, and the inertial navigation detection device is used to obtain inertial navigation data of the mobile device.
  • the inertial navigation detection device includes, but is not limited to, one or more of a gyroscope, an odometer, an optical flow meter, and an accelerometer.
  • the inertial navigation data includes, but is not limited to, one or more of movement speed data, acceleration data, and movement distance of the mobile device.
  • the inertial navigation data may be one or more of the speed data, acceleration data, number of roller rotations, moving distance, and roller deflection angle of the mobile robot.
  • the mobile device is an electronic device such as a mobile phone or a head-mounted device
  • the inertial navigation data may be one or more of the displacement data, displacement direction, acceleration data, and velocity data of the electronic device in three-dimensional space.
  • the indoor positioning method may further include the step of using the determined position change information to correct the inertial navigation data provided by the inertial navigation detection device. For example, the current location of the mobile device is obtained according to the location change information determined by the indoor positioning method, and is compared with the location information provided by the inertial navigation detection device so as to correct the latter.
  • the determined position change information may replace the inertial navigation data provided by the inertial navigation detection device.
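  • A minimal, hypothetical sketch of such a correction (a simple blend between the two estimates; the blend factor and names are illustrative, not prescribed by this application):

```python
def correct_inertial_estimate(inertial_xy, visual_xy, blend=1.0):
    """Correct (or, with blend=1.0, replace) the position change D1 from the
    inertial navigation detection device with the position change D2 derived
    from the camera device and the depth detection device; a blend between 0
    and 1 gives a simple weighted correction."""
    return [(1.0 - blend) * d1 + blend * d2
            for d1, d2 in zip(inertial_xy, visual_xy)]

# e.g. odometry drifted to (1.10, 0.02) m while the visual estimate is
# (1.00, 0.00) m; an 80 % correction yields (1.02, 0.004).
d_corrected = correct_inertial_estimate([1.10, 0.02], [1.00, 0.00], blend=0.8)
```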
  • the positioning method described in the embodiment of the present application can be used in positioning with sufficient matching features. For example, during navigation of a mobile device, acquiring the relative position and posture changes in the above manner can quickly determine whether the current moving route of the mobile device is shifted, and make subsequent navigation adjustments based on the judgment result.
  • the indoor positioning method further includes the step of updating the position relationship between the mobile device and the preset navigation route based on the determined position change information.
  • the mobile device as a sweeping robot as an example
  • the obtained position and posture can help the robot determine whether it is on a preset navigation route.
  • the obtained position and posture can help the sweeping robot determine the relative displacement and relative rotation angle between the current position and the previous position, and use the data to draw a map, or Re-plan the navigation route based on the current location, etc.
  • FIG. 5 shows a schematic diagram of a scene in an embodiment of an indoor positioning method for a mobile device provided in this application.
  • taking the mobile device being a sweeping robot as an example, the sweeping robot performs cleaning tasks in a room.
  • the camera device (for example, a camera) is provided on the sweeping robot, and a depth detection device, such as a single-point ToF sensor, is provided on the right side of the camera.
  • take as an example the case where the optical axis direction of the imaging device is the same as the vertical direction and the axis of the single-point ToF sensor is perpendicular to the ceiling (the direction shown by the dashed arrow B in the figure); in this case, the detected depth information is the vertical height from the single-point ToF sensor to the roof.
  • the extension direction of the wall in front of the sweeping robot is taken as the X axis
  • the extension direction of the wall on the right side of the sweeping robot is the Y axis
  • the vertical upward direction is the Z axis, forming a world coordinate system (X,Y,Z).
  • the sweeping robot can take at least two images of the ceiling through a camera in a moving or stationary state, assuming that the at least two images both contain an indoor object T.
  • the cleaning robot obtains the depth information of the object T in the indoor space through the single-point ToF sensor.
  • using the current spatial scale parameter obtained from the detected height, the position change information of the cleaning robot at the current position relative to the previous position can be obtained. By combining the position information of the previous position of the cleaning robot with the position change information, and comparing and calculating against the map of the indoor space stored in the map database, the current position of the cleaning robot in the map is obtained. Further, it is also possible to determine whether the cleaning robot deviates from a preset navigation route according to the previous position and the current position of the cleaning robot.
  • the sweeping robot can also combine the built indoor map with the position change information obtained according to the aforementioned positioning method to predict the distance between the current position and an obstacle marked on the indoor map, so as to adjust the cleaning strategy in time; a small sketch follows.
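  • An illustrative sketch of that distance check; the coordinates and the threshold value are hypothetical examples, not parameters of this application.

```python
import math

def distance_to_obstacle(current_xy, obstacle_xy):
    """Straight-line distance between the robot's current map position and an
    obstacle marked on the indoor map, so the cleaning strategy can be
    adjusted in time (e.g. slow down or re-plan when getting close)."""
    return math.hypot(obstacle_xy[0] - current_xy[0],
                      obstacle_xy[1] - current_xy[1])

# Hypothetical: robot at (2.0, 1.5) m, sofa marked at (2.6, 1.5) m -> 0.6 m.
SLOW_DOWN_RADIUS_M = 0.5  # illustrative threshold only
if distance_to_obstacle((2.0, 1.5), (2.6, 1.5)) < SLOW_DOWN_RADIUS_M:
    pass  # e.g. reduce speed or adjust the cleaning route
```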
  • the obstacle can be described by a single mark, or be marked as a wall, table, sofa, wardrobe, etc. based on characteristics such as shape and size.
  • the indoor positioning method further includes: during the continuous movement of the mobile device, identifying, according to the depth information sequences provided by the depth detection device at different positions, the physical object forming a suspended space at the position corresponding to the determined position change information.
  • the suspended space refers to a space formed by a horizontal plane that is at least partially suspended above the ground.
  • the physical objects include one or more of a door, a bed, a coffee table, and a table.
  • the space from the bottom of the upper door frame to the ground forms the suspended space.
  • the table top is suspended by the legs of the table, and the space from the bottom of the table top to the ground forms the suspended space.
  • the sweeping robot performs detection through the single-point ToF sensor (such as continuous detection, or periodic detection at certain intervals, which is not limited here), and obtains the depth information of each position of the ceiling to form at least one depth information sequence.
  • the depth information values in the depth information sequence should be the same, or the differences between them should be within the error range.
  • the at least one depth information sequence obtained by the single-point ToF sensor detection may be {3.02 m, 2.99 m, 3.01 m, 2.98 m, 3.00 m}, etc.
  • the single-point ToF sensor detects the depth information of the lower surface of the door frame. Obviously, the difference between the depth information and other depth information in the depth information sequence is greater than the error range, and therefore, it can be determined that the top of the sweeping robot corresponds to a door frame with a hanging space.
  • the mobile device as an example of a sweeping robot.
  • the obtained depth information is the vertical height between the depth detection device and the roof.
  • the vertical height of the ceiling is 3m and the vertical height of the door is 2m, if the thickness of the sweeping robot itself is negligible, the obtained vertical height of the lower surface of the door frame can be 2m.
  • the detected depth information may remain at 3.00 m ± 0.02 m (the error range of 0.02 m is only an example in this embodiment and does not limit the range), and the acquired depth information sequence may be {3.01 m, 2.99 m, ..., 2.98 m, 3.00 m}.
  • the depth information detected by the single-point ToF sensor is the height information of the lower surface of the door frame from the ground.
  • the depth information may be 2.00 m at this moment.
  • the depth information is significantly different from the other depth information in the depth information sequence, so the sweeping robot can recognize that what is above the current position is not the ceiling.
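  • For illustration only, a minimal sketch of flagging such readings; the ceiling height and error range are the example values used above and are not limiting.

```python
def detect_suspended_space(depth_sequence_m, ceiling_height_m=3.0, tol_m=0.02):
    """Return the (index, depth) pairs in a depth information sequence whose
    deviation from the expected ceiling height exceeds the error range; such a
    drop in the measured height suggests a suspended space above the robot
    (e.g. the lower surface of a door frame or a table top)."""
    return [(i, d) for i, d in enumerate(depth_sequence_m)
            if abs(d - ceiling_height_m) > tol_m]

# e.g. the 2.00 m reading in {3.01, 2.99, 2.00, 3.00} indicates a door frame.
anomalies = detect_suspended_space([3.01, 2.99, 2.00, 3.00])
```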
  • the depth information templates of multiple physical objects are stored in the map database in advance, and the depth information obtained by detection is compared or matched with the templates, so as to determine whether the physical object corresponding to the depth information is a certain physical object in the templates.
  • the mobile device can also be a service robot with a certain height.
  • the depth information detected by the service robot may be the distance between the detection position and the depth detection device on the top of the service robot, or the like. Therefore, the depth information may be the value obtained by subtracting the height of the service robot from the actual vertical height of the physical object. For example, assuming that the vertical height of the ceiling is 3 m and the height of the service robot itself is 1 m, the acquired depth information sequence may be {2.01 m, 1.99 m, ..., 1.98 m, 2.00 m}. Of course, it is not limited to this, and this is only an example of a possible implementation.
  • the sweeping robot can, according to a recognized low physical object such as a bed or a coffee table, select a cleaning strategy of not cleaning under it. For example, when the sweeping robot determines the positional relationship between itself and the bed according to the depth information sequence, it chooses to turn around and clean other areas.
  • when the mobile device moves into or leaves other suspended spaces, it can also identify the corresponding physical object according to the change of the depth information in the depth information sequence. For example, when the mobile device moves into a suspended space under a table or leaves a suspended space under a coffee table, the depth information changes accordingly, whereby the entity above is identified as a table, a coffee table, etc.; the details are not repeated here.
  • in the indoor positioning method for mobile devices provided by the present application, at least two images are captured by the camera device and the image features in the two images are matched; based on the pixel position offset of the matched image features and on the spatial scale parameter obtained from the depth information, acquired by the depth detection device, of the feature object in the indoor space corresponding to the image feature, the position change information of the mobile device in the indoor space is obtained, thereby realizing indoor positioning of the mobile device.
  • the indoor positioning method is used to position the mobile device, which avoids the influence of ground conditions on the positioning result and improves the positioning accuracy.
  • the indoor positioning method provided by the present application can realize multiple functions such as map construction, navigation, and positioning through the camera device, which saves costs.
  • FIG. 6 shows a schematic structural diagram of the mobile device in an embodiment of this application.
  • the mobile device 6 includes a camera 601, a depth detection device 602, and a processing device 603.
  • the various devices may be arranged on the circuit mainboard of the mobile device, and the various devices are directly or indirectly electrically connected to each other to realize data transmission or interaction.
  • the indoor space is a physical space with boundaries, such as indoor environments such as family residences, public places (such as office spaces, shopping malls, hospitals, parking lots, and banks).
  • the indoor space may further include several room dividers.
  • the room divider refers to a facade used to form a space boundary in a physical space, such as a wall, partition, floor-to-ceiling window, and ceiling.
  • the coordinate system established according to the indoor space is the world coordinate system.
  • the origin, x-axis, and y-axis of the world coordinate system can be set on the ground, and the height direction of the indoor space can be used as the z-axis.
  • the unit of the world coordinate system is meter (m); of course, it is not limited to this, for example, according to actual conditions, the unit of the world coordinate system can also be decimeter (dm), centimeter (cm), millimeter (mm) etc. It should be understood that this world coordinate system is only an example, and is not limited to this.
  • the camera 601 is used to capture images.
  • the camera device includes, but is not limited to: a camera, a video camera, a camera module integrated with an optical system or a CCD chip, and a camera module integrated with an optical system and a CMOS chip.
  • the lenses that can be used by the camera or video camera include, but are not limited to: standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses.
  • the embodiment of the present application takes the camera device being a camera as an example for description. However, those skilled in the art should understand that this example does not limit the scope of the specific embodiments.
  • the camera device may capture one or more of a single image, a continuous image sequence, a non-continuous image sequence, or a video.
  • the mobile device caches the captured image in a local storage medium, or transmits it to an external device connected by communication for storage.
  • the communication connection includes a wired or wireless communication connection.
  • the storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, USB flash drives, removable hard disks, or any other medium that can be used to store instructions or desired program code in the form of data structures and that can be accessed.
  • the external device may be a server located in the network, and the server includes, but is not limited to, one or more of a single server, a server cluster, a distributed server group, and a cloud server.
  • the cloud server may be a cloud computing platform provided by a cloud computing provider.
  • the types of the cloud server include but are not limited to: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) , And Infrastructure-as-a-Service (IaaS).
  • the types of the cloud server include but are not limited to: public cloud (Public Cloud) server, private cloud (Private Cloud) server, and hybrid cloud (Hybrid Cloud) server, etc.
  • the public cloud server is, for example, Amazon Elastic Compute Cloud (Amazon EC2), IBM's Blue Cloud, Google's AppEngine, or the Windows Azure service platform; examples of the private cloud server include the Facebook cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, and the Tencent cloud computing platform.
  • the location where the camera device is installed on the mobile device may be determined according to the type and/or application scenario of the mobile device.
  • the camera device may be provided on the top surface of the mobile robot (for example, the central area of the top surface, the front end of the top surface relative to the central area, or the rear end of the top surface relative to the central area), on the side surface, or at the junction of the top surface and the side surface, to capture images of the working environment of the mobile device for subsequent processing such as object recognition, map construction, real-time positioning, or virtual simulation.
  • when the mobile device is a smart terminal, the camera device may be provided on the outer surface of the smart terminal, for example, in the area near the upper edge of the display side, in the area near the upper edge of the back of the display side, in the central area of the back, and so on.
  • the camera device may also be telescopically arranged inside the smart terminal, and extend out of the surface of the smart terminal when an image needs to be taken, and so on.
  • the number of camera devices can also be set according to actual needs; in some embodiments, the camera device can also be movable, for example, the direction of its optical axis can be adjusted, and images can be captured at the position reached by the movement.
  • the angle of view of the camera device is determined by the parameters of the camera device itself.
  • the parameters of the camera device include internal parameters, external parameters, and distortion parameters, where the internal parameters include, but are not limited to, one or more of the focal length, the physical size corresponding to each pixel, and the pixel center, and the external parameters include, but are not limited to, one or more of the position, rotation direction, and translation matrix of the camera device on the mobile device.
  • the range of the angle of view of the camera device includes, but is not limited to: 10 degrees to 120 degrees.
  • the field of view angle is 10 degrees, 20 degrees, 30 degrees, 40 degrees, 50 degrees, 60 degrees, 70 degrees, 80 degrees, 90 degrees, 100 degrees, 110 degrees, and 120 degrees.
  • It should be noted that the above range is only an example and does not limit the accuracy of the field of view angle to steps of 10 degrees. According to actual design requirements, the accuracy of the field of view angle may be higher, such as 1 degree, 0.1 degree, 0.01 degree, or finer.
  • the optical axis of the camera device is determined by the orientation of the camera device.
  • the orientation of the camera device may be preset, for example, the orientation is a fixed angle set according to the structure of the mobile device.
  • the orientation of the camera device can also be manually adjusted according to actual needs, or adjusted by the control system of the mobile device. For example, the camera device can adjust the horizontal and pitch angles by means of a pan/tilt.
  • the range of the optical axis of the camera device includes, but is not limited to: an angle within ±30° relative to the height direction of the indoor space, or an angle of 60° to 120° relative to the ground.
  • the angles of the optical axis of the camera device relative to the vertical are -30 degrees, -29 degrees, -28 degrees, -27 degrees...-2 degrees, -1 degrees, 0 degrees, 1 degrees, and 2 degrees. ...29 degrees, or 30 degrees.
  • the included angle of the optical axis of the imaging device with respect to the ground plane is 60 degrees, 61 degrees, 62 degrees...89 degrees, 90 degrees, 91 degrees...119 degrees, 120 degrees.
  • the included angle between the optical axis of the above-mentioned camera device and the vertical or horizontal line is only an example, and is not limited to an angle accuracy of 1 degree. According to actual design requirements, the angle accuracy can be higher, such as reaching 0.1 degrees, 0.01 degrees, or finer.
  • the world coordinate system is similarly transformed to form a map coordinate system.
  • the unit of the map coordinate system may be a custom voxel unit, and the voxel unit may be related or unrelated to the length unit.
  • the unit of length is, for example, meters, decimeters, centimeters, millimeters, etc.
  • the map coordinate system may be a coordinate system in a virtual space, or a map coordinate system constructed by a mobile device for the physical space.
  • each pixel in the image taken by the camera device is mapped to the virtual three-dimensional space of the map coordinate system.
  • the map coordinate system overlaps with the camera coordinate system at the initial position of the mobile device, and by analyzing features in images taken at preset intervals and preset voxel units, constructing and moving A map of the virtual space corresponding to the physical space where the device is located.
  • Figure 9 is shown as a schematic diagram of map coordinates describing the positional relationship of the mobile device relative to the measurement point M on the lamp when the mobile device is at position O and position A respectively in the map coordinate system, where the coordinate system XYZ is the map coordinate system, O is the origin of the map coordinate system.
  • the mobile device takes two images containing the same real measurement point M'(not shown) of the lamp in the physical space at a preset time interval.
  • the distance moved by the mobile device during the preset time interval corresponds to the unit distance D in the map coordinate system, and the matching image features feature1 and feature2, corresponding to the measurement point M', in the two pictures Pic1 and Pic2 are used to determine the coordinates (X_M, Y_M, Z_M) of the position M in the map coordinate system.
  • a plurality of matching image feature pairs in the two pictures Pic1 and Pic2 can be used to construct a transformation matrix equation to obtain the position and posture changes; one common way of solving such an equation is sketched below.
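  • The sketch below is one common computer-vision approach (using OpenCV, which is an assumption of this illustration and not necessarily the solver intended by this application) to recover the rotation and the translation direction from several matched feature pairs; the translation scale would then be fixed using the depth reading.

```python
import numpy as np
import cv2  # using OpenCV here is an assumption of this illustration

def relative_pose(pts1, pts2, fx, fy, cx, cy):
    """Recover the rotation and the translation direction between the two
    shots from several matched feature pairs by estimating and decomposing
    the essential matrix (pts1, pts2: (N, 2) pixel coordinates, N >= 5)."""
    K = np.array([[fx, 0, cx],
                  [0, fy, cy],
                  [0,  0,  1]], dtype=float)
    p1 = np.asarray(pts1, dtype=float)
    p2 = np.asarray(pts2, dtype=float)
    E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
    # t is only defined up to scale; the depth reading (spatial scale
    # parameter) can be used afterwards to fix the metric translation length.
    return R, t
```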
  • the matching image features of the two images can be used to construct the physical space map in the virtual map coordinate system.
  • the difference from a map constructed by measurement is that its unit distance D has nothing to do with the actual moving distance of the mobile device.
  • take the map coordinate system as an example of an indoor space coordinate system constructed by the mobile device using measurements during a previous movement.
  • the mobile device matches the currently captured image with the image captured during the previous movement, and uses the coordinates of the location where the previous image was taken to determine the position coordinates of each pixel in the currently captured image in the map coordinate system.
  • the coordinates of the corresponding points in the world coordinate system and the map coordinate system can be converted and calculated according to the internal parameters and external parameters of the camera device.
  • the step of calibrating the camera device may also be included.
  • the calibration is to determine the position of the pixel point where the calibration point is mapped to the image by photographing the geometric pattern of the calibration point in the measurement space.
  • the calibration method includes but is not limited to one or more of the traditional camera calibration method, the active vision camera calibration method, and the camera self-calibration method.
  • the actual object is mapped to a pixel point/pixel point set in the image, and each pixel point corresponds to the voxel point coordinates (x, y, z) in the map coordinate system.
  • if the position on the actual object corresponding to the voxel point is known or measurable in the indoor space, a known proportional relationship can be established between that physical length and the coordinate value on any axis of the voxel point coordinates (x, y, z) (for example, when the physical length is parallel to or consistent with the axis direction, or when the component of the physical length on the axis can be calculated). Since the ratios between x, y, and z are fixed, the proportional relationship can accordingly be extended to the spatial scale parameter for coordinate conversion between the map coordinate system and the world coordinate system of the indoor space, as sketched below.
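  • An illustrative sketch of deriving that conversion scale from a known physical length; the numbers are hypothetical examples only.

```python
def scale_from_known_length(voxel_delta, physical_length_m):
    """If a physical length on the actual object is known or measurable and is
    parallel to one axis of the map coordinate system, the ratio between the
    corresponding voxel-coordinate difference and that length gives the scale
    (voxel units per metre) for converting map coordinates to world coordinates."""
    if physical_length_m <= 0:
        raise ValueError("physical length must be positive")
    return voxel_delta / physical_length_m

# e.g. a 0.6 m lamp edge spans 120 voxel units along the x axis -> 200 units/m
units_per_metre = scale_from_known_length(120.0, 0.6)
```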
  • the pixel points may be selected from image features to facilitate identification.
  • the mobile equipment further includes a depth detection device.
  • the depth detection device includes, but is not limited to: one or more of an infrared distance measuring device, a laser distance measuring device, and an ultrasonic sensing device.
  • the infrared distance measuring device continuously emits modulated infrared light to form a reflection after irradiating an object, and then receives the reflected light, and calculates the depth information by calculating the time difference or phase difference between infrared light emission and reception.
  • the depth detection device may be a single-point ToF (Time of Flight) sensor, and the number of ToF sensors may be one or more.
  • the number of ToF sensors is one, which is arranged on one side of the camera device.
  • the number of ToF sensors is two, which are respectively symmetrically arranged on opposite sides of the camera device.
  • the laser distance measuring device has a laser signal transmitter and a laser signal receiver.
  • the laser signal transmitter is used to emit a beam of laser light, which forms a reflection after it is irradiated on an object.
  • the reflected laser light is then received by the laser signal receiver.
  • the depth information is calculated.
  • the number of the laser distance measuring device may be one or more.
  • the number of the laser distance measuring device may be four, six or eight, which are respectively arranged symmetrically on opposite sides of the camera device. .
  • the ultrasonic sensing device emits ultrasonic waves in a certain direction and starts timing at the same time as the transmitting time.
  • the ultrasonic waves are reflected when they encounter an obstacle while propagating in the air; the ultrasonic sensing device receives the reflected waves and stops timing, and the depth information is calculated from the time recorded by the timer.
  • the depth detection device 602 of the mobile device is used to obtain depth information of the position of at least one detection location in the indoor space, and the depth information indicates the distance between the detection location and the depth detection device.
  • the detection position corresponds to a pixel unit in the image, and the pixel unit includes one or more pixel points corresponding to the detection position mapping in the image.
  • the position of the pixel unit in the image is fixed in advance according to the angle of the imaging device and the depth detection device.
  • the detection position of the depth detection device can be calibrated so that the detection position of the depth detection device is in the central area of the field of view of the camera device, and correspondingly, the pixel unit is located in the central area of the captured image.
  • calibration can also be used to make the pixel unit fall at another known position in the image.
  • the obtained depth information may be the distance information from the depth detection device to the ceiling.
  • the depth detection device is a single-point ToF sensor and points vertically to the ceiling
  • the obtained depth information is the vertical height from the depth detection device to the ceiling.
  • the detection position is located on an object, such as a room divider in an indoor space or other indoor objects, and the obtained depth information is the distance information from the depth detection device to the surface of the object.
  • the detection position of the depth detection device may fall within the field of view of the camera device, so that the depth information of the pixel unit corresponding to the detection position in the image is obtained directly; alternatively, the detection position may not be in the image, but if the physical structural relationship in the indoor space between the detection position and the area captured in the image is known in advance, the depth information of one or more pixel units in the image can also be calculated on this basis.
  • the image corresponds to the area A of the ceiling in the indoor space
  • the detection position a of the depth detection device can be located in area A, so as to obtain the depth information of the pixel unit corresponding to position a in the image; alternatively, the detection position b of the depth detection device can be located in area B of the ceiling outside area A, and it can be inferred that the depth information of the pixel unit corresponding to a position a' in area A that lies on the same plane as position b is the depth information of b.
  • the size of the range of the pixel unit is determined according to the measurement range and the measurement angle of the depth detection device.
  • the range of the emission and reception axis of the depth detection device includes, but is not limited to: 0 degrees to ±5 degrees relative to the vertical.
  • the angle between the axis of the depth detection device and the vertical is 0 degrees.
  • the angle between the axis of the depth detection device and the vertical is -5 degrees, -4 degrees, -3 degrees, -2 degrees, -1 degrees, 1 degree, 2 degrees, 3 degrees, 4 degrees, or 5 degrees.
  • the included angle between the axis of the depth detection device and the vertical or horizontal line is only an example, and does not limit the included angle accuracy to a range of 1 degree. According to the actual design requirements of mobile equipment, the included angle The accuracy can be higher, such as reaching 0.1 degrees, 0.01 degrees or more.
  • the pixel unit can roughly occupy 3*3 pixels, 4*4 pixels, 5*5 pixels, 6*6 pixels, 7*7 pixels, or 8*8 pixels A square pixel area; or a pixel unit can roughly occupy a rectangular pixel area such as 3*4 pixels, 4*5 pixels, and 5*7 pixels.
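  • As an illustrative geometric sketch only (under a pinhole-camera assumption; the focal length and beam angle values are hypothetical), the rough pixel span of such a pixel unit can be estimated from the depth sensor's beam angle:

```python
import math

def pixel_unit_span(focal_px, beam_full_angle_deg):
    """Rough span, in pixels, of the pixel unit covered by the depth sensor's
    spot under a pinhole-camera model: the spot diameter at distance H is
    about 2*H*tan(a/2), which projects to about 2*f*tan(a/2) pixels,
    independently of H."""
    half_angle = math.radians(beam_full_angle_deg) / 2.0
    return 2.0 * focal_px * math.tan(half_angle)

# e.g. a 500 px focal length and a 1-degree beam give roughly 8.7 px,
# i.e. approximately an 8*8-pixel square pixel unit.
span_px = pixel_unit_span(500.0, 1.0)
```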
  • the beams emitted by the selected infrared distance measuring device, ultrasonic sensing device, etc. can be concentrated as much as possible to make their falling point range smaller; on the one hand, the depth information obtained for the calibration point is then more accurate, and on the other hand, a device such as a single-point ToF sensor is much cheaper than other types of depth detection devices.
  • the depth detection device can simultaneously detect the depth information between the mobile device and multiple physical locations in the indoor space. Using multiple depth information to determine space size parameters can improve positioning accuracy.
  • the relative positional relationship between the camera device and the depth detection device may be preset.
  • the depth detection device may be one, which is arranged on one side of the camera device, and the measured depth information corresponds to one pixel unit in the image taken by the camera device.
  • there may also be multiple depth detection devices, for example four arranged symmetrically around the camera device, in which case the four pieces of depth information measured correspond to four pixel units in the image taken by the camera device.
  • the image features include, but are not limited to: shape features, grayscale features, and the like.
  • shape features include, but are not limited to, one or more of corner point features, edge features, straight line features, and curve features.
  • the grayscale features include, but are not limited to: one or more of grayscale jump features, grayscale values higher or lower than a grayscale threshold, and area sizes in the image frame that include a preset grayscale range, etc. .
  • the depth information relative to the indoor object corresponding to the image feature can be obtained by the depth detection device.
  • the spatial scale parameter is determined according to the depth information, detected by the depth detection device, of the image feature and the position information of the image feature in the map coordinate system.
  • the spatial scale parameter is used to determine the proportional relationship between the depth information measured in the indoor space and the height information corresponding to the image feature in the map coordinate system.
  • FIG. 1A shows a schematic diagram of determining depth information of the sweeping robot in a scene of this application.
  • the corner point of a square object such as a ceiling light is detected as the image feature p0.
  • the coordinates of the image feature in the map coordinate system are p0(x0,y0,z0).
  • the x, y, and z axes of the map coordinate system can be parallel to the X, Y, and Z axes of the world coordinate system, respectively, so
  • the current spatial scale parameter can be obtained.
  • the image feature p0 can be mapped to the world coordinate system.
  • the mobile device can also be a service robot with a certain height.
  • Figure 1B shows a schematic diagram of the service robot determining depth information in a scenario of this application.
  • a corner point on the top light corresponds to an image feature p1 on the image
  • the coordinates of the image feature in the map coordinate system are p1 (x1, y1, z1).
  • the depth information detected by the service robot 11 is the distance H1 between the detection position and the depth detection device on the top of the service robot. Therefore, the depth information may be the sum of the distance H1 and the height H2 of the service robot itself.
  • the current spatial scale parameter can be obtained according to the proportional relationship between the depth information (ie, H1+H2) and the height information z1 of the image feature p1.
  • the mobile device may be a smart terminal (for example, AR glasses).
  • FIG. 1C shows a schematic diagram of the intelligent terminal determining depth information in a scenario of this application.
  • the depth detection device points horizontally to a room divider (such as a wall).
  • a corner point on a mural on the wall corresponds to an image feature p2 in the image, and the coordinates of the position corresponding to the image feature in the world coordinate system are P2 (X2, Y2, Z2).
  • the depth information detected by the smart terminal 12 is the distance H3 from the detection position to the depth detection device on the smart terminal 12.
  • a conversion relationship can be constructed between the map coordinate system (for example, the x, y, and z axes shown by the dashed arrows in FIGS. 1A, 1B, and 1C) and the world coordinate system (for example, the X, Y, and Z axes shown by the solid arrows in FIGS. 1A, 1B, and 1C). Therefore, the coordinate position in the map coordinate system corresponding to the image feature is mapped to the coordinate position in the world coordinate system, that is, the coordinates of the image feature in the world coordinate system are obtained.
  • the height information corresponding to each point on the object can also be obtained through mathematical calculation based on the detected depth information and the deflection angle;
  • given the predetermined positional data of the depth detection device and the camera device (for example, one or more of the angle between their axes and the angles between each axis and the height direction of the indoor space), the spatial scale parameter can be calculated; the derivation is not expanded here.
  • the angular relationship between the optical axis of the imaging device and the axis of the depth detection device may also be preset, for example, the optical axis of the imaging device and the axis of the depth detection device are parallel.
  • the included angle between the depth detection device and the optical axis of the imaging device is in a range from 0 degrees to the maximum value of the angle of view of the imaging device.
  • the angle between the depth detection device and the optical axis of the camera device may be 0 degree, 1 degree, 2 degrees...58 degrees , 59 degrees, 60 degrees...118 degrees, 119 degrees, 120 degrees. It should be understood that the included angle is only an example, and does not limit its scope.
  • the angle between the depth detection device and the optical axis of the imaging device is in a range from 0 degree to the minimum value of the angle of view of the imaging device.
  • the angle between the depth detection device and the optical axis of the camera device may be 0 degrees, 1 degree, 2 degrees ... 13 degrees, 14 degrees, 15 degrees ... 28 degrees, 29 degrees, 30 degrees.
  • the angular relationship between the imaging device and the depth detection device may also be such that the optical axis of the imaging device is parallel to the vertical while the axis of the depth detection device forms a certain angle with the vertical; the examples are not exhaustively enumerated here.
  • the distance between the camera device and the depth detection device is not greater than 3 cm.
  • the distance between the camera device and the depth detection device is 1.0cm, 1.1cm, 1.2cm, 1.3cm, 1.4cm, 1.5cm, 1.6cm, 1.7cm, 1.8cm, 1.9cm, 2.0cm, 2.1cm , 2.2cm, 2.3cm, 2.4cm, 2.50cm, 2.6cm, 2.7cm, 2.8cm, 2.9cm, 3.0cm.
  • the above-mentioned distance between the imaging device and the depth detection device is only an example and is not limited to an accuracy of 0.1 cm. According to the actual design requirements of the mobile device, the accuracy of the distance can be adjusted, for example, to an accuracy of 1 cm, or increased to reach 0.01 cm, 0.001 cm, or finer.
  • the processing device 603 is connected to the depth detection device and the camera device of the mobile device. It should be understood that the various devices may be arranged on the circuit mainboard of the mobile device and are directly or indirectly electrically connected to each other to realize data transmission or interaction.
  • the data transmission includes wireless network transmission (such as one or more of TDMA, CDMA, GSM, PHS, and Bluetooth), wired network transmission (such as private network, ADSL network, and cable modem network, etc.) One or more), or interface transmission (for example, obtained from storage media such as flash memory, U disk, mobile hard disk, optical disk, and floppy disk through the interface), etc.
  • the processing device 603 is used to execute the indoor positioning method described in the embodiment corresponding to FIG. 2: acquiring the first image and the second image respectively captured by the camera device at different positions, and determining the position change information of the mobile device between the different positions according to the current spatial scale parameter and the pixel position offset of the matched image features in the first image and the second image. It should be noted that, for the specific process and technical effects of the indoor positioning method, please refer to the embodiment corresponding to FIG. 2 of the present application, which will not be repeated here.
  • the mobile device is a mobile robot.
  • the mobile robot is a machine device that automatically performs specific tasks. It can accept commands from people, run pre-arranged programs, or act according to principles and guidelines formulated with artificial intelligence technology. This type of mobile robot can be used indoors or outdoors, can be used in industry or home, can be used to replace security patrols, replace people to clean the ground, can also be used for family companions, auxiliary office, etc.
  • the mobile robots include, but are not limited to, one or more of drones, industrial robots, home-companion mobile devices, medical mobile devices, cleaning robots, smart vehicles, patrol mobile devices, and other devices that can move autonomously.
  • the mobile robot also includes a mobile device for receiving a piece of movement control information and performing a corresponding movement operation.
  • the movement control information includes, but is not limited to: one or more of the moving distance, the moving direction, the moving speed, and the acceleration. For example, executing the movement control information may cause the mobile device to perform the following operation: walk d meters in the southeast direction.
  • the movement control information may be sent by the processing device, or may be generated by receiving an instruction transmitted by a server or the cloud. For example, by using the indoor positioning method, the mobile device determines that it has deviated from a preset navigation route. Therefore, the processing device generates the movement control information according to the distance between the current position and the navigation route and the offset direction. , And send it to the mobile device to control the mobile device to perform the corresponding mobile operation. In another example, the mobile device receives movement control information including a travel distance and direction generated from a server, and executes a movement operation according to the movement control information.
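  • For illustration only, a minimal sketch of deriving such movement control information (a distance and a heading back to the nearest point of the preset route) from the positioning result; the coordinate and heading conventions are assumptions of this sketch.

```python
import math

def movement_control_from_deviation(current_xy, route_point_xy):
    """Derive movement control information (distance to travel and a
    compass-style heading, 0 = +Y, 90 = +X) that brings the device from its
    current position back to the nearest point of the preset navigation route."""
    dx = route_point_xy[0] - current_xy[0]
    dy = route_point_xy[1] - current_xy[1]
    return {
        "distance_m": math.hypot(dx, dy),
        "heading_deg": math.degrees(math.atan2(dx, dy)) % 360.0,
    }

# e.g. device at (1.0, 1.0) m, nearest route point at (1.3, 0.6) m
cmd = movement_control_from_deviation((1.0, 1.0), (1.3, 0.6))
```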
  • the mobile device includes at least one drive unit, such as a left-wheel drive unit for driving a left-hand drive wheel of the mobile device and a right-wheel drive unit for driving a right-hand drive wheel of the mobile device.
  • the driving unit may include one or more processors (CPU) or micro processing units (MCU) dedicated to controlling the driving motor.
  • the micro-processing unit is used to convert the information or data provided by the processing device into an electrical signal for controlling the drive motor, and control the rotation speed, steering, etc. of the drive motor according to the electrical signal to adjust the movement.
  • the information or data is the deflection angle determined by the processing device.
  • the processor in the drive unit can be shared with the processor in the processing device or can be set independently.
  • the drive unit serves as the slave processing device, the processing device serves as the master device, and the drive unit performs movement control based on the control of the processing device.
  • the drive unit is shared with the processor in the processing device.
  • the drive unit receives the data provided by the processing device through the program interface.
  • the driving unit is used for controlling the driving wheel based on the movement control information provided by the processing device.
  • the types of the mobile robots are also different.
  • the mobile robot is a sweeping robot.
  • Sweeping robots can also be called autonomous cleaners, automatic sweepers, smart vacuum cleaners, etc. They are a type of smart household appliances that can clean, vacuum, and wipe the floor.
  • the sweeping robot can be controlled by a human (the operator holds the remote control or uses the APP loaded on a smart terminal) or completes the floor cleaning work in the room by itself according to certain set rules, and can clean hair, dust, debris, and other litter on the ground.
  • the cleaning robot further includes a cleaning device for performing cleaning operations.
  • the cleaning device includes, but is not limited to: a side brush (or edge brush, side sweep, etc.) arranged on at least one side of the bottom of the cleaning robot, a rolling brush arranged near the center of the bottom of the cleaning robot, and a dust collector for collecting ground debris.
  • during the movement, the sweeping robot stirs up or picks up ground debris such as hair, dust, and litter through the rolling brush, and then draws the ground debris into the dust suction port arranged above the rolling brush by the suction of the fan, so as to collect the debris and complete the cleaning operation.
  • the mobile device is a smart terminal
  • the smart terminal includes, but is not limited to, one or more of electronic devices such as a Head Mounted Display (HMD), smart glasses, smart bracelets, smart phones, tablet computers, and notebook computers.
  • the smart terminal is an AR device (such as an AR head-mounted device, a smart phone, etc.), and the AR device acquires data in the physical space through a camera and a depth detection device, and analyzes and reproduces it through a processing device.
  • the augmented reality device uses the camera device, the depth detection device, the inertial measurement unit (IMU), and other sensors on the device to update the location change information of the device in the physical space in real time, thereby integrating the virtual scene with the real scene to provide the operator with an immersive perspective experience.
  • for details, refer to FIG. 1C, which will not be repeated here.
  • the smart terminal further includes an interactive device, which is used to collect the operator's interactive instructions (in forms including, but not limited to, one or more of voice instructions, key instructions, touch instructions, eye movement instructions, gesture instructions, etc.) and generate control signals to achieve human-computer interaction.
  • the interactive device includes, but is not limited to: one or more of an eye tracker, an infrared sensor, a camera, a microphone, and various sensors.
  • the present application also provides a control system for a mobile device, which is used to control the mobile device described in any embodiment corresponding to FIG. 7 to execute the indoor positioning method described in any embodiment corresponding to FIG. 2.
  • the control system of the mobile device can be implemented by including software and hardware in a computer device.
  • the computer device may be any computing device with mathematical and logical operations and data processing capabilities, including but not limited to: personal computer equipment, single server, server cluster, distributed server, cloud server, etc.
  • the cloud server may be a cloud computing platform provided by a cloud computing provider.
  • the types of the cloud server include but are not limited to: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) , And Infrastructure-as-a-Service (IaaS).
  • the types of the cloud server include but are not limited to: public cloud (Public Cloud) server, private cloud (Private Cloud) server, and hybrid cloud (Hybrid Cloud) server, etc.
  • the public cloud server is, for example, Amazon Elastic Compute Cloud (Amazon EC2), IBM's Blue Cloud, Google's AppEngine, or the Windows Azure service platform; examples of the private cloud server include the Facebook cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, and the Tencent cloud computing platform.
  • FIG. 7 shows a schematic structural diagram of a control system of a mobile device according to an embodiment of the present application.
  • the control system 7 of the mobile device includes an interface device 701, a storage device 702, and a processing device 703.
  • each of the devices can be arranged on the circuit board of the mobile device, and the devices are directly or indirectly electrically connected to each other to realize data transmission or interaction.
  • the data transmission includes wireless network transmission (such as one or more of TDMA, CDMA, GSM, PHS, and Bluetooth), wired network transmission (such as private network, ADSL network, and cable modem network, etc.) One or more), or interface transmission (for example, obtained from storage media such as flash memory, U disk, mobile hard disk, optical disk, and floppy disk through the interface), etc.
  • the control system described in the embodiments of the present application is only an application example; the system may have more or fewer components than those shown in the figure, or have a different component configuration. Moreover, the camera device, the depth detection device, the interface device, the storage device, and the processing device do not necessarily have to be separate components; for example, part or all of the camera device and the depth detection device can be integrated with the interface device, the storage device, and the processing device, or part or all of the interface device and the storage device may be integrated with the processing device, which is not limited here.
  • the interface device 701 is used to connect the depth detection device and the camera device in the mobile device to perform data transmission with them. There is a predetermined angular relationship between the axis of the depth detection device and the optical axis of the camera device, so that the depth information measured by the depth detection device corresponds to a pixel unit in the image captured by the camera device.
  • the storage device 702 is used to store at least one program.
  • the processing device 703 executes the program after receiving the execution instruction.
  • the storage device 702 may also include memory remote from one or more processors, such as network-attached memory accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
  • the memory controller can control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
  • the memory optionally includes a high-speed random access memory, and optionally also includes a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory is optionally controlled by the memory controller by other components of the device, such as the CPU and peripheral interfaces.
  • the memory may include volatile memory (Volatile Memory), such as random access memory (RAM); the memory may also include non-volatile memory (Non-Volatile Memory), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the storage device 702 may include at least one software module stored in the storage device 702 in the form of software or firmware (Firmware).
  • the software module is used to store images taken by the camera device, a map of the indoor space where the mobile device is located, a map database, and various programs that can be executed by the mobile device, for example, a path planning program of the mobile device;
  • the processing device 703 is used to execute the program, thereby controlling the mobile device to perform operations.
  • the processing device 703 can be electrically connected with the storage device 702 and the interface device 701 through one or more communication buses or signal lines, and is used for calling and executing the at least one program so as to coordinate the storage device, the depth detection device, and the camera device to execute and implement the indoor positioning method described in the embodiment corresponding to FIG. 2.
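  • Purely as a hedged sketch of how such coordination could look in software, the loop below reads an image and a depth sample through the interface device and hands them to a positioning routine; the class, method, and parameter names (read_image, read_depth, estimate_motion, save_pose) are invented for illustration and are not defined in this application.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0

def positioning_loop(interface, storage, estimate_motion):
    """Illustrative coordination loop; all attribute names are hypothetical.

    interface is assumed to expose read_image() and read_depth(), storage to
    persist the estimated pose, and estimate_motion to implement the indoor
    positioning method of FIG. 2.
    """
    pose = Pose()
    prev_image = interface.read_image()
    while True:
        image = interface.read_image()
        depth = interface.read_depth()   # depth at the calibrated pixel unit
        dx, dy = estimate_motion(prev_image, image, depth)
        pose.x += dx                     # accumulate the position change
        pose.y += dy
        storage.save_pose(pose)          # e.g. for map building or path planning
        prev_image = image
```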
  • the processing device 703 includes an integrated circuit chip with signal processing capability, or includes a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), discrete gate or transistor logic devices, or discrete hardware components, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor, for example, a central processing unit (CPU).
  • the processing device 703 is used for coordinating the methods executed by the storage device 702 and the interface device 701.
  • control system of the mobile device described in the embodiments of the present application is only an application example, and the components of the device may have more or fewer components than those shown in the figure, or have different component configurations.
  • the various components drawn and illustrated can be implemented by hardware, software, or a combination of software and hardware, including one or more signal processing and/or application specific integrated circuits.
  • the present application also provides a control system for a mobile device.
  • FIG. 8 shows a schematic structural diagram of a control system for another mobile device in an embodiment of this application.
  • the control system 8 includes a camera device 801, a depth detection device 802, an interface device 803, a storage device 804, and a processing device 805.
  • each of the devices can be arranged on the circuit board of the mobile device, and the devices are directly or indirectly electrically connected to each other to realize data transmission.
  • the data transmission includes wireless network transmission (such as one or more of TDMA, CDMA, GSM, PHS, and Bluetooth), wired network transmission (such as one or more of a dedicated network, an ADSL network, and a cable modem network), or interface transmission (for example, data obtained through an interface from storage media such as a flash memory, a USB flash drive, a mobile hard disk, an optical disk, and a floppy disk), etc.
  • the control system of the mobile device may be implemented by including software, hardware, or a combination of software and hardware in a computer device, including one or more signal processing and/or application specific integrated circuits.
  • the computer device may be any computing device with mathematical and logical operations and data processing capabilities, including but not limited to: personal computer equipment, single server, server cluster, distributed server, cloud server, etc.
  • the camera 801 is used to capture an image of an indoor environment.
  • the camera device includes, but is not limited to: a camera, a video camera, a camera module integrated with an optical system or a CCD chip, and a camera module integrated with an optical system and a CMOS chip.
  • the lenses that can be used by the camera or video camera include, but are not limited to: standard lenses, telephoto lenses, fisheye lenses, and wide-angle lenses. It should be understood that the description of the camera device can refer to the foregoing embodiments, and the principles and technical effects are similar, and will not be repeated here.
  • the depth detection device 802 of the mobile device is used to detect the depth information from a detection position (such as one or more detection points) in the indoor space.
  • for example, when the detection position is located on the ceiling of the indoor space, the obtained depth information is the distance information from the depth detection device to the ceiling, for example the vertical height from the depth detection device to the ceiling.
  • when the detection position is located on a room divider of the indoor space, for example on a wall elevation, the obtained depth information is the distance information from the depth detection device to the wall elevation.
  • when the detection position is located on the surface of an object in the indoor space, the obtained depth information is the distance information from the depth detection device to the surface of the object, and so on.
  • the depth detection device is used to obtain depth information of a detection position in the indoor space that corresponds to at least one pixel unit in the image, and the depth information indicates the distance between the detection position and the depth detection device.
  • the detection position corresponds to a pixel unit in the image, and the pixel unit includes one or more pixel points corresponding to the detection position mapping in the image.
  • the position of the pixel unit in the image is fixed in advance according to the angle between the camera device and the depth detection device.
  • the detection position of the depth detection device can be calibrated so that the detection position of the depth detection device is in the central area of the field of view of the camera device, and correspondingly, the pixel unit is located in the central area of the captured image.
  • calibration can also be used to make the pixel unit fall at another known position in the image.
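  • As a minimal numerical sketch only (assuming an upward-facing pinhole camera whose pixel unit is calibrated to the image center), the depth read at that pixel unit can be converted into a metres-per-pixel scale that later turns feature offsets into displacement; the function and parameter names below are illustrative assumptions.

```python
def spatial_scale(ceiling_height_m, focal_length_px):
    """Metres per pixel for features lying on the ceiling plane.

    Assumes a pinhole camera looking straight up: ceiling_height_m is the
    vertical distance measured by the depth detection device at the
    calibrated pixel unit, and focal_length_px is the camera focal length
    expressed in pixels (both names are illustrative).
    """
    return ceiling_height_m / focal_length_px

# Example: a 2.6 m ceiling and a 500-pixel focal length give about 5.2 mm per
# pixel, so a 40-pixel feature offset corresponds to roughly 0.21 m of travel.
scale = spatial_scale(2.6, 500.0)
print(scale, 40 * scale)
```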
  • the interface device 803 is connected to the depth detection device 802 and the camera 801 to perform data transmission with the depth detection device 802 and the camera 801.
  • the storage device 804 is used to store at least one program.
  • the storage device 804 may also include a memory remote from the one or more processors, such as a network attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
  • the memory controller can control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
  • the memory optionally includes a high-speed random access memory, and optionally also includes a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices.
  • the memory may include volatile memory (Volatile Memory), such as random access memory (RAM); the memory may also include non-volatile memory (Non-Volatile Memory), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the storage device 804 may include at least one software module stored in the storage device 804 in the form of software or firmware (Firmware).
  • the software module is used to store images taken by the camera device, a map of the indoor space where the mobile device is located, a map database, and various programs that can be executed by the mobile device, for example, a path planning program of the mobile device;
  • the processing device 805 is used to execute the program, thereby controlling the mobile device to perform operations.
  • the processing device 805 can be electrically connected to the interface device and the storage device through one or more communication buses or signal lines, and is used for calling and executing the at least one program so as to coordinate the camera device 801, the depth detection device 802, the interface device 803, and the storage device 804 to execute and implement the indoor positioning method described in the embodiment corresponding to FIG. 2.
  • the processing device 805 includes an integrated circuit chip with signal processing capability, or includes a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), discrete gate or transistor logic devices, or discrete hardware components, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor, for example, a central processing unit (CPU).
  • control system described in the embodiments of the present application is only an application example, and the components of the system may have more or fewer components than those shown in the figure, or have different component configurations.
  • part or all of the camera device, the depth detection device, the interface device, the storage device, and the processing device can be integrated into a positioning module, so as to be more conveniently embedded in mobile devices such as mobile robots or smart terminals.
  • the present application also provides a computer readable and writable storage medium that stores a computer program that, when executed, realizes the indoor positioning method of the mobile device described in the above example with respect to the embodiment in FIG. 2.
  • when the computer program is sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; in implementation, the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of this application.
  • the computer readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, a USB flash drive, a mobile hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • in addition, any connection is properly termed a computer-readable medium. For example, if the instructions are sent from a website, a server, or another remote source using a coaxial cable, a fiber optic cable, a twisted pair, a digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
  • computer readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are intended for non-transitory, tangible storage media.
  • the magnetic disks and optical disks used in the application include compact disks (CD), laser disks, optical disks, digital versatile disks (DVD), floppy disks and Blu-ray disks.
  • magnetic disks usually reproduce data magnetically, while optical disks reproduce data optically with lasers.
  • each block in the flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code includes one or more executable instructions for realizing the prescribed logical functions.
  • each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Disclosed are an indoor positioning method for a mobile device (6), the mobile device (6), and a control system (7, 8). The indoor positioning method comprises: acquiring a first image (C1) and a second image (C2) respectively captured by a camera device (601, 801) at different positions, wherein a pixel unit of the second image (C2) shows a second image feature (b1, b3) corresponding to a first image feature (a1, a2) of the first image, and the first image feature (a1, a2) and the second image feature (b1, b3) form an image feature pair; and determining position change information of the mobile device (6) between the different positions according to a current spatial scale parameter and a pixel position offset of the image feature pair in the first image (C1) and the second image (C2). The indoor positioning method enables a mobile robot to be positioned accurately, thereby improving the positioning accuracy of the mobile robot.
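To make the abstract's pipeline concrete, here is a hedged Python sketch that matches image features between the first and second images, takes the mean pixel position offset of the feature pairs, and multiplies it by a current spatial scale parameter to obtain a position change. It uses OpenCV's ORB detector purely as one possible feature choice; the abstract does not prescribe any particular detector, and outlier rejection and error handling are omitted.

```python
import numpy as np
import cv2  # OpenCV; ORB is one illustrative feature choice, not mandated here

def position_change(first_image, second_image, scale_m_per_px, max_pairs=50):
    """Estimate the (dx, dy) position change in metres between two positions.

    first_image / second_image: grayscale numpy arrays captured at the two
    positions; scale_m_per_px: the current spatial scale parameter.
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(first_image, None)
    kp2, des2 = orb.detectAndCompute(second_image, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:max_pairs]
    if not matches:
        return None
    # Pixel position offsets of the matched image feature pairs.
    offsets = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                        for m in matches])
    mean_offset_px = offsets.mean(axis=0)
    return mean_offset_px * scale_m_per_px  # metres along the image x / y axes
```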
PCT/CN2020/073307 2020-01-20 2020-01-20 Procédé de positionnement intérieur de dispositif mobile, dispositif mobile et système de commande WO2021146862A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080003090.3A CN112204345A (zh) 2020-01-20 2020-01-20 移动设备的室内定位方法、移动设备及控制系统
PCT/CN2020/073307 WO2021146862A1 (fr) 2020-01-20 2020-01-20 Procédé de positionnement intérieur de dispositif mobile, dispositif mobile et système de commande

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073307 WO2021146862A1 (fr) 2020-01-20 2020-01-20 Procédé de positionnement intérieur de dispositif mobile, dispositif mobile et système de commande

Publications (1)

Publication Number Publication Date
WO2021146862A1 true WO2021146862A1 (fr) 2021-07-29

Family

ID=74034118

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073307 WO2021146862A1 (fr) 2020-01-20 2020-01-20 Procédé de positionnement intérieur de dispositif mobile, dispositif mobile et système de commande

Country Status (2)

Country Link
CN (1) CN112204345A (fr)
WO (1) WO2021146862A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115808170A (zh) * 2023-02-09 2023-03-17 宝略科技(浙江)有限公司 一种融合蓝牙与视频分析的室内实时定位方法
CN116403008A (zh) * 2023-05-29 2023-07-07 广州市德赛西威智慧交通技术有限公司 驾校训练场地的地图采集方法、装置、设备及存储介质
WO2023207610A1 (fr) * 2022-04-25 2023-11-02 追觅创新科技(苏州)有限公司 Procédé et appareil de cartographie, et support de stockage et appareil électronique

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802045B (zh) * 2021-02-24 2022-05-13 燕山大学 一种同步检测图像中平行直线和平行曲线特征的方法
CN115950437B (zh) * 2023-03-14 2023-06-09 北京建筑大学 一种室内的定位方法、定位装置、设备和介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993233A (zh) * 2016-10-26 2018-05-04 中国科学院深圳先进技术研究院 一种坑区域的定位方法及装置
JP2018081008A (ja) * 2016-11-16 2018-05-24 株式会社岩根研究所 基準映像地図を用いた自己位置姿勢標定装置
CN109506658A (zh) * 2018-12-26 2019-03-22 广州市申迪计算机系统有限公司 机器人自主定位方法和系统
CN110018688A (zh) * 2019-04-11 2019-07-16 清华大学深圳研究生院 基于视觉的自动引导车定位方法
CN110058602A (zh) * 2019-03-27 2019-07-26 天津大学 基于深度视觉的多旋翼无人机自主定位方法
CN110622085A (zh) * 2019-08-14 2019-12-27 珊口(深圳)智能科技有限公司 移动机器人及其控制方法和控制系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993233A (zh) * 2016-10-26 2018-05-04 中国科学院深圳先进技术研究院 一种坑区域的定位方法及装置
JP2018081008A (ja) * 2016-11-16 2018-05-24 株式会社岩根研究所 基準映像地図を用いた自己位置姿勢標定装置
CN109506658A (zh) * 2018-12-26 2019-03-22 广州市申迪计算机系统有限公司 机器人自主定位方法和系统
CN110058602A (zh) * 2019-03-27 2019-07-26 天津大学 基于深度视觉的多旋翼无人机自主定位方法
CN110018688A (zh) * 2019-04-11 2019-07-16 清华大学深圳研究生院 基于视觉的自动引导车定位方法
CN110622085A (zh) * 2019-08-14 2019-12-27 珊口(深圳)智能科技有限公司 移动机器人及其控制方法和控制系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023207610A1 (fr) * 2022-04-25 2023-11-02 追觅创新科技(苏州)有限公司 Procédé et appareil de cartographie, et support de stockage et appareil électronique
CN115808170A (zh) * 2023-02-09 2023-03-17 宝略科技(浙江)有限公司 一种融合蓝牙与视频分析的室内实时定位方法
CN116403008A (zh) * 2023-05-29 2023-07-07 广州市德赛西威智慧交通技术有限公司 驾校训练场地的地图采集方法、装置、设备及存储介质
CN116403008B (zh) * 2023-05-29 2023-09-01 广州市德赛西威智慧交通技术有限公司 驾校训练场地的地图采集方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN112204345A (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
WO2021146862A1 (fr) Procédé de positionnement intérieur de dispositif mobile, dispositif mobile et système de commande
US11499832B1 (en) Method for constructing a map while performing work
CN109643127B (zh) 构建地图、定位、导航、控制方法及系统、移动机器人
US9886774B2 (en) Photogrammetric methods and devices related thereto
WO2019232806A1 (fr) Procédé et système de navigation, système de commande mobile et robot mobile
TWI620627B (zh) 機器人及用於機器人之定位之方法
US8976172B2 (en) Three-dimensional scanning using existing sensors on portable electronic devices
EP3063553B1 (fr) Système et procédé de mesure par balayages laser
TWI467494B (zh) 使用深度圖進行移動式攝影機定位
WO2020186493A1 (fr) Procédé et système de navigation et de division d'une région de nettoyage, robot mobile et robot de nettoyage
CN110801180A (zh) 清洁机器人的运行方法及装置
CN111220148A (zh) 移动机器人的定位方法、系统、装置及移动机器人
WO2021143543A1 (fr) Robot et son procédé de commande
US20180173243A1 (en) Movable object and method for controlling the same
Diakité et al. First experiments with the tango tablet for indoor scanning
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
WO2016065063A1 (fr) Procédés photogrammétriques et dispositifs associés à ceux-ci
WO2019001237A1 (fr) Dispositif électronique mobile, et procédé dans un dispositif électronique mobile
WO2022027611A1 (fr) Procédé de positionnement et procédé de construction de carte pour robot mobile, et robot mobile
WO2022127572A1 (fr) Procédé permettant d'afficher une posture d'un robot dans une carte tridimensionnelle, appareil, dispositif, et support de stockage
CN117042927A (zh) 用于优化单目视觉-惯性定位系统的方法和装置
US11835343B1 (en) Method for constructing a map while performing work
US20240069203A1 (en) Global optimization methods for mobile coordinate scanners
JP7266128B2 (ja) 3次元マップ生成方法及びシステム
WANG 2D Mapping Solutionsfor Low Cost Mobile Robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915397

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915397

Country of ref document: EP

Kind code of ref document: A1