WO2019090833A1 - Positioning system and method, and robot using the same - Google Patents

Positioning system and method, and robot using the same

Info

Publication number
WO2019090833A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
posture
positioning
features
Prior art date
Application number
PCT/CN2017/112412
Other languages
English (en)
French (fr)
Inventor
崔彧玮
侯喜茹
曹开齐
李轩
卫洋
Original Assignee
珊口(上海)智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 珊口(上海)智能科技有限公司
Priority to EP17931779.7A (EP3708954A4)
Priority to US16/043,746 (US10436590B2)
Publication of WO2019090833A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3848 Data obtained from both position sensors and additional sensors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • The present application relates to the field of indoor positioning technology, and in particular to a positioning system, a positioning method, and a robot to which they are applicable.
  • A mobile robot is a machine that performs work automatically. It can accept human commands, run pre-programmed procedures, or act according to principles formulated with artificial intelligence techniques. Mobile robots can be used indoors or outdoors, in industry or in the home; they can replace people in security inspections and floor cleaning, and can also serve as family companions or office assistants. Because different application fields place different demands on mobile robots, the robots used in each field move in different ways; for example, a mobile robot may move on wheels, walk, or move on tracks.
  • The movement information provided by the sensors is used for simultaneous localization and mapping (SLAM), so as to provide the mobile robot with more accurate navigation capability and allow it to move autonomously more effectively.
  • However, the distance actually traveled by the rollers differs on floors of different materials, which means the map constructed with SLAM technology at such a site may differ considerably from the map of the actual physical space.
  • The purpose of the present application is to provide a positioning system, a positioning method, and a robot to which they are applicable, so as to solve the prior-art problem that the robot cannot be positioned accurately using only the data provided by its sensors.
  • A first aspect of the present application provides a positioning system for a robot, including: a storage device that stores a correspondence between an image coordinate system and a physical space coordinate system; an imaging device that captures image frames during the movement of the robot; and a processing device, connected to the imaging device and the storage device, configured to acquire the positions of matching features in the image frame at the current time and the image frame at the previous time, and to determine the position and posture of the robot according to the correspondence and those positions.
  • The optical axis of the imaging device's field of view is at ±30° with respect to the vertical, or at 60° to 120° with respect to the horizontal.
  • the processing device includes a tracking module coupled to the camera device for tracking locations in the two image frames that include the same feature.
  • the positioning system further includes a motion sensing device coupled to the processing device for acquiring movement information of the robot.
  • The processing device includes an initialization module for constructing the correspondence based on the positions of matching features in two image frames and the movement information acquired from the previous time to the current time.
  • The processing device includes: a first positioning module, configured to determine the position and posture of the robot according to the correspondence and the positions of the matching features; and a first positioning compensation module, for compensating errors in the determined position and posture based on the acquired movement information.
  • the storage device further stores landmark information constructed based on the matched features.
  • The processing device includes: a second positioning module, configured to determine the position and posture of the robot according to the correspondence and the positions of the matching features; and a second positioning compensation module, for compensating errors in the determined position and posture based on the stored landmark information corresponding to the matching features.
  • the processing device includes an update module for updating the stored landmark information based on the matched features.
  • A second aspect of the present application provides a positioning system for a robot, comprising: a motion sensing device for acquiring movement information of the robot; an imaging device for capturing image frames during the movement of the robot; a processing device, connected to the imaging device and the motion sensing device, for acquiring the two image frames of the previous time and the current time and constructing a correspondence between an image coordinate system and a physical space coordinate system according to the positions of matching features in the two image frames and the movement information acquired between the two moments; and a storage device, connected to the processing device, for storing the correspondence.
  • The optical axis of the imaging device's field of view is at ±30° with respect to the vertical, or at 60° to 120° with respect to the horizontal.
  • The processing device is configured to acquire the positions of matching features in the image frame at the current time and the image frame at the previous time, and to determine the position and posture of the robot according to the correspondence and those positions.
  • The processing device includes a tracking module coupled to the imaging device for tracking the positions of the same features in the two image frames.
  • The processing device includes: a first positioning module, configured to determine the position and posture of the robot according to the correspondence and the positions of the matching features; and a first positioning compensation module, for compensating errors in the determined position and posture based on the acquired movement information.
  • the storage device further stores landmark information constructed based on the matched features.
  • The processing device includes: a second positioning module, configured to determine the position and posture of the robot according to the correspondence and the positions of the matching features; and a second positioning compensation module, for compensating errors in the determined position and posture based on the stored landmark information corresponding to the matching features.
  • the processing device includes an update module for updating the stored landmark information based on the matched features.
  • A third aspect of the present application provides a robot comprising: any positioning system as provided in the first aspect above, or any positioning system as provided in the second aspect; a mobile device; and a control device that controls the mobile device to perform moving operations according to the position and posture provided by the positioning system.
  • A fourth aspect of the present application provides a robot positioning method, including: acquiring the positions of matching features in the image frame at the current time and the image frame at the previous time; and determining the position and posture of the robot according to a correspondence and those positions.
  • the correspondence relationship includes: a correspondence between an image coordinate system and a physical space coordinate system.
  • the manner of obtaining the location of the matching feature in the current time image frame and the previous time image frame includes tracking locations of the two image frames that include the same feature.
  • the step of acquiring movement information of the robot is further included.
  • The method further includes constructing the correspondence based on the positions of matching features in the two image frames and the movement information acquired from the previous time to the current time.
  • Determining the position and posture of the robot according to the correspondence and the positions comprises: determining the position and posture of the robot according to the correspondence and the positions of the matching features; and compensating errors in the determined position and posture based on the acquired movement information.
  • Determining the position and posture of the robot according to the correspondence and the positions comprises: determining the position and posture of the robot according to the correspondence and the positions of the matching features; and compensating errors in the determined position and posture based on pre-stored landmark information corresponding to the matching features.
  • the locating method further comprises the step of updating the stored landmark information based on the matched features.
  • A fifth aspect of the present application further provides a method for positioning a robot, comprising: acquiring movement information of the robot during its movement and multiple image frames; acquiring the two image frames of the previous time and the current time, and constructing a correspondence between an image coordinate system and a physical space coordinate system according to the positions of matching features in the two image frames and the movement information acquired between the two moments; and determining the position and posture of the robot using the correspondence.
  • Determining the position and posture of the robot using the correspondence includes: acquiring the positions of matching features in the image frame at the current time and the image frame at the previous time; and determining the position and posture of the robot according to the correspondence and those positions.
  • Obtaining the positions of the matching features in the current-time image frame and the previous-time image frame comprises tracking the positions of the same features in the two image frames.
  • Determining the position and posture of the robot according to the correspondence and the positions comprises: determining the position and posture of the robot according to the correspondence and the positions of the matching features; and compensating errors in the determined position and posture based on the acquired movement information.
  • Determining the position and posture of the robot according to the correspondence and the positions comprises: determining the position and posture of the robot according to the correspondence and the positions of the matching features; and compensating errors in the determined position and posture based on pre-stored landmark information corresponding to the matching features.
  • the locating method further comprises the step of updating the stored landmark information based on the matched features.
  • The positioning system, the positioning method, and the applicable robot of the present application have the following beneficial effects: determining the position and posture of the robot from the positional offset information of the feature points matched in two image frames captured by the imaging device can effectively reduce the error of the movement information provided by the sensors in the robot.
  • In addition, the positional offset information of the feature points matched in the two image frames and the movement information provided by the sensors are used to initialize the correspondence between the image coordinate system and the physical space coordinate system, thereby achieving positioning with a monocular imaging device and effectively solving the problem of sensor error accumulation.
  • FIG. 1 shows a schematic structural view of a positioning system of the present application in an embodiment.
  • FIG. 2 is a schematic diagram showing feature matching in two image frames in the positioning system of the present application.
  • FIG. 3 is a schematic structural view of a positioning system of the present application in still another embodiment.
  • FIG. 4 shows a schematic structural view of a robot of the present application in an embodiment.
  • FIG. 5 shows a schematic structural view of a robot of the present application in still another embodiment.
  • FIG. 6 shows a flow chart of the positioning method of the present application in one embodiment.
  • FIG. 7 shows a flow chart of a positioning method of the present application in still another embodiment.
  • Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • the first predetermined threshold may be referred to as a second predetermined threshold, and similarly, the second predetermined threshold may be referred to as a first predetermined threshold without departing from the scope of the various described embodiments.
  • Both the first preset threshold and the second preset threshold describe a threshold, but unless the context clearly indicates otherwise, they are not the same preset threshold.
  • a similar situation also includes a first volume and a second volume.
  • The mobile robot can, on the one hand, construct map data of the site where it is located and, on the other hand, provide route planning, route adjustment, and navigation services based on the constructed map data. This makes the movement of the mobile robot more efficient.
  • For example, the indoor sweeping robot can combine the built indoor map and positioning technology to predict its distance from the obstacles marked on the indoor map, facilitating timely adjustment of the cleaning strategy.
  • the obstacle may be described by a single mark, or may be marked as a wall, a table, a sofa, a closet or the like based on characteristics such as shape, size, and the like.
  • the indoor sweeping robot can accumulate the positioned positions and postures based on the positioning technique, and construct an indoor map based on the accumulated position and posture changes.
  • patrol robots are usually used in factories, industrial parks, etc.
  • The patrol robot can combine the constructed plant map and positioning technology to predict the distances from its current position to intersections and charging piles, making it convenient to control the movement of the robot's mobile device in a timely manner according to other acquired monitoring data.
  • the positioning system 1 includes a storage device 12, an imaging device 11, and a processing device 13.
  • The storage device 12 includes, but is not limited to, high-speed random access memory and non-volatile memory, for example one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • The storage device 12 may also include memory remote from the one or more processors, for example network-attached storage accessed via RF circuitry or external ports and a communication network (not shown), wherein the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
  • the memory controller can control access to the storage device by other components of the robot, such as the CPU and peripheral interfaces.
  • the imaging device 11 includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
  • the power supply system of the camera device can be controlled by the power supply system of the robot.
  • the camera device 11 starts to take image frames and provides them to the processing device 13.
  • For example, the camera device in the cleaning robot caches the captured indoor image frames in the storage device in a predetermined video format, to be acquired by the processing device.
  • the camera device 11 is for taking an image frame during the movement of the robot.
  • the camera device 11 can be placed on the top of the robot.
  • the camera of the cleaning robot is placed on the middle or edge of its top cover.
  • The optical axis of the imaging device's field of view is at ±30° with respect to the vertical, or at 60° to 120° with respect to the horizontal.
  • For example, the angle of the optical axis of the imaging device of the cleaning robot with respect to the vertical may be -30°, -29°, -28°, -27°, ..., -1°, 0°, 1°, 2°, ..., 29°, or 30°, and the angle with respect to the horizontal may be 60°, 61°, 62°, ..., 119°, or 120°.
  • It should be noted that those skilled in the art will understand that the listed angles between the optical axis and the vertical or horizontal are only examples; the angular precision is not limited to 1° and, according to the design requirements of the actual robot, may be higher, such as 0.1° or 0.01°. Exhaustive examples are not given here.
  • The processing device 13 includes one or more processors. The processing device 13 is operatively coupled to volatile memory and/or non-volatile memory in the storage device 12. The processing device 13 may execute instructions stored in the memory and/or non-volatile storage device to perform operations in the robot, such as extracting features from image frames and locating the robot in a map based on those features. The processor may include one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processing device is also operatively coupled to I/O ports that enable the robot to interact with various other electronic devices, and to an input structure that enables a user to interact with the computing device.
  • the input structure can include buttons, keyboards, mice, trackpads, and the like.
  • the other electronic device may be a mobile motor in the mobile device in the robot, or a slave processor in the robot dedicated to controlling the mobile device and the cleaning device, such as a Microcontroller Unit (MCU).
  • the processing device 13 is coupled to the storage device 12 and the camera device 11 via data lines, respectively.
  • the processing device 13 interacts with the storage device 12 via a data reading and writing technique, and the processing device 13 interacts with the camera device 11 via an interface protocol.
  • the data reading and writing technology includes but is not limited to: a high speed/low speed data interface protocol, a database read and write operation, and the like.
  • the interface protocols include, but are not limited to, an HDMI interface protocol, a serial interface protocol, and the like.
  • the storage device 12 stores a correspondence relationship between an image coordinate system and a physical space coordinate system.
  • the image coordinate system is constructed based on image pixel points, and the two-dimensional coordinate parameters of each image pixel point in the image frame captured by the imaging device 11 can be described by the image coordinate system.
  • the image coordinate system may be a Cartesian coordinate system or a polar coordinate system or the like.
  • The physical space coordinate system is a coordinate system constructed based on positions in the actual two-dimensional or three-dimensional physical space; a physical space position can be described in the physical space coordinate system according to a preset correspondence between an image pixel unit and a unit length (or unit angle).
  • the physical space coordinate system may be a two-dimensional Cartesian coordinate system, a polar coordinate system, a spherical coordinate system, a three-dimensional rectangular coordinate system, or the like.
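The patent leaves the exact form of this correspondence open. As a purely illustrative sketch, for a camera whose image plane stays roughly parallel to the floor, the correspondence could be as simple as a pixels-per-meter scale combined with the robot's heading; the class and method names below are hypothetical and not part of the patent.

```python
import math

class ImageToPhysicalCorrespondence:
    """Toy correspondence between the image coordinate system and the physical
    space coordinate system: one pixels-per-meter scale plus the robot heading."""

    def __init__(self, pixels_per_meter: float):
        self.pixels_per_meter = pixels_per_meter

    def pixel_offset_to_physical(self, du: float, dv: float, heading_rad: float):
        """Convert a pixel offset (du, dv) between two frames into a displacement
        (dx, dy) in meters, rotated into the physical frame by the robot heading."""
        mx = du / self.pixels_per_meter
        my = dv / self.pixels_per_meter
        dx = mx * math.cos(heading_rad) - my * math.sin(heading_rad)
        dy = mx * math.sin(heading_rad) + my * math.cos(heading_rad)
        return dx, dy
```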
  • The correspondence may be pre-stored in the storage device before the robot leaves the factory. However, for a robot used at sites with more complex floors, such as a sweeping robot, the correspondence can also be obtained by field testing at the site of use and then stored in the storage device.
  • the robot further includes a motion sensing device (not shown) for acquiring movement information of the robot.
  • the motion sensing device includes, but is not limited to, a displacement sensor, a gyroscope, a speed sensor, a ranging sensor, a cliff sensor, and the like. During the movement of the robot, the mobile sensing device continuously detects the mobile information and provides it to the processing device.
  • the displacement sensor, gyroscope, speed sensor, etc. can be integrated in one or more chips.
  • the ranging sensor and the cliff sensor may be disposed on a body side of the robot.
  • a ranging sensor in the cleaning robot is disposed at the edge of the housing;
  • a cliff sensor in the cleaning robot is disposed at the bottom of the robot.
  • the movement information that the processing device can acquire includes, but is not limited to, displacement information, angle information, distance information with obstacles, speed information, direction of travel information, and the like.
  • The initialization module in the processing device constructs the correspondence based on the positions of the matching features in the two image frames and the movement information acquired from the previous time to the current time.
  • the initialization module may be a program module whose program portion is stored in the storage device and executed via a call of the processing device. When the correspondence is not stored in the storage device, the processing device invokes an initialization module to construct the correspondence.
  • the initialization module acquires the movement information provided by the mobile sensing device during the movement of the robot and acquires each image frame provided by the imaging device.
  • the initialization module may acquire the movement information and at least two image frames for a short period of time during which the robot moves. For example, the initialization module acquires the movement information and at least two image frames when it is detected that the robot is moving in a straight line. For another example, the initialization module acquires the movement information and at least two image frames when it is detected that the robot is in a turning movement.
  • the initialization module identifies and matches the features in each image frame and obtains the image locations of the matched features in each image frame.
  • Features include, but are not limited to, corner features, edge features, line features, curve features, and the like.
  • The initialization module can obtain the image positions of matching features with the aid of a tracking module in the processing device. The tracking module is used to track the positions of features that appear in both image frames.
  • the initialization module then constructs the correspondence according to the image location and the physical spatial location provided by the movement information.
  • the initialization module may establish the correspondence by constructing a feature coordinate parameter of a physical space coordinate system and an image coordinate system.
  • For example, the initialization module may take the physical space position at which the image frame of the previous moment was captured as the coordinate origin of the physical space coordinate system, match that origin to the corresponding position in the image coordinate system, and thereby construct the correspondence between the two coordinate systems.
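As an illustrative sketch of this initialization step, assuming the correspondence reduces to a single scale factor, that scale could be estimated from the mean pixel shift of the matched features and the displacement reported by the movement sensor over the same interval. The function below is a hypothetical example, not the computation prescribed by the patent.

```python
import math

def estimate_pixels_per_meter(matched_pairs, distance_moved_m):
    """matched_pairs: list of ((u1, v1), (u2, v2)) pixel positions of the same
    feature in the previous and current frames; distance_moved_m: displacement
    reported by the movement sensor over the same interval."""
    if distance_moved_m <= 0 or not matched_pairs:
        raise ValueError("need non-zero movement and at least one matched feature")
    shifts = [math.hypot(u2 - u1, v2 - v1) for (u1, v1), (u2, v2) in matched_pairs]
    return (sum(shifts) / len(shifts)) / distance_moved_m  # scale of the correspondence
```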
  • the working process of the initialization module may be performed based on a user's instruction or transparent to the user.
  • For example, the execution of the initialization module may be initiated when the correspondence is not stored in the storage device or when the correspondence needs to be updated; no restriction is imposed here.
  • the correspondence may be saved in the storage device by a program, a database, or the like of the corresponding algorithm.
  • software components stored in memory include an operating system, a communication module (or set of instructions), a contact/motion module (or set of instructions), a graphics module (or set of instructions), and an application (or set of instructions).
  • the storage device also stores temporary data or persistent data including image frames captured by the imaging device and positions and postures obtained by the processing device when performing positioning calculation.
  • After the correspondence is constructed, the processing device acquires the matching features in the image frame at the current time and the image frame at the previous time, and determines the position and posture of the robot according to the correspondence and those features.
  • The processing device 13 can acquire the two image frames of the previous time t1 and the current time t2 according to a preset time interval or image-frame-number interval, and identify and match the features in the two image frames.
  • The time interval may be selected between several milliseconds and several hundred milliseconds, and the image-frame-number interval may be selected between 0 frames and tens of frames.
  • Such features include, but are not limited to, shape features, grayscale features, and the like.
  • the shape features include, but are not limited to, corner features, line features, edge features, curved features, and the like.
  • The grayscale features include, but are not limited to, grayscale transition features, grayscale values above or below a grayscale threshold, the size of an area in an image frame that contains a predetermined grayscale range, and the like.
  • the number of matching features is usually plural, for example, more than ten.
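For illustration only, corner features, one of the feature types listed above, could be extracted with an off-the-shelf detector such as OpenCV's goodFeaturesToTrack; the parameter values below are arbitrary assumptions rather than values from the patent.

```python
import cv2

def extract_corner_features(frame_bgr, max_corners=100):
    """Detect corner features in a BGR image frame; returns a list of (u, v)
    pixel coordinates, or an empty list if none are found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    return [] if corners is None else [tuple(pt) for pt in corners.reshape(-1, 2)]
```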
  • The processing device 13 finds features that can be matched from the identified features based on the positions of the identified features in their respective image frames. For example, please refer to FIG. 2, which is a schematic diagram showing the positional change of the matching features in the two image frames acquired at time t1 and time t2. After identifying the features in each image frame, the processing device 13 determines that image frame P1 contains features a1 and a2, that image frame P2 contains features b1, b2, and b3, that feature a1 belongs to the same feature as b1 and b2, and that features a2 and b3 belong to the same feature. The processing device 13 may first determine that feature a1 in image frame P1 is located to the left of feature a2 with a spacing of d1 pixels, and also determine that feature b1 in image frame P2 is located to the left of feature b3 with a spacing of d1' pixels while feature b2 is located to the right of feature b3 with a spacing of d2' pixels. The processing device 13 then matches the positional relationship and pixel spacing of features b1 and b3, and of features b2 and b3, against the positional relationship and pixel spacing of features a1 and a2, and thereby obtains that feature a1 in image frame P1 matches feature b1 in image frame P2 and that feature a2 matches feature b3.
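The toy sketch below mimics the relative-spacing comparison just described: each feature from the previous frame is paired with the current-frame feature whose spacing pattern relative to its neighbours is closest. The data layout and the tolerance value are assumptions for illustration, not details taken from the patent.

```python
def mean_spacing(pos, positions):
    """Mean horizontal pixel distance from one feature position to all the others."""
    others = [p for p in positions if p is not pos]
    return sum(abs(pos[0] - o[0]) for o in others) / max(len(others), 1)

def match_by_spacing(prev_feats, curr_feats, tol_px=5.0):
    """prev_feats / curr_feats: dicts of feature id -> (u, v) pixel position.
    Returns (prev_id, curr_id) pairs whose spacing patterns agree within tol_px."""
    matches = []
    if not prev_feats or not curr_feats:
        return matches
    prev_positions = list(prev_feats.values())
    curr_positions = list(curr_feats.values())
    for pid, ppos in prev_feats.items():
        target = mean_spacing(ppos, prev_positions)
        cid, err = min(((cid, abs(mean_spacing(cpos, curr_positions) - target))
                        for cid, cpos in curr_feats.items()), key=lambda x: x[1])
        if err <= tol_px:
            matches.append((pid, cid))
    return matches
```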
  • The processing device 13 then uses the matched features in order to locate the position and posture of the robot according to the change in position of the image pixels corresponding to each of those features.
  • Here, the position of the robot can be obtained from a displacement change in the two-dimensional plane, and the posture can be obtained from an angular change in the two-dimensional plane.
  • The processing device 13 may determine the image position offset information of multiple features in the two image frames, or determine the physical position offset information of the multiple features in the physical space according to the correspondence, and synthesize either kind of position offset information to calculate the relative position and posture of the robot from time t1 to time t2. For example, by coordinate conversion, the processing device 13 obtains the following position and posture of the robot from time t1, when image frame P1 was captured, to time t2, when image frame P2 was captured: the robot has moved a distance m on the ground and rotated n degrees to the left.
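A simplified sketch of this coordinate conversion is given below. It assumes the correspondence supplies a pixels-per-meter scale and approximates the rotation by the mean angular change of the feature points about the image centre; both are illustrative choices, not the patent's formula.

```python
import math

def relative_pose_from_offsets(matched_pairs, pixels_per_meter, image_center):
    """matched_pairs: list of ((u1, v1), (u2, v2)). Returns (dx_m, dy_m, dtheta_rad):
    the mean translation in meters and the mean rotation of the feature points
    about the image centre between the two frames."""
    if not matched_pairs:
        raise ValueError("no matched features to position from")
    cx, cy = image_center
    dxs, dys, dthetas = [], [], []
    for (u1, v1), (u2, v2) in matched_pairs:
        dxs.append((u2 - u1) / pixels_per_meter)
        dys.append((v2 - v1) / pixels_per_meter)
        dthetas.append(math.atan2(v2 - cy, u2 - cx) - math.atan2(v1 - cy, u1 - cx))
    n = len(matched_pairs)
    return sum(dxs) / n, sum(dys) / n, sum(dthetas) / n
```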
  • the position and posture obtained according to the processing device 13 can help the robot determine whether it is on the navigation route.
  • the position and posture obtained according to the processing device 13 can help the robot determine the relative displacement and the relative rotation angle, and use the data to perform map drawing.
  • the processing device 13 includes a tracking module and a positioning module.
  • the tracking module and the positioning module may share a hardware circuit such as a processor in the processing device 13, and implement data interaction and instruction calling based on the program interface.
  • The tracking module is connected to the camera device 11 for tracking the positions of the same features in the two image frames.
  • The tracking module can use visual tracking technology to track, in the image frame of the current time, the features identified in the image frame of the previous moment, so as to obtain matching features. For example, with reference to the position of a feature ci identified in image frame P1 at the previous moment, the tracking module searches for the corresponding feature ci in the region near that position in image frame P2 at the current time; if the corresponding feature ci is found, the position of feature ci in image frame P2 is obtained, and if it is not found, it is determined that feature ci is not present in image frame P2. When the tracked features and their positions in the respective image frames have been collected in this way, each feature and its positions are provided to the positioning module.
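A minimal sketch of this tracking step is shown below; it assumes each feature carries a comparable descriptor and uses a fixed search window around the previous position, both of which are assumptions made only for illustration.

```python
def track_features(prev_features, curr_features, window_px=30):
    """prev_features / curr_features: dicts of feature id -> ((u, v), descriptor).
    A previous feature is considered tracked when a current feature with an equal
    descriptor lies inside a square window around its previous position; features
    without such a match are treated as absent from the current frame."""
    tracked = {}
    for fid, ((pu, pv), pdesc) in prev_features.items():
        for _, ((cu, cv), cdesc) in curr_features.items():
            if abs(cu - pu) <= window_px and abs(cv - pv) <= window_px and cdesc == pdesc:
                tracked[fid] = (cu, cv)
                break
    return tracked
```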
  • the tracking module also utilizes motion information provided by the motion sensing device in the robot to track locations in the two image frames that contain the same features.
  • For example, the hardware circuit of the tracking module is connected to the motion sensing device via a data line and acquires from it the movement information corresponding to the acquisition times t1 and t2 of the two image frames P1 and P2. Using the correspondence, together with each feature ci identified in image frame P1 at the previous moment and its position in image frame P1, the tracking module estimates a candidate position of the corresponding feature ci in the current image frame P2 from the position change described by the movement information, and then looks for the corresponding feature ci near the estimated candidate position. If the corresponding feature ci is found, its position in image frame P2 is obtained; if it is not found, it is determined that feature ci is not present in image frame P2.
  • The tracked features, i.e., the matched features, and their positions are then used for positioning.
  • the positioning module is configured to determine position offset information of the robot from the previous moment to the current moment according to the correspondence relationship and the position to obtain a position and a posture of the robot.
  • the positioning module may be a combination of a plurality of program modules, or may be a single program module.
  • The positioning module may perform coordinate transformation, according to the correspondence, on the positions of the same feature in the two image frames, so as to obtain the position offset information from the previous moment to the current moment; this position offset information reflects the relative change in position and posture of the robot from the previous moment to the current moment.
  • This type of positioning can be used whenever matching features are available. For example, during the navigation of the robot's movement, the relative position and posture change obtained in the above manner can quickly determine whether the robot's current movement route has drifted, and subsequent navigation adjustments can be made based on the determination result.
  • the processing device 13 also combines the movement information provided by the motion sensor device to determine the position and posture of the robot.
  • the processing device 13 includes: a first positioning module and a first positioning compensation module.
  • the first positioning module and the first positioning compensation module may belong to a program module in the foregoing positioning module.
  • the first positioning module is configured to determine a position and a posture of the robot according to the correspondence relationship and the position of the matching feature.
  • the first positioning compensation module is configured to compensate an error in the determined position and posture based on the acquired movement information.
  • For example, the first positioning module acquires the two image frames from time t1 to time t2 and also acquires the movement information; following the feature recognition and matching manner described above, it obtains multiple features in the two image frames that can be used for positioning, together with their positions in the respective image frames, and uses the correspondence to determine a first position and posture of the robot.
  • The first positioning compensation module determines, based on the acquired displacement information and angle information, that the robot has moved the distance given by the displacement information along the trajectory direction and yaw angle indicated by the angle information, thereby obtaining a second position and posture of the robot.
  • the first positioning compensation module further determines a position and a posture of the robot based on an error between the first position and posture and the second position and posture.
  • For example, the first positioning compensation module may perform weight-based averaging on the displacement information and the angle information corresponding to the first and second positions and postures, thereby obtaining the error-compensated position and posture. For instance, the first positioning compensation module performs weighted averaging on the displacement information in the first position and posture and the displacement information in the second position and posture to obtain the displacement information of the compensated position and posture, and performs weighted averaging on the angle change information in the first position and posture and the angle change information in the second position and posture to obtain the angle change information of the compensated position and posture.
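A minimal sketch of this weighted-mean compensation, with the 0.7/0.3 weighting chosen arbitrarily for illustration:

```python
def fuse_poses(vision_pose, sensor_pose, w_vision=0.7):
    """vision_pose / sensor_pose: (dx, dy, dtheta) from the first (vision-based) and
    second (movement-sensor-based) estimates. Returns their weighted average, taken
    here as the error-compensated change in position and posture."""
    w_sensor = 1.0 - w_vision
    return tuple(w_vision * v + w_sensor * s for v, s in zip(vision_pose, sensor_pose))
```

A fuller implementation would also wrap the angle term into a consistent range before averaging.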
  • In order to reduce accumulated error, the processing device 13 may also compensate errors in the position and posture determined from the positions of the matching features in the image frames, based on the matched features and the landmark information constructed from previously matched features.
  • the landmark information is stored in the storage device 12.
  • The landmark information includes, but is not limited to, attribute information such as previously matched features, the map data of the physical space at the time the features were photographed, the positions of the features in the corresponding image frames when they were photographed, and the position and posture of the robot when the corresponding features were photographed.
  • the landmark information may be stored in the storage device 12 along with the map data.
  • the processing device 13 includes a second positioning module and a second positioning compensation module.
  • the second positioning module is configured to determine a position and a posture of the robot according to the correspondence relationship and the position of the matching feature.
  • the second positioning compensation module is configured to compensate an error in the determined position and posture based on the stored landmark information of the corresponding matching feature.
  • the second positioning module and the second positioning compensation module may belong to the program module in the foregoing positioning module.
  • For example, the second positioning module obtains, following the feature recognition and matching manner described above, multiple features that can be used for positioning in the two image frames acquired at the two moments, together with their positions in the respective image frames, and uses the correspondence to determine a first position and posture of the robot from the previous time t1 to the current time t2.
  • The second positioning compensation module matches the matched features in the two image frames against the features in the pre-stored landmark information, and uses the other attribute information in the landmark information corresponding to the matched features to determine the position and posture of the robot at each shooting time, thereby obtaining a second position and posture of the robot from the previous time t1 to the current time t2.
  • the second positioning compensation module further determines the position and posture of the robot according to an error between the first position and the posture and the second position and posture. For example, the second positioning compensation module performs weighted averaging processing on the displacement information in the first position and the posture and the displacement information in the second position and the posture to obtain the displacement information in the compensated position and posture. The second positioning compensation module further performs weighted averaging processing on the angle change information in the first position and the posture and the angle change information in the second position and the posture, and obtains the angle change information in the compensated position and posture.
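For illustration, the landmark-based second estimate could be derived by looking up the matched features in the stored landmark table and averaging the poses recorded with them; the record fields below are assumptions, not the patent's data layout. The resulting estimate can then be blended with the first estimate by the same weighted averaging shown earlier.

```python
def landmark_pose_estimate(matched_feature_ids, landmark_table):
    """landmark_table: dict of feature id -> {'pose': (x, y, theta)}, the pose recorded
    when the feature was previously observed. Returns the mean recorded pose over the
    matched landmarks, or None if no matched feature is stored as a landmark."""
    poses = [landmark_table[fid]['pose']
             for fid in matched_feature_ids if fid in landmark_table]
    if not poses:
        return None
    n = len(poses)
    return tuple(sum(p[i] for p in poses) / n for i in range(3))
```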
  • the processing device 13 may perform error compensation in combination with an error compensation method including any one or more of the above.
  • the processing device 13 may perform improvements and extensions based on any one or more of the above error compensation methods, and should be considered as an example generated based on the positioning technique of the present application.
  • The features recorded as landmark information usually come from objects that are fixed in place, but in practical applications these features do not necessarily remain unchanged.
  • For example, if a feature recorded as landmark information is the outline feature of a lamp, the corresponding feature disappears when the lamp is replaced; when the robot later needs to be positioned using this feature, the feature used to compensate the error will not be found.
  • the processing device 13 further comprises an update module for updating the stored landmark information based on the matched features.
  • the update module may acquire: a matching feature, a position of the matching feature in the at least one image frame, such as a location determined by the positioning module, the first positioning compensation module, or the second positioning compensation module. And gestures and other information.
  • The update module may determine whether to update the saved landmark information by comparing the landmark information stored in the storage device 12 with the acquired information. For example, when, for a similar or identical position and posture, the update module finds newly matched features that are not stored in the storage device 12, the newly matched features are added to the corresponding landmark information. For another example, when, for a similar or identical position and posture, the update module finds features stored in the storage device 12 that do not correspond to any of the newly matched features, the redundant features stored in the corresponding landmark information are deleted.
  • the update module may further add new landmark information when the number of currently matched features is higher than a preset threshold.
  • The threshold may be a fixed value or may be set based on the number of features corresponding to the marked position in the map. For example, when the update module finds, for a similar or identical position and posture, that the number of newly matched features is greater than the number of features saved in the storage device for the corresponding position, the new features may be added to the constructed landmark information.
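A sketch of such an update rule under the threshold idea just described; the field names and the threshold value are illustrative assumptions.

```python
def update_landmarks(landmark_table, matched_features, current_pose, min_new_features=5):
    """matched_features: dict of feature id -> (u, v) position in the current frame.
    Newly matched features absent from the stored landmarks are added, but only when
    their count reaches the (illustrative) threshold."""
    new_ids = [fid for fid in matched_features if fid not in landmark_table]
    if len(new_ids) >= min_new_features:
        for fid in new_ids:
            landmark_table[fid] = {'pose': current_pose,
                                   'image_position': matched_features[fid]}
    return landmark_table
```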
  • the update module can also adjust the position and the like in the map based on the features.
  • FIG. 3 shows a schematic structural diagram of another positioning system of the present application.
  • the positioning system can be configured in a sweeping robot.
  • the positioning system 2 includes a motion sensing device 24, an imaging device 21, a processing device 23, and a storage device 22.
  • the motion sensing device 24 includes, but is not limited to, a displacement sensor, a gyroscope, a speed sensor, a ranging sensor, a cliff sensor, and the like.
  • the mobile sensing device 24 continuously detects the mobile information and provides it to the processing device.
  • the displacement sensor, gyroscope, speed sensor, etc. can be integrated in one or more chips.
  • the ranging sensor and the cliff sensor may be disposed on a body side of the robot.
  • a ranging sensor in the cleaning robot is disposed at the edge of the housing; a cliff sensor in the cleaning robot is disposed at the bottom of the robot.
  • the movement information that the processing device can acquire includes, but is not limited to, displacement information, angle information, distance information with obstacles, speed information, direction of travel information, and the like.
  • The storage device 22 includes, but is not limited to, high-speed random access memory and non-volatile memory, for example one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • The storage device 22 may also include memory remote from the one or more processors, such as network-attached storage accessed via RF circuitry or external ports and a communication network (not shown), wherein the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
  • the memory controller can control access to the storage device by other components of the robot, such as the CPU and peripheral interfaces.
  • the camera device 21 includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
  • the power supply system of the camera device 21 can be controlled by the power supply system of the robot.
  • the camera device starts capturing image frames and provides them to the processing device.
  • the camera device in the cleaning robot caches the captured indoor image frame in a predetermined video format in a storage device and is acquired by the processing device.
  • the camera device 21 is for taking an image frame during the movement of the robot.
  • The camera device 21 can be disposed on the top of the robot.
  • the camera of the cleaning robot is placed on the middle or edge of its top cover.
  • The optical axis of the imaging device's field of view is at ±30° with respect to the vertical, or at 60° to 120° with respect to the horizontal.
  • For example, the angle of the optical axis of the imaging device of the cleaning robot with respect to the vertical may be -30°, -29°, -28°, -27°, ..., -1°, 0°, 1°, 2°, ..., 29°, or 30°, and the angle with respect to the horizontal may be 60°, 61°, 62°, ..., 119°, or 120°.
  • It should be noted that those skilled in the art will understand that the listed angles between the optical axis and the vertical or horizontal are only examples; the angular precision is not limited to 1° and, according to the design requirements of the actual robot, may be higher, such as 0.1° or 0.01°. Exhaustive examples are not given here.
  • The processing device 23 acquires the two image frames of the previous time and the current time from the imaging device 21, constructs the correspondence between the image coordinate system and the physical space coordinate system according to the positions of the matching features in the two image frames and the movement information acquired from the motion sensing device between the two moments, and stores the correspondence in the storage device.
  • the processing device 23 includes one or more processors.
  • The processing device 23 is operatively coupled to volatile memory and/or non-volatile memory in the storage device 22.
  • Processing device 23 may execute instructions stored in a memory and/or non-volatile storage device to perform operations in the robot, such as extracting features in image frames and positioning in a map based on features, and the like.
  • The processor may include one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof.
  • the processing device is also operatively coupled to an I/O port and an input structure that enables the robot to interact with various other electronic devices that enable the user to interact with the computing device.
  • the input structure can include buttons, keyboards, mice, trackpads, and the like.
  • the other electronic device may be a mobile motor in a mobile device in the robot, or a slave processor in the robot dedicated to controlling the mobile device and the cleaning device, such as a micro control unit.
  • the processing device 23 is coupled to the storage device 22, the camera device 21, and the motion sensing device 24 via data lines, respectively.
  • the processing device 23 interacts with the storage device via a data read/write technology, and the processing device 23 interacts with the camera device 21 and the motion sensing device 24 via an interface protocol, respectively.
  • the data reading and writing technology includes but is not limited to: a high speed/low speed data interface protocol, a database read and write operation, and the like.
  • the interface protocols include, but are not limited to, an HDMI interface protocol, a serial interface protocol, and the like.
  • The initialization module in the processing device 23 constructs the correspondence based on the positions of the matching features in the two image frames and the movement information acquired from the previous time to the current time.
  • the initialization module may be a program module whose program portion is stored in the storage device and executed via a call of the processing device. When the correspondence is not stored in the storage device, the processing device invokes an initialization module to construct the correspondence.
  • the initialization module acquires the movement information provided by the mobile sensing device during the movement of the robot and acquires each image frame provided by the imaging device.
  • the initialization module may acquire the movement information and at least two image frames for a short period of time during which the robot moves.
  • the processing device may acquire two image frames of the last time t1 and the current time t2 according to a preset time interval or an image frame number interval.
  • The time interval may be selected between several milliseconds and several hundred milliseconds, and the image-frame-number interval may be selected between 0 frames and tens of frames.
  • the initialization module acquires the movement information and at least two image frames when it is detected that the robot is moving in a straight line. For another example, the initialization module acquires the movement information and at least two image frames when it is detected that the robot is in a turning movement.
  • the initialization module identifies and matches the features in each image frame and obtains the image locations of the matched features in each image frame.
  • Features include, but are not limited to, corner features, edge features, line features, curve features, and the like.
  • The initialization module can obtain the image positions of matching features with the aid of a tracking module in the processing device. The tracking module is used to track the positions of features that appear in both image frames.
  • the initialization module then constructs the correspondence according to the image location and the physical spatial location provided by the movement information.
  • the initialization module may establish the correspondence by constructing a feature coordinate parameter of a physical space coordinate system and an image coordinate system.
  • For example, the initialization module may take the physical space position at which the image frame of the previous moment was captured as the coordinate origin of the physical space coordinate system, match that origin to the corresponding position in the image coordinate system, and thereby construct the correspondence between the two coordinate systems.
  • the working process of the initialization module may be performed based on a user's instruction or transparent to the user.
  • For example, the execution of the initialization module may be initiated when the storage device 22 does not store the correspondence or when the correspondence needs to be updated; no restriction is imposed here.
  • the correspondence may be saved in the storage device 22 by a program, a database, or the like of the corresponding algorithm.
  • software components stored in memory include an operating system, a communication module (or set of instructions), a contact/motion module (or set of instructions), a graphics module (or set of instructions), and an application (or set of instructions).
  • the storage device 22 also stores temporary data or persistent data including the image frame captured by the imaging device 21 and the position and posture obtained by the processing device 23 when performing the positioning calculation.
  • After the correspondence is constructed, the processing device 23 also determines the position and posture of the robot by using the correspondence.
  • Here, the processing device 23 may acquire an image frame captured by the camera device 21, identify features in the image frame, determine through the correspondence the physical space positions corresponding to the positions of those features in the image frame, and, by accumulating over multiple image frames, determine the position and posture of the robot.
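A small sketch of how this accumulation over multiple frames might look: frame-to-frame pose changes are chained into an absolute position and heading (one possible implementation, not mandated by the patent).

```python
import math

def accumulate_pose(start_pose, relative_moves):
    """start_pose: (x, y, theta); relative_moves: sequence of (dx, dy, dtheta) steps
    expressed in the robot frame at the start of each step. Returns the accumulated
    (x, y, theta) after applying all the steps."""
    x, y, theta = start_pose
    for dx, dy, dtheta in relative_moves:
        x += dx * math.cos(theta) - dy * math.sin(theta)  # rotate step into world frame
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta += dtheta
    return x, y, theta
```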
  • the processing device 23 acquires matching features in the current time image frame and the previous time image frame, and determines the position and posture of the robot according to the correspondence and the feature.
  • The processing device 23 can acquire the two image frames of the previous time t1 and the current time t2 according to a preset time interval or image-frame-number interval, and identify and match the features in the two image frames.
  • The time interval may be selected between several milliseconds and several hundred milliseconds, and the image-frame-number interval may be selected between 0 frames and tens of frames.
  • Such features include, but are not limited to, shape features, grayscale features, and the like.
  • the shape features include, but are not limited to, corner features, line features, edge features, curved features, and the like.
  • the grayscale features include, but are not limited to, a grayscale transition feature, a grayscale value above or below a grayscale threshold, the size of an area in an image frame that covers a predetermined grayscale range, and the like.
  • the processing device 23 finds features that can be matched from among the identified features based on the positions of the identified features in the respective image frames. For example, as shown in FIG. 2, after identifying the features in each image frame, the processing device 23 determines that the image frame P1 contains the features a1 and a2, that the image frame P2 contains the features b1, b2, and b3, that the features a1, b1, and b2 all belong to the same type of feature, and that the features a2 and b3 belong to the same type of feature.
  • the processing device 23 may first determine that the feature a1 in the image frame P1 is located on the left side of the feature a2 with a spacing of d1 pixels; it also determines that the feature b1 in the image frame P2 is located on the left side of the feature b3 with a spacing of d1' pixels, and that the feature b2 is located on the right side of the feature b3 with a spacing of d2' pixels.
  • the processing device 23 then matches the positional relationship and pixel spacing of the features b1 and b3, and of the features b2 and b3, against the positional relationship and pixel spacing of the features a1 and a2, and thereby determines that the feature a1 in the image frame P1 matches the feature b1 in the image frame P2, and the feature a2 matches the feature b3.
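As one possible concrete rendering of the feature identification and matching just described, the following sketch pairs corner-like features between two frames with OpenCV's ORB detector and a brute-force matcher; the disclosure does not prescribe a particular detector or matcher, so this choice is an assumption made purely for illustration.

```python
import cv2
import numpy as np

def match_frame_features(frame_prev, frame_curr, max_matches=100):
    """Sketch of identifying corner-like features in two image frames and
    pairing those that belong to the same physical feature."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_curr, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2)), np.empty((0, 2))

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

    # Pixel positions of each matched feature in the previous and current frames.
    pts_prev = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_curr = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts_prev, pts_curr
```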
  • the processing device 23 uses the matched features to locate the position and posture of the robot according to the change in position of the image pixels corresponding to each of the features.
  • the position of the robot can be obtained from the displacement change in the two-dimensional plane, and the posture of the robot can be obtained from the angular change in the two-dimensional plane.
  • the processing device 23 may determine the image position offset information of multiple features in the two image frames, or determine the physical position offset information of the multiple features in physical space according to the correspondence, and integrate either kind of position offset information to calculate the relative position and posture of the robot from time t1 to time t2. For example, through coordinate conversion, the processing device 23 obtains the following position and posture of the robot from the time t1 at which the image frame P1 was captured to the time t2 at which the image frame P2 was captured: moved a distance m on the ground and rotated n degrees to the left. Taking the sweeping robot as an example, when the sweeping robot has already established a map, the position and posture obtained by the processing device 23 can help the robot determine whether it is on its navigation route. When the sweeping robot has not yet established a map, the position and posture obtained by the processing device 23 can help the robot determine the relative displacement and the relative rotation angle, and use these data for map drawing.
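A minimal sketch of converting the matched feature positions into a relative position and posture is shown below. It assumes the correspondence is the simple scale produced in the earlier initialization sketch and that the scene is static and planar; the sign convention (robot motion opposite to the apparent scene motion) is likewise an assumption of this sketch rather than something stated in the disclosure.

```python
import numpy as np

def relative_pose_from_matches(pts_prev, pts_curr, correspondence):
    """Sketch of turning the pixel offsets of matched features into the robot's
    relative position and posture between times t1 and t2: map the pixels into
    physical space through the stored correspondence, then fit a 2D rigid
    transform to the two point sets."""
    scale = correspondence["metres_per_pixel"]
    a = np.asarray(pts_prev, dtype=float) * scale   # physical-space points at t1
    b = np.asarray(pts_curr, dtype=float) * scale   # physical-space points at t2

    # Least-squares 2D rigid alignment (Kabsch in the plane).
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T

    yaw = np.arctan2(R[1, 0], R[0, 0])  # apparent rotation of the static scene
    t = cb - R @ ca                     # apparent translation of the static scene

    # The robot moves opposite to the apparent motion of the static scene
    # (sign convention assumed for a fixed-height, planar-motion camera).
    return -t, -yaw
```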
  • the processing device 23 includes a tracking module and a positioning module.
  • the tracking module and the positioning module may share hardware circuits such as processors in the processing device 23, and implement data interaction and instruction calls based on the program interface.
  • the tracking module is connected to the imaging device 21 and is used to track the positions of the same features in the two image frames.
  • the tracking module can use a visual tracking technique to track, in the image frame of the current moment, the features identified in the image frame of the previous moment so as to obtain the matching features. For example, taking the position of a feature c_i identified in the image frame P1 at the previous moment as a reference, the tracking module searches for the corresponding feature c_i in a region near the corresponding position in the image frame P2 at the current moment; if the corresponding feature c_i is found, the position of the feature c_i in the image frame P2 is obtained, and if it is not found, it is determined that the feature c_i is not present in the image frame P2. When the tracked features and their positions in the respective image frames have been collected in this way, each feature and its positions are provided to the positioning module.
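The tracking step can be sketched, under the assumption that pyramidal Lucas-Kanade optical flow is an acceptable stand-in for the unspecified visual tracking technique, as follows.

```python
import cv2
import numpy as np

def track_features(frame_prev, frame_curr, pts_prev):
    """Sketch of the tracking module: for each feature found in the previous
    frame, search near its last known position in the current frame and report
    whether it was found again and where."""
    p0 = np.float32(pts_prev).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(frame_prev, frame_curr, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    found = status.ravel() == 1
    # Keep only the features located again in the current frame; the rest are
    # reported as not present in the current image frame.
    return p0.reshape(-1, 2)[found], p1.reshape(-1, 2)[found]
```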
  • the tracking module may also use the movement information provided by the motion sensing device 24 in the robot to track the positions of the same features in the two image frames.
  • the hardware circuit of the tracking module is connected to the motion sensing device 24 via a data line, acquires from the motion sensing device 24 the movement information corresponding to the acquisition times t1 and t2 of the two image frames P1 and P2, and uses it to track the matching features.
  • the positioning module is configured to determine position offset information of the robot from the previous moment to the current moment according to the correspondence relationship and the position to obtain a position and a posture of the robot.
  • the positioning module may be a combination of a plurality of program modules, or may be a single program module.
  • the positioning module may perform coordinate transformation on the positions of the same features in the two image frames according to the correspondence, thereby obtaining the position offset information of the robot from the previous moment to the current moment; this position offset information reflects the relative change in the position and posture of the robot from the previous moment to the current moment.
  • this kind of positioning can be used whenever matching features are available. For example, during navigation of the robot's movement, the relative change in position and posture obtained in the above manner makes it possible to quickly determine whether the robot's current movement route has deviated, and to perform subsequent navigation adjustment based on the determination result.
  • the processing device 23 also incorporates the movement information provided by the motion sensing device 24 to determine the position and attitude of the robot.
  • the processing device 23 includes: a first positioning module and a first positioning compensation module.
  • the first positioning module and the first positioning compensation module may belong to a program module in the foregoing positioning module.
  • the first positioning module is configured to determine a position and a posture of the robot according to the correspondence relationship and the position of the matching feature.
  • the first positioning compensation module is configured to compensate the error in the determined position and posture based on the acquired movement information.
  • when the first positioning module acquires the two image frames from time t1 to time t2, the movement information is also acquired; the first positioning module obtains, in the manner of feature recognition and matching described above, multiple features in the two image frames that can be used for positioning together with their positions in the respective image frames, and determines the first position and posture of the robot using the correspondence.
  • the first positioning compensation module determines, according to the acquired displacement information and angle information, that the robot has moved the distance given by the displacement information along the heading and yaw angle indicated by the angle information, thereby obtaining the second position and posture of the robot.
  • the first positioning compensation module further determines a position and a posture of the robot based on an error between the first position and posture and the second position and posture.
  • the first positioning compensation module may perform weighted averaging on the displacement information and the angle information corresponding to the first and second positions and postures, thereby obtaining the error-compensated position and posture.
  • for example, the first positioning compensation module performs weighted averaging on the displacement information in the first position and posture and the displacement information in the second position and posture to obtain the displacement information in the compensated position and posture.
  • the first positioning compensation module also performs weighted averaging on the angle change information in the first position and posture and the angle change information in the second position and posture to obtain the angle change information in the compensated position and posture.
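A minimal sketch of this weighted error compensation is given below; the weight values and the tuple layout of a pose are illustrative assumptions, as the disclosure only states that a weighted averaging is performed.

```python
import numpy as np

def fuse_pose_estimates(pose_visual, pose_odom, w_visual=0.7, w_odom=0.3):
    """Sketch of the error-compensation step: the first position and posture
    (from matched features) and the second (from displacement/angle sensing)
    are combined by weighted averaging."""
    (x1, y1, yaw1), (x2, y2, yaw2) = pose_visual, pose_odom
    x = w_visual * x1 + w_odom * x2
    y = w_visual * y1 + w_odom * y2
    # Average the headings on the circle to avoid wrap-around artefacts.
    yaw = np.arctan2(w_visual * np.sin(yaw1) + w_odom * np.sin(yaw2),
                     w_visual * np.cos(yaw1) + w_odom * np.cos(yaw2))
    return x, y, yaw
```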
  • the processing device 23 may also compensate the error in the position and posture determined based only on the positions of the matching features in the image frames at the two moments, in combination with landmark information constructed from previously matched features.
  • the landmark information is stored in the storage device 22.
  • the landmark information includes, but is not limited to, previously matched features, the map data in physical space when the features were photographed, the positions of the features in the corresponding image frames, and attribute information such as the position and posture of the robot when the corresponding features were photographed.
  • the landmark information may be stored in the storage device 22 along with the map data.
  • the processing device 23 includes: a second positioning module and a second positioning compensation module.
  • the second positioning module is configured to determine a position and a posture of the robot according to the correspondence relationship and the position of the matching feature.
  • the second positioning compensation module is configured to compensate an error in the determined position and posture based on the stored landmark information of the corresponding matching feature.
  • the second positioning module and the second positioning compensation module may belong to the program module in the foregoing positioning module.
  • the second positioning module obtains, in the manner of feature recognition and matching described above, multiple features that can be used for positioning in the two image frames acquired at the two moments together with their positions in the respective image frames, and uses the correspondence to determine the first position and posture of the robot from the previous time t1 to the current time t2.
  • the second positioning compensation module matches the matched features in the two image frames against the features in the pre-stored landmark information, and uses the other attribute information in the landmark information corresponding to the matched features to determine the position and posture of the robot at each shooting moment, thereby obtaining the second position and posture of the robot from the previous time t1 to the current time t2.
  • the second positioning compensation module then determines the position and posture of the robot according to the error between the first position and posture and the second position and posture. For example, the second positioning compensation module performs weighted averaging on the displacement information in the first position and posture and the displacement information in the second position and posture to obtain the displacement information in the compensated position and posture; it also performs weighted averaging on the angle change information in the first position and posture and the angle change information in the second position and posture to obtain the angle change information in the compensated position and posture.
  • the processing device 23 may perform error compensation in combination with an error compensation method including any one or more of the above.
  • the processing device 23 may also improve upon and extend any one or more of the above error compensation methods, and such variations should be considered examples generated based on the positioning technique of the present application.
  • the objects whose features are recorded as landmark information are usually fixed, but in practical applications they do not necessarily remain unchanged.
  • for example, a feature recorded as landmark information may be the outline of a lamp; when the lamp is replaced, the corresponding feature disappears, and when the robot needs to be positioned using that feature, the feature used to compensate the error can no longer be found.
  • the processing device 23 further comprises an update module for updating the stored landmark information based on the matched features.
  • the update module may acquire: the matching features, the positions of the matching features in at least one image frame, and information such as the position and posture determined by the positioning module, the first positioning compensation module, or the second positioning compensation module.
  • the update module may determine whether to update the saved landmark information by comparing the information stored locally in the storage device 22 with the acquired information. For example, when the update module, on the basis of similar or identical positions and postures, finds features that are not yet stored in the storage device 22, the newly matched features are added to the corresponding landmark information. As another example, when the update module, on the basis of similar or identical positions and postures, finds features stored in the storage device 22 that cannot be matched with the newly matched features, the redundant features saved in the corresponding landmark information are deleted.
  • the update module may further add new landmark information when the number of currently matched features is higher than a preset threshold.
  • the threshold may be a fixed value or may be set based on the number of features corresponding to the marked location in the map. For example, when the update module, on the basis of similar or identical positions and postures, finds that the number of newly matched features is greater than the number of features saved in the storage device 22 for the corresponding location, new features may be added to the constructed landmark information.
  • the update module can also adjust the positions of features and the like marked in the map based on the matched features.
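The add/delete/new-landmark policy of the update module can be sketched as follows. The quantisation of "similar or identical positions", the set representation of stored features, and the threshold value are all assumptions of this sketch; a practical implementation would typically also require repeated misses before deleting a feature.

```python
def update_landmarks(landmarks, matched_features, pose, new_landmark_threshold=10):
    """Sketch of the update module. `landmarks` maps a quantised position key
    to the set of features previously recorded near that position."""
    key = (round(pose[0], 1), round(pose[1], 1))   # "similar or identical" position
    observed = set(matched_features)

    if key in landmarks:
        stored = landmarks[key]
        stored.update(observed)                  # supplement newly matched, unstored features
        stale = stored - observed
        stored.difference_update(stale)          # delete stored features no longer matched
        # NOTE: deleting on a single miss is a simplification of this sketch.
    elif len(observed) > new_landmark_threshold:
        landmarks[key] = observed                # enough features at a new place: add a landmark
    return landmarks
```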
  • FIG. 4 is a schematic structural view of a movable robot in an embodiment.
  • the robot includes a positioning system 31, a moving device 33, and a control device 32.
  • the robot includes, but is not limited to, a sweeping robot or the like.
  • the mobile device 33 is used to drive the robot to move on the ground.
  • the moving device 33 includes, but is not limited to, a wheel set, a shock absorbing assembly connected to the wheel set, a drive motor that drives the roller, and the like.
  • the control device 32 may include one or more processors (CPUs) or micro control units (MCUs) dedicated to controlling the moving device 33.
  • for example, the control device 32 acts as a slave processing device, the processing device 313 of the positioning system 31 acts as the master device, and the control device 32 performs movement control based on the positioning of the positioning system 31.
  • alternatively, the control device 32 shares a processor with the positioning system 31, and that processor is connected to the drive motor in the moving device 33 by means of, for example, a bus.
  • Control device 32 receives the data provided by positioning system 31 via the program interface.
  • the control device 32 is configured to control the mobile device 33 to perform a moving operation based on the position and posture provided by the positioning system 31.
  • the manner in which the control device 32 controls the moving device 33 to perform a moving operation includes, but is not limited to: determining a navigation route based on the currently located position and posture and controlling the moving device to travel along the determined navigation route; or drawing map data and landmark information based on the currently located position and posture while estimating a subsequent route, either as a random route or based on the positions and postures already located, and controlling the moving device 33 to travel along the determined route; and the like.
  • the moving operation includes, but is not limited to, a moving direction, a moving speed, and the like.
  • for example, the moving device 33 includes two drive motors, each of which drives one group of rollers; the moving operation includes driving the two drive motors at different speeds and rotation angles respectively, so that the two groups of rollers drive the robot to rotate in a certain direction.
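The way two drive motors running at different speeds make the wheel groups turn the robot follows the standard differential-drive relation, sketched below with an assumed wheel-base value.

```python
def wheel_speeds_for_motion(v_linear, v_angular, wheel_base=0.25):
    """Sketch of commanding the two drive motors at different speeds so that
    the two wheel groups turn the robot (standard differential-drive relation).
    The wheel_base value is an assumed example parameter."""
    v_left = v_linear - v_angular * wheel_base / 2.0
    v_right = v_linear + v_angular * wheel_base / 2.0
    return v_left, v_right

# e.g. rotating in place to the left: zero linear speed, positive angular speed
left, right = wheel_speeds_for_motion(0.0, 0.5)
```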
  • the positioning system can perform positioning processing as shown in FIG. 1 and in combination with the foregoing description corresponding to FIG. 1, and will not be described in detail herein.
  • the imaging device 311 shown in FIG. 4 may correspond to the imaging device 11 described in FIG. 1; the storage device 312 shown in FIG. 4 may correspond to the storage device 12 described in FIG. 1; and the processing device 313 shown in FIG. 4 may correspond to the processing device 13 described in FIG. 1.
  • the positioning system shown in FIG. 4 includes a storage device 312, an imaging device 311, and a processing device 313.
  • the processing device 313 is connected to the control device 32.
  • taking the control device 32 being connected to the moving device 33 as an example, the working process by which the robot moves based on the position and posture located by the positioning system 31 is described as follows:
  • the storage device 312 stores a correspondence relationship between an image coordinate system and a physical space coordinate system.
  • the imaging device 311 captures image frames in real time and temporarily stores them in the storage device 312.
  • the processing device 313 acquires two image frames P1 and P2 of the previous time t1 and the current time t2 according to a preset time interval or an image frame number interval, and obtains the positions of the matched features in the two image frames by using a visual tracking algorithm. Based on the obtained feature positions in the image frames and the corresponding relationship, the processing device 313 performs coordinate conversion of the feature position in the physical space, thereby obtaining the relative position and posture of the robot from the last time t1 to the current time t2.
  • the processing device 313 can obtain the relative position and posture of the robot after performing error compensation on the obtained position and posture. Meanwhile, the processing device 313 can also accumulate the obtained relative positions and postures to determine the position and posture of the robot in the map data. The processing device 313 can provide the obtained positions and postures to the control device 32. For the sweeping robot, the control device 32 can calculate, based on the received position and posture, the control data required to make the robot travel along a preset route, such as the moving speed, steering, and turning angle, and control the drive motor in the moving device 33 according to the control data so as to move the wheel set.
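How the control device might derive moving speed and turning commands from the received position and posture can be sketched with a simple proportional heading controller; the controller form, gains, and waypoint representation are illustrative assumptions, not the method claimed in the disclosure.

```python
import numpy as np

def control_towards_waypoint(pose, waypoint, v_max=0.3, k_heading=1.5):
    """Sketch of deriving control data (moving speed and turning rate) from the
    position and posture supplied by the positioning system so the robot keeps
    to a preset route; a proportional heading controller is used purely for
    illustration."""
    x, y, yaw = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    heading_error = np.arctan2(dy, dx) - yaw
    heading_error = np.arctan2(np.sin(heading_error), np.cos(heading_error))  # wrap to [-pi, pi]

    v_angular = k_heading * heading_error
    v_linear = v_max * max(0.0, np.cos(heading_error))   # slow down for sharp turns
    return v_linear, v_angular
```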
  • FIG. 5 is a schematic structural view of the robot in another embodiment.
  • the positioning system 41 can perform the positioning processing shown in FIG. 3 in combination with the foregoing description corresponding to FIG. 3, and will not be described in detail here.
  • the imaging device 411 shown in FIG. 5 may correspond to the imaging device 21 described in FIG. 3; the storage device 412 shown in FIG. 5 may correspond to the storage device 22 described in FIG. 3;
  • the processing device 413 shown in FIG. 5 may correspond to the processing device 23 described in FIG. 3; and the motion sensing device 414 shown in FIG. 5 may correspond to the motion sensing device 24 described in FIG. 3.
  • the positioning system shown in FIG. 5 includes a storage device 412, a motion sensing device 414, an imaging device 411, and a processing device 413.
  • taking as an example the processing device 413 being connected to the control device 43 and the control device 43 being connected to the moving device 42, the working process of moving based on the position and posture located by the positioning system is described as follows:
  • the motion sensing device 414 acquires the movement information of the robot in real time and temporarily stores it in the storage device 412, and the imaging device 411 captures the image frame in real time and temporarily stores it in the storage device 412.
  • the processing device 413 acquires two image frames P1 and P2 of the previous time and the current time according to a preset time interval or an image frame number interval, and movement information during the two time periods. Processing device 413 can obtain the image location of the feature by tracking features in the two image frames P1 and P2.
  • the processing device 413 further constructs a correspondence between the image coordinate system and the physical space coordinate system according to the image position and the physical spatial position provided by the movement information.
  • the processing device 413 can use the visual tracking algorithm to match features and their locations in subsequent image frames Pi. Based on the obtained position of the feature in each image frame and the corresponding relationship, the processing device 413 performs coordinate conversion of the feature position in the physical space, thereby obtaining the relative position and posture of the robot during the acquisition time interval of the two-frame image.
  • the processing device 413 can obtain the relative position and posture of the robot after performing error compensation on the obtained position and posture. Meanwhile, the processing device 413 can also accumulate the obtained relative positions and postures to determine the position and posture of the robot in the map data.
  • the processing device 413 can provide the obtained positions and postures to the control device 43.
  • the control device 43 can calculate, based on the received position and posture, the control data required to make the robot travel along a preset route, such as the moving speed, steering, and turning angle, and control the drive motor in the moving device 42 according to the control data so as to move the wheel set.
  • FIG. 6 shows a flowchart of an embodiment of the positioning method of the robot of the present application.
  • the positioning method is mainly performed by a positioning system.
  • the positioning system can be configured in a sweeping robot.
  • the positioning system can be as shown in Figure 1 and its description, or other positioning system capable of performing the positioning method.
  • step S110 the positions of the matching features in the current time image frame and the previous time image frame are acquired.
  • the processing device may be used to acquire two image frames of the last time t1 and the current time t2 according to a preset time interval or an image frame number interval, and identify and match features in the two image frames.
  • the time interval may be selected between a few milliseconds and a few hundred milliseconds
  • the number of image frame intervals may be selected between 0 frames and tens of frames.
  • Such features include, but are not limited to, shape features, grayscale features, and the like.
  • the shape features include, but are not limited to, corner features, line features, edge features, curved features, and the like.
  • the grayscale features include, but are not limited to, a grayscale transition feature, a grayscale value above or below a grayscale threshold, the size of an area in an image frame that covers a predetermined grayscale range, and the like.
  • the processing device finds features that can be matched from among the identified features based on the positions of the identified features in the respective image frames. For example, as shown in FIG. 2, after identifying the features in each image frame, the processing device determines that the image frame P1 contains the features a1 and a2, that the image frame P2 contains the features b1, b2, and b3, that the features a1, b1, and b2 all belong to the same type of feature, and that the features a2 and b3 belong to the same type of feature.
  • the processing device may first determine that the feature a1 in the image frame P1 is located on the left side of the feature a2 with a spacing of d1 pixels; it also determines that the feature b1 in the image frame P2 is located on the left side of the feature b3 with a spacing of d1' pixels, and that the feature b2 is located on the right side of the feature b3 with a spacing of d2' pixels.
  • the processing device then matches the positional relationship and pixel spacing of the features b1 and b3, and of the features b2 and b3, against the positional relationship and pixel spacing of the features a1 and a2, and thereby determines that the feature a1 in the image frame P1 matches the feature b1 in the image frame P2, and the feature a2 matches the feature b3.
  • the processing device uses the matched features to locate the position and posture of the robot according to the change in position of the image pixels corresponding to each of the features.
  • the position of the robot can be obtained from the displacement change in the two-dimensional plane, and the posture of the robot can be obtained from the angular change in the two-dimensional plane.
  • the determination of the positions of the matching features in step S110 can be accomplished by tracking the positions of the same features in the two image frames.
  • the tracking module uses a visual tracking technique to track, in the image frame at the current moment, the features identified in the image frame at the previous moment so as to obtain the matching features. For example, taking the position of a feature c_i identified in the image frame P1 at the previous moment as a reference, the tracking module searches for the corresponding feature c_i in a region near the corresponding position in the image frame P2 at the current moment; if the corresponding feature c_i is found, its position in the image frame P2 is obtained, and if not, the feature c_i is determined not to be present in the image frame P2.
  • step S120 is then performed.
  • the tracking module may also use the movement information provided by the motion sensing device in the robot to track the positions of the same features in the two image frames.
  • the hardware circuit of the tracking module is connected to the motion sensing device via a data line, and the movement information corresponding to the acquisition times t1 and t2 of the two image frames P1 and P2 is acquired from the motion sensing device. Using the correspondence and each feature c_i identified in the image frame P1 at the previous moment together with its position in the image frame P1, the tracking module estimates a candidate position in the current-time image frame P2 for the corresponding feature c_i according to the position change described by the movement information, and identifies the corresponding feature c_i near the estimated candidate position; if the corresponding feature c_i is found, its position in the image frame P2 is obtained, and if not, it is determined that the feature c_i is not present in the image frame P2.
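A sketch of this motion-aided tracking is given below. It reuses the pixel-to-metre scale from the earlier initialization sketch, treats the heading change as small enough to ignore when predicting the pixel shift, and relies on an assumed helper descriptor_lookup() that re-detects a feature inside a small window; all of these are assumptions for illustration only.

```python
import numpy as np

def predict_and_search(pts_prev, movement, correspondence, frame_curr,
                       descriptor_lookup, search_radius_px=20):
    """Sketch of tracking aided by the motion sensing device: convert the
    reported displacement between t1 and t2 into an expected pixel shift,
    predict each feature's candidate position in the current frame, and search
    only inside a small window around that candidate position.

    descriptor_lookup(frame, centre, radius): assumed helper that re-detects a
    feature near a given location and returns its position or None.
    """
    dx_m, dy_m, _dyaw = movement                       # heading change ignored here
    px_per_m = 1.0 / correspondence["metres_per_pixel"]
    predicted_shift = np.array([dx_m, dy_m]) * px_per_m  # planar approximation

    tracked = []
    for pt in np.asarray(pts_prev, dtype=float):
        candidate = pt + predicted_shift
        hit = descriptor_lookup(frame_curr, candidate, search_radius_px)
        if hit is not None:            # feature c_i found near the candidate location
            tracked.append((tuple(pt), tuple(hit)))
        # otherwise the feature is treated as absent from the current frame
    return tracked
```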
  • step S120 the position and posture of the robot are determined according to the correspondence relationship and the position.
  • the correspondence includes: the correspondence between the image coordinate system and the physical space coordinate system.
  • the correspondence can be stored in the robot before leaving the factory.
  • the correspondence may also be obtained through field testing at the site of use and then saved.
  • the robot also includes a motion sensing device.
  • the positioning method also acquires the movement information of the robot before performing step S120, and constructs the correspondence based on the positions of the matching features in the two image frames and the movement information acquired from the previous moment to the current moment.
  • the motion sensing device includes, but is not limited to, a displacement sensor, a gyroscope, a speed sensor, a ranging sensor, a cliff sensor, and the like.
  • the motion sensing device continuously detects the movement information and provides it to the processing device.
  • the displacement sensor, gyroscope, speed sensor, etc. can be integrated in one or more chips.
  • the ranging sensor and the cliff sensor may be disposed on a body side of the robot.
  • a ranging sensor in the cleaning robot is disposed at the edge of the housing; a cliff sensor in the cleaning robot is disposed at the bottom of the robot.
  • the movement information that the processing device can acquire includes, but is not limited to, displacement information, angle information, distance information with obstacles, speed information, direction of travel information, and the like.
  • the processing device acquires the movement information provided by the movement sensing device during the movement of the robot and acquires each image frame provided by the imaging device.
  • the processing device may acquire the movement information and at least two image frames for a short period of time during which the robot moves. For example, the processing device acquires the movement information and at least two image frames when it is detected that the robot is moving in a straight line. For another example, the processing device acquires the movement information and at least two image frames when it is detected that the robot is in a turning movement.
  • the processing device identifies and matches the features in each image frame and obtains the image locations of the matched features in each image frame.
  • Features include, but are not limited to, corner features, edge features, line features, curve features, and the like.
  • the processing device can utilize visual tracking techniques to obtain image locations of matching features.
  • the processing device then constructs the correspondence according to the image location and the physical spatial location provided by the mobile information.
  • the processing device may establish the correspondence by constructing a feature coordinate parameter of a physical space coordinate system and an image coordinate system.
  • the processing device may take the physical space position of the robot at the moment the previous image frame was captured as the coordinate origin of the physical space coordinate system, and relate that origin to the corresponding position in the image coordinate system, so as to construct the correspondence between the two coordinate systems.
  • in step S120, the position offset information of the robot from the previous moment to the current moment is determined according to the correspondence and the positions so as to obtain the position and posture of the robot.
  • the processing device may perform coordinate transformation on the positions of the same features in the two image frames according to the correspondence, thereby obtaining the position offset information from the previous moment to the current moment; this position offset information reflects the relative change in the position and posture of the robot from the previous moment to the current moment.
  • this kind of positioning can be used whenever matching features are available. For example, during navigation of the robot's movement, the relative change in position and posture obtained in the above manner makes it possible to quickly determine whether the robot's current movement route has deviated, and to perform subsequent navigation adjustment based on the determination result.
  • the processing device may also combine the movement information provided by the motion sensing device to determine the position and posture of the robot when performing step S120.
  • the step S120 includes: determining a position and a posture of the robot according to the correspondence relationship and a position of the matching feature; and compensating the error in the determined position and posture based on the acquired movement information.
  • when the processing device acquires the two image frames from time t1 to time t2, the movement information is also acquired; the processing device obtains, in the manner of feature recognition and matching described above, multiple features in the two image frames that can be used for positioning together with their positions in the respective image frames, and determines the first position and posture of the robot using the correspondence.
  • the processing device determines, according to the acquired displacement information and angle information, that the robot has moved the distance given by the displacement information along the heading and yaw angle indicated by the angle information, thereby obtaining the second position and posture of the robot.
  • the processing device determines the position and attitude of the robot based on an error between the first position and attitude and the second position and attitude.
  • the processing device may perform weighted averaging on the displacement information and the angle information corresponding to the first and second positions and postures, thereby obtaining the error-compensated position and posture.
  • for example, the processing device performs weighted averaging on the displacement information in the first position and posture and the displacement information in the second position and posture to obtain the displacement information in the compensated position and posture.
  • the processing device performs weighted averaging on the angle change information in the first position and posture and the angle change information in the second position and posture to obtain the angle change information in the compensated position and posture.
  • the processing device may also compensate the error in the position and posture determined based only on the positions of the matching features in the image frames at the two moments, in combination with landmark information constructed from previously matched features.
  • the landmark information is stored in the positioning system.
  • the landmark information includes, but is not limited to, previously matched features, the map data in physical space when the features were photographed, the positions of the features in the corresponding image frames, and attribute information such as the position and posture of the robot when the corresponding features were photographed.
  • the landmark information can be saved along with the map data.
  • in this case, step S120 includes the steps of: determining the position and posture of the robot according to the correspondence and the positions of the matching features; and compensating the error in the determined position and posture based on the stored landmark information corresponding to the matched features.
  • the processing device obtains, in the manner of feature recognition and matching described above, multiple features available for positioning in the two image frames acquired at the two moments together with their positions in the respective image frames, and uses the correspondence to determine the first position and posture of the robot from the previous time t1 to the current time t2.
  • the processing device matches the matched features in the two image frames against the features in the pre-stored landmark information, and uses the other attribute information in the landmark information corresponding to the matched features to determine the position and posture of the robot at each shooting moment, thereby obtaining the second position and posture of the robot from the previous time t1 to the current time t2.
  • the processing device further determines the position and posture of the robot based on an error between the first position and posture and the second position and posture. For example, the processing device performs weighted averaging processing on the displacement information in the first position and the posture and the displacement information in the second position and posture to obtain the displacement information in the compensated position and posture. The processing device further performs weighted averaging processing on the angle change information in the first position and the posture and the angle change information in the second position and posture to obtain the angle change information in the compensated position and posture.
  • the processing device may perform error compensation in combination with an error compensation method including any one or more of the above.
  • the processing device may also improve upon and extend any one or more of the above error compensation methods, and such variations should be considered examples generated based on the positioning technique of the present application.
  • the objects whose features are recorded as landmark information are usually fixed, but in practical applications they do not necessarily remain unchanged.
  • for example, a feature recorded as landmark information may be the outline of a lamp; when the lamp is replaced, the corresponding feature disappears, and when the robot needs to be positioned using that feature, the feature used to compensate the error can no longer be found.
  • the positioning method further comprises the step of updating the stored landmark information based on the matched features.
  • the processing device may acquire information including: matching features, positions of the matching features in the at least one image frame, and information such as the position and posture determined by step S120.
  • the processing device may determine whether to update the saved landmark information by comparing the information stored locally in the storage device with the acquired information. For example, when the processing device, on the basis of similar or identical positions and postures, finds features that are not yet stored in the storage device, the newly matched features are added to the corresponding landmark information. As another example, when the processing device, on the basis of similar or identical positions and postures, finds features stored in the storage device that cannot be matched with the newly matched features, the redundant features saved in the corresponding landmark information are deleted.
  • the processing device may further add new landmark information when the number of features currently matched is higher than a preset threshold.
  • the threshold may be a fixed value or may be set based on the number of features corresponding to the marked location in the map. For example, when the processing device, on the basis of similar or identical positions and postures, finds that the number of newly matched features is greater than the number of features saved in the storage device for the corresponding location, new features may be added to the constructed landmark information.
  • the processing device can also adjust the positions of features and the like marked in the map based on the matched features.
  • FIG. 7 shows a flowchart of a positioning method of the present application in still another embodiment.
  • the positioning method can be performed by a positioning system as shown in FIG. 3, or by another positioning system capable of performing the following steps.
  • the positioning method can be used in a sweeping robot.
  • step S210 movement information and a plurality of image frames of the robot during the movement are acquired.
  • the mobile sensing device and the imaging device of the robot acquire moving information and image frames in real time during the movement of the robot.
  • This step can utilize the processing device to acquire the movement information and at least two image frames for a short period of time during which the robot moves.
  • in step S220, two image frames of the previous moment and the current moment are acquired, and the correspondence between the image coordinate system and the physical space coordinate system is constructed according to the positions of the matching features in the two image frames and the movement information acquired between the two moments.
  • the processing device identifies and matches features in each image frame and obtains image locations of the matching features in each image frame.
  • Features include, but are not limited to, corner features, edge features, line features, curve features, and the like.
  • the processing device can utilize visual tracking techniques to obtain image locations of matching features.
  • the processing device may establish the correspondence by constructing a feature coordinate parameter of a physical space coordinate system and an image coordinate system.
  • the processing device may take the physical space position of the robot at the moment the previous image frame was captured as the coordinate origin of the physical space coordinate system, and relate that origin to the corresponding position in the image coordinate system, so as to construct the correspondence between the two coordinate systems.
  • the positioning system performs step S230, that is, determines the position and posture of the robot by using the correspondence relationship.
  • the processing device may acquire an image frame captured by the imaging device, and identify a feature from the image frame, and determine, by using the correspondence relationship, a position of a feature in the image frame corresponding to a position in the physical space, With the accumulation of multi-frame images, the position and posture of the robot can be determined.
  • the step S230 includes the steps of acquiring matching features in the current time image frame and the previous time image frame, and determining the position and posture of the robot according to the correspondence and the feature.
  • the processing device may acquire two image frames of the last time t1 and the current time t2 according to a preset time interval or an image frame number interval, and identify and match features in the two image frames.
  • the time interval may be selected between several milliseconds and several hundred milliseconds, and the image frame number interval may be selected between 0 frames and tens of frames.
  • Such features include, but are not limited to, shape features, grayscale features, and the like.
  • the shape features include, but are not limited to, corner features, line features, edge features, curved features, and the like.
  • the grayscale features include, but are not limited to, a grayscale transition feature, a grayscale value above or below a grayscale threshold, the size of an area in an image frame that covers a predetermined grayscale range, and the like.
  • the processing device finds features that can be matched from among the identified features based on the positions of the identified features in the respective image frames. For example, as shown in FIG. 2, after identifying the features in each image frame, the processing device determines that the image frame P1 contains the features a1 and a2, that the image frame P2 contains the features b1, b2, and b3, that the features a1, b1, and b2 all belong to the same type of feature, and that the features a2 and b3 belong to the same type of feature. The processing device may first determine that the feature a1 in the image frame P1 is located on the left side of the feature a2 with a spacing of d1 pixels; it also determines that the feature b1 in the image frame P2 is located on the left side of the feature b3 with a spacing of d1' pixels, and that the feature b2 is located on the right side of the feature b3 with a spacing of d2' pixels.
  • the processing device then matches the positional relationship and pixel spacing of the features b1 and b3, and of the features b2 and b3, against the positional relationship and pixel spacing of the features a1 and a2, and thereby determines that the feature a1 in the image frame P1 matches the feature b1 in the image frame P2, and the feature a2 matches the feature b3.
  • the processing device uses the matched features to locate the position and posture of the robot according to the change in position of the image pixels corresponding to each of the features.
  • the position of the robot can be obtained from the displacement change in the two-dimensional plane, and the posture of the robot can be obtained from the angular change in the two-dimensional plane.
  • the processing device may determine the image position offset information of the multiple features in the two image frames, or determine the physical position offset information of the multiple features in physical space according to the correspondence, and integrate either kind of position offset information to calculate the relative position and posture of the robot from time t1 to time t2.
  • for example, the processing device obtains the following position and posture of the robot from the time t1 at which the image frame P1 was captured to the time t2 at which the image frame P2 was captured: moved a distance m on the ground and rotated n degrees to the left.
  • when the robot has already established a map, the position and posture obtained by the processing device can help the robot determine whether it is on its navigation route.
  • when the robot has not yet established a map, the position and posture obtained by the processing device can help the robot determine the relative displacement and the relative rotation angle, and use these data for map drawing.
  • the step S230 includes: determining a position and a posture of the robot according to the correspondence relationship and a position of the matching feature; and a step of compensating for the error in the determined position and posture based on the acquired movement information.
  • when the processing device acquires the two image frames from time t1 to time t2, the movement information is also acquired; the processing device obtains, in the manner of feature recognition and matching described above, multiple features in the two image frames that can be used for positioning together with their positions in the respective image frames, and determines the first position and posture of the robot using the correspondence.
  • the processing device determines, according to the acquired displacement information and angle information, that the robot has moved the distance given by the displacement information along the heading and yaw angle indicated by the angle information, thereby obtaining the second position and posture of the robot.
  • the processing device determines the position and attitude of the robot based on an error between the first position and attitude and the second position and attitude.
  • the processing device may perform weighted averaging on the displacement information and the angle information corresponding to the first and second positions and postures, thereby obtaining the error-compensated position and posture.
  • for example, the processing device performs weighted averaging on the displacement information in the first position and posture and the displacement information in the second position and posture to obtain the displacement information in the compensated position and posture.
  • the processing device also performs weighted averaging on the angle change information in the first position and posture and the angle change information in the second position and posture to obtain the angle change information in the compensated position and posture.
  • the processing device may also compensate the error in the position and posture determined based only on the positions of the matching features in the image frames at the two moments, in combination with landmark information constructed from matched features.
  • the landmark information is stored in the storage device.
  • the landmark information includes, but is not limited to, previously matched features, the map data in physical space when the features were photographed, the positions of the features in the corresponding image frames, and attribute information such as the position and posture of the robot when the corresponding features were photographed.
  • the landmark information may be stored with the map data in the storage device.
  • the step S230 further includes: determining a position and a posture of the robot according to the correspondence relationship and a position of the matching feature; and a step of compensating the error in the determined position and posture based on the stored landmark information of the corresponding matching feature.
  • the processing device obtains, in the manner of feature recognition and matching described above, multiple features available for positioning in the two image frames acquired at the two moments together with their positions in the respective image frames, and uses the correspondence to determine the first position and posture of the robot from the previous time t1 to the current time t2.
  • the processing device matches the matched features in the two image frames against the features in the pre-stored landmark information, and uses the other attribute information in the landmark information corresponding to the matched features to determine the position and posture of the robot at each shooting moment, thereby obtaining the second position and posture of the robot from the previous time t1 to the current time t2.
  • the processing device further determines the position and posture of the robot based on an error between the first position and posture and the second position and posture. For example, the processing device performs weighted averaging processing on the displacement information in the first position and the posture and the displacement information in the second position and posture to obtain the displacement information in the compensated position and posture. The processing device further performs weighted averaging processing on the angle change information in the first position and the posture and the angle change information in the second position and posture to obtain the angle change information in the compensated position and posture.
  • the processing device may perform error compensation in combination with an error compensation method including any one or more of the above.
  • the processing device may also improve upon and extend any one or more of the above error compensation methods, and such variations should be considered examples generated based on the positioning technique of the present application.
  • the objects whose features are recorded as landmark information are usually fixed, but in practical applications they do not necessarily remain unchanged.
  • for example, a feature recorded as landmark information may be the outline of a lamp; when the lamp is replaced, the corresponding feature disappears, and when the robot needs to be positioned using that feature, the feature used to compensate the error can no longer be found.
  • the positioning method further comprises the step of updating the stored landmark information based on the matched features.
  • the processing device may acquire: a matching feature, a position of the matching feature in the at least one image frame, and information such as the position and posture determined in the foregoing step S230.
  • the processing device may determine whether to update the saved landmark information by comparing the information stored locally in the storage device with the acquired information. For example, when the processing device, on the basis of similar or identical positions and postures, finds features that are not yet stored in the storage device, the newly matched features are added to the corresponding landmark information. As another example, when the processing device, on the basis of similar or identical positions and postures, finds features stored in the storage device that cannot be matched with the newly matched features, the redundant features saved in the corresponding landmark information are deleted.
  • the processing device may further add new landmark information when the number of features currently matched is higher than a preset threshold.
  • the threshold may be a fixed value or may be set based on the number of features corresponding to the marked location in the map. For example, when the processing device, on the basis of similar or identical positions and postures, finds that the number of newly matched features is greater than the number of features saved for the corresponding location, new features may be added to the constructed landmark information.
  • the processing device can also adjust the positions of features and the like marked in the map based on the matched features.
  • the present application determines the position and posture of the robot from the positional offset information of the feature points matched between two image frames captured by the imaging device, which can effectively reduce the error in the movement information provided by the sensors in the robot.
  • in addition, the positional offset information of the matched feature points in the two image frames and the movement information provided by the sensors are used to initialize the correspondence between the image coordinate system and the physical space coordinate system, which both achieves the goal of positioning with a monocular imaging device and effectively solves the problem of sensor error accumulation.

Abstract

A positioning system and method, and a robot to which they apply. The positioning system at least includes: a storage device (12) storing a correspondence between an image coordinate system and a physical space coordinate system; an imaging device (11) for capturing image frames while the robot moves; and a processing device (13), connected to the imaging device (11) and the storage device (12), for acquiring the positions of matching features in the current-time image frame and the previous-time image frame and determining the position and posture of the robot according to the correspondence and those positions. Determining the position and posture of the robot from the positional offset information of feature points matched between the two image frames captured by the imaging device can reduce the error in the movement information provided by sensors in the robot.

Description

Positioning system and method, and robot to which they apply
Technical Field
The present application relates to the technical field of indoor positioning, and in particular to a positioning system, a positioning method, and a robot to which they apply.
Background
A mobile robot is a machine device that performs work automatically. It can accept human command, run pre-arranged programs, or act according to principles formulated with artificial intelligence techniques. Such mobile robots can be used indoors or outdoors, in industry or in the home; they can replace security patrols or manual floor cleaning, and can also be used for family companionship, office assistance, and so on. Because the fields in which mobile robots are applied differ, the ways the mobile robots used in each field move also differ; for example, a mobile robot may adopt wheeled movement, walking movement, or tracked movement. As the movement technology of mobile robots is updated and iterated, simultaneous localization and mapping (SLAM) based on the movement information provided by sensors is used to provide mobile robots with more accurate navigation capability, so that they can move autonomously more effectively. However, taking a sweeping robot as an example, the distance its rollers actually travel differs on floors of different materials, so the map built by SLAM technology in this field may differ considerably from the map of the actual physical space.
Summary of the Invention
In view of the above shortcomings of the prior art, the purpose of the present application is to provide a positioning system, a positioning method, and a robot to which they apply, so as to solve the problem in the prior art that positioning of a robot based on data provided by sensors is inaccurate.
To achieve the above and other related purposes, a first aspect of the present application provides a positioning system for a robot, comprising: a storage device storing a correspondence between an image coordinate system and a physical space coordinate system; an imaging device for capturing image frames while the robot moves; and a processing device, connected to the imaging device and the storage device, for acquiring the positions of matching features in the current-time image frame and the previous-time image frame and determining the position and posture of the robot according to the correspondence and the positions.
In some embodiments of the first aspect, the optical axis of the field of view of the imaging device is within ±30° of the vertical or at 60-120° relative to the horizontal.
In some embodiments of the first aspect, the processing device includes a tracking module, connected to the imaging device, for tracking the positions of the same features in two image frames.
In some embodiments of the first aspect, the positioning system further includes a motion sensing device, connected to the processing device, for acquiring movement information of the robot.
In some embodiments of the first aspect, the processing device includes an initialization module for constructing the correspondence based on the positions of the matching features in the two image frames and the movement information acquired from the previous moment to the current moment.
In some embodiments of the first aspect, the processing device includes: a first positioning module for determining the position and posture of the robot according to the correspondence and the positions of the matching features; and a first positioning compensation module for compensating the error in the determined position and posture based on the acquired movement information.
In some embodiments of the first aspect, the storage device further stores landmark information constructed based on the matched features.
In some embodiments of the first aspect, the processing device includes: a second positioning module for determining the position and posture of the robot according to the correspondence and the positions of the matching features; and a second positioning compensation module for compensating the error in the determined position and posture based on the stored landmark information corresponding to the matched features.
In some embodiments of the first aspect, the processing device includes an update module for updating the stored landmark information based on the matched features.
A second aspect of the present application provides a positioning system for a robot, comprising: a motion sensing device for acquiring movement information of the robot; an imaging device for capturing image frames while the robot moves; a processing device, connected to the imaging device and the motion sensing device, for acquiring two image frames of a previous moment and a current moment, constructing a correspondence between an image coordinate system and a physical space coordinate system based on the positions of matching features in the two image frames and the movement information acquired between the two moments, and determining the position and posture of the robot using the correspondence; and a storage device, connected to the processing device, for storing the correspondence.
In some embodiments of the second aspect, the optical axis of the field of view of the imaging device is within ±30° of the vertical or at 60-120° relative to the horizontal.
In some embodiments of the second aspect, the processing device is used to acquire the positions of matching features in the current-time image frame and the previous-time image frame and determine the position and posture of the robot according to the correspondence and the positions.
In some embodiments of the second aspect, the processing device includes a tracking module, connected to the imaging device, for tracking the positions of the same features in two image frames.
In some embodiments of the second aspect, the processing device includes: a first positioning module for determining the position and posture of the robot according to the correspondence and the positions of the matching features; and a first positioning compensation module for compensating the error in the determined position and posture based on the acquired movement information.
In some embodiments of the second aspect, the storage device further stores landmark information constructed based on the matched features.
In some embodiments of the second aspect, the processing device includes: a second positioning module for determining the position and posture of the robot according to the correspondence and the positions of the matching features; and a second positioning compensation module for compensating the error in the determined position and posture based on the stored landmark information corresponding to the matched features.
In some embodiments of the second aspect, the processing device includes an update module for updating the stored landmark information based on the matched features.
本申请的第三方面提供一种机器人,包括:如上述第一方面所提供的任一定位系统;或者如上述第二方面所提供的任一定位系统;移动装置;控制装置,用于基于所述定位系统所提供的位置及姿态控制所述移动装置进行移动操作。
本申请的第四方面提供一种机器人定位方法,包括:获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置;依据所述对应关系和所述位置确定机器人的位置及姿态;其中,所述对应关系包括:图像坐标系与物理空间坐标系的对应关系。
在所述第四方面的某些实施方式中,所述获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置的方式包括跟踪两幅图像帧中包含相同特征的位置。
在所述第四方面的某些实施方式中,还包括获取机器人的移动信息的步骤。
在所述第四方面的某些实施方式中,还包括基于两幅图像帧中相匹配特征的位置和自所述上一时刻至所述当前时刻所获取的移动信息,构建所述对应关系的步骤。
在所述第四方面的某些实施方式中,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;基于所获取的移动信息补偿所确定的位置及姿态中的误差。
在所述第四方面的某些实施方式中,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;基于预存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。
在所述第四方面的某些实施方式中,所述定位方法还包括基于相匹配的特征更新所存储的地标信息的步骤。
本申请的第五方面还提供一种机器人的定位方法,包括:获取机器人在移动期间的移动信息和多幅图像帧;获取上一时刻和当前时刻的两幅图像帧,并依据两幅所述图像帧中相匹配特征的位置和在该两时刻期间所获取的移动信息,构建图像坐标系与物理空间坐标系对应关系;利用所述对应关系确定机器人的位置及姿态。
在所述第五方面的某些实施方式中,所述利用对应关系确定机器人的位置及姿态的方式包括:获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置;依据所述对应关系和所述位置确定机器人的位置及姿态。
在所述第五方面的某些实施方式中,所述获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置的方式包括:跟踪两幅图像帧中包含相同特征的位置。
在所述第五方面的某些实施方式中,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;基于所获取的移动信息补偿所确定的位置及姿态中的误差。
在所述第五方面的某些实施方式中,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;基于预存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。
在所述第五方面的某些实施方式中,所述定位方法还包括基于相匹配的特征更新所存储的地标信息的步骤。
如上所述,本申请的定位系统、方法及所适用的机器人,具有以下有益效果:借助摄像装置自两图像帧中所匹配的特征点的位置偏移信息来确定机器人的位置及姿态,可有效减少机器人中传感器所提供移动信息的误差。另外,利用两图像帧中所匹配的特征点的位置偏移信息和传感器所提供的移动信息来初始化图像坐标系与物理空间坐标系的对应关系,既实现了利用单目摄像装置进行定位的目标,又有效解决了传感器误差累积的问题。
附图说明
图1显示为本申请的定位系统在一种实施方式中的结构示意图。
图2显示为本申请的定位系统中两幅图像帧中特征匹配的示意图。
图3显示为本申请的定位系统在又一种实施方式中的结构示意图。
图4显示为本申请的机器人在一种实施方式中的结构示意图。
图5显示为本申请的机器人在又一种实施方式中的结构示意图。
图6显示为本申请的定位方法在一种实施方式中的流程图。
图7显示为本申请的定位方法在又一种实施方式中的流程图。
具体实施方式
以下由特定的具体实施例说明本申请的实施方式,熟悉此技术的人士可由本说明书所揭露的内容轻易地了解本申请的其他优点及功效。
在下述描述中，参考附图，附图描述了本申请的若干实施例。应当理解，还可使用其他实施例，并且可以在不背离本申请的精神和范围的情况下进行机械组成、结构、电气以及操作上的改变。下面的详细描述不应该被认为是限制性的，并且本申请的实施例的范围仅由公布的专利的权利要求书所限定。这里使用的术语仅是为了描述特定实施例，而并非旨在限制本申请。空间相关的术语，例如“上”、“下”、“左”、“右”、“下面”、“下方”、“下部”、“上方”、“上部”等，可在文中使用以便于说明图中所示的一个元件或特征与另一元件或特征的关系。
虽然在一些实例中术语第一、第二等在本文中用来描述各种元件,但是这些元件不应当被这些术语限制。这些术语仅用来将一个元件与另一个元件进行区分。例如,第一预设阈值可以被称作第二预设阈值,并且类似地,第二预设阈值可以被称作第一预设阈值,而不脱离各种所描述的实施例的范围。第一预设阈值和预设阈值均是在描述一个阈值,但是除非上下文以其他方式明确指出,否则它们不是同一个预设阈值。相似的情况还包括第一音量与第二音量。
再者，如同在本文中所使用的，单数形式“一”、“一个”和“该”旨在也包括复数形式，除非上下文中有相反的指示。应当进一步理解，术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组，但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。此处使用的术语“或”和“和/或”被解释为包括性的，或意味着任一个或任何组合。因此，“A、B或C”或者“A、B和/或C”意味着“以下任一个：A；B；C；A和B；A和C；B和C；A、B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时，才会出现该定义的例外。
移动机器人基于不断定位的积累并结合其他预设的或所获取的与移动相关的信息,一方面能够构建机器人所在场地的地图数据,另一方面,还可基于已构建的地图数据提供路线规划、路线规划调整及导航服务。这使得移动机器人的移动效率更高。以扫地机器人为例,例如,室内扫地机器人可结合已构建的室内地图和定位技术,预判当前位置相距室内地图上标记的障碍物的距离,并便于及时调整清扫策略。其中,所述障碍物可由单一标记描述,或基于对形状、尺寸等特征而被标记成墙、桌、沙发、衣柜等。又如,室内扫地机器人可基于定位技术累积所定位的各位置和姿态,并根据累积的位置及姿态的变化构建室内地图。以巡逻机器人为例,巡逻机器人通常应用于厂区、工业园区等场景,巡逻机器人可结合已构建的厂区地图和定位技术,预判当前位置相距转弯处、路口、充电桩等位置的距离,由此便于根据所获取的其他监控数据及时控制机器人的移动装置进行移动。
基于上述移动机器人的示例而推及至其他应用场景下所使用的移动机器人,为了提高移动机器人的定位精度,减少传感器的误差累积,本申请提供一种机器人的定位系统。所述定位系统可配置于扫地机器人中。请参见图1,其显示为所述定位系统的结构示意图。所述定位系统1包含存储装置12、摄像装置11、处理装置13。
在此,所述存储装置12包括但不限于高速随机存取存储器、非易失性存储器。例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。在某些实施例中,存储装置12还可以包括远离一个或多个处理器的存储器,例如,经由RF电路或外部端口以及通信网 络(未示出)访问的网络附加存储器,其中所述通信网络可以是因特网、一个或多个内部网、局域网(LAN)、广域网(WLAN)、存储局域网(SAN)等,或其适当组合。存储器控制器可控制机器人的诸如CPU和外设接口之类的其他组件对存储装置的访问。
所述摄像装置11包括但不限于:照相机、视频摄像机、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。所述摄像装置的供电系统可受机器人的供电系统控制,当机器人上电移动期间,所述摄像装置11即开始摄取图像帧,并提供给处理装置13。例如,扫地机器人中的摄像装置将所摄取的室内图像帧以预设视频格式缓存在存储装置中,并由处理装置获取。所述摄像装置11用于在机器人移动期间摄取图像帧。在此,所述摄像装置11可设置于机器人的顶部。例如,扫地机器人的摄像装置设置于其顶盖的中部、或边缘上。摄像装置的视野光学轴相对于垂线为±30°或水平线为60-120°。例如,扫地机器人的摄像装置的光学轴相对于垂线的夹角为-30°、-29°、-28°、-27°……-1°、0°、1°、2°……29°、或30°。又如,扫地机器人的摄像装置的光学轴相对于水平线的夹角为60°、61°、62°……119°、120°。需要说明的是,本领域技术人员应该理解,上述光学轴与垂线或水平线的夹角仅为举例,而非限制其夹角精度为1°的范围内,根据实际机器人的设计需求,所述夹角的精度可更高,如达到0.1°、0.01°以上等,在此不做无穷尽的举例。
所述处理装置13包括一个或多个处理器。处理装置13可操作地与存储装置12中的易失性存储器和/或非易失性存储器耦接。处理装置13可执行在存储器和/或非易失性存储设备中存储的指令以在机器人中执行操作,诸如提取图像帧中的特征以及基于特征在地图中进行定位等。如此,处理器可包括一个或多个通用微处理器、一个或多个专用处理器(ASIC)、一个或多个数字信号处理器(DSP)、一个或多个现场可编程逻辑阵列(FPGA)、或它们的任何组合。所述处理装置还与I/O端口和输入结构可操作地耦接,该I/O端口可使得机器人能够与各种其他电子设备进行交互,该输入结构可使得用户能够与计算设备进行交互。因此,输入结构可包括按钮、键盘、鼠标、触控板等。所述其他电子设备可以是所述机器人中移动装置中的移动电机,或机器人中专用于控制移动装置和清扫装置的从处理器,如微控制单元(Microcontroller Unit,简称MCU)。
在一种示例中,所述处理装置13通过数据线分别连接存储装置12和摄像装置11。所述处理装置13通过数据读写技术与存储装置12进行交互,所述处理装置13通过接口协议与摄像装置11进行交互。其中,所述数据读写技术包括但不限于:高速/低速数据接口协议、数据库读写操作等。所述接口协议包括但不限于:HDMI接口协议、串行接口协议等。
所述存储装置12存储有图像坐标系与物理空间坐标系的对应关系。其中，所述图像坐标系是基于图像像素点而构建的图像坐标系，摄像装置11所摄取的图像帧中各图像像素点的二维坐标参数可由所述图像坐标系描述。所述图像坐标系可为直角坐标系或极坐标系等。所述物理空间坐标系即基于实际二维或三维物理空间中各位置而构建的坐标系，其物理空间位置可依据预设的图像像素单位与单位长度（或单位角度）的对应关系而被描述在所述物理空间坐标系中。所述物理空间坐标系可为二维直角坐标系、极坐标系、球坐标系、三维直角坐标系等。
对于所使用场景的地面复杂度不高的机器人来说,该对应关系可在出厂前预存在所述存储装置中。然而,对于使用场景的地面复杂度较高的机器人,例如扫地机器人来说,可利用在所使用的场地进行现场测试的方式得到所述对应关系并保存在所述存储装置中。在一些实施方式中,所述机器人还包括移动传感装置(未予图示),用于获取机器人的移动信息。其中,所述移动传感装置包括但不限于:位移传感器、陀螺仪、速度传感器、测距传感器、悬崖传感器等。在机器人移动期间,移动传感装置不断侦测移动信息并提供给处理装置。所述位移传感器、陀螺仪、速度传感器等可被集成在一个或多个芯片中。所述测距传感器和悬崖传感器可设置在机器人的体侧。例如,扫地机器人中的测距传感器被设置在壳体的边缘;扫地机器人中的悬崖传感器被设置在机器人底部。根据机器人所布置的传感器的类型和数量,处理装置所能获取的移动信息包括但不限于:位移信息、角度信息、与障碍物之间的距离信息、速度信息、行进方向信息等。
为了构建所述对应关系,在一些实施方式中,所述处理装置中的初始化模块基于两幅图像帧中相匹配特征的位置和自所述上一时刻至所述当前时刻所获取的移动信息,构建所述对应关系。在此,所述初始化模块可以是一种程序模块,其程序部分存储在存储装置中,并经由处理装置的调用而被执行。当所述存储装置中未存储所述对应关系时,所述处理装置调用初始化模块以构建所述对应关系。
在此,初始化模块在机器人移动期间获取移动传感装置所提供的移动信息以及获取摄像装置所提供的各图像帧。为了减少移动传感装置的累积误差,所述初始化模块可在机器人移动的一小段时间内获取所述移动信息和至少两幅图像帧。例如,所述初始化模块在监测到机器人处于直线移动时,获取所述移动信息和至少两幅图像帧。又如,所述初始化模块在监测到机器人处于转弯移动时,获取所述移动信息和至少两幅图像帧。
接着,初始化模块对各图像帧中的特征进行识别和匹配并得到相匹配特征在各图像帧中的图像位置。其中特征包括但不限于角点特征、边缘特征、直线特征、曲线特征等。例如,所述初始化模块可依据所述处理装置中的跟踪模块来获取相匹配特征的图像位置。所述跟踪模块用于跟踪两幅图像帧中包含相同特征的位置。
所述初始化模块再根据所述图像位置和移动信息所提供的物理空间位置来构建所述对应关系。在此,所述初始化模块可通过构建物理空间坐标系和图像坐标系的特征坐标参数来建立所述对应关系。例如,所述初始化模块可依据所拍摄上一时刻图像帧所在物理空间位置为物理空间坐标系的坐标原点,并将该坐标原点与图像帧中相匹配的特征在图像坐标系中的位置进行对应,从而构建两个坐标系的对应关系。
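为便于理解上述初始化过程，下面给出一段示意性的Python代码草图（仅为在平面移动、尺度近似均匀等假设下的简化说明，并非本申请对应关系构建方式的限定，其中的函数名与数据结构均为说明所假设）：

import numpy as np

def build_correspondence(pts_t1, pts_t2, move_dist):
    # pts_t1/pts_t2: 相匹配特征在上一时刻与当前时刻图像帧中的像素坐标列表 [(u, v), ...]
    # move_dist: 自上一时刻至当前时刻由移动传感装置提供的物理位移（单位：米）
    p1 = np.asarray(pts_t1, dtype=float)
    p2 = np.asarray(pts_t2, dtype=float)
    # 取各匹配特征位移向量的均值，作为两帧间的整体像素位移
    pixel_shift = float(np.linalg.norm((p2 - p1).mean(axis=0)))
    if pixel_shift < 1e-6:
        raise ValueError("像素位移过小，无法初始化对应关系")
    scale = move_dist / pixel_shift      # 单位像素对应的物理长度
    # 以拍摄上一时刻图像帧时机器人所在的物理位置为物理空间坐标系的坐标原点
    return {"origin": (0.0, 0.0), "scale": scale}

例如，若两帧间相匹配特征平均偏移约50个像素，而移动信息表明机器人前移了0.1米，则可得尺度因子约为0.002米/像素；实际实现中还可结合摄像装置的内参等信息细化该对应关系。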
需要说明的是,所述初始化模块的工作过程可以基于用户的指令来执行,或对用户透明。例如,所述初始化模块的执行过程是基于所述存储装置中未存储所述对应关系、或所述对应关系需要被更新时而启动的。在此不做限制。
所述对应关系可由对应算法的程序、数据库等方式保存在所述存储装置中。为此,存储在存储器中的软件组件包括操作系统、通信模块(或指令集)、接触/运动模块(或指令集)、图形模块(或指令集)、以及应用(或指令集)。此外,存储装置还保存有包含摄像装置所拍摄的图像帧、处理装置在进行定位运算时所得到的位置及姿态在内的临时数据或持久数据。
在构建了所述对应关系后,所述处理装置获取当前时刻图像帧和上一时刻图像帧中相匹配的特征,并依据所述对应关系和所述特征确定机器人的位置及姿态。
在此,所述处理装置13可按照预设的时间间隔或图像帧数量间隔获取上一时刻t1和当前时刻t2的两幅图像帧,识别并匹配两幅图像帧中的特征。其中,根据所述定位系统所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像帧数量间隔可在0帧至几十帧之间选择。所述特征包括但不限于:形状特征、和灰度特征等。所述形状特征包括但不限于:角点特征、直线特征、边缘特征、曲线特征等。所述灰度色特征包括但不限于:灰度跳变特征、高于或低于灰度阈值的灰度值、图像帧中包含预设灰度范围的区域尺寸等。
为了能够准确定位，所匹配特征的数量通常为多个，例如在10个以上。为此，所述处理装置13根据所识别的特征在各自图像帧中的位置，从所识别出的特征中寻找能够匹配的特征。例如，请参阅图2，其显示为在t1时刻和t2时刻所获取的两幅图像帧中相匹配特征的位置变化关系示意图。所述处理装置13在识别出各图像帧中的特征后，确定图像帧P1中包含特征a1和a2，图像帧P2中包含特征b1、b2和b3，且特征a1与b1和b2均属于同一特征，特征a2与b3属于同一特征，所述处理装置13可先确定在图像帧P1中的特征a1位于特征a2的左侧且间距为d1像素点；同时还确定在图像帧P2中的特征b1位于特征b3的左侧且间距为d1’像素点，以及特征b2位于特征b3右侧且间距为d2’像素点。处理装置13根据特征b1与b3的位置关系、特征b2与b3的位置关系分别与特征a1与a2的位置关系，以及特征b1与b3的像素间距、特征b2与b3的像素间距分别与特征a1与a2的像素间距进行匹配，从而得到图像帧P1中特征a1与图像帧P2中特征b1相匹配，特征a2与特征b3相匹配。以此类推，处理装置13记录所匹配的各特征，以便于依据各所述特征所对应的图像像素的位置变化来定位机器人的位置及姿态。其中，所述机器人的位置可依据在二维平面内的位移变化而得到，所述姿态可依据在二维平面内的角度变化而得到。
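为说明上述依据特征间位置关系与像素间距进行匹配的思路，下面给出一段穷举式的Python示意代码（仅适用于特征数较少的说明场景，且只比较像素间距的一致性，未体现左右位置关系、特征描述子等约束；函数名与数据结构均为说明所假设）：

from itertools import permutations

def pixel_dist(a, b):
    # 两个图像像素点之间的像素间距
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match_by_spacing(feats_p1, feats_p2):
    # feats_p1 形如 {"a1": (u, v), "a2": (u, v)}，feats_p2 形如 {"b1": (u, v), "b2": (u, v), "b3": (u, v)}
    names1, pts1 = list(feats_p1), list(feats_p1.values())
    names2, pts2 = list(feats_p2), list(feats_p2.values())
    best, best_cost = None, float("inf")
    # 穷举P2中与P1等数量特征的对应方式，以两两像素间距之差的总和作为评分
    for combo in permutations(range(len(names2)), len(names1)):
        cost = 0.0
        for i in range(len(names1)):
            for j in range(i + 1, len(names1)):
                cost += abs(pixel_dist(pts1[i], pts1[j]) -
                            pixel_dist(pts2[combo[i]], pts2[combo[j]]))
        if cost < best_cost:
            best_cost = cost
            best = [(names1[i], names2[combo[i]]) for i in range(len(names1))]
    return best

对于图2所示的示例布局，该草图可得到特征a1与b1、特征a2与b3相匹配的结果；当存在间距相近的多种配对方式时，结果取决于枚举顺序，实际实现通常还需结合方向关系与特征描述子加以区分。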
在此,处理装置13可以根据所述对应关系,确定两幅图像帧中多个特征的图像位置偏移信息、或确定多个特征在物理空间中的物理位置偏移信息,并综合所得到的任一种位置偏移信息来计算机器人自t1时刻至t2时刻的相对位置及姿态。例如,通过坐标变换,所述处理装置13得到机器人从摄取图像帧P1时刻t1至摄取图像帧P2时刻t2的位置和姿态为:在地面上移动了m长度以及向左旋转了n度角。以扫地机器人为例,当扫地机器人已建立地图时,依据所述处理装置13得到的位置及姿态可帮助机器人确定是否在导航的路线上。当扫地机器人未建立地图时,依据所述处理装置13得到的位置及姿态可帮助机器人确定相对位移和相对转角,并借此数据进行地图绘制。
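下面以二维刚体变换估计为例，给出由相匹配特征位置解算相对位置及姿态的一种示意性Python实现（假设特征近似位于同一平面、尺度因子scale来自前述对应关系；图像中特征运动与机器人自身运动之间的方向约定取决于摄像装置的安装朝向，此处仅示意计算步骤）：

import numpy as np

def rigid_transform_2d(pts_t1, pts_t2, scale):
    # pts_t1/pts_t2: 相匹配特征在两幅图像帧中的像素坐标；scale: 前述对应关系给出的尺度因子
    A = np.asarray(pts_t1, dtype=float)
    B = np.asarray(pts_t2, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # 排除反射解，保证为纯旋转
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    theta = float(np.degrees(np.arctan2(R[1, 0], R[0, 0])))   # 两帧间的相对转角（度）
    t_pixel = cb - R @ ca                # 图像坐标系下的平移量
    t_phys = scale * t_pixel             # 依据对应关系换算为物理空间中的位移
    return t_phys, theta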
在某些实施方式中,所述处理装置13包括:跟踪模块和定位模块。其中,所述跟踪模块和定位模块可共用处理装置13中的处理器等硬件电路,并基于程序接口实现数据交互和指令调用。
其中,所述跟踪模块与所述摄像装置11相连,用于跟踪两幅图像帧中包含相同特征的位置。
在一些实施方式中，所述跟踪模块可利用视觉跟踪技术对上一时刻的图像帧中的特征在当前时刻的图像帧中进行追踪以得到相匹配的特征。例如，以上一时刻图像帧P1中识别出的特征ci在该图像帧P1中的位置为基准，所述跟踪模块对当前时刻图像帧P2中对应位置附近的区域是否包含相应特征ci进行判断，若找到相应特征ci，则获取该特征ci在图像帧P2中的位置，若未找到相应特征ci，则认定该特征ci不在图像帧P2中。如此当收集到所跟踪的多个特征和各特征在各自图像帧中的位置时，将各所述特征及其各位置提供给定位模块。
在又一些实施方式中,所述跟踪模块还利用机器人中的移动传感装置所提供的移动信息来跟踪两幅图像帧中包含相同特征的位置。例如,所述跟踪模块的硬件电路通过数据线连接移动传感装置,并从所述移动传感装置获取与两幅图像帧P1和P2的获取时刻t1和t2相对应的移动信息,利用所述对应关系和上一时刻图像帧P1中所识别的各特征ci及其在图像帧P1中的位置,估计经过所述移动信息所描述的位置变化对应特征ci在当前时刻图像帧P2中的候选位置,并在所估计的候选位置附近识别对应特征ci,若找到相应特征ci,则获取该特征ci在图像帧P2中的位置,若未找到相应特征ci,则认定该特征ci不在图像帧P2中。如此当收集到所跟踪的特征(即相匹配的特征)及其各位置时,将各所述特征及其位置提供给定 位模块。
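下面给出一段利用移动信息预测候选位置并在其邻域内做模板搜索的Python示意代码（以差方和作为相似度度量仅为举例；pix_shift为依据移动信息与所述对应关系换算出的预计像素位移，各参数名均为说明所假设）：

import numpy as np

def track_with_prediction(frame2, feats_p1, pix_shift, win=20, half=8, max_ssd=1e5):
    # frame2: 当前时刻的灰度图像帧（二维数组）；feats_p1: {特征名: (u, v, 模板patch)}
    # pix_shift: 依据移动信息与对应关系换算出的预计像素位移 (du, dv)
    h, w = frame2.shape
    du, dv = pix_shift
    tracked = {}
    for name, (u, v, tmpl) in feats_p1.items():
        cu, cv = int(round(u + du)), int(round(v + dv))        # 当前帧中的候选位置
        best, best_ssd = None, max_ssd
        for vv in range(max(half, cv - win), min(h - half, cv + win)):
            for uu in range(max(half, cu - win), min(w - half, cu + win)):
                cand = frame2[vv - half:vv + half, uu - half:uu + half].astype(float)
                ssd = float(((cand - tmpl) ** 2).sum())
                if ssd < best_ssd:
                    best, best_ssd = (uu, vv), ssd
        # 在候选位置邻域内找到足够相似的区域则记录其位置，否则认定该特征不在当前帧中
        if best is not None:
            tracked[name] = best
    return tracked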
所述定位模块用于依据所述对应关系及所述位置确定机器人自所述上一时刻至所述当前时刻的位置偏移信息以得到所述机器人的位置及姿态。
在此,所述定位模块可由多个程序模块组合而成,也可为单一的程序模块。例如,为了快速得到机器人的相对位置及姿态变化,则可仅由所述定位模块依据所述对应关系对同一特征在两幅图像帧中的位置进行坐标变换,即能得到自上一时刻至当前时刻的位置偏移信息,该位置偏移信息反映了所述机器人自上一时刻至当前时刻的相对位置及姿态变化。该种定位方式可用于在相匹配的特征充足的定位中。例如,在对机器人移动的导航期间,利用上述方式获取相对位置及姿态变化能够快速确定机器人当前的移动路线是否发生偏移,并基于判断结果来进行后续导航调整。
为了防止摄像装置11的误差在本方案中的累积,在一种实施方式中,所述处理装置13还结合移动传感装置所提供的移动信息以确定机器人的位置及姿态。所述处理装置13包括:第一定位模块和第一定位补偿模块。其中,所述第一定位模块和第一定位补偿模块可属于前述定位模块中的程序模块。所述第一定位模块用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态。所述第一定位补偿模块用于基于所获取的移动信息补偿所确定的位置及姿态中的误差。
例如,所述第一定位模块自t1时刻至t2时刻获取两图像帧,同时还获取了移动信息,第一定位模块依据前述特征识别和匹配方式得到两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定机器人的第一位置及姿态。所述第一定位补偿模块根据所获取的位移信息和角度信息,确定机器人沿所述角度信息所指示的偏转方向及偏转角度移动了由所述位移信息所提供的距离,如此得到机器人的第二位置及姿态。
受两种计算方式和硬件设备的误差影响，所得到的第一位置及姿态和第二位置及姿态之间必然存在误差。为了减少所述误差，所述第一定位补偿模块还基于所述第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。在此，所述第一定位补偿模块可基于第一和第二位置及姿态各自所对应的位移信息及角度信息进行基于权重的均值处理，从而得到补偿了误差后的位置及姿态。例如，第一定位补偿模块取第一位置及姿态中的位移信息和第二位置及姿态中的位移信息进行加权平均处理，得到补偿后的位置及姿态中的位移信息。第一定位补偿模块取第一位置及姿态中的角度变化信息和第二位置及姿态中的角度变化信息进行加权平均处理，得到补偿后的位置及姿态中的角度变化信息。
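上述基于权重的均值处理可示意如下（Python草图；位姿以位移dx、dy与转角dtheta表示，权重取值仅为举例，实际可依据两类信息的可信度设定；角度采用单位向量加权以避免在±180°附近直接求均值造成的错误）：

import math

def fuse_pose(vis_pose, odo_pose, w_vis=0.7, w_odo=0.3):
    # vis_pose: 第一位置及姿态 (dx, dy, dtheta_deg)；odo_pose: 第二位置及姿态 (dx, dy, dtheta_deg)
    dx = w_vis * vis_pose[0] + w_odo * odo_pose[0]
    dy = w_vis * vis_pose[1] + w_odo * odo_pose[1]
    # 角度用单位向量加权后再取方位角
    a1, a2 = math.radians(vis_pose[2]), math.radians(odo_pose[2])
    s = w_vis * math.sin(a1) + w_odo * math.sin(a2)
    c = w_vis * math.cos(a1) + w_odo * math.cos(a2)
    return dx, dy, math.degrees(math.atan2(s, c))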
在另一种实施方式中,所述处理装置13还可以结合基于所匹配的特征而构建的地标信息来补偿仅基于前后两时刻的图像帧中相匹配特征的位置而确定的位置及姿态中的误差。对应 地,所述存储装置12中存储有所述地标信息。其中所述地标信息包括但不限于:历次匹配的特征、历次拍摄到所述特征时在物理空间的地图数据、历次拍摄到所述特征时在相应图像帧中的位置、拍摄相应特征时的位置及姿态等属性信息。所述地标信息可与地图数据一并保存在所述存储装置12。
所述处理装置13包括:第二定位模块和第二定位补偿模块。所述第二定位模块用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态。所述第二定位补偿模块用于基于所存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。其中,所述第二定位模块和第二定位补偿模块可属于前述定位模块中的程序模块。
例如,所述第二定位模块依据前述特征识别和匹配方式得到前后两时刻所获取的两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定从前一时刻t1至当前时刻t2机器人的第一位置及姿态。第二定位补偿模块分别将该两幅图像帧中相匹配的特征与预存的地标信息中的特征进行单独匹配,并利用各自所匹配的特征所对应的地标信息中的其他属性信息来确定机器人在每一拍摄时刻的位置及姿态,进而得到从前一时刻t1至当前时刻t2机器人的第二位置及姿态。接着,第二定位补偿模块还根据第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。例如,第二定位补偿模块取第一位置及姿态中的位移信息和第二位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。第二定位补偿模块还取第一位置及姿态中的角度变化信息和第二位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
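下面给出依据预存地标信息估计第二位置及姿态的一种简化Python草图（地标信息的字段名为说明所假设，且此处仅以命中地标所记录位姿的均值作示意，实际实现可结合特征在图像帧中的位置做更精细的解算）：

def pose_from_landmarks(matched_feats, landmark_db):
    # matched_feats: 当前两帧中相匹配的特征描述列表
    # landmark_db: {特征描述: {"pose": (x, y, theta)}}，字段名为说明所假设
    hits = [landmark_db[f]["pose"] for f in matched_feats if f in landmark_db]
    if not hits:
        return None        # 无可用地标时，该路补偿不参与后续的加权处理
    n = len(hits)
    # 简化处理：以命中地标所记录位姿的均值作为第二位置及姿态（角度的简单平均仅作示意）
    return tuple(sum(p[i] for p in hits) / n for i in range(3))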
需要说明的是，所述处理装置13还可以结合上述任一种或多种误差补偿方式进行误差补偿。或者，所述处理装置13在上述任一种或多种误差补偿方式基础上所作的改进及拓展，均应视为基于本申请的定位技术而产生的示例。
另外,被收录为地标信息的特征通常为固定不变的,然而在实际应用中,被收录为地标信息的特征并非一定如此。比如,被收录为地标信息的特征为灯的轮廓特征,当灯被更换后其相应的特征消失。当机器人需要借助该特征进行定位时,将无法找到用于补偿误差的特征。为此,所述处理装置13还包括更新模块,用于基于相匹配的特征更新所存储的地标信息。
在此,所述更新模块可获取包括:相匹配的特征,相匹配特征在至少一个图像帧中的位置,如前述定位模块、第一定位补偿模块、或第二定位补偿模块等所确定的位置及姿态等信息。
所述更新模块可通过比较存储装置12中保存的各地标信息与所获取的信息来确定是否更新所保存的地标信息。例如，当更新模块基于相似或相同的位置及姿态查找到存储装置12中未曾存储的特征时，将最新的特征对应补充保存到相应地标信息中。又如，当更新模块基于相似或相同的位置及姿态查找到存储装置12中已存储的但无法与新匹配的特征相匹配的特征时，删除相应地标信息中所保存的多余特征。
所述更新模块还可以在当前匹配的特征数量高于预设门限时添加新的地标信息。其中所述门限可以是固定值、或基于地图中所标记位置处对应的特征数量而设定。例如，当更新模块基于相似或相同的位置及姿态查找到新匹配的特征数量多于相应位置处存储装置中所保存的特征数量时，可将新的特征添加到已构建的地标信息中。
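上述地标信息的更新逻辑可用如下Python草图示意（entry的字段名与门限取法均为说明所假设）：

def update_landmark(entry, new_feats, threshold=None):
    # entry["features"]: 某一位置及姿态下已保存的特征集合；new_feats: 当前匹配到的特征
    stored = set(entry["features"])
    fresh = set(new_feats)
    to_add = fresh - stored        # 未曾存储的特征：补充保存
    to_del = stored - fresh        # 已存储但无法与新匹配特征相匹配的特征：作为多余特征删除
    entry["features"] = (stored - to_del) | to_add
    if threshold is None:
        threshold = len(stored)    # 门限既可为固定值，也可基于该位置原有特征数量设定
    add_new_landmark = len(fresh) > threshold
    return entry, add_new_landmark

其中返回的add_new_landmark仅提示是否需要添加新的地标信息，具体的添加方式及其与地图数据的关联可按实际需求设计。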
需要说明的是,本领域技术人员应该理解,上述基于位置而调整地标信息中的特征的方式仅为举例,而非对本申请的限制。事实上,更新模块也可以基于特征来调整地图中的位置等。
请参阅图3,其显示为本申请又一种定位系统的结构示意图。所述定位系统可配置于扫地机器人中。其中,所述定位系统2包括:移动传感装置24、摄像装置21、处理装置23和存储装置22。
在此,所述移动传感装置24包括但不限于:位移传感器、陀螺仪、速度传感器、测距传感器、悬崖传感器等。在机器人移动期间,移动传感装置24不断侦测移动信息并提供给处理装置。所述位移传感器、陀螺仪、速度传感器等可被集成在一个或多个芯片中。所述测距传感器和悬崖传感器可设置在机器人的体侧。例如,扫地机器人中的测距传感器被设置在壳体的边缘;扫地机器人中的悬崖传感器被设置在机器人底部。根据机器人所布置的传感器的类型和数量,处理装置所能获取的移动信息包括但不限于:位移信息、角度信息、与障碍物之间的距离信息、速度信息、行进方向信息等。
所述存储装置22包括但不限于高速随机存取存储器、非易失性存储器。例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。在某些实施例中,存储装置22还可以包括远离一个或多个处理器的存储器,例如,经由RF电路或外部端口以及通信网络(未示出)访问的网络附加存储器,其中所述通信网络可以是因特网、一个或多个内部网、局域网(LAN)、广域网(WLAN)、存储局域网(SAN)等,或其适当组合。存储器控制器可控制机器人的诸如CPU和外设接口之类的其他组件对存储装置的访问。
所述摄像装置21包括但不限于:照相机、视频摄像机、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。所述摄像装置21的供电系统可受机器人的供电系统控制,当机器人上电移动期间,所述摄像装置即开始摄取图像帧,并提供给处理装置。例如,扫地机器人中的摄像装置将所摄取的室内图像帧以预设视频格式缓存在存储装置中,并由处理装置获取。
所述摄像装置21用于在机器人移动期间摄取图像帧。在此,所述摄像装置21可设置于 机器人的顶部。例如,扫地机器人的摄像装置设置于其顶盖的中部、或边缘上。摄像装置的视野光学轴相对于垂线为±30°或水平线为60-120°。例如,扫地机器人的摄像装置的光学轴相对于垂线的夹角为-30°、-29°、-28°、-27°……-1°、0°、1°、2°、…、29°、或30°。又如,扫地机器人的摄像装置的光学轴相对于水平线的夹角为60°、61°、62°……119°、120°。需要说明的是,本领域技术人员应该理解,上述光学轴与垂线或水平线的夹角仅为举例,而非限制其夹角精度为1°的范围内,根据实际机器人的设计需求,所述夹角的精度可更高,如达到0.1°、0.01°以上等,在此不做无穷尽的举例。
为了提高移动机器人的定位精度,减少传感器的误差累积,在机器人移动期间,所述处理装置23自所述摄像装置21获取上一时刻和当前时刻的两幅图像帧,并依据两幅所述图像帧中相匹配特征的位置和在该两时刻期间自移动传感装置所获取的移动信息,构建图像坐标系与物理空间坐标系对应关系并保存在存储装置中。
在此，所述处理装置23包括一个或多个处理器。处理装置23可操作地与存储装置22中的易失性存储器和/或非易失性存储器耦接。处理装置23可执行在存储器和/或非易失性存储设备中存储的指令以在机器人中执行操作，诸如提取图像帧中的特征以及基于特征在地图中进行定位等。如此，处理器可包括一个或多个通用微处理器、一个或多个专用处理器（ASIC）、一个或多个数字信号处理器（DSP）、一个或多个现场可编程逻辑阵列（FPGA）、或它们的任何组合。所述处理装置还与I/O端口和输入结构可操作地耦接，该I/O端口可使得机器人能够与各种其他电子设备进行交互，该输入结构可使得用户能够与计算设备进行交互。因此，输入结构可包括按钮、键盘、鼠标、触控板等。所述其他电子设备可以是所述机器人中移动装置中的移动电机，或机器人中专用于控制移动装置和清扫装置的从处理器，如微控制单元。
在一种示例中,所述处理装置23通过数据线分别连接存储装置22、摄像装置21和移动传感装置24。所述处理装置23通过数据读写技术与存储装置进行交互,所述处理装置23通过接口协议分别与摄像装置21和移动传感装置24进行交互。其中,所述数据读写技术包括但不限于:高速/低速数据接口协议、数据库读写操作等。所述接口协议包括但不限于:HDMI接口协议、串行接口协议等。
为了构建所述对应关系,在一些实施方式中,所述处理装置23中的初始化模块基于两幅图像帧中相匹配特征的位置和自所述上一时刻至所述当前时刻所获取的移动信息,构建所述对应关系。在此,所述初始化模块可以是一种程序模块,其程序部分存储在存储装置中,并经由处理装置的调用而被执行。当所述存储装置中未存储所述对应关系时,所述处理装置调用初始化模块以构建所述对应关系。
在此,初始化模块在机器人移动期间获取移动传感装置所提供的移动信息以及获取摄像装置所提供的各图像帧。为了减少移动传感装置的累积误差,所述初始化模块可在机器人移动的一小段时间内获取所述移动信息和至少两幅图像帧。在此,所述处理装置可按照预设的时间间隔或图像帧数量间隔获取上一时刻t1和当前时刻t2的两幅图像帧。其中,根据所述定位系统所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像帧数量间隔可在0帧至几十帧之间选择。例如,所述初始化模块在监测到机器人处于直线移动时,获取所述移动信息和至少两幅图像帧。又如,所述初始化模块在监测到机器人处于转弯移动时,获取所述移动信息和至少两幅图像帧。
接着,初始化模块对各图像帧中的特征进行识别和匹配并得到相匹配特征在各图像帧中的图像位置。其中特征包括但不限于角点特征、边缘特征、直线特征、曲线特征等。例如,所述初始化模块可依据所述处理装置中的跟踪模块来获取相匹配特征的图像位置。所述跟踪模块用于跟踪两幅图像帧中包含相同特征的位置。
所述初始化模块再根据所述图像位置和移动信息所提供的物理空间位置来构建所述对应关系。在此,所述初始化模块可通过构建物理空间坐标系和图像坐标系的特征坐标参数来建立所述对应关系。例如,所述初始化模块可依据所拍摄上一时刻图像帧所在物理空间位置为物理空间坐标系的坐标原点,并将该坐标原点与图像帧中相匹配的特征在图像坐标系中的位置进行对应,从而构建两个坐标系的对应关系。
需要说明的是,所述初始化模块的工作过程可以基于用户的指令来执行,或对用户透明。例如,所述初始化模块的执行过程是基于所述存储装置22中未存储所述对应关系、或所述对应关系需要被更新时而启动的。在此不做限制。
所述对应关系可由对应算法的程序、数据库等方式保存在所述存储装置22中。为此,存储在存储器中的软件组件包括操作系统、通信模块(或指令集)、接触/运动模块(或指令集)、图形模块(或指令集)、以及应用(或指令集)。此外,存储装置22还保存有包含摄像装置21所拍摄的图像帧、处理装置23在进行定位运算时所得到的位置及姿态在内的临时数据或持久数据。
在构建了所述对应关系后,所述处理装置23还利用所述对应关系确定机器人的位置及姿态。在此,所述处理装置23可获取所述摄像装置21所摄取的图像帧,并从所述图像帧中识别特征,借助所述对应关系确定在图像帧中特征的位置对应到物理空间中的位置,利用多帧图像的累积,能够确定机器人的位置及姿态。
在一种实施方式中,所述处理装置23获取当前时刻图像帧和上一时刻图像帧中相匹配的特征,并依据所述对应关系和所述特征确定机器人的位置及姿态。
在此,所述处理装置23可按照预设的时间间隔或图像帧数量间隔获取上一时刻t1和当前时刻t2的两幅图像帧,识别并匹配两幅图像帧中的特征。其中,根据所述定位系统所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像帧数量间隔可在0帧至几十帧之间选择。所述特征包括但不限于:形状特征、和灰度特征等。所述形状特征包括但不限于:角点特征、直线特征、边缘特征、曲线特征等。所述灰度色特征包括但不限于:灰度跳变特征、高于或低于灰度阈值的灰度值、图像帧中包含预设灰度范围的区域尺寸等。
为了能够准确定位,所匹配特征的数量通常为多个,例如在10个以上。为此,所述处理装置23根据所识别的特征在各自图像帧中位置,从所识别出的特征中寻找能够匹配的特征。例如,如图2所示,所述处理装置23在识别出各图像帧中的特征后,确定图像帧P1中包含特征a1和a2,图像帧P2中包含特征b1、b2和b3,且特征a1与b1和b2均属于同一特征,特征a2与b3属于同一特征,所述处理装置23可先确定在图像帧P1中的特征a1位于特征a2的左侧且间距为d1像素点;同时还确定在图像帧P2中的特征b1位于特征b3的左侧且间距为d1’像素点,以及特征b2位于特征b3右侧且间距为d2’像素点。处理装置23根据特征b1与b3的位置关系、特征b2与b3的位置关系分别与特征a1与a2的位置关系,以及特征b1与b3的像素间距、特征b2与b3的像素间距分别与特征a1与a2的像素间距进行匹配,从而得到图像帧P1中特征a1与图像帧P2中特征b1相匹配,特征a2与特征b3相匹配。以此类推,处理装置23将所匹配的各特征,以便于依据各所述特征所对应的图像像素的位置变化来定位机器人的位置及姿态。其中,所述机器人的位置可依据在二维平面内的位移变化而得到,所述姿态可依据在二维平面内的角度变化而得到。
在此,处理装置23可以根据所述对应关系,确定两幅图像帧中多个特征的图像位置偏移信息、或确定多个特征在物理空间中的物理位置偏移信息,并综合所得到的任一种位置偏移信息来计算机器人自t1时刻至t2时刻的相对位置及姿态。例如,通过坐标变换,所述处理装置23得到机器人从摄取图像帧P1时刻t1至摄取图像帧P2时刻t2的位置和姿态为:在地面上移动了m长度以及向左旋转了n度角。以扫地机器人为例,当扫地机器人已建立地图时,依据所述处理装置23得到的位置及姿态可帮助机器人确定是否在导航的路线上。当扫地机器人未建立地图时,依据所述处理装置23得到的位置及姿态可帮助机器人确定相对位移和相对转角,并借此数据进行地图绘制。
在某些实施方式中,所述处理装置23包括:跟踪模块和定位模块。其中,所述跟踪模块和定位模块可共用处理装置23中的处理器等硬件电路,并基于程序接口实现数据交互和指令调用。
其中,所述跟踪模块与所述摄像装置21相连,用于跟踪两幅图像帧中包含相同特征的位置。
在一些实施方式中,所述跟踪模块可利用视觉跟踪技术对上一时刻的图像帧中的特征在当前时刻的图像帧中进行追踪以得相匹配的特征。例如,以上一时刻图像帧P1中识别出的特征ci在该图像帧P1中的位置为基准,所述跟踪模块在当前时刻图像帧P2中对应位置附近的区域中是否包含相应特征ci进行判断,若找到相应特征ci,则获取该特征ci在图像帧P2中的位置,若未找到相应特征ci,则认定该特征ci不在图像帧P2中。如此当收集到所跟踪的多个特征、和各特征在各自图像帧中的位置时,将各所述特征及其各位置提供给定位模块。
在又一些实施方式中,所述跟踪模块还利用机器人中的移动传感装置24所提供的移动信息来跟踪两幅图像帧中包含相同特征的位置。例如,所述跟踪模块的硬件电路通过数据线连接移动传感装置24,并从所述移动传感装置24获取与两幅图像帧P1和P2的获取时刻t1和t2相对应的移动信息,利用所述对应关系和上一时刻图像帧P1中所识别的各特征ci及其在图像帧P1中的位置,估计经过所述移动信息所描述的位置变化对应特征ci在当前时刻图像帧P2中的候选位置,并在所估计的候选位置附近识别对应特征ci,若找到相应特征ci,则获取该特征ci在图像帧P2中的位置,若未找到相应特征ci,则认定该特征ci不在图像帧P2中。如此当收集到所跟踪的特征(即相匹配的特征)及其各位置时,将各所述特征及其位置提供给定位模块。
所述定位模块用于依据所述对应关系及所述位置确定机器人自所述上一时刻至所述当前时刻的位置偏移信息以得到所述机器人的位置及姿态。
在此,所述定位模块可由多个程序模块组合而成,也可为单一的程序模块。例如,为了快速得到机器人的相对位置及姿态变化,则可仅由所述定位模块依据所述对应关系对同一特征在两幅图像帧中的位置进行坐标变换,即能得到自上一时刻至当前时刻的位置偏移信息,该位置偏移信息反映了所述机器人自上一时刻至当前时刻的相对位置及姿态变化。该种定位方式可用于在相匹配的特征充足的定位中。例如,在对机器人移动的导航期间,利用上述方式获取相对位置及姿态变化能够快速确定机器人当前的移动路线是否发生偏移,并基于判断结果来进行后续导航调整。
为了防止摄像装置21的误差在本方案中的累积，在一种实施方式中，所述处理装置23还结合移动传感装置24所提供的移动信息以确定机器人的位置及姿态。所述处理装置23包括：第一定位模块和第一定位补偿模块。其中，所述第一定位模块和第一定位补偿模块可属于前述定位模块中的程序模块。所述第一定位模块用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态。所述第一定位补偿模块用于基于所获取的移动信息补偿所确定的位置及姿态中的误差。
例如,所述第一定位模块自t1时刻至t2时刻获取两图像帧,同时还获取了移动信息,第一定位模块依据前述特征识别和匹配方式得到两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定机器人的第一位置及姿态。所述第一定位补偿模块根据所获取的位移信息和角度信息,确定机器人沿所述角度信息所指示的偏转方向及偏转角度移动了由所述位移信息所提供的距离,如此得到机器人的第二位置及姿态。
受两种计算方式和硬件设备的误差影响,所得到的第一位置及姿态和第二位置及姿态之间必然存在误差。为了减少所述误差,所述第一定位补偿模块还基于所述第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。在此,所述第一定位补偿模块可基于第一和第二位置及姿态各自所对应的位移信息及角度信息进行基于权重的均值处理,从而得到补偿了误差后的位置及姿态。例如,第一定位补偿模块取第一候选位置及姿态中的位移信息和第二候选位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。第一定位补偿模块取第一候选位置及姿态中的角度变化信息和第二候选位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
在另一种实施方式中,所述处理装置23还可以结合基于所匹配的特征而构建的地标信息来补偿仅基于前后两时刻的图像帧中相匹配特征的位置而确定的位置及姿态中的误差。对应地,所述存储装置22中存储有所述地标信息。其中所述地标信息包括但不限于:历次匹配的特征、历次拍摄到所述特征时在物理空间的地图数据、历次拍摄到所述特征时在相应图像帧中的位置、拍摄相应特征时的位置及姿态等属性信息。所述地标信息可与地图数据一并保存在所述存储装置22。
所述处理装置23包括:第二定位模块和第二定位补偿模块。所述第二定位模块用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态。所述第二定位补偿模块用于基于所存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。其中,所述第二定位模块和第二定位补偿模块可属于前述定位模块中的程序模块。
例如,所述第二定位模块依据前述特征识别和匹配方式得到前后两时刻所获取的两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定从前一时刻t1至当前时刻t2机器人的第一位置及姿态。第二定位补偿模块分别将该两幅图像帧中相匹配的特征与预存的地标信息中的特征进行单独匹配,并利用各自所匹配的特征所对应的地标信息中的其他属性信息来确定机器人在每一拍摄时刻的位置及姿态,进而得到从前一时刻t1至当前时刻t2机器人的第二位置及姿态。接着,第二定位补偿模块还根据第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。例如,第二定位补偿模块取第 一位置及姿态中的位移信息和第二位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。第二定位补偿模块还取第一位置及姿态中的角度变化信息和第二位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
需要说明的是，所述处理装置23还可以结合上述任一种或多种误差补偿方式进行误差补偿。或者，所述处理装置23在上述任一种或多种误差补偿方式基础上所作的改进及拓展，均应视为基于本申请的定位技术而产生的示例。
另外,被收录为地标信息的特征通常为固定不变的,然而在实际应用中,被收录为地标信息的特征并非一定如此。比如,被收录为地标信息的特征为灯的轮廓特征,当灯被更换后其相应的特征消失。当机器人需要借助该特征进行定位时,将无法找到用于补偿误差的特征。为此,所述处理装置23还包括更新模块,用于基于相匹配的特征更新所存储的地标信息。
在此,所述更新模块可获取包括:相匹配的特征,相匹配特征在至少一个图像帧中的位置,如前述定位模块、第一定位补偿模块、或第二定位补偿模块等所确定的位置及姿态等信息。
所述更新模块可通过比较存储装置22中保存的各地标信息与所获取的信息来确定是否更新所保存的地标信息。例如，当更新模块基于相似或相同的位置及姿态查找到存储装置22中未曾存储的特征时，将最新的特征对应补充保存到相应地标信息中。又如，当更新模块基于相似或相同的位置及姿态查找到存储装置22中已存储的但无法与新匹配的特征相匹配的特征时，删除相应地标信息中所保存的多余特征。
所述更新模块还可以基于当前匹配的特征数量高于预设门限时添加新的地标信息。其中所述门限可以是固定值、或基于地图中所标记位置处对应的特征数量而设定。例如,当更新模块基于相似或相同的位置及姿态查找到新匹配的特征数量多于相应位置处存储装置22中所保存的特征数量时,可将新的特征添加到已构建的地标信息中。
需要说明的是,本领域技术人员应该理解,上述基于位置而调整地标信息中的特征的方式仅为举例,而非对本申请的限制。事实上,更新模块也可以基于特征来调整地图中的位置等。
请参阅图4,其显示为可移动的机器人在一种实施方式中的结构示意图。所述机器人包括:定位系统31、移动装置33、和控制装置32。所述机器人包括但不限于:扫地机器人等。
所述移动装置33用于带动机器人在地面移动。以扫地机器人为例,所述移动装置33包括但不限于:轮组、与轮组相连的减震组件、驱动所述滚轮的驱动电机等。
所述控制装置32可以包含专用于控制移动装置33的一个或多个处理器(CPU)或微处理单元(MCU)。例如,所述控制装置32作为从处理设备,所述定位系统31中的处理装置 313作为主设备,控制装置32基于定位系统31的定位进行移动控制。或者所述控制装置32与所述定位系统31中的处理器相共用,该处理器通过如总线等方式连接至移动装置33中的驱动电机。控制装置32通过程序接口接收定位系统31所提供的数据。所述控制装置32用于基于所述定位系统31所提供的位置及姿态控制所述移动装置33进行移动操作。
在此,所述控制装置32控制移动装置33进行移动操作的方式包括但不限于:基于当前定位的位置及姿态确定导航路线并按照所确定的导航路线控制移动装置行进;基于前后两次所定位的位置及姿态确定绘制地图数据和地标信息,同时按照随机路线或基于已定位的各位置及姿态估计后续路线并按照所确定的路线控制移动装置33行进等。其中,所述移动操作包括但不限于:移动方向、移动速度等。例如,所述移动装置33包含两个驱动电机,每个驱动电机对应驱动一组滚轮,所述移动操作包含分别以不同速度和转角驱动该两个驱动电机,以使两组滚轮带动机器人向某一方向转动。
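以两组滚轮的差速驱动为例，由期望的前进速度与转向角速度计算两个驱动电机目标速度的一种常见做法可示意如下（Python草图，轮距数值仅为假设）：

def wheel_speeds(v_linear, w_angular, wheel_base=0.25):
    # v_linear: 期望前进线速度（米/秒）；w_angular: 期望转向角速度（弧度/秒）
    # wheel_base: 两组滚轮之间的轮距（米），数值仅为假设
    v_left = v_linear - w_angular * wheel_base / 2.0
    v_right = v_linear + w_angular * wheel_base / 2.0
    return v_left, v_right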
在一种实施方式中，所述定位系统可如图1所示并结合前述基于该图1所对应的说明进行定位处理，在此不再详述。其中，图4中所示的摄像装置311可对应于图1中所述的摄像装置11；图4中所示的存储装置312可对应于图1中所述的存储装置12；图4中所示的处理装置313可对应于图1中所述的处理装置13。以图4中所示的定位系统包含存储装置312、摄像装置311和处理装置313，所述处理装置313连接控制装置32，控制装置32连接移动装置33为例，对所述机器人基于所述定位系统31的位置及姿态定位而进行移动的工作过程予以描述：
所述存储装置312存储有图像坐标系与物理空间坐标系的对应关系。在机器人移动期间,摄像装置311实时摄取图像帧并暂存于存储装置312中。处理装置313按照预设的时间间隔或图像帧数量间隔获取上一时刻t1和当前时刻t2的两幅图像帧P1和P2,并利用视觉跟踪算法得到两幅图像帧中相匹配的特征的位置。基于所得到的各图像帧中的特征位置及所述对应关系,处理装置313进行特征位置在物理空间中的坐标转换,由此得到机器人自上一时刻t1至当前时刻t2的相对位置及姿态。处理装置313通过对所得到的位置及姿态进行误差补偿,可得到机器人的相对位置及姿态;同时,处理装置313还可以累积所得到的相对位置及姿态以确定机器人在地图数据中定位的位置及姿态。处理装置313可将所得到的各位置及姿态提供给控制装置32。对于扫地机器人来说,控制装置32可基于所接收的位置及姿态计算用于控制机器人沿预设路线行进时所需的控制数据,如移动速度、转向及转角等,并按照所述控制数据控制移动装置33中的驱动电机以便轮组移动。
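其中“累积所得到的相对位置及姿态以确定在地图数据中的位置及姿态”的过程可用如下Python草图示意（假设机器人在二维平面内移动，且本次位移近似沿更新后的航向方向，仅为简化说明）：

import math

def accumulate_pose(global_pose, rel_move, rel_turn_deg):
    # global_pose: 机器人在地图坐标系中的 (x, y, 航向角theta_deg)
    # rel_move / rel_turn_deg: 由前后两帧解算并经误差补偿后的相对位移与相对转角
    x, y, theta = global_pose
    theta = (theta + rel_turn_deg + 180.0) % 360.0 - 180.0    # 航向角归一化到[-180, 180)
    # 简化假设：本次位移近似沿更新后的航向方向
    x += rel_move * math.cos(math.radians(theta))
    y += rel_move * math.sin(math.radians(theta))
    return x, y, theta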
在另一些实施方式中，请参阅图5，其显示为机器人在另一实施方式中的结构示意图。所述定位系统41可如图3所示并结合前述基于该图3所对应的说明进行定位处理，在此不再详述。其中，图5中所示的摄像装置411可对应于图3中所述的摄像装置21；图5中所示的存储装置412可对应于图3中所述的存储装置22；图5中所示的处理装置413可对应于图3中所述的处理装置23；图5中所示的移动传感装置414可对应于图3中所述的移动传感装置24。
以图5所述定位系统包含存储装置412、移动传感装置414、摄像装置411和处理装置413,所述处理装置413连接控制装置43,控制装置43连接移动装置42为例,对所述机器人基于所述定位系统的位置及姿态定位而进行移动的工作过程予以描述:
在机器人移动期间,移动传感装置414实时获取机器人的移动信息并暂存于存储装置412中,以及摄像装置411实时摄取图像帧并暂存于存储装置412中。处理装置413按照预设的时间间隔或图像帧数量间隔获取上一时刻和当前时刻的两幅图像帧P1和P2,以及该两时刻期间的移动信息。处理装置413可通过跟踪两幅图像帧P1和P2中的特征来得到所述特征的图像位置。所述处理装置413再根据所述图像位置和移动信息所提供的物理空间位置来构建图像坐标系与物理空间坐标系的对应关系。接着,处理装置413可利用视觉跟踪算法匹配后续各图像帧Pi中的特征及其位置。基于所得到的各图像帧中特征的位置及所述对应关系,处理装置413进行特征位置在物理空间中的坐标转换,由此得到机器人自两帧图像的获取时刻间隔期间的相对位置及姿态。处理装置413通过对所得到的位置及姿态进行误差补偿,可得到机器人的相对位置及姿态;同时,处理装置413还可以累积所得到的相对位置及姿态以确定机器人在地图数据中定位的位置及姿态。处理装置413可将所得到的各位置及姿态提供给控制装置43。对于扫地机器人来说,控制装置43可基于所接收的位置及姿态计算用于控制机器人沿预设路线行进时所需的控制数据,如移动速度、转向及转角等,并按照所述控制数据控制移动装置42中的驱动电机以便轮组移动。
请参考图6,其显示为本申请机器人的定位方法在一实施方式的流程图。所述定位方法主要由定位系统来执行。所述定位系统可配置于扫地机器人中。所述定位系统可如图1及其描述所示,或其他能够执行所述定位方法的定位系统。
在步骤S110中,获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置。
在此,可利用处理装置按照预设的时间间隔或图像帧数量间隔获取上一时刻t1和当前时刻t2的两幅图像帧,识别并匹配两幅图像帧中的特征。其中,根据定位系统所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像帧数量间隔可在0帧至几十帧之间选择。所述特征包括但不限于:形状特征、和灰度特征等。所述形状特征包括但不限于:角点特征、直线特征、边缘特征、曲线特征等。所述灰度色特征包括但不限于:灰度跳变特征、高于或低于灰度阈值的灰度值、图像帧中包含预设灰度范围的区域尺寸等。
为了能够准确定位,所匹配特征的数量通常为多个,例如在10个以上。为此,所述处理装置根据所识别的特征在各自图像帧中位置,从所识别出的特征中寻找能够匹配的特征。例如,如图2所示,所述处理装置在识别出各图像帧中的特征后,确定图像帧P1中包含特征a1和a2,图像帧P2中包含特征b1、b2和b3,且特征a1与b1和b2均属于同一特征,特征a2与b3属于同一特征,所述处理装置可先确定在图像帧P1中的特征a1位于特征a2的左侧且间距为d1像素点;同时还确定在图像帧P2中的特征b1位于特征b3的左侧且间距为d1’像素点,以及特征b2位于特征b3右侧且间距为d2’像素点。处理装置根据特征b1与b3的位置关系、特征b2与b3的位置关系分别与特征a1与a2的位置关系,以及特征b1与b3的像素间距、特征b2与b3的像素间距分别与特征a1与a2的像素间距进行匹配,从而得到图像帧P1中特征a1与图像帧P2中特征b1相匹配,特征a2与特征b3相匹配。以此类推,处理装置将所匹配的各特征,以便于依据各所述特征所对应的图像像素的位置变化来定位机器人的位置及姿态。其中,所述机器人的位置可依据在二维平面内的位移变化而得到,所述姿态可依据在二维平面内的角度变化而得到。
在某些实施方式中,所述步骤S110中确定相匹配特征位置的方式可通过跟踪两幅图像帧中包含相同特征的位置的步骤来实现。
在此,可利用处理装置中的跟踪模块来执行。在一些实施方式中,所述跟踪模块利用视觉跟踪技术对上一时刻的图像帧中的特征在当前时刻的图像帧中进行追踪以得相匹配的特征。例如,以上一时刻图像帧P1中识别出的特征ci在该图像帧P1中的位置为基准,所述跟踪模块在当前时刻图像帧P2中对应位置附近的区域中是否包含相应特征ci进行判断,若找到相应特征ci,则获取该特征ci在图像帧P2中的位置,若未找到相应特征ci,则认定该特征ci不在图像帧P2中。如此当收集到所跟踪的多个特征、和各特征在各自图像帧中的位置时,执行步骤S120。
在又一些实施方式中,所述跟踪模块还利用机器人中的移动传感装置所提供的移动信息来跟踪两幅图像帧中包含相同特征的位置。例如,所述跟踪模块的硬件电路通过数据线连接移动传感装置,并从所述移动传感装置获取与两幅图像帧P1和P2的获取时刻t1和t2相对应的移动信息,利用所述对应关系和上一时刻图像帧P1中所识别的各特征ci及其在图像帧P1中的位置,估计经过所述移动信息所描述的位置变化对应特征ci在当前时刻图像帧P2中的候选位置,并在所估计的候选位置附近识别对应特征ci,若找到相应特征ci,则获取该特征ci在图像帧P2中的位置,若未找到相应特征ci,则认定该特征ci不在图像帧P2中。如此当收集到所跟踪的特征(即相匹配的特征)及其各位置时,执行步骤S120。
在步骤S120中,依据所述对应关系和所述位置确定机器人的位置及姿态。其中,所述对 应关系包括:图像坐标系与物理空间坐标系的对应关系。在此,所述对应关系可在出厂前存储在机器人中。
在一些实施方式中,所述对应关系可利用在所使用的场地进行现场测试的方式得到所述对应关系并保存。为此,所述机器人还包括移动传感装置。对应的,所述定位方法在执行步骤S120之前还获取机器人的移动信息,以及基于两幅图像帧中相匹配特征的位置和自所述上一时刻至所述当前时刻所获取的移动信息,构建所述对应关系。
其中,所述移动传感装置包括但不限于:位移传感器、陀螺仪、速度传感器、测距传感器、悬崖传感器等。在机器人移动期间,移动传感装置不断侦测移动信息并提供给处理装置。所述位移传感器、陀螺仪、速度传感器等可被集成在一个或多个芯片中。所述测距传感器和悬崖传感器可设置在机器人的体侧。例如,扫地机器人中的测距传感器被设置在壳体的边缘;扫地机器人中的悬崖传感器被设置在机器人底部。根据机器人所布置的传感器的类型和数量,处理装置所能获取的移动信息包括但不限于:位移信息、角度信息、与障碍物之间的距离信息、速度信息、行进方向信息等。
在此,处理装置在机器人移动期间获取移动传感装置所提供的移动信息以及获取摄像装置所提供的各图像帧。为了减少移动传感装置的累积误差,所述处理装置可在机器人移动的一小段时间内获取所述移动信息和至少两幅图像帧。例如,所述处理装置在监测到机器人处于直线移动时,获取所述移动信息和至少两幅图像帧。又如,所述处理装置在监测到机器人处于转弯移动时,获取所述移动信息和至少两幅图像帧。
接着,处理装置对各图像帧中的特征进行识别和匹配并得到相匹配特征在各图像帧中的图像位置。其中特征包括但不限于角点特征、边缘特征、直线特征、曲线特征等。例如,所述处理装置可利用视觉跟踪技术来获取相匹配特征的图像位置。
所述处理装置再根据所述图像位置和移动信息所提供的物理空间位置来构建所述对应关系。在此,所述处理装置可通过构建物理空间坐标系和图像坐标系的特征坐标参数来建立所述对应关系。例如,所述处理装置可依据所拍摄上一时刻图像帧所在物理空间位置为物理空间坐标系的坐标原点,并将该坐标原点与图像帧中相匹配的特征在图像坐标系中的位置进行对应,从而构建两个坐标系的对应关系。
在所述对应关系确定之后,所述定位系统执行步骤S120,即依据所述对应关系及所述位置确定机器人自所述上一时刻至所述当前时刻的位置偏移信息以得到所述机器人的位置及姿态。
在此,为了快速得到机器人的相对位置及姿态变化,则可仅由所述处理装置依据所述对应关系对同一特征在两幅图像帧中的位置进行坐标变换,即能得到自上一时刻至当前时刻的 位置偏移信息,该位置偏移信息反映了所述机器人自上一时刻至当前时刻的相对位置及姿态变化。该种定位方式可用于在相匹配的特征充足的定位中。例如,在对机器人移动的导航期间,利用上述方式获取相对位置及姿态变化能够快速确定机器人当前的移动路线是否发生偏移,并基于判断结果来进行后续导航调整。
为了防止摄像装置的误差在本方案中的累积,在一种实施方式中,所述处理装置在执行步骤S120时还结合移动传感装置所提供的移动信息以确定机器人的位置及姿态。所述步骤S120包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;以及基于所获取的移动信息补偿所确定的位置及姿态中的误差两步骤。
例如,所述处理装置自t1时刻至t2时刻获取两图像帧,同时还获取了移动信息,处理装置依据前述特征识别和匹配方式得到两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定机器人的第一位置及姿态。所述处理装置根据所获取的位移信息和角度信息,确定机器人沿所述角度信息所指示的偏转方向及偏转角度移动了由所述位移信息所提供的距离,如此得到机器人的第二位置及姿态。
受两种计算方式和硬件设备的误差影响,所得到的第一位置及姿态和第二位置及姿态之间必然存在误差。为了减少所述误差,所述处理装置还基于所述第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。在此,所述处理装置可基于第一和第二位置及姿态各自所对应的位移信息及角度信息进行基于权重的均值处理,从而得到补偿了误差后的位置及姿态。例如,处理装置取第一候选位置及姿态中的位移信息和第二候选位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。处理装置取第一候选位置及姿态中的角度变化信息和第二候选位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
在另一种实施方式中,所述处理装置可基于所匹配的特征而构建的地标信息来补偿仅基于前后两时刻的图像帧中相匹配特征的位置而确定的位置及姿态中的误差。对应地,所述定位系统中存储有所述地标信息。其中所述地标信息包括但不限于:历次匹配的特征、历次拍摄到所述特征时在物理空间的地图数据、历次拍摄到所述特征时在相应图像帧中的位置、拍摄相应特征时的位置及姿态等属性信息。所述地标信息可与地图数据一并保存。
所述步骤S120包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;以及基于所存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差的两步骤。
例如,所述处理装置依据前述特征识别和匹配方式得到前后两时刻所获取的两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定从前一时刻t1至当前时刻t2机器人的第一位置及姿态。处理装置分别将该两幅图像帧中相匹配的特征与 预存的地标信息中的特征进行单独匹配,并利用各自所匹配的特征所对应的地标信息中的其他属性信息来确定机器人在每一拍摄时刻的位置及姿态,进而得到从前一时刻t1至当前时刻t2机器人的第二位置及姿态。接着,处理装置还根据第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。例如,处理装置取第一位置及姿态中的位移信息和第二位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。处理装置还取第一位置及姿态中的角度变化信息和第二位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
需要说明的是，所述处理装置还可以结合上述任一种或多种误差补偿方式进行误差补偿。或者，所述处理装置在上述任一种或多种误差补偿方式基础上所作的改进及拓展，均应视为基于本申请的定位技术而产生的示例。
另外,被收录为地标信息的特征通常为固定不变的,然而在实际应用中,被收录为地标信息的特征并非一定如此。比如,被收录为地标信息的特征为灯的轮廓特征,当灯被更换后其相应的特征消失。当机器人需要借助该特征进行定位时,将无法找到用于补偿误差的特征。为此,所述定位方法还包括基于相匹配的特征更新所存储的地标信息的步骤。
在此,所述处理装置可获取包括:相匹配的特征,相匹配特征在至少一个图像帧中的位置,以及由步骤S120所确定的位置及姿态等信息。
所述处理装置可通过比较存储装置中保存的各地标信息与所获取的信息来确定是否更新所保存的地标信息。例如，当处理装置基于相似或相同的位置及姿态查找到存储装置中未曾存储的特征时，将最新的特征对应补充保存到相应地标信息中。又如，当处理装置基于相似或相同的位置及姿态查找到存储装置中已存储的但无法与新匹配的特征相匹配的特征时，删除相应地标信息中所保存的多余特征。
所述处理装置还可以基于当前匹配的特征数量高于预设门限时添加新的地标信息。其中所述门限可以是固定值、或基于地图中所标记位置处对应的特征数量而设定。例如,当处理装置基于相似或相同的位置及姿态查找到新匹配的特征数量多于相应位置处存储装置中所保存的特征数量时,可将新的特征添加到已构建的地标信息中。
需要说明的是,本领域技术人员应该理解,上述基于位置而调整地标信息中的特征的方式仅为举例,而非对本申请的限制。事实上,处理装置也可以基于特征来调整地图中的位置等。
请参阅图7，其显示为本申请定位方法在又一实施方式中的流程图。所述定位方法可由如图3所示的定位系统来执行，或其他能够执行以下步骤的定位系统。所述定位方法可用于扫地机器人中。
在步骤S210中,获取机器人在移动期间的移动信息和多幅图像帧。
在此,机器人的移动传感设备和摄像设备在机器人移动期间实时获取移动信息和图像帧。本步骤可利用处理装置在机器人移动的一小段时间内获取所述移动信息和至少两幅图像帧。
在步骤S220中,获取上一时刻和当前时刻的两幅图像帧,并依据两幅所述图像帧中相匹配特征的位置和在该两时刻期间所获取的移动信息,构建图像坐标系与物理空间坐标系对应关系。
在此,处理装置对各图像帧中的特征进行识别和匹配并得到相匹配特征在各图像帧中的图像位置。其中特征包括但不限于角点特征、边缘特征、直线特征、曲线特征等。例如,所述处理装置可利用视觉跟踪技术来获取相匹配特征的图像位置。
接着,再根据所述图像位置和移动信息所提供的物理空间位置来构建所述对应关系。在此,所述处理装置可通过构建物理空间坐标系和图像坐标系的特征坐标参数来建立所述对应关系。例如,所述处理装置可依据所拍摄上一时刻图像帧所在物理空间位置为物理空间坐标系的坐标原点,并将该坐标原点与图像帧中相匹配的特征在图像坐标系中的位置进行对应,从而构建两个坐标系的对应关系。
在所述对应关系确定之后,所述定位系统执行步骤S230,即利用所述对应关系确定机器人的位置及姿态。在此,所述处理装置可获取所述摄像装置所摄取的图像帧,并从所述图像帧中识别特征,借助所述对应关系确定在图像帧中特征的位置对应到物理空间中的位置,利用多帧图像的累积,能够确定机器人的位置及姿态。
在一种实施方式中,所述步骤S230包括获取当前时刻图像帧和上一时刻图像帧中相匹配的特征,并依据所述对应关系和所述特征确定机器人的位置及姿态的步骤。
在此,处理装置可按照预设的时间间隔或图像帧数量间隔获取上一时刻t1和当前时刻t2的两幅图像帧,识别并匹配两幅图像帧中的特征。其中,根据所述定位系统所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像帧数量间隔可在0帧至几十帧之间选择。所述特征包括但不限于:形状特征、和灰度特征等。所述形状特征包括但不限于:角点特征、直线特征、边缘特征、曲线特征等。所述灰度色特征包括但不限于:灰度跳变特征、高于或低于灰度阈值的灰度值、图像帧中包含预设灰度范围的区域尺寸等。
为了能够准确定位,所匹配特征的数量通常为多个,例如在10个以上。为此,所述处理装置根据所识别的特征在各自图像帧中位置,从所识别出的特征中寻找能够匹配的特征。例如,如图2所示,所述处理装置在识别出各图像帧中的特征后,确定图像帧P1中包含特征 a1和a2,图像帧P2中包含特征b1、b2和b3,且特征a1与b1和b2均属于同一特征,特征a2与b3属于同一特征,所述处理装置可先确定在图像帧P1中的特征a1位于特征a2的左侧且间距为d1像素点;同时还确定在图像帧P2中的特征b1位于特征b3的左侧且间距为d1’像素点,以及特征b2位于特征b3右侧且间距为d2’像素点。处理装置根据特征b1与b3的位置关系、特征b2与b3的位置关系分别与特征a1与a2的位置关系,以及特征b1与b3的像素间距、特征b2与b3的像素间距分别与特征a1与a2的像素间距进行匹配,从而得到图像帧P1中特征a1与图像帧P2中特征b1相匹配,特征a2与特征b3相匹配。以此类推,处理装置将所匹配的各特征,以便于依据各所述特征所对应的图像像素的位置变化来定位机器人的位置及姿态。其中,所述机器人的位置可依据在二维平面内的位移变化而得到,所述姿态可依据在二维平面内的角度变化而得到。
在此,处理装置可以根据所述对应关系,确定两幅图像帧中多个特征的图像位置偏移信息、或确定多个特征在物理空间中的物理位置偏移信息,并综合所得到的任一种位置偏移信息来计算机器人自t1时刻至t2时刻的相对位置及姿态。例如,通过坐标变换,所述处理装置得到机器人从摄取图像帧P1时刻t1至摄取图像帧P2时刻t2的位置和姿态为:在地面上移动了m长度以及向左旋转了n度角。以扫地机器人为例,当扫地机器人已建立地图时,依据所述处理装置得到的位置及姿态可帮助机器人确定是否在导航的路线上。当扫地机器人未建立地图时,依据所述处理装置得到的位置及姿态可帮助机器人确定相对位移和相对转角,并借此数据进行地图绘制。
在一些实施方式中,所述步骤S230包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;以及基于所获取的移动信息补偿所确定的位置及姿态中的误差的步骤。
例如,所述处理装置自t1时刻至t2时刻获取两图像帧,同时还获取了移动信息,处理装置依据前述特征识别和匹配方式得到两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定机器人的第一位置及姿态。所述处理装置根据所获取的位移信息和角度信息,确定机器人沿所述角度信息所指示的偏转方向及偏转角度移动了由所述位移信息所提供的距离,如此得到机器人的第二位置及姿态。
受两种计算方式和硬件设备的误差影响,所得到的第一位置及姿态和第二位置及姿态之间必然存在误差。为了减少所述误差,所述处理装置还基于所述第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。在此,所述处理装置可基于第一和第二位置及姿态各自所对应的位移信息及角度信息进行基于权重的均值处理,从而得到补偿了误差后的位置及姿态。例如,处理装置取第一候选位置及姿态中的位移信息和第二候选位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。处理装置取第一候 选位置及姿态中的角度变化信息和第二候选位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
在另一种实施方式中,所述处理装置还可以结合基于所匹配的特征而构建的地标信息来补偿仅基于前后两时刻的图像帧中相匹配特征的位置而确定的位置及姿态中的误差。对应地,所述存储装置中存储有所述地标信息。其中所述地标信息包括但不限于:历次匹配的特征、历次拍摄到所述特征时在物理空间的地图数据、历次拍摄到所述特征时在相应图像帧中的位置、拍摄相应特征时的位置及姿态等属性信息。所述地标信息可与地图数据一并保存在所述存储装置。
所述步骤S230还包括:依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;以及基于所存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差的步骤。
例如,所述处理装置依据前述特征识别和匹配方式得到前后两时刻所获取的两幅图像帧中可用于定位的多个特征及其在各自图像帧中的位置,并利用所述对应关系确定从前一时刻t1至当前时刻t2机器人的第一位置及姿态。处理装置分别将该两幅图像帧中相匹配的特征与预存的地标信息中的特征进行单独匹配,并利用各自所匹配的特征所对应的地标信息中的其他属性信息来确定机器人在每一拍摄时刻的位置及姿态,进而得到从前一时刻t1至当前时刻t2机器人的第二位置及姿态。接着,处理装置还根据第一位置及姿态和第二位置及姿态之间的误差确定所述机器人的位置及姿态。例如,处理装置取第一位置及姿态中的位移信息和第二位置及姿态中位移信息进行加权平均处理,得到补偿后的位置及姿态中位移信息。处理装置还取第一位置及姿态中的角度变化信息和第二位置及姿态中角度变化信息进行加权平均处理,得到补偿后的位置及姿态中角度变化信息。
需要说明的是，所述处理装置还可以结合上述任一种或多种误差补偿方式进行误差补偿。或者，所述处理装置在上述任一种或多种误差补偿方式基础上所作的改进及拓展，均应视为基于本申请的定位技术而产生的示例。
另外,被收录为地标信息的特征通常为固定不变的,然而在实际应用中,被收录为地标信息的特征并非一定如此。比如,被收录为地标信息的特征为灯的轮廓特征,当灯被更换后其相应的特征消失。当机器人需要借助该特征进行定位时,将无法找到用于补偿误差的特征。为此,所述定位方法还包括基于相匹配的特征更新所存储的地标信息的步骤。
在此,所述处理装置可获取包括:相匹配的特征,相匹配特征在至少一个图像帧中的位置,如前述步骤S230所确定的位置及姿态等信息。
所述处理装置可通过比较存储装置中保存的各地标信息与所获取的信息来确定是否更新所保存的地标信息。例如，当处理装置基于相似或相同的位置及姿态查找到存储装置中未曾存储的特征时，将最新的特征对应补充保存到相应地标信息中。又如，当处理装置基于相似或相同的位置及姿态查找到存储装置中已存储的但无法与新匹配的特征相匹配的特征时，删除相应地标信息中所保存的多余特征。
所述处理装置还可以基于当前匹配的特征数量高于预设门限时添加新的地标信息。其中所述门限可以是固定值、或基于地图中所标记位置处对应的特征数量而设定。例如,当处理装置基于相似或相同的位置及姿态查找到新匹配的特征数量多于相应位置处所保存的特征数量时,可将新的特征添加到已构建的地标信息中。
需要说明的是,本领域技术人员应该理解,上述基于位置而调整地标信息中的特征的方式仅为举例,而非对本申请的限制。事实上,处理装置也可以基于特征来调整地图中的位置等。
综上所述,本申请借助摄像装置自两图像帧中所匹配的特征点的位置偏移信息来确定机器人的位置及姿态,可有效减少机器人中传感器所提供移动信息的误差。另外,利用两图像帧中所匹配的特征点的位置偏移信息和传感器所提供的移动信息来初始化图像坐标系与物理空间坐标系的对应关系,既实现了利用单目摄像装置进行定位的目标,又有效解决了传感器误差累积的问题。
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。

Claims (31)

  1. 一种机器人的定位系统,其特征在于,包括:
    存储装置,存储有图像坐标系与物理空间坐标系的对应关系;
    摄像装置,用于在机器人移动期间摄取图像帧;
    处理装置,与所述摄像装置和存储装置相连,用于获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置,并依据所述对应关系和所述位置确定机器人的位置及姿态。
  2. 根据权利要求1所述的机器人的定位系统,其特征在于,所述摄像装置的视野光学轴相对于垂线为±30°或水平线为60-120°。
  3. 根据权利要求1所述的机器人的定位系统,其特征在于,所述处理装置包括跟踪模块,与所述摄像装置相连,用于跟踪两幅图像帧中包含相同特征的位置。
  4. 根据权利要求1所述的机器人的定位系统,其特征在于,还包括移动传感装置,与所述处理装置相连,用于获取机器人的移动信息。
  5. 根据权利要求4所述的机器人的定位系统,其特征在于,所述处理装置包括初始化模块,用于基于两幅图像帧中相匹配特征的位置和自所述上一时刻至所述当前时刻所获取的移动信息,构建所述对应关系。
  6. 根据权利要求4所述的机器人的定位系统,其特征在于,所述处理装置包括:
    第一定位模块,用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    第一定位补偿模块,用于基于所获取的移动信息补偿所确定的位置及姿态中的误差。
  7. 根据权利要求1所述的机器人的定位系统,其特征在于,所述存储装置还存储基于所匹配的特征而构建的地标信息。
  8. 根据权利要求7所述的机器人的定位系统,其特征在于,所述处理装置包括:
    第二定位模块,用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    第二定位补偿模块,用于基于所存储的对应相匹配特征的地标信息补偿所确定的位置 及姿态中的误差。
  9. 根据权利要求7所述的机器人的定位系统,其特征在于,所述处理装置包括更新模块,用于基于相匹配的特征更新所存储的地标信息。
  10. 一种机器人的定位系统,其特征在于,包括:
    移动传感装置,用于获取机器人的移动信息;
    摄像装置,用于在机器人移动期间摄取图像帧;
    处理装置,与所述图像摄取装置和移动传感装置相连,用于获取上一时刻和当前时刻的两幅图像帧,并依据两幅所述图像帧中相匹配特征的位置和在该两时刻期间所获取的移动信息,构建图像坐标系与物理空间坐标系对应关系,以及利用所述对应关系确定机器人的位置及姿态;
    存储装置,与所述处理装置相连,用于存储所述对应关系。
  11. 根据权利要求10所述的机器人的定位系统,其特征在于,所述摄像装置的视野光学轴相对于垂线为±30°或水平线为60-120°。
  12. 根据权利要求10所述的机器人的定位系统,其特征在于,所述处理装置用于获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置,并依据所述对应关系和所述位置确定机器人的位置及姿态。
  13. 根据权利要求12所述的机器人的定位系统,其特征在于,所述处理装置包括跟踪模块,与所述摄像装置相连,用于跟踪两幅图像帧中包含相同特征的位置。
  14. 根据权利要求10所述的机器人的定位系统,其特征在于,所述处理装置包括:
    第一定位模块,用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    第一定位补偿模块,用于基于所获取的移动信息补偿所确定的位置及姿态中的误差。
  15. 根据权利要求10所述的机器人的定位系统,其特征在于,所述存储装置还存储基于所匹配的特征而构建的地标信息。
  16. 根据权利要求15所述的机器人的定位系统,其特征在于,所述处理装置包括:
    第二定位模块,用于依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    第二定位补偿模块,用于基于所存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。
  17. 根据权利要求15所述的机器人的定位系统,其特征在于,所述处理装置包括更新模块,用于基于相匹配的特征更新所存储的地标信息。
  18. 一种机器人,其特征在于,包括:
    如权利要求1-8中任一所述的定位系统;或者如权利要求10-17中任一所述的定位系统;
    移动装置;
    控制装置,用于基于所述定位系统所提供的位置及姿态控制所述移动装置进行移动操作。
  19. 一种机器人定位方法,其特征在于,包括:
    获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置;
    依据所述对应关系和所述位置确定机器人的位置及姿态;其中,所述对应关系包括:
    图像坐标系与物理空间坐标系的对应关系。
  20. 根据权利要求19所述的机器人的定位方法,其特征在于,所述获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置的方式包括跟踪两幅图像帧中包含相同特征的位置。
  21. 根据权利要求19所述的机器人的定位方法,其特征在于,还包括获取机器人的移动信息的步骤。
  22. 根据权利要求21所述的机器人的定位方法,其特征在于,还包括基于两幅图像帧中相匹配特征的位置和自所述上一时刻至所述当前时刻所获取的移动信息,构建所述对应关系的步骤。
  23. 根据权利要求21所述的机器人的定位方法,其特征在于,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:
    依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    基于所获取的移动信息补偿所确定的位置及姿态中的误差。
  24. 根据权利要求19所述的机器人的定位方法,其特征在于,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:
    依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    基于预存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。
  25. 根据权利要求24所述的机器人的定位方法,其特征在于,还包括基于相匹配的特征更新所存储的地标信息的步骤。
  26. 一种机器人的定位方法,其特征在于,包括:
    获取机器人在移动期间的移动信息和多幅图像帧;
    获取上一时刻和当前时刻的两幅图像帧,并依据两幅所述图像帧中相匹配特征的位置和在该两时刻期间所获取的移动信息,构建图像坐标系与物理空间坐标系对应关系;
    利用所述对应关系确定机器人的位置及姿态。
  27. 根据权利要求26所述的机器人的定位方法,其特征在于,所述利用对应关系确定机器人的位置及姿态的方式包括:
    获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置;
    依据所述对应关系和所述位置确定机器人的位置及姿态。
  28. 根据权利要求27所述的机器人的定位方法,其特征在于,所述获取当前时刻图像帧和上一时刻图像帧中相匹配特征的位置的方式包括:跟踪两幅图像帧中包含相同特征的位置。
  29. 根据权利要求27所述的机器人的定位方法,其特征在于,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:
    依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    基于所获取的移动信息补偿所确定的位置及姿态中的误差。
  30. 根据权利要求27所述的机器人的定位方法,其特征在于,所述依据对应关系和所述位置确定机器人的位置及姿态的方式包括:
    依据所述对应关系和相匹配特征的位置确定机器人的位置及姿态;
    基于预存储的对应相匹配特征的地标信息补偿所确定的位置及姿态中的误差。
  31. 根据权利要求30所述的机器人的定位方法,其特征在于,还包括基于相匹配的特征更新所存储的地标信息的步骤。
PCT/CN2017/112412 2017-11-10 2017-11-22 定位系统、方法及所适用的机器人 WO2019090833A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17931779.7A EP3708954A4 (en) 2017-11-10 2017-11-22 POSITIONING SYSTEM AND METHOD AND ROBOTS WITH USE THEREOF
US16/043,746 US10436590B2 (en) 2017-11-10 2018-07-24 Localization system and method, and robot using the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711104306.1A CN107907131B (zh) 2017-11-10 2017-11-10 定位系统、方法及所适用的机器人
CN201711104306.1 2017-11-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/043,746 Continuation US10436590B2 (en) 2017-11-10 2018-07-24 Localization system and method, and robot using the same

Publications (1)

Publication Number Publication Date
WO2019090833A1 true WO2019090833A1 (zh) 2019-05-16

Family

ID=61844667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112412 WO2019090833A1 (zh) 2017-11-10 2017-11-22 定位系统、方法及所适用的机器人

Country Status (3)

Country Link
EP (1) EP3708954A4 (zh)
CN (1) CN107907131B (zh)
WO (1) WO2019090833A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111220148A (zh) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 移动机器人的定位方法、系统、装置及移动机器人
CN112417924A (zh) * 2019-08-20 2021-02-26 北京地平线机器人技术研发有限公司 一种标志杆的空间坐标获取方法及装置

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665508B (zh) * 2018-04-26 2022-04-05 腾讯科技(深圳)有限公司 一种即时定位与地图构建方法、装置及存储介质
CN108514389A (zh) * 2018-06-04 2018-09-11 赵海龙 一种智能清洁设备的控制方法
WO2020014864A1 (zh) * 2018-07-17 2020-01-23 深圳市大疆创新科技有限公司 位姿确定方法、设备、计算机可读存储介质
CN109116845B (zh) * 2018-08-17 2021-09-17 华晟(青岛)智能装备科技有限公司 自动导引运输车定位方法、定位系统及自动导引运输系统
CN109151440B (zh) * 2018-10-15 2020-06-09 盎锐(上海)信息科技有限公司 影像定位装置及方法
CN109643127B (zh) 2018-11-19 2022-05-03 深圳阿科伯特机器人有限公司 构建地图、定位、导航、控制方法及系统、移动机器人
CN109756750B (zh) * 2019-01-04 2022-01-28 中国科学院大学 视频流中动态图像动态特性识别方法和装置
CN109822568B (zh) * 2019-01-30 2020-12-29 镁伽科技(深圳)有限公司 机器人控制方法、系统及存储介质
CN109993793B (zh) * 2019-03-29 2021-09-07 北京易达图灵科技有限公司 视觉定位方法及装置
WO2020223974A1 (zh) 2019-05-09 2020-11-12 珊口(深圳)智能科技有限公司 更新地图的方法及移动机器人
CN110207537A (zh) * 2019-06-19 2019-09-06 赵天昊 基于计算机视觉技术的火控装置及其自动瞄准方法
CN112406608B (zh) * 2019-08-23 2022-06-21 国创移动能源创新中心(江苏)有限公司 充电桩及其自动充电装置和方法
CN110531445A (zh) * 2019-09-05 2019-12-03 中国科学院长春光学精密机械与物理研究所 一种日照时长测量装置及设备
CN111583338B (zh) * 2020-04-26 2023-04-07 北京三快在线科技有限公司 用于无人设备的定位方法、装置、介质及无人设备
CN112338910A (zh) * 2020-09-22 2021-02-09 北京无线体育俱乐部有限公司 空间地图确定方法、机器人、存储介质及系统
CN112947439A (zh) * 2021-02-05 2021-06-11 深圳市优必选科技股份有限公司 位置调整方法、装置、终端设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300020A1 (en) * 2011-05-27 2012-11-29 Qualcomm Incorporated Real-time self-localization from panoramic images
CN103292804A (zh) * 2013-05-27 2013-09-11 浙江大学 一种单目自然视觉路标辅助的移动机器人定位方法
CN104180818A (zh) * 2014-08-12 2014-12-03 北京理工大学 一种单目视觉里程计算装置
CN106444846A (zh) * 2016-08-19 2017-02-22 杭州零智科技有限公司 移动终端的定位和控制方法、装置及无人机
CN107193279A (zh) * 2017-05-09 2017-09-22 复旦大学 基于单目视觉和imu信息的机器人定位与地图构建系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004059900A2 (en) * 2002-12-17 2004-07-15 Evolution Robotics, Inc. Systems and methods for visual simultaneous localization and mapping
CN101598556B (zh) * 2009-07-15 2011-05-04 北京航空航天大学 一种未知环境下无人机视觉/惯性组合导航方法
CN102506830B (zh) * 2011-11-21 2014-03-12 奇瑞汽车股份有限公司 视觉定位方法及装置
US9420275B2 (en) * 2012-11-01 2016-08-16 Hexagon Technology Center Gmbh Visual positioning system that utilizes images of a working environment to determine position
EP3159126A4 (en) * 2014-06-17 2018-05-30 Yujin Robot Co., Ltd. Device and method for recognizing location of mobile robot by means of edge-based readjustment
CN104552341B (zh) * 2014-12-29 2016-05-04 国家电网公司 移动工业机器人单点多视角挂表位姿误差检测方法
CN106352877B (zh) * 2016-08-10 2019-08-23 纳恩博(北京)科技有限公司 一种移动装置及其定位方法
CN106338289A (zh) * 2016-08-11 2017-01-18 张满仓 基于机器人的室内定位导航系统及其方法
CN106370188A (zh) * 2016-09-21 2017-02-01 旗瀚科技有限公司 一种基于3d摄像机的机器人室内定位与导航方法
CN106990776B (zh) * 2017-02-27 2020-08-11 广东省智能制造研究所 机器人归航定位方法与系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300020A1 (en) * 2011-05-27 2012-11-29 Qualcomm Incorporated Real-time self-localization from panoramic images
CN103292804A (zh) * 2013-05-27 2013-09-11 浙江大学 一种单目自然视觉路标辅助的移动机器人定位方法
CN104180818A (zh) * 2014-08-12 2014-12-03 北京理工大学 一种单目视觉里程计算装置
CN106444846A (zh) * 2016-08-19 2017-02-22 杭州零智科技有限公司 移动终端的定位和控制方法、装置及无人机
CN107193279A (zh) * 2017-05-09 2017-09-22 复旦大学 基于单目视觉和imu信息的机器人定位与地图构建系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3708954A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417924A (zh) * 2019-08-20 2021-02-26 北京地平线机器人技术研发有限公司 一种标志杆的空间坐标获取方法及装置
CN111220148A (zh) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 移动机器人的定位方法、系统、装置及移动机器人

Also Published As

Publication number Publication date
EP3708954A4 (en) 2020-12-30
CN107907131B (zh) 2019-12-13
CN107907131A (zh) 2018-04-13
EP3708954A1 (en) 2020-09-16

Similar Documents

Publication Publication Date Title
WO2019090833A1 (zh) 定位系统、方法及所适用的机器人
US10436590B2 (en) Localization system and method, and robot using the same
CN109074083B (zh) 移动控制方法、移动机器人及计算机存储介质
WO2019095681A1 (zh) 定位方法、系统及所适用的机器人
JP6475772B2 (ja) 視覚的測位によるナビゲーション装置およびその方法
Kragic et al. Vision for robotic object manipulation in domestic settings
US8644557B2 (en) Method and apparatus for estimating position of moving vehicle such as mobile robot
CN106813672B (zh) 移动机器人的导航方法及移动机器人
Feigl et al. Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-scale Industry Environments.
CN110874100A (zh) 用于使用视觉稀疏地图进行自主导航的系统和方法
Goedemé et al. Feature based omnidirectional sparse visual path following
Šegvić et al. A mapping and localization framework for scalable appearance-based navigation
CN111220148A (zh) 移动机器人的定位方法、系统、装置及移动机器人
WO2019232804A1 (zh) 软件更新方法、系统、移动机器人及服务器
Karlekar et al. Positioning, tracking and mapping for outdoor augmentation
Burschka et al. Principles and practice of real-time visual tracking for navigation and mapping
Gratal et al. Visual servoing on unknown objects
Diosi et al. Outdoor visual path following experiments
Gerstmayr-Hillen et al. Dense topological maps and partial pose estimation for visual control of an autonomous cleaning robot
Strobl et al. The self-referenced DLR 3D-modeler
Tian et al. Research on multi-sensor fusion SLAM algorithm based on improved gmapping
Strobl et al. Image-based pose estimation for 3-D modeling in rapid, hand-held motion
WO2022227632A1 (zh) 基于图像的轨迹规划方法和运动控制方法以及使用该些方法的移动机器
Yu et al. Indoor Localization Based on Fusion of AprilTag and Adaptive Monte Carlo
Zhang et al. A visual slam system with laser assisted optimization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17931779

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017931779

Country of ref document: EP

Effective date: 20200610