CN112450820B - Pose optimization method, mobile robot and storage medium - Google Patents

Pose optimization method, mobile robot and storage medium

Info

Publication number
CN112450820B
Authority
CN
China
Prior art keywords
pose
mobile robot
visual
actual
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011322841.6A
Other languages
Chinese (zh)
Other versions
CN112450820A (en)
Inventor
闫瑞君
任娟娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Silver Star Intelligent Group Co Ltd
Original Assignee
Shenzhen Silver Star Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Silver Star Intelligent Technology Co Ltd
Priority to CN202011322841.6A
Publication of CN112450820A
Application granted
Publication of CN112450820B

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04: Automatic control of the travelling movement; Automatic obstacle detection

Abstract

The invention relates to the field of robots and discloses a pose optimization method, a mobile robot and a storage medium. The method comprises: controlling the mobile robot so as to collect actual visual odometer information and actual wheel type odometer information when the mobile robot moves to a key position in an area to be cleaned; calculating the visual pose and the wheel pose of the mobile robot from the actual visual odometer information and the actual wheel type odometer information; and, based on the visual pose and the wheel pose, performing fusion calculation with a predetermined standard pose using a pose optimization algorithm to obtain a final pose. Because positioning combines the visual pose, the wheel pose and the standard pose, the mobile robot can compute each positioning in real time without relying on historical positioning data, thereby reducing errors and avoiding the accumulation of errors from historical positioning data; positioning accuracy is improved, and the complexity of the positioning calculation is reduced.

Description

Pose optimization method, mobile robot and storage medium
Technical Field
The invention relates to the field of robots, in particular to a pose optimization method, a mobile robot and a storage medium.
Background
With the development of scientific technology, positioning navigation is one of the prerequisites for realizing intellectualization of the robot, and is a key factor for endowing the robot with perception and action capability.
At present, the traditional robot positioning method usually relies on a wheel type odometer for calculation. The wheel type odometer has the advantage of high positioning accuracy over short times and short distances, but such dead-reckoning positioning accumulates error. For a sweeping robot in particular, errors accumulate during the cleaning process, map inclination and pose errors inevitably build up, and in current implementations these errors are difficult to eliminate from the robot's own information, so that positioning is inaccurate and the motion control of the robot is affected.
Disclosure of Invention
The invention mainly aims to solve the technical problem that the robot is not accurately positioned due to accumulated pose errors in the existing positioning method.
The invention provides a pose optimization method, which is applied to a mobile robot and comprises the following steps:
receiving a cleaning instruction, starting the mobile robot, and collecting actual odometer information when the mobile robot moves to a key position of an area to be cleaned, wherein the actual odometer information comprises actual visual odometer information and actual wheel type odometer information;
respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
and performing fusion calculation on the visual pose, the wheel pose and a pre-configured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot.
Optionally, in a first implementation manner of the first aspect of the present invention, the receiving a cleaning instruction and starting the mobile robot, and acquiring actual odometry information when the mobile robot moves to a key position of an area to be cleaned includes:
starting the mobile robot according to the cleaning instruction, controlling the mobile robot to move, and cleaning the area to be cleaned;
detecting whether the mobile robot touches an obstacle when moving to a key position;
if yes, acquiring a key frame of the key position through a camera on the mobile robot, and calculating actual visual odometry information of the key position based on the key frame;
and acquiring actual wheel type odometer information of the key position through a motion sensor.
Optionally, in a second implementation manner of the first aspect of the present invention, the acquiring, by a camera on the mobile robot, a key frame of the key location, and calculating actual visual odometry information of the key location based on the key frame includes:
starting a loop detection thread of the mobile robot and determining whether a loop is detected;
if yes, shooting image frames on the key positions by using the camera, and calculating actual visual odometry information of the mobile robot at the key positions at the current moment based on the image frames.
Optionally, in a third implementation manner of the first aspect of the present invention, the starting a loop detection thread of the mobile robot, and determining whether a loop is detected includes:
starting the loop detection thread, and detecting whether the key position is a key position traversed when the mobile robot was started and followed the wall;
if so, determining that the obstacle the mobile robot collided with is the wall that the mobile robot passed while following the wall at startup, and determining that a loop is detected;
if not, determining that the obstacle the mobile robot collided with is not the wall that the mobile robot passed while following the wall at startup, and determining that no loop is detected.
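The loop judgment above can be sketched as a membership test against the key positions recorded during the startup wall-following pass. This is an illustrative sketch, not the patent's implementation; the function name, the 2-D position representation and the distance tolerance are all assumptions:

```python
def detect_loop(key_position, wall_follow_positions, tolerance=0.15):
    """Return True if key_position lies within `tolerance` metres of a
    position traversed during the startup wall-following pass, i.e. the
    collided obstacle is the wall passed at startup and a loop is detected."""
    x, y = key_position
    for wx, wy in wall_follow_positions:
        if (x - wx) ** 2 + (y - wy) ** 2 <= tolerance ** 2:
            return True   # loop detected
    return False          # not the startup wall: no loop
```

A position a few centimetres from a recorded wall point would count as a loop, while one far from every recorded point would not.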
Optionally, in a fourth implementation manner of the first aspect of the present invention, after the capturing, with the camera, image frames at the key positions and calculating actual visual odometry information of the key positions at the current time of the mobile robot based on the image frames, the method further includes:
inquiring whether the loop has a standard pose or not;
if the standard pose exists, comparing whether the visual pose corresponding to the actual visual odometer information is coincident with the standard pose or not;
and if the standard pose does not coincide with the standard pose, acquiring an image frame corresponding to the standard pose, and calculating a transformation matrix with the image frame acquired at the current moment to obtain the standard pose.
Optionally, in a fifth implementation manner of the first aspect of the present invention, the performing fusion calculation on the visual pose, the wheel pose, and a preconfigured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot includes:
determining the proportion of root mean square error weights of the visual pose, the wheel pose and the standard pose;
and performing weighted fusion calculation on the visual pose, the wheel pose and the standard pose by using a Kalman filtering algorithm according to the proportion of the root mean square error weight to obtain the final pose of the mobile robot.
Optionally, in a sixth implementation manner of the first aspect of the present invention, if it is determined that no loop is detected, or it is determined that a loop is detected and a standard pose does not exist in the loop, performing fusion calculation on the visual pose, the wheel pose, and a preconfigured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot includes:
and performing loose coupling calculation on the actual visual odometer information and the actual wheel type odometer information by using a Kalman filtering algorithm to obtain the final pose of the mobile robot.
Optionally, in a seventh implementation manner of the first aspect of the present invention, the pose optimization method further includes:
if the situation that a loop is detected and the loop does not have a standard pose is determined, acquiring a new key frame of the key position by using a camera on the mobile robot;
detecting whether the feature points in the new key frame meet preset conditions or not;
if the characteristic points do not meet preset conditions, acquiring actual wheel type odometry information of the mobile robot at the key position through a motion sensor;
calculating angle information between the sweeper and the wall;
judging whether the output angle between the angle information and the wheel type odometer information is smaller than a preset angle or not;
if so, performing fusion calculation on the angle information and the output angle, and adjusting the actual wheel type odometer information based on the calculation result;
if not, the actual wheel type odometer information is not changed;
taking the actual wheel-type odometer information as the actual visual odometer information;
and if the feature points meet preset conditions, acquiring a visual descriptor in the new key frame, and calculating actual visual odometry information of the key position based on the descriptor.
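The fallback steps above (used when the new key frame lacks sufficient feature points) can be sketched as follows. The simple averaging used for the fusion step and the 10-degree threshold are assumptions; the patent fixes neither:

```python
def correct_wheel_heading(wall_angle_deg, odom_angle_deg, max_diff_deg=10.0):
    """Adjust the wheel odometer heading using the measured angle between
    the sweeper and the wall. If the two angles differ by less than the
    preset threshold, fuse them (plain average here, as an assumption);
    otherwise leave the wheel odometer output unchanged."""
    if abs(wall_angle_deg - odom_angle_deg) < max_diff_deg:
        return 0.5 * (wall_angle_deg + odom_angle_deg)  # fused heading
    return odom_angle_deg                               # odometry unchanged
```

The corrected heading then stands in for the visual odometry at that key position, as the steps above describe.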
Optionally, in an eighth implementation manner of the first aspect of the present invention, before the receiving a cleaning instruction, starting the mobile robot, and acquiring actual odometer information when the mobile robot moves to a key location of an area to be cleaned, the method further includes:
controlling the mobile robot to move along the wall of the area to be cleaned, and acquiring a key frame and motion data in the moving process of the mobile robot;
calculating a first pose of the wheel type odometer according to the motion data;
calculating a second pose of the visual odometer according to the key frame;
and performing loose coupling processing on the first pose and the second pose through a Kalman filtering algorithm to obtain a standard pose.
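The loose coupling of the first (wheel odometer) pose and the second (visual odometer) pose into a standard pose can be approximated by inverse-covariance weighting, the stationary special case of a Kalman filter update. The sketch below assumes each pose is a NumPy vector with an associated covariance matrix; it is an illustration under those assumptions, not the patent's exact filter:

```python
import numpy as np

def loose_couple(pose_wheel, cov_wheel, pose_visual, cov_visual):
    """Fuse two pose estimates by inverse-covariance (information)
    weighting; the less uncertain estimate receives the larger weight."""
    info_w = np.linalg.inv(cov_wheel)
    info_v = np.linalg.inv(cov_visual)
    cov_fused = np.linalg.inv(info_w + info_v)
    pose_fused = cov_fused @ (info_w @ pose_wheel + info_v @ pose_visual)
    return pose_fused, cov_fused
```

With equal covariances the fused pose reduces to the mean of the two inputs, which is a quick sanity check for the weighting.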
Optionally, in a ninth implementation manner of the first aspect of the present invention, the calculating a second pose of the visual odometer according to the keyframe includes:
identifying the characteristic points in the key frame, and judging whether the total number of the characteristic points reaches a preset condition;
if not, taking the first pose as the second pose;
and if so, acquiring a visual descriptor in the key frame, controlling the mobile robot to move along the wall, searching a similar key frame similar to the visual descriptor, and calculating a second pose of the key position based on the similar key frame.
A second aspect of the present invention provides a mobile robot, characterized in that the mobile robot includes:
the mobile robot comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for receiving a cleaning instruction, starting the mobile robot and acquiring actual odometer information when the mobile robot moves to a key position of an area to be cleaned, and the actual odometer information comprises actual visual odometer information and actual wheel type odometer information;
the calculation module is used for respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
and the optimization module is used for performing fusion calculation on the visual pose, the wheel pose and a pre-configured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot.
Optionally, in a first implementation manner of the second aspect of the present invention, the acquisition module includes:
the cleaning unit is used for starting the mobile robot according to the cleaning instruction, controlling the mobile robot to move and cleaning the area to be cleaned;
the first detection unit is used for detecting whether the mobile robot touches an obstacle when moving to a key position;
the first calculation unit is used for acquiring a key frame of a key position through a camera on the mobile robot when the first detection unit detects that the mobile robot touches an obstacle when moving to the key position, and calculating actual visual odometry information of the key position based on the key frame;
and the second calculation unit is used for acquiring the actual wheel type odometer information of the key position through the motion sensor.
Optionally, in a second implementation manner of the second aspect of the present invention, the first computing unit is specifically configured to:
starting a loop detection thread of the mobile robot and determining whether a loop is detected;
if yes, shooting image frames on the key positions by using the camera, and calculating actual visual odometry information of the mobile robot at the key positions at the current moment based on the image frames.
Optionally, in a third implementation manner of the second aspect of the present invention, the first computing unit is specifically configured to:
starting the loop detection thread, and detecting whether the key position is a key position traversed when the mobile robot was started and followed the wall;
if so, determining that the obstacle the mobile robot collided with is the wall that the mobile robot passed while following the wall at startup, and determining that a loop is detected;
if not, determining that the obstacle the mobile robot collided with is not the wall that the mobile robot passed while following the wall at startup, and determining that no loop is detected.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the mobile robot further includes a standard pose calculation module, which is specifically configured to:
inquiring whether the loop has a standard pose or not;
if the standard pose exists, comparing whether the visual pose corresponding to the actual visual odometer information is coincident with the standard pose or not;
and if the standard pose does not coincide with the standard pose, acquiring an image frame corresponding to the standard pose, and calculating a transformation matrix with the image frame acquired at the current moment to obtain the standard pose.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the optimization module includes:
the determining unit is used for determining the proportion of root mean square error weights of the visual pose, the wheel pose and the standard pose;
and the first optimization unit is used for performing weighted fusion calculation on the visual pose, the wheel pose and the standard pose by using a Kalman filtering algorithm according to the proportion of the root mean square error weight to obtain the final pose of the mobile robot.
Optionally, in a sixth implementation manner of the second aspect of the present invention, if it is determined that no loop is detected, or it is determined that a loop is detected and the loop does not have a standard pose, the optimization module further includes a second optimization unit, which is specifically configured to:
and performing loose coupling calculation on the actual visual odometer information and the actual wheel type odometer information by using a Kalman filtering algorithm to obtain the final pose of the mobile robot.
Optionally, in a seventh implementation manner of the second aspect of the present invention, the first computing unit is further specifically configured to:
if the situation that a loop is detected and the loop does not have a standard pose is determined, acquiring a new key frame of the key position by using a camera on the mobile robot;
detecting whether the feature points in the new key frame meet preset conditions or not;
if the characteristic points do not meet preset conditions, acquiring actual wheel type odometry information of the mobile robot at the key position through a motion sensor;
calculating angle information between the sweeper and the wall;
judging whether the output angle between the angle information and the wheel type odometer information is smaller than a preset angle or not;
if so, performing fusion calculation on the angle information and the output angle, and adjusting the actual wheel type odometer information based on the calculation result;
if not, the actual wheel type odometer information is not changed;
taking the actual wheel-type odometer information as the actual visual odometer information;
and if the feature points meet preset conditions, acquiring a visual descriptor in the new key frame, and calculating actual visual odometry information of the key position based on the descriptor.
Optionally, in an eighth implementation manner of the second aspect of the present invention, the standard pose calculation module is further specifically configured to:
controlling the mobile robot to move along the wall of the area to be cleaned, and acquiring a key frame and motion data in the moving process of the mobile robot;
calculating a first pose of the wheel type odometer according to the motion data;
calculating a second pose of the visual odometer according to the key frame;
and performing loose coupling processing on the first pose and the second pose through a Kalman filtering algorithm to obtain a standard pose.
Optionally, in a ninth implementation manner of the second aspect of the present invention, the standard pose calculation module is specifically configured to:
identifying the characteristic points in the key frame, and judging whether the total number of the characteristic points reaches a preset condition;
if not, taking the first pose as the second pose;
and if so, acquiring a visual descriptor in the key frame, controlling the mobile robot to move along the wall, searching a similar key frame similar to the visual descriptor, and calculating a second pose of the key position based on the similar key frame.
A third aspect of the present invention provides a mobile robot comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the mobile robot to perform the pose optimization method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute the pose optimization method described above.
In the technical scheme provided by the invention, the mobile robot is controlled so as to collect the actual visual odometer information and the actual wheel type odometer information when it moves to a key position in the area to be cleaned; the visual pose and the wheel pose of the mobile robot are calculated from the actual visual odometer information and the actual wheel type odometer information; and, based on the visual pose and the wheel pose, fusion calculation is performed with a predetermined standard pose using a pose optimization algorithm to obtain a final pose. Positioning is thus carried out based on the visual pose, the wheel pose and the standard pose. The combination of the three poses allows the mobile robot to compute each positioning in real time without calculating from historical positioning data, thereby reducing errors and avoiding the accumulation of errors from historical data; positioning accuracy is improved, and the complexity of positioning calculation is reduced.
Drawings
Fig. 1 is a flowchart of a pose optimization method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a pose optimization method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a pose optimization method according to a third embodiment of the present invention;
FIG. 4 is a flowchart of calculating a standard pose provided by an embodiment of the present invention;
FIG. 5 is a flowchart of a standard pose determination process when a collision occurs to a wall or an obstacle according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention;
FIG. 7 is a schematic view of another embodiment of a mobile robot according to the present invention;
fig. 8 is a schematic diagram of an embodiment of a mobile robot in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a pose optimization method, a mobile robot and a storage medium. Specifically, a world coordinate system is determined; the room is pre-cleaned by controlling the sweeper to walk one circuit along the wall; loop detection and loop optimization are performed on the closed area; and the standard pose is determined. The sweeper then performs non-wall cleaning. Each time the sweeper reaches a wall, the pose obtained there, the standard pose and the pose obtained by the wheel type odometer are fused, a confidence is added in the fusion process, and an accurate pose output is finally obtained. Through detection and optimization of loops, the mobile robot can calculate each positioning in real time without calculating from historical positioning data, so errors are reduced, the accumulation of errors from historical positioning data is avoided, positioning accuracy is improved, and the complexity of positioning calculation is reduced.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For convenience of understanding, a specific flow of the embodiment of the present invention is described below, and referring to fig. 1, an embodiment of the pose optimization method in the embodiment of the present invention includes:
101. receiving a cleaning instruction, starting the mobile robot, and acquiring actual odometer information when the mobile robot moves to a key position of an area to be cleaned, wherein the actual odometer information comprises actual visual odometer information and actual wheel type odometer information;
it is understood that the execution subject of the present invention may be a mobile robot, and the specific type of robot is not limited herein. The embodiment of the invention takes a floor sweeping robot as an example for explanation.
In this embodiment, the cleaning instruction may be a control instruction generated by the user through an APP or a trigger button on the mobile robot. After receiving a control instruction, the mobile robot determines whether it is a cleaning instruction; if so, it starts the movement control program in the mobile robot and controls the mobile robot to move in the area to be cleaned.
In practical application, the mobile robot can move according to a preset movement path, or its movement can be controlled freely by acquiring environmental data in real time. During movement, whenever the mobile robot reaches a position, its sensors judge whether that position is a key position. If so, the sensors collect the actual odometry information of the mobile robot at the current position, which comprises visual odometry information and wheel type odometry information. The visual odometry information refers to environmental image data at the current position; the wheel type odometry information refers to motion data of the mobile robot at the current position, such as the number of wheel turns from the starting position to the current position and the angular speed of the wheels at the current position.
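The two kinds of odometry information collected at a key position can be represented, for illustration, by a pair of small data classes. All class and field names are assumptions introduced here, not the patent's:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WheelOdometry:
    """Motion data of the mobile robot at the current position."""
    wheel_turns: float     # wheel revolutions from the start position
    angular_speed: float   # angular speed of the wheel, rad/s

@dataclass
class ActualOdometry:
    """Odometry collected when the robot reaches a key position."""
    visual: Optional[bytes]  # environment image data; None if no usable image
    wheel: WheelOdometry
```

Leaving `visual` optional mirrors the later step in which the visual odometry parameters are set to null when the captured image is too blurry to use.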
In this embodiment, the step may be specifically implemented as: starting the mobile robot according to the cleaning instruction, controlling the mobile robot to move, and cleaning the area to be cleaned;
detecting whether the mobile robot touches an obstacle when moving to a key position;
if yes, acquiring a key frame of the key position through a camera on the mobile robot, and calculating actual visual odometry information of the key position based on the key frame;
and acquiring actual wheel type odometer information of the key position through a motion sensor, wherein the motion sensor comprises a wheel encoder and an inertial measurement unit (IMU), from whose readings the wheel type odometer information is calculated.
In practical application, the key positions are determined by automatic scanning and learning of the mobile robot, or may be set based on the actual use conditions of the mobile robot. In either case, when the mobile robot first cleans after startup, it is controlled to scan once along the edge of the cleaning area so as to set the key positions, that is, the positions of specific detection points along the edge of the cleaning area; preferably, the detection points may be spaced evenly along the wall line of the cleaning area.
When the mobile robot moves to a key position during normal cleaning, the pose positioning operation is triggered automatically, starting the data acquisition process.
102. Respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
in this embodiment, the visual pose is calculated mainly from the actual visual odometer information. Specifically, an environment image of the current position of the mobile robot is captured by the capturing device on the mobile robot, and the pose of the mobile robot is identified from the environment image to obtain the real-time visual pose.
In practical application, the calculation of the visual pose further comprises identifying the pixels of the environment image once it is obtained. If the pixels meet the sharpness requirement, the environment image is used as the actual visual odometer information and the visual pose is calculated from it;
if the pixels do not meet the sharpness requirement, it is determined that the mobile robot has no actual visual odometry information at the current position, the parameters corresponding to the actual visual odometry information are set to null, and the wheel type odometry information calculation process is executed instead.
In this embodiment, the wheel pose is calculated by collecting the motion parameters of the wheels of the mobile robot with a motion sensor, taking the collected motion parameters as the actual wheel type odometer information, and then calculating the corresponding wheel pose from those parameters.
Specifically, the wheel pose at the current position is calculated from the rotating speed data of the mobile robot acquired by the encoder together with the angle data acquired by the gyroscope; where no usable visual data exists, the wheel pose is used in place of the visual pose, so that both a visual pose and a wheel pose are obtained.
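As a concrete illustration of dead reckoning from encoder data, a standard differential-drive pose update is sketched below. The patent does not give this formula; it is the usual computation, with the heading increment derivable either from the two wheel encoders (as here) or taken directly from the gyroscope:

```python
import math

def wheel_pose(x, y, theta, d_left, d_right, wheel_base):
    """Update the wheel pose (x, y, theta) from encoder increments:
    d_left and d_right are the distances travelled by each wheel since
    the previous update, wheel_base is the distance between the wheels."""
    d = 0.5 * (d_left + d_right)              # distance of the robot centre
    dtheta = (d_right - d_left) / wheel_base  # heading change (or gyro reading)
    x_new = x + d * math.cos(theta + 0.5 * dtheta)  # midpoint-heading update
    y_new = y + d * math.sin(theta + 0.5 * dtheta)
    return x_new, y_new, theta + dtheta
```

Because each update adds on top of the previous pose, any small error in `d_left`, `d_right` or `wheel_base` compounds over distance, which is exactly the accumulated-error problem the fusion with visual and standard poses is meant to correct.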
103. And performing fusion calculation on the visual pose, the wheel pose and the pre-configured standard pose by using a preset pose optimization algorithm to obtain the final pose of the mobile robot.
In this embodiment, the pose optimization algorithm may be understood as a fusion algorithm. The standard pose is the pose calculated when the mobile robot cleaned the area to be cleaned at startup, and a key position is marked wherever such a pose was obtained. During normal cleaning, loop detection is performed whenever the mobile robot moves to a key position; when it is detected that the key position has a record from previous cleaning, it is determined that a loop is detected, and the standard pose at that key position is obtained. The real-time pose of the mobile robot is then located: specifically, the real-time pose, that is, the final pose, is calculated from the visual pose and the wheel pose at the current time in combination with the standard pose.
In practical application, before fusion calculation, the proportion among the visual pose, the wheel pose and the standard pose, namely the weight coefficient, needs to be determined, and the visual pose, the wheel pose and the standard pose are subjected to fusion calculation based on the weight coefficient to obtain the final pose.
Specifically, the pose optimization algorithm can be converted into a calculation formula as follows:
X = (Xa/w1 + Xb/w2 + Xc/w3) / (1/w1 + 1/w2 + 1/w3)

After stable data are obtained, weighted fusion is performed in proportion to the pose root-mean-square error weights.

Wherein w1, w2 and w3 are the root mean square errors of the visual odometer, the wheel odometer and the standard pose respectively, X is the fused pose, and Xa, Xb and Xc are the visual pose, the wheel pose and the standard pose respectively.
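One way to read "weighted fusion by the proportion of the root-mean-square error weights" is an inverse-RMSE weighted average, where the source with the smallest error contributes most. The sketch below rests on that assumption and is not necessarily the patent's exact formula; function names are illustrative:

```python
def fuse_inverse_rmse(poses, rmse):
    """Weight each pose estimate by the inverse of its RMS error, so the
    least noisy source contributes most to the fused pose X."""
    weights = [1.0 / w for w in rmse]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, poses)) / total

# Xa (visual), Xb (wheel), Xc (standard) along one axis, with RMS errors w1..w3.
x = fuse_inverse_rmse([1.0, 2.0, 1.2], [0.1, 0.4, 0.2])
```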
In the embodiment of the invention, the mobile robot is controlled to collect the actual visual odometer information and the actual wheel odometer information when it moves to a key position in the area to be cleaned; the visual pose and the wheel pose of the mobile robot are calculated from the actual visual odometer information and the actual wheel odometer information; and fusion calculation is carried out on the visual pose and the wheel pose, in combination with a predetermined standard pose, using a pose optimization algorithm to obtain the final pose. In this implementation, positioning is carried out based on the visual pose, the wheel pose and the standard pose, the obtained poses are fused, and confidence is added in the fusion process, finally yielding more accurate pose positioning data. No calculation from historical positioning data is required, so errors are reduced, the phenomenon of error accumulation due to historical errors is avoided, and the positioning accuracy is improved.
Referring to fig. 2, a second embodiment of the pose optimization method in the embodiment of the present invention is a method for actually implementing positioning of a mobile robot. It is mainly an improvement on the positioning method of an existing sweeping robot: on the basis of existing wheel odometer positioning, a visual odometer and a standard pose are additionally combined to optimize the pose, so as to obtain a more accurate pose. The method specifically includes the following steps:
201. starting the mobile robot according to the cleaning instruction, controlling the mobile robot to move, and cleaning the area to be cleaned;
in this step, the cleaning instruction includes a pre-cleaning instruction and a normal cleaning instruction, and pre-cleaning control and normal cleaning control are performed according to these two kinds of instructions. Specifically:
firstly, the mobile robot is controlled to start according to the pre-cleaning instruction, the position of the mobile robot is taken as the initial position, and a world coordinate system is constructed in combination with the walls in the map of the area to be cleaned. A wall can be understood as a boundary line in the map; in practical application, the boundary line can be treated as a wall-type obstacle, so that when the boundary line is detected while the mobile robot is moving, it is determined to be an obstacle;
and controlling the mobile robot to carry out pre-cleaning treatment along the wall body, setting a key position and recording the pose in the key position, wherein the recorded pose comprises visual odometer information and wheel-type odometer information, and carrying out fusion calculation based on the visual odometer information and the wheel-type odometer information to obtain a standard pose, which is also called a reference pose in some embodiments.
Further, after the complete standard pose is calculated according to the pre-cleaning instruction, the mobile robot is controlled to move according to the normal cleaning instruction, moving through each position in the area to be cleaned so as to carry out the cleaning operation.
202. Detecting whether the mobile robot touches an obstacle when moving to a key position;
in this step, the obstacle may be understood as a physical object, or as some preset mark. During movement, the mobile robot activates recognition sensors such as an infrared sensor, a camera or a photoresistor sensor; these sensors identify the situation at each position the robot moves to and detect whether an obstacle exists there, or determine whether an object the robot has hit while moving is an obstacle.
203. If yes, acquiring key frames of the key positions through a camera on the mobile robot, and calculating actual visual odometry information of the key positions based on the key frames;
in this step, in the process of collecting the key frame of the key position, the specific implementation is as follows:
starting a loop detection thread of the mobile robot and determining whether a loop is detected;
if yes, shooting image frames on the key positions by using the camera, and calculating actual visual odometry information of the mobile robot at the key positions at the current moment based on the image frames.
Wherein the starting a loop detection thread of the mobile robot and determining whether a loop is detected comprises:
starting the loop detection thread, and detecting whether the key position is a key position traversed when the mobile robot is started and follows a wall;
if so, determining that the barrier collided by the mobile robot is a wall body which is passed by the mobile robot when the mobile robot is started and follows the wall, and determining that a loop is detected;
if not, determining that the obstacle collided with is not a wall body that the mobile robot passed during start-up wall following, and determining that no loop is detected.
In practical application, when the mobile robot detects a loop, it indicates that the key position or the obstacle is a wall, and is also a position that the mobile robot passes through during pre-cleaning, but the corresponding standard pose is not necessarily calculated, and for this reason, after the loop is detected, the method further includes:
inquiring whether the loop has a standard pose or not;
if the standard pose exists, comparing whether the visual pose corresponding to the actual visual odometer information is coincident with the standard pose or not;
and if the standard pose does not coincide with the standard pose, acquiring an image frame corresponding to the standard pose, and calculating a transformation matrix with the image frame acquired at the current moment to obtain the standard pose.
204. Acquiring actual wheel type odometer information of a key position through a motion sensor;
205. respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
206. determining the proportion of root mean square error weights of the visual pose, the wheel pose and the standard pose;
207. and performing weighted fusion calculation on the visual pose, the wheel pose and the standard pose by using a Kalman filtering algorithm according to the proportion of the root mean square error weight to obtain the final pose of the mobile robot.
In the embodiment of the invention, the sweeper is controlled to walk along the wall for a circle, loop detection and loop optimization are firstly carried out on a closed area, and a standard pose is determined. And then the sweeper is used for non-wall sweeping, when the sweeper walks to a wall body every time, the obtained pose, the standard pose and the pose obtained by the wheel type odometer are fused, confidence is added in the fusion process, and accurate pose output is finally obtained, so that real-time positioning calculation is realized, calculation is not required according to historical positioning data, errors are reduced, and meanwhile, the phenomenon that errors are accumulated due to the historical positioning data is avoided.
In this embodiment, as shown in fig. 3, the positioning of the mobile robot can also be achieved by a method specifically including pre-cleaning (cleaning along the wall) and normal cleaning (i.e., not cleaning along the wall). The normal cleaning comprises cleaning of collision with an obstacle and cleaning of no collision with the obstacle, and the cleaning of collision with the obstacle comprises collision with a wall body and collision with a non-wall body, and the method comprises the following implementation steps:
301. starting the mobile robot to carry out cleaning operation;
302. receiving a pre-cleaning instruction, controlling the mobile robot to move and determining a standard pose;
the method comprises the following steps of determining a world coordinate system and determining a reference pose;
wherein, determining a world coordinate system: specifically, the position of the sweeping robot is taken as the position of a world coordinate system, the center of the sweeping machine is taken as the origin of the world coordinate system, the forward direction is an x axis, the leftward direction is a y axis, and the upward direction is a z axis;
determining a reference pose: after the mobile robot is started, the sweeper is controlled to perform pre-sweeping for one circle along the wall, so that the reference pose is determined, and the specific implementation flow is shown in fig. 4:
401. acquiring an image frame of the current position of the mobile robot;
402. acquiring wheel parameters of the mobile robot, and calculating wheel type odometer information;
403. calculating the visual odometer according to the image frame to obtain visual odometer information;
in the step, when the image frame is obtained, whether the image frame is a key frame or not is judged, if yes, the image frame is analyzed and extracted, a visual descriptor in the image frame is extracted, similarity query of the descriptor is carried out, and the visual odometry information is calculated based on a query result and the descriptor.
404. Judging whether visual odometer information exists or not;
in this step, "exists" means whether the visual odometry information can be calculated from the image frame; if it cannot, the information is determined to be absent, otherwise it is present. Specifically, under image blur or low texture, when the number of feature points in the acquired image frame is less than 50, no keyframe can be obtained, so the visual odometry cannot be calculated.
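The existence check above reduces to a feature-count gate. A minimal sketch (the threshold of 50 comes from the text; the function names are illustrative):

```python
MIN_FEATURES = 50  # below this, the frame is too blurred or low-texture

def visual_odometry_available(num_features, min_features=MIN_FEATURES):
    """With fewer than 50 feature points no keyframe is produced,
    so visual odometry is 'absent'."""
    return num_features >= min_features

def visual_pose_or_none(num_features, compute_pose):
    # compute_pose is a callable producing the visual pose from the frame;
    # when the gate fails, the visual-odometry parameters are set to null.
    return compute_pose() if visual_odometry_available(num_features) else None
```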
405. If so, performing fusion calculation on the visual odometer information and the wheel type odometer information to obtain a pose;
406. controlling the mobile robot to perform loop detection to obtain a detection result;
407. and calculating a standard pose based on the pose and the detection result.
In the step, the standard pose is calculated by controlling the mobile robot to move along the wall of the area to be cleaned and acquiring a key frame and motion data in the moving process of the mobile robot;
calculating a first pose of the wheel type odometer according to the motion data;
calculating a second pose of the visual odometer according to the key frame;
and performing loose coupling processing on the first pose and the second pose through a Kalman filtering algorithm to obtain a standard pose.
Further, the calculating a second pose of the visual odometer from the keyframe comprises:
identifying the characteristic points in the key frame, and judging whether the total number of the characteristic points reaches a preset condition;
if not, taking the first pose as the second pose;
and if so, acquiring a visual descriptor in the key frame, controlling the mobile robot to move along the wall, searching a similar key frame similar to the visual descriptor, and calculating a second pose of the key position based on the similar key frame.
In practical application, the following specific implementation is implemented for calculating the visual odometer information and the wheel-type odometer information:
controlling the sweeping robot to collect keyframes in the process of following the wall, and fusing the poses of the visual odometer and the wheel odometer (the encoder determines the magnitude of the pose increment and the gyroscope determines its direction) to obtain a temporary pose;
loosely coupling the visual odometer information and the wheel-type odometer information by using Kalman filtering to construct a target function model, a motion model and an observation model;
wherein the motion model is: X_k = A·X_(k-1) + u_k;
and the observation model is: Z_k = C·X_k + w_k;
wherein X_k is the pose state at time k, which for a planar robot may be written as X_k = (x_k, y_k, θ_k)^T;
u_k and w_k are white-noise inputs with mean 0 and variances Q and R respectively, and Z_k is the measured pose.
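A minimal scalar Kalman filter exercising the motion and observation models above (A, C, Q, R and all numeric values here are illustrative assumptions, not the patent's parameters):

```python
def kalman_step(x, p, z, a=1.0, c=1.0, q=0.01, r=0.1):
    """One loose-coupling step: predict with the motion model
    X_k = A*X_(k-1) + u_k (u_k ~ N(0, Q)), then correct with the
    observation model Z_k = C*X_k + w_k (w_k ~ N(0, R))."""
    x_pred = a * x          # predicted pose (e.g. from the wheel odometer)
    p_pred = a * p * a + q  # predicted variance
    k = p_pred * c / (c * p_pred * c + r)  # Kalman gain
    x_new = x_pred + k * (z - c * x_pred)  # corrected by the measurement z
    p_new = (1.0 - k * c) * p_pred
    return x_new, p_new

# Fuse a stream of visual measurements of a pose whose true value is 1.0.
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, 1.0)
```

With a constant measurement the estimate converges to the measured pose while the variance settles at a steady-state value between Q and the initial uncertainty.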
After obtaining stable data, performing weighted fusion by the proportion of the position root mean square error weight, specifically calculating by the following formula:
X = (Xa/w1 + Xb/w2) / (1/w1 + 1/w2)

wherein w1 and w2 are the root mean square errors of the visual odometer and the wheel odometer respectively, and X is the fused pose. Xa and Xb are the poses of the visual odometer and the wheel odometer respectively.
When, during pre-cleaning or normal cleaning, the image is blurred or has low texture and the number of feature points in the acquired image frame is less than 50, no keyframe can be obtained, so the visual odometry cannot be calculated; at this time only the wheel odometer exists, and the wheel odometer pose is taken as the temporary pose;
in the pre-cleaning process, when enough feature points can be obtained to satisfy keyframe extraction, a visual descriptor is obtained and similarity searching is performed at the same time. After one full circle along the wall, loop detection and loop optimization are performed; the temporary pose is corrected through loop optimization, and the corrected temporary pose is the standard pose.
In this embodiment, if it is determined that a loop is detected and the loop does not have a standard pose, the standard pose may be calculated by:
acquiring a new keyframe of the key location with a camera on the mobile robot;
detecting whether the feature points in the new key frame meet preset conditions or not;
if the characteristic points do not meet preset conditions, acquiring actual wheel type odometry information of the mobile robot at the key position through a motion sensor;
calculating angle information between the sweeper and the wall;
judging whether the output angle between the angle information and the wheel type odometer information is smaller than a preset angle or not;
if so, performing fusion calculation on the angle information and the output angle, and adjusting the actual wheel type odometer information based on the calculation result;
if not, the actual wheel type odometer information is not changed;
taking the actual wheel-type odometer information as the actual visual odometer information;
and if the feature points meet preset conditions, acquiring a visual descriptor in the new key frame, and calculating actual visual odometry information of the key position based on the descriptor.
In practical application, when the mobile robot is detected to move to a key position without encountering an obstacle, a loop detection and loop optimization strategy is not added, and the following implementation is realized:
when cleaning is started, a new image frame is utilized to calculate the visual odometer, under the condition that the visual odometer cannot be calculated without characteristics under the conditions of weak texture or fuzzy image and the like, the wheel type odometer, the sweeper and the wall angle extraction algorithm are fused to obtain new wheel type odometer for outputting the pose, the angle between the sweeper and the wall is more confident when the angle between the sweeper and the wall and the output angle between the wheel type odometer are less than 2 degrees, the weight setting of the angle is larger and is fused with the angle of the wheel type odometer, otherwise, the angle value of the wheel type odometer is not changed;
and under the condition that the visual odometer can be calculated, performing Kalman filtering algorithm by using the newly obtained wheel type odometer and the visual odometer to fuse and output the pose.
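The 2-degree angle rule can be sketched as follows; the wall-angle weight of 0.8 is an assumed value (the text only says the weight is set larger), and the names are illustrative:

```python
ANGLE_THRESHOLD_DEG = 2.0

def fuse_heading(wall_angle, wheel_angle, wall_weight=0.8):
    """If the sweeper-to-wall angle and the wheel-odometer output angle
    differ by less than 2 degrees, trust the wall angle more (larger
    weight); otherwise keep the wheel-odometer angle unchanged."""
    if abs(wall_angle - wheel_angle) < ANGLE_THRESHOLD_DEG:
        return wall_weight * wall_angle + (1.0 - wall_weight) * wheel_angle
    return wheel_angle
```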
In this embodiment, before loop optimization, the pose of the keyframe needs to be determined: (1) if the previous and current images can be tracked, the camera pose can be calculated using the one-to-one 3D-to-2D correspondences; (2) if tracking between the previous and current images fails, the keyframe database is retrieved for relocalization; if the images in several places are similar to the current frame, there will be several candidate frames for relocalization. For each candidate keyframe, ORB feature matching consistency is computed, that is, the ORB feature points of the current frame are matched against the map points associated with the candidate keyframe. Thus a set of 2D-to-3D correspondences is obtained for each candidate keyframe. A RANSAC iterative algorithm is executed on each candidate frame to calculate the camera pose; if the current pose still cannot be determined by relocalization, the pose of the wheel odometer is taken as the camera pose;
after the loop is determined, global optimization is performed using a Levenberg-Marquardt nonlinear least squares algorithm on the camera pose, that is, the temporary pose obtained by fusing the visual odometer and the wheel odometer (the optimization variables include the three-dimensional point set of the keyframes and the temporary pose), and the standard pose is confirmed.
303. Controlling the mobile robot to carry out cleaning operation in the area to be cleaned according to the normal cleaning instruction;
304. detecting whether the mobile robot touches an obstacle when moving to a key position;
305. if the collision is the obstacle, starting a loop detection thread, determining whether a loop is detected or not, and obtaining a loop detection result;
306. determining a standard pose according to a loop detection result;
in the step, the detection result comprises that loop is detected and loop is not detected, different detection results are different for determining the standard pose, and the specific implementation flow is as follows:
while for detecting the loop, the flow of calculating the standard pose is shown in fig. 5, and the specific steps include:
501. identifying a loop detection result;
502. if the loop is detected, inquiring a corresponding standard pose;
503. determining whether a standard pose is queried;
504. if the loop cannot be detected or the standard pose cannot be inquired, acquiring the current visual odometer information and wheel type odometer information of the mobile robot;
505. if the standard pose is inquired, acquiring visual odometer information by using a camera on the mobile robot, and calculating the visual odometer information and the inquired standard pose to obtain a new standard pose;
307. acquiring key frames of key positions through a camera on the mobile robot, and calculating actual visual odometry information of the key positions based on the key frames;
308. acquiring actual wheel type odometer information of a key position through a motion sensor;
309. respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
310. determining the proportion of root mean square error weights of the visual pose, the wheel pose and the standard pose;
311. and performing weighted fusion calculation on the visual pose, the wheel pose and the standard pose by using a Kalman filtering algorithm according to the proportion of the root mean square error weight to obtain the final pose of the mobile robot.
In this embodiment, when a wall or an obstacle is collided with, the loop detection thread is started to determine whether a loop is detected. In the first case, a loop is detected when a wall is collided with; in the second case, no loop is detected when an obstacle is collided with. (In the case where a wall is actually collided with but no loop is detected, the loop detection algorithm should in theory be perfected; if no loop is detected, the situation is handled as if no loop exists.)
(1) A loop is detected (generally regarded as the sweeper colliding with a wall body passed during start-up wall following): a thread for searching and detecting the standard pose is started, and the actual standard pose is calculated using the stored standard pose and the camera pose. Mainly, when the standard pose and the camera pose do not coincide completely, a transformation matrix T1 is calculated from the keyframe to which the standard pose belongs and the keyframe of the camera pose, and the standard pose under the current pose is calculated on the basis of the stored standard pose T0:

T = T1 · T0
Then the pose is output through a fusion algorithm using the standard pose, the visual odometer and the wheel odometer. Generally, when the actual standard pose can be calculated from the standard pose and the camera pose, the confidence of the standard pose is set larger: when fusing with the visual odometer and the wheel odometer, the standard pose is trusted more, a covariance of a suitable size is set, and the fused pose is calculated by Kalman filtering.
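Under the assumption that the correction composes the keyframe-to-keyframe transform T1 with the stored standard pose T0 as T = T1·T0 (the patent's exact composition is not reproduced here), a planar sketch with homogeneous transforms; all values are illustrative:

```python
import math

def se2(x, y, theta):
    """3x3 homogeneous transform for a planar pose (x, y, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# T0: stored standard pose; T1: relative transform recovered by matching
# the standard-pose keyframe against the current keyframe (illustrative).
T0 = se2(1.0, 0.0, 0.0)
T1 = se2(0.0, 0.0, math.pi / 2)  # pure 90-degree rotation
T = matmul(T1, T0)  # standard pose expressed under the current pose
```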
X = (Xa/w1 + Xb/w2 + Xc/w3) / (1/w1 + 1/w2 + 1/w3)

After stable data are obtained, weighted fusion is performed in proportion to the pose root-mean-square error weights.

Wherein w1, w2 and w3 are the root mean square errors of the visual odometer, the wheel odometer and the standard pose respectively, and X is the fused pose. Xa, Xb and Xc are the poses of the visual odometer, the wheel odometer and the standard pose respectively.
And (4) detecting a loop, and if the standard pose is not found, outputting the pose by using the visual odometer and the wheel type odometer in the same way, namely, outputting the fused pose by using a Kalman filtering algorithm.
And (2) no loop can be detected (generally, the loop is considered as a wall body which is passed when the sweeper collides with an obstacle instead of being started along the wall), and the visual odometer and the wheel-type odometer are loosely coupled by using a Kalman filtering algorithm to output a fused pose.
In summary, by combining the visual pose, the wheel pose and the standard pose, the mobile robot can calculate in real time during each positioning without calculating according to historical positioning data, so that errors are reduced, the phenomenon that errors are accumulated due to the historical positioning data is avoided, and the positioning accuracy is improved.
With reference to fig. 6, the mobile robot in the embodiment of the present invention is described below, and an embodiment of the mobile robot in the embodiment of the present invention includes:
the system comprises an acquisition module 601, a display module and a control module, wherein the acquisition module 601 is used for receiving a cleaning instruction, starting the mobile robot and acquiring actual odometer information when the mobile robot moves to a key position of an area to be cleaned, and the actual odometer information comprises actual visual odometer information and actual wheel type odometer information;
a calculating module 602, configured to calculate a visual pose and a wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel odometer information;
and the optimizing module 603 is configured to perform fusion calculation on the visual pose, the wheel pose, and a preconfigured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot.
In the embodiment of the invention, the mobile robot is controlled to walk along the wall for a circle, loop detection and loop optimization are firstly carried out on a closed area, and a standard pose is determined. And then the mobile robot is cleaned along the wall, when the mobile robot walks to the wall every time, the obtained pose, the standard pose and the pose obtained by the wheel type odometer are fused, confidence is added in the fusion process, and accurate pose output is finally obtained.
Referring to fig. 7, another embodiment of the mobile robot according to the embodiment of the present invention includes:
an acquisition module 601, configured to receive a cleaning instruction, start the mobile robot, and acquire actual odometer information when the mobile robot moves to a key position of an area to be cleaned, where the actual odometer information includes actual visual odometer information and actual wheel odometer information;
a calculating module 602, configured to calculate a visual pose and a wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel odometer information;
and the optimizing module 603 is configured to perform fusion calculation on the visual pose, the wheel pose, and a preconfigured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot.
Wherein the acquisition module 601 comprises:
a cleaning unit 6011, configured to start the mobile robot according to the cleaning instruction, control the mobile robot to move, and perform a cleaning operation on the area to be cleaned;
a first detecting unit 6012, configured to detect whether the mobile robot touches an obstacle when moving to a critical position;
a first calculating unit 6013, configured to, when the first detecting unit detects that the mobile robot touches an obstacle when moving to a key position, acquire, by a camera on the mobile robot, a key frame of the key position, and calculate actual visual odometry information of the key position based on the key frame;
and a second calculation unit 6014, configured to acquire, through the motion sensor, actual wheel-type odometer information of the key location.
Optionally, the first computing unit 6013 is specifically configured to:
starting a loop detection thread of the mobile robot and determining whether a loop is detected;
if yes, shooting image frames on the key positions by using the camera, and calculating actual visual odometry information of the mobile robot at the key positions at the current moment based on the image frames.
Optionally, the first computing unit 6013 is specifically configured to:
starting the loop detection thread, and detecting whether the key position is a key position traversed when the mobile robot is started and follows a wall;
if so, determining that the barrier collided by the mobile robot is a wall body which is passed by the mobile robot when the mobile robot is started and follows the wall, and determining that a loop is detected;
if not, determining that the obstacle collided by the mobile robot is not the wall body passed by the mobile robot when the mobile robot is started up and along the wall, and determining that no loop is detected.
Optionally, the mobile robot further includes a standard pose calculation module 704, which is specifically configured to:
inquiring whether the loop has a standard pose or not;
if the standard pose exists, comparing whether the visual pose corresponding to the actual visual odometer information is coincident with the standard pose or not;
and if the standard pose does not coincide with the standard pose, acquiring an image frame corresponding to the standard pose, and calculating a transformation matrix with the image frame acquired at the current moment to obtain the standard pose.
In this embodiment, the optimizing module 603 includes:
a determining unit 6031 configured to determine a proportion of root mean square error weights of the visual pose, the wheel pose, and the standard pose;
and a first optimization unit 6032, configured to perform weighted fusion calculation on the visual pose, the wheel pose, and the standard pose by using a kalman filter algorithm according to the specific gravity of the root-mean-square error weight, so as to obtain a final pose of the mobile robot.
In this embodiment, if it is determined that no loop is detected, or it is determined that a loop is detected and the loop does not have a standard pose, the optimization module 603 further includes a second optimization unit 6033, which is specifically configured to:
and performing loose coupling calculation on the actual visual odometry information and the actual wheel odometry information by using a Kalman filtering algorithm to obtain the final pose of the mobile robot.
Optionally, the first computing unit 6013 is further specifically configured to:
if the situation that a loop is detected and the loop does not have a standard pose is determined, acquiring a new key frame of the key position by using a camera on the mobile robot;
detecting whether the feature points in the new key frame meet preset conditions or not;
if the characteristic points do not meet preset conditions, acquiring actual wheel type odometry information of the mobile robot at the key position through a motion sensor;
calculating angle information between the sweeper and the wall;
judging whether the output angle between the angle information and the wheel type odometer information is smaller than a preset angle or not;
if so, performing fusion calculation on the angle information and the output angle, and adjusting the actual wheel type odometer information based on the calculation result;
if not, the actual wheel type odometer information is not changed;
taking the actual wheel-type odometer information as the actual visual odometer information;
and if the feature points meet preset conditions, acquiring a visual descriptor in the new key frame, and calculating actual visual odometry information of the key position based on the descriptor.
Optionally, the standard pose calculation module 604 is further specifically configured to:
controlling the mobile robot to move along the wall of the area to be cleaned, and acquiring a key frame and motion data in the moving process of the mobile robot;
calculating a first pose of the wheel type odometer according to the motion data;
calculating a second pose of the visual odometer according to the key frame;
and performing loose coupling processing on the first pose and the second pose through a Kalman filtering algorithm to obtain a standard pose.
Optionally, the standard pose calculation module 604 is specifically configured to:
identifying the characteristic points in the key frame, and judging whether the total number of the characteristic points reaches a preset condition;
if not, taking the first pose as the second pose;
and if so, acquiring a visual descriptor in the key frame, controlling the mobile robot to move along the wall, searching a similar key frame similar to the visual descriptor, and calculating a second pose of the key position based on the similar key frame.
The foregoing Figs. 6 and 7 describe the mobile robot in the embodiment of the present invention in detail from the perspective of modular functional entities; the following describes the mobile robot in the embodiment of the present invention in detail from the perspective of hardware processing.
Fig. 8 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention. The mobile robot 900 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 910, a memory 920, and one or more storage media 930 (e.g., one or more mass storage devices) storing applications 933 or data 932. The memory 920 and the storage medium 930 may be transient storage or persistent storage. The program stored in the storage medium 930 may include one or more modules (not shown), and each module may include a series of instruction operations on the mobile robot 900. Further, the processor 910 may be configured to communicate with the storage medium 930 to execute, on the mobile robot 900, the series of instruction operations stored in the storage medium 930.
The mobile robot 900 may also include one or more power supplies 940, one or more wired or wireless network interfaces 950, one or more input/output interfaces 960, and/or one or more operating systems 931, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and so forth. Those skilled in the art will appreciate that the structure shown in fig. 8 does not constitute a limitation of the mobile robot, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
The present invention also provides a mobile robot, which includes a memory and a processor. The memory stores computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the pose optimization method in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium, and which stores instructions that, when run on a computer, cause the computer to perform the steps of the pose optimization method.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A pose optimization method is applied to a mobile robot, and is characterized by comprising the following steps:
receiving a cleaning instruction, starting the mobile robot, and collecting actual odometer information when the mobile robot moves to a key position of an area to be cleaned, wherein the actual odometer information comprises actual visual odometer information and actual wheel type odometer information;
respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
performing fusion calculation on the visual pose, the wheel pose and a pre-configured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot;
the standard pose is obtained based on the following steps: controlling the mobile robot to move along the wall of the area to be cleaned, and acquiring a key frame and motion data in the moving process of the mobile robot; calculating a first pose of the wheel type odometer according to the motion data; calculating a second pose of the visual odometer according to the key frame; and performing loose coupling processing on the first pose and the second pose through a Kalman filtering algorithm to obtain a standard pose.
2. A pose optimization method according to claim 1, wherein the receiving a cleaning instruction and starting the mobile robot, and the collecting actual odometry information when the mobile robot moves to a key position of an area to be cleaned comprises:
starting the mobile robot according to the cleaning instruction, controlling the mobile robot to move, and cleaning the area to be cleaned;
detecting whether the mobile robot touches an obstacle when moving to a key position;
if yes, acquiring a key frame of the key position through a camera on the mobile robot, and calculating actual visual odometry information of the key position based on the key frame;
and acquiring actual wheel type odometer information of the key position through a motion sensor.
3. A pose optimization method according to claim 2, wherein the capturing, by a camera on the mobile robot, keyframes of the key positions and calculating actual visual odometry information for the key positions based on the keyframes comprises:
starting a loop detection thread of the mobile robot and determining whether a loop is detected;
if yes, shooting image frames on the key positions by using the camera, and calculating actual visual odometry information of the mobile robot at the key positions at the current moment based on the image frames.
4. A pose optimization method according to claim 3, wherein the starting a loop detection thread of the mobile robot and determining whether a loop is detected comprises:
starting the loop detection thread, and detecting whether the key position is a key position traversed when the mobile robot is started and follows a wall;
if so, determining that the barrier collided by the mobile robot is a wall body which is passed by the mobile robot when the mobile robot is started and follows the wall, and determining that a loop is detected;
if not, determining that the obstacle collided by the mobile robot is not the wall body passed by the mobile robot when the mobile robot is started up and along the wall, and determining that no loop is detected.
5. A pose optimization method according to claim 3, wherein after the capturing image frames on the key positions with the camera and calculating actual visual odometry information of the key positions at the current time of the mobile robot based on the image frames, further comprising:
inquiring whether the loop has a standard pose or not;
if the standard pose exists, comparing whether the visual pose corresponding to the actual visual odometer information is coincident with the standard pose or not;
and if the standard pose does not coincide with the standard pose, acquiring an image frame corresponding to the standard pose, and calculating a transformation matrix with the image frame acquired at the current moment to obtain the standard pose.
6. The pose optimization method according to claim 5, wherein the fusion calculation of the visual pose, the wheeled pose, and the preconfigured standard pose using a preset pose optimization algorithm to obtain the final pose of the mobile robot comprises:
determining the proportion of root mean square error weights of the visual pose, the wheel pose and the standard pose;
and performing weighted fusion calculation on the visual pose, the wheel pose and the standard pose by using a Kalman filtering algorithm according to the proportion of the root mean square error weight to obtain the final pose of the mobile robot.
7. The pose optimization method according to claim 5, wherein if it is determined that no loop is detected or that a loop is detected and no standard pose exists, the fusion calculation of the visual pose, the wheel pose and the pre-configured standard pose by using a preset pose optimization algorithm to obtain the final pose of the mobile robot comprises:
and performing loose coupling calculation on the actual visual odometer information and the actual wheel type odometer information by using a Kalman filtering algorithm to obtain the final pose of the mobile robot.
8. The pose optimization method according to claim 5, further comprising:
if the situation that a loop is detected and the loop does not have a standard pose is determined, acquiring a new key frame of the key position by using a camera on the mobile robot;
detecting whether the feature points in the new key frame meet preset conditions or not;
if the feature points do not meet the preset conditions, acquiring actual wheel type odometer information of the mobile robot at the key position through a motion sensor;
calculating angle information between the mobile robot and the wall;
judging whether the difference between the angle information and the output angle of the wheel type odometer information is smaller than a preset angle or not;
if so, performing fusion calculation on the angle information and the output angle, and adjusting the actual wheel type odometer information based on the calculation result;
if not, the actual wheel type odometer information is not changed;
taking the actual wheel-type odometer information as the actual visual odometer information;
and if the feature points meet preset conditions, acquiring a visual descriptor in the new key frame, and calculating actual visual odometry information of the key position based on the descriptor.
9. A pose optimization method according to claim 1, wherein the calculating a second pose of a visual odometer from the keyframes comprises:
identifying the feature points in the key frame, and judging whether the total number of feature points meets a preset condition;
if not, taking the first pose as the second pose;
and if so, acquiring a visual descriptor in the key frame, controlling the mobile robot to move along the wall, searching a similar key frame similar to the visual descriptor, and calculating a second pose of the key position based on the similar key frame.
10. A mobile robot, characterized in that the mobile robot comprises:
the mobile robot comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for receiving a cleaning instruction, starting the mobile robot and acquiring actual odometer information when the mobile robot moves to a key position of an area to be cleaned, and the actual odometer information comprises actual visual odometer information and actual wheel type odometer information;
the calculation module is used for respectively calculating the visual pose and the wheel pose of the mobile robot according to the actual visual odometer information and the actual wheel type odometer information;
the optimization module is used for performing fusion calculation on the visual pose, the wheel pose and a pre-configured standard pose by using a preset pose optimization algorithm to obtain a final pose of the mobile robot; the standard pose is obtained based on the following steps: controlling the mobile robot to move along the wall of the area to be cleaned, and acquiring a key frame and motion data in the moving process of the mobile robot; calculating a first pose of the wheel type odometer according to the motion data; calculating a second pose of the visual odometer according to the key frame; and performing loose coupling processing on the first pose and the second pose through a Kalman filtering algorithm to obtain a standard pose.
11. A mobile robot, characterized in that the mobile robot comprises: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invoking the instructions in the memory to cause the mobile robot to perform the steps of the pose optimization method of any of claims 1-9.
12. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the pose optimization method according to any one of claims 1-9.
CN202011322841.6A 2020-11-23 2020-11-23 Pose optimization method, mobile robot and storage medium Active CN112450820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011322841.6A CN112450820B (en) 2020-11-23 2020-11-23 Pose optimization method, mobile robot and storage medium

Publications (2)

Publication Number Publication Date
CN112450820A CN112450820A (en) 2021-03-09
CN112450820B true CN112450820B (en) 2022-01-21

Family

ID=74798581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011322841.6A Active CN112450820B (en) 2020-11-23 2020-11-23 Pose optimization method, mobile robot and storage medium

Country Status (1)

Country Link
CN (1) CN112450820B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113063441B (en) * 2021-03-16 2022-11-18 李金波 Data source correction method and device for accumulated calculation error of odometer
CN113203419B (en) * 2021-04-25 2023-11-10 重庆大学 Indoor inspection robot correction positioning method based on neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109506641A (en) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 The pose loss detection and relocation system and robot of mobile robot
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016211805A1 (en) * 2015-10-09 2017-04-13 Volkswagen Aktiengesellschaft Fusion of position data using poses graph
CN107356252B (en) * 2017-06-02 2020-06-16 青岛克路德机器人有限公司 Indoor robot positioning method integrating visual odometer and physical odometer
CN107160395B (en) * 2017-06-07 2020-10-16 中国人民解放军装甲兵工程学院 Map construction method and robot control system
CN107869989B (en) * 2017-11-06 2020-02-07 东北大学 Positioning method and system based on visual inertial navigation information fusion
CN109579844B (en) * 2018-12-04 2023-11-21 电子科技大学 Positioning method and system
CN109974721A (en) * 2019-01-08 2019-07-05 武汉中海庭数据技术有限公司 A kind of vision winding detection method and device based on high-precision map
CN111739063B (en) * 2020-06-23 2023-08-18 郑州大学 Positioning method of power inspection robot based on multi-sensor fusion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109506641A (en) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 The pose loss detection and relocation system and robot of mobile robot
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method



Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 518110 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Yinxing Intelligent Group Co.,Ltd.

Address before: 518110 Building A1, Yinxing Hi-tech Industrial Park, Guanlan Street Sightseeing Road, Longhua District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.