CN115077467A - Attitude estimation method and device for cleaning robot and cleaning robot - Google Patents

Attitude estimation method and device for cleaning robot and cleaning robot

Info

Publication number
CN115077467A
CN115077467A
Authority
CN
China
Prior art keywords
vertical
constraint
vertical line
point cloud
cleaning robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210655620.3A
Other languages
Chinese (zh)
Other versions
CN115077467B (en)
Inventor
韩松杉
盛腾飞
杨盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202210655620.3A
Publication of CN115077467A
Application granted
Publication of CN115077467B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C1/00 Measuring angles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P3/00 Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00 Energy generation through renewable energy sources
    • Y02E10/50 Photovoltaic [PV] energy

Abstract

The embodiments of the invention provide a posture estimation method and device for a cleaning robot, and a cleaning robot. The method comprises: performing vertical line detection on image data acquired by the cleaning robot; constructing vertical line constraints according to the detected vertical lines; and estimating the posture of the cleaning robot according to the vertical line constraints and the angular velocity data of the cleaning robot. This solves the problem in the related art that the global pitch angle and roll angle are estimated mainly by an accelerometer, with large estimation errors. By performing vertical line detection on image data collected by the camera of the cleaning robot to construct vertical line constraints, and fusing the angular velocity data of a gyroscope to estimate the posture of the sweeper, high-precision, highly robust posture estimation of the cleaning robot that does not drift over time can be achieved without relying on the accelerometer.

Description

Attitude estimation method and device for cleaning robot and cleaning robot
Technical Field
The embodiments of the invention relate to the field of communications, and in particular to a posture estimation method and device for a cleaning robot, and to a cleaning robot.
Background
High-precision body posture estimation is a prerequisite for mapping, planning and obstacle avoidance in existing autonomous-navigation sweepers. The body posture is mainly described by the yaw, pitch and roll angles with respect to the world (global) coordinate system. In existing autonomous-navigation sweepers, the yaw angle is mainly estimated using information from a single-line lidar, a camera and a gyroscope, whereas the global pitch and roll angles are estimated mainly by the accelerometer. The accelerometer readings, however, are affected by factory calibration, offset, noise, instability, aging and so on, so the global pitch and roll angle estimates have large errors and drift over time. In particular, for cost reasons sweepers adopt ultra-low-cost accelerometers with poor precision and stability, which produces even larger global pitch and roll angle estimation errors.
No effective solution has yet been proposed for the problem in the related art that the global pitch angle and roll angle are estimated mainly by an accelerometer, resulting in large estimation errors.
Disclosure of Invention
The embodiments of the invention provide a posture estimation method and device for a cleaning robot, and a cleaning robot, which at least solve the problem in the related art that the global pitch angle and roll angle are estimated mainly by an accelerometer, resulting in large estimation errors.
According to an embodiment of the present invention, there is provided a posture estimation method of a cleaning robot, the method including:
performing vertical line detection on image data acquired by the cleaning robot;
constructing vertical line constraints according to the detected vertical lines;
and performing attitude estimation on the cleaning robot according to the vertical line constraints and the angular velocity data of the cleaning robot.
There is also provided, according to another embodiment of the present invention, an attitude estimation device of a cleaning robot, including:
the vertical line detection module is used for performing vertical line detection on the image data acquired by the cleaning robot;
the construction module is used for constructing vertical line constraints according to the detected vertical lines;
and the attitude estimation module is used for performing attitude estimation on the cleaning robot according to the vertical line constraints and the angular velocity data of the cleaning robot.
According to another embodiment of the invention, there is also provided a cleaning robot comprising at least the apparatus described above.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor configured to run the computer program to perform the steps of any of the method embodiments described above.
According to the embodiments of the invention, vertical line detection is performed on the image data acquired by the cleaning robot; vertical line constraints are constructed according to the detected vertical lines; and the attitude of the cleaning robot is estimated according to the vertical line constraints and the angular velocity data of the cleaning robot. This solves the problem in the related art that the global pitch angle and roll angle are estimated mainly by an accelerometer, with large estimation errors. Vertical line detection is performed on image data collected by the camera of the cleaning robot to construct vertical line constraints, and the angular velocity data of a gyroscope is fused to estimate the attitude of the sweeper, so that high-precision, highly robust pose estimation of the cleaning robot that does not drift over time can be achieved without relying on the accelerometer.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a posture estimation method of a cleaning robot according to an embodiment of the present invention;
fig. 2 is a flowchart of a posture estimation method of a cleaning robot according to an embodiment of the present invention;
FIG. 3 is a first flowchart of an attitude estimation method of a cleaning robot according to an alternative embodiment of the present invention;
FIG. 4 is a second flowchart of an attitude estimation method of a cleaning robot according to an alternative embodiment of the present invention;
FIG. 5 is a flow chart of body pose estimation according to an embodiment of the present invention;
FIG. 6 is a flow chart of vertical line detection according to an embodiment of the present invention;
fig. 7 is a block diagram of an attitude estimation device of a cleaning robot according to the present embodiment.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present invention may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal for the attitude estimation method of the cleaning robot according to the embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program and modules of application software, such as the computer program corresponding to the attitude estimation method of the cleaning robot in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In the present embodiment, a method for estimating an attitude of a cleaning robot operating in the mobile terminal or the network architecture is provided, and fig. 2 is a flowchart of the method for estimating an attitude of a cleaning robot according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, performing vertical line detection on image data collected by the cleaning robot;
step S204, constructing vertical line constraints according to the detected vertical lines;
and step S206, performing attitude estimation on the cleaning robot according to the vertical line constraints and the angular velocity data of the cleaning robot.
Through the above steps S202 to S206, vertical line detection is performed on the image data collected by the cleaning robot; vertical line constraints are constructed according to the detected vertical lines; and the attitude of the cleaning robot is estimated according to the vertical line constraints and the angular velocity data of the cleaning robot. This solves the problem in the related art that the global pitch angle and roll angle are estimated mainly by an accelerometer, with large estimation errors: vertical line detection is performed on image data collected by the camera of the cleaning robot to construct vertical line constraints, and the angular velocity data of a gyroscope is fused to estimate the attitude of the sweeper, so that high-precision, highly robust, drift-free pose estimation of the cleaning robot is achieved without relying on the accelerometer.
Fig. 3 is a first flowchart of a method for estimating a pose of a cleaning robot according to an alternative embodiment of the present invention, and as shown in fig. 3, the step S202 may specifically include:
s302, determining a plurality of groups of ROI areas from image data based on a vertical line detection and segmentation algorithm;
s304, determining 3D point cloud coordinates of a plurality of groups of ROI areas;
On the one hand, if the camera is not provided with a 3D point cloud sensor, 2D feature points are extracted from the plurality of groups of ROI regions of the previous frame image and of the current frame image. For each group of ROI regions, the 2D feature points of the previous frame image and the current frame image are matched to obtain the 2D coordinates of the matching points, and the pose increment between the previous frame image and the current frame image is determined. Specifically, a homography matrix is determined from the 2D coordinates of the matching points using the epipolar constraint of the camera; the rotation matrix and the scale-free displacement vector between the previous frame image and the current frame image are obtained by decomposing the homography matrix; the absolute position increment between the previous frame image and the current frame image is obtained from a sensor; the scaled displacement vector is determined from the scale-free displacement vector and the absolute position increment; and the pose increment is determined from the rotation matrix and the scaled displacement vector. Then, for each group of ROI regions, the pose increment and the 2D coordinates of the matching points are triangulated to obtain the 3D point cloud coordinates of the matching points in the camera coordinate system.
On the other hand, if the camera is provided with a 3D point cloud sensor, a 3D point cloud collected by the 3D point cloud sensor is acquired; the camera and the 3D point cloud sensor are registered to obtain the correspondence between each pixel in the image data and each point cloud coordinate in the 3D point cloud; and the corresponding 3D point cloud coordinates are extracted according to the pixel range of the ROI region in the image data.
S306, determining a vertical line equation of the 3D point cloud coordinates of the multiple sets of ROI areas.
For each group of point clouds in the 3D point cloud coordinates, outlier rejection and inlier screening are performed to obtain the groups of vertical line equation parameters corresponding to the maximum number of inliers. A length check is then performed on the point clouds corresponding to these groups of vertical line equation parameters: for each such group of point clouds, the two farthest points among the screened inliers are selected, and the maximum length of the vertical line is calculated from these two points; if the maximum length is greater than or equal to a preset length threshold, the vertical line passes the length check, and if it is smaller than the preset length threshold, the vertical line fails the length check. Finally, for each group of point clouds that passes the length check, the screened inliers are used with the vertical line equation parameters as initial values, a Huber robust kernel function is added, and the optimal vertical line equation is obtained by least-squares optimization, yielding a plurality of groups of vertical line equations.
In this embodiment, step S204 may specifically include: constructing one vertical line constraint from each vertical line equation in the plurality of groups of vertical line equations together with the current rotation attitude, to obtain a plurality of groups of vertical line constraints. Further, the following steps are performed for each group of vertical line equations in the plurality of groups of vertical line equations (the equation being processed is referred to as the current vertical line equation): the vertical line vector in the world coordinate system is calculated based on the current rotation attitude; the vertical line equation estimate in the current camera coordinate system is determined according to the vertical line vector and the current vertical line equation; and the vertical line constraint is determined according to the difference between the vertical line equation estimate and the current vertical line equation.
Fig. 4 is a second flowchart of a method for estimating a pose of a cleaning robot according to an alternative embodiment of the present invention, and as shown in fig. 4, the step S206 may specifically include:
s402, constructing gyroscope constraint according to the angular velocity data, specifically, determining a current rotation attitude predicted value based on the rotation attitude at the last moment and the current angular velocity data of the gyroscope, and determining a difference value between the current rotation attitude predicted value and the current rotation attitude to obtain the gyroscope constraint.
And S404, carrying out nonlinear optimization according to the vertical line constraint and the gyroscope constraint to obtain an attitude estimation value of the cleaning robot.
Further, constructing a visual odometer constraint, specifically, firstly, determining a first attitude increment between two adjacent frames of images by using epipolar constraint according to a feature point matching result of a previous frame of image and a current frame of image; then, determining a first posture estimation value based on the visual odometer based on the rotary posture at the last moment and the first posture increment; determining the difference value between the first attitude estimation value and the current rotation attitude as a visual odometer constraint and/or constructing a point cloud registration constraint, specifically, performing point cloud registration according to the 3D point cloud at the last moment and the 3D point cloud at the current moment based on a 3D point cloud sensor to obtain a second attitude increment between two adjacent frames; then, a second attitude estimation value based on point cloud registration is determined based on the rotation attitude at the last moment and the second attitude increment; finally, determining the difference value between the second attitude estimation value and the current rotation attitude as point cloud registration constraint and/or constructing vertical line consistency constraint, specifically, determining the same vertical line in different image frames according to semantic recognition, and constructing the vertical line consistency constraint based on the same vertical line; and carrying out nonlinear optimization with the vertical constraint and the gyroscope constraint according to one of the visual odometer constraint, the point cloud registration constraint and the vertical consistency constraint, so as to obtain an attitude estimation value of the cleaning robot.
Fig. 5 is a flowchart of body pose estimation according to an embodiment of the present invention, as shown in fig. 5, including:
step S501, calibrating internal parameters and external parameters off line, and calibrating the external parameters and the internal parameters of each sensor of the used sweeper, wherein the following parameters are mainly calibrated:
(1) The camera intrinsic parameter matrix and distortion model. The corresponding camera intrinsic matrix and distortion model differ according to the selected camera mathematical model. Taking the pinhole model as an example, the camera intrinsic matrix K includes the focal lengths f_x, f_y, the principal point offsets x_0, y_0, the axis skew s, and so on:
K = [ f_x  s    x_0 ]
    [ 0    f_y  y_0 ]
    [ 0    0    1   ]
(2) Gyroscope intrinsic parameters, including: 3-axis or single-axis offset, scale factor, etc.
(3) Extrinsic parameters between the camera coordinate system and the gyroscope coordinate system, including: rotation matrix and translation.
(4) Extrinsic parameters between the gyroscope coordinate system and the body coordinate system, including: rotation matrix and translation.
(5) Accelerometer intrinsic parameters, for a sweeper equipped with its own accelerometer, including: 3-axis or single-axis offset, scale factor, etc.
(6) Extrinsic parameters between the accelerometer and gyroscope coordinate systems, including: rotation matrix and translation.
(7) The point cloud intrinsic parameter model of the TOF sensor, for a sweeper equipped with a TOF sensor.
(8) Extrinsic parameters between the TOF sensor coordinate system and the gyroscope coordinate system, including: rotation matrix and translation.
Step S502, vertical line detection. The detection range of vertical lines in the embodiment of the present invention includes but is not limited to: vertical lines on both sides of a door frame, vertical lines on both sides of table legs and glass, vertical lines on both sides of a wall, and the like. Fig. 6 is a flowchart of vertical line detection according to an embodiment of the present invention, as shown in fig. 6, including:
s601, obtaining a Region Of Interest (ROI for short) through detection Of a camera 2D image;
the ROI region may specifically be detected using deep learning based on a detection and segmentation algorithm of the camera image. Compared with the traditional method, the deep learning method has obvious advantages in the image fields of recognition rate, accuracy rate, semantics and the like. It should be noted that the present embodiment does not limit the specific implementation method of the detection and segmentation algorithm based on the vertical line in the camera image. The detection results of the algorithm are multiple sets of ROI areas or regions on the image plane.
S602, calculating 3D point cloud coordinates of a plurality of groups of ROI areas;
the method is divided into two cases according to whether the point cloud sensor is arranged or not.
If the camera is not provided with a 3D point cloud sensor: 2D feature points are extracted from the ROI regions of the previous frame image and from the ROI regions of the current frame image; the feature points of the previous frame and the current frame are matched to obtain the 2D coordinates of the matching points, using, but not limited to, an optical flow method or descriptor matching; based on the 2D feature point matching result, the homography matrix is solved using the epipolar constraint of the camera, with RANSAC used to suppress outliers and screen inliers; the rotation matrix between the two frames and the scale-free displacement vector are obtained by decomposing the homography matrix; the scale of the translation is obtained from other sensors or algorithms of the sweeper, for example the code wheel or wheel speedometer of the left and right wheels of the sweeper, which gives the absolute position increment between the two frames; the scaled displacement vector is determined from the scale-free displacement vector and the absolute position increment, and the pose increment between the two adjacent image frames is determined from the rotation matrix and the scaled displacement vector; finally, triangulation is performed using the pose increment of the two adjacent frames and the 2D coordinates of the matching points to obtain the 3D coordinates of all inliers in the camera coordinate system.
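As an illustration of this no-point-cloud-sensor branch, the following is a minimal sketch using OpenCV and NumPy. It assumes the matched 2D points, the intrinsic matrix K and a wheel-odometry position increment are already available; the function name, the fixed RANSAC threshold and the naive selection of the first homography-decomposition candidate are illustrative choices, not part of the patent.

```python
import cv2
import numpy as np

def pose_increment_and_triangulate(pts_prev, pts_cur, K, odom_delta_p):
    """Inter-frame pose increment and ROI 3D points for the no-3D-sensor branch.

    pts_prev, pts_cur: Nx2 arrays of matched 2D feature points (pixels).
    K:                 3x3 camera intrinsic matrix.
    odom_delta_p:      absolute position increment between the two frames,
                       e.g. from the code wheel / wheel speedometer.
    """
    # Homography from the matched points, with RANSAC outlier suppression.
    H, inlier_mask = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, 3.0)

    # Decompose into candidate rotations and scale-free translations.
    _, rotations, translations, _ = cv2.decomposeHomographyMat(H, K)
    R = rotations[0]                      # candidate disambiguation omitted
    t_noscale = translations[0].ravel()

    # Recover the metric scale from the odometry displacement.
    scale = np.linalg.norm(odom_delta_p) / (np.linalg.norm(t_noscale) + 1e-12)
    t = scale * t_noscale

    # Triangulate the inlier matches into the previous-frame camera coordinates.
    P_prev = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_cur = K @ np.hstack([R, t.reshape(3, 1)])
    inl = inlier_mask.ravel().astype(bool)
    pts_h = cv2.triangulatePoints(P_prev, P_cur,
                                  pts_prev[inl].T.astype(np.float64),
                                  pts_cur[inl].T.astype(np.float64))
    pts_3d = (pts_h[:3] / pts_h[3]).T     # Nx3 ROI point cloud, camera frame
    return R, t, pts_3d
```

In practice the correct (R, t) candidate from the homography decomposition would be selected by a cheirality or plane-normal check, which is omitted in this sketch.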
If the camera is provided with a 3D point cloud sensor, such as a common RGBD camera or TOF sensor: first, the 3D point cloud in the point cloud sensor coordinate system is obtained from the point cloud sensor; second, the camera and the point cloud sensor are registered using their extrinsic parameters to obtain the correspondence between each pixel in the camera image and each point of the point cloud sensor; third, the 3D point cloud coordinates of the corresponding region are extracted according to the pixel range of the ROI in the camera image.
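The 3D-sensor branch amounts to projecting the registered point cloud into the image and keeping the points that fall inside the ROI pixel range. A minimal sketch follows, assuming the camera-ToF extrinsics (R_ct, t_ct) and the intrinsic matrix K come from the offline calibration of step S501; the function and argument names are illustrative.

```python
import numpy as np

def roi_points_from_tof(points_tof, R_ct, t_ct, K, roi):
    """Select the ToF points whose image projections fall inside one ROI.

    points_tof: Nx3 point cloud in the ToF-sensor coordinate system.
    R_ct, t_ct: extrinsics mapping ToF coordinates into camera coordinates.
    K:          3x3 camera intrinsic matrix.
    roi:        (u_min, v_min, u_max, v_max) pixel range of the ROI.
    """
    # Register the ToF points into the camera frame using the extrinsics.
    pts_cam = points_tof @ R_ct.T + t_ct
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]          # keep points in front of the camera

    # Pinhole projection gives the pixel corresponding to each 3D point.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    u_min, v_min, u_max, v_max = roi
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return pts_cam[inside]                           # 3D point cloud coordinates of the ROI
```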
Here, RANSAC is short for random sample consensus. It iteratively estimates the parameters of a mathematical model from a data set that contains outliers. In the RANSAC algorithm, the data consist of "inliers" and "outliers": inliers are the data that make up the model parameters, and outliers are the data that do not fit the model. RANSAC assumes that, given a set of data containing even a small fraction of inliers, there exists a procedure that can estimate a model fitting those inliers.
The model is estimated by iteratively selecting random subsets of the data until a model deemed good enough is found. The concrete implementation steps are as follows, with a compact sketch given after the list:
1. Select the minimal data set from which the model can be estimated;
2. Compute the data model from this data set;
3. Substitute all data into the model and count the number of "inliers" (data that fit the model estimated in the current iteration within a certain error tolerance);
4. Compare the number of inliers of the current model with that of the best model found so far, and record the model parameters and inlier count of the model with the most inliers;
5. Repeat the above steps until the iteration limit is reached or the current model is good enough (the number of inliers exceeds a certain threshold).
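A compact sketch of this loop, specialized to fitting a 3D line through an ROI point cloud (the minimal sample is two points), is shown below; the iteration count and distance threshold are placeholder values.

```python
import numpy as np

def ransac_line_fit(points, n_iters=200, dist_thresh=0.01):
    """RANSAC fit of a 3D line: returns (point on line, unit direction, inlier mask)."""
    rng = np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        # Step 1: minimal data set -- two distinct points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        p0, d = points[i], points[j] - points[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        d = d / np.linalg.norm(d)
        # Steps 2-3: point-to-line distances decide which points are inliers.
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < dist_thresh
        # Step 4: keep the model parameters with the largest inlier count.
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, d)
    # Step 5 (stopping early once the model is "good enough") is omitted here.
    return best_model[0], best_model[1], best_inliers
```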
S603, solving a vertical line equation of a plurality of groups of point clouds;
and for each group of point clouds, performing outer point elimination and inner point screening on the point clouds extracted in the last step through the RANSAC process, and obtaining the vertical line equation parameters corresponding to the maximum number of inner points.
A length check is then performed on each group of point clouds: the two farthest points among the inliers screened in the previous step are selected and the maximum length of the vertical line is calculated; if this distance is smaller than a length threshold, the vertical line is considered too short and is not used in subsequent operations.
For each group of point clouds that passes the check, the optimal vertical line equation L_opt_i is optimized by least squares, using the inliers screened in the first step, taking the vertical line equation parameters corresponding to the maximum number of inliers as the initial value, and adding a Huber robust kernel function. The plane equation is expressed as follows:
A_i*x + B_i*y + C_i*z + D_i = 0, where A_i, B_i, C_i and D_i are the corresponding vertical line equation parameters of the i-th group.
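The length check and the Huber-weighted refinement could look like the following sketch, which reuses the RANSAC inliers and initial line from above; SciPy's built-in "huber" loss stands in for the Huber robust kernel, and the 6-parameter point-plus-direction form is an illustrative (slightly redundant) parameterization rather than the A_i, B_i, C_i, D_i form used above.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_vertical_line(inlier_pts, p0, d, min_length=0.3):
    """Length check plus Huber-weighted least-squares refinement of one line."""
    # Length check: distance between the two farthest inliers along the line direction.
    proj = inlier_pts @ d
    if proj.max() - proj.min() < min_length:
        return None                                    # vertical line too short, discard

    def residuals(x):
        p, direc = x[:3], x[3:]
        direc = direc / np.linalg.norm(direc)
        diff = inlier_pts - p
        # Point-to-line distance of every inlier.
        return np.linalg.norm(diff - np.outer(diff @ direc, direc), axis=1)

    # RANSAC result as the initial value.
    x0 = np.hstack([p0, d])
    sol = least_squares(residuals, x0, loss="huber", f_scale=0.01)
    p_opt = sol.x[:3]
    d_opt = sol.x[3:] / np.linalg.norm(sol.x[3:])
    return p_opt, d_opt                                # optimal vertical line L_opt_i
```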
Step S503, constructing vertical line constraints based on the multiple groups of vertical line equations;
In the first step, the current-time rotation attitude acquired by the sensor is obtained, denoted R_cur; the specific representation of R_cur is not limited to a rotation matrix, a quaternion or Euler angles.
In the second step, for each group of vertical line equations, a vertical line constraint e_i = f_i(R_cur, L_opt_i) can be constructed with the current rotation attitude R_cur. The vertical line constraint construction comprises the following steps:
(1) Obtain the vertical line estimate. Based on the current rotation attitude R_cur, the vertical line vector in the world coordinate system, L_global = [0, 0, 1], and the vertical line equation estimate in the current camera coordinate system, L_cur = R_cur * L_global, can be calculated.
(2) Construct the i-th vertical line constraint. Based on each group of point clouds, the vertical line constraint e_i = (L_cur Θ L_opt_i) / Σ_i is calculated, where Θ denotes the difference between two vectors (not limited to the Euclidean distance, modulus length, cross product, etc.) and Σ_i denotes the uncertainty of this group of vertical line point clouds, positively correlated with the line length, number of points, distance from the camera, number of observations, etc.
(3) Construct N vertical line constraints. The N groups of point clouds are traversed to construct N groups of vertical line constraints.
It should be noted that the above manner of constructing the vertical line constraints is only an exemplary illustration; other similar implementations are also possible and are not described one by one here. A sketch of one possible construction is given below.
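For instance, taking the cross product as the difference operator Θ and a scalar weight for Σ_i, one vertical line constraint could be computed as follows; both choices are assumptions consistent with, but not mandated by, the description above.

```python
import numpy as np

def vertical_constraint(R_cur, L_opt_i, sigma_i):
    """One vertical line constraint e_i = (L_cur Θ L_opt_i) / Σ_i.

    R_cur:   3x3 rotation used as in the text, i.e. L_cur = R_cur * L_global.
    L_opt_i: unit direction of the i-th detected vertical line (camera frame).
    sigma_i: uncertainty of this line's point cloud (longer, denser, closer and
             more often observed lines get a smaller sigma).
    """
    L_global = np.array([0.0, 0.0, 1.0])        # vertical direction in the world frame
    L_cur = R_cur @ L_global                    # estimated vertical in the camera frame
    e_i = np.cross(L_cur, L_opt_i) / sigma_i    # cross product chosen as the operator Θ
    return e_i
```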
Step S504, constructing other posture constraints;
(1) Construct the gyroscope constraint. The gyroscope can predict the current rotation attitude R_predict based on the rotation attitude R_prev at the previous moment and the gyroscope reading w_cur; the difference between this prediction and the current rotation attitude R_cur gives the gyroscope constraint.
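A sketch of this gyroscope constraint, assuming a known sampling interval dt, bias-corrected readings and rotation-vector integration (SciPy's Rotation class handles the algebra):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def gyroscope_constraint(R_prev, R_cur, w_cur, dt):
    """Residual between the gyro-predicted rotation and the current estimate.

    R_prev, R_cur: scipy Rotation objects (attitude at the previous / current time).
    w_cur:         3-vector gyroscope reading (rad/s), bias already removed.
    dt:            time between the two attitude samples (s).
    """
    # Predict the current attitude by integrating the angular velocity over dt.
    R_predict = R_prev * Rotation.from_rotvec(np.asarray(w_cur) * dt)
    # Express the difference between prediction and estimate as a rotation vector.
    return (R_predict.inv() * R_cur).as_rotvec()
```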
(2) Construct the visual odometer constraint. The attitude increment ΔR_visual between the two frames is obtained using the epipolar constraint from the matching of feature points between the image at the previous moment and the image at the current moment. Based on the rotation attitude R_prev at the previous moment and the attitude increment ΔR_visual, the attitude estimate R_visual based on the visual odometer can be obtained. The difference between this estimate R_visual and the current rotation attitude R_cur gives the visual odometer constraint.
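One way to realize this constraint is to recover the inter-frame rotation from the matched feature points via the essential matrix, as sketched below with OpenCV; the RANSAC threshold and the use of the essential matrix (rather than another epipolar formulation) are illustrative choices.

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def visual_odometry_constraint(pts_prev, pts_cur, K, R_prev, R_cur):
    """Residual between the visual-odometry attitude estimate and the current attitude."""
    # Epipolar constraint via the essential matrix, with RANSAC outlier rejection.
    E, mask = cv2.findEssentialMat(pts_prev, pts_cur, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, dR, _, _ = cv2.recoverPose(E, pts_prev, pts_cur, K, mask=mask)
    delta_R_visual = Rotation.from_matrix(dR)      # attitude increment between the frames
    R_visual = R_prev * delta_R_visual             # visual-odometry attitude estimate
    return (R_visual.inv() * R_cur).as_rotvec()    # visual odometer constraint
```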
(3) Construct the point cloud registration constraint. Based on the point cloud sensor, the attitude increment ΔR_pcl between two adjacent frames can be obtained from the 3D point cloud at the previous moment and the 3D point cloud at the current moment using a point cloud registration method such as ICP (Iterative Closest Point). Based on the rotation attitude R_prev at the previous moment and the attitude increment ΔR_pcl, the attitude estimate R_pcl based on point cloud registration can be obtained. The difference between this estimate R_pcl and the current rotation attitude R_cur gives the point cloud registration constraint.
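A sketch using Open3D's point-to-point ICP as the registration method; the library choice, correspondence distance and identity initialization are assumptions, not requirements of the embodiment.

```python
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

def point_cloud_registration_constraint(pts_prev, pts_cur, R_prev, R_cur,
                                         max_corr_dist=0.05):
    """Residual between the ICP-based attitude estimate and the current attitude."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.asarray(pts_prev))
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(np.asarray(pts_cur))
    # Point-to-point ICP gives the transform between the two adjacent frames.
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    delta_R_pcl = Rotation.from_matrix(result.transformation[:3, :3])
    R_pcl = R_prev * delta_R_pcl                   # point-cloud-registration estimate
    return (R_pcl.inv() * R_cur).as_rotvec()       # point cloud registration constraint
```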
(4) Construct the vertical line consistency constraint. If, from the recognized semantic information, certain vertical line segments in different image frames can be identified as the same vertical line, a vertical line consistency constraint can be constructed. For example, according to the semantic information, the door frame observed in image frame P_1 and the door frame observed in image frame P_2 belong to the same door frame. The method specifically comprises the following steps, with a sketch given after the list:
Based on the pose of image frame P_1, the point cloud on the vertical line of the door frame in image frame P_1 is converted from the camera coordinate system to the world coordinate system, and the point cloud coordinates are projected onto the ground, giving [x_1, y_1];
Based on the pose of image frame P_2, the point cloud on the vertical line of the door frame in image frame P_2 is converted from the camera coordinate system to the world coordinate system, and the point cloud coordinates are projected onto the ground, giving [x_2, y_2];
Since the two frames observe the same door frame, the ground projections of the point clouds on the two door-frame vertical lines in the world coordinate system should be the same point, i.e. [x_1, y_1] should theoretically equal [x_2, y_2]. A vertical line consistency constraint can therefore be constructed: [x_1, y_1] Θ [x_2, y_2].
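A minimal sketch of this consistency residual, assuming each frame's camera-to-world pose (R_k, t_k) is available and the line's ground projection is summarized by the mean of the projected points; these are illustrative simplifications.

```python
import numpy as np

def vertical_consistency_constraint(pts_cam_1, R_1, t_1, pts_cam_2, R_2, t_2):
    """Ground-plane projections of the same door-frame vertical line in two frames.

    pts_cam_k: Nx3 points on the door-frame vertical line in frame P_k's camera frame.
    R_k, t_k:  camera-to-world rotation and translation (pose) of image frame P_k.
    """
    world_1 = pts_cam_1 @ R_1.T + t_1          # camera -> world coordinates
    world_2 = pts_cam_2 @ R_2.T + t_2
    xy_1 = world_1[:, :2].mean(axis=0)         # ground projection [x_1, y_1]
    xy_2 = world_2[:, :2].mean(axis=0)         # ground projection [x_2, y_2]
    return xy_1 - xy_2                         # ~0 when both frames see the same door frame
```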
Step S505, solving the attitude through nonlinear optimization.
Nonlinear optimization is carried out based on the above attitude constraints, and the optimal value of the attitude is solved. This embodiment does not limit the specific nonlinear optimization method, which may be, for example, the Gauss-Newton method, Dogleg, Levenberg-Marquardt (LM), etc.
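As an illustration only, the constraints above can be stacked into a single robust least-squares problem over the current attitude, here parameterized as a rotation vector and solved with SciPy's trust-region solver standing in for the Gauss-Newton/Dogleg/LM methods mentioned above; the callable interface and weights are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def estimate_attitude(constraint_fns, rotvec_init):
    """Solve for the attitude minimizing all stacked constraint residuals.

    constraint_fns: callables mapping a scipy Rotation (candidate R_cur) to a residual
                    vector: vertical line, gyroscope, visual odometer, ... constraints.
    rotvec_init:    initial attitude guess as a rotation vector (e.g. the gyro prediction).
    """
    def residuals(rotvec):
        R_cur = Rotation.from_rotvec(rotvec)
        return np.hstack([np.atleast_1d(f(R_cur)) for f in constraint_fns])

    sol = least_squares(residuals, rotvec_init, loss="huber", f_scale=0.05)
    return Rotation.from_rotvec(sol.x)             # optimal attitude estimate
```

The vertical line constraints above can, for example, be wrapped as `lambda R: vertical_constraint(R.as_matrix(), L_opt_i, sigma_i)` before being passed in; in a real system each residual would carry its own weight (such as Σ_i) and the optimization would typically run inside a sliding-window or filtering framework.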
There is also provided an attitude estimation device of a cleaning robot according to another embodiment of the present invention, and fig. 7 is a block diagram of the attitude estimation device of the cleaning robot according to the embodiment of the present invention, as shown in fig. 7, the device including:
a vertical line detection module 72 for performing vertical line detection on image data collected by the cleaning robot;
a construction module 74 for constructing vertical line constraints based on the detected vertical lines;
an attitude estimation module 76 for performing attitude estimation of the cleaning robot based on the vertical constraints and angular velocity data of the cleaning robot.
Optionally, the vertical line detection module 72 includes:
a first determining submodule, configured to determine multiple sets of ROI regions from the image data based on a vertical line detection and segmentation algorithm;
a second determining submodule for determining 3D point cloud coordinates of the plurality of sets of ROI areas;
and the third determining submodule is used for determining a vertical line equation of the 3D point cloud coordinates of the multiple groups of ROI areas.
Optionally, the second determining sub-module is further configured to, if the camera is not provided with a 3D point cloud sensor, extract 2D feature points from the multiple sets of ROI regions of the previous image and the current image respectively; aiming at each group of ROI (region of interest), matching 2D feature points of the previous frame image and the current frame image to obtain 2D coordinates of matching points, and determining pose increments of the previous frame image and the current frame image; and triangularizing the pose increment and the 2D coordinates of the matching points aiming at each group of ROI areas to obtain the 3D point cloud coordinates of the matching points under a camera coordinate system.
Optionally, the second determining submodule is further configured to determine a homography matrix by using epipolar constraint of a camera based on the 2D coordinates of the matching points; obtaining a rotation matrix and a displacement vector without scale between the previous frame image and the current frame image according to the homography matrix decomposition; acquiring an absolute position increment between the previous frame image and the current frame image through a sensor; determining a displacement vector with dimensions according to the displacement vector without dimensions and the absolute position increment; and determining the pose increment according to the rotation matrix and the displacement vector with the scale.
Optionally, the second determining submodule is further configured to acquire a 3D point cloud acquired based on the 3D point cloud sensor if the camera is provided with the 3D point cloud sensor; registering the camera and the 3D point cloud sensor to obtain the corresponding relation between each pixel in the image data and each point cloud coordinate in the 3D point cloud; and acquiring corresponding 3D point cloud coordinates according to the pixel range of the ROI in the image data.
Optionally, the third determining submodule is further configured to perform outlier rejection and inlier screening on each group of point clouds in the 3D point cloud coordinates to obtain multiple sets of vertical line equation parameters corresponding to the maximum inlier number; respectively carrying out length verification on point clouds corresponding to the multiple groups of vertical line equation parameters; and for each group of point clouds passing through the length verification, utilizing the screened interior points, taking the vertical line equation parameters as initial values, adding a Huber robust kernel function, and optimizing an optimal vertical line equation through least square to obtain a plurality of groups of vertical line equations.
Optionally, the third determining submodule is further configured to select, for each group of point clouds in the point clouds corresponding to the multiple groups of vertical line equation parameters, two points with the farthest distance by using the screened interior points; calculating the maximum distance of the vertical line according to the two points; if the maximum distance is larger than or equal to a preset length threshold value, determining that the vertical line passes length verification; and if the maximum distance is smaller than the preset length threshold value, determining that the vertical line does not pass the length check.
Optionally, the building block 74 includes:
and the first construction submodule is used for constructing one vertical line constraint for each vertical line equation in the multiple groups of vertical line equations together with the current rotation attitude, so as to obtain multiple groups of vertical line constraints.
Optionally, the first construction submodule is further configured to perform the following steps on each vertical line equation in the multiple groups of vertical line equations to obtain the multiple groups of vertical line constraints, wherein the vertical line equation being processed is referred to as the current vertical line equation: calculating the vertical line vector in the world coordinate system based on the current rotation attitude; determining the vertical line equation estimate in the current camera coordinate system according to the vertical line vector and the current vertical line equation; and determining the vertical line constraint according to the difference between the vertical line equation estimate and the current vertical line equation.
Optionally, the posture estimation module 76 includes:
a second construction submodule for constructing a gyroscope constraint based on the angular rate data;
and the nonlinear optimization submodule is used for carrying out nonlinear optimization according to the vertical constraint and the gyroscope constraint to obtain an attitude estimation value of the cleaning robot.
Optionally, the second building submodule is further configured to determine a predicted value of the current rotation attitude based on the rotation attitude at the previous time and the current angular velocity data of the gyroscope; and determining the difference value between the predicted value of the current rotation attitude and the current rotation attitude to obtain the constraint of the gyroscope.
Optionally, the nonlinear optimization submodule is further configured to construct a visual odometer constraint, and/or a point cloud registration constraint, and/or a vertical line consistency constraint; and carrying out nonlinear optimization with the vertical constraint and the gyroscope constraint according to one of the visual odometer constraint, the point cloud registration constraint and the vertical consistency constraint to obtain an attitude estimation value of the cleaning robot.
Optionally, the nonlinear optimization submodule is further configured to determine, according to a feature point matching result of the previous frame image and the current frame image, a first attitude increment between two adjacent frames of images by using epipolar constraint; determining a first attitude estimate based on a visual odometer based on the rotational attitude at the previous time and the first attitude increment; determining a difference between the first pose estimate and a current rotational pose as the visual odometry constraint;
based on the 3D point cloud sensor, performing point cloud registration according to the 3D point cloud at the last moment and the 3D point cloud at the current moment to obtain a second attitude increment between two adjacent frames; determining a second attitude estimation value based on point cloud registration based on the rotation attitude at the last moment and the second attitude increment; determining a difference value between the second attitude estimation value and the current rotation attitude as the point cloud registration constraint;
and determining the same vertical line in different image frames according to semantic recognition, and constructing the vertical line consistency constraint based on the same vertical line.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps in any of the above method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented in program code executable by the computing devices, so that they may be stored in a storage device and executed by the computing devices; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (17)

1. A method of attitude estimation of a cleaning robot, the method comprising:
performing vertical line detection on image data acquired by the cleaning robot;
constructing a vertical line constraint according to the detected vertical line;
and performing attitude estimation on the cleaning robot according to the vertical line constraint and angular velocity data of the cleaning robot.
2. The method of claim 1, wherein the vertical line detection of the image data collected by the cleaning robot comprises:
determining a plurality of groups of ROI (region of interest) areas from the image data based on a vertical line detection and segmentation algorithm;
determining 3D point cloud coordinates of the plurality of sets of ROI areas;
and determining a vertical line equation of the 3D point cloud coordinates of the plurality of groups of ROI areas.
3. The method of claim 2, wherein determining the 3D point cloud coordinates for the plurality of sets of ROI regions comprises:
if the camera is not provided with a 3D point cloud sensor, extracting 2D feature points from the multiple groups of ROI areas of the previous frame image and the current frame image respectively;
aiming at each group of ROI (region of interest), matching 2D feature points of the previous frame image and the current frame image to obtain 2D coordinates of matching points, and determining pose increments of the previous frame image and the current frame image;
and triangularizing the pose increment and the 2D coordinates of the matching points aiming at each group of ROI areas to obtain the 3D point cloud coordinates of the matching points under a camera coordinate system.
4. The method of claim 3, wherein determining pose increments for the previous frame image and the current frame image comprises:
determining a homography matrix by using epipolar constraint of a camera based on the 2D coordinates of the matching points;
obtaining a rotation matrix and a displacement vector without scale between the previous frame image and the current frame image according to the homography matrix decomposition;
acquiring an absolute position increment between the previous frame image and the current frame image through a sensor;
determining a displacement vector with dimensions according to the displacement vector without dimensions and the absolute position increment;
and determining the pose increment according to the rotation matrix and the displacement vector with the scale.
5. The method of claim 2, wherein determining the 3D point cloud coordinates for the plurality of sets of ROI regions comprises:
if the camera is provided with a 3D point cloud sensor, acquiring a 3D point cloud acquired based on the 3D point cloud sensor;
registering the camera and the 3D point cloud sensor to obtain the corresponding relation between each pixel in the image data and each point cloud coordinate in the 3D point cloud;
and acquiring corresponding 3D point cloud coordinates according to the pixel range of the ROI in the image data.
6. The method of claim 2, wherein determining a vertical equation for the 3D point cloud coordinates for the plurality of sets of ROI regions comprises:
for each group of point clouds in the 3D point cloud coordinates, carrying out exterior point elimination and interior point screening to obtain a plurality of groups of vertical line equation parameters corresponding to the maximum interior point number;
respectively carrying out length verification on the point clouds corresponding to the multiple groups of vertical line equation parameters;
and for each group of point clouds passing through the length verification, utilizing the screened interior points, taking the vertical line equation parameters as initial values, adding a Huber robust kernel function, and optimizing an optimal vertical line equation through least square to obtain a plurality of groups of vertical line equations.
7. The method of claim 6, wherein the length check of the point clouds corresponding to the plurality of sets of vertical line equation parameters respectively comprises:
selecting two points with the farthest distance by utilizing the screened interior points for each group of point clouds in the point clouds corresponding to the multiple groups of vertical line equation parameters;
calculating the maximum distance of the vertical line according to the two points;
if the maximum distance is larger than or equal to a preset length threshold value, determining that the vertical line passes length verification;
and if the maximum distance is smaller than the preset length threshold value, determining that the vertical line does not pass the length check.
8. The method of claim 6, wherein constructing the vertical line constraint according to the detected vertical line comprises:
and for each group of vertical line equations in the multiple groups of vertical line equations, constructing a vertical line constraint with the current rotation attitude to obtain multiple groups of vertical line constraints.
9. The method of claim 8, wherein for each of the plurality of vertical equations, constructing one vertical constraint with the current rotational pose, the obtaining the plurality of vertical constraints comprises:
executing the following steps for each group of vertical line equations in the multiple groups of vertical line equations to obtain the multiple groups of vertical line constraints, wherein the executed vertical line equations are called current vertical line equations:
calculating a vertical vector in a world coordinate system based on the current rotation attitude;
determining a vertical line equation estimation value in a current camera coordinate system according to the vertical line vector and the current vertical line equation;
and determining the vertical constraint according to the difference value between the estimated vertical equation value and the current vertical equation.
10. The method of claim 1, wherein pose estimating the cleaning robot from the vertical constraints and the cleaning robot angular velocity data comprises:
constructing a gyroscope constraint from the angular velocity data;
and carrying out nonlinear optimization according to the vertical constraint and the gyroscope constraint to obtain an attitude estimation value of the cleaning robot.
11. The method of claim 10, wherein constructing a gyroscope constraint from the angular rate data comprises:
determining a current rotation attitude predicted value based on the rotation attitude at the last moment and the current angular velocity data of the gyroscope;
and determining the difference value between the predicted value of the current rotation attitude and the current rotation attitude to obtain the constraint of the gyroscope.
12. The method of claim 10, wherein performing a non-linear optimization based on the vertical constraints and the gyroscope constraints to obtain an attitude estimate for the cleaning robot comprises:
constructing a visual odometer constraint, and/or constructing a point cloud registration constraint, and/or constructing a vertical line consistency constraint;
and carrying out nonlinear optimization with the vertical constraint and the gyroscope constraint according to one of the visual odometer constraint, the point cloud registration constraint and the vertical consistency constraint to obtain an attitude estimation value of the cleaning robot.
13. The method of claim 12,
constructing the visual odometry constraint includes: determining a first attitude increment between two adjacent frames of images by using epipolar constraint according to the feature point matching result of the previous frame of image and the current frame of image; determining a first attitude estimate based on a visual odometer based on the rotational attitude at the previous time and the first attitude increment; determining a difference between the first pose estimate and a current rotational pose as the visual odometry constraint;
constructing a point cloud registration constraint comprises: based on the 3D point cloud sensor, performing point cloud registration according to the 3D point cloud at the last moment and the 3D point cloud at the current moment to obtain a second attitude increment between two adjacent frames; determining a second attitude estimation value based on point cloud registration based on the rotation attitude at the last moment and the second attitude increment; determining a difference value between the second attitude estimation value and the current rotation attitude as the point cloud registration constraint;
constructing a vertical consistency constraint includes: and determining the same vertical line in different image frames according to semantic recognition, and constructing the vertical line consistency constraint based on the same vertical line.
14. An attitude estimation device of a cleaning robot, characterized by comprising:
the vertical line detection module is used for detecting the vertical line of the image data acquired by the cleaning robot;
the construction module is used for constructing a vertical line constraint according to the detected vertical line;
and the attitude estimation module is used for estimating the attitude of the cleaning robot according to the vertical constraint and the angular speed data of the cleaning robot.
15. A cleaning robot, characterized in that it comprises at least a device as claimed in claim 14.
16. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to carry out the method of any one of claims 1 to 13 when executed.
17. An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method of any of claims 1 to 13.
CN202210655620.3A 2022-06-10 2022-06-10 Cleaning robot posture estimation method and device and cleaning robot Active CN115077467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210655620.3A CN115077467B (en) 2022-06-10 2022-06-10 Cleaning robot posture estimation method and device and cleaning robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210655620.3A CN115077467B (en) 2022-06-10 2022-06-10 Cleaning robot posture estimation method and device and cleaning robot

Publications (2)

Publication Number Publication Date
CN115077467A true CN115077467A (en) 2022-09-20
CN115077467B CN115077467B (en) 2023-08-08

Family

ID=83251904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210655620.3A Active CN115077467B (en) 2022-06-10 2022-06-10 Cleaning robot posture estimation method and device and cleaning robot

Country Status (1)

Country Link
CN (1) CN115077467B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170124693A1 (en) * 2015-11-02 2017-05-04 Mitsubishi Electric Research Laboratories, Inc. Pose Estimation using Sensors
CN111928861A (en) * 2020-08-07 2020-11-13 杭州海康威视数字技术股份有限公司 Map construction method and device
CN112729294A (en) * 2021-04-02 2021-04-30 北京科技大学 Pose estimation method and system suitable for vision and inertia fusion of robot
CN113192140A (en) * 2021-05-25 2021-07-30 华中科技大学 Binocular vision inertial positioning method and system based on point-line characteristics
CN113341989A (en) * 2021-06-18 2021-09-03 广州蓝胖子移动科技有限公司 Wheeled mobile robot, control point model establishing method and device and storage medium


Also Published As

Publication number Publication date
CN115077467B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
US9243916B2 (en) Observability-constrained vision-aided inertial navigation
Bloesch et al. Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback
US8259994B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
US8447116B2 (en) Identifying true feature matches for vision based navigation
Acharya et al. BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model
CN112197764B (en) Real-time pose determining method and device and electronic equipment
Wen et al. An indoor backpack system for 2-D and 3-D mapping of building interiors
CN112734852A (en) Robot mapping method and device and computing equipment
CN108332752B (en) Indoor robot positioning method and device
CN112183171A (en) Method and device for establishing beacon map based on visual beacon
CN112184824A (en) Camera external parameter calibration method and device
Indelman et al. Incremental light bundle adjustment for robotics navigation
CN112802096A (en) Device and method for realizing real-time positioning and mapping
Mehralian et al. EKFPnP: extended Kalman filter for camera pose estimation in a sequence of images
US11145072B2 (en) Methods, devices and computer program products for 3D mapping and pose estimation of 3D images
Pöppl et al. Integrated trajectory estimation for 3D kinematic mapping with GNSS, INS and imaging sensors: A framework and review
CN113393519B (en) Laser point cloud data processing method, device and equipment
CN116958452A (en) Three-dimensional reconstruction method and system
CN115077467B (en) Cleaning robot posture estimation method and device and cleaning robot
CN113126117B (en) Method for determining absolute scale of SFM map and electronic equipment
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
Rybski et al. Appearance-based minimalistic metric SLAM
Aliakbarpour et al. Geometric exploration of virtual planes in a fusion-based 3D data registration framework
Panahandeh et al. IMU-camera data fusion: Horizontal plane observation with explicit outlier rejection
CN114983302B (en) Gesture determining method and device, cleaning equipment, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant