CN112729294A - Pose estimation method and system suitable for vision and inertia fusion of robot

Info

Publication number
CN112729294A
Authority
CN
China
Prior art keywords
pose
point
image
sensor
inertial
Prior art date
Legal status
Granted
Application number
CN202110363020.5A
Other languages
Chinese (zh)
Other versions
CN112729294B
Inventor
王哲
李希胜
潘月斗
Current Assignee
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN202110363020.5A
Publication of CN112729294A
Application granted
Publication of CN112729294B
Expired - Fee Related

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation executed aboard the object being navigated; dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20: Instruments for performing navigational calculations

Abstract

The invention discloses a pose estimation method and system for vision and inertia fusion on a robot. The method comprises the following steps: acquiring the images output by a visual sensor and detecting point-line features in adjacent frames; establishing a visual pose model from the position and attitude changes of those point-line features; detecting pose data of the visual sensor with an inertial sensor and establishing an inertial pose model; and fusing the visual and inertial pose models so that the pose of the robot's visual sensor can be estimated from the fused model. The invention addresses three problems: the visual sensor cannot estimate pose when frames are lost under poor lighting or environmental occlusion; the inertial sensor accumulates zero-bias error during long-duration motion; and a single sensor alone yields low pose estimation accuracy. It improves the accuracy and robustness of pose estimation in real environments and can be widely applied in the field of robot environment sensing.

Description

Pose estimation method and system suitable for vision and inertia fusion of robot
Technical Field
The invention relates to the technical field of robot vision, in particular to a pose estimation method and a pose estimation system suitable for vision and inertia fusion of a robot.
Background
At present, vision and inertial sensors are widely applied in many robotic fields, such as picking robots, welding robots, and vegetable grafting robots, where the sensors guide the robot to a specified pose in the environment so that it can complete a given task.
Vision-based algorithms handle slow changes over time well, but they cannot correctly observe sudden rotations and are susceptible to illumination. A gyroscope, on the other hand, can accurately measure angular velocity about its axes over short intervals but drifts considerably over longer periods. Vision and gyroscopes are therefore complementary, and their combination can provide more robust pose estimation. When the pixel displacement between successive video frames is caused mainly by camera rotation, the gyroscope's zero drift can be compensated. However, existing visual-inertial fusion techniques still suffer from problems such as initialization, nonlinearity, and inconsistent coordinate frames and timestamps.
Disclosure of Invention
The invention provides a pose estimation method and a pose estimation system suitable for vision and inertia fusion of a robot, and aims to solve the technical problem that the existing pose estimation method is not accurate enough.
In order to solve the above technical problems, the invention provides the following technical solutions:
in one aspect, the present invention provides a pose estimation method suitable for robot vision and inertia fusion, including:
acquiring an image acquired by a vision sensor of the robot, and detecting point-line characteristics of adjacent frame images;
determining the pose change of the visual sensor according to the position and the pose change of the point-line characteristics of the adjacent frame images, and establishing a visual pose model for representing the pose change of the visual sensor;
detecting pose data of the visual sensor by using an inertial sensor, and establishing an inertial pose model for representing pose change of the visual sensor according to the pose data detected by the inertial sensor; wherein the inertial sensor is mounted on the visual sensor and is coaxially arranged with the visual sensor;
and carrying out data fusion on the visual pose model and the inertial pose model to obtain a fusion pose model so as to estimate the pose of a visual sensor of the robot through the fusion pose model.
Further, the detecting the point-line characteristics of the adjacent frame images includes:
for the previous frame of image, retrieving vanishing points, foot points and plumb lines in the image by a preset point-line feature extraction method, and recording the coordinate positions of the retrieved vanishing points, foot points and plumb line start and end points in the image; wherein a vanishing point is the point at which projections of parallel lines not parallel to the projection plane converge in perspective projection; a foot point is an intersection of ground lines from different planes; and a plumb line is a straight line perpendicular to the ground;
estimating the attitude and position change between adjacent frames from the pose data of the visual sensor detected by the inertial sensor, and, based on the coordinate positions of the vanishing point, foot point and plumb line start and end points in the previous frame, predicting their coordinate positions in the next frame according to the estimated attitude and position change;
and based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points in the next frame, retrieving the vanishing point, foot point and plumb line in the next frame by a local search method, and recording their coordinate positions in the image.
Further, the inertial sensor includes a gyroscope for detecting an angular velocity of the vision sensor and an accelerometer for detecting a linear acceleration of the vision sensor;
the estimating of the change of the pose and the position of the adjacent frame image through the pose data of the visual sensor detected by the inertial sensor comprises:
and pre-integrating the angular velocity and the linear acceleration detected by the gyroscope and the accelerometer to obtain the attitude and position change of the visual sensor so as to estimate the attitude and position change of the adjacent frame images.
Further, the retrieving of the vanishing point, foot point and plumb line in the next frame image by a local search method, based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points, comprises:
designing a search window in the next frame of image around the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points, and performing a local search for the vanishing point, foot points and plumb lines within the search window by a preset point-line feature search method.
Further, the detecting the pose data of the vision sensor by the inertial sensor and establishing an inertial pose model representing the pose change of the vision sensor according to the pose data detected by the inertial sensor includes:
periodically calculating the attitude and position change of the visual sensor by using the visual pose model, and performing deviation calibration on the inertial sensor by using the calculation result;
and detecting the pose data of the visual sensor by using the calibrated inertial sensor, and establishing an inertial pose model for representing the pose change of the visual sensor according to the pose data detected by the inertial sensor.
In another aspect, the present invention further provides a pose estimation system suitable for robot vision and inertia fusion, including:
the image point-line characteristic extraction module is used for acquiring an image acquired by a vision sensor of the robot and detecting the point-line characteristics of adjacent frame images;
the visual pose model establishing module is used for determining the pose change of the visual sensor according to the position and the pose change of the point-line characteristics of the adjacent frame images and establishing a visual pose model for representing the pose change of the visual sensor;
the inertial pose model establishing module is used for detecting pose data of the visual sensor by using an inertial sensor and establishing an inertial pose model for representing pose change of the visual sensor according to the pose data detected by the inertial sensor; wherein the inertial sensor is mounted on the visual sensor and is coaxially arranged with the visual sensor;
and the model fusion module is used for carrying out data fusion on the visual pose model and the inertial pose model to obtain a fusion pose model so as to estimate the pose of a visual sensor of the robot through the fusion pose model.
Further, the image point-line feature extraction module is specifically configured to:
for the previous frame of image, retrieving vanishing points, foot points and plumb lines in the image by a preset point-line feature extraction method, and recording the coordinate positions of the retrieved vanishing points, foot points and plumb line start and end points in the image; wherein a vanishing point is the point at which projections of parallel lines not parallel to the projection plane converge in perspective projection; a foot point is an intersection of ground lines from different planes; and a plumb line is a straight line perpendicular to the ground;
estimating the attitude and position change between adjacent frames from the pose data of the visual sensor detected by the inertial sensor, and, based on the coordinate positions of the vanishing point, foot point and plumb line start and end points in the previous frame, predicting their coordinate positions in the next frame according to the estimated attitude and position change;
and based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points in the next frame, retrieving the vanishing point, foot point and plumb line in the next frame by a local search method, and recording their coordinate positions in the image.
Further, the inertial sensor includes a gyroscope for detecting an angular velocity of the vision sensor and an accelerometer for detecting a linear acceleration of the vision sensor;
the image point-line feature extraction module is further specifically configured to:
and pre-integrating the angular velocity and the linear acceleration detected by the gyroscope and the accelerometer to obtain the attitude and position change of the visual sensor so as to estimate the attitude and position change of the adjacent frame images.
Further, the image point-line feature extraction module is specifically further configured to:
designing a search window in the next frame of image around the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points, and performing a local search for the vanishing point, foot points and plumb lines within the search window by a preset point-line feature search method.
Further, the inertial pose model building module is specifically configured to:
periodically calculating the attitude and position change of the visual sensor by using the visual pose model, and performing deviation calibration on the inertial sensor by using the calculation result;
and detecting the pose data of the visual sensor by using the calibrated inertial sensor, and establishing an inertial pose model for representing the pose change of the visual sensor according to the pose data detected by the inertial sensor.
In yet another aspect, the present invention also provides an electronic device comprising a processor and a memory; wherein the memory has stored therein at least one instruction that is loaded and executed by the processor to implement the above-described method.
In yet another aspect, the present invention also provides a computer-readable storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the above method.
The technical solutions provided by the invention have at least the following beneficial effects:
1. Aiming at the problems of overly simple point-line features, low extraction precision and slow matching in the prior art, the invention provides a point-line feature extraction method that takes unique vanishing points and foot points as point features and takes vanishing lines and plumb lines as line features. It comprises: establishing a vanishing point evaluation model and selecting the best candidate from the obtained line information to achieve high-precision vanishing point coordinate estimation; establishing a point-feature matching model and using the inertial sensor to estimate the foot point coordinates in the next frame from the current foot point coordinates, yielding a fast matching algorithm; and selecting and fitting plumb line candidates to obtain high-precision coordinates of their start and end points.
2. The invention establishes a visual pose model based on point features (vanishing points, plumb line start and end points, and foot points), establishes an inertial pose model by pre-integrating the pose data acquired from the calibrated inertial sensor, and fuses the two in a multi-rate fusion scheme, realizing a high-precision pose model.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described here represent only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a pose estimation method suitable for vision and inertia fusion of a robot according to an embodiment of the present invention;
FIG. 2 is a flow chart of position calculation of a point-line feature in an image of a previous frame according to an embodiment of the present invention;
FIG. 3 is a flow chart of position calculation of a point-line feature in an image in a subsequent frame according to an embodiment of the present invention;
FIG. 4 is a flow chart of inertial sensor pose model building provided by an embodiment of the invention;
fig. 5 is a schematic flow chart for fusing visual and inertial sensor data to obtain high-precision pose data according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First embodiment
This embodiment provides a fast, accurate and widely applicable pose estimation method for robot vision and inertia fusion, suitable for a moving carrier. The execution flow of the method is shown in fig. 1 and comprises the following steps:
s1, acquiring an image acquired by a robot vision sensor, and detecting the point-line characteristics of adjacent frame images;
s2, determining the pose change of the vision sensor according to the position and the pose change of the point-line characteristic of the adjacent frame image, and establishing a vision pose model for representing the pose change of the vision sensor;
s3, detecting the pose data of the vision sensor by using an inertial sensor, and establishing an inertial pose model for representing the pose change of the vision sensor according to the pose data detected by the inertial sensor; wherein the inertial sensor is mounted on the visual sensor and is coaxially arranged with the visual sensor;
and S4, carrying out data fusion on the visual pose model and the inertial pose model to obtain a fusion pose model, and estimating the pose of the visual sensor of the robot through the fusion pose model.
Further, referring to fig. 2 and fig. 3, the implementation process of S1 is as follows:
s11, acquiring an image acquired by the robot vision sensor;
s12, searching vanishing points, anchor points and plumb lines in the image for the previous frame of image by a point-line feature extraction method, and recording coordinate positions of the searched vanishing points, anchor points and plumb line starting and stopping points in the image;
wherein the vanishing point refers to a point to which parallel line projections which are not parallel to the projection plane are gathered in perspective projection; the ground pin point refers to the intersection point of ground pin lines of different planes; the plumb line is a straight line vertical to the ground;
the point-line feature extraction method adopted in the embodiment is as follows: the point-line characteristics in the image are extracted by using Hough transform (Hough) and a linear detection segmentation algorithm (LSD), as shown in fig. 2, the specific flow steps are as follows:
carrying out Hough transformation on the obtained original image, extracting related parallel lines, and calculating the slope of the line segment;
calculating the slope of the line segment and the intersection point of the line segment through an LSD algorithm, determining a footing point according to the intersection point of the line segment, estimating a vanishing point, and determining a plumb line according to the calculated slope of the line segment;
establishing a vanishing point evaluation function according to the line segment slope calculated by Hough transformation and the line segment slope calculated by an LSD algorithm, and evaluating the estimated vanishing point to obtain vanishing point information;
and recording the coordinate positions of the vanishing point, the foot point and the plumb line starting and stopping point in the image.
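As a concrete illustration of this step, the sketch below extracts candidate line features with OpenCV's probabilistic Hough transform and LSD detector. It is a minimal sketch, not the patented implementation: the Canny thresholds, Hough parameters and the verticality tolerance for plumb-line candidates are assumptions, and cv2.createLineSegmentDetector requires OpenCV 4.5.1 or later.

```python
import cv2
import numpy as np

def extract_line_features(gray, vertical_tol_deg=5.0):
    """Detect line segments with Hough + LSD; return all segments and plumb-line candidates."""
    edges = cv2.Canny(gray, 50, 150)
    hough = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    lsd = cv2.createLineSegmentDetector()        # available in OpenCV >= 4.5.1
    lsd_lines = lsd.detect(gray)[0]

    segments = []                                # (x1, y1, x2, y2, angle_deg, length)
    for group in (hough, lsd_lines):
        if group is None:
            continue
        for x1, y1, x2, y2 in group.reshape(-1, 4):
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            length = np.hypot(x2 - x1, y2 - y1)
            segments.append((x1, y1, x2, y2, angle, length))

    # Plumb-line candidates: segments within a few degrees of the image vertical.
    plumb = [s for s in segments if abs(abs(s[4]) - 90.0) < vertical_tol_deg]
    return segments, plumb
```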
The model for selecting the optimal vanishing point is given by an evaluation function (published only as an image in the original document and not reproducible here), in which k_i and l_i denote the slope and segment length of the i-th line in the image, c denotes the evaluation value of the vanishing point function, and k_j and l_j denote the slope and segment length of the j-th line.
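Because the evaluation function survives only as an image, the following sketch is an illustrative stand-in rather than the patented formula. It is consistent with the stated inputs (slope and segment length of each line): candidate vanishing points are taken from pairwise line intersections, and each candidate's evaluation value c accumulates length-weighted agreement between a segment's direction and the direction from its midpoint to the candidate. The weighting scheme is an assumption.

```python
import numpy as np

def line_intersection(s1, s2):
    """Intersection of the infinite lines through segments s1, s2 = (x1, y1, x2, y2)."""
    (x1, y1, x2, y2), (x3, y3, x4, y4) = s1, s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                              # (near-)parallel in the image
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return np.array([x1 + t * (x2 - x1), y1 + t * (y2 - y1)])

def best_vanishing_point(segments):
    """segments: float array of shape (N, 4). Returns (vp, c) with the highest score c."""
    mids = (segments[:, :2] + segments[:, 2:]) / 2.0
    dirs = segments[:, 2:] - segments[:, :2]
    lengths = np.linalg.norm(dirs, axis=1)
    dirs = dirs / (lengths[:, None] + 1e-12)
    best_c, best_vp = -np.inf, None
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            vp = line_intersection(segments[i], segments[j])
            if vp is None:
                continue
            to_vp = vp - mids                    # midpoint-to-candidate directions
            to_vp = to_vp / (np.linalg.norm(to_vp, axis=1, keepdims=True) + 1e-12)
            c = np.sum(lengths * np.abs(np.sum(dirs * to_vp, axis=1)))
            if c > best_c:
                best_c, best_vp = c, vp
    return best_vp, best_c
```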
S13, estimating the attitude and position change between adjacent frames from the pose data of the visual sensor detected by the inertial sensor, and, based on the coordinate positions of the vanishing point, foot point and plumb line start and end points in the previous frame, predicting their coordinate positions in the next frame according to the estimated attitude and position change;
in the present embodiment, the inertial sensor includes a gyroscope and an accelerometer, the gyroscope is used for detecting an angular velocity of the visual sensor, and the accelerometer is used for detecting a linear acceleration of the visual sensor;
as shown in fig. 3, the pose data of the visual sensor detected by the inertial sensor is used to estimate the change of the pose and position of the adjacent frame image, which is specifically as follows:
and pre-integrating the angular velocity and the linear acceleration detected by the gyroscope and the accelerometer to obtain the attitude and position change of the visual sensor so as to estimate the attitude and position change of the adjacent frame images.
S14, based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points in the next frame, retrieving the vanishing point, foot point and plumb line in the next frame by a local search method, and recording their coordinate positions in the image.
In this embodiment, the local search proceeds as shown in fig. 3, specifically as follows:
a 5 x 5 search window is designed in the next frame image around the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points; within the search window, a local search for the vanishing point, foot points and plumb lines is carried out using the sequential similarity detection algorithm (SSDA) and random sample consensus (RANSAC).
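The sketch below illustrates the SSDA matching inside one search window; the RANSAC outlier rejection over the matched features is omitted. The 7 x 7 template size, integer coordinates away from the image border, and the early-abort scan order are implementation assumptions; only the 5 x 5 search window comes from the text above.

```python
import numpy as np

def ssda_match(prev_img, next_img, feat_xy, pred_xy, patch=7, win=5):
    """Locate the feature from prev_img at feat_xy inside a win x win window of
    next_img centred on the predicted position pred_xy, with SSDA early abort."""
    (px, py), (cx, cy) = feat_xy, pred_xy
    h = patch // 2
    template = prev_img[py - h:py + h + 1, px - h:px + h + 1].astype(np.int32)
    best_err, best_xy = np.inf, pred_xy
    r = win // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            x, y = cx + dx, cy + dy
            cand = next_img[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
            err = 0
            for row_t, row_c in zip(template, cand):   # accumulate differences row by row
                err += int(np.abs(row_t - row_c).sum())
                if err >= best_err:                    # SSDA early abort: candidate cannot win
                    break
            else:
                best_err, best_xy = err, (x, y)
    return best_xy, best_err
```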
Further, referring to fig. 4, the implementation process of S3 is as follows:
s31, periodically calculating the attitude and position change of the visual sensor by using the visual pose model, and performing deviation calibration on a gyroscope and an accelerometer of the inertial sensor by using the calculation result;
s32, detecting the pose data of the vision sensor by using the calibrated inertia sensor, pre-integrating the angular velocity and linear acceleration detected by the gyroscope and the accelerometer to obtain the pose and position change of the vision sensor, and establishing an inertia pose model for representing the pose change of the vision sensor according to the pose change data of the vision sensor detected by the inertia sensor.
Further, referring to fig. 5, the implementation process of S4 is as follows:
s41, acquiring pose data of the visual sensor through the visual pose model;
s42, acquiring pose data of the visual sensor through the inertial pose model;
and S43, performing data fusion on the pose data corresponding to the visual pose model and the inertial pose model by adopting a multi-rate filtering algorithm, and acquiring a high-quality pose estimation result.
In summary, the method of this embodiment acquires the images output by the vision sensor and detects the point-line features of adjacent frames; establishes a visual pose model from the pose changes of those point-line features; detects the pose data of the vision sensor with an inertial sensor and establishes an inertial pose model; and fuses the two models so that the pose of the robot's vision sensor is estimated from the fused model. This solves the problems that the vision sensor cannot estimate pose when frames are lost under poor lighting or occlusion, that the inertial sensor accumulates zero-bias error during long-duration motion, and that a single sensor yields low pose estimation accuracy; it improves the accuracy and robustness of pose estimation in real environments and can be widely applied in the field of robot environment sensing.
Second embodiment
The embodiment provides a pose estimation system suitable for the vision and inertia fusion of a robot, which comprises the following modules:
the image point-line characteristic extraction module is used for acquiring an image acquired by a vision sensor of the robot and detecting the point-line characteristics of adjacent frame images;
the visual pose model establishing module is used for determining the pose change of the visual sensor according to the position and the pose change of the point-line characteristics of the adjacent frame images and establishing a visual pose model for representing the pose change of the visual sensor;
the inertial pose model establishing module is used for detecting pose data of the visual sensor by using an inertial sensor and establishing an inertial pose model for representing pose change of the visual sensor according to the pose data detected by the inertial sensor; wherein the inertial sensor is mounted on the visual sensor and is coaxially arranged with the visual sensor;
and the model fusion module is used for carrying out data fusion on the visual pose model and the inertial pose model to obtain a fusion pose model so as to estimate the pose of a visual sensor of the robot through the fusion pose model.
The pose estimation system of this embodiment corresponds to the pose estimation method of the first embodiment, and the functions implemented by its functional modules correspond one-to-one with the flow steps of that method; therefore, details are not repeated here.
Third embodiment
The present embodiment provides an electronic device, which includes a processor and a memory; wherein the memory has stored therein at least one instruction that is loaded and executed by the processor to implement the method of the first embodiment.
Depending on configuration and performance, the electronic device may include one or more processors (CPUs) and one or more memories, where the memory stores at least one instruction that is loaded and executed by the processor to perform the above method.
Fourth embodiment
The present embodiment provides a computer-readable storage medium in which at least one instruction is stored; the instruction is loaded and executed by a processor to implement the method of the first embodiment. The computer-readable storage medium may be a ROM, random access memory, CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like. The instructions stored therein may be loaded by a processor in a terminal to perform the above method.
Furthermore, it should be noted that the present invention may be provided as a method, apparatus or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
Finally, it should be noted that while the above describes a preferred embodiment of the invention, it will be appreciated by those skilled in the art that, once the basic inventive concepts have been learned, numerous changes and modifications may be made without departing from the principles of the invention, which shall be deemed to be within the scope of the invention. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.

Claims (10)

1. A pose estimation method suitable for vision and inertia fusion of a robot is characterized by comprising the following steps:
acquiring an image acquired by a vision sensor of the robot, and detecting point-line characteristics of adjacent frame images;
determining the pose change of the visual sensor according to the position and the pose change of the point-line characteristics of the adjacent frame images, and establishing a visual pose model for representing the pose change of the visual sensor;
detecting pose data of the visual sensor by using an inertial sensor, and establishing an inertial pose model for representing pose change of the visual sensor according to the pose data detected by the inertial sensor; wherein the inertial sensor is mounted on the visual sensor and is coaxially arranged with the visual sensor;
and carrying out data fusion on the visual pose model and the inertial pose model to obtain a fusion pose model so as to estimate the pose of a visual sensor of the robot through the fusion pose model.
2. The pose estimation method suitable for vision and inertia fusion of a robot according to claim 1, wherein the detecting the point-line feature of the adjacent frame image comprises:
for the previous frame of image, retrieving vanishing points, foot points and plumb lines in the image by a preset point-line feature extraction method, and recording the coordinate positions of the retrieved vanishing points, foot points and plumb line start and end points in the image; wherein a vanishing point is the point at which projections of parallel lines not parallel to the projection plane converge in perspective projection; a foot point is an intersection of ground lines from different planes; and a plumb line is a straight line perpendicular to the ground;
estimating the attitude and position change between adjacent frames from the pose data of the visual sensor detected by the inertial sensor, and, based on the coordinate positions of the vanishing point, foot point and plumb line start and end points in the previous frame, predicting their coordinate positions in the next frame according to the estimated attitude and position change;
and based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points in the next frame, retrieving the vanishing point, foot point and plumb line in the next frame by a local search method, and recording their coordinate positions in the image.
3. The pose estimation method suitable for vision and inertia fusion of a robot according to claim 2, wherein the inertial sensor includes a gyroscope for detecting an angular velocity of the vision sensor and an accelerometer for detecting a linear acceleration of the vision sensor;
the estimating of the change of the pose and the position of the adjacent frame image through the pose data of the visual sensor detected by the inertial sensor comprises:
and pre-integrating the angular velocity and the linear acceleration detected by the gyroscope and the accelerometer to obtain the attitude and position change of the visual sensor so as to estimate the attitude and position change of the adjacent frame images.
4. The pose estimation method suitable for vision and inertia fusion of a robot according to claim 2, wherein the retrieving of the vanishing point, foot point and plumb line in the next frame image by a local search method, based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points, comprises:
designing a search window in the next frame of image around the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points, and performing a local search for the vanishing point, foot points and plumb lines within the search window by a preset point-line feature search method.
5. The pose estimation method for vision and inertia fusion of a robot according to claim 1, wherein the detecting pose data of the vision sensor using the inertia sensor and building an inertia pose model representing the pose change of the vision sensor based on the pose data detected by the inertia sensor comprises:
periodically calculating the attitude and position change of the visual sensor by using the visual pose model, and performing deviation calibration on the inertial sensor by using the calculation result;
and detecting the pose data of the visual sensor by using the calibrated inertial sensor, and establishing an inertial pose model for representing the pose change of the visual sensor according to the pose data detected by the inertial sensor.
6. A pose estimation system adapted for vision and inertia fusion of a robot, comprising:
the image point-line characteristic extraction module is used for acquiring an image acquired by a vision sensor of the robot and detecting the point-line characteristics of adjacent frame images;
the visual pose model establishing module is used for determining the pose change of the visual sensor according to the position and the pose change of the point-line characteristics of the adjacent frame images and establishing a visual pose model for representing the pose change of the visual sensor;
the inertial pose model establishing module is used for detecting pose data of the visual sensor by using an inertial sensor and establishing an inertial pose model for representing pose change of the visual sensor according to the pose data detected by the inertial sensor; wherein the inertial sensor is mounted on the visual sensor and is coaxially arranged with the visual sensor;
and the model fusion module is used for carrying out data fusion on the visual pose model and the inertial pose model to obtain a fusion pose model so as to estimate the pose of a visual sensor of the robot through the fusion pose model.
7. The pose estimation system suitable for vision and inertial fusion of a robot of claim 6, wherein the image point line feature extraction module is specifically configured to:
for the previous frame of image, retrieving vanishing points, foot points and plumb lines in the image by a preset point-line feature extraction method, and recording the coordinate positions of the retrieved vanishing points, foot points and plumb line start and end points in the image; wherein a vanishing point is the point at which projections of parallel lines not parallel to the projection plane converge in perspective projection; a foot point is an intersection of ground lines from different planes; and a plumb line is a straight line perpendicular to the ground;
estimating the attitude and position change between adjacent frames from the pose data of the visual sensor detected by the inertial sensor, and, based on the coordinate positions of the vanishing point, foot point and plumb line start and end points in the previous frame, predicting their coordinate positions in the next frame according to the estimated attitude and position change;
and based on the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points in the next frame, retrieving the vanishing point, foot point and plumb line in the next frame by a local search method, and recording their coordinate positions in the image.
8. The pose estimation system suitable for vision and inertia fusion of a robot of claim 7, wherein the inertial sensor comprises a gyroscope and an accelerometer, the gyroscope being used for detecting an angular velocity of the vision sensor, the accelerometer being used for detecting a linear acceleration of the vision sensor;
the image point-line feature extraction module is further specifically configured to:
and pre-integrating the angular velocity and the linear acceleration detected by the gyroscope and the accelerometer to obtain the attitude and position change of the visual sensor so as to estimate the attitude and position change of the adjacent frame images.
9. The pose estimation system suitable for vision and inertial fusion of a robot of claim 7, wherein the image point line feature extraction module is further specifically configured to:
designing a search window in the next frame of image around the predicted coordinate positions of the vanishing point, foot point and plumb line start and end points, and performing a local search for the vanishing point, foot points and plumb lines within the search window by a preset point-line feature search method.
10. The pose estimation system for vision and inertia fusion of a robot of claim 6, wherein the inertial pose model building module is specifically configured to:
periodically calculating the attitude and position change of the visual sensor by using the visual pose model, and performing deviation calibration on the inertial sensor by using the calculation result;
and detecting the pose data of the visual sensor by using the calibrated inertial sensor, and establishing an inertial pose model for representing the pose change of the visual sensor according to the pose data detected by the inertial sensor.
CN202110363020.5A (priority and filing date: 2021-04-02): Pose estimation method and system suitable for vision and inertia fusion of robot. Granted as CN112729294B; status: Expired - Fee Related.

Priority Applications (1)

Application Number: CN202110363020.5A
Priority Date: 2021-04-02; Filing Date: 2021-04-02
Title: Pose estimation method and system suitable for vision and inertia fusion of robot (granted as CN112729294B)

Publications (2)

CN112729294A, published 2021-04-30
CN112729294B, granted 2021-06-25

Family

ID=75596400

Family Applications (1)

Application Number: CN202110363020.5A
Priority Date: 2021-04-02; Filing Date: 2021-04-02
Title: Pose estimation method and system suitable for vision and inertia fusion of robot
Granted as CN112729294B; status: Expired - Fee Related

Country Status (1)

CN: CN112729294B


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8761439B1 (en) * 2011-08-24 2014-06-24 Sri International Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit
CN102538781A (en) * 2011-12-14 2012-07-04 浙江大学 Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
US20200043196A1 (en) * 2016-12-20 2020-02-06 Samsung Electronics Co., Ltd. Multiscale weighted matching and sensor fusion for dynamic vision sensor tracking
CN109669533A (en) * 2018-11-02 2019-04-23 北京盈迪曼德科技有限公司 A kind of motion capture method, the apparatus and system of view-based access control model and inertia
CN110095116A (en) * 2019-04-29 2019-08-06 桂林电子科技大学 A kind of localization method of vision positioning and inertial navigation combination based on LIFT
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023165093A1 (en) * 2022-03-01 2023-09-07 上海商汤智能科技有限公司 Training method for visual inertial odometer model, posture estimation method and apparatuses, electronic device, computer-readable storage medium, and program product
CN115077467A (en) * 2022-06-10 2022-09-20 追觅创新科技(苏州)有限公司 Attitude estimation method and device for cleaning robot and cleaning robot
CN115077467B (en) * 2022-06-10 2023-08-08 追觅创新科技(苏州)有限公司 Cleaning robot posture estimation method and device and cleaning robot

Also Published As

Publication number Publication date
CN112729294B, 2021-06-25

Similar Documents

Publication number, Title
US11668571B2 (en) Simultaneous localization and mapping (SLAM) using dual event cameras
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN110084832B (en) Method, device, system, equipment and storage medium for correcting camera pose
CN112734852B (en) Robot mapping method and device and computing equipment
JP6760114B2 (en) Information processing equipment, data management equipment, data management systems, methods, and programs
KR102440358B1 (en) Inertial-based navigation device and Inertia-based navigation method based on relative preintegration
EP3451288A1 (en) Visual-inertial odometry with an event camera
EP3159123A1 (en) Device for controlling driving of mobile robot having wide-angle cameras mounted thereon, and method therefor
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
EP3159126A1 (en) Device and method for recognizing location of mobile robot by means of edge-based readjustment
CN112729294B (en) Pose estimation method and system suitable for vision and inertia fusion of robot
CN106814753B (en) Target position correction method, device and system
EP3159122A1 (en) Device and method for recognizing location of mobile robot by means of search-based correlation matching
CN110956665B (en) Bidirectional calculation method, system and device for turning track of vehicle
CN108090921A (en) Monocular vision and the adaptive indoor orientation method of IMU fusions
CN112907678B (en) Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
WO2022007602A1 (en) Method and apparatus for determining location of vehicle
EP2851868A1 (en) 3D Reconstruction
CN109141411B (en) Positioning method, positioning device, mobile robot, and storage medium
JP7145770B2 (en) Inter-Vehicle Distance Measuring Device, Error Model Generating Device, Learning Model Generating Device, Methods and Programs Therefor
CN111238490B (en) Visual positioning method and device and electronic equipment
Huttunen et al. A monocular camera gyroscope
CN112731503B (en) Pose estimation method and system based on front end tight coupling

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20210625)