CN111197984A - Vision-inertial motion estimation method based on environmental constraint - Google Patents
- Publication number
- CN111197984A
- Authority
- CN
- China
- Prior art keywords
- camera
- tag
- state
- label
- model
- Prior art date
- Legal status: Pending (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Abstract
The invention discloses a vision-inertial motion estimation method based on environmental constraints, realized through the following steps: (1) installing a strapdown inertial navigation system and a monocular camera on the carrier, completing the calibration of the inertial navigation system and the camera, and placing AprilTag tags in the scene; (2) acquiring an undistorted image with the monocular camera, processing it with an AprilTag detector, assigning each detected tag a unique identifier id, and estimating the relative transformation between each tag and the monocular camera by iteratively optimizing the reprojection error of the tag's 3D corner points; (3) establishing the state and prediction model of an EKF filter by analyzing the IMU measurement model and the camera observation model and including the pose of the tag in the filter state; (4) performing the update step using the reprojection error of the tag corners as the update term of the Kalman filter, finally obtaining an accurate motion estimate.
Description
Technical Field
The invention belongs to the field of integrated navigation and positioning, and particularly relates to a visual-inertial motion estimation method based on environmental constraints.
Background
When moving objects such as robots and unmanned aerial vehicles perform tasks, estimating their state relative to the working space is essential: pose and velocity information is needed for control, motion planning, navigation, and interaction with or verification against the environment. Motion capture systems are commonly used to obtain state estimates of moving objects, but these systems are often expensive, limited in workspace size, and unsuitable for outdoor applications. Low-cost inertial sensors (IMUs) and visual sensors (cameras) complement each other well in positioning and are therefore an ideal combination for motion state estimation: the IMU estimates carrier motion from its own measurements, while vision obtains a motion estimate through feature-point matching and similar techniques.
Currently, many different motion estimation algorithms have been developed. The estimation method proposed by M. Faessler et al. [M. Faessler, E. Mueggler, K. Schwabe, and D. Scaramuzza, "A monocular pose estimation system based on infrared LEDs," in IEEE International Conference on Robotics and Automation, 2014] cannot provide any estimate during occlusion or motion blur, and can only compute linear velocity from position and orientation estimates. A monocular SLAM method was proposed by J. Sola et al. [J. Sola, T. Vidal-Calleja, J. Civera, and J. M. M. Montiel, "Impact of landmark parametrization on monocular EKF-SLAM with points and lines," International Journal of Computer Vision, vol. 97, no. 3, 2012]; however, it does not fuse inertial measurement data and therefore cannot provide an absolute scale. M. Bryson et al. [M. Bryson and S. Sukkarieh, "Building a robust implementation of bearing-only inertial SLAM for a UAV," Journal of Field Robotics, vol. 24, no. 1-2, pp. 113-143, 2007] presented a SLAM method based on artificial landmarks; however, the reference points are represented only as point features, so only their 3D positions are estimated.
Disclosure of Invention
The present invention aims to solve the above problems of the prior art. It provides a visual-inertial motion estimation method based on environmental constraints that combines printable paper artificial landmarks (tags with 6 DOF) with monocular visual-inertial SLAM. An Extended Kalman Filter (EKF) tightly fuses the inertial measurements with observations of the corner points of the artificial visual landmarks, so that the pose and velocity of a moving object relative to its working space can be estimated accurately, both indoors and outdoors, for convenient application to tasks such as control, motion planning, and navigation. The technical scheme of the invention is as follows:
a visual-inertial motion estimation method based on environmental constraints comprises the following steps:
step 1, installing a strapdown inertial navigation system and a monocular camera on a carrier, completing calibration of the strapdown inertial navigation system and the monocular camera, and placing AprilTag tags in the scene;
step 2, acquiring an undistorted image with the monocular camera, processing it with an AprilTag detector, assigning each detected tag a unique identifier id, and estimating the relative transformation between each tag and the monocular camera by iteratively optimizing the reprojection error of the 3D corner points on the tag;
step 3, establishing the state and prediction model of the EKF filter by analyzing the IMU measurement model and the camera observation model and including the pose of the tag in the filter state;
step 4, performing the update step using the reprojection error of the tag corners as the update term of the EKF filter, finally obtaining an accurate motion estimate.
Further, in step 1, a strapdown inertial navigation system and a monocular camera are installed on a moving carrier, together forming a tightly coupled visual-inertial integrated navigation system: the strapdown inertial navigation system describes the carrier motion from its own collected data, while the monocular camera estimates its state through image feature matching. AprilTag is a mature visual fiducial tag, commonly used for the positioning of robots and unmanned aerial vehicles. It is similar to a two-dimensional code tag but more advanced: it carries 6-DOF position and attitude information and can be detected more reliably and from longer range, so AprilTag tags can be placed in indoor or outdoor scenes.
Further, in step 2, the monocular camera acquires an undistorted image; after processing by the AprilTag detector, a unique identifier id is assigned to each detected tag, and the 3D position and orientation of the camera with respect to the tag are estimated by iteratively optimizing the reprojection error, i.e. the error between the projected tag corner points and their detected positions in image space.
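The iterative optimization described above can be illustrated with a small Gauss-Newton loop. The following Python sketch is a hypothetical, simplified illustration — it refines only the tag translation with the orientation held fixed, on synthetic corner observations, whereas a real AprilTag detector optimizes the full 6-DOF pose — but it shows the structure of minimizing corner reprojection error:

```python
import numpy as np

def project(K, P):
    """Pinhole projection of a 3D point in camera coordinates to pixels."""
    p = K @ P
    return p[:2] / p[2]

def refine_translation(K, R, corners_tag, corners_px, t0, iters=15):
    """Gauss-Newton refinement of the tag translation t, minimizing the
    reprojection error of the tag's 3D corners (orientation R held fixed)."""
    t = np.asarray(t0, dtype=float).copy()
    fx, fy = K[0, 0], K[1, 1]
    for _ in range(iters):
        J_rows, r_rows = [], []
        for P_t, z_obs in zip(corners_tag, corners_px):
            x, y, z = R @ P_t + t          # corner in the camera frame
            r_rows.append(project(K, np.array([x, y, z])) - z_obs)
            # Jacobian of the pixel projection with respect to t
            J_rows.append(np.array([[fx / z, 0.0, -fx * x / z**2],
                                    [0.0, fy / z, -fy * y / z**2]]))
        J, r = np.vstack(J_rows), np.hstack(r_rows)
        t -= np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
    return t
```

With four non-degenerate corners the 3x3 normal equations are well conditioned, and on noise-free data the loop converges in a few iterations.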
Further, in step 3, the camera observation model is as follows:
according to the imaging principle of the pinhole camera model, a 3D point $P = (X, Y, Z)$ in the camera coordinate system corresponds to the pixel coordinate $P_{uv} = (u, v)$ given by:

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (1)$$

where $K$ is called the camera's intrinsic parameter matrix and $f_x, f_y, c_x, c_y$ are the camera intrinsic parameters.

This formula is simplified to:

$$Z P_{uv} = K P_C \quad (2)$$

where $P_{uv}$ represents a point in camera pixel coordinates and $P_C$ represents a point in camera coordinates.
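The pinhole projection above can be checked numerically. The intrinsics in this short Python sketch are illustrative values, not taken from the patent:

```python
import numpy as np

# Illustrative intrinsics: fx = fy = 500, cx = 320, cy = 240
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_from_camera(K, P_C):
    """P_uv = (1/Z) * K * P_C for a 3D point in the camera frame (Z > 0)."""
    p = K @ P_C
    return p[:2] / p[2]
```

For the point P_C = (0.2, -0.1, 2.0) this gives u = 500 * 0.2 / 2 + 320 = 370 and v = 500 * (-0.1) / 2 + 240 = 215.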
Define a camera coordinate system $C$ and a tag coordinate system $t$, where the origin of the tag coordinate system is the geometric center of the AprilTag and the $z$-axis is perpendicular to the tag plane. When a tag is first detected by the detector, it provides the camera with initial pose information, and the relative transformation between each tag and the camera is estimated in subsequent detections. Given the relative position $P_{Ct}$ and attitude $q_{tC}$ of a particular tag with respect to the camera frame, the position of the $i$-th tag corner $P_{ti}$ (fixed in the tag coordinate system $t$) as seen from the camera is:

$$P_{Ci} = C(q_{tC})^T P_{ti} + P_{Ct} \quad (3)$$

The noise is modeled at the pixel positions of the detected corners. Projecting $P_{Ci}$ onto the image plane yields the corresponding pixel-coordinate measurement $\tilde{P}_i$, whose noise is assumed Gaussian, $n_{p,i} \sim \mathcal{N}(0, R_p)$:

$$\tilde{P}_i = \pi(P_{Ci}) + n_{p,i} \quad (4)$$

where $\pi(\cdot)$ denotes the pinhole projection. According to the camera's reprojection-error definition — the difference between the observed pixel coordinates and the projection of the 3D point under the currently estimated pose — the reprojection error can be expressed as:

$$e_i = \tilde{P}_i - \pi(P_{Ci}) \quad (5)$$
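The corner transformation and reprojection error above can be sketched in a few lines of Python. The quaternion convention here (Hamilton, components [w, x, y, z]) is an assumption, since the patent does not fix one; the values in the check below are illustrative:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix for a unit quaternion q = [w, x, y, z] (Hamilton)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def reprojection_errors(K, q_tC, P_Ct, corners_tag, corners_px):
    """For each tag corner: map it into the camera frame,
    project it, and subtract from the detected pixel position."""
    R_tag_to_cam = quat_to_rot(q_tC).T
    errs = []
    for P_ti, z in zip(corners_tag, corners_px):
        P_Ci = R_tag_to_cam @ P_ti + P_Ct   # corner in the camera frame
        p = K @ P_Ci
        errs.append(np.asarray(z) - p[:2] / p[2])
    return np.array(errs)
```

With an identity rotation, a corner at (0.04, 0.04, 0) in the tag frame and the tag one meter in front of the camera, the projected pixel is (340, 260); feeding that back as the detection gives a zero error.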
further, the IMU model in step 3 is:
defining an IMU coordinate system $i$ and a global coordinate system $G$, and expressing the IMU state vector as:

$$X_{IMU}(t) = \begin{bmatrix} {}^{i}q_G^T(t) & b_g^T(t) & {}^{G}v_i^T(t) & b_a^T(t) & {}^{G}p_i^T(t) \end{bmatrix}^T \quad (6)$$

wherein ${}^{i}q_G(t)$, ${}^{G}v_i(t)$ and ${}^{G}p_i(t)$ represent the rotation, velocity and displacement, respectively, of the IMU coordinate system $i$ relative to the global coordinate system $G$, and $b_g(t)$ and $b_a(t)$ represent the biases of the gyroscope and accelerometer, respectively;
the model of the evolution of the system state over time is:

$${}^{i}\dot{q}_G(t) = \tfrac{1}{2}\,\Omega({}^{i}\omega(t))\, {}^{i}q_G(t), \quad {}^{G}\dot{v}_i(t) = {}^{G}a(t), \quad {}^{G}\dot{p}_i(t) = {}^{G}v_i(t), \quad \dot{b}_g(t) = n_{wg}, \quad \dot{b}_a(t) = n_{wa} \quad (7)$$

wherein ${}^{i}\omega$ and ${}^{G}a$ are the angular velocity and linear acceleration, $\Omega(\cdot)$ is the matrix containing the three-axis gyro information required for updating the attitude quaternion, and $n_{wg}$ and $n_{wa}$ represent the Gaussian white noise of the IMU;
the measurements of the gyroscope and accelerometer can be expressed as:

$$\omega_m(t) = {}^{i}\omega(t) + b_g(t) + n_g(t), \qquad a_m(t) = C({}^{i}q_G(t))\left({}^{G}a(t) - {}^{G}g\right) + b_a(t) + n_a(t) \quad (8)$$

where $C(q)$ is the rotation matrix corresponding to the quaternion $q$ and $a_m(t)$ is the acceleration measurement.
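The attitude and velocity propagation above can be sketched with a simple Euler integration. This Python sketch assumes bias-free, noise-free measurements, Hamilton quaternions q = [w, x, y, z] (a convention the patent does not fix), and an acceleration already expressed in the world frame with gravity compensated:

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix Omega(omega) such that q_dot = 0.5 * Omega(omega) @ q."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def propagate(q, v, p, omega, a_world, dt):
    """One Euler step of the IMU kinematics: attitude, velocity, position."""
    q = q + 0.5 * omega_matrix(omega) @ q * dt
    q = q / np.linalg.norm(q)        # keep the quaternion unit-length
    v = v + a_world * dt
    p = p + v * dt
    return q, v, p
```

Rotating at omega = (0, 0, pi/2) rad/s for one second with small steps should yield roughly a 90-degree yaw, i.e. q close to [cos(pi/4), 0, 0, sin(pi/4)].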
Further, the establishing of the state and prediction model of the EKF filter specifically includes:
to propagate the state of the moving carrier, the filter uses the inertial measurements, also incorporates the pose of the tag into the filter state, and performs updates based on the available tag corner measurements; based on the previously established sensor models, the state equation of the filter can be established:

$$X = \begin{bmatrix} P & v & q & b_g & b_a & P_t & q_t & P_{ic} & q_{ic} \end{bmatrix} \quad (9)$$

wherein $P$, $v$, $q$ respectively represent the position, velocity and attitude of the moving carrier, $P_t$ and $q_t$ respectively represent the pose information of the tag, and $P_{ic}$ and $q_{ic}$ respectively represent the pose information of the camera-to-IMU transformation; differentiating these states and combining them with the IMU state equations gives:

$$\dot{P} = v, \quad \dot{v} = C(q)^T (a_m - b_a) + {}^{G}g, \quad \dot{q} = \tfrac{1}{2}\,\Omega(\omega_m - b_g)\, q, \quad \dot{b}_g = 0, \quad \dot{b}_a = 0, \quad \dot{P}_t = 0, \quad \dot{q}_t = 0, \quad \dot{P}_{ic} = 0, \quad \dot{q}_{ic} = 0 \quad (10)$$

Additional continuous Gaussian white noise terms $w_P$, $w_{P_t}$, $w_{q_t}$, $w_{P_C}$ and $w_{q_v}$ are added to excite the full filter state and to absorb the modeling errors introduced by state discretization.
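The EKF prediction step then propagates the error-state covariance with a discretized transition matrix. The following generic Python sketch uses placeholder matrices A and Qc, not the patent's actual Jacobians:

```python
import numpy as np

def ekf_predict(P, A, Qc, dt):
    """Propagate the error-state covariance one step:
    F = I + A*dt (first-order discretization), then P <- F P F^T + Qc*dt."""
    F = np.eye(A.shape[0]) + A * dt
    P_new = F @ P @ F.T + Qc * dt
    return 0.5 * (P_new + P_new.T)   # enforce symmetry numerically
```

The added process noise Qc*dt makes the predicted covariance grow between measurements, which is what the continuous white-noise terms above model.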
Further, in step 4, when the detector observes a tag, the update step is performed using the reprojection error of the tag corners as the update term of the EKF filter, finally obtaining an accurate motion estimate. Specifically:
for each tag corner $i$, an update term $y_i$ is defined based on equation (4) as the corner's reprojection error:

$$y_i = \tilde{P}_i - \pi(\hat{P}_{Ci})$$

The observation of the tag at the current moment corrects the estimate of the prediction stage, and the new filter state is tracked continuously to obtain an accurate motion estimate of the moving object.
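The correction step follows the standard EKF update equations. This Python sketch uses a generic linear measurement model as a stand-in for the linearized tag-corner reprojection Jacobian:

```python
import numpy as np

def ekf_update(x, P, y, H, R):
    """Standard EKF update: y is the innovation (here, the stacked
    reprojection errors of the tag corners), H its Jacobian, and R the
    pixel-noise covariance R_p stacked for all observed corners."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y                     # correct the state estimate
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P
```

Each update shrinks the uncertainty along the observed directions: with H = [1, 0], R = 0.1 and unit prior covariance, the gain is 1/1.1, so an innovation of 0.5 moves the first state component to 0.5/1.1.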
The invention has the following advantages and beneficial effects:
the invention can accurately estimate the pose and speed information of the moving object relative to the working space, so as to be conveniently applied to tasks such as control, motion planning, navigation and the like. The innovation points are step 3 and step 4, and the following beneficial effects can be realized in the process:
(1) High precision, strong reliability: the system tightly fuses the inertial measurements with observations of artificial landmarks (AprilTags). Using artificial landmarks that provide rich information, it can estimate a moving object accurately while reducing the work of estimation, mapping and loop-closure detection to the greatest extent. Furthermore, since an AprilTag has 6 degrees of freedom, a single measurement fully constrains the relative 6-DOF pose, so a single landmark is sufficient for pose estimation.
(2) Stable camera observation model: the advantage of the adopted visual measurement model is that noise can be modeled directly at the pixel positions of the detected corners, since the reprojection noise of the tag's 3D corner points is largely independent of the camera pose and approximately constant. Although the detector provides an estimate of the tag pose relative to the current camera frame that could be used directly in the filter, it is difficult to fit an accurate noise model for this relative pose, because the magnitude of its noise depends strongly on the position and orientation of the tag in the camera frame.
(3) Low cost, high applicability: AprilTag tags can be printed on paper, making them cheap and easy to deploy. Fusing vision and inertial navigation improves the robustness of the motion estimation, and the method can be applied in many indoor and outdoor scenes.
Drawings
Fig. 1 is a flow chart of a vision-inertial motion estimation method based on environmental constraints according to a preferred embodiment of the present invention.
FIG. 2 is a schematic diagram of a camera pinhole model.
Fig. 3 is a schematic diagram of relative pose estimation when AprilTag is observed by the camera.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
a visual inertial motion estimation method based on environmental constraints comprises the following steps:
and (1) installing a strapdown inertial navigation system and a monocular camera on a carrier, completing the calibration of inertial navigation and the camera, and placing an AprilTag label in a scene.
Step (2), the undistorted image is acquired by the camera and then processed by the AprilTag label detector, assigning the image a unique identifier (id) associated with the label. Furthermore, it uses the reprojection error of the 3D corners on the tags to estimate the relative transformation between each tag and the camera
And (3) establishing a state and prediction model of the EKF filter by analyzing the camera observation model and the IMU measurement model.
And (4) performing an updating step by using the reprojection error of the tag corner point as an updating item of the EKF filter, and finally obtaining accurate motion estimation.
In step (1), a strapdown inertial navigation system and a monocular camera are installed on a moving carrier, together forming a tightly coupled visual-inertial integrated navigation system: the inertial navigation system propagates the carrier motion from its own collected data, and the monocular camera estimates its state through image feature matching. The AprilTag tag is similar to a two-dimensional code tag, has 6 DOF (position and orientation), can be created with an ordinary printer, and is easier to detect and faster to decode than a two-dimensional code tag. In this approach, AprilTag tags can be placed in indoor or outdoor scenes.
In step (2), the camera acquires an undistorted image; after processing by the AprilTag detector, a unique identifier (id) is assigned to each detected tag, and the 3D position and orientation of the camera relative to the tag are estimated by iteratively optimizing the reprojection error, i.e. the error between the projected tag corner points and their detected positions in image space.
In the step (3), the state and the prediction model of the EKF filter are established by analyzing the camera observation model and the IMU measurement model.
The camera observation model:
according to the imaging principle of the pinhole camera model, a 3D point $P = (X, Y, Z)$ in the camera coordinate system corresponds to the pixel coordinate $P_{uv} = (u, v)$:

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$

This formula can be simplified to:

$$Z P_{uv} = K P_C$$
defining a camera coordinate system C and a label coordinate system t. Where the origin of the coordinate system of the tag is the geometric center of Apriltag and the z-axis is perpendicular to the tag plane. When the detector first detects the tag, initial pose information is provided to the camera. It estimates the relative translation between each tag and the camera during the subsequent detection process. Given the relative position and pose of a particular tag with respect to the camera frame,andwe can calculate the position of the ith tag angle (fixed on the tag coordinate system T) as seen from the camera:
although the detector may provide an estimate of the pose of the camera with respect to the current tag and is continually optimized in the filter, this noise is not a problemThe size of the sound is heavily dependent on the position and orientation of the current tag in the camera frame, which makes it difficult to fit an accurate noise model. Thus, the method uses an observation model of the observation tag, which can directly model the noise at the pixel position of the detected corner point. Using the above quantities projected onto the image plane, we can derive the corresponding pixel coordinate measurements, where we assume its Gaussian noise model np,i~σ(0,Rp)
The chosen visual measurement model has the advantage that the noise of the projected tag corner points is largely independent of the camera pose, so the noise can be assumed constant and identical for all tags and measurements.
An IMU model:
Define an IMU coordinate system $i$ and a global coordinate system $G$. The state vector of the IMU may be represented as:

$$X_{IMU}(t) = \begin{bmatrix} {}^{i}q_G^T(t) & b_g^T(t) & {}^{G}v_i^T(t) & b_a^T(t) & {}^{G}p_i^T(t) \end{bmatrix}^T$$

wherein ${}^{i}q_G(t)$, ${}^{G}v_i(t)$ and ${}^{G}p_i(t)$ represent the rotation, velocity and displacement, respectively, of the IMU coordinate system $i$ relative to the global coordinate system $G$, and $b_g(t)$ and $b_a(t)$ represent the biases of the gyroscope and accelerometer, respectively.
The model of the evolution of the system state over time is:

$${}^{i}\dot{q}_G(t) = \tfrac{1}{2}\,\Omega({}^{i}\omega(t))\, {}^{i}q_G(t), \quad {}^{G}\dot{v}_i(t) = {}^{G}a(t), \quad {}^{G}\dot{p}_i(t) = {}^{G}v_i(t), \quad \dot{b}_g(t) = n_{wg}, \quad \dot{b}_a(t) = n_{wa}$$

wherein ${}^{i}\omega$ and ${}^{G}a$ are the angular velocity and linear acceleration, and $n_{wg}$ and $n_{wa}$ represent the Gaussian white noise of the IMU.
The measurements of the gyroscope and accelerometer can be expressed as:

$$\omega_m(t) = {}^{i}\omega(t) + b_g(t) + n_g(t), \qquad a_m(t) = C({}^{i}q_G(t))\left({}^{G}a(t) - {}^{G}g\right) + b_a(t) + n_a(t)$$
To propagate the state of the moving carrier, the filter uses the inertial navigation measurements, incorporates the pose of the tag into the filter state, and performs updates based on the available tag corner measurements. Based on the sensor models established previously, we can establish the state equation of the filter:

$$X = \begin{bmatrix} P & v & q & b_g & b_a & P_t & q_t & P_{ic} & q_{ic} \end{bmatrix} \quad (28)$$

wherein $P$, $v$, $q$ respectively represent the position, velocity and attitude of the moving carrier, $P_t$ and $q_t$ respectively represent the pose information of the tags, and $P_{ic}$ and $q_{ic}$ respectively represent the pose information of the camera-to-IMU transformation. Differentiating these states and combining them with the IMU state equations yields the filter's prediction model.
We add additional continuous Gaussian white noise processes $w_P$, $w_{P_t}$, $w_{q_t}$, $w_{P_C}$ and $w_{q_v}$ to excite the full filter state and to handle the modeling errors resulting from the subsequent state discretization.
In step (4), when the detector observes a tag, the reprojection error of the tag corners is used as the update term of the EKF filter to perform the update step, finally obtaining an accurate motion estimate.
For each tag corner $i$, an update term $y_i$ is defined based on equation (4) as the corner's reprojection error:

$$y_i = \tilde{P}_i - \pi(\hat{P}_{Ci})$$

The state of the new filter is then tracked continuously using the standard extended Kalman filter update equations, yielding an accurate motion estimate of the moving object.
The above examples are to be construed as merely illustrative and not limiting of the remainder of the disclosure. After reading this description, a skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall within the scope of the invention defined by the claims.
Claims (7)
1. A visual-inertial motion estimation method based on environmental constraints is characterized by comprising the following steps:
step 1, installing a strapdown inertial navigation system and a monocular camera on a carrier, completing calibration of the strapdown inertial navigation system and the monocular camera, and placing AprilTag tags in the scene;
step 2, acquiring an undistorted image with the monocular camera, processing it with an AprilTag detector, assigning each detected tag a unique identifier id, and estimating the relative transformation between each tag and the monocular camera by iteratively optimizing the reprojection error of the 3D corner points on the tag;
step 3, establishing the state and prediction model of the EKF filter by analyzing the IMU measurement model and the camera observation model and including the pose of the tag in the filter state;
step 4, performing the update step using the reprojection error of the tag corners as the update term of the EKF filter, finally obtaining an accurate motion estimate.
2. The visual-inertial motion estimation method based on environmental constraints according to claim 1, wherein in step 1 a strapdown inertial navigation system and a monocular camera are installed on a moving carrier, together forming a tightly coupled visual-inertial integrated navigation system; the strapdown inertial navigation system describes the carrier motion from its own collected data, and the monocular camera estimates its state through image feature matching; the AprilTag tag is a mature visual fiducial tag, commonly used for the positioning of robots and unmanned aerial vehicles, more advanced than a two-dimensional code and similar to a two-dimensional code tag, carries 6-DOF position and attitude information, and can be detected more reliably and from longer range, so that AprilTag tags can be placed in indoor or outdoor scenes.
3. The visual-inertial motion estimation method based on environmental constraints according to claim 1, wherein in step 2 the monocular camera acquires an undistorted image; after processing by the AprilTag detector, a unique identifier id is assigned to each detected tag, and the 3D position and orientation of the camera with respect to the tag are estimated by iteratively optimizing the reprojection error, i.e. the error between the projected tag corner points and their detected positions in image space.
4. The method for estimating visual-inertial motion based on environmental constraints according to claim 1, wherein the camera observation model in step 3 is:
according to the imaging principle of the pinhole camera model, a 3D point $P = (X, Y, Z)$ in the camera coordinate system corresponds to the pixel coordinate $P_{uv} = (u, v)$ given by:

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (1)$$

where $K$ is called the camera's intrinsic parameter matrix and $f_x, f_y, c_x, c_y$ are the camera intrinsic parameters.

This formula is simplified to:

$$Z P_{uv} = K P_C \quad (2)$$

where $P_{uv}$ represents a point in camera pixel coordinates and $P_C$ represents a point in camera coordinates.
Defining a camera coordinate system C and a label coordinate system t, wherein the origin of the label coordinate system is the geometric center of April tag, and the z axis is vertical to the label plane; when a tag is first detected by the detector, the camera is provided with initial pose information, and the relative transition between each tag and the camera is estimated in a subsequent detection process, given the relative position and pose P of the particular tag with respect to the camera frameCtAnd q istCCalculating the ith tag corner point P seen from the cameraCiPosition fixed on the tag coordinate system T:
modeling noise at pixel position of detected corner, projecting the number to image plane to obtain corresponding pixel coordinate measured valueAssume its Gaussian noise model np,i~σ(0,Rp):Rp、np,iThat is, a Gaussian noise model, which is fit to (0, R)p) The normal distribution of (c),
according to the reprojection error definition of the camera: the error obtained by comparing the pixel coordinates (observed projection position) with the position obtained by projecting the 3D point according to the currently estimated pose, i.e. the reprojection error e, can be expressed as:
e=Pi-fPCi(5) 。
5. the method according to claim 4, wherein the IMU model in step 3 is:
defining an IMU coordinate system $i$ and a global coordinate system $G$, and expressing the IMU state vector as:

$$X_{IMU}(t) = \begin{bmatrix} {}^{i}q_G^T(t) & b_g^T(t) & {}^{G}v_i^T(t) & b_a^T(t) & {}^{G}p_i^T(t) \end{bmatrix}^T \quad (6)$$

wherein ${}^{i}q_G(t)$, ${}^{G}v_i(t)$ and ${}^{G}p_i(t)$ represent the rotation, velocity and displacement, respectively, of the IMU coordinate system $i$ relative to the global coordinate system $G$, and $b_g(t)$ and $b_a(t)$ represent the biases of the gyroscope and accelerometer, respectively;
the model of the evolution of the system state over time is:

$${}^{i}\dot{q}_G(t) = \tfrac{1}{2}\,\Omega({}^{i}\omega(t))\, {}^{i}q_G(t), \quad {}^{G}\dot{v}_i(t) = {}^{G}a(t), \quad {}^{G}\dot{p}_i(t) = {}^{G}v_i(t), \quad \dot{b}_g(t) = n_{wg}, \quad \dot{b}_a(t) = n_{wa} \quad (7)$$

wherein ${}^{i}\omega$ and ${}^{G}a$ are the angular velocity and linear acceleration, $\Omega(\cdot)$ is the matrix containing the three-axis gyro information required for updating the attitude quaternion, and $n_{wg}$ and $n_{wa}$ represent the Gaussian white noise of the IMU;
the measurements of the gyroscope and accelerometer can be expressed as:

$$\omega_m(t) = {}^{i}\omega(t) + b_g(t) + n_g(t), \qquad a_m(t) = C({}^{i}q_G(t))\left({}^{G}a(t) - {}^{G}g\right) + b_a(t) + n_a(t) \quad (8)$$
6. The method according to claim 5, wherein the establishing the state and prediction model of the EKF filter specifically comprises:
to propagate the state of the moving carrier, the filter uses the inertial measurements, also incorporates the pose of the tag into the filter state, and performs updates based on the available tag corner measurements; based on the previously established sensor models, the state equation of the filter can be established:

$$X = \begin{bmatrix} P & v & q & b_g & b_a & P_t & q_t & P_{ic} & q_{ic} \end{bmatrix} \quad (9)$$

wherein $P$, $v$, $q$ respectively represent the position, velocity and attitude of the moving carrier, $P_t$ and $q_t$ respectively represent the pose information of the tags, and $P_{ic}$ and $q_{ic}$ respectively represent the pose information of the camera-to-IMU transformation; differentiating these states and combining them with the IMU state equations gives:

$$\dot{P} = v, \quad \dot{v} = C(q)^T (a_m - b_a) + {}^{G}g, \quad \dot{q} = \tfrac{1}{2}\,\Omega(\omega_m - b_g)\, q, \quad \dot{b}_g = 0, \quad \dot{b}_a = 0, \quad \dot{P}_t = 0, \quad \dot{q}_t = 0, \quad \dot{P}_{ic} = 0, \quad \dot{q}_{ic} = 0 \quad (10)$$

additional continuous Gaussian white noise terms $w_P$, $w_{P_t}$, $w_{q_t}$, $w_{P_C}$ and $w_{q_v}$ are added to excite the full filter state and to handle the modeling errors caused by state discretization.
7. The method according to claim 6, wherein in step 4, when the detector observes a tag, the update step is performed using the reprojection error of the tag corners as the update term of the EKF filter, finally obtaining an accurate motion estimate, specifically comprising:
for each tag corner $i$, defining an update term $y_i$ based on equation (4) as the reprojection error of the tag corner:

$$y_i = \tilde{P}_i - \pi(\hat{P}_{Ci})$$

and correcting the estimate of the prediction stage with the observation of the tag at the current moment, continuously tracking the new filter state to obtain an accurate motion estimate of the moving object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010041988.1A CN111197984A (en) | 2020-01-15 | 2020-01-15 | Vision-inertial motion estimation method based on environmental constraint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010041988.1A CN111197984A (en) | 2020-01-15 | 2020-01-15 | Vision-inertial motion estimation method based on environmental constraint |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111197984A true CN111197984A (en) | 2020-05-26 |
Family
ID=70744678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010041988.1A Pending CN111197984A (en) | 2020-01-15 | 2020-01-15 | Vision-inertial motion estimation method based on environmental constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111197984A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112184812A (en) * | 2020-09-23 | 2021-01-05 | 广东海洋大学 | Method for improving identification and positioning precision of unmanned aerial vehicle camera to Apriltag, positioning method and positioning system |
CN112256001A (en) * | 2020-09-29 | 2021-01-22 | 华南理工大学 | Visual servo control method for mobile robot under visual angle constraint |
CN112270716A (en) * | 2020-10-30 | 2021-01-26 | 浙江理工大学 | Decoding positioning method of artificial visual landmark |
CN112700503A (en) * | 2020-12-29 | 2021-04-23 | 合肥学院 | Dtag-based intelligent garbage truck relative pose positioning method and device |
CN112862768A (en) * | 2021-01-28 | 2021-05-28 | 重庆邮电大学 | Adaptive monocular VIO (visual image analysis) initialization method based on point-line characteristics |
CN113218394A (en) * | 2021-04-20 | 2021-08-06 | 浙江大学 | Indoor visual positioning method and system for flapping wing aircraft |
CN113218403A (en) * | 2021-05-14 | 2021-08-06 | 哈尔滨工程大学 | AGV system of inertia vision combination formula location |
CN113237478A (en) * | 2021-05-27 | 2021-08-10 | 哈尔滨工业大学 | Unmanned aerial vehicle attitude and position estimation method and unmanned aerial vehicle |
CN113280817A (en) * | 2020-07-08 | 2021-08-20 | 阿里巴巴集团控股有限公司 | Visual navigation based on landmarks |
CN114715447A (en) * | 2022-04-19 | 2022-07-08 | 北京航空航天大学 | Cell spacecraft module docking device and visual alignment method |
CN115388902A (en) * | 2022-10-28 | 2022-11-25 | 苏州工业园区测绘地理信息有限公司 | Indoor positioning method and system, AR indoor positioning navigation method and system |
CN117848331A (en) * | 2024-03-06 | 2024-04-09 | 河北美泰电子科技有限公司 | Positioning method and device based on visual tag map |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102768042A (en) * | 2012-07-11 | 2012-11-07 | 清华大学 | Visual-inertial combined navigation method |
CN103827917A (en) * | 2011-07-25 | 2014-05-28 | 科英布拉大学 | Method and apparatus for automatic camera calibration using one or more images of a checkerboard pattern |
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
CN107180215A (en) * | 2017-05-31 | 2017-09-19 | 同济大学 | Figure and high-precision locating method are built in parking lot based on warehouse compartment and Quick Response Code automatically |
CN108288294A (en) * | 2018-01-17 | 2018-07-17 | 视缘(上海)智能科技有限公司 | A kind of outer ginseng scaling method of a 3D phases group of planes |
CN108627092A (en) * | 2018-04-17 | 2018-10-09 | 南京阿凡达机器人科技有限公司 | A kind of measurement method, system, storage medium and the mobile terminal of package volume |
CN108734737A (en) * | 2018-06-14 | 2018-11-02 | 哈尔滨工业大学 | The method that view-based access control model SLAM estimation spaces rotate noncooperative target shaft |
CN108759826A (en) * | 2018-04-12 | 2018-11-06 | 浙江工业大学 | A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane |
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision |
- 2020-01-15: application CN202010041988.1A filed in China (CN); published as CN111197984A, status active, Pending
Non-Patent Citations (1)
Title |
---|
MICHAEL NEUNERT 等: "An open source, fiducial based, visual-inertial motion capture system", 《2016 19TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION)》 * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113280817A (en) * | 2020-07-08 | 2021-08-20 | 阿里巴巴集团控股有限公司 | Visual navigation based on landmarks |
CN112184812A (en) * | 2020-09-23 | 2021-01-05 | 广东海洋大学 | Method for improving identification and positioning precision of unmanned aerial vehicle camera to Apriltag, positioning method and positioning system |
CN112184812B (en) * | 2020-09-23 | 2023-09-22 | 广东海洋大学 | Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system |
CN112256001A (en) * | 2020-09-29 | 2021-01-22 | 华南理工大学 | Visual servo control method for mobile robot under visual angle constraint |
CN112270716A (en) * | 2020-10-30 | 2021-01-26 | 浙江理工大学 | Decoding positioning method of artificial visual landmark |
CN112270716B (en) * | 2020-10-30 | 2024-01-05 | 浙江理工大学 | Decoding and positioning method for artificial visual landmarks |
CN112700503A (en) * | 2020-12-29 | 2021-04-23 | 合肥学院 | Dtag-based intelligent garbage truck relative pose positioning method and device |
CN112862768A (en) * | 2021-01-28 | 2021-05-28 | 重庆邮电大学 | Adaptive monocular VIO (visual image analysis) initialization method based on point-line characteristics |
CN112862768B (en) * | 2021-01-28 | 2022-08-02 | 重庆邮电大学 | Adaptive monocular VIO (visual image analysis) initialization method based on point-line characteristics |
CN113218394A (en) * | 2021-04-20 | 2021-08-06 | 浙江大学 | Indoor visual positioning method and system for flapping wing aircraft |
CN113218403B (en) * | 2021-05-14 | 2022-09-09 | 哈尔滨工程大学 | AGV system of inertia vision combination formula location |
CN113218403A (en) * | 2021-05-14 | 2021-08-06 | 哈尔滨工程大学 | AGV system of inertia vision combination formula location |
CN113237478A (en) * | 2021-05-27 | 2021-08-10 | 哈尔滨工业大学 | Unmanned aerial vehicle attitude and position estimation method and unmanned aerial vehicle |
CN113237478B (en) * | 2021-05-27 | 2022-10-14 | 哈尔滨工业大学 | Unmanned aerial vehicle attitude and position estimation method and unmanned aerial vehicle |
CN114715447A (en) * | 2022-04-19 | 2022-07-08 | 北京航空航天大学 | Cell spacecraft module docking device and visual alignment method |
CN115388902A (en) * | 2022-10-28 | 2022-11-25 | 苏州工业园区测绘地理信息有限公司 | Indoor positioning method and system, AR indoor positioning navigation method and system |
CN117848331A (en) * | 2024-03-06 | 2024-04-09 | 河北美泰电子科技有限公司 | Positioning method and device based on visual tag map |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111197984A (en) | Vision-inertial motion estimation method based on environmental constraint | |
Lupton et al. | Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions | |
US9071829B2 (en) | Method and system for fusing data arising from image sensors and from motion or position sensors | |
CN114088087B (en) | High-reliability high-precision navigation positioning method and system under unmanned aerial vehicle GPS-DENIED | |
CN110388919B (en) | Three-dimensional model positioning method based on feature map and inertial measurement in augmented reality | |
CN111932674A (en) | Optimization method of line laser vision inertial system | |
CN111083633B (en) | Mobile terminal positioning system, establishment method thereof and positioning method of mobile terminal | |
CN111595342B (en) | Indoor positioning method and system capable of being deployed in large scale | |
CN110352331A (en) | The method and clouds terrace system of the attitude algorithm of hand-held holder | |
JP2005182834A (en) | Method and apparatus for using rotational movement amount of mobile device and computer-readable recording medium for storing computer program | |
CN110751123A (en) | Monocular vision inertial odometer system and method | |
CN111504314B (en) | IMU and rigid body pose fusion method, device, equipment and storage medium | |
Zheng et al. | SE (2)-constrained visual inertial fusion for ground vehicles | |
CN115371665A (en) | Mobile robot positioning method based on depth camera and inertia fusion | |
Huai et al. | Real-time large scale 3D reconstruction by fusing Kinect and IMU data | |
JP6922348B2 (en) | Information processing equipment, methods, and programs | |
CN112179373A (en) | Measuring method of visual odometer and visual odometer | |
CN114972668A (en) | Laser SLAM method and system based on height information | |
Xian et al. | Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach | |
Scheuermann et al. | Mobile augmented reality based annotation system: A cyber-physical human system | |
Irmisch et al. | Simulation framework for a visual-inertial navigation system | |
CN116952229A (en) | Unmanned aerial vehicle positioning method, device, system and storage medium | |
CN116659490A (en) | Low cost vision-inertial fusion SLAM method | |
CN111862146A (en) | Target object positioning method and device | |
Zhang et al. | Monocular visual-inertial and robotic-arm calibration in a unifying framework |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20200526 |