CN112884838A - Robot autonomous positioning method - Google Patents

Robot autonomous positioning method

Info

Publication number
CN112884838A
CN112884838A
Authority
CN
China
Prior art keywords
camera
point
image
pose
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110282300.3A
Other languages
Chinese (zh)
Other versions
CN112884838B (en)
Inventor
薛方正
刘世敏
岑汝平
苏晓杰
江涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202110282300.3A priority Critical patent/CN112884838B/en
Publication of CN112884838A publication Critical patent/CN112884838A/en
Application granted granted Critical
Publication of CN112884838B publication Critical patent/CN112884838B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot autonomous positioning method, which comprises the following steps: 1) the robot collects the current environment image through a camera; 2) the current frame image collected by the camera and a selected reference image serving as the positioning reference are converted into the HSI color space; 3) the projection point $p_j^1$ of a point $P_j$ in the real environment space is extracted in the reference image; 4) the projection point $p_j^2$ of $P_j$ in the current frame image is calculated; 5) the projection error $r_j$ between the projection points $p_j^1$ and $p_j^2$ is calculated; 6) the optimal solution of the camera pose is obtained by continuously iterating to minimize the objective function $E(\xi)$. Through a more accurate image registration result, i.e. by using more image information and constraint relations, the robot autonomous positioning method improves the pose estimation accuracy of the robot, that is, the accuracy of robot autonomous positioning.

Description

Robot autonomous positioning method
Technical Field
The invention relates to the technical field of computer vision and robotics, in particular to a robot autonomous positioning method.
Background
Existing robot autonomous positioning methods adopt the direct method of visual SLAM, which relies on the photometric-invariance assumption: the input image is converted directly into a grayscale image, and the camera motion and the point projections are estimated simultaneously from the pixel gray-level information. However, by the visual characteristics of the human eye, humans are more sensitive to color than to gray level; moreover, in a real camera imaging system the gray-level constancy assumption is violated by the camera's automatic exposure and by specular reflection on object surfaces. Using gray-level information alone may therefore cause image alignment to fail.
In brief, image alignment aims to find the optimal image transformation that establishes the spatial correspondence between different images, and it is widely applied in the field of computer vision. Beyond visual SLAM, many fields such as image depth estimation, three-dimensional reconstruction, and visual tracking are built on image alignment and rely on it to estimate the correspondence between images. If image alignment fails or is inaccurate, the accuracy of the camera pose estimate, the pixel depth estimates, and related quantities suffers directly.
Therefore, for the practical positioning requirements of robots and related fields, a more accurate and robust image alignment result helps to improve the pose estimation accuracy of the robot and plays an important role in solving the robot positioning problem.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a robot autonomous positioning method that addresses the technical problem in existing robot autonomous positioning technology whereby inaccurate image alignment degrades the positioning accuracy of the robot.
The invention discloses an autonomous robot positioning method, which comprises the following steps:
1) the robot collects the current environment image through a camera;
2) converting the current frame image collected by the camera and a selected reference image serving as the positioning reference into the HSI color space to obtain the three components H, S, and I;
3) extracting, in the reference image, the projection point $p_j^1$ of a point $P_j$ in the real environment space:

$$p_j^1 = \frac{1}{Z_1} C K P_j \qquad (1)$$

The pixel $p_j^1$ has gray value $I_1(p_j^1)$ and color component $H_1(p_j^1)$.

In the above formula (1), $K$ is the camera intrinsic parameter matrix; $Z_1$ is the depth coordinate of the point $P_j$ in the reference-frame camera coordinate system, with $P_j = [X_j, Y_j, Z_j]^T \in \mathbb{R}^3$; and $C$ is the conversion matrix from homogeneous to non-homogeneous coordinates,

$$C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
4) calculating the projection point $p_j^2$ of the point $P_j$ in the current frame image:

$$p_j^2 = \frac{1}{Z_2} C K (R P_j + t) \qquad (2)$$

The pixel $p_j^2$ has gray value $I_2(p_j^2)$ and color component $H_2(p_j^2)$.

In the above formula (2), $Z_2$ is the depth coordinate of $P_j$ in the current-frame camera coordinate system; $R$ is the estimated pose rotation of the current frame image relative to the reference image; $t$ is the estimated pose translation of the current frame image relative to the reference image; and $\xi$ is the Lie algebra element corresponding to the camera pose $(R, t)$, so the Lie algebra $\xi$ is used to represent the camera pose:

$$\exp(\xi^\wedge) = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

where $\xi^\wedge$ is the antisymmetric matrix of $\xi$;
5) calculating projected points
Figure BDA00029790638900000213
And projection point
Figure BDA00029790638900000214
Projection error of rjThe projection error comprises a projection luminosity error eIjAnd projection color error eHj
Figure BDA00029790638900000215
6) assuming that the gray value and the color value of the same point $P_j$ are unchanged between the current frame image and the reference image, the optimal solution $\xi^*$ of the camera pose is obtained by continuously and iteratively minimizing the objective function $E(\xi)$:

$$\xi^* = \arg\min_{\xi} E(\xi) = \arg\min_{\xi} \sum_{j=1}^{N} w_j \left\| r_j \right\|_2^2$$

In the above formula, $w_j$ is the weight coefficient and $N$ is the number of points $P_j$.
Further, the iterative solution in step 6) comprises the following steps:

Step 1: compute the Jacobian matrix $J_j$ of the error $r_j$ with respect to the camera pose Lie algebra $\xi$:

$$J_j = \frac{\partial r_j}{\partial \xi}$$

Step 2: based on the Jacobian matrix $J_j$, compute the descent step, i.e. the pose increment $\delta_\xi$, by the Gauss-Newton method:

$$\delta_\xi = -\left( \sum_{j=1}^{N} w_j J_j^{T} J_j \right)^{-1} \sum_{j=1}^{N} w_j J_j^{T} r_j$$

Step 3: update the camera pose $\xi$:

$$\xi = \xi_0 + \delta_\xi$$

where $\xi_0$ denotes the camera pose obtained in the previous iteration, $\delta_\xi$ the pose increment, and $\xi$ the current camera pose.

Step 4: repeat Steps 1-3 until the convergence condition is met; the loop then ends, giving the optimal camera pose $\xi^* = \xi$.
The invention has the beneficial effects that:
the robot autonomous positioning method disclosed by the invention, 1) the pixel point errors of the luminosity component and the color component are combined, and as more image information is utilized, the image matching constraint relation is enhanced, and the matching precision of the multi-view image in the illumination change environment is improved; 2) the weighted value is added in the overall optimization objective function, so that the sensitivity of pixel point errors to brightness changes can be relieved to a certain extent, and if the weighted value of the pixel points in the area with large image luminosity error changes is reduced, the image alignment accuracy under the conditions of camera exposure and the like can be improved.
Because the gray-value constancy assumption is difficult to satisfy in practice, the robot autonomous positioning method can better estimate the camera motion and the point projections when the whole image becomes brighter or darker, by drawing on the additional information in both the pixel gray values and the colors.
Therefore, through a more accurate image registration result, i.e. by using more image information and constraint relations, the invention improves the pose estimation accuracy of the robot, that is, the accuracy of robot autonomous positioning.
Drawings
Fig. 1 shows the relationship between a three-dimensional space point and its image projection points; the five-pointed star in the figure represents a point in three-dimensional space.
Detailed Description
The invention is further described below with reference to the figures and examples.
The robot autonomous positioning method in the embodiment comprises the following steps:
1) the robot collects the current environment image through the camera.
2) The current frame image collected by the camera and the selected reference image serving as the positioning reference are converted into the HSI color space to obtain the three components H, S, and I.
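For illustration, a minimal conversion sketch in Python/NumPy follows (not part of the patent). It assumes an RGB image with float values in [0, 1] and uses the standard geometric HSI formulas; the patent does not prescribe a particular HSI variant.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (H x W x 3, floats in [0, 1]) to HSI components."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # guards against division by zero

    # Intensity I: mean of the three channels.
    i = (r + g + b) / 3.0

    # Saturation S: 1 - min(channel) / intensity.
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)

    # Hue H: angle in the chromatic plane, in radians in [0, 2*pi).
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)

    return h, s, i
```

In the sketches below, the hue components of the reference image and the current frame are denoted $H_1$, $H_2$ and the intensity (gray) components $I_1$, $I_2$.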
3) Extracting, in the reference image, the projection point $p_j^1$ of a point $P_j$ in the real environment space:

$$p_j^1 = \begin{bmatrix} u_j^1 \\ v_j^1 \end{bmatrix} = \frac{1}{Z_1} C K P_j \qquad (1)$$

The pixel $p_j^1$ has gray value $I_1(p_j^1)$ and color component $H_1(p_j^1)$; $u_j^1$ denotes the row of the projection point $p_j^1$ in the image array and $v_j^1$ its column, i.e. $(u_j^1, v_j^1)$ are the image coordinates of the projection point $p_j^1$.

In the above formula (1), $K$ is the camera intrinsic parameter matrix; $Z_1$ is the depth coordinate of the point $P_j$ in the reference-frame camera coordinate system, with $P_j = [X_j, Y_j, Z_j]^T \in \mathbb{R}^3$; and $C$ is the conversion matrix from homogeneous to non-homogeneous coordinates,

$$C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
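As a concrete reading of formula (1), the following sketch (an illustrative assumption, not the patent's code) takes $K$ as a 3x3 pinhole intrinsic matrix and a point already expressed in reference-frame camera coordinates.

```python
import numpy as np

# Conversion matrix C of formula (1): drops the homogeneous coordinate.
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

def project_reference(P_j, K, C=C):
    """Project a 3D point P_j (reference-frame camera coordinates) to the
    pixel p_j^1 of formula (1): p_j^1 = (1/Z_1) * C * K * P_j."""
    Z_1 = P_j[2]                 # depth coordinate in the reference frame
    return C @ (K @ P_j / Z_1)   # image coordinates [u, v]
```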
4) Calculating the projection point $p_j^2$ of the point $P_j$ in the current frame image:

$$p_j^2 = \begin{bmatrix} u_j^2 \\ v_j^2 \end{bmatrix} = \frac{1}{Z_2} C K (R P_j + t) \qquad (2)$$

The pixel $p_j^2$ has gray value $I_2(p_j^2)$ and color component $H_2(p_j^2)$; $u_j^2$ denotes the row of the projection point $p_j^2$ in the image array and $v_j^2$ its column, i.e. $(u_j^2, v_j^2)$ are the image coordinates of the projection point $p_j^2$.

In the above formula (2), $Z_2$ is the depth coordinate of $P_j$ in the current-frame camera coordinate system; $R$ is the estimated pose rotation of the current frame image relative to the reference image; $t$ is the estimated pose translation of the current frame image relative to the reference image; and $\xi$ is the Lie algebra element corresponding to the camera pose $(R, t)$, so the Lie algebra $\xi$ is used to represent the camera pose:

$$\exp(\xi^\wedge) = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

where $\xi^\wedge$ is the antisymmetric matrix of $\xi$.
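Formula (2) can be sketched in the same way. The se(3) vector layout below (translation part first, rotation part last) is an assumed convention, since the patent only states that $\exp(\xi^\wedge)$ packs $(R, t)$ into a 4x4 transform.

```python
import numpy as np

def se3_exp(xi):
    """Exponential map from xi = [rho, phi] in se(3) to a 4x4 transform
    exp(xi^) = [[R, t], [0, 1]], via the Rodrigues formula."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        R, V = np.eye(3), np.eye(3)   # first-order limit for tiny rotations
    else:
        a = phi / theta               # unit rotation axis
        a_hat = np.array([[0.0, -a[2], a[1]],
                          [a[2], 0.0, -a[0]],
                          [-a[1], a[0], 0.0]])
        R = (np.cos(theta) * np.eye(3)
             + (1.0 - np.cos(theta)) * np.outer(a, a)
             + np.sin(theta) * a_hat)
        V = (np.sin(theta) / theta * np.eye(3)
             + (1.0 - np.sin(theta) / theta) * np.outer(a, a)
             + (1.0 - np.cos(theta)) / theta * a_hat)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho  # t = V * rho
    return T

def project_current(P_j, K, xi, C):
    """Project P_j into the current frame via formula (2):
    p_j^2 = (1/Z_2) * C * K * (R * P_j + t), with (R, t) = exp(xi^)."""
    T = se3_exp(xi)
    P_c = T[:3, :3] @ P_j + T[:3, 3]  # point in current-frame coordinates
    Z_2 = P_c[2]
    return C @ (K @ P_c / Z_2)
```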
5) Calculating the projection error $r_j$ between the projection points $p_j^1$ and $p_j^2$; the projection error comprises the projection photometric error $e_{Ij}$ and the projection color error $e_{Hj}$:

$$r_j = \begin{bmatrix} e_{Ij} \\ e_{Hj} \end{bmatrix} = \begin{bmatrix} I_1(p_j^1) - I_2(p_j^2) \\ H_1(p_j^1) - H_2(p_j^2) \end{bmatrix} \qquad (3)$$
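A sketch of the residual of formula (3) follows; the bilinear sub-pixel sampling is an implementation choice, not something the patent specifies, and it assumes the projected points land strictly inside the image.

```python
import numpy as np

def bilinear(img, uv):
    """Bilinearly interpolate img at the sub-pixel location uv = [u, v]
    (u = row, v = column, following the patent's coordinate convention)."""
    u, v = uv
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * img[u0, v0]
            + (1 - du) * dv * img[u0, v0 + 1]
            + du * (1 - dv) * img[u0 + 1, v0]
            + du * dv * img[u0 + 1, v0 + 1])

def projection_error(I1, H1, I2, H2, p1, p2):
    """Stack the photometric error e_Ij and the color error e_Hj into the
    residual r_j of formula (3)."""
    e_I = bilinear(I1, p1) - bilinear(I2, p2)  # gray-value difference
    e_H = bilinear(H1, p1) - bilinear(H2, p2)  # hue-component difference
    return np.array([e_I, e_H])
```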
6) Assuming that the gray value and the color value of the same point $P_j$ are unchanged between the current frame image and the reference image, the camera pose $\xi$ is obtained by minimizing the objective function $E(\xi)$:

$$E(\xi) = \sum_{j=1}^{N} \left\| r_j \right\|_2^2 \qquad (4)$$

where $N$ denotes the number of points $P_j$.

The optimal camera pose $\xi^*$ is solved by maximizing the posterior probability function:

$$\xi^* = \arg\max_{\xi} P(\xi \mid r_1, \ldots, r_N) \qquad (5)$$

Using the Bayes formula to convert the posterior probability density into a likelihood and a prior, formula (5) is equivalently replaced by:

$$\xi^* = \arg\max_{\xi} P(r_1, \ldots, r_N \mid \xi) \, P(\xi) \qquad (6)$$

Setting the partial derivative of the log-probability to zero and neglecting the motion-information term $P(\xi)$ gives:

$$\sum_{j=1}^{N} \frac{\partial \log P(r_j \mid \xi)}{\partial r_j} \cdot \frac{\partial r_j}{\partial \xi} = 0 \qquad (7)$$

Further letting

$$w_j = \frac{1}{r_j} \cdot \frac{\partial \log P(r_j \mid \xi)}{\partial r_j}$$

formula (7) is arranged as:

$$\sum_{j=1}^{N} w_j \, r_j \, \frac{\partial r_j}{\partial \xi} = 0 \qquad (8)$$

Since formula (8) is in fact the normal equation of a weighted least-squares problem, the optimization of formula (5) is equivalent to:

$$\xi^* = \arg\min_{\xi} \sum_{j=1}^{N} w_j \left\| r_j \right\|_2^2$$

That is, each projection error $r_j$ is given a weight coefficient $w_j$, and the optimization objective function of formula (4) is modified to:

$$E(\xi) = \sum_{j=1}^{N} w_j \left\| r_j \right\|_2^2$$
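The patent leaves the error distribution $P(r_j \mid \xi)$, and therefore the exact form of $w_j$, open. As one common concrete choice (an assumption here, not the patent's definition), a Huber robust kernel down-weights pixels whose error is large, matching the stated intent of reducing the weight of pixels in regions with large photometric error change:

```python
import numpy as np

def huber_weight(r_j, delta=1.345):
    """One possible weight w_j: the Huber influence-function weight.
    The threshold delta is a tuning parameter, not taken from the patent."""
    norm = np.linalg.norm(r_j)
    return 1.0 if norm <= delta else delta / norm
```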
by minimizing an objective function
Figure BDA0002979063890000064
Continuously and iteratively solving to obtain an optimal solution of the camera pose, wherein the iteratively solving comprises the following steps:
step1 calculating the error rjJacobian matrix J about camera pose lie algebra xij
Figure BDA0002979063890000065
Step2 based on the derivative matrix JjCalculating the step length of descent, i.e. attitude increment delta, by Gauss-Newton methodξ
Figure BDA0002979063890000066
Step3, updating the camera pose quantity ξ:
ξ=ξ0ξ
wherein ξ0Representing the last iteration-derived camera pose quantity, δξRepresenting the increment of the camera pose, and xi representing the current camera pose quantity;
and (5) Setp4, repeatedly executing the Step1-3 until a convergence condition is met, ending the circulation to obtain the optimal pose solution ξ of the camera*=ξ。
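Putting Steps 1-4 together, the sketch below (reusing the helper functions from the earlier sketches) runs the weighted Gauss-Newton loop. The central-difference Jacobian and the additive pose update $\xi \leftarrow \xi + \delta_\xi$ are simplifying assumptions, since the patent does not give $J_j$ in closed form; in practice an analytic Jacobian (the image gradient chained with the projection derivative) would replace the finite differences for speed.

```python
import numpy as np

def estimate_pose(points, K, C, I1, H1, I2, H2,
                  xi0=None, max_iter=50, tol=1e-6):
    """Iterate Steps 1-4 to minimize E(xi) over the camera pose xi."""
    def residual(xi, P_j):
        p1 = project_reference(P_j, K, C)                # formula (1)
        p2 = project_current(P_j, K, xi, C)              # formula (2)
        return projection_error(I1, H1, I2, H2, p1, p2)  # formula (3)

    xi = np.zeros(6) if xi0 is None else np.asarray(xi0, dtype=float).copy()
    eps = 1e-6
    for _ in range(max_iter):
        H_acc, b_acc = np.zeros((6, 6)), np.zeros(6)
        for P_j in points:
            r_j = residual(xi, P_j)
            w_j = huber_weight(r_j)
            # Step 1: Jacobian J_j of r_j w.r.t. xi (2 x 6, central differences).
            J_j = np.zeros((2, 6))
            for k in range(6):
                d = np.zeros(6)
                d[k] = eps
                J_j[:, k] = (residual(xi + d, P_j)
                             - residual(xi - d, P_j)) / (2.0 * eps)
            H_acc += w_j * J_j.T @ J_j
            b_acc += w_j * J_j.T @ r_j
        # Step 2: Gauss-Newton pose increment delta_xi.
        delta_xi = -np.linalg.solve(H_acc, b_acc)
        # Step 3: update the pose estimate.
        xi = xi + delta_xi
        # Step 4: stop once the increment is small enough.
        if np.linalg.norm(delta_xi) < tol:
            break
    return xi
```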
Since the camera is mounted on the robot, the current camera pose, i.e. the current position of the robot, is thereby obtained.
In this embodiment, the robot autonomous positioning method: 1) combines the pixel errors of the photometric component and the color component; because more image information is used, the image matching constraints are strengthened, and the matching accuracy of multi-view images under changing illumination is improved; 2) adds a weight to each term of the overall optimization objective function, which mitigates to some extent the sensitivity of the pixel errors to brightness changes; for example, reducing the weight of pixels in regions where the photometric error changes strongly improves the image alignment accuracy under camera exposure changes and similar conditions.
Because the gray-value constancy assumption is difficult to satisfy in practice, the robot autonomous positioning method in this embodiment can better estimate the camera motion and the point projections when the whole image becomes brighter or darker, by drawing on the additional information in both the pixel gray values and the colors.
Therefore, through a more accurate image registration result, i.e. by using more image information and constraint relations, the robot autonomous positioning method improves the pose estimation accuracy of the robot and hence the accuracy of robot autonomous positioning.
Finally, the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications should be covered by the claims of the present invention.

Claims (2)

1. A robot autonomous positioning method, comprising the following steps:
1) the robot collects the current environment image through a camera;
2) converting the current frame image collected by the camera and a selected reference image serving as the positioning reference into the HSI color space to obtain the three components H, S, and I;
3) extracting, in the reference image, the projection point $p_j^1$ of a point $P_j$ in the real environment space:

$$p_j^1 = \frac{1}{Z_1} C K P_j \qquad (1)$$

the pixel $p_j^1$ having gray value $I_1(p_j^1)$ and color component $H_1(p_j^1)$;

in the above formula (1), $K$ is the camera intrinsic parameter matrix; $Z_1$ is the depth coordinate of the point $P_j$ in the reference-frame camera coordinate system, with $P_j = [X_j, Y_j, Z_j]^T \in \mathbb{R}^3$; and $C$ is the conversion matrix from homogeneous to non-homogeneous coordinates,

$$C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
4) calculating the projection point $p_j^2$ of the point $P_j$ in the current frame image:

$$p_j^2 = \frac{1}{Z_2} C K (R P_j + t) \qquad (2)$$

the pixel $p_j^2$ having gray value $I_2(p_j^2)$ and color component $H_2(p_j^2)$;

in the above formula (2), $Z_2$ is the depth coordinate of $P_j$ in the current-frame camera coordinate system; $R$ is the estimated pose rotation of the current frame image relative to the reference image; $t$ is the estimated pose translation of the current frame image relative to the reference image; and $\xi$ is the Lie algebra element corresponding to the camera pose $(R, t)$, so the Lie algebra $\xi$ is used to represent the camera pose:

$$\exp(\xi^\wedge) = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

where $\xi^\wedge$ is the antisymmetric matrix of $\xi$;
5) calculating the projection error $r_j$ between the projection points $p_j^1$ and $p_j^2$, the projection error comprising the projection photometric error $e_{Ij}$ and the projection color error $e_{Hj}$:

$$r_j = \begin{bmatrix} e_{Ij} \\ e_{Hj} \end{bmatrix} = \begin{bmatrix} I_1(p_j^1) - I_2(p_j^2) \\ H_1(p_j^1) - H_2(p_j^2) \end{bmatrix} \qquad (3)$$
6) assuming that the gray value and the color value of the same point $P_j$ are unchanged between the current frame image and the reference image, the optimal solution $\xi^*$ of the camera pose is obtained by continuously and iteratively minimizing the objective function $E(\xi)$:

$$\xi^* = \arg\min_{\xi} E(\xi) = \arg\min_{\xi} \sum_{j=1}^{N} w_j \left\| r_j \right\|_2^2$$

In the above formula, $w_j$ is the weight coefficient and $N$ is the number of points $P_j$.
2. The robot autonomous positioning method according to claim 1, characterized in that the iterative solution in step 6) comprises the following steps:

Step 1: compute the Jacobian matrix $J_j$ of the error $r_j$ with respect to the camera pose Lie algebra $\xi$:

$$J_j = \frac{\partial r_j}{\partial \xi}$$

Step 2: based on the Jacobian matrix $J_j$, compute the descent step, i.e. the pose increment $\delta_\xi$, by the Gauss-Newton method:

$$\delta_\xi = -\left( \sum_{j=1}^{N} w_j J_j^{T} J_j \right)^{-1} \sum_{j=1}^{N} w_j J_j^{T} r_j$$

Step 3: update the camera pose $\xi$:

$$\xi = \xi_0 + \delta_\xi$$

where $\xi_0$ denotes the camera pose obtained in the previous iteration, $\delta_\xi$ the pose increment, and $\xi$ the current camera pose.

Step 4: repeat Steps 1-3 until the convergence condition is met; the loop then ends, giving the optimal camera pose $\xi^* = \xi$.
CN202110282300.3A 2021-03-16 2021-03-16 Robot autonomous positioning method Active CN112884838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110282300.3A CN112884838B (en) 2021-03-16 2021-03-16 Robot autonomous positioning method

Publications (2)

Publication Number Publication Date
CN112884838A (en) 2021-06-01
CN112884838B (en) 2022-11-15

Family

ID=76042651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110282300.3A Active CN112884838B (en) 2021-03-16 2021-03-16 Robot autonomous positioning method

Country Status (1)

Country Link
CN (1) CN112884838B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101498581A (en) * Relative attitude determination method for spacecraft based on three coplanar points
US20140294298A1 (en) * Method and system for inverse halftoning utilizing inverse projection of predicted errors
US20170294020A1 (en) * Camera pose estimation
CN105976343A (en) * Image exposure correction method, device, and intelligent device
CN107945184A (en) * Mounting component detection method based on color image segmentation and gradient projection positioning
CN110152293A (en) * Positioning method and apparatus for a control device, and positioning method and apparatus for a game object
CN109405824A (en) * Multi-source perceptual positioning system suitable for intelligent connected vehicles
CN109211241A (en) * Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN109387204A (en) * Simultaneous localization and mapping method for mobile robots in indoor dynamic environments
CN109670411A (en) * Inland vessel point cloud depth image processing method and system based on generative adversarial networks
CN109801233A (en) * Enhancement method suitable for true-color remote sensing images
CN110175523A (en) * Animal recognition and avoidance method for an autonomous mobile robot, and storage medium
CN111445526A (en) * Method, device, and storage medium for estimating the pose between image frames
CN112053383A (en) * Method and device for real-time positioning of robot
CN113129451A (en) * Holographic three-dimensional image space quantitative projection method based on binocular vision positioning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHRISTIAN KERL et al.: "Robust odometry estimation for RGB-D cameras", 2013 IEEE International Conference on Robotics and Automation *
WU DAN: "Research on indoor robot positioning based on depth vision" (基于深度视觉的室内机器人定位研究), China Master's Theses Full-text Database, Information Science and Technology *
XUE FANGZHENG et al.: "Design of a CPG-based multi-layer walking controller for a biped robot" (基于CPG的双足机器人多层步行控制器设计), Control and Decision *
GAO BEIDOU: "Research on a binocular vision system for industrial robots" (面向工业机器人的双目视觉系统研究), China Master's Theses Full-text Database *

Also Published As

Publication number Publication date
CN112884838B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN109344882B (en) Convolutional neural network-based robot control target pose identification method
US11830216B2 (en) Information processing apparatus, information processing method, and storage medium
CN109754432B (en) Camera automatic calibration method and optical motion capture system
CN108362266B (en) Auxiliary monocular vision measurement method and system based on EKF laser ranging
CN113330486A (en) Depth estimation
CN110281240B (en) Method and system for positioning and picking up glass of liquid crystal display screen and vision processing system
WO2021218542A1 (en) Visual perception device based spatial calibration method and apparatus for robot body coordinate system, and storage medium
CN114283203B (en) Calibration method and system of multi-camera system
CN107040695B (en) satellite-borne video image stabilization method and system based on RPC positioning model
CN113763479B (en) Calibration method of refraction and reflection panoramic camera and IMU sensor
CN109781068B (en) Visual measurement system ground simulation evaluation system and method for space application
CN113296395A (en) Robot hand-eye calibration method in specific plane
CN114714356A (en) Method for accurately detecting calibration error of hand eye of industrial robot based on binocular vision
CN114494449A (en) Visual calibration and alignment laminating method for special-shaped product lamination
CN112184809A (en) Relative pose estimation method, device, electronic device and medium
CN113362377B (en) VO weighted optimization method based on monocular camera
CN112884838B (en) Robot autonomous positioning method
CN111275764A (en) Depth camera visual mileage measurement method based on line segment shadow
KR101766823B1 2017-08-10 Visual odometry system and method robust to irregular illumination changes
CN117152257A (en) Method and device for multidimensional angle calculation of ground monitoring camera
JP2022011818A (en) Information processing apparatus and control method thereof
CN113256711B (en) Pose estimation method and system of monocular camera
CN114998444A (en) Robot high-precision pose measurement system based on two-channel network
CN114241059A (en) Synchronous calibration method for camera and light source in photometric stereo vision system
CN114998429A (en) Robot positioning system, method, apparatus, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant