CN114427832A - Cone motion measurement method based on machine vision - Google Patents

Cone motion measurement method based on machine vision

Info

Publication number
CN114427832A
CN114427832A
Authority
CN
China
Prior art keywords
coordinate system
motion
coordinate
laser
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111553179.XA
Other languages
Chinese (zh)
Inventor
蔡晨光
张颖
刘志华
郑德智
吕琦
夏岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Metrology
Beihang University
Original Assignee
National Institute of Metrology
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Institute of Metrology and Beihang University
Priority to CN202111553179.XA
Publication of CN114427832A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass: initial alignment, calibration or starting-up of inertial devices

Abstract

The invention discloses a cone motion measurement method based on machine vision. First, the spatial motion of a Stewart platform is converted into the planar motion of three laser points by means of a point projector and a rear projection screen; the three laser points are the projections, on the rear projection screen, of three mutually perpendicular laser beams emitted by the point projector. Second, a camera acquires the planar motion sequence of the three laser points. Then, image processing yields the coordinate values of the three laser points in the rear projection screen coordinate system. Finally, the motion information of the Stewart platform is obtained from a physical decoupling model. To achieve higher-precision motion measurement, several sets of static spatial attitude information of the Stewart platform are acquired before the spatial motion is measured, so as to obtain the spatial coordinates of the actual cone point. Compared with existing measurement methods, the method is non-contact, low-cost, simple, and fast; it can obtain the spatial coordinate information of the actual cone point and realizes high-precision cone motion measurement.

Description

Cone motion measurement method based on machine vision
Technical Field
The invention belongs to the field of motion measurement, and particularly relates to a cone motion measurement method.
Background
MEMS inertial sensors have the advantages of small size, high integration, high accuracy, and low cost, and are widely applied in fields such as structural health monitoring, autonomous driving, smart wearables, and navigation. These fields generally require the MEMS inertial sensor to have high-precision trajectory tracking performance; for this reason, the sensor must undergo dynamic performance testing so that this performance can be evaluated.
The Stewart platform can realize complex spatial motion trajectories, and the tracking performance of a MEMS inertial sensor can be tested with the cone motion about the Z axis generated by the Stewart platform. However, the actual terminal pose of the Stewart platform deviates from the theoretical pose because of phase lag and amplitude attenuation in its servo motors. To ensure the reliability of the MEMS inertial sensor test results, high-precision measurement of the Stewart platform's motion trajectory must be realized.
At present, the commonly used methods for measuring the motion trajectory of a Stewart platform are the sensor method, the laser tracker method, and the machine vision method. In the sensor method, a draw-wire encoder or rotary encoder measures the variable leg lengths, and the measurement is realized with a vector closed-loop method; the laser tracker and machine vision methods acquire the motion trajectory by measuring the real-time spatial coordinates of a target placed on the moving platform. In all of these methods, the origin of the moving platform coordinate system is taken to lie at the center of the moving platform. However, due to machining and control errors, the origin of the moving platform coordinate system (i.e., the actual cone point of the cone motion) usually deviates slightly from the center of the moving platform. To improve the measurement accuracy of the motion trajectory, a new measurement method is needed to determine the actual origin of the moving platform coordinate system.
Therefore, a cone motion machine vision measurement method based on physical decoupling is proposed. The method introduces a point projector coordinate system and a rear projection screen coordinate system, converts the spatial motion of the Stewart platform into the planar motion of three laser points, and uses the added motion information to measure both the spatial position of the real cone point and the cone motion information. The method facilitates high-precision real-time measurement of cone motion.
Disclosure of Invention
Aiming at the shortcomings of existing cone motion measurement methods, such as high system cost, limited measurement precision, and inability to determine the spatial position of the real cone point, the invention provides a fast and accurate cone motion measurement method that can determine the spatial position information of the real cone point. It comprises the following steps:
Conversion of spatial motion information to planar motion information: the spatial motion of the Stewart platform is converted by projection into the planar motion of three laser points, realizing physical decoupling of the six degrees of freedom and simplifying the mathematical model;
Acquisition of high-contrast feature marker motion sequence images: the external experimental light source is adjusted to improve the contrast between the three laser points and the background, and a planar motion sequence image containing the three laser points is obtained with a single camera;
Sub-pixel extraction of the three laser point centers: the edge-point pixel coordinates of the three laser points are obtained with an edge extraction method; the sub-pixel coordinates of the three center points are obtained by least-squares fitting of the edge points; these are converted into corresponding world coordinates through the correspondence between image pixel coordinates and world coordinates determined by camera calibration; and the world coordinates of the intersection point of the three laser beams are obtained by combining the spatial position relation of the three beams;
Establishing a mathematical model: several coordinate systems are established, and based on the spatial position relation of the components in the measuring device, a mathematical model projecting the spatial motion into planar motion is established through coordinate system conversion. The mathematical formula is expressed as follows:
^S P = ^S R_W ^W P + ^S T_W,  ^W P = ^W R_M ^M P + ^W T_M (1)
where P is the coordinate value of a feature point, R is a rotation matrix, and T is a translation matrix; the subscript denotes the original coordinate system and the superscript the target coordinate system. For example, ^S R_W is the rotation matrix from the world coordinate system {W} to the rear projection screen coordinate system {S}, and ^S P is the coordinate value of the feature point in the rear projection screen coordinate system {S}.
Since the world coordinate system {W} is parallel to the rear projection screen coordinate system {S} (so that ^S R_W = I), combining the two equations in equation (1) yields:
^W T_M = ^S T_L - ^W R_M ^M T_L - ^S T_W (2)
where ^W T_M is the translation matrix of the moving platform coordinate system {M} relative to the world coordinate system {W}; ^S T_L can be solved from the coordinate values of the three laser points in the rear projection screen coordinate system {S} together with the physical position relation of the three laser beams; ^M T_L is the coordinate value of the origin of the point projector coordinate system {L} in the moving platform coordinate system {M}, and since the point projector is fixed on the Stewart platform, ^M T_L is a constant matrix that can be obtained; ^S T_W is the coordinate value, in the rear projection screen coordinate system {S}, of the origin of the moving platform coordinate system {M} in the initial state.
Since ^M T_L contains the information of ^S T_W, it is decomposed to obtain:
^M T_L = ^S T_L(0) - ^S T_W (3)
where ^S T_L(0) is the translation matrix of the point projector coordinate system {L} relative to the rear projection screen coordinate system {S} at the initial moment.
Substituting equation (3) into equation (2) gives:
^W T_M = ^S T_L - ^W R_M ^S T_L(0) + (^W R_M - I) ^S T_W (4)
Since det(^W R_M - I) = 0, the pseudo-inverse must be used to determine ^S T_W. Because the solution obtained with the pseudo-inverse is a general solution, in order to determine a unique solution, the method obtains two instances of equation (4) by rotating the Stewart platform to two different poses and then solves for ^S T_W.
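The two-pose solve for ^S T_W via the SVD-based pseudo-inverse can be sketched numerically as follows. This is a minimal illustration under assumed values, not the patent's implementation: the two rotations, the translations, and the initial projector offset `s_tl0` are made-up examples (only the cone point (-2, -1, 4) is taken from the simulation section).

```python
import numpy as np

def solve_cone_point(R_list, w_tm_list, s_tl_list, s_tl0):
    """Solve equation (4) for the cone point ^S T_W.

    Rearranged, (4) reads (R_i - I) @ s_tw = w_tm_i - s_tl_i + R_i @ s_tl0.
    det(R - I) = 0 for any single rotation (the rotation axis is invariant),
    so equations from two poses with different rotation axes are stacked and
    solved with the SVD-based pseudo-inverse.
    """
    A = np.vstack([R - np.eye(3) for R in R_list])
    b = np.concatenate([t - s + R @ s_tl0
                        for R, t, s in zip(R_list, w_tm_list, s_tl_list)])
    return np.linalg.pinv(A) @ b  # minimum-norm least-squares solution

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def rot_y(a):
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

# Hypothetical check with synthetic data:
s_tw_true = np.array([-2.0, -1.0, 4.0])   # cone point from the simulation section
s_tl0 = np.array([0.0, 0.0, 10.0])        # assumed initial projector offset
R_list = [rot_x(0.1), rot_y(0.2)]         # two poses with different rotation axes
w_tm_list = [np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])]
# forward model, i.e. equation (4) solved for ^S T_L:
s_tl_list = [t + R @ s_tl0 - (R - np.eye(3)) @ s_tw_true
             for R, t in zip(R_list, w_tm_list)]
estimate = solve_cone_point(R_list, w_tm_list, s_tl_list, s_tl0)
```

With one pose the stacked matrix is rank-deficient (the rotation axis lies in its null space); with two poses about different axes the combined null space is trivial, so the pseudo-inverse returns the unique cone point.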
In traditional Stewart platform cone motion measurement, the origin of the moving platform coordinate system (i.e., the actual cone point of the cone motion) is generally assumed to coincide with the center of the moving platform, and the errors introduced during machining, installation, and control are not considered. By introducing a rear projection screen coordinate system {S} and a point projector coordinate system {L}, the invention can acquire the actual cone point spatial position of the cone motion through equation (4).
Acquiring the actual cone point spatial position: motion sequence images of the Stewart platform in two different poses are obtained; the world coordinates of the three laser point centers and of the laser intersection point, obtained by image processing, are substituted into the mathematical model to obtain two equations; and the spatial position of the actual cone point is obtained by computing the pseudo-inverse with the SVD method.
Resolving the cone motion: sequence images of the Stewart platform performing cone motion are obtained; the world coordinates of the three laser point centers and of the laser intersection point, obtained by image processing, are substituted into the mathematical model containing the actual cone point spatial position, and the cone motion information of the Stewart platform is obtained by solving.
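As a generic illustration of recovering a rigid-body pose from tracked point coordinates (a stand-in sketch, not the patent's physical decoupling model), an SVD-based Kabsch/Procrustes fit could look like this; the point sets and the pose below are made up:

```python
import numpy as np

def rigid_transform(P0, P1):
    """Best-fit rotation R and translation t with p1 ~ R @ p0 + t
    (Kabsch / orthogonal Procrustes via SVD), from N x 3 point sets."""
    c0, c1 = P0.mean(axis=0), P1.mean(axis=0)
    H = (P0 - c0).T @ (P1 - c1)           # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c1 - R @ c0

# Made-up check: rotate and translate four points, then recover the pose
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([-2.0, -1.0, 4.0])
P0 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.5, 0.5, 0.0]])
P1 = P0 @ R_true.T + t_true
R_est, t_est = rigid_transform(P0, P1)
```

Three non-collinear tracked points already determine the pose; a fourth is included here only to keep the cross-covariance well conditioned.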
The cone motion measurement method has the following advantages:
(1) The invention includes the cone point information in the mathematical model and realizes high-precision cone motion measurement by solving for the actual cone point information.
(2) The measurement steps are simple, the amount of computation is small, and the system cost is low; the method is applicable to cone motions with different frequencies and motion ranges.
(3) The method is a spatial motion measurement method and can realize high-precision spatial motion measurement within a certain frequency range.
Drawings
FIG. 1 is a flow chart of cone motion measurement based on machine vision;
FIG. 2 is a flow chart of decoupling measurement of conical motion based on machine vision;
fig. 3-4 are graphs of results of measurements of conical motions of specific simulation examples of the method of the present invention.
Detailed Description
In order to solve the problems of existing cone motion measurement methods, such as complex systems, high cost, and disregard of the actual motion cone point information, the invention provides a cone motion measurement method based on machine vision.
Referring to fig. 1, a flow chart of cone motion measurement based on machine vision is shown. The measurement method mainly comprises the following steps:
Step S1: the point projector is fixed on the moving platform of the Stewart platform and projects three mutually perpendicular laser beams onto the rear projection screen; the camera collects a motion sequence image containing the three laser points;
Step S2: sub-pixel extraction of the three laser center points from the sequence images, comprising: extracting the edge-point pixels of the three laser points, performing least-squares fitting on the edge points to obtain the sub-pixel coordinate values of the three laser center points, and solving the world coordinates of the laser center points based on coordinate transformation;
Step S3: establishing a physical decoupling model that projects the spatial motion information onto planar motion information according to the physical position relation of the three laser beams;
Step S4: solving the position of the actual motion cone point, comprising: obtaining two groups of equations, based on the mathematical model established in S3, from two different poses of the Stewart platform, obtaining the spatial coordinate value of the actual motion cone point with the SVD, and substituting it into the model to update it;
Step S5: decoupled measurement of the cone motion: the planar motion sequence images of the three laser points, acquired while the Stewart platform performs cone motion, are processed to obtain the coordinate values of the laser center points, and this information is substituted into the mathematical model to solve for the cone motion information of the Stewart platform;
step S6: and displaying and storing the obtained motion information.
Referring to fig. 2, a flow chart of the decoupled measurement of cone motion based on machine vision is shown. The decoupled measurement comprises the following steps:
Step S11: reading in the acquired planar motion sequence images containing the three laser points;
Step S12: extracting the pixel coordinates of the laser edge points with the Canny algorithm;
Step S13: fitting the laser edge points based on the least-squares principle to obtain the sub-pixel coordinates of the laser center points;
Step S14: converting the obtained sub-pixel coordinates of the laser center points into corresponding world coordinates through the correspondence between image pixel coordinates and world coordinates determined by camera calibration;
Step S15: obtaining the world coordinates of the intersection point of the three laser beams according to their relative position relation;
Step S16: substituting the spatial coordinate information of the three laser points and their intersection into the mathematical model containing the actual cone point spatial position, and solving to obtain the cone motion information of the Stewart platform.
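Steps S12 to S14 can be sketched as follows. For brevity this uses a least-squares circle fit rather than the ellipse fit, and synthetic edge points rather than real Canny output; the spot center (123.4, 56.7) and radius are made-up example values.

```python
import numpy as np

def fit_center(edge_xy):
    """Least-squares (Kasa) circle fit: estimates the sub-pixel center
    (a, b) of a laser spot from its edge-point pixel coordinates.

    Each edge point satisfies (x - a)^2 + (y - b)^2 = r^2, which is linear
    in (a, b, c) with c = r^2 - a^2 - b^2:  2ax + 2by + c = x^2 + y^2.
    """
    x, y = edge_xy[:, 0], edge_xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, _), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b])

# Synthetic edge points around a hypothetical spot center (123.4, 56.7)
angles = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
edge = np.column_stack([123.4 + 5.0 * np.cos(angles),
                        56.7 + 5.0 * np.sin(angles)])
center = fit_center(edge)
```

In practice the edge pixels would come from an edge detector such as OpenCV's Canny operator applied to each sequence image, an elliptical conic fit would replace the circle fit for oblique projections, and the fitted center would then be mapped to screen coordinates through the calibrated camera model.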
To verify the feasibility of the proposed method, MATLAB is used to simulate the cone motion of the Stewart platform, together with three mutually perpendicular laser beams whose motion state is consistent with the Stewart platform, and the planar motion of the three laser points formed by projecting those beams onto a common plane.
The invention realizes the cone motion simulation in MATLAB with the quaternion method and solves the motion information according to the proposed decoupling model. The simulation sets q(t) = (cos(theta/2), sin(theta/2)cos(w*t), sin(theta/2)sin(w*t), 0), where theta = 3, w = 2*pi, and the actual cone point position is (-2, -1, 4).
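The coning simulation can be sketched as follows, assuming the standard coning quaternion q(t) = (cos(theta/2), sin(theta/2)cos(wt), sin(theta/2)sin(wt), 0) with half-cone angle theta = 3 degrees and w = 2*pi rad/s (the units are an assumption; the patent states only theta = 3 and w = 2*pi):

```python
import numpy as np

def coning_quaternion(t, theta=np.deg2rad(3.0), w=2.0 * np.pi):
    """Unit quaternion (w, x, y, z) for coning motion about the Z axis:
    the rotation axis (cos wt, sin wt, 0) sweeps the XY plane while the
    rotation angle stays fixed at theta."""
    return np.array([np.cos(theta / 2),
                     np.sin(theta / 2) * np.cos(w * t),
                     np.sin(theta / 2) * np.sin(w * t),
                     0.0])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    qw, qx, qy, qz = q
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])

# The platform Z axis should stay at the fixed half-cone angle from vertical
ts = np.linspace(0.0, 1.0, 50)
tilted_z = [quat_to_rot(coning_quaternion(t)) @ np.array([0.0, 0.0, 1.0])
            for t in ts]
cos_angles = [v[2] for v in tilted_z]  # cosine of angle between rotated Z and Z
```

Because the rotation axis always lies in the XY plane, the rotated Z axis stays at exactly theta from vertical at every instant, i.e. the tip of the axis traces the cone.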
Referring to fig. 3, the Stewart platform motion information obtained when the actual cone point position is not solved for, i.e., the default cone point position (0, 0, 0) is used. FIG. 3(a) shows the spatial coordinate values of the cone point position versus time; FIG. 3(b) shows the q(t) parameters versus time. As can be seen from the figures, when the error between the actual and default cone point positions is not considered, the obtained q(t) parameters match the set values, but the solved cone point position translates around (-2, -1, 4), which does not accord with the actual situation.
Referring to fig. 4, the Stewart platform motion information obtained after the actual cone point position is solved for and compensated. FIG. 4(a) shows the spatial coordinate values of the cone point position versus time; FIG. 4(b) shows the q(t) parameters versus time. As can be seen from the figures, when the error between the actual and default cone point positions is considered, the obtained q(t) parameters match the set values, and the cone point position exhibits no translation, which accords with the actual situation.
The above is a detailed description of the simulation of the invention and is not intended to limit the invention in any way. Those skilled in the art can make many modifications, improvements, and adaptations. Accordingly, the scope of the invention should be determined by the following claims.

Claims (4)

1. A cone motion measurement method based on machine vision, characterized in that the measurement method comprises the following steps:
S1: a point projector is fastened on the moving platform of a Stewart platform, and the three laser beams it emits are projected onto a rear projection screen to form three laser points; the laser beams move consistently with the Stewart platform, and a camera captures a planar motion sequence image containing the three laser points;
S2: the image pixel coordinates of the laser center points are acquired with an ellipse-based center positioning method and converted into coordinate values in the rear projection screen coordinate system according to the correspondence between image pixel coordinates and rear projection screen coordinates determined by camera calibration;
S3: a decoupling model is established for the condition that the actual cone point spatial position is unknown in the initial state;
S4: according to the characteristics of the decoupling model, Stewart platform motion poses containing angular rotation information are selected; the coordinate values of the laser points in the rear projection screen coordinate system in those states are obtained and substituted into the decoupling model to obtain the actual cone point spatial position, completing the update of the decoupling model;
S5: the spatial coordinate matrix of the laser points obtained while the Stewart platform performs cone motion is substituted into the updated decoupling model, and the spatial pose information of the Stewart platform is obtained by solving;
S6: the spatial pose measurement results are stored and displayed.
2. The machine vision-based cone motion measurement method according to claim 1, wherein:
the coordinate value extraction of the laser point under the coordinate system of the rear projection screen is mainly obtained by the following steps;
firstly, obtaining a sequence image F through a Canny operatorjEdge points f of three laser points in (x, y)jn(x, y) where j is 1,2, …, N is the number in the sequence image; n is 1,2 and 3 are the serial numbers of three laser points in the image; then fitting the edge points by adopting a least square method to further obtain a sub-pixel coordinate value O of the center of the ellipsejn(x, y); and finally, converting the image pixel coordinates determined by camera calibration into coordinate values under a corresponding rear projection screen coordinate system { S } according to the corresponding relationship between the image pixel coordinates and the rear projection screen coordinates.
3. The machine vision-based cone motion measurement method according to claim 1, wherein:
the establishment of the decoupling model based on the condition that the origin of the motion coordinate system of the moving platform in the initial state is unknown specifically comprises the following steps:
(1) coordinate system establishment
In order to convert the space motion information of the Stewart platform into the plane motion information of three laser points on the rear projection screen, a plurality of coordinate systems are required to be established; the coordinate systems respectively include:
image pixel coordinate system { Im }: coordinate values of the centers of the three laser points in the image are expressed;
rear projection screen coordinate system { S }: the system is used for expressing coordinate values of the three laser points on the rear projection screen and realizing the conversion from space motion to plane motion;
the coordinate system of the pointer { L }: the method is used for realizing the conversion from the space motion information of the Stewart platform to the motion information of the point thrower;
moving platform coordinate system { M }: the change matrix is used for expressing the movable platform, and the origin of a coordinate system of the change matrix is superposed with the actual cone point; the movable platform coordinate system { M } in the initial position is superposed with the world coordinate system { W };
world coordinate system { W }: coordinate values { Im }, { S }, { L }, and { M };
(2) establishment of decoupling model
According to the physical structure of the measuring system, the following mathematical model can be obtained through coordinate system conversion:
^S P = ^S R_W ^W P + ^S T_W,  ^W P = ^W R_M ^M P + ^W T_M (1)
where P is the coordinate value of a feature point, R is a rotation matrix, and T is a translation matrix; the subscript denotes the original coordinate system and the superscript the target coordinate system; for example, ^S R_W is the rotation matrix from the world coordinate system {W} to the rear projection screen coordinate system {S}, and ^S P is the coordinate value of the feature point in the rear projection screen coordinate system {S};
combining the two equations in equation (1), and using the parallelism of {W} and {S} (so that ^S R_W = I), yields:
^W T_M = ^S T_L - ^W R_M ^M T_L - ^S T_W (2)
where ^W T_M is the translation matrix of the moving platform coordinate system {M} relative to the world coordinate system {W}; ^S T_L can be solved from the coordinate values of the three laser points in the rear projection screen coordinate system {S} together with the physical position relation of the three laser beams; ^M T_L is the coordinate value of the origin of the point projector coordinate system {L} in the moving platform coordinate system {M}, and since the point projector is fixed on the Stewart platform, ^M T_L is a constant matrix that can be obtained; ^S T_W is the coordinate value, in the rear projection screen coordinate system {S}, of the origin of the moving platform coordinate system {M} in the initial state;
since ^M T_L contains the information of ^S T_W, it is decomposed to obtain:
^M T_L = ^S T_L(0) - ^S T_W (3)
the compound represented by formula (3) may be substituted for formula (1):
WTMSTL-WRM STL(0)+(WRM-I)STW (4)
due to det: (WRMWhere I) is 0, it is necessary to use the pseudo-inverse to determineSTW(ii) a Because the solution obtained by the pseudo-inverse is a general solution, in order to determine a unique solution, two groups of formulas (4) are obtained by rotating two different poses through a Stewart platform, and then the solution is obtainedSTW
4. The machine vision-based cone motion measurement method according to claim 3, wherein:
the spatial pose information of the Stewart platform is obtained by three laser point spatial position information obtained by machine vision and the formula (2);WRMandWTMthe method comprises six-degree-of-freedom information (x, y, z, alpha, beta, gamma).
CN202111553179.XA 2021-12-17 2021-12-17 Cone motion measurement method based on machine vision Pending CN114427832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111553179.XA CN114427832A (en) 2021-12-17 2021-12-17 Cone motion measurement method based on machine vision


Publications (1)

Publication Number Publication Date
CN114427832A true CN114427832A (en) 2022-05-03

Family

ID=81310858


Country Status (1)

Country Link
CN (1) CN114427832A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070013916A1 (en) * 2005-07-18 2007-01-18 Kim Jung H Methods and systems for ultra-precise measurement and control of object motion in six degrees of freedom by projection and measurement of interference fringes
DE102010031270A1 (en) * 2009-07-15 2011-01-27 Check It Etc Gmbh Method for producing calibration data for camera system utilized for position detection of e.g. ball on playing field, involves providing images of two-dimensional spatial coordinate systems by two receiving devices, respectively
CN106971408A (en) * 2017-03-24 2017-07-21 大连理工大学 A kind of camera marking method based on space-time conversion thought
CN112432594A (en) * 2020-10-22 2021-03-02 中国计量科学研究院 Machine vision six-degree-of-freedom measurement method based on physical decoupling
CN112444233A (en) * 2020-10-22 2021-03-05 贵州大学 Monocular vision-based plane motion displacement and track measurement method
CN113049002A (en) * 2020-10-22 2021-06-29 中国计量科学研究院 Conical motion testing method of tilt sensor


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Ying; Cai Chenguang; Liu Zhihua: "Machine vision six-degree-of-freedom measurement method based on physical decoupling", Metrology Technology (计量技术), no. 05, 18 May 2020 (2020-05-18), pages 46-50 *
Huo Ju; Li Yunhui; Yang Ming: "Pose measurement and error analysis of a moving target based on laser projection imaging", Acta Photonica Sinica (光子学报), no. 09, 30 September 2017 (2017-09-30), pages 127-137 *

Similar Documents

Publication Publication Date Title
CN104154928B (en) Installation error calibrating method applicable to built-in star sensor of inertial platform
CN110296691A (en) Merge the binocular stereo vision measurement method and system of IMU calibration
CN101539397B (en) Method for measuring three-dimensional attitude of object on precision-optical basis
CN110501712B (en) Method, device and equipment for determining position attitude data in unmanned driving
CN109459059B (en) Star sensor external field conversion reference measuring system and method
CN111811395B (en) Monocular vision-based dynamic plane pose measurement method
CN112629431B (en) Civil structure deformation monitoring method and related equipment
CN109520476B (en) System and method for measuring dynamic pose of rear intersection based on inertial measurement unit
CN109087355B (en) Monocular camera pose measuring device and method based on iterative updating
CN105973268B (en) A kind of Transfer Alignment precision quantitative evaluating method based on the installation of cobasis seat
CN109269525B (en) Optical measurement system and method for take-off or landing process of space probe
CN113267794B (en) Antenna phase center correction method and device with base line length constraint
CN109212497A (en) A kind of measurement of space six degree of freedom vehicle radar antenna pose deviation and interconnection method
CN108375383A (en) The airborne distribution POS flexibility base line measurement method and apparatus of polyphaser auxiliary
CN113218577A (en) Outfield measurement method for star point centroid position precision of star sensor
CN111123280A (en) Laser radar positioning method, device and system, electronic equipment and storage medium
CN111915685A (en) Zoom camera calibration method
CN108225371B (en) Inertial navigation/camera installation error calibration method
CN112229323A (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN111486867A (en) Calibration device and method for installation parameters of vision and inertia hybrid tracking assembly
CN108154535A (en) Camera Calibration Method Based on Collimator
KR102226256B1 (en) Electro-optical tracking apparatus capable of automatic viewing angle correction and method thereof
CN109990801B (en) Level gauge assembly error calibration method based on plumb line
CN110686593B (en) Method for measuring relative position relation of image sensors in spliced focal plane
CN112697074A (en) Dynamic object angle measuring instrument and measuring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination