CN102842117B - Method for correcting kinematic errors in microscopic vision system - Google Patents


Info

Publication number
CN102842117B
CN102842117B (application CN201210243071.5A)
Authority
CN
China
Prior art date
Legal status: Active
Application number
CN201210243071.5A
Other languages
Chinese (zh)
Other versions
CN102842117A (en)
Inventor
刘盛 (Liu Sheng)
翟斌斌 (Zhai Binbin)
金海强 (Jin Haiqiang)
陈胜勇 (Chen Shengyong)
Current Assignee: Zhejiang University of Technology (ZJUT)
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201210243071.5A
Publication of CN102842117A
Application granted
Publication of CN102842117B

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for correcting kinematic errors in a microscopic vision system. The method comprises the following steps: calibrating the camera with a calibration board and establishing a camera model; tracking the corner points of the calibration board; generating the tracking trajectories and the ideal rotational trajectories of the corner points by the least squares method; performing error evaluation and establishing an error correction model; and correcting the microscopic images with the error correction model. The method needs only a single error evaluation to establish the error correction model for the microscopic vision system, and the model can then be applied to any microscopic motion video collected by the system, reducing the system's motion errors and improving three-dimensional reconstruction precision, so the method is highly practical.

Description

Method for correcting motion error in microscopic vision system
Technical Field
The invention relates to the technical field of computer microscopic imaging, in particular to a method for correcting motion errors in a microscopic vision system.
Background
With the development of signal processing and computer technology, computer vision has gradually formed as a new field of artificial intelligence and developed greatly. In computer vision, a camera replaces the human eye to capture a scene image and convert it into a digital signal, and a computer replaces the human brain to perceive and interpret the objective world visually. Although it is not yet possible to give a computer, robot or other intelligent machine vision as efficient, flexible and universal as a human's, vision theory and technology have developed rapidly since the 1950s. Computer vision is now widely applied in fields such as military affairs, manufacturing, inspection, document analysis and medical diagnosis, from the helmet-mounted sights on Apache helicopters to electronic image stabilization on military vehicles, and from image guidance on missiles to military target recognition. With the spread of personal computers and falling hardware costs, computer vision has gradually entered everyday life and become familiar.
Generally, the working process of a complete stereoscopic vision system comprises camera calibration, feature extraction, stereo matching and three-dimensional reconstruction, and three-dimensional reconstruction is one of the final goals of computer vision research. Rotary three-dimensional reconstruction takes the rotary motion of the observed object as a constraint, obtaining the motion information of the object and improving reconstruction efficiency and precision. However, because the microscopic motion video in a microscopic vision system is collected through a high-power microscope and a CCD image sensor, the rotational motion of a tiny object exhibits an obvious positional offset, which greatly degrades the subsequent three-dimensional reconstruction.
Meanwhile, the existing error correction methods in computer vision are all based on optical distortion errors, and image correction is generally carried out by establishing corresponding optical distortion models.
Disclosure of Invention
The invention provides a method for correcting motion errors in a microscopic vision system. It aims to overcome the obvious motion position offset that arises during rotary three-dimensional reconstruction under a microscopic vision system, to evaluate the system's motion error accurately, to establish an error correction model, and to preprocess the acquired microscopic images so that the motion error is markedly reduced.
A method for correcting motion errors in a microscopic vision system is applied to a microscopic vision system comprising a monocular optical microscope and a camera for obtaining microscopic images of an observed object, an object stage and a motion control device for controlling the stage to rotate at a certain inclination angle, and a computer system for vision processing. The correction method comprises the steps of:
(1) establishing an error correction model;
(2) adjusting the position of the objective table to adapt to the observed object, calibrating the camera by adopting a calibration plate, and calculating internal and external parameters of the camera;
(3) and (3) substituting the internal and external parameters of the camera obtained in the step (2) into an error correction model according to the rotation angle corresponding to the microscopic image of the observed object recorded by the camera, calculating an offset correction vector corresponding to the rotation angle, and correcting the microscopic image.
Further, the step (1) of establishing an error correction model comprises the steps of:
(1.1) calibrating the camera by adopting a calibration plate, determining internal and external parameters of the camera, and establishing a camera model of a microscopic vision system;
(1.2) carrying out corner point tracking on the calibration plate;
(1.3) determining a rotation axis of a stage of the micro-vision system;
(1.4) establishing an ideal rotating motion track;
and (1.5) constructing an error correction model.
Further, the camera model expresses the relationship between a spatial point P = [x, y, z] and its projected point p = [u, v] on the two-dimensional image displayed in the camera as:

$$s\tilde{p} = A\,[R\;T]\,\tilde{P}$$

where $\tilde{P}$ and $\tilde{p}$ are the homogeneous vectors of P and p (augmented with 1), s is a scale factor, A is the camera intrinsic matrix, and [R T] is the camera extrinsic matrix.
Further, the calibration plate is a micro-checkerboard, and the step (1.2) includes the steps of:
(1.2.1) inputting the micro-checkerboard micro-motion video collected by the camera, and acquiring K micro-sequence images by frame truncation;
(1.2.2) taking one of the images, converting the image into a gray image, and establishing a set U of angular points and an empty linked list L;
(1.2.3) searching and finding a checkerboard vertex from the set U, moving the point to a linked list L and setting the point as a base point;
(1.2.4) establishing a search domain of a base point according to the shortest distance between any two points in the set U;
(1.2.5) if two angular points are found in the search domain and the current base point is a boundary angular point, adjusting the sequence of the two angular points according to the relationship between the two angular points and the base point and moving the two angular points to a linked list L; if an angular point is found and the current base point is not a boundary angular point, directly moving the angular point to a linked list L; otherwise, adjusting the search domain and repeating the step (1.2.5);
(1.2.6) if the length of the linked list L reaches N, storing the linked list L, taking the next image, and returning to step (1.2.2); otherwise, taking the point after the current base point in the linked list L as the new base point, and returning to step (1.2.4);
(1.2.7) when the K microscopic sequence images are subjected to angular point detection and numbered, obtaining K linked lists L, and establishing a corresponding relation according to the angular point serial numbers to obtain N angular point tracking tracks.
According to the corner point tracking strategy, the complex corner point matching process can be avoided, and the complete motion trail of the corner points can be accurately tracked.
Further, the step (1.2) further comprises: for each corner point, K rotating tracking points are obtained, and a quadratic curve $\hat{C}_n$ of the corner's motion is fitted to these tracking points by the least squares method as the actual tracking trajectory of the corner, where n is the serial number of the corner; repeating this step yields the tracking trajectories of all N corner points.
Further, determining the rotation axis of the micro-vision system means that, from the tracking trajectories of the N corner points obtained by corner tracking, the two-dimensional image coordinates of the discrete points on a trajectory are averaged to approximate the two-dimensional image coordinates of the intersection point of the rotation axis and the object stage, and the world coordinates of that intersection point are obtained by back-projection through the camera model, thereby determining the rotation axis.
Further, establishing the ideal rotational motion trajectory means that, for one point in space, the point is rotated continuously around the rotation axis by different angles, a new set of spatial points is obtained by coordinate transformation, and a set of two-dimensional projection points is then generated by the projection transformation of the camera model; the quadratic curve obtained from these points by least squares fitting is the ideal rotational motion trajectory of the point. For the N corner points, an ideal motion trajectory $C_n$ is established for each, where n is the serial number of the corner.
Further, the step (1.5) of constructing the error correction model comprises the steps of:
the rotation angle theta of the space point P around the rotation axis is used for obtaining an ideal point coordinate P '(x', y ', z') and an actual point coordinatep ' (u ', v ') andare respectively PThen the following two equations are satisfied:
$$s\begin{bmatrix}u'\\ v'\\ 1\end{bmatrix} = A\,[R\;T]\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix}, \qquad s\begin{bmatrix}\hat{u}\\ \hat{v}\\ 1\end{bmatrix} = A\,[R\;T]\begin{bmatrix}\hat{x}\\ \hat{y}\\ \hat{z}\\ 1\end{bmatrix}$$
subtracting the above equations yields:
$$s\,\tilde{d}_\theta^{\,T} = A R\, D_\theta^{T}$$

where $D_\theta$ is the positional offset in the world coordinate system, $d_\theta$ is the positional offset in the two-dimensional image coordinate system, and $\tilde{d}_\theta$ is $d_\theta$ augmented with a 0;
obtaining the following result through the tracking tracks of the N angular points and the ideal motion track:
$$\tilde{d}_\theta = \left(\frac{1}{N}\sum_{n=1}^{N}\bigl(C_n(\theta)\cos\theta - \hat{C}_n(\theta)\cos\theta\bigr),\ \frac{1}{N}\sum_{n=1}^{N}\bigl(C_n(\theta)\sin\theta - \hat{C}_n(\theta)\sin\theta\bigr),\ 0\right)$$

where $C_n$ is the quadratic curve of the ideal motion trajectory of the nth corner point and $\hat{C}_n$ is the quadratic curve of its tracking trajectory;

the error correction model $E_\theta$ is then obtained through transformation as:
$$E_\theta = D_\theta^{T} = s R^{-1} A^{-1} \tilde{d}_\theta^{\,T}$$

where s is the scale factor, A is the camera intrinsic matrix, and R is the rotation part of the camera extrinsic matrix [R T].
The internal and external parameters of the camera comprise a scale factor, an internal parameter matrix and an external parameter matrix.
The invention has the following beneficial effects: the error correction model of the microscopic vision system is established through a single error evaluation; the model can then correct any microscopic motion video acquired by the system, reducing the system's motion error and improving three-dimensional reconstruction precision, so the method is highly practical.
Drawings
FIG. 1 is a schematic diagram of a micro-vision system architecture;
FIG. 2 is a flow chart of a method for correcting motion errors in a micro-vision system in accordance with the present invention;
FIG. 3 is a flow chart of the method for establishing an error correction model in the micro-vision system of the present invention;
FIG. 4 is a schematic view of a geometric model of a micro-vision system;
FIG. 5 is a schematic diagram of the result of the microscopic image after corner detection and numbering algorithm;
FIG. 6 is a schematic diagram of tracking results obtained after the corner point tracking strategy is performed on 15 corner points in the micro-motion video;
FIG. 7 is a schematic illustration of the results of error estimation;
FIG. 8 is a schematic diagram of an error correction model;
FIG. 9 is a diagram showing the results of four frames of a micro-motion video after error correction;
FIG. 10a is a schematic diagram comparing the error evaluation results before and after correction with the object stage tilted by 0° according to the embodiment of the present invention;
FIG. 10b is a schematic diagram comparing the error evaluation results before and after correction with the object stage tilted by 5° according to the embodiment of the present invention;
FIG. 10c is a schematic diagram comparing the error evaluation results before and after correction with the object stage tilted by 15° according to the embodiment of the present invention;
FIG. 11 is a tabular representation of the error evaluation results before and after correction.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the drawings and examples, which should not be construed as limiting the present invention.
The microscopic vision system adopted by the invention, as shown in FIG. 1, comprises a monocular optical microscope 1 and a camera 2, responsible for acquiring microscopic images (or microscopic video) of the observed object, together with an object stage 3 and a motion control device 4. The camera 2 and the motion control device 4 are both connected to a computer system 5. The computer system 5 controls the object stage 3 through the motion control device 4 so that it rotates at a certain inclination angle, and also serves as the vision processing system, taking the acquired microscopic motion video as input data and accurately computing the three-dimensional structure of the micro-object by the rotary three-dimensional reconstruction method.
The following describes in detail a method for correcting motion errors in a microscopic vision system, taking fig. 1 as an example, and the specific flow of the method is shown in fig. 2, and includes the following steps:
step 201, establishing an error correction model.
Specifically, the flow of the method for establishing the error correction model is shown in fig. 3, and includes the steps of:
step 301, calibrating the camera by adopting a calibration plate micro-checkerboard, accurately obtaining internal and external parameters of the camera, and establishing a camera model of the microscopic vision system.
Specifically, the invention adopts Zhang Zhengyou's planar calibration method (Zhengyou Zhang, "A Flexible New Technique for Camera Calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334) to calibrate the camera. The intrinsic and extrinsic camera parameters are obtained accurately through calibration, and the camera model of the system is established, as shown in FIG. 4. The coordinate system $O_c$-$X_cY_cZ_c$ is the camera coordinate system {C}, where $O_c$ is the optical center of the microscope camera and the $Z_c$ axis is its optical axis. The coordinate system O-UV is the two-dimensional image coordinate system, which the optical axis intersects perpendicularly at $O'(u_0, v_0)$. The coordinate system $O_w$-$X_wY_wZ_w$ is the world coordinate system {W}, a reference coordinate system used to describe the positions of the microscope camera and the micro-object in the physical environment. The relationship between a spatial point P = [x, y, z] and its projected point p = [u, v] on the two-dimensional image displayed in the camera is expressed as follows:
$$s\tilde{p} = A\,[R\;T]\,\tilde{P} \qquad (1)$$
where $\tilde{P}$ and $\tilde{p}$ are the homogeneous vectors of P and p (augmented with 1), s is a scale factor, A is the camera intrinsic matrix, and [R T] is the camera extrinsic matrix.
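As an illustration of formula (1), the following is a minimal sketch in Python/NumPy; the calibration values A, R, T below are placeholders for illustration only, and in practice would come from the Zhang calibration described above.

```python
import numpy as np

def project(P, A, R, T):
    """Project a 3-D point P = [x, y, z] to image coordinates via
    s * p~ = A [R T] P~ (formula (1)); returns (u, v) and the scale s."""
    P_h = np.append(np.asarray(P, dtype=float), 1.0)   # homogeneous vector P~
    Rt = np.hstack([R, T.reshape(3, 1)])               # 3x4 extrinsic matrix [R T]
    sp = A @ Rt @ P_h                                  # s * [u, v, 1]^T
    return sp[:2] / sp[2], sp[2]

# Hypothetical calibration values, for illustration only:
A = np.array([[2.4e3, 0.0, 320.0],
              [0.0, 2.4e3, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 50.0])
p, s = project([1.0, 2.0, 0.0], A, R, T)   # p is the projected pixel (u, v)
```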
Step 302, carrying out corner tracking on the micro-checkerboard calibration board.
Specifically, the camera collects a microscopic motion video of the micro-checkerboard. Given the characteristics of the micro-checkerboard image sequence, K microscopic sequence images are obtained from the video by frame extraction; all corner points in each image are then detected, and the N corners in each image are numbered to give a corner sequence, as shown in FIG. 5. Correspondences can then be established among the K × N corners of the K images according to the corner serial numbers, yielding N corner tracking trajectories, as shown in FIG. 6.
The specific steps of corner tracking are as follows:
the first step is as follows: inputting a micro-motion video, and cutting frames to obtain K micro-sequence images to obtain K checkerboard angle views;
the second step is that: taking one image, converting the image into a gray scale image, and establishing a corner set U and an empty linked list L by using Harris corner detection (C.Harris, M.Stephens, "A combined corner and edge detector", Proc.AlveyVision Conference, 1988, pp.189-192, a checker of corner and boundary point);
the third step: searching and finding out a vertex of the micro checkerboard from the set U, moving the vertex to a linked list L and setting the vertex as a base point;
the fourth step: establishing a search domain of a base point according to the shortest distance between any two points in the set U;
the fifth step: if two angular points are found in the search domain and the current base point is a boundary angular point, adjusting the sequence of the two angular points according to the relationship between the two angular points and the base point and moving the two angular points to a linked list L; if an angular point is found and the current base point is not a boundary angular point, directly moving the angular point to a linked list L; otherwise, adjusting the search field and repeating the step five
The sixth step: if the length of the linked list L reaches N, store the linked list L, take the next image, and return to the second step; otherwise, take the point after the current base point in the linked list L as the new base point and return to the fourth step;
the seventh step: and when the K microscopic sequence images are subjected to angular point detection and are numbered, K linked lists L are obtained, and the corresponding relation is established according to the angular point serial numbers to obtain the tracking tracks of the N angular points.
Fitting the K tracking points of each corner by the least squares method yields a quadratic curve $\hat{C}_n$ of the corner's motion, which is the actual tracking trajectory of the corner, where n is the serial number of the corner; repeating this step gives the tracking trajectories of all N corners. This corner tracking strategy avoids a complex corner matching process and accurately tracks the complete motion trajectories of the corners.
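The patent does not spell out how the quadratic curve $\hat{C}_n$ is parameterized; one reading consistent with formula (3) below, where $\hat{C}_n(\theta)$ multiplies cos θ and sin θ, is that it models the corner's radial distance from the projected rotation center as a quadratic in the rotation angle. A sketch under that assumption:

```python
import numpy as np

def fit_quadratic_track(thetas, points, center):
    """Least-squares quadratic fit C^_n(theta) of a corner's radial distance
    from the projected rotation center, so the tracked position is roughly
    center + C^_n(theta) * (cos theta, sin theta)."""
    d = np.asarray(points) - center            # offsets from the rotation center
    r = np.hypot(d[:, 0], d[:, 1])             # radial distance of each of the K points
    return np.poly1d(np.polyfit(thetas, r, deg=2))   # a*theta^2 + b*theta + c
```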
Step 303, determining the rotation axis of the object stage.
Since the rotation of the stage in the micro-vision system takes place within a single plane, as shown in FIG. 4, the world coordinate system is established on the stage; that is, the rotation plane of the stage is the O-XY plane of the world coordinate system. The rotation axis $L_g$ then has direction vector v = [0, 0, 1] and intersects the O-XY plane perpendicularly at the point G, so the spatial position of the rotation axis is determined once the world coordinates of the intersection point G of the axis and the stage are obtained.
The two-dimensional image coordinates of the intersection point G are approximated by averaging the two-dimensional image coordinates of the discrete points along the tracking trajectory of a given corner; the world coordinates of G are then obtained from formula (1), and together with the direction vector v = [0, 0, 1] of the rotation axis they jointly define the rotation axis $L_g$ of the stage.
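A sketch of this axis estimate, under the additional assumption (consistent with the world frame being on the stage) that the stage plane is the world plane z = 0, so that back-projection through formula (1) reduces to inverting the plane homography H = A[r1 r2 T]:

```python
import numpy as np

def rotation_center_world(track_points, A, R, T):
    """Average the 2-D tracking points to approximate the image of the
    intersection point G, then back-project onto the stage plane z = 0."""
    g_img = np.asarray(track_points).mean(axis=0)       # approximate image of G
    H = A @ np.column_stack([R[:, 0], R[:, 1], T])      # plane homography A [r1 r2 T]
    G = np.linalg.solve(H, np.append(g_img, 1.0))       # world point, up to scale
    G /= G[2]
    return np.array([G[0], G[1], 0.0])                  # world coordinates of G

# The rotation axis L_g is the line through G with direction v = [0, 0, 1].
```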
Step 304, establishing the ideal rotational motion trajectory.
Theoretically, a point in space rotated continuously around a fixed rotation axis by different angles gives, through coordinate transformation, a set of new spatial points; projecting each through the camera model then generates a set of two-dimensional projection points, and the quadratic curve obtained from these points by least squares fitting is the ideal rotational motion trajectory of the point.
The solution principle of discrete two-dimensional projection points in an ideal rotational motion trajectory is explained in detail as follows:
suppose that the spatial point P is around the axis of rotation LgRotate by different angles thetai(counter-clockwise rotation, rotation angle can be accurately measured by motion control system) to get a new spatial point P'iObtaining two-dimensional image point p 'through camera projection transformation'iThen, P ═ x, y, z]And p'i=[ui,vi]By the relationship ofAndthe expression is as follows:
$$s\,\tilde{p}'_i = A\,[R\;T]\begin{bmatrix} I & t \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} R_v(\theta_i) & 0 \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} I & -t \\ 0^{T} & 1 \end{bmatrix}\tilde{P}, \qquad R_v(\theta_i) = (1-\cos\theta_i)\,v\,v^{T} + \cos\theta_i\,I + \sin\theta_i\,[v]_{\times} \qquad (2)$$
where $\tilde{P}$ and $\tilde{p}'_i$ are the homogeneous vectors of P and $p'_i$ (augmented with 1), $R_v(\theta_i)$ is the matrix of a rotation by angle $\theta_i$ around a single axis, v is the direction vector of the rotation axis, $[v]_\times$ is the skew-symmetric matrix of v, I is the 3 × 3 identity matrix, and t is the translation vector that shifts the world coordinate system origin onto the rotation axis.
The ideal rotational motion trajectory of any spatial point can thus be determined by formula (2). For the N corner points of the micro-checkerboard, an ideal motion trajectory $C_n$ is established for each corner, where n is the serial number of the corner.
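A direct transcription of formula (2) as a sketch: rotate P about the axis through G with direction v using the Rodrigues form of $R_v(\theta_i)$, project each rotated point through the camera model, and the resulting discrete 2-D points can then be fitted by least squares to give $C_n$.

```python
import numpy as np

def rodrigues(v, theta):
    """R_v(theta) = (1 - cos t) v v^T + cos t * I + sin t * [v]_x, per formula (2)."""
    v = np.asarray(v, dtype=float)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])   # skew-symmetric matrix [v]_x
    return ((1 - np.cos(theta)) * np.outer(v, v)
            + np.cos(theta) * np.eye(3)
            + np.sin(theta) * vx)

def ideal_trajectory_points(P, G, v, thetas, A, R, T):
    """Discrete 2-D points of the ideal rotational trajectory of P: rotate P
    about the axis through G with direction v, then project via formula (1)."""
    Rt = np.hstack([R, T.reshape(3, 1)])
    pts = []
    for th in thetas:
        P_rot = rodrigues(v, th) @ (P - G) + G   # shift to axis, rotate, shift back
        sp = A @ Rt @ np.append(P_rot, 1.0)
        pts.append(sp[:2] / sp[2])
    return np.array(pts)
```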
Step 305, evaluating the motion error and establishing the error correction model.
Using the corner tracking strategy provided by the invention, quadratic curves $\hat{C}_n$ of the motion of the N corners are fitted as the actual tracking trajectories, and at the same time an ideal motion trajectory $C_n$ is established for each corner. The error present when the stage has rotated to an arbitrary angle θ can then be evaluated by the following formula:
$$\eta_\theta = \frac{1}{N}\sum_{n=1}^{N}\sqrt{\bigl(\hat{C}_n(\theta)\cos\theta - C_n(\theta)\cos\theta\bigr)^2 + \bigl(\hat{C}_n(\theta)\sin\theta - C_n(\theta)\sin\theta\bigr)^2} \qquad (3)$$
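In code, formula (3) is the mean over the N corners of the Euclidean distance between the ideal and tracked curve points at angle θ; since both terms share the same cos θ and sin θ factors, it reduces to the mean absolute difference of the radial values. A sketch, assuming the curves are callables as fitted above:

```python
import numpy as np

def motion_error(theta, ideal_curves, tracked_curves):
    """eta_theta from formula (3): mean over the N corners of the distance
    between the ideal and tracked curve points at rotation angle theta."""
    total = 0.0
    for C, C_hat in zip(ideal_curves, tracked_curves):
        du = (C_hat(theta) - C(theta)) * np.cos(theta)
        dv = (C_hat(theta) - C(theta)) * np.sin(theta)
        total += np.hypot(du, dv)   # equals |C_hat(theta) - C(theta)|
    return total / len(ideal_curves)
```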
under a microscopic vision system, the motion error generated by the rotary motion of the object stage has a certain rule, and a universal error correction model is established through error evaluation. To be non-trivial, the world coordinate system is established at a fixed location on the stage. Therefore, the position offset of the object stage generated in the rotation process can be represented by the position offset of the spatial point in the world coordinate system, namely, the object stage is a universal error correction model. If the position of the rotating plane of the objective table is unchanged in the same rotating motion video, the internal and external parameters of the camera are unchanged. Suppose that the rotation angle θ of the spatial point P around the rotation axis yields the ideal point coordinate P '(x', y ', z'), and the actual point coordinate isp ' (u ', v ') andare respectively PThen the following two equations are satisfied:
$$s\begin{bmatrix}u'\\ v'\\ 1\end{bmatrix} = A\,[R\;T]\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix}, \qquad s\begin{bmatrix}\hat{u}\\ \hat{v}\\ 1\end{bmatrix} = A\,[R\;T]\begin{bmatrix}\hat{x}\\ \hat{y}\\ \hat{z}\\ 1\end{bmatrix}$$
the two are subtracted from each other, and can be simplified into
<math> <mrow> <mi>s</mi> <msubsup> <mover> <mi>d</mi> <mo>~</mo> </mover> <mi>&theta;</mi> <mi>T</mi> </msubsup> <mo>=</mo> <mi>AR</mi> <msubsup> <mi>D</mi> <mi>&theta;</mi> <mi>T</mi> </msubsup> </mrow> </math>
where $D_\theta$ is the positional offset in the world coordinate system, $d_\theta$ is the positional offset in the two-dimensional image coordinate system, and $\tilde{d}_\theta$ is $d_\theta$ augmented with a 0, obtainable from the actual tracking trajectories and the ideal motion trajectories:

$$\tilde{d}_\theta = \left(\frac{1}{N}\sum_{n=1}^{N}\bigl(C_n(\theta)\cos\theta - \hat{C}_n(\theta)\cos\theta\bigr),\ \frac{1}{N}\sum_{n=1}^{N}\bigl(C_n(\theta)\sin\theta - \hat{C}_n(\theta)\sin\theta\bigr),\ 0\right)$$
then, the error correction model can be obtained by the following formula:
$$E_\theta = D_\theta^{T} = s R^{-1} A^{-1} \tilde{d}_\theta^{\,T} \qquad (4)$$
wherein A and R are invertible matrices.
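A sketch of formula (4); s, A and R are the calibration values from the error-evaluation session, and the averaged curve difference $\tilde{d}_\theta$ is built exactly as in the expression above:

```python
import numpy as np

def correction_model(theta, s, A, R, ideal_curves, tracked_curves):
    """E_theta = D_theta^T = s R^-1 A^-1 d~_theta^T (formula (4))."""
    diff = np.mean([C(theta) - C_hat(theta)
                    for C, C_hat in zip(ideal_curves, tracked_curves)])
    d_tilde = np.array([diff * np.cos(theta), diff * np.sin(theta), 0.0])
    return s * np.linalg.solve(A @ R, d_tilde)   # = s R^-1 A^-1 d~_theta^T
```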
An error correction model of the microscopic vision system is thus established. The model obtained by the above steps depends only on the motion state of the object stage and is independent of the relative position of the stage and the camera. Therefore, regardless of the physical position of the stage, as long as the stage performs planar rotary motion, the microscopic motion video acquired by the system can be corrected for motion error using this error correction model.
Step 202, adjusting the position of the object stage to adapt to the observed object, calibrating the camera by adopting a calibration plate, and calculating internal and external parameters of the camera.
It should be noted that when different objects are observed, the position of the object stage often needs to be adjusted to the actual situation. After the stage is moved to a new position, the intrinsic and extrinsic camera parameters must be recalibrated: ten microscopic images of the micro-checkerboard at different angles (including the initial frame of the microscopic video) are collected as input data for camera calibration, and the intrinsic matrix A′, the extrinsic matrix [R′ T′] and the scale factor s′ of the camera at the new stage position are computed. The calibration method is the same as in step 301 and is not repeated here.
Step 203, calculating the offset correction vector corresponding to the rotation angle of each microscopic image of the observed object recorded by the camera, using the error correction model, and correcting the microscopic image.
Specifically, after the position of the object stage is adjusted, the observed object is placed on the stage and a microscopic video of its rotational motion is recorded. The video is split into Z frames, and the relationship between the rotation angle of the stage and the frame index z can be expressed as θ = 360(z − 1)/(Z − 1) degrees. The offset correction vector for each frame's microscopic image is computed from the frame's rotation angle θ and the camera matrices of the initial frame:
$$\tilde{d}'^{\,T}_{\theta} = \frac{1}{s'}\,A' R' E_\theta \qquad (5)$$
The calculated $\tilde{d}'_\theta = (a, b, 0)$, where a and b are the offset compensation amounts along the U and V axes of the two-dimensional image coordinate system, is used to correct the position of the corresponding microscopic image.
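A sketch of steps 202-203 combined: for each frame, compute its stage angle, evaluate formula (5) with the new calibration s′, A′, R′, and shift the image by the resulting (a, b) offset. The use of cv2.warpAffine with a pure-translation matrix, and the compensation sign convention, are implementation assumptions.

```python
import cv2
import numpy as np

def correct_frame(frame, z, Z, s_new, A_new, R_new, E):
    """Shift frame z (1-based) of a Z-frame video by the offset correction
    vector d~'_theta = (1/s') A' R' E_theta (formula (5)); E is a callable
    returning the 3-vector E_theta, e.g. correction_model above."""
    theta = np.deg2rad(360.0 * (z - 1) / (Z - 1))    # stage angle for this frame
    d = (A_new @ R_new @ E(theta)) / s_new           # (a, b, ~0) in image coordinates
    M = np.float32([[1, 0, -d[0]],                   # pure translation compensating
                    [0, 1, -d[1]]])                  # the measured offset (assumed sign)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))
```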
In the invention, establishing the error correction model is particularly important. In a practical microscopic vision application, the stage position is first adjusted to the parallax center of the microscope; the stage is then tilted to a certain angle to obtain depth information of the micro-object, and rotated at constant speed while a segment of microscopic motion video is acquired, giving a 360° omnidirectional structural view of the micro-object. Using the micro-checkerboard as the observed object and the corner tracking strategy provided by the invention, the corner tracking result is shown in FIG. 6; the tracking trajectories exhibit obvious motion errors. To obtain the ideal motion trajectories, the microscopic vision system is first calibrated with the Zhang Zhengyou planar calibration method: the stage is adjusted so that its plane sits at several different angles (including the stage's initial position in the microscopic motion video) while microscopic images of the micro-checkerboard are collected as calibration input data, yielding the camera model of the system and the spatial transformation (i.e. the camera extrinsic matrix) between the stage's initial position in the video and the camera coordinate system. Second, the rotation axis of the rotary motion is computed by corner tracking, giving the rotary motion model of the stage in space; finally, discrete points on the ideal motion trajectory are obtained through formula (2) and fitted by least squares to give the quadratic curve of the ideal motion trajectory. On the other side, the corner tracking trajectories in the microscopic motion video are obtained with the corner tracking strategy, accurate motion error evaluation is performed through formula (3), and the error correction model of the system is then established through formula (4).
In the embodiment of the invention, the established error correction model is applied to the microscopic vision system for error correction. The stage position is adjusted with the motion control equipment, and the stage is rotated 360° around a fixed axis at constant speed (angular speed 1000 pulses/second, with 54000 pulses per revolution) at inclinations of 10°, 0°, 5° and 15°, while four microscopic motion videos with a resolution of 640 × 480 pixels are collected as experimental data. First, the camera intrinsic matrix of the system and the camera extrinsic matrices of the four videos' initial frames are obtained with the Zhang Zhengyou planar calibration method (using the four initial frames plus 6 images at other arbitrary tilt angles as calibration pictures). Corner tracking is then performed on the first video, rotated at a 10° tilt, with the result shown in FIG. 6. Next, the position of the rotation axis is obtained, giving the ideal motion trajectories, and error evaluation is performed: FIG. 7 plots the motion error produced by the 15 corners of the micro-checkerboard over a 360° rotation, one curve per corner, with the X axis representing the motion error value (pixels) and the Y axis the rotation angle of the stage. The error correction model is then established, as shown in FIG. 8, where the X axis is the offset along the image U axis, the Y axis the offset along the image V axis, and the Z axis the rotation angle of the stage. The other three microscopic videos are then error-corrected with this model; FIG. 9 shows four frames of the corrected 5°-tilt video, from left to right the 1st, 51st, 101st and 151st frames. Finally, error evaluation is performed on the three groups of videos before and after correction, with the results shown in FIG. 10: the evaluation objects in FIGS. 10a, 10b and 10c are the 360° rotation videos at stage tilts of 0°, 5° and 15° respectively, the curve marked with squares being the error before correction and the curve marked with triangles the error after correction. The data analysis in FIG. 11 shows that the embodiment effectively reduces the motion error of the system, by as much as 76%.
The above embodiments only illustrate the technical solution of the invention and do not limit it. Those skilled in the art can make various corresponding changes and modifications according to the invention without departing from its spirit and essence, and such changes and modifications shall fall within the protection scope of the appended claims.

Claims (7)

1. A method for correcting motion errors in a microscopic vision system, applied to a microscopic vision system comprising a monocular optical microscope and a camera for obtaining microscopic images of an observed object, an object stage and a motion control device for controlling the object stage to rotate at a certain inclination angle, and a computer system for vision processing, the correction method comprising the steps of:
(1) establishing an error correction model, comprising the following steps:
(1.1) calibrating the camera by adopting a calibration plate, determining internal and external parameters of the camera, and establishing a camera model of a microscopic vision system;
(1.2) carrying out angular point tracking on the calibration plate to obtain tracking tracks of N angular points;
(1.3) determining a rotation axis of a stage of the micro-vision system;
(1.4) establishing an ideal rotating motion track;
(1.5) constructing an error correction model;
wherein the step (1.5) comprises the steps of:
rotating the spatial point P around the rotation axis by an angle θ yields the ideal point coordinates P′ = (x′, y′, z′), while the actual point coordinates are $\hat{P} = (\hat{x}, \hat{y}, \hat{z})$; p′ = (u′, v′) and $\hat{p} = (\hat{u}, \hat{v})$ are the respective projections of P′ and $\hat{P}$; then the following two equations are satisfied:
$$s\begin{bmatrix}u'\\ v'\\ 1\end{bmatrix} = A\,[R\;T]\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix}, \qquad s\begin{bmatrix}\hat{u}\\ \hat{v}\\ 1\end{bmatrix} = A\,[R\;T]\begin{bmatrix}\hat{x}\\ \hat{y}\\ \hat{z}\\ 1\end{bmatrix}$$
where s is a scale factor, A is the camera intrinsic matrix, and [R T] is the camera extrinsic matrix;
subtracting the above equations yields:
$$s\,\tilde{d}_\theta^{\,T} = A R\, D_\theta^{T},$$
where $D_\theta$ is the positional offset in the world coordinate system, $d_\theta$ is the positional offset in the two-dimensional image coordinate system, and $\tilde{d}_\theta$ is $d_\theta$ augmented with a 0; $\tilde{d}_\theta$ is obtained from the tracking trajectories of the N corner points and the ideal rotational motion trajectories as:
$$\tilde{d}_\theta = \left(\frac{1}{N}\sum_{n=1}^{N}\bigl(C_n(\theta)\cos\theta - \hat{C}_n(\theta)\cos\theta\bigr),\ \frac{1}{N}\sum_{n=1}^{N}\bigl(C_n(\theta)\sin\theta - \hat{C}_n(\theta)\sin\theta\bigr),\ 0\right);$$
where $C_n$ is the quadratic curve of the ideal motion trajectory of the nth corner point and $\hat{C}_n$ is the quadratic curve of its tracking trajectory;

the error correction model $E_\theta$ is obtained through transformation as:
$$E_\theta = D_\theta^{T} = s R^{-1} A^{-1} \tilde{d}_\theta^{\,T}$$
where s is the scale factor, A is the camera intrinsic matrix, and R is the rotation part of the camera extrinsic matrix [R T];
(2) adjusting the position of the objective table to adapt to the observed object, calibrating the camera by adopting a calibration plate, and calculating internal and external parameters of the camera;
(3) and (3) substituting the internal and external parameters of the camera obtained in the step (2) into the error correction model according to the rotation angle corresponding to the microscopic image of the observed object recorded by the camera, calculating an offset correction vector corresponding to the rotation angle, and correcting the microscopic image.
2. The method for correcting motion errors according to claim 1, wherein the camera model expresses the relationship between a spatial point P = [x, y, z] and its projected point p = [u, v] on the two-dimensional image displayed in the camera as:

$$s\tilde{p} = A\,[R\;T]\,\tilde{P}$$

where $\tilde{P}$ and $\tilde{p}$ are the homogeneous vectors of P and p (augmented with 1), s is a scale factor, A is the camera intrinsic matrix, and [R T] is the camera extrinsic matrix.
3. The method for correcting motion error of claim 1 wherein the calibration board is a micro-checkerboard, and said step (1.2) comprises the steps of:
(1.2.1) inputting the micro-checkerboard micro-motion video collected by the camera, and acquiring K micro-sequence images by frame truncation;
(1.2.2) taking one of the images, converting the image into a gray image, and establishing a set U of angular points and an empty linked list L;
(1.2.3) searching and finding a checkerboard vertex from the set U, moving the point to a linked list L and setting the point as a base point;
(1.2.4) establishing a search domain of a base point according to the shortest distance between any two points in the set U;
(1.2.5) if two angular points are found in the search domain and the current base point is a boundary angular point, adjusting the sequence of the two angular points according to the relationship between the two angular points and the base point and moving the two angular points to a linked list L; if an angular point is found and the current base point is not a boundary angular point, directly moving the angular point to a linked list L; otherwise, adjusting the search domain and repeating the step (1.2.5);
(1.2.6) if the length of the linked list L reaches N, N being the number of numbered corner points in the image, storing the linked list L, taking the next image, and returning to step (1.2.2); otherwise, taking the point after the current base point in the linked list L as the new base point, and returning to step (1.2.4);
(1.2.7) when the K microscopic sequence images are subjected to angular point detection and numbered, obtaining K linked lists L, and establishing a corresponding relation according to the angular point serial numbers to obtain N angular point tracking tracks.
4. A method of correcting motion error according to claim 3, wherein: the step (1.2) further comprises the steps of:
for each corner point, K rotating tracking points are obtained, and a quadratic curve $\hat{C}_n$ of the corner's motion is fitted to these tracking points by the least squares method as the actual tracking trajectory of the corner, where n is the serial number of the corner;
and repeating the steps to obtain the tracking tracks of the N corner points.
5. The method for correcting motion errors according to claim 4, wherein determining the rotation axis of the stage of the micro-vision system means that, from the tracking trajectories of the N corner points obtained by corner tracking, the two-dimensional image coordinates of the discrete points on the trajectories are averaged to approximate the two-dimensional image coordinates of the intersection point of the rotation axis and the stage, and the world coordinates of that intersection point are obtained by back-projection through the camera model, thereby determining the rotation axis.
6. The method for correcting motion errors of claim 5, wherein establishing the ideal rotational motion trajectory means continuously rotating a point in space around the rotation axis by different angles, obtaining a new set of spatial points through coordinate transformation, and generating a set of two-dimensional projection points through the projection transformation of the camera model respectively; the quadratic curve obtained by least squares fitting of these two-dimensional projection points is the ideal rotational motion trajectory of the point, and for the N corner points an ideal motion trajectory $C_n$ is established for each, where n is the serial number of the corner.
7. The method of motion error correction according to claim 1, wherein the intrinsic and extrinsic parameters of the camera include a scale factor, an intrinsic parameter matrix, and an extrinsic parameter matrix.
CN201210243071.5A 2012-07-13 2012-07-13 Method for correcting kinematic errors in microscopic vision system Active CN102842117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210243071.5A CN102842117B (en) 2012-07-13 2012-07-13 Method for correcting kinematic errors in microscopic vision system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210243071.5A CN102842117B (en) 2012-07-13 2012-07-13 Method for correcting kinematic errors in microscopic vision system

Publications (2)

Publication Number Publication Date
CN102842117A CN102842117A (en) 2012-12-26
CN102842117B true CN102842117B (en) 2015-02-25

Family

ID=47369443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210243071.5A Active CN102842117B (en) 2012-07-13 2012-07-13 Method for correcting kinematic errors in microscopic vision system

Country Status (1)

Country Link
CN (1) CN102842117B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103273310B (en) * 2013-05-24 2015-11-04 中国科学院自动化研究所 A kind of micro-part automatic aligning method based on multipath micro-vision
CN103386598B (en) * 2013-07-12 2016-06-15 中国科学院自动化研究所 A kind of micro-part is automatically directed at and assembles apparatus and method
CN104865893B (en) * 2014-02-24 2018-01-09 大族激光科技产业集团股份有限公司 Motion platform control system and motion platform error calculation method
US9986150B2 (en) * 2015-09-30 2018-05-29 Ricoh Co., Ltd. Algorithm to estimate yaw errors in camera pose
CN106782031A (en) * 2016-12-27 2017-05-31 中国船舶重工集团公司第七0七研究所 Towards the method for the calibration laser galvanometer index error of paper chart operation
CN109035331B (en) * 2017-06-12 2020-11-17 浙江宇视科技有限公司 Position correction method and device for signal lamp group
CN108548824B (en) * 2018-03-30 2021-03-26 湖北工程学院 PVC (polyvinyl chloride) mask detection method and device
FR3083766B1 (en) * 2018-07-13 2020-06-26 Renault S.A.S. METHOD FOR PREPARING A PREDICTIVE ORDER INSTRUCTIONS WHICH CAN BE IMPLANTED IN A TRAJECTORY CONTROL UNIT OF A MOTOR VEHICLE
CN111147741B (en) * 2019-12-27 2021-08-13 Oppo广东移动通信有限公司 Focusing processing-based anti-shake method and device, electronic equipment and storage medium
CN111768445A (en) * 2020-05-09 2020-10-13 江苏集萃微纳自动化系统与装备技术研究所有限公司 Micro-operation platform error self-correction algorithm based on machine vision
CN113074766B (en) * 2021-03-19 2023-01-10 广东工业大学 Micro-nano visual motion tracking system
CN113569843B (en) * 2021-06-21 2024-08-23 影石创新科技股份有限公司 Corner detection method, corner detection device, computer equipment and storage medium
CN114935309B (en) * 2022-04-02 2024-09-20 杭州汇萃智能科技有限公司 Method, system and readable storage medium for correcting installation error in machine vision measurement

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1860612A2 (en) * 2006-05-22 2007-11-28 Canon Kabushiki Kaisha Image distortion correction
CN101673399A (en) * 2009-09-29 2010-03-17 浙江工业大学 Calibration method of coded structured light three-dimensional vision system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1860612A2 (en) * 2006-05-22 2007-11-28 Canon Kabushiki Kaisha Image distortion correction
CN101673399A (en) * 2009-09-29 2010-03-17 浙江工业大学 Calibration method of coded structured light three-dimensional vision system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhengyou Zhang, "A Flexible New Technique for Camera Calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov. 2000, Vol. 22, No. 11, pp. 1330-1334 *
Wang Yuezong et al., "A new distortion correction model for SLM microscopic stereo vision" (一种新的SLM显微立体视觉畸变矫正模型), Opto-Electronic Engineering (光电工程), June 2005, Vol. 32, No. 6, pp. 72-75 *
Li Xin et al., "A simple and effective method for camera distortion correction" (摄像机畸变矫正的一种简单有效的方法), Information of Medical Equipment (医疗设备信息), Feb. 2007, Vol. 22, No. 2, pp. 7-9 *
Yuan Deliang et al., "Measurement of spindle rotation error in microscopic vision" (显微视觉主轴回转误差测量), Machinery & Electronics (机械与电子), Jan. 2010, No. 1, pp. 36-39 *

Also Published As

Publication number Publication date
CN102842117A (en) 2012-12-26

Similar Documents

Publication Publication Date Title
CN102842117B (en) Method for correcting kinematic errors in microscopic vision system
CN109859275B (en) Monocular vision hand-eye calibration method of rehabilitation mechanical arm based on S-R-S structure
CN103020952B (en) Messaging device and information processing method
CN103854291B (en) Camera marking method in four-degree-of-freedom binocular vision system
CN108253939B (en) Variable visual axis monocular stereo vision measuring method
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN109242914B (en) Three-dimensional calibration method of movable vision system
CN114529605B (en) Human body three-dimensional posture estimation method based on multi-view fusion
CN111801198A (en) Hand-eye calibration method, system and computer storage medium
CN107358633A (en) Join scaling method inside and outside a kind of polyphaser based on 3 points of demarcation things
CN108413917B (en) Non-contact three-dimensional measurement system, non-contact three-dimensional measurement method and measurement device
CN111667536A (en) Parameter calibration method based on zoom camera depth estimation
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
JP7185860B2 (en) Calibration method for a multi-axis movable vision system
CN112598706B (en) Multi-camera moving target three-dimensional track reconstruction method without accurate time-space synchronization
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
CN113240749B (en) Remote binocular calibration and ranging method for recovery of unmanned aerial vehicle facing offshore ship platform
JP4865065B1 (en) Stereo imaging device control system
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN113450416B (en) TCSC method applied to three-dimensional calibration of three-dimensional camera
CN114018291A (en) Calibration method and device for parameters of inertial measurement unit
JP7033294B2 (en) Imaging system, imaging method
CN103310448B (en) Camera head pose estimation and the real-time method generating composite diagram for DAS
Zeng et al. A 3D passive optical localization system based on binocular infrared cameras
Cai et al. A target tracking and location robot system based on omnistereo vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant