CN110428457B - Point set affine transformation algorithm in visual positioning - Google Patents


Info

Publication number
CN110428457B
CN110428457B (application CN201910731363.5A)
Authority
CN
China
Prior art keywords: transformation, point set, matrix, point, point sets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910731363.5A
Other languages
Chinese (zh)
Other versions
CN110428457A (en)
Inventor
刘扬
郭晓锋
余章卫
Current Assignee
Suzhou Zhongke Whole Elephant Intelligent Technology Co ltd
Original Assignee
Suzhou Zhongke Whole Elephant Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhongke Whole Elephant Intelligent Technology Co ltd filed Critical Suzhou Zhongke Whole Elephant Intelligent Technology Co ltd
Priority to CN201910731363.5A priority Critical patent/CN110428457B/en
Publication of CN110428457A publication Critical patent/CN110428457A/en
Application granted granted Critical
Publication of CN110428457B publication Critical patent/CN110428457B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Abstract

The invention discloses a point set affine transformation algorithm in visual positioning, relating to the technical field of camera calibration. When the camera is calibrated and the corresponding point set transformation matrix is solved, the mapping relation of the two groups of point sets is first determined. When the mapping relation is an arbitrary affine transformation, the transformation matrix is solved by the least square method; when it is a rigid transformation, by singular value decomposition together with the least square method; and when it is a similarity transformation, by the least square method in the complex domain. The disclosed algorithm thus adopts a different mapping matrix solving mode for each mapping relation between the two groups of point sets, which improves positioning accuracy and reduces positioning time.

Description

Point set affine transformation algorithm in visual positioning
Technical Field
The invention relates to the technical field of camera calibration, in particular to a point set affine transformation algorithm in visual positioning.
Background
With the advance of industrial automation, more and more assembly, inspection and measurement work on production-line workpieces is being taken over by robots or automated equipment, and this largely depends on machine vision. In the field of industrial machine vision applications, a transformation between two coordinate systems must often be established to locate the feature points of a target object, so as to achieve aims such as guiding workpiece assembly.
In 2D visual inspection, the mapping between an object's image coordinates and world coordinates, between two sets of image coordinates, or between two sets of world coordinates is usually an arbitrary combination of rotation, translation, scaling, flipping and shear matrices. In practical industrial applications, arbitrary transformations, rigid transformations and similarity transformations between corresponding point sets are the ones most often used, and they are well suited to practical positioning problems.
When visually locating the position of a target workpiece and linearly calibrating the camera, the conventional approach is to extract features from a calibration template by template matching, blob analysis, corner detection and similar methods. This yields two groups of corresponding point set coordinates, equal in number, under two coordinate systems; by establishing an affine transformation matrix between the image coordinates and the actual physical coordinates of the point sets, the pose of a detected point in the corresponding coordinate system can be obtained. Since the possible mapping relations between the two groups of point sets differ, a single unified method does not suffice: the transformation between coordinate systems differs, and so does the algorithm for solving the transformation matrix.
Disclosure of Invention
The invention aims to provide a point set affine transformation algorithm in visual positioning that adopts a different transformation matrix solving method for each mapping relation between the two groups of point sets.
In order to solve the above problems, the technical scheme of the invention is as follows. A point set affine transformation algorithm in visual positioning, for linear calibration of a camera, comprises the following steps:
step 1: shooting a photo of a calibration plate, wherein the calibration plate carries feature points whose position coordinates are known, to obtain point set physical coordinate data;
step 2: extracting features from the calibration plate photo to obtain point set image coordinate data, wherein the point set image coordinate data and the point set physical coordinate data are equal in number and in one-to-one correspondence;
step 3: defining the mapping relation of the two groups of point sets, and solving the transformation matrix from the coordinate data of the two groups of point sets;
for different point set mapping relations, different transformation matrix solving algorithms are adopted.
Further, when the mapping relation of the two groups of point sets is an arbitrary affine transformation, the transformation matrix is solved by the least square method, comprising the following steps:
step a: obtaining point set image coordinate data (x ', y') through feature extraction, wherein the point set physical coordinate data is (x, y);
step b: the transformation matrix transforms the point set physical coordinates (x, y) to point set image coordinates (x ', y'),
x=Ax′+By′+C
y=Dx′+Ey′+F
the A, B, C, D, E and F are coordinate conversion coefficients;
step c: solving for A, B, C, D, E and F; inverse mapping is adopted, and the least square method gives:
vec1=inv([X Y I]′*[X Y I])*[X Y I]′*U
vec2=inv([X Y I]′*[X Y I])*[X Y I]′*V
wherein vec1 = [A B C]′ and vec2 = [D E F]′; X, Y, U, V and I are the vectors of the x, y, x′, y′ and 1 values respectively, expressed as follows:
X = (x1, x2, …, xn)′, Y = (y1, y2, …, yn)′, U = (x′1, x′2, …, x′n)′, V = (y′1, y′2, …, y′n)′, I = (1, 1, …, 1)′
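As an illustration only (not code from the patent), the least-square solution above can be sketched in NumPy; `solve_affine`, `phys` and `img` are names chosen here, and the design matrix is built from the image coordinates following the stated relation x = Ax′ + By′ + C:

```python
import numpy as np

def solve_affine(phys, img):
    """Least-squares estimate of the coefficients A..F in
    x = A*x' + B*y' + C,  y = D*x' + E*y' + F  (inverse mapping).
    phys: (n, 2) array of physical coordinates (x, y)
    img:  (n, 2) array of image coordinates (x', y')"""
    n = len(img)
    M = np.column_stack([img[:, 0], img[:, 1], np.ones(n)])  # design matrix
    # normal equations vec = inv(M'M) * M' * target, mirroring the text
    vec1 = np.linalg.inv(M.T @ M) @ M.T @ phys[:, 0]  # A, B, C
    vec2 = np.linalg.inv(M.T @ M) @ M.T @ phys[:, 1]  # D, E, F
    return vec1, vec2
```

In practice `np.linalg.lstsq` would replace the explicit normal equations, since inverting the product matrix squares the condition number.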
further, when the physical coordinates (x, y) of the point set are transformed into the image coordinates (x ', y') of the point set in step b, the following transformation formula is adopted:
[x]   [A B C] [x′]
[y] = [D E F] [y′]
[1]   [0 0 1] [1 ]
further, when the mapping relation of the two groups of point sets is rigid affine transformation, a transformation matrix is solved by adopting singular value decomposition and a least square method, and the method comprises the following steps:
step a: the two corresponding point sets in two-dimensional space are P = {p1, p2, …, pn} and Q = {q1, q2, …, qn}; the rigid-body transformation between the point sets consists of a rotation matrix R and a translation matrix t, and the model is constructed as:
(R, t) = argmin over R, t of Σ(i=1..n) ||(R·pi + t) - qi||²
step b: calculating R and t.
Further, the process of finding t is as follows:
the two point sets are de-centered to obtain new point sets X and Y, denoted as:
p̄ = (1/n)·Σ(i=1..n) pi
q̄ = (1/n)·Σ(i=1..n) qi
xi = pi - p̄,  X = {x1, x2, …, xn}
yi = qi - q̄,  Y = {y1, y2, …, yn}
at this time, the translation matrix is
t = q̄ - R·p̄
Further, R is obtained as follows: the covariance matrix of the de-centered point sets is defined and decomposed by SVD,
S = X·Yᵀ = U·Σ·Vᵀ
so the objective reduces to letting tr(Σ·Vᵀ·R·U) reach its maximum value, which occurs when
I = Vᵀ·R·U
stepwise simplification:
V = R·U
R = V·Uᵀ
further, when the mapping relation of the two groups of point sets is similar affine transformation, a least square method in a complex domain is adopted to solve a transformation matrix, and the method comprises the following steps:
step a: the transformation matrix expression is:
[X]   [s·cosθ  -s·sinθ  Δx] [x]
[Y] = [s·sinθ   s·cosθ  Δy] [y]
[1]   [  0        0      1 ] [1]
the m-th order polynomial model for the real number domain is as follows:
X = Σ(j+k≤m) a_jk·x^j·y^k   (1)
Y = Σ(j+k≤m) b_jk·x^j·y^k   (2)
wherein j + k ≤ m, (X, Y) represents the coordinates after transformation, (x, y) the coordinates before transformation, m the highest order of the polynomial model, and a_jk, b_jk the transformation parameters;
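For concreteness, the monomial basis that this model sums over can be enumerated programmatically; `poly_terms` is an illustrative helper, not part of the patent:

```python
import numpy as np

def poly_terms(x, y, m):
    """All monomials x**j * y**k with j + k <= m: the basis the m-th order
    real-domain model sums over with coefficients a_jk (or b_jk)."""
    return np.array([x**j * y**k
                     for j in range(m + 1)
                     for k in range(m + 1 - j)])

# first order (m = 1): basis {1, y, x}, i.e. 3 terms per coordinate,
# hence 3 + 3 = 6 real parameters a_jk, b_jk in total
assert len(poly_terms(2.0, 3.0, 1)) == 3
```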
step b: using the properties of complex operations, the real-domain model may be modified to:
Z = Σ(j+k≤m) c_jk·z^j·z̄^k
wherein c_jk is a complex-domain parameter, and
z = x + i·y,  Z = X + i·Y
When the polynomial model is first order (m = 1), the real-domain transformation can be obtained by the formulas (1) and (2); it has 6 unknown parameters in total. When expressed in the complex domain, the 6 real-domain parameters simplify to a first order polynomial model with 3 complex-domain parameters:
Z = c0 + c1·z + c2·z̄
in the formula, c0, c1 and c2 are the parameters to be solved, wherein c0 contains the translation information and c1 contains the scaling and rotation information (for a similarity transformation, which has no shear, c2 = 0); the required similarity transformation matrix is thus obtained.
step c: the parameters and their mean square errors are calculated by the complex-domain least square adjustment method, giving the similarity transformation matrix:
translation:
Δx = Re(c0)
Δy = Im(c0)
rotation angle:
θ = arctan(Im(c1) / Re(c1))
scaling:
s = |c1| = sqrt(Re(c1)² + Im(c1)²)
compared with the prior art, the invention has the following beneficial effects:
the algorithm disclosed by the invention can well solve the problem that in visual positioning, different mapping matrix solving modes are adopted for different mapping relations of two groups of point sets, so that the positioning precision is improved, and the positioning time is reduced.
Detailed Description
In order to make the technical means, features, objects and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
Example 1:
a point set affine transformation algorithm in visual positioning for carrying out linear calibration on a camera comprises the following steps:
step 1: shooting a calibration plate photo, wherein the calibration plate has feature points, and the position coordinates of the feature points are known items to obtain point set physical coordinate data;
step 2: extracting features of the calibration plate photo to obtain point set image coordinate data, wherein the point set image coordinate data and the point set physical coordinate data are equal in number and are in one-to-one correspondence;
and step 3: defining the mapping relation of the two groups of point sets, and solving a transformation matrix according to the coordinate data of the two groups of point sets;
the relation matrix between two sets of mapping point sets is expressed in geometry as that one vector space is subjected to linear transformation and then translated into another vector space, and the number of the two sets of point sets must be equal.
Solving an arbitrary affine transformation matrix by the least square method:
obtaining point set image coordinate data (x ', y') through feature extraction, wherein the point set physical coordinate data is (x, y);
For a vector (x, y)′ and a translation (c, f)′, an affine transformation can generally be expressed by the following formula:
[x′]   [a b] [x]   [c]
[y′] = [d e] [y] + [f]
which is equivalent to:
[x′]   [a b c] [x]
[y′] = [d e f] [y]
[1 ]   [0 0 1] [1]
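The equivalence between the two-step form (linear transformation plus translation) and the single homogeneous matrix can be checked numerically; the coefficient values in this sketch are illustrative only:

```python
import numpy as np

a, b, d, e = 1.2, 0.3, -0.1, 0.9   # linear part (illustrative values)
c, f = 5.0, -2.0                   # translation (illustrative values)
p = np.array([3.0, 4.0])

# two-step form: linear transformation, then translation
two_step = np.array([[a, b], [d, e]]) @ p + np.array([c, f])

# equivalent single multiply in homogeneous coordinates
H = np.array([[a, b, c],
              [d, e, f],
              [0.0, 0.0, 1.0]])
one_step = (H @ np.append(p, 1.0))[:2]

assert np.allclose(two_step, one_step)
```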
the affine transformation can be compounded from the following basic transformations: translation, scaling, rotation, miscut, transformation matrix transforms point set physical coordinates (x, y) to point set image coordinates (x ', y'), these basic transformations are expressed as follows:
x=Ax′+By′+C
y=Dx′+Ey′+F
Solving for A, B, C, D, E and F: to prevent the occurrence of empty pixels, inverse mapping is generally used, and the least square method gives:
vec1=inv([X Y I]′*[X Y I])*[X Y I]′*U
vec2=inv([X Y I]′*[X Y I])*[X Y I]′*V
wherein vec1 = [A B C]′ and vec2 = [D E F]′; X, Y, U, V and I are the vectors of the x, y, x′, y′ and 1 values respectively, expressed as follows:
X = (x1, x2, …, xn)′, Y = (y1, y2, …, yn)′, U = (x′1, x′2, …, x′n)′, V = (y′1, y′2, …, y′n)′, I = (1, 1, …, 1)′
further, when the point set physical coordinates (x, y) are transformed into the point set image coordinates (x ', y'), the following transformation formula is used:
[x]   [A B C] [x′]
[y] = [D E F] [y′]
[1]   [0 0 1] [1 ]
example 2:
a point set affine transformation algorithm in visual positioning for carrying out linear calibration on a camera comprises the following steps:
step 1: shooting a calibration plate photo, wherein the calibration plate has feature points, and the position coordinates of the feature points are known items to obtain point set physical coordinate data;
step 2: extracting features of the calibration plate photo to obtain point set image coordinate data, wherein the point set image coordinate data and the point set physical coordinate data are equal in number and are in one-to-one correspondence;
and step 3: defining the mapping relation of the two groups of point sets, and solving a transformation matrix according to the coordinate data of the two groups of point sets;
the relation matrix between two sets of mapping point sets is expressed in geometry as that one vector space is subjected to linear transformation and then translated into another vector space, and the number of the two sets of point sets must be equal.
Solving the rigid transformation matrix by the singular value decomposition method:
given two corresponding sets of points in two-dimensional space, P ═ P1,p2,...,pnQ ═ Q1,q2,...,qnTo calculate the rigid body transformation between them, i.e. R and t, the procedure is as follows:
the model for constructing the above problem is:
Figure GDA0003333268350000061
Then R and t are calculated.
Further, the two point sets are de-centered to obtain new point sets X and Y, expressed as:
p̄ = (1/n)·Σ(i=1..n) pi
q̄ = (1/n)·Σ(i=1..n) qi
xi = pi - p̄,  X = {x1, x2, …, xn}
yi = qi - q̄,  Y = {y1, y2, …, yn}
at this time, the translation matrix is
t = q̄ - R·p̄
Further, with t eliminated, the model translates into maximizing tr(R·S), where S = X·Yᵀ is the covariance matrix of the de-centered point sets, decomposed by SVD as
S = U·Σ·Vᵀ
To make tr(Σ·Vᵀ·R·U) reach its maximum value,
I = Vᵀ·R·U
stepwise simplification:
V = R·U
R = V·Uᵀ
Therefore, t can be calculated according to the formula
t = q̄ - R·p̄
from which the rotation matrix R and the translation matrix t between the two point sets are obtained.
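The de-centering and SVD procedure above matches the classical orthogonal Procrustes solution; a NumPy sketch follows. The guard forcing det(R) = +1 is an addition not discussed in the text, included so that R is a proper rotation rather than a reflection:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with R @ p + t ≈ q.
    P, Q: (n, 2) arrays of corresponding points, rows p_i and q_i."""
    p_bar = P.mean(axis=0)                  # centroid of P
    q_bar = Q.mean(axis=0)                  # centroid of Q
    X = P - p_bar                           # de-centered point sets
    Y = Q - q_bar
    S = X.T @ Y                             # 2x2 covariance matrix
    U, _, Vt = np.linalg.svd(S)             # S = U * Sigma * V^T
    V = Vt.T
    # force det(R) = +1 so R is a rotation, not a reflection
    D = np.diag([1.0, np.sign(np.linalg.det(V @ U.T))])
    R = V @ D @ U.T                         # R = V U^T (guarded)
    t = q_bar - R @ p_bar                   # t = q_bar - R p_bar
    return R, t
```

With exact correspondences the recovered R and t reproduce the data; with noisy data they minimize the summed squared error of the model above.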
Example 3:
a point set affine transformation algorithm in visual positioning for carrying out linear calibration on a camera comprises the following steps:
step 1: shooting a calibration plate photo, wherein the calibration plate has feature points, and the position coordinates of the feature points are known items to obtain point set physical coordinate data;
step 2: extracting features of the calibration plate photo to obtain point set image coordinate data, wherein the point set image coordinate data and the point set physical coordinate data are equal in number and are in one-to-one correspondence;
and step 3: defining the mapping relation of the two groups of point sets, and solving a transformation matrix according to the coordinate data of the two groups of point sets;
the relation matrix between two sets of mapping point sets is expressed in geometry as that one vector space is subjected to linear transformation and then translated into another vector space, and the number of the two sets of point sets must be equal.
Solving the similarity transformation matrix by the least square method in the complex domain:
when the mapping relation of the two groups of point sets is formed by combining rotation, translation and scaling, without shear, the relation matrix is solved by the least square method in the complex domain to achieve accurate positioning; the expression of the transformation matrix is as follows:
[X]   [s·cosθ  -s·sinθ  Δx] [x]
[Y] = [s·sinθ   s·cosθ  Δy] [y]
[1]   [  0        0      1 ] [1]
the similarity transformation matrix has one more degree of freedom than the rigid transformation, and the scaling factors in the X and Y directions are the same. The expression of the similarity transformation matrix is also derived from the real number domain, and in general, the m-order polynomial model of the real number domain is as follows:
X = Σ(j+k≤m) a_jk·x^j·y^k   (1)
Y = Σ(j+k≤m) b_jk·x^j·y^k   (2)
wherein j + k ≤ m, (X, Y) represents the coordinates after transformation, (x, y) the coordinates before transformation, m the highest order of the polynomial model, and a_jk, b_jk the transformation parameters.
Depending on the nature of complex operations, the model of the real domain may be modified to:
Z = Σ(j+k≤m) c_jk·z^j·z̄^k
wherein c_jk is a complex-domain parameter, and
z = x + i·y,  Z = X + i·Y
Comparing the complex-domain model with the real-domain model equations, the complex model has half as many equations and half the parameter dimension; the complex-domain model expression is therefore more compact and more efficient for solving the transformation matrix between point sets.
Based on the affine transformation of the point set, the polynomial model is first order; when m = 1, the real-domain transformation can be obtained from the formulas (1) and (2), and it has 6 unknown parameters in total. When represented in the complex domain, the 6 real-domain parameters simplify to a first order polynomial model with 3 complex-domain parameters:
Z = c0 + c1·z + c2·z̄
in the formula, c0, c1 and c2 are the complex parameters to be found, wherein c0 contains the translation information and c1 contains the scaling and rotation information (for a similarity transformation, which has no shear, c2 = 0); the required similarity transformation matrix can then be obtained.
The complex-domain first-order polynomial adjustment and the real-domain first-order polynomial adjustment yield the same parameter estimates, i.e. the two methods are equivalent; therefore the complex-domain least square adjustment method is adopted to calculate the parameters and their mean square errors, with the following results:
translation:
Δx = Re(c0)
Δy = Im(c0)
rotation angle:
θ = arctan(Im(c1) / Re(c1))
scaling:
s = |c1| = sqrt(Re(c1)² + Im(c1)²)
the results of the three examples detailed above:
table 1 below shows the data correspondence of the calibration plate for dot under-view by the camera in the actual experiment, where the calibration plate is placed on the marble platform during the experiment, the camera is fixed and installed, and is parallel to the plane of the marble platform, and the obtained pixel coordinates point1x and point1y and the corresponding calibration plate coordinates point2x and point2y are as follows:
point1x point1y point2x point2y
716.5381 567.9981 2 2
1123.073 581.0117 6 2
1529.93 594.1043 10 2
1936.396 607.0153 14 2
2343.156 620.0197 18 2
2750.088 633.2607 22 2
3156.945 646.339 26 2
3563.826 659.0841 30 2
3971.236 672.1598 34 2
TABLE 1
For a feature point whose pixel location is x: 4378.461481, y: 685.33244, and whose coordinate value on the calibration plate is X: 38, the pixel position is transformed using the matrix obtained by each of the three methods, with the following results:
Table 2 (rendered as an image in the original) lists the calibration plate coordinates computed from this pixel position by each of the three methods.
The table shows that the coordinate data obtained by the three solving methods of the invention is highly accurate, and that the coordinates obtained by the similarity and rigid transformation solvers are more accurate than those of the arbitrary transformation.
Table 3 below gives the average time over 1000 runs for solving the similarity transformation matrix, the arbitrary transformation matrix and the rigid transformation matrix from 10, 100, 1000 and 10000 groups of point pairs. The algorithms were compiled as a C++ release build and tested under Windows 10 (Intel Core i7-8700K CPU @ 3.7 GHz). All three methods solve extremely quickly, and the similarity and rigid matrices are solved markedly faster than the arbitrary transformation matrix; adopting the algorithm matching the mapping relation therefore yields a faster solution.
Table 3 (rendered as an image in the original) gives the average solving times.
It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (1)

1. A point set affine transformation algorithm in visual positioning, for linear calibration of a camera, comprising the following steps:
step 1: shooting a photo of a calibration plate, wherein the calibration plate carries feature points whose position coordinates are known, to obtain point set physical coordinate data;
step 2: extracting features from the calibration plate photo to obtain point set image coordinate data, wherein the point set image coordinate data and the point set physical coordinate data are equal in number and in one-to-one correspondence;
step 3: defining the mapping relation of the two groups of point sets, and solving the transformation matrix from the coordinate data of the two groups of point sets;
characterized in that: for different point set mapping relations, different transformation matrix solving algorithms are adopted;
when the mapping relation of the two groups of point sets is an arbitrary affine transformation, the transformation matrix is solved by the least square method;
the method comprises the following steps:
step a: obtaining point set image coordinate data (x ', y') through feature extraction, wherein the point set physical coordinate data is (x, y);
step b: the transformation matrix transforms the point set physical coordinates (x, y) to point set image coordinates (x ', y'),
x=Ax′+By′+C
y=Dx′+Ey′+F
the A, B, C, D, E and F are coordinate conversion coefficients;
step c: solving for A, B, C, D, E and F; inverse mapping is adopted, and the least square method gives:
vec1=inv([X Y I]′*[X Y I])*[X Y I]′*U
vec2=inv([X Y I]′*[X Y I])*[X Y I]′*V
wherein vec1 = [A B C]′ and vec2 = [D E F]′; X, Y, U, V and I are the vectors of the x, y, x′, y′ and 1 values respectively, expressed as follows:
X = (x1, x2, …, xn)′, Y = (y1, y2, …, yn)′, U = (x′1, x′2, …, x′n)′, V = (y′1, y′2, …, y′n)′, I = (1, 1, …, 1)′
when the point set physical coordinates (x, y) are transformed into the point set image coordinates (x′, y′) in step b, the following matrix form is adopted:
[x]   [A B C] [x′]
[y] = [D E F] [y′]
[1]   [0 0 1] [1 ]
when the mapping relation of the two groups of point sets is a rigid transformation, the transformation matrix is solved by singular value decomposition together with the least square method;
the method comprises the following steps:
step a: the two corresponding point sets in two-dimensional space are P = {p1, p2, …, pn} and Q = {q1, q2, …, qn}; the rigid-body transformation between the point sets consists of a rotation matrix R and a translation matrix t, and the model is constructed as:
(R, t) = argmin over R, t of Σ(i=1..n) ||(R·pi + t) - qi||²
step b: solving R and t;
the t is obtained through the following process:
the two point sets are de-centered to obtain new point sets X and Y, denoted as:
Figure FDA0003333268340000022
Figure FDA0003333268340000023
Figure FDA0003333268340000024
Figure FDA0003333268340000025
at this time, the translation matrix
Figure FDA0003333268340000026
The R is obtained through the following steps: the covariance matrix of the de-centered point sets is defined and decomposed by SVD,
S = X·Yᵀ = U·Σ·Vᵀ
and tr(Σ·Vᵀ·R·U) is made to reach its maximum value, which occurs when
I = Vᵀ·R·U
stepwise simplification:
V = R·U
R = V·Uᵀ
according to the formula
t = q̄ - R·p̄
the rotation matrix R and the translation matrix t between the two point sets are obtained;
when the mapping relation of the two groups of point sets is a similarity transformation, the transformation matrix is solved by the least square method in the complex domain;
the method comprises the following steps:
step a: the transformation matrix expression is:
[X]   [s·cosθ  -s·sinθ  Δx] [x]
[Y] = [s·sinθ   s·cosθ  Δy] [y]
[1]   [  0        0      1 ] [1]
the m-th order polynomial model for the real number domain is as follows:
X = Σ(j+k≤m) a_jk·x^j·y^k   (1)
Y = Σ(j+k≤m) b_jk·x^j·y^k   (2)
wherein j + k ≤ m, (X, Y) represents the coordinates after transformation, (x, y) the coordinates before transformation, m the highest order of the polynomial model, and a_jk, b_jk the transformation parameters; the scaling factors in the X and Y directions are the same;
step b: depending on the nature of complex operations, the model of the real domain may be modified to:
Z = Σ(j+k≤m) c_jk·z^j·z̄^k
wherein c_jk is a complex-domain parameter, and
z = x + i·y,  Z = X + i·Y
when the polynomial model is first order (m = 1), the real-domain transformation can be obtained by the formulas (1) and (2); it has 6 unknown parameters in total, which, expressed in the complex domain, simplify to a first order polynomial model with 3 complex-domain parameters:
Z = c0 + c1·z + c2·z̄
in the formula, c0, c1 and c2 are the parameters to be solved, wherein c0 contains the translation information and c1 contains the scaling and rotation information (for a similarity transformation, which has no shear, c2 = 0); the required similarity transformation matrix is thus obtained;
step c: the parameters and their mean square errors are calculated by the complex-domain least square adjustment method, giving the similarity transformation matrix:
translation:
Δx = Re(c0)
Δy = Im(c0)
rotation angle:
θ = arctan(Im(c1) / Re(c1))
scaling:
s = |c1| = sqrt(Re(c1)² + Im(c1)²)
CN201910731363.5A 2019-08-08 2019-08-08 Point set affine transformation algorithm in visual positioning Active CN110428457B (en)

Publications (2)

Publication Number Publication Date
CN110428457A 2019-11-08
CN110428457B 2022-02-22

Family

ID=68413396




Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016015037A (en) * 2014-07-02 2016-01-28 Canon Inc. Information processing apparatus and control method, and video camera
CN104374338A (en) * 2014-09-28 2015-02-25 Beihang University Single-axis rotation angle vision measurement method based on a fixed camera and a single target
JP2016201745A (en) * 2015-04-13 2016-12-01 Canon Inc. Image processing apparatus, imaging device, and control method and program for image processing apparatus
CN105823416A (en) * 2016-03-04 2016-08-03 Han's Laser Technology Industry Group Co., Ltd. Method and device for measuring objects with multiple cameras
CN108072319A (en) * 2016-11-07 2018-05-25 Yu Qingping Fast calibration system and calibration method for a motion platform
CN107014312A (en) * 2017-04-25 2017-08-04 Xi'an Jiaotong University Integral calibration method for a galvanometer line-laser structured-light three-dimensional measurement system
CN107449403A (en) * 2017-08-09 2017-12-08 Tianjin University of Technology Four-dimensional spatio-temporal joint imaging model and its application
CN108537832A (en) * 2018-04-10 2018-09-14 Anhui University Image registration method and image processing system based on locally invariant gray-scale features
CN109285194A (en) * 2018-09-29 2019-01-29 Renjia Intelligent Robot Technology (Beijing) Co., Ltd. Camera calibration plate and camera calibration data collection method
CN109859277A (en) * 2019-01-21 2019-06-07 Shaanxi University of Science and Technology Robot vision system calibration method based on Halcon

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"Isometric, similarity, affine, and projective transformations of images and their MATLAB implementation"; 小cui童鞋; 《https://blog.csdn.net/u014096352/article/details/53526747》; 20161208; p. 7 *
"Finding the rotation matrix by SVD (singular value decomposition)"; Bryan Zhang; 《https://blog.csdn.net/dfdfdsfdfdfdf/article/details/53213240》; 20161118; pp. 1-4 *
"The least-square method in complex number domain"; GU Xiangqian et al.; 《Progress in Natural Science》; 20060630 (No. 3); full text *
"A conversion method for transformation matrices between different coordinate systems"; YANG Weidong et al.; 《Journal of Computer-Aided Design & Computer Graphics》; 20000131 (No. 1); full text *
"Solving affine transformation parameters by the least-squares method"; AplusX; 《https://blog.csdn.net/qq_41598072/article/details/89293029》; 20190414; pp. 1-4 *
"On the equivalence of least-squares adjustment in the complex and real number domains"; LIU Zhiping et al.; 《Journal of Geodesy and Geodynamics》; 20160831; Vol. 36, No. 8; p. 7 *
"Transformations involved in calibration (projective transformation, affine transformation, etc.)"; 非凡初来乍到; 《https://blog.csdn.net/qq_38241538/article/details/83856942》; 20181108; pp. 1-2 *
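The non-patent citations above name the two numeric techniques at the core of point-set alignment: estimating affine transformation parameters by least squares, and recovering a rotation matrix by SVD. The following is a minimal NumPy sketch of both, for illustration only; it is not the patent's actual implementation, and the function names are the author's of this note, not the source's.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2D affine transform mapping src -> dst.

    Stacks the six unknowns of [[a, b, tx], [c, d, ty]] into one linear
    system, two rows per point, and solves it with np.linalg.lstsq.
    Returns the 2x3 affine matrix.
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # rows for x' = a*x + b*y + tx
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # rows for y' = c*x + d*y + ty
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)  # interleaved [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params.reshape(2, 3)

def fit_rigid(src, dst):
    """SVD-based (Kabsch) estimate of rotation R and translation t
    such that dst ≈ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

For a pure rotation-plus-translation correspondence, both fits recover the same motion; `fit_affine` additionally absorbs scale and shear when the point sets are related by a general affinity.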

Also Published As

Publication number Publication date
CN110428457A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN111775146B (en) Visual alignment method under industrial mechanical arm multi-station operation
CN105021124B (en) Method for computing the three-dimensional position and normal vector of planar parts based on depth maps
CN110014426B (en) Method for grabbing symmetrically-shaped workpieces at high precision by using low-precision depth camera
CN113902810B (en) Robot gear chamfering processing method based on parallel binocular stereoscopic vision
CN110428457B (en) Point set affine transformation algorithm in visual positioning
US10540779B2 (en) Posture positioning system for machine and the method thereof
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
CN112669385B (en) Industrial robot part identification and pose estimation method based on three-dimensional point cloud features
CN113379849B (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN112907735B (en) Flexible cable identification and three-dimensional reconstruction method based on point cloud
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
Carlson et al. Six DOF eye-to-hand calibration from 2D measurements using planar constraints
Lee et al. High precision hand-eye self-calibration for industrial robots
CN113172632A (en) Simplified robot vision servo control method based on images
CN111028280B (en) #-shaped structured-light camera system and method for scaled three-dimensional reconstruction of a target
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
Lin et al. Vision based object grasping of industrial manipulator
JP5228856B2 (en) Work object position detection method and position detection apparatus
Nammoto et al. Model-based compliant motion control scheme for assembly tasks using vision and force information
CN111612847A (en) Point cloud data matching method and system for robot grabbing operation
CN110955958A (en) Working method of workpiece positioning device based on CAD model
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance
Liu et al. Hand-eye Calibration of Industrial Robots with 3D Cameras based on Dual Quaternions
Qingda et al. Workpiece posture measurement and intelligent robot grasping based on monocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant