CN108596980B - Circular target visual positioning precision evaluation method and device, storage medium and processing equipment - Google Patents


Info

Publication number
CN108596980B
Authority
CN
China
Prior art keywords
circular target
discrete
circular
curve
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810273909.2A
Other languages
Chinese (zh)
Other versions
CN108596980A (en)
Inventor
刘传凯
郭祥艳
孙军
谢剑锋
王晓雪
王保丰
万文辉
杨长坤
罗建军
王明明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unit 63920 Of Pla
Original Assignee
Unit 63920 Of Pla
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unit 63920 of PLA
Priority to CN201810273909.2A
Publication of CN108596980A
Application granted
Publication of CN108596980B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a device for evaluating the visual positioning accuracy of a circular target, a storage medium and processing equipment. The method comprises the following steps: setting an initial state of the circular target relative to the camera; projecting a continuous circular curve of the circular target to an image plane to obtain an elliptic curve; grid discretization is carried out on the elliptic curve according to the image resolution, imaging errors are simulated and superposed by using a Monte Carlo method, and a discrete pixel point set curve with random errors is generated; performing arc segment extraction, ellipse fitting and space circular pose identification on the discrete pixel point set curve to obtain a deviation space pose state of the circular target; and obtaining the error amount of the deviation space pose state and the initial state of the circular target through comparison, and determining the positioning precision according to the error amount. The method solves the problem that the coupling of multiple error sources is difficult to separate, analyze and quantitatively evaluate in the visual positioning of the circular target, and can provide important support for the selection and utilization of the characteristics of the circular target in the design of a monocular vision measuring system.

Description

Circular target visual positioning precision evaluation method and device, storage medium and processing equipment
Technical Field
The invention relates to the technical field of visual positioning, in particular to a method and a device for evaluating the visual positioning accuracy of a circular target, a storage medium and processing equipment.
Background
Monocular vision pose measurement is an important means of pose measurement, with advantages such as a simple structure and high measurement accuracy; it requires knowledge of the geometric model of the measured target, such as its point, line and curve features. The circle is an important curve feature with important applications in target identification, tracking, positioning and other fields. In target positioning in particular, the size of a circular target is calibrated in advance, so that positioning can be achieved from a single circular feature or a small amount of feature information; moreover, because of the symmetry of the circle, part of the random error is balanced out, so the positioning accuracy is generally better than that of methods based on point or line features. In many fields such as industry, aerospace and medicine, skillful use of circular features in the design of visual perception and measurement systems plays a crucial role in keeping the whole system simple and running stably.
In applications in industry, aerospace, medicine and other fields, different task requirements generally impose different accuracy requirements on the visual perception and measurement system, so the perception and measurement accuracy of the vision system should be evaluated at the design stage. However, because of the complexity of imaging projection, the accuracy of a vision measurement system based on circular features is difficult to evaluate analytically, and the errors of the measurement calculation are closely coupled with the errors of image processing and are difficult to consider separately, which makes quantitative evaluation of the measurement accuracy even more difficult.
Disclosure of Invention
The invention aims to solve the technical problem of the prior art and provides a method and a device for evaluating the visual positioning accuracy of a circular target, a storage medium and processing equipment.
The technical scheme for solving the technical problems is as follows: a method for evaluating the visual positioning accuracy of a circular target comprises the following steps:
setting an initial state of the circular target relative to the camera;
simulating the imaging process of a circular target by a perspective projection principle, and projecting a continuous circular curve of the circular target to an image plane to obtain an elliptic curve;
carrying out grid discretization on the elliptic curve according to the image resolution to obtain discrete projection points, and carrying out simulated superposition on imaging errors on the discrete projection points by utilizing a Monte Carlo method to generate a discrete pixel point set curve with random errors;
performing arc segment extraction, ellipse fitting and space circular pose identification on the discrete pixel point set curve to obtain a deviation space pose state of the circular target;
and obtaining the error amount of the deviation space pose state and the initial state of the circular target through comparison, and determining the positioning precision according to the error amount.
Another technical solution of the present invention for solving the above technical problems is as follows: a circular target visual positioning accuracy assessment device comprises:
a setting unit for setting an initial state of the circular target with respect to the camera;
the projection unit is used for simulating the imaging process of a circular target by a perspective projection principle and projecting a continuous circular curve of the circular target to an image plane to obtain an elliptic curve;
the discrete unit is used for carrying out grid discretization on the elliptic curve according to the image resolution to obtain discrete projection points, and simulating and superposing imaging errors on the discrete projection points by utilizing a Monte Carlo method to generate a discrete pixel point set curve with random errors;
the positioning unit is used for carrying out arc segment extraction, ellipse fitting and space circular pose identification on the discrete pixel point set curve to obtain a deviation space pose state of the circular target;
and the error calculation unit is used for obtaining the error amount of the deviation space pose state and the initial state of the circular target through comparison and determining the positioning accuracy according to the error amount.
Another technical solution of the present invention for solving the above technical problems is as follows: a storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the above-described aspect.
Another technical solution of the present invention for solving the above technical problems is as follows: a processing device, the processing device comprising:
one or more processors;
a memory for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method as in the previous aspects.
The invention has the beneficial effects that: compared with the existing precision analysis and evaluation method, the method does not need to deduce the explicit expression relation between the positioning precision and the distance and the direction of the circular target, but simulates an error source in target imaging and image processing, and counts the positioning precision under the condition of the existence of errors. The method solves the difficulty that the coupling of multiple error sources is difficult to separate, analyze and quantitatively evaluate in the visual positioning of the circular target, and can provide important support for the selection and utilization of the characteristics of the circular target in the design of a monocular vision measuring system.
Drawings
FIG. 1 is a flowchart of a method for evaluating visual positioning accuracy of a circular target according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a circular target and a camera corresponding to different initial states in an embodiment of the invention;
FIG. 3 is a schematic diagram illustrating a process for sampling the spatial position, orientation and size of a circular target according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a projection of a spatial circular object onto an image plane in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a process for transforming a continuous arc segment into a discrete pixel point set curve with superimposed random errors according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an elliptical curve fit based on a set of discrete pixel points in an embodiment of the present invention;
fig. 7 is a block diagram of an apparatus for evaluating visual positioning accuracy of a circular target according to an embodiment of the present invention.
In the drawings, the components represented by the respective reference numerals are listed below:
1: camera; 2: circular target.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 shows a schematic flowchart of a method for evaluating visual positioning accuracy of a circular target according to an embodiment of the present invention. As shown in fig. 1, the method includes:
s101, setting an initial state of a circular target relative to a camera;
s102, simulating the imaging process of a circular target by a perspective projection principle, and projecting a continuous circular curve of the circular target to an image plane to obtain an elliptic curve;
s103, performing grid discretization on the elliptic curve according to the image resolution to obtain discrete projection points, and performing simulated superposition on imaging errors on the discrete projection points by using a Monte Carlo method to generate a discrete pixel point set curve with random errors;
s104, arc segment extraction, ellipse fitting and space circular pose recognition are carried out on the discrete pixel point set curve to obtain a deviation space pose state of the circular target;
and S105, obtaining the error amount of the deviation space pose state and the initial state of the circular target through comparison, and determining the positioning precision according to the error amount.
It should be noted that, in step S101, a plurality of sets of initial states of the circular target relative to the camera may be set, and steps S102 to S105 are respectively performed for each set of initial states, so as to respectively obtain a positioning error corresponding to each set of initial states of the circular target. As shown in fig. 2, each set of initial states corresponds to a circular target 2 having a different distance, a different direction and a different size with respect to the camera 1.
According to the method provided by the embodiment, the explicit expression relation between the positioning precision and the distance and the direction of the circular target does not need to be deduced, the error source in target imaging and image processing is simulated, and the positioning precision under the condition of the existence of errors is counted. The method solves the difficulty that the coupling of multiple error sources is difficult to separate, analyze and quantitatively evaluate in the visual positioning of the circular target, and can provide important support for the selection and utilization of the characteristics of the circular target in the design of a monocular vision measuring system.
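To make the flow of steps S101 to S105 concrete, the following minimal Python sketch outlines one way the evaluation loop could be organised. The helper callables `project`, `discretize` and `recover` stand for the operations of S102 to S104 and are hypothetical placeholders, not an implementation taken from this description.

```python
import numpy as np

def evaluate_positioning_accuracy(initial_states, num_trials, project, discretize, recover):
    """Monte Carlo accuracy evaluation loop corresponding to steps S101-S105.

    initial_states : list of (position p0, normal n0, radius R) ground-truth states (S101)
    project        : S102 - projects the continuous space circle to an image-plane conic
    discretize     : S103 - grid-discretizes the conic and superimposes random imaging errors
    recover        : S104 - arc extraction, ellipse fit and space-circle pose recognition
    Returns the mean position / normal error per initial state (S105).
    """
    results = []
    for p0, n0, radius in initial_states:
        pos_errors, normal_errors = [], []
        for _ in range(num_trials):                    # repeat S103-S104 to average random errors
            conic = project(p0, n0, radius)            # continuous elliptic curve of form (2-6)
            points = discretize(conic)                 # noisy discrete pixel point set curve
            p_est, n_est = recover(points, radius)     # deviation space pose state
            pos_errors.append(np.linalg.norm(p_est - p0))
            normal_errors.append(np.linalg.norm(n_est - n0))
        results.append((np.mean(pos_errors), np.mean(normal_errors)))
    return results
```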
Optionally, as an embodiment of the present invention, S101, setting an initial state of the circular target relative to the camera specifically includes:
the method comprises the steps of determining a closed pyramid region formed by four planes passing through an optical center of a camera by taking the optical center of the camera as a center and a field angle of the camera as a boundary, uniformly sampling the circle center position of a circular target in the closed pyramid region, uniformly sampling the normal direction of the circular target in a preset angle range, setting the size of the circular target, and obtaining an initial state comprising the space pose and the size of the circular target, wherein the space pose of the circular target comprises the space position and the normal direction of the circular target. Wherein the normal direction of the circular target comprises yaw and pitch angles.
Fig. 3 is a schematic diagram of the process of sampling the spatial position, attitude and size of a circular target. First, the circle center position of the circular target is uniformly sampled within the closed pyramid: the pyramid is cut and divided at fixed intervals, the intersection points of the cutting surfaces are taken as sampling points of the circle center position, and their coordinates are denoted (x, y, z). Second, at any position sampling point (x, y, z), the normal direction of the circular target is sampled in two dimensions, the yaw angle α and the pitch angle β: α is uniformly sampled within [0°, 180°] and β within [0°, 90°], giving a five-dimensional sampling state (x, y, z, α, β). Finally, the size of the circular target is set, i.e., the radius R of the circular target takes values within a certain range, forming a large number of six-dimensional sampling states (R, x, y, z, α, β).
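A minimal sketch of this six-dimensional sampling is given below; the field-of-view half-angles, depth range, radius values and step counts are arbitrary example values chosen only for illustration.

```python
import numpy as np

def sample_initial_states(h_half_fov_deg=30.0, v_half_fov_deg=20.0,
                          z_range=(0.5, 5.0), radii=(0.05, 0.1),
                          n_z=5, n_xy=5, n_alpha=7, n_beta=4):
    """Uniformly sample circle-center positions inside the camera view pyramid,
    yaw alpha in [0, 180] deg and pitch beta in [0, 90] deg, for each radius R."""
    th = np.tan(np.radians(h_half_fov_deg))
    tv = np.tan(np.radians(v_half_fov_deg))
    states = []
    for z in np.linspace(*z_range, n_z):                 # cut the pyramid at several depths
        for x in np.linspace(-th * z, th * z, n_xy):      # lateral extent grows with depth
            for y in np.linspace(-tv * z, tv * z, n_xy):
                for alpha in np.linspace(0.0, 180.0, n_alpha):
                    for beta in np.linspace(0.0, 90.0, n_beta):
                        for R in radii:
                            states.append((R, x, y, z, alpha, beta))
    return states

# e.g. len(sample_initial_states()) gives the number of sampled six-dimensional states
```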
In this embodiment, the camera is set to be static and the circular target moves within the camera field of view, i.e., the actual relative motion between camera and target is set up in reverse. On the one hand, this confines the setting of the relative pose of camera and circular target to the camera field of view, so that traversing all possible relative poses is more intuitive and clear; on the other hand, it makes it easier to analyze how the measurement accuracy of the camera for the circular target varies with distance, attitude and size according to the different position, attitude and size settings.
Optionally, as an embodiment of the present invention, in step S102, an imaging process of a circular target is simulated through a perspective projection principle, and a continuous circular curve of the circular target is projected onto an image plane to obtain an elliptic curve. In this embodiment, the spatial location, normal and size of the circular target are known.
Fig. 4 presents a schematic view of the projection of a space circular target onto the image plane. Two space circles of the same size can project onto the image plane as the same elliptical image, so solving for the pose of the circular target from the image ellipse equation and the known size of the circular target yields two groups of solutions. The calculation of the relative position and axial relationship between the circular target and the camera in step S102 is described below with reference to fig. 2. First, the imaging of the circular target in the image satisfies the perspective projection principle; if u is a point on the elliptic arc in the image, then the following holds:
u = Mp Xc = Mp [R(θ)  −R(θ)t] Xw,   (2-1)

where u = (u, v, 1)^T denotes the homogeneous coordinates of a point of the projected curve in the camera image plane, q = [t, θ] denotes the translation and rotation of the camera coordinate system relative to the world coordinate system, with t = [tx  ty  tz] the translation and θ = [α, β, γ] the rotation, R(θ) is the rotation matrix of the camera coordinate system relative to the world coordinate system, Xc = (Xc, Yc, Zc)^T and Xw = (Xw, Yw, Zw, 1)^T denote the coordinates of any point of the circular target in the camera coordinate system and its homogeneous coordinates in the world coordinate system, respectively, and Mp ∈ R^(3×3) is the perspective projection matrix from the lunar observation point to the image,

Mp = [ f   0   u0
       0   f   v0
       0   0   1  ],   (2-2)

where f denotes the camera focal length in units of pixels and (u0, v0) are the image principal point coordinates. Assuming that the world coordinate system is established at the center of the target circle with its Z axis perpendicular to the circle plane and directed toward the camera optical center, θ = [α, β, 0], and the rotation matrix can be written out in terms of cα, cβ, sα and sβ, where cα, cβ, sα and sβ denote cos α, cos β, sin α and sin β, respectively. Its elements are denoted

R(θ) = [ a1   b1   c1
         a2   b2   c2
         a3   b3   c3 ],

i.e., the corresponding elements of the two matrices are equal.
Let the radius of the circular target be r0; then the equation of the circular curve in the world coordinate system is

Xw^2 + Yw^2 = r0^2,  Zw = 0.   (2-3)

Let x = Xw − tx, y = Yw − ty, z = Zw − tz = −tz. Then, from (2-1), the projection point u of the circular target on the image plane satisfies

u − u0 = f (a1 Xw + b1 Yw − C1) / (a3 Xw − C3),
v − v0 = f (a2 Xw + b2 Yw − C2) / (a3 Xw − C3),   (2-4)

where C1 = a1 tx + b1 ty + c1 tz, C2 = a2 tx + b2 ty + c2 tz, C3 = a3 tx + c3 tz (the b3 term vanishes since b3 = 0 for θ = [α, β, 0]).

Combining (2-3) and (2-4), and writing u′ = u − u0, v′ = v − v0, the coordinates Xw and Yw can be expressed as functions of u′ and v′; this gives equation (2-5). Substituting equation (2-5) into equation (2-3) yields:
A0 u′^2 + B0 u′v′ + C0 v′^2 + D0 u′ + E0 v′ + F0 = 0,   (2-6)

where the coefficients A0, B0, C0, D0, E0 and F0 are explicit functions of a1…c3, C1…C3, f and r0; in particular,

D0 = 2(b1C3)f(b1C2 − b2C1) + 2(a3C2 − a2C3)f(a2C1 − a1C2) − 2 r0^2 a3 b2 f(a2b1 − a1b2),
E0 = 2(b1C3)f(b1C2 − b2C1) + 2(a1C3 − a3C1)f(a2C1 − a1C2) − 2 r0^2 a3 b1 f(a2b1 − a1b2).

The continuous curve in the image plane described by the binary quadratic equation (2-6) is the elliptic curve projected onto the image plane.
In this embodiment, by deriving the projection relation from the three-dimensional space curve to the two-dimensional plane, a continuous curve equation of the projection of the space circular target in the two-dimensional image is obtained, so that the setting of random errors in the imaging and discretized representation of the space circular target is confined to the two-dimensional image plane. This effectively simplifies the problem and prepares the conditions for simulating random imaging errors.
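The projection model (2-1) can be checked numerically by sampling the continuous space circle and projecting it through the pinhole intrinsics, as in the sketch below. The explicit rotation convention used here for θ = [α, β, 0] (a rotation about Z followed by one about Y) is an assumption for illustration; the derivation above only fixes the element labelling a1…c3.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project_circle(alpha, beta, t, r0, f, u0, v0, n_samples=720):
    """Project a space circle (radius r0, lying in the plane Zw = 0 of a world frame
    centred on the circle) into the image plane via Xc = R(theta) (Xw - t)."""
    R = rot_z(alpha) @ rot_y(beta)                      # assumed convention for theta = [alpha, beta, 0]
    phi = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    Xw = np.stack([r0 * np.cos(phi), r0 * np.sin(phi), np.zeros_like(phi)], axis=1)
    Xc = (R @ (Xw - np.asarray(t)).T).T                 # camera-frame coordinates
    u = f * Xc[:, 0] / Xc[:, 2] + u0                    # pinhole projection with intrinsics Mp
    v = f * Xc[:, 1] / Xc[:, 2] + v0
    return np.stack([u, v], axis=1)                     # dense samples of the continuous ellipse
```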
Optionally, as an embodiment of the present invention, in step S103, grid discretizing is performed on the elliptic curve according to an image resolution, and a monte carlo method is used to perform simulated superposition on an imaging error, so as to generate a discrete pixel point set curve with a random error, where the method specifically includes: dividing an image plane into (M-1) x (N-1) grids according to an image resolution M x N, carrying out segmentation processing on the elliptic curve by taking the grids as units, dividing the elliptic curve into a plurality of arc section units, defining the midpoint of each arc section unit as a discrete projection point of a continuous circular curve of a circular target, superposing random errors meeting probability distribution on the discrete projection points by using a Monte Carlo simulation method, and correspondingly distributing the random errors to four pixels around the discrete projection points to generate a discrete pixel point set curve with random errors.
FIG. 5 is a schematic diagram of a process for transforming a continuous arc segment into a discrete set of pixel points curves with superimposed random errors. The grid in the figure is a grid unit formed by surrounding four adjacent pixels in the image, and the whole image is divided into grids of (M-1) × (N-1) according to the resolution. And carrying out segmentation processing on the elliptic curve by taking the grid as a unit, and dividing the elliptic curve into a plurality of micro arc-segment units. The realization method comprises the following steps: 1) dividing an image plane into grids of (M-1) x (N-1) according to image resolution, wherein M is the number of image pixel rows, N is the number of image pixel columns, and the area surrounded by adjacent rows and adjacent columns is 1 grid unit; 2) as shown in the first step of fig. 5, the grid is used as a unit to perform segmentation processing on the elliptic curve, the tiny arc segments falling into a single grid unit are used as an arc segment unit, the elliptic curve is divided into tiny arc segment units of the elliptic curve, and the midpoint of each tiny arc segment is defined as a discrete projection point of the circular curve. The end points of each micro arc unit can be solved by the following method:
1) In the image plane, each grid line is described by a straight-line equation. A transverse grid line has equation u = i (i ∈ [1, M]); its intersection with the elliptic curve lies between two adjacent longitudinal grid lines, say v ∈ [j1, j1 + 1), where j1 ∈ [1, N]. A longitudinal grid line has equation v = j (j ∈ [1, N]); its intersection with the elliptic curve lies between two adjacent transverse grid lines, say u ∈ [i1, i1 + 1), where i1 ∈ [1, M].
2) The intersections of the grid-line equations with the elliptic curve equation (2-6) are computed by substituting u = i and v = j into equation (2-6), giving univariate quadratic equations in v′ and u′, respectively:
C0 v′^2 + [B0(i − u0) + E0] v′ + A0(i − u0)^2 + D0(i − u0) + F0 = 0,
A0 u′^2 + [B0(j − v0) + D0] u′ + C0(j − v0)^2 + E0(j − v0) + F0 = 0.
Solving these equations gives two solutions each, i.e., the two intersection points of the grid line with the elliptic curve. Varying the values of i and j yields the set of intersection points of all grid lines with the ellipse.
3) Starting from any intersection point, all intersection points are traversed in order along the elliptic curve; the arc between two adjacent intersection points is one tiny arc-segment unit, and the adjacent intersection points are its two end points (see the sketch below).
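The two univariate quadratics above lead directly to a numerical procedure for collecting the grid-line/ellipse intersections and the discrete projection points. The sketch below assumes the conic coefficients A0…F0 of (2-6) and the principal point (u0, v0) are available, and approximates each arc midpoint by the chord midpoint of its two end points.

```python
import numpy as np

def grid_ellipse_intersections(A0, B0, C0, D0, E0, F0, u0, v0, M, N):
    """Intersect every grid line u = i and v = j with the conic (2-6),
    written in the shifted coordinates u' = u - u0, v' = v - v0."""
    points = []
    for i in range(1, M + 1):                            # transverse lines u = i
        up = i - u0
        roots = np.roots([C0, B0 * up + E0, A0 * up**2 + D0 * up + F0])
        points += [(i, v0 + r.real) for r in roots if abs(r.imag) < 1e-9]
    for j in range(1, N + 1):                            # longitudinal lines v = j
        vp = j - v0
        roots = np.roots([A0, B0 * vp + D0, C0 * vp**2 + E0 * vp + F0])
        points += [(u0 + r.real, j) for r in roots if abs(r.imag) < 1e-9]
    return np.array(points)

def arc_midpoints(points, center):
    """Order the intersection points by angle about the ellipse centre and take the
    midpoint of each pair of consecutive points (chord midpoint) as an approximation
    of the arc-segment midpoint, i.e. a discrete projection point."""
    ang = np.arctan2(points[:, 1] - center[1], points[:, 0] - center[0])
    p = points[np.argsort(ang)]
    return 0.5 * (p + np.roll(p, -1, axis=0))
```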
Random errors satisfying a given probability distribution (for example a normal distribution) are superimposed on the discrete projection points by Monte Carlo simulation, and the errors are then distributed to the four pixels around each discrete projection point, generating a discrete pixel point set curve with random errors. The implementation is as follows. Considering the influence of pixel position deviation, light sensitivity and dark noise on the gray level (or brightness) of a single pixel in the image, the gray levels of the four pixels around a discrete projection point of the image plane are obtained as follows:
1) As in the second step of Fig. 5, the influence of pixel position deviation is considered: the position deviation of a discrete projection point (ui, vi) is assumed to satisfy a normal distribution P(x, y). A random error sequence (Δxij, Δyij) is generated according to P(x, y) and superimposed on the corresponding discrete projection point (ui, vi), giving a new discrete projection point position (xi, yi) = (ui, vi) + (Δxij, Δyij).
2) As in the third step of Fig. 5, the brightness of the new discrete projection point (xi, yi) is distributed to the four surrounding pixels according to a bilinear transformation relation. Let the gray value of the new discrete projection point (xi, yi) be Ii and let the coordinates of the four surrounding pixels be (ui, vi), (ui+1, vi), (ui, vi+1) and (ui+1, vi+1); the gray values of these four pixels are, respectively, (ui+1−xi)(vi+1−yi)Ii, (xi−ui)(vi+1−yi)Ii, (ui+1−xi)(yi−vi)Ii and (xi−ui)(yi−vi)Ii.
3) As in the fourth step of Fig. 5, the influence of pixel light sensitivity and dark noise is considered, and random gray-level offsets and attenuation are applied to the four pixels: the dark noise w is set to satisfy a normal distribution over the range [0, Ib], the light-sensitivity attenuation rate η satisfies a distribution over [ηr, 1], and the gray value of each pixel becomes wi1 + ηi1(ui+1−xi)(vi+1−yi)Ii, wi2 + ηi2(xi−ui)(vi+1−yi)Ii, wi3 + ηi3(ui+1−xi)(yi−vi)Ii and wi4 + ηi4(xi−ui)(yi−vi)Ii.
The first step of fig. 5 shows that the grid is used as a unit to perform segmentation processing on the elliptic curve, the tiny arc segments falling into a single grid unit are used as an arc segment unit, the elliptic curve is divided into tiny arc segment units of the elliptic curve, and the midpoint of each tiny arc segment is defined as a discrete projection point of the circular curve;
the second step of fig. 5, which represents the translation after the discrete projection point is superimposed with the error, and the black point in the grid is obtained after the translation;
the third step of fig. 5 shows the distribution of the black dots in the grid to the four surrounding pixel points, and the closer the four pixel points are to the middle point, the higher the brightness is (the larger the corresponding point size is);
the fourth step of fig. 5 shows that the brightness of the four surrounding pixels is respectively subjected to the random superposition and attenuation of the gray scale, and the size of the corresponding point changes after the brightness changes.
Based on the above steps, a discrete pixel point set curve with random errors is finally obtained; the superposition of the errors is mainly reflected in the gray values of the pixel points. The discrete point set is therefore thinned: using an image edge tracking method, the pixel with the larger gray value among adjacent pixels is retained and the pixels with smaller gray values are removed, yielding a single-pixel thinned edge point set.
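A compact sketch of the three-step error model (position jitter, bilinear gray distribution, dark noise and sensitivity attenuation) is given below. The noise standard deviation, point brightness Ii, dark-noise bound Ib and attenuation floor ηr are example values, and uniform draws are used in place of the distributions described above, purely for illustration.

```python
import numpy as np

def render_noisy_points(midpoints, M, N, sigma=0.3, I_i=255.0, I_b=20.0, eta_r=0.8, rng=None):
    """Superimpose random position errors on the discrete projection points and
    distribute the brightness of each jittered point to its four neighbouring
    pixels (bilinear weights), then add dark noise and sensitivity attenuation."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.zeros((M, N))
    for (u, v) in midpoints:
        x = u + rng.normal(0.0, sigma)                  # step 1: position jitter
        y = v + rng.normal(0.0, sigma)
        ui, vi = int(np.floor(x)), int(np.floor(y))
        if not (0 <= ui < M - 1 and 0 <= vi < N - 1):
            continue
        # step 2: bilinear distribution of the point brightness I_i to the 4 neighbours
        weights = [(ui + 1 - x) * (vi + 1 - y), (x - ui) * (vi + 1 - y),
                   (ui + 1 - x) * (y - vi), (x - ui) * (y - vi)]
        pixels = [(ui, vi), (ui + 1, vi), (ui, vi + 1), (ui + 1, vi + 1)]
        # step 3: dark noise in [0, I_b] and sensitivity attenuation in [eta_r, 1]
        for (pu, pv), wk in zip(pixels, weights):
            dark = rng.uniform(0.0, I_b)
            eta = rng.uniform(eta_r, 1.0)
            # keep the brighter contribution when neighbouring arc points overlap a pixel
            img[pu, pv] = max(img[pu, pv], dark + eta * wk * I_i)
    return img
```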
Optionally, as an embodiment of the present invention, in S104, performing arc segment extraction, ellipse fitting, and spatial circular pose recognition on the discrete pixel point set curve, and obtaining a state of a deviation spatial pose of a circular target includes:
performing edge single-pixel thinning on the discrete pixel point set curve to obtain a thinned edge point set of a single pixel; carrying out ellipse fitting on the refined edge point set of the single pixel to obtain a fitted ellipse curve equation in an image plane; and establishing an elliptic cone surface standard equation which takes the optical center of the camera as a vertex and passes through an elliptic curve based on the elliptic curve equation, solving two sections with the same size as the circular target by using the elliptic cone surface standard equation, determining two groups of space poses of the circular target relative to the camera according to the two sections, and selecting one group with small difference as a deviation space pose state of the circular target according to the known initial state of the circular target.
In S104, edge single-pixel thinning is firstly carried out on the discrete pixel point set curve in the step S103, and the thinning method is that a pixel with a larger gray value in adjacent pixels is reserved by adopting an image edge tracking method, and a pixel with a smaller gray value is removed to obtain a single-pixel thinned edge point set.
Then, two steps of ellipse feature extraction and ellipse feature-based space circular target positioning are respectively performed, and the two steps are sequentially introduced later.
FIG. 6 is a schematic diagram of an elliptic curve fitted to a set of discrete pixel points; only part of the discrete pixel points used for fitting the elliptic curve is shown. In this embodiment a least-squares curve fitting algorithm is used to fit an ellipse to the discrete pixel points. The idea is to average over all pixel points participating in the fit, so the fitting accuracy can reach sub-pixel level. The procedure for solving the description parameters of the ellipse equation by least-squares fitting of the discrete pixel points is described below.
The ellipse equation can be described by a quadratic curve with a constraint, whose general form is

A x^2 + 2B xy + C y^2 + 2D x + 2E y + F = 0.   (4-1)

Let Θ = [A B C D E F] and u = [x^2  2xy  y^2  2x  2y  1]. The coefficient vector Θ is determined only up to a scale factor: multiplying Θ by different scale factors describes the same curve and does not affect u. To make the solution Θ unique, a scale factor may be specified, such as F = 1 or ||Θ|| = 1. The fitting solution process is given below.
Let

u_i = [x_i^2  2x_iy_i  y_i^2  2x_i  2y_i  1]

denote the i-th pixel point on the image. According to formula (4-1), the following optimization problem is obtained:

min_Θ Σ_(i∈Ω) (u_i Θ^T)^2   subject to   Θ H Θ^T = 1,   (4-2)

where Ω denotes the set of all pixel points and H is the constraint matrix derived from the ellipse condition AC − B^2 > 0; it acts only on the first three coefficients, so that Θ H Θ^T = AC − B^2.
To solve the optimization problem (4-2), the constrained problem is converted into an unconstrained one by introducing a Lagrange multiplier λ; differentiating the quadratic objective function gives

S Θ^T = λ H Θ^T,   (4-3)
Θ H Θ^T = 1,   (4-4)

where S = U^T U and U is the matrix whose i-th row is u_i. Considering the special structure of U, S and H, the matrices are partitioned as

S = [ S1    S2
      S2^T  S3 ],   H = [ H1  0
                          0   0 ],   Θ^T = [ Θ1
                                             Θ2 ],

where Θ1 = [A B C]^T, Θ2 = [D E F]^T, and S1, S3 and H1 are 3×3 blocks.
According to the block relations, the matrix equation can be rewritten as:

S1 Θ1 + S2 Θ2 = λ H1 Θ1,   (4-5)
S2^T Θ1 + S3 Θ2 = 0.   (4-6)

As can be seen from equation (4-6), when S3 is a non-singular matrix, Θ2 can be expressed as a linear transformation of Θ1, i.e.,

Θ2 = −S3^(−1) S2^T Θ1.

Substituting this into equation (4-5) yields:

(S1 − S2 S3^(−1) S2^T) Θ1 = λ H1 Θ1.   (4-7)

Since H1 is a non-singular matrix, equation (4-7) can be written as:

H1^(−1) (S1 − S2 S3^(−1) S2^T) Θ1 = λ Θ1.   (4-8)

The equality constraint in equation (4-4) is rewritten as:

Θ1^T H1 Θ1 = 1.   (4-9)
Let

M = H1^(−1) (S1 − S2 S3^(−1) S2^T).

Then the ellipse fitting problem is converted into solving M Θ1 = λ Θ1, i.e., into finding the eigenvalues of the matrix M. Considering the non-negativity implied by the constraint Θ1^T H1 Θ1 = 1, the eigenvector corresponding to the non-negative eigenvalue of M is taken as the solution Θ1. Then Θ2 is found from

Θ2 = −S3^(−1) S2^T Θ1,

which yields the coefficient vector Θ of the general elliptic curve equation, i.e., the description parameters A, B, C, D, E and F of the quadratic ellipse equation.
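The block elimination above is, in effect, a direct constrained least-squares ellipse fit. The sketch below implements the same eigenvalue formulation numerically for the conic A x^2 + 2B xy + C y^2 + 2D x + 2E y + F = 0; the explicit form chosen for H1, with Θ1^T H1 Θ1 = AC − B^2, is an assumption consistent with the constraint, since H itself is not reproduced above.

```python
import numpy as np

def fit_ellipse(points):
    """Constrained least-squares ellipse fit: solve M*Theta1 = lambda*Theta1 with
    M = H1^-1 (S1 - S2 S3^-1 S2^T), then Theta2 = -S3^-1 S2^T Theta1."""
    x, y = points[:, 0], points[:, 1]
    U = np.column_stack([x * x, 2 * x * y, y * y, 2 * x, 2 * y, np.ones_like(x)])
    S = U.T @ U
    S1, S2, S3 = S[:3, :3], S[:3, 3:], S[3:, 3:]
    H1 = np.array([[0.0, 0.0, 0.5],
                   [0.0, -1.0, 0.0],
                   [0.5, 0.0, 0.0]])          # assumed form: Theta1^T H1 Theta1 = AC - B^2
    M = np.linalg.inv(H1) @ (S1 - S2 @ np.linalg.solve(S3, S2.T))
    _, eigvec = np.linalg.eig(M)
    best = None
    for k in range(3):                         # pick the eigenvector with AC - B^2 > 0
        t1 = np.real(eigvec[:, k])
        c = t1 @ H1 @ t1
        if c > 0:
            best = t1 / np.sqrt(c)             # scale so that AC - B^2 = 1
    if best is None:
        raise ValueError("no elliptical solution found")
    theta1 = best
    theta2 = -np.linalg.solve(S3, S2.T @ theta1)
    return np.concatenate([theta1, theta2])    # [A, B, C, D, E, F]
```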
The positioning algorithm based on the ellipse features resolves the space position and normal direction of the circular target as follows. A point u_j on the elliptic arc satisfies the ellipse equation in the image plane:

A x_j^2 + 2B x_jy_j + C y_j^2 + 2D x_j + 2E y_j + F = 0,   (4-10)

which is converted to matrix form:

u_j^T Ce u_j = 0,   with   Ce = [ A  B  D
                                  B  C  E
                                  D  E  F ],   (4-11)

where u_j = (x_j, y_j, 1)^T. Substituting equation (2-1) into equation (4-11) gives the surface equation of the elliptic cone formed by the camera optical center and the space circle:

Xc^T Mp^T Ce Mp Xc = 0.   (4-12)

Letting

Q = Mp^T Ce Mp,

the elliptic cone surface equation can be written as:

Xc^T Q Xc = 0.   (4-13)
In the camera coordinate system the expression of the cone is complicated, which makes describing and computing the cutting planes inconvenient. The camera coordinate space is therefore transformed into a standard space for the calculation, and the result is then transformed back into the camera coordinate system; the rotation keeps the apex of the elliptic cone fixed at the camera optical center. Since Q is a symmetric matrix, there exists an orthogonal matrix P that diagonalizes Q:

P^T Q P = diag(λ1, λ2, λ3),

where λ1, λ2, λ3 are the eigenvalues of Q. Points in the new coordinate space are given by Xcs = P^T Xc, which yields the standard elliptic cone surface equation with the Zcs axis as rotation axis:

λ1 Xcs^2 + λ2 Ycs^2 + λ3 Zcs^2 = 0.   (4-14)

As can be seen from equation (4-14), among the three coefficients λ1, λ2, λ3 two must have the same sign and one the opposite sign. Because the order of λ1, λ2, λ3 and the signs of the eigenvector matrix P are not unique, λ1, λ2, λ3 admit multiple sets of solutions. Let the eigenvalues of Q be κ1, κ2, κ3 with corresponding eigenvectors v1, v2, v3, and let κ1 and κ2 have the same sign with |κ1| > |κ2|; then λ1 = κ1, λ2 = κ2, λ3 = κ3. Let P = [p1, p2, p3]; p1 is taken as v1 or −v1 according to a sign condition on the eigenvectors, p2 = v2, and p3 = v2 × v3.
Based on the elliptic cone surface equation in the standard space, the circle center position and the normal vector of the plane containing the circle are obtained as closed-form expressions in the eigenvalues λ1, λ2, λ3 and the known circle radius; these are equations (4-15) and (4-16). Converting the results of equations (4-15) and (4-16) back into the camera coordinate system through the rotation P gives the spatial position and normal of the circular target, i.e., the relative position and axial relationship between the circular target and the camera.
In this embodiment, the elliptic curve is fitted by a least-squares method based on the discrete pixel points in the image; the overall averaging reduces the randomness error in the discretized representation of the imaged curve, so that the position accuracy of the elliptic curve in the image reaches sub-pixel level. On the other hand, obtaining the fixed-size circular cross-sections that intersect the elliptic cone through the analytic expression of the cone, and from them the spatial position of the circular target, is an analytic solution that introduces no additional uncertainty, so the accuracy of inverting the spatial target pose from the image-plane elliptic curve can be improved to the greatest possible extent.
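A short numerical sketch of the first half of this pose-recovery step is given below: it forms the cone matrix Q = Mp^T Ce Mp from the fitted coefficients and diagonalizes it as in (4-12) to (4-14). The closed-form center and normal expressions of (4-15) and (4-16), and the subsequent sign selection, are not reproduced here.

```python
import numpy as np

def elliptic_cone(theta, f, u0, v0):
    """Form the elliptic-cone matrix Q = Mp^T Ce Mp (equations (4-12)/(4-13)) and
    diagonalize it, P^T Q P = diag(lambda_1, lambda_2, lambda_3), as in (4-14)."""
    A, B, C, D, E, F = theta
    Ce = np.array([[A, B, D], [B, C, E], [D, E, F]])     # conic matrix of the fitted ellipse
    Mp = np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])
    Q = Mp.T @ Ce @ Mp
    lam, P = np.linalg.eigh(Q)                            # Q symmetric: orthogonal eigenvectors
    # two eigenvalues share a sign and one differs; their ordering and the sign
    # selection of the columns of P then fix the standard-space frame
    return Q, lam, P
```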
Optionally, as an embodiment of the present invention, in S105, error amounts of the deviation space pose state and the initial state of the circular target are obtained by comparison, and the positioning accuracy is determined according to the error amounts.
Optionally, as an embodiment of the present invention, the method further includes repeatedly performing the steps of S103 and S104 a preset number of times to obtain a plurality of error amounts of the deviation space pose state and the initial state of the circular target, and calculating a mathematical expectation of the plurality of error amounts, where the mathematical expectation is used as the positioning error corresponding to the current space pose.
Specifically, the mathematical expectation of the calculated error amounts is the positioning error corresponding to the current spatial pose. Assuming that M simulation calculations are performed, the spatial position error e_p and the normal attitude error e_n are calculated, respectively, as

e_p = (1/M) Σ_(k=1..M) ||Δp_k||,
e_n = (1/M) Σ_(k=1..M) ||Δn_k||,

where Δp_k = p_k − p_0 and Δn_k = n_k − n_0 denote the errors of the spatial position and of the normal direction of the circular target obtained by the k-th run of the spatial circle pose recognition algorithm, p_k denotes the spatial position of the circular target obtained by the k-th run, n_k denotes the normal direction of the circular target obtained by the k-th run, p_0 is the initially set spatial position of the circular target, and n_0 is the initially set normal direction of the circular target.
In the embodiment, by acquiring a plurality of error quantities, error conditions of various ellipse fitting and pose positioning can be covered to a great extent, and by calculating mathematical expectation of the error quantities, various random error conditions can be averaged to obtain an average error of spatial pose calculation. And a sufficient number of error quantities are selected through simulation, so that the accurate calculation of the average error of the spatial pose can be realized.
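A small sketch of these statistics is given below, assuming the position error is measured as the Euclidean distance between estimated and true circle centers and the normal error as the angle between the estimated and true normals; the exact error metric above may differ.

```python
import numpy as np

def pose_errors(p_est, n_est, p_true, n_true):
    """Mean position and normal errors over M simulation runs.

    p_est, n_est   : (M, 3) arrays of estimated circle centers and unit normals
    p_true, n_true : ground-truth center and unit normal from the initial state
    """
    e_p = np.mean(np.linalg.norm(p_est - p_true, axis=1))     # mean ||Delta p_k||
    cosang = np.clip(n_est @ n_true, -1.0, 1.0)
    e_n = np.mean(np.degrees(np.arccos(np.abs(cosang))))      # mean angular error of the normal
    return e_p, e_n
```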
The circular target visual positioning accuracy evaluation method provided by the embodiment of the invention is described in detail above with reference to fig. 1 to 6. The following describes in detail the circular target visual positioning accuracy evaluation apparatus provided by the embodiment of the invention with reference to fig. 7. The device comprises a setting unit, a projection unit, a discrete unit, a positioning unit and an error calculation unit.
Wherein the setting unit sets an initial state of the circular target with respect to the camera; the projection unit simulates the imaging process of a circular target through a perspective projection principle, and projects a continuous circular curve of the circular target to an image plane to obtain an elliptic curve; the discrete unit carries out grid discretization on the elliptic curve according to the image resolution to obtain discrete projection points, and imaging errors are simulated and superposed on the discrete projection points by utilizing a Monte Carlo method to generate a discrete pixel point set curve with random errors; the positioning unit performs arc segment extraction, ellipse fitting and space circular pose identification on the discrete pixel point set curve to obtain a deviation space pose state of the circular target; and the error calculation unit obtains the error amount of the deviation space pose state and the initial state of the circular target through comparison and determines the positioning precision according to the error amount.
It should be noted that the setting unit may set initial states of a plurality of groups of circular targets relative to the camera, and for each group of initial states, the projection unit, the discrete unit, the positioning unit, and the error calculation unit are respectively invoked to obtain a positioning error corresponding to each group of circular targets. And the circular targets corresponding to each group of initial states have different distances, different directions and different sizes relative to the camera.
The device provided by the embodiment does not need to deduce the explicit expression relation between the positioning precision and the distance and the direction of the circular target, but simulates the error source in target imaging and image processing, and counts the positioning precision under the condition of the existence of errors. The method solves the difficulty that the coupling of multiple error sources is difficult to separate, analyze and quantitatively evaluate in the visual positioning of the circular target, and can provide important support for the selection and utilization of the characteristics of the circular target in the design of a monocular vision measuring system.
Optionally, as an embodiment of the present invention, the setting unit has a function of determining a closed pyramid region formed by four planes passing through the optical center of the camera with the optical center of the camera as a center and the field angle of the camera as a boundary, uniformly sampling a circle center position of the circular target in the closed pyramid region, uniformly sampling a normal direction of the circular target in a preset angle range, and setting a size of the circular target to obtain an initial state including a spatial pose and a size of the circular target, where the spatial pose of the circular target includes the spatial position and the normal direction of the circular target.
Optionally, as an embodiment of the present invention, the discrete unit is configured to divide an image plane into (M-1) × (N-1) grids according to an image resolution M × N, perform segmentation processing on the elliptic curve with the grids as units, divide the elliptic curve into a plurality of arc-segment units, define a midpoint of each arc-segment unit as a discrete projection point of a continuous circular curve of a circular target, superimpose random errors satisfying a probability distribution on the discrete projection point by using a monte carlo simulation method, and correspondingly allocate the random errors to four pixels around the discrete projection point, so as to generate a discrete pixel point set curve with random errors.
Optionally, as an embodiment of the present invention, the discrete unit is further configured to:
superimpose, on a discrete projection point (ui, vi) whose position deviation satisfies a probability distribution P(x, y), a random error sequence (Δxij, Δyij) generated according to P(x, y), obtaining a new discrete projection point position (xi, yi) = (ui, vi) + (Δxij, Δyij);
distribute the brightness of the new discrete projection point (xi, yi) to the four surrounding pixels according to a bilinear transformation relation, the gray value of the new discrete projection point (xi, yi) being Ii and the coordinates of the four surrounding pixels being (ui, vi), (ui+1, vi), (ui, vi+1) and (ui+1, vi+1), so that the gray values of the four pixels are, respectively, (ui+1−xi)(vi+1−yi)Ii, (xi−ui)(vi+1−yi)Ii, (ui+1−xi)(yi−vi)Ii and (xi−ui)(yi−vi)Ii;
apply random gray-level offsets and attenuation to the four pixels, the dark noise w being set to satisfy a probability distribution over the range [0, Ib] and the light-sensitivity attenuation rate η a distribution over [ηr, 1], so that the gray value of each pixel becomes wi1 + ηi1(ui+1−xi)(vi+1−yi)Ii, wi2 + ηi2(xi−ui)(vi+1−yi)Ii, wi3 + ηi3(ui+1−xi)(yi−vi)Ii and wi4 + ηi4(xi−ui)(yi−vi)Ii.
Optionally, as an embodiment of the present invention, the positioning unit is specifically configured to: performing edge single-pixel thinning on the discrete pixel point set curve to obtain a thinned edge point set of a single pixel; carrying out ellipse fitting on the refined edge point set of the single pixel to obtain a fitted ellipse curve equation in an image plane; and establishing an elliptic cone surface standard equation which takes the optical center of the camera as a vertex and passes through an elliptic curve based on the elliptic curve equation, solving two sections with the same size as the circular target by using the elliptic cone surface standard equation, determining two groups of space poses of the circular target relative to the camera according to the two sections, and selecting one group with small difference as a deviation space pose state of the circular target according to the known initial state of the circular target.
Optionally, as another embodiment of the present invention, the error calculating unit is further configured to repeatedly call the discretization unit and the positioning unit to obtain a plurality of error quantities of the deviated spatial pose state and the initial state of the circular target, calculate a mathematical expectation of the plurality of error quantities, and use the mathematical expectation as the positioning error corresponding to the current spatial pose.
The embodiment of the invention also provides a storage medium, on which a computer program is stored, and when the program is executed by a processor, the method for evaluating the visual positioning accuracy of the circular target provided by the embodiment is realized.
In this embodiment, the computer readable storage medium, when executed by the processor, implements the steps of the circular target visual positioning accuracy assessment method provided in the above embodiments, so that all the beneficial effects of any one of the circular target visual positioning accuracy assessment methods provided in the above embodiments of the present invention are achieved, and details are not described herein again.
An embodiment of the present invention further provides a processing device, where the processing device includes: one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the circular target visual positioning accuracy assessment method as provided in the above embodiments.
In this embodiment, the processing device includes a processor, and the processor is configured to implement the steps of the circular target visual positioning accuracy assessment method provided in the foregoing embodiment when executing the computer program stored in the memory, so that all the beneficial effects of any one of the circular target visual positioning accuracy assessment methods provided in the foregoing embodiments of the present invention are achieved, and details are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (12)

1. A method for evaluating the visual positioning accuracy of a circular target is characterized by comprising the following steps:
setting an initial state of the circular target relative to the camera; the setting an initial state of the circular target with respect to the camera includes: determining a closed pyramid region formed by four planes passing through the optical center of the camera by taking the optical center of the camera as a center and taking the field angle of the camera as a boundary, uniformly sampling the circle center position of a circular target in the closed pyramid region, uniformly sampling the normal direction of the circular target in a preset angle range, setting the size of the circular target, and obtaining an initial state comprising the space pose and the size of the circular target, wherein the space pose of the circular target comprises the space position and the normal direction of the circular target;
simulating the imaging process of a circular target by a perspective projection principle, and projecting a continuous circular curve of the circular target to an image plane to obtain an elliptic curve;
carrying out grid discretization on the elliptic curve according to the image resolution to obtain discrete projection points, and carrying out simulated superposition on imaging errors on the discrete projection points by utilizing a Monte Carlo method to generate a discrete pixel point set curve with random errors;
performing arc segment extraction, ellipse fitting and space circular pose identification on the discrete pixel point set curve to obtain a deviation space pose state of the circular target;
and obtaining the error amount of the deviation space pose state and the initial state of the circular target through comparison, and determining the positioning precision according to the error amount.
2. The method of claim 1, wherein the grid discretizing the elliptic curve according to image resolution to obtain discrete projection points, and the simulated superposition of imaging errors on the discrete projection points by using a monte carlo method to generate a discrete pixel point set curve with random errors comprises:
dividing an image plane into (M-1) x (N-1) grids according to an image resolution M x N, carrying out segmentation processing on the elliptic curve by taking the grids as units, dividing the elliptic curve into a plurality of arc section units, defining the midpoint of each arc section unit as a discrete projection point of a continuous circular curve of a circular target, superposing random errors meeting probability distribution on the discrete projection points by using a Monte Carlo simulation method, and correspondingly distributing the random errors to four pixels around the discrete projection points to generate a discrete pixel point set curve with random errors.
3. The method of claim 2, wherein superimposing random errors satisfying a probability distribution on the discrete projection points using a monte carlo simulation method, and assigning the random errors to four pixels around the discrete projection points, and generating a discrete pixel point set curve with random errors comprises:
the position deviation of a discrete projection point (ui, vi) satisfies a probability distribution P(x, y); a random error sequence (Δxij, Δyij) is generated according to the probability distribution P(x, y) and (Δxij, Δyij) is superimposed on the corresponding discrete projection point (ui, vi) to obtain a new discrete projection point position (xi, yi) = (ui, vi) + (Δxij, Δyij);
the brightness of the new discrete projection point (xi, yi) is distributed to the four surrounding pixels according to a bilinear transformation relation, the gray value of the new discrete projection point (xi, yi) being Ii and the coordinates of the four surrounding pixels being (ui, vi), (ui+1, vi), (ui, vi+1) and (ui+1, vi+1), so that the gray values of the four pixels are, respectively: (ui+1−xi)(vi+1−yi)Ii, (xi−ui)(vi+1−yi)Ii, (ui+1−xi)(yi−vi)Ii, (xi−ui)(yi−vi)Ii;
random gray-level offsets and attenuation are applied to the four pixels, the dark noise w being set to satisfy a probability distribution over the range [0, Ib] and the light-sensitivity attenuation rate η satisfying a distribution over [ηr, 1], so that the gray value of each pixel becomes: wi1 + ηi1(ui+1−xi)(vi+1−yi)Ii, wi2 + ηi2(xi−ui)(vi+1−yi)Ii, wi3 + ηi3(ui+1−xi)(yi−vi)Ii, wi4 + ηi4(xi−ui)(yi−vi)Ii.
4. The method according to any one of claims 1 to 3, wherein the performing arc segment extraction, ellipse fitting and spatial circular pose recognition on the discrete pixel point set curve to obtain a deviation spatial pose state of a circular target comprises:
performing edge single-pixel thinning on the discrete pixel point set curve to obtain a thinned edge point set of a single pixel;
carrying out ellipse fitting on the refined edge point set of the single pixel to obtain a fitted ellipse curve equation in an image plane;
and establishing an elliptic cone surface standard equation which takes the optical center of the camera as a vertex and passes through an elliptic curve based on the elliptic curve equation, solving two sections with the same size as the circular target by using the elliptic cone surface standard equation, determining two groups of space poses of the circular target relative to the camera according to the two sections, and selecting one group with small difference as a deviation space pose state of the circular target according to the known initial state of the circular target.
5. The method according to any one of claims 1 to 3, further comprising the steps of repeatedly performing grid discretization on the elliptic curve according to image resolution for a preset number of times to obtain discrete projection points, performing simulated superposition on imaging errors on the discrete projection points by using a Monte Carlo method to generate a discrete pixel point set curve with random errors, performing arc segment extraction, ellipse fitting and spatial circular pose recognition on the discrete pixel point set curve to obtain a deviation spatial pose state of the circular target, so as to obtain a plurality of error quantities of the deviation spatial pose state and the initial state of the circular target, calculating a mathematical expectation of the plurality of error quantities, and taking the mathematical expectation as a positioning error corresponding to the current spatial pose.
6. A circular target visual positioning accuracy evaluation device, characterized by comprising:
a setting unit for setting an initial state of the circular target with respect to the camera, wherein the setting unit is configured to: construct a closed pyramid region bounded by four planes passing through the optical center of the camera, with the optical center of the camera as the center and the field-of-view angle of the camera as the boundary; determine the position of the center of the circular target within the closed pyramid region; uniformly sample the normal direction of the circular target within a preset angle range; and set the size of the circular target, so as to obtain an initial state comprising the spatial pose and the size of the circular target, the spatial pose of the circular target comprising its spatial position and normal direction (a sampling sketch follows this claim);
a projection unit for simulating the imaging process of the circular target according to the perspective projection principle and projecting the continuous circular curve of the circular target onto the image plane to obtain an elliptic curve;
a discretization unit for performing grid discretization on the elliptic curve according to the image resolution to obtain discrete projection points, and for simulating and superimposing imaging errors on the discrete projection points by using a Monte Carlo method to generate a discrete pixel point set curve with random errors;
a positioning unit for performing arc segment extraction, ellipse fitting and spatial circular pose recognition on the discrete pixel point set curve to obtain a deviation spatial pose state of the circular target;
and an error calculation unit for obtaining, through comparison, the error amount between the deviation spatial pose state and the initial state of the circular target, and for determining the positioning accuracy according to the error amount.
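The sampling sketch referred to in the setting unit above illustrates one way to enumerate initial states inside the closed field-of-view pyramid; the grid densities, depth range, tilt limit and target radius used here are assumptions of the example, not values prescribed by the claims.

```python
import numpy as np

def sample_initial_states(fov_x_deg, fov_y_deg, depth_range, tilt_max_deg,
                          n_pos=5, n_dir=8, radius=0.1):
    """Enumerate circle-centre positions inside the pyramid bounded by the four
    field-of-view planes through the optical centre, and uniformly sample the
    target normal within a preset tilt range about the optical axis."""
    states = []
    tx = np.tan(np.radians(fov_x_deg / 2.0))
    ty = np.tan(np.radians(fov_y_deg / 2.0))
    for z in np.linspace(depth_range[0], depth_range[1], n_pos):
        for x in np.linspace(-tx * z, tx * z, n_pos):
            for y in np.linspace(-ty * z, ty * z, n_pos):
                for tilt in np.linspace(0.0, np.radians(tilt_max_deg), n_dir):
                    for azim in np.linspace(0.0, 2 * np.pi, n_dir, endpoint=False):
                        normal = np.array([np.sin(tilt) * np.cos(azim),
                                           np.sin(tilt) * np.sin(azim),
                                           np.cos(tilt)])
                        states.append({"center": np.array([x, y, z]),
                                       "normal": normal,
                                       "radius": radius})
    return states
```

For example, sample_initial_states(40.0, 30.0, (0.5, 3.0), 60.0) spans a 40° × 30° field of view at depths between 0.5 and 3.0 scene units, with the target normal tilted up to 60° from the optical axis.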
7. The apparatus of claim 6, wherein the discretization unit is configured to: divide the image plane into (M-1) × (N-1) grids according to an image resolution of M × N; segment the elliptic curve in units of grid cells so that the elliptic curve is divided into a plurality of arc segment units; define the midpoint of each arc segment unit as a discrete projection point of the continuous circular curve of the circular target; superimpose random errors satisfying a probability distribution on the discrete projection points by using a Monte Carlo simulation method; and correspondingly allocate the random errors to the four pixels around each discrete projection point, so as to generate a discrete pixel point set curve with random errors.
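One possible reading of the grid discretization described in claim 7 is sketched below: consecutive samples of the projected elliptic curve that fall into the same pixel cell form one arc-segment unit, and the mean of those samples stands in for the unit's midpoint. The dense sampling input and the use of the sample mean are simplifying assumptions of this example.

```python
import numpy as np

def discretize_ellipse(ellipse_xy, M, N):
    """Group densely sampled ellipse points by the (M-1) x (N-1) pixel grid cells
    of an M x N image and return one representative point per cell-wise arc
    segment (the sample mean, used here to approximate the midpoint)."""
    points, current, last_cell = [], [], None
    for x, y in ellipse_xy:
        if not (0.0 <= x < M - 1 and 0.0 <= y < N - 1):
            continue                                  # sample falls outside the image grid
        cell = (int(np.floor(x)), int(np.floor(y)))
        if cell != last_cell and current:
            points.append(np.mean(current, axis=0))   # close the previous arc segment
            current = []
        current.append((x, y))
        last_cell = cell
    if current:
        points.append(np.mean(current, axis=0))
    return np.asarray(points)
```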
8. The apparatus of claim 6, wherein the discretization unit is further configured such that:
the discrete projection points (u_i, v_i) satisfy a probability distribution P(x, y); an error random sequence (Δx_ij, Δy_ij) is generated according to the probability distribution P(x, y), and (Δx_ij, Δy_ij) is superimposed on the corresponding discrete projection point (u_i, v_i) to obtain a new discrete projection point position (x_i, y_i) = (u_i, v_i) + (Δx_ij, Δy_ij);
the brightness of the new discrete projection point (x_i, y_i) is distributed to the four surrounding pixels according to a bilinear transformation relation: the gray value of the new discrete projection point (x_i, y_i) is denoted I_i, and the coordinates of the four surrounding pixels are (u_i, v_i), (u_i + 1, v_i), (u_i, v_i + 1) and (u_i + 1, v_i + 1); the gray values assigned to the four pixels are respectively (u_i + 1 - x_i)(v_i + 1 - y_i)I_i, (x_i - u_i)(v_i + 1 - y_i)I_i, (u_i + 1 - x_i)(y_i - v_i)I_i and (x_i - u_i)(y_i - v_i)I_i;
random gray-scale superposition and attenuation are then applied to the four pixels respectively: a dark noise w is set to satisfy a probability distribution over the range [0, I_b], and a light-sensitivity attenuation rate η is set to satisfy a probability distribution over the range [η_r, 1], so that the gray value of each of the four pixels becomes w_i1 + η_i1(u_i + 1 - x_i)(v_i + 1 - y_i)I_i, w_i2 + η_i2(x_i - u_i)(v_i + 1 - y_i)I_i, w_i3 + η_i3(u_i + 1 - x_i)(y_i - v_i)I_i and w_i4 + η_i4(x_i - u_i)(y_i - v_i)I_i.
9. The apparatus according to any one of claims 6 to 8, wherein the positioning unit is specifically configured to:
perform single-pixel edge thinning on the discrete pixel point set curve to obtain a single-pixel thinned edge point set;
perform ellipse fitting on the single-pixel thinned edge point set to obtain a fitted elliptic curve equation in the image plane;
and establish, based on the elliptic curve equation, a standard equation of the elliptic cone that has the optical center of the camera as its vertex and passes through the elliptic curve; solve, from the elliptic cone equation, the two cross-sections whose size equals that of the circular target; determine, from the two cross-sections, two groups of spatial poses of the circular target relative to the camera; and select the group with the smaller deviation from the known initial state of the circular target as the deviation spatial pose state of the circular target.
10. The apparatus according to any one of claims 6 to 8, wherein the error calculation unit is further configured to repeatedly invoke the discretization unit and the positioning unit to obtain a plurality of error amounts between the deviation spatial pose states and the initial state of the circular target, calculate a mathematical expectation of the plurality of error amounts, and take the mathematical expectation as the positioning error corresponding to the current spatial pose.
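The error amounts compared by the error calculation unit can be made concrete with a small sketch: the Euclidean distance between the recovered and true circle centres, and the angle between the two normals. These particular metrics are illustrative choices rather than ones fixed by the claims.

```python
import numpy as np

def pose_error(center_true, normal_true, center_est, normal_est):
    """Return (position error, normal-direction error in degrees) between the
    recovered (deviated) pose and the known initial state of the circular target."""
    pos_err = float(np.linalg.norm(np.asarray(center_est) - np.asarray(center_true)))
    cos_ang = np.dot(normal_est, normal_true) / (
        np.linalg.norm(normal_est) * np.linalg.norm(normal_true))
    # a circle's normal has a sign ambiguity, so compare directions up to sign
    ang_err = float(np.degrees(np.arccos(np.clip(abs(cos_ang), -1.0, 1.0))))
    return pos_err, ang_err
```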
11. A storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the method according to any one of claims 1 to 5.
12. A processing device, characterized in that the processing device comprises:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 5.
CN201810273909.2A 2018-03-29 2018-03-29 Circular target visual positioning precision evaluation method and device, storage medium and processing equipment Active CN108596980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810273909.2A CN108596980B (en) 2018-03-29 2018-03-29 Circular target visual positioning precision evaluation method and device, storage medium and processing equipment

Publications (2)

Publication Number Publication Date
CN108596980A CN108596980A (en) 2018-09-28
CN108596980B true CN108596980B (en) 2021-12-07

Family

ID=63624022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810273909.2A Active CN108596980B (en) 2018-03-29 2018-03-29 Circular target visual positioning precision evaluation method and device, storage medium and processing equipment

Country Status (1)

Country Link
CN (1) CN108596980B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109760107B (en) * 2019-01-22 2022-04-12 九天创新(广东)智能科技有限公司 Monocular vision-based robot positioning accuracy evaluation method
CN109949367B (en) * 2019-03-11 2023-01-20 中山大学 Visible light imaging positioning method based on circular projection
CN110378970B (en) * 2019-07-08 2023-03-10 武汉理工大学 Monocular vision deviation detection method and device for AGV
CN110674810B (en) * 2019-08-30 2023-04-18 苏州悦谱半导体有限公司 Optimization method applied to semiconductor optical CCD vision
US11610330B2 (en) 2019-10-08 2023-03-21 Samsung Electronics Co., Ltd. Method and apparatus with pose tracking
CN110782492B (en) * 2019-10-08 2023-03-28 三星(中国)半导体有限公司 Pose tracking method and device
CN111210413B (en) * 2020-01-02 2023-10-03 北京机科国创轻量化科学研究院有限公司 Pose detection method in movement process of wire feeding mechanism
CN111380503B (en) * 2020-05-29 2020-09-25 电子科技大学 Monocular camera ranging method adopting laser-assisted calibration
CN111780745B (en) * 2020-06-29 2023-08-01 南京航空航天大学 Short arc ellipse fitting optimization method for deep space exploration optical navigation
CN112365531B (en) * 2020-10-13 2022-08-12 西安理工大学 Reliability evaluation method for ellipse detection result of automatic scrolling system
CN112255869B (en) * 2020-11-03 2021-09-14 成都景中教育软件有限公司 Parameter-based three-dimensional graph dynamic projection implementation method
CN112381880A (en) * 2020-11-27 2021-02-19 航天科工智能机器人有限责任公司 Binocular vision pose estimation method based on circle features
CN112505671B (en) * 2021-02-07 2021-05-25 北方工业大学 Millimeter wave radar target positioning method and device under GNSS signal missing environment
CN113064272B (en) * 2021-03-04 2022-05-17 武汉大学 Optical free-form surface construction method and system under semi-discrete optimal transmission
CN115493499B (en) * 2021-12-30 2024-04-19 北京航天飞行控制中心 Cylinder or cylinder-like assembly method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715471A (en) * 2014-01-03 2015-06-17 杭州海康威视数字技术股份有限公司 Target positioning and tracking method and device
CN104729481A (en) * 2015-03-12 2015-06-24 北京空间飞行器总体设计部 Cooperative target pose precision measurement method based on PNP perspective model
CN107167116A (en) * 2017-03-13 2017-09-15 湖北汽车工业学院 A kind of visible detection method of space circular arc pose

Similar Documents

Publication Publication Date Title
CN108596980B (en) Circular target visual positioning precision evaluation method and device, storage medium and processing equipment
Zhang et al. Image engineering
Brown et al. A framework for 3D model-based visual tracking using a GPU-accelerated particle filter
Olesen et al. Real-time extraction of surface patches with associated uncertainties by means of kinect cameras
Senin et al. Statistical point cloud model to investigate measurement uncertainty in coordinate metrology
Kroemer et al. Point cloud completion using extrusions
CN114170146A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Yang et al. Error analysis and planning accuracy for dimensional measurement in active vision inspection
Ou et al. Reinforcement learning-based calibration method for cameras with large FOV
Schröder et al. Design and evaluation of reduced marker layouts for hand motion capture
Wang et al. Accuracy of monocular gaze tracking on 3d geometry
JP2022071850A (en) Computer-implemented method for determining at least one geometric parameter required for evaluating measurement data
Moustakides et al. 3D image acquisition and NURBS based geometry modelling of natural objects
US20150153162A1 (en) Method of three-dimensional measurements by stereo-correlation using a parametric representation of the measured object
Cho et al. Real-time 3D reconstruction method using massive multi-sensor data analysis and fusion
Dierenbach et al. Next-Best-View method based on consecutive evaluation of topological relations
Sesma-Sanchez et al. Design issues of remote eye tracking systems with large range of movement
Iversen et al. Optimizing sensor placement: A mixture model framework using stable poses and sparsely precomputed pose uncertainty predictions
Presenti et al. CNN-based pose estimation of manufactured objects during inline X-ray inspection
Cavoretto et al. Analysis of compactly supported transformations for landmark-based image registration
Stenholt et al. Shaping 3-D boxes: A full 9 degree-of-freedom docking experiment
Saini et al. Free-form surface reconstruction from arbitrary perspective images
Morel et al. High Accuracy Terrain Reconstruction from point clouds using implicit deformable model
Chen et al. Imaging model of the off-axis non-central spherical mirrored stereo vision sensor
Ilie et al. A stochastic quality metric for optimal control of active camera network configurations for 3D computer vision tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant