CN104504691A - Camera position and posture measuring method on basis of low-rank textures - Google Patents


Info

Publication number
CN104504691A
Authority
CN
China
Prior art keywords: theta, low, video camera, image, coordinate system
Prior art date
Legal status: Granted
Application number
CN201410777911.5A
Other languages: Chinese (zh)
Other versions: CN104504691B (en)
Inventors: 孙怡 (Sun Yi), 张婷婷 (Zhang Tingting)
Current Assignee: Dalian University of Technology
Original Assignee: Dalian University of Technology
Application filed by Dalian University of Technology
Priority to CN201410777911.5A
Publication of CN104504691A
Application granted
Publication of CN104504691B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image

Abstract

The invention relates to a camera position and attitude measurement method based on low-rank textures, and belongs to the technical field of computer vision measurement. The method uses low-rank textures present in an indoor scene, such as the grid texture of a ceiling, to measure the position and attitude of a camera. Images of the low-rank textures are shot by the camera; the projection matrix of the camera during shooting is represented with Euler angles, and the Euler angles representing the camera attitude are obtained by solving an optimization problem derived from a low-rank texture imaging model. The geometry of the textures in the scene is then combined with the camera imaging principle to work out the position of the camera in the scene. The effect and benefit of the method are that six-degree-of-freedom measurement of the camera's attitude and spatial position can be implemented by exploiting the low-rank textures in the scene; moreover, the camera may rotate during measurement, so the method has a wider range of application in vision measurement.

Description

Camera position and attitude measurement method based on low-rank textures
Technical field
The invention belongs to the technical field of computer vision measurement, and relates to a method for measuring camera spatial attitude and position based on low-rank textures.
Background art
Indoor measurement is a widely used technology. With the continuous improvement of CCD technology and image processing techniques in recent years, vision measurement has attracted growing attention from researchers for its wide applicability and high precision. Vision localization uses one or more cameras to acquire feature-bearing images of a scene, expresses and extracts the image features by various methods, and finally uses the information these features provide to solve for the camera's position and attitude in space. Image features divide into local features and global features. Local features mainly comprise corner points, straight lines, and so on. Most current vision localization methods are based on local image features, extracting points or lines from the image to localize the camera. The accuracy of such methods depends on the precision of local feature extraction, which is very susceptible to noise; in some scenes the precision does not reach the requirements of practical application. Global features mainly comprise the color, texture, etc. of the image. Textures on indoor planes such as ceilings, floors, or walls are comparatively easy to find, and often exhibit repetition or symmetry; these properties of texture can be exploited for camera measurement. However, few current methods use texture features to measure camera position and attitude.
In addition, in most current work the camera is simply placed in some plane with a fixed mounting angle, and may not rotate during shooting. Moreover, most methods solve only the two-dimensional coordinates of the camera in its supporting plane, not its three-dimensional coordinates in space.
Summary of the invention
The object of the invention is to provide a camera position and attitude measurement method based on low-rank textures. In this method a camera with an unfixed mounting angle shoots textures in an indoor scene that have the low-rank property, such as the grid texture on a ceiling, and the captured images are processed to measure the camera's spatial position and attitude in all six degrees of freedom.
The technical scheme of the present invention is as follows:
The camera shoots texture images in the indoor scene, such as the grid texture on a ceiling. The projection matrix in the low-rank texture imaging model is represented by Euler angles, and solving the resulting optimization problem yields the Euler angles that represent the camera attitude. The geometry of the texture and the imaging relations of the camera are then used to solve for the camera's position in space. The technical scheme is described in detail below: first low-rank textures and their imaging by the camera are introduced, then how the low-rank property is used to solve for the camera attitude, and finally the steps for solving the camera's spatial position.
Step 1: obtain an image comprising a low-rank texture
Ceilings, floors, and walls in real scenes often yield textures such as those shown in Figure 1. These textures are symmetric and repetitive, and because the rank of their image matrix is low, we call them low-rank textures. The definition of a low-rank texture is as follows: a two-dimensional (2D) texture is usually regarded as a function I_0(x, y) defined on the plane R^2; if the family of functions I_0(x, ·) spans a finite low-dimensional subspace, the texture I_0 is defined as a low-rank texture. As shown in formula (1), for some small integer k:

$$r = \operatorname{rank}(I_0) \triangleq \dim\left(\operatorname{span}\{ I_0(x,\cdot)\}\right) \le k \qquad (1)$$

I_0 is then considered a rank-r texture. In general, when k is less than half the number of pixels in the evaluated window, the texture is considered low-rank.
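The rank condition of formula (1) can be checked numerically: a periodic grid texture, sampled as a matrix, has far fewer independent rows than pixels. A minimal numpy sketch (the synthetic texture and window size are illustrative, not from the patent):

```python
import numpy as np

# Build a 64x64 synthetic "grid" texture: dark background with
# bright horizontal and vertical lines every 16 pixels.
n = 64
texture = np.zeros((n, n))
texture[::16, :] = 1.0   # horizontal lines
texture[:, ::16] = 1.0   # vertical lines

# numpy's matrix_rank counts singular values above a tolerance.
r = np.linalg.matrix_rank(texture)
print(r)            # 2: only two distinct row patterns occur
print(r <= n // 2)  # True: satisfies the "less than half the window" test
```

The same check on a projectively distorted copy of the texture gives a much higher numerical rank, which is exactly the property the method exploits.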
The method first requires a calibrated camera to shoot a low-rank texture in the real scene. A calibrated camera is one whose focal length and principal point have been obtained by some calibration method. The mounting angle of the camera need not be fixed during shooting: the camera can adjust its own attitude while shooting, and its position and attitude are measured from the result.
Step 2: solve the camera attitude using the low-rank property of the texture
When the camera shoots a low-rank texture, projective deformation and noise are introduced, so the captured image no longer possesses the low-rank property. The following steps recover the low-rank texture and obtain the shooting attitude of the camera.
1) Establish the required coordinate systems
First the image coordinate system, camera coordinate system, and world coordinate system are established. As shown in Figure 2, the image coordinate system takes the center of the captured image as its origin O, with the horizontal axis OU and vertical axis OV parallel to the image edges. The camera coordinate system takes the optical center O_c of the measuring camera as its origin; the horizontal axis O_cX_c and vertical axis O_cY_c are parallel to the OU and OV axes of the image coordinate system respectively, and the axis O_cZ_c is perpendicular to the X_cO_cY_c plane. The world coordinate system O_wX_wY_wZ_w is chosen according to need; generally its X_wO_wY_w plane is placed in the plane containing the low-rank texture, and the directions of its axes are chosen so that the rank of the matrix expressing the low-rank texture is minimal.
2) Establish the camera imaging model and represent the camera projection matrix by Euler angles
Choose a point P on the low-rank texture. Let its coordinates in the world coordinate system be (x_w, y_w, z_w, 1)^T, its coordinates in the camera coordinate system be (x_c, y_c, z_c)^T, and the coordinates of its image p in the image plane be (u, v, 1)^T. Under the pinhole imaging model, the relation between P and p is described by formula (2):

$$s\begin{bmatrix}u\\v\\1\end{bmatrix} = N\begin{bmatrix}x_c\\y_c\\z_c\end{bmatrix} = N\,[\,R_w^c \mid T_w^c\,]\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix},\quad N=\begin{bmatrix}f/dx&0&u_0\\0&f/dy&v_0\\0&0&1\end{bmatrix},\; R_w^c=\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix},\; T_w^c=\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix} \qquad (2)$$

In formula (2), s is a non-zero scale factor, N is the intrinsic matrix of the camera, and f is its focal length; dx and dy are the pixel dimensions in the U and V directions respectively, and (u_0, v_0) are the coordinates of the camera's optical center in the image coordinate system. R_w^c and T_w^c are the rotation matrix and translation vector of the camera relative to the world coordinate system. Formula (2) is the camera imaging model.
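The imaging model of formula (2) can be exercised directly: a world point maps to a pixel after dividing out the scale factor s. A small sketch with made-up intrinsics and an identity rotation (all numeric values are illustrative, not from the patent):

```python
import numpy as np

# Illustrative intrinsics: f/dx = f/dy = 800 pixels, principal point (320, 240).
N = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                      # camera axes aligned with world axes
T = np.array([0.0, 0.0, 2000.0])   # world origin 2000 mm in front of the camera

def project(Pw, N, R, T):
    """Apply formula (2): s * [u, v, 1]^T = N (R Pw + T)."""
    pc = R @ Pw + T    # world -> camera coordinates
    s = pc[2]          # the non-zero scale factor
    uv1 = N @ pc / s
    return uv1[:2]

u, v = project(np.array([100.0, 0.0, 0.0]), N, R, T)
print(u, v)  # 360.0 240.0: the point lands right of the principal point
```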
According to Euler's theorem, the rotation matrix R_w^c in formula (2) can be represented by three Euler angles. The rotation order chosen for the camera axes is: first about the Z axis, then about the Y axis, and finally about the X axis, with clockwise rotation taken as positive; the three corresponding Euler angles are θ_x, θ_y, θ_z. The rotation matrix is then written as:

$$R_w^c=\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix}=\begin{bmatrix}\cos\theta_y\cos\theta_z & -\cos\theta_y\sin\theta_z & \sin\theta_y\\ \cos\theta_x\sin\theta_z+\sin\theta_x\sin\theta_y\cos\theta_z & \cos\theta_x\cos\theta_z-\sin\theta_x\sin\theta_y\sin\theta_z & -\sin\theta_x\cos\theta_y\\ \sin\theta_x\sin\theta_z-\cos\theta_x\sin\theta_y\cos\theta_z & \sin\theta_x\cos\theta_z+\cos\theta_x\sin\theta_y\sin\theta_z & \cos\theta_x\cos\theta_y\end{bmatrix} \qquad (3)$$

Formula (3) is the rotation matrix expressed in Euler angles. With the coordinate systems and the camera imaging model established, measuring the attitude and position of the camera amounts to solving for the Euler angles θ_x, θ_y, θ_z of the camera coordinate axes in the world coordinate system, and for the coordinates of the optical center O_c in the world coordinate system.
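The closed form in (3) is the composition of three elementary rotations applied in the stated order (Z first, then Y, then X), so the matrices multiply right to left. A sketch that builds the matrix this way and sanity-checks it (angle values are illustrative):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_to_matrix(tx, ty, tz):
    # Rotation about Z is applied first, then Y, then X,
    # so the matrices compose right-to-left.
    return rot_x(tx) @ rot_y(ty) @ rot_z(tz)

R = euler_to_matrix(0.1, -0.2, 0.3)
# A valid rotation matrix is orthogonal with determinant +1.
print(np.allclose(R @ R.T, np.eye(3)))   # True
print(np.isclose(np.linalg.det(R), 1.0)) # True
# Entry (0, 0) matches the closed form cos(ty) * cos(tz) in (3).
print(np.isclose(R[0, 0], np.cos(-0.2) * np.cos(0.3)))  # True
```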
3) Establish the low-rank texture imaging model based on Euler angles
Let I_0 denote the original low-rank texture image and I the low-rank texture image actually captured. As in Figure 3, the X_wO_wY_w plane of the world coordinate system is placed in the plane containing the low-rank texture, with the axis directions chosen so that the rank of the matrix expressing the low-rank texture is minimal. I_0 is regarded as the image obtained by shooting from a position with only a translation along the Z_w axis of the world coordinate system and no rotation, called the frontal image; I is regarded as the image obtained by shooting from a position with some rotation and translation relative to the world coordinate system, called the tilted image. According to the camera model and imaging principle, when a point P(x_w, y_w, z_w) on the low-rank texture is shot frontally, the relation between its image coordinates (u_1, v_1) and world coordinates (x_w, y_w, z_w) is expressed by formula (4); for a tilted shot, the relation between the image coordinates (u_2, v_2) and the world coordinates is expressed by formula (5):

$$s\begin{bmatrix}u_1\\v_1\\1\end{bmatrix}=\begin{bmatrix}f_u&0&u_0\\0&f_v&v_0\\0&0&1\end{bmatrix}[\,R_1\mid T_1\,]\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix},\quad R_1=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix},\; T_1=\begin{bmatrix}0\\0\\d\end{bmatrix} \qquad (4)$$

$$s\begin{bmatrix}u_2\\v_2\\1\end{bmatrix}=\begin{bmatrix}f_u&0&u_0\\0&f_v&v_0\\0&0&1\end{bmatrix}[\,R_2\mid T_2\,]\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix},\quad R_2=\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix},\; T_2=\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix} \qquad (5)$$

Here R_i, T_i (i = 1, 2) are the rotation matrices and translation vectors between the camera coordinate system and the world coordinate system for the frontal and tilted shots respectively; d is the translation depth between the camera coordinate system and the world coordinate system for the frontal shot; r_mn (m, n = 1, 2, 3) are the entries of the rotation matrix, expressible by Euler angles; and t_1, t_2, t_3 are the three components of the translation vector.
Combining (4) and (5), the relation between (u_1, v_1) and (u_2, v_2) is obtained as follows:

$$u_2=\frac{f_d\!\left(r_{11}u_1+r_{12}v_1+\tfrac{f_d}{d}t_1\right)}{r_{31}u_1+r_{32}v_1+\tfrac{f_d}{d}t_3}=\frac{f_d\!\left(r_{11}u_1+r_{12}v_1+f_d t'_1\right)}{r_{31}u_1+r_{32}v_1+f_d t'_3},\qquad v_2=\frac{f_d\!\left(r_{21}u_1+r_{22}v_1+\tfrac{f_d}{d}t_2\right)}{r_{31}u_1+r_{32}v_1+\tfrac{f_d}{d}t_3}=\frac{f_d\!\left(r_{21}u_1+r_{22}v_1+f_d t'_2\right)}{r_{31}u_1+r_{32}v_1+f_d t'_3} \qquad (6)$$

where t' = [t'_1, t'_2, t'_3] = [t_1/d, t_2/d, t_3/d] is the normalized displacement vector. In deriving (6) it is assumed, without loss of generality, that the captured planar scene is the z = 0 plane of the world coordinate system.
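Relation (6) is a plane-induced homography: each frontal-view pixel maps to a tilted-view pixel through R and the normalized displacement t'. A sketch of that mapping (focal length and pose are illustrative; pixel coordinates are taken relative to the principal point, as in the derivation):

```python
import numpy as np

def frontal_to_tilted(u1, v1, R, t_prime, f):
    """Map a frontal-view pixel to the tilted view per formula (6)."""
    den = R[2, 0] * u1 + R[2, 1] * v1 + f * t_prime[2]
    u2 = f * (R[0, 0] * u1 + R[0, 1] * v1 + f * t_prime[0]) / den
    v2 = f * (R[1, 0] * u1 + R[1, 1] * v1 + f * t_prime[1]) / den
    return u2, v2

f = 800.0
R = np.eye(3)                        # no rotation between the two shots
t_prime = np.array([0.0, 0.0, 1.0])  # pure depth: t3 / d = 1
u2, v2 = frontal_to_tilted(100.0, 50.0, R, t_prime, f)
print(u2, v2)  # 100.0 50.0: the identity pose reproduces the input pixel
```

With the identity pose the denominator reduces to f, so the mapping is the identity; a non-trivial R or t' warps the pixel, which is exactly the deformation the optimization in the next subsection undoes.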
Let the symbol ∘ denote the mapping in (6) between (u_1, v_1) and (u_2, v_2); the relation between the two pixels is then rewritten as:

$$(u_2, v_2) = \tau \circ (u_1, v_1) \qquad (7)$$

where τ is the set of parameters on which the ∘ operation depends: r_mn (m, n = 1, 2, 3) and t'_1, t'_2, t'_3. The r_mn are the entries of the rotation matrix from the world coordinate system to the camera, represented by the Euler angles θ_x, θ_y, θ_z. From the derivation above, any point in the frontal image I_0 can be converted by the ∘ operation into a point in the tilted image I. Generalizing this single-pixel relation between I_0 and I to all pixels of the image, and accounting for random noise E, the relation between I_0 and I follows as:

$$I = \tau \circ (I_0 + E) \qquad (8)$$

where τ is the parameter set on which the ∘ operation in (7) depends. This establishes the imaging relation model of the low-rank texture.
4) Solve the low-rank texture imaging model to obtain the camera attitude
Solving for the camera shooting attitude means solving for τ. To do so, formula (8) is first converted into the optimization problem shown in formula (9):

$$\min_{I_0, E, \tau}\; \operatorname{rank}(I_0) + \lambda\|E\|_0 \quad \text{s.t.}\quad I = \tau\circ(I_0+E) \qquad (9)$$

where ||E||_0 denotes the number of non-zero entries of E. To make this problem tractable, formula (9) is approximated as:

$$\min_{I_0, E, \tau}\; \|I_0\|_* + \lambda\|E\|_1 \quad \text{s.t.}\quad I = \tau\circ(I_0+E) \qquad (10)$$

where ||I_0||_* denotes the nuclear norm of I_0 and ||E||_1 the l_1-norm of E. The constraint in formula (10) is non-linear; it is linearized by expanding it at the point (u, v) according to Taylor's formula. Since u and v depend on τ, the expansion gives:

$$I(u(\tau+\Delta\tau), v(\tau+\Delta\tau)) \approx I(u(\tau), v(\tau)) + \nabla I_\tau(u(\tau), v(\tau))\,\Delta\tau \qquad (11)$$

where ∇I_τ is the Jacobian of the image I with respect to the parameters in τ. Through the above derivation, the original optimization problem is rewritten as:

$$\min_{I_0, E, \Delta\tau}\; \|I_0\|_* + \lambda\|E\|_1 \quad \text{s.t.}\quad I(u(\tau), v(\tau)) + \nabla I_\tau(u(\tau), v(\tau))\,\Delta\tau = I_0 + E \qquad (12)$$
The optimization problem expressed by formula (12) is solved iteratively; the concrete steps are as follows:
S1. Accept the captured low-rank texture image as the input image. Set the initial value of τ to τ_0 = (0, 0, 0, 0, 0, 1), where the first three parameters are the initial Euler angles and the last three the initial normalized displacement. Select a convergence precision ε > 0 and a weight λ > 0.
S2. Take a rectangular window containing the low-rank texture on the input image; denote the resulting image by I.
S3. For the image I, repeat the following iterative steps until the objective function f = ||I_0||_* + λ||E||_1 converges globally:
Normalize the image I and assign the normalized value back to I;
Compute the Jacobian of I with respect to the Euler angles and the normalized displacement;
Solve the problem stated in formula (13) with the augmented Lagrange multiplier method, obtaining a local optimum τ' of τ:
$$\min_{I_0, E, \Delta\tau}\; \|I_0\|_* + \lambda\|E\|_1 \quad \text{s.t.}\quad I(u_{ij}(\tau), v_{ij}(\tau)) + \nabla I_\tau(u_{ij}(\tau), v_{ij}(\tau))\,\Delta\tau = I_0 + E \qquad (13)$$
where I(u_ij(τ), v_ij(τ)) denotes the image I expressed through the pixel coordinates (u, v), which are functions of τ; ||I(u_ij(τ), v_ij(τ))||_F denotes its F-norm; and ∇I_τ(u_ij(τ), v_ij(τ)) is the Jacobian of I with respect to τ.
Apply the ∘ operation: act the projective transformation corresponding to the local optimum τ' on each point of the image I, turning I into a new image; assign the new image back to I.
S4. Output the global optimum τ*; its first three values are the Euler angles representing the camera attitude.
By iterating in this way, the optimal solution of τ is obtained, and the attitude parameters θ_x, θ_y, θ_z can be read directly from τ. Note that the normalized displacement obtained by the iteration does not represent the actual displacement of the camera; the camera displacement is solved in Step 3.
Step 3: solve the camera position in space using the imaging principle and the geometric properties of the low-rank texture
The camera's position in space is solved in two steps. First the coordinates of the world origin O_w in the camera coordinate system are found; then the coordinates of the optical center O_c in the world coordinate system are obtained by a coordinate transformation. The coordinates of O_c in the world coordinate system are the camera's position in space. The concrete steps are as follows:
Denote the coordinates of the optical center O_c in the world coordinate system by (p_x^w, p_y^w, p_z^w), and the coordinates of the world origin O_w in the camera coordinate system by (t_x^c, t_y^c, t_z^c). Choose two feature points P1, P2 in the plane of the low-rank texture, as in Figure 4. The two points are chosen so that their images are convenient to extract; for a square low-rank texture its vertices can be chosen. The length between the two points is measured with a ruler and converted into their coordinates in the world coordinate system, denoted (x_i^w, y_i^w, z_i^w), i = 1, 2. Let P1, P2 have coordinates in the camera coordinate system, and extract from the image the coordinates (u_pi, v_pi) of the two feature points in the image coordinate system. Combining with formula (2), (u_pi, v_pi) satisfies the relation of formula (14):
$$s\begin{bmatrix}u_{pi}\\v_{pi}\\1\end{bmatrix}=\begin{bmatrix}f_d&0&u_0\\0&f_d&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}r_{11}&r_{12}&r_{13}&t_x^c\\r_{21}&r_{22}&r_{23}&t_y^c\\r_{31}&r_{32}&r_{33}&t_z^c\end{bmatrix}\begin{bmatrix}x_i^w\\y_i^w\\z_i^w\\1\end{bmatrix},\quad i=1,2 \qquad (14)$$

From (14) one can solve:

$$t_x^c=(u_{p1}-u_0)(t_z^c+C_1)/f_d - A_1,$$
$$t_y^c=(v_{p1}-v_0)(t_z^c+C_1)/f_d - B_1,$$
$$t_z^c=\frac{f_d(A_1-A_2)-u_{p1}C_1+u_{p2}C_2+u_0(C_1-C_2)}{u_{p1}-u_{p2}} \qquad (15)$$

where A_i = r_{11}x_i^w + r_{12}y_i^w + r_{13}z_i^w, B_i = r_{21}x_i^w + r_{22}y_i^w + r_{23}z_i^w, C_i = r_{31}x_i^w + r_{32}y_i^w + r_{33}z_i^w, i = 1, 2. Since the camera coordinates (x_c, y_c, z_c) and the world coordinates (x_w, y_w, z_w) are related by a rotation and a translation, the following relation holds:
$$\begin{bmatrix}x_c\\y_c\\z_c\end{bmatrix}=R_w^c\begin{bmatrix}x_w\\y_w\\z_w\end{bmatrix}+T_w^c=\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\end{bmatrix}+\begin{bmatrix}t_x^c\\t_y^c\\t_z^c\end{bmatrix} \qquad (16)$$

Substituting the coordinates (0, 0, 0) of the optical center in the camera coordinate system into formula (16) yields the coordinates of the optical center O_c in the world coordinate system:

$$\begin{bmatrix}p_x^w\\p_y^w\\p_z^w\end{bmatrix}=-(R_w^c)^{-1}T_w^c=-\begin{bmatrix}R_{11}&R_{12}&R_{13}\\R_{21}&R_{22}&R_{23}\\R_{31}&R_{32}&R_{33}\end{bmatrix}\begin{bmatrix}t_x^c\\t_y^c\\t_z^c\end{bmatrix} \qquad (17)$$

where R_ij (i, j = 1, 2, 3) are the entries of (R_w^c)^{-1}. The coordinates of the optical center O_c in the world coordinate system are thus obtained, i.e., the camera's position in space.
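Formula (17) is the usual camera-center computation: setting the camera-frame coordinates to (0, 0, 0) in (16) and solving gives the optical center as -(R_w^c)^-1 T_w^c. A quick numeric sketch with an illustrative pose:

```python
import numpy as np

def camera_center(R, T):
    """Solve 0 = R @ C + T for the optical centre, per formula (17)."""
    return -np.linalg.inv(R) @ T  # for a rotation matrix, inv(R) == R.T

# Illustrative pose: 90-degree rotation about Z, translation (1, 2, 3).
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([1.0, 2.0, 3.0])
C = camera_center(R, T)
print(C)  # [-2.  1. -3.]
# Sanity check: the centre must map to the camera-frame origin.
print(np.allclose(R @ C + T, 0.0))  # True
```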
The effect and benefit of the invention are that it exploits the low-rank texture features present in indoor scenes, combined with their geometric relations, to determine the camera's position and attitude in space. The method reaches high precision, the camera may rotate during the measurement, and the measurement covers all six degrees of freedom of camera attitude and spatial position, giving a broader and more flexible range of application.
Brief description of the drawings
Fig. 1 shows low-rank textures captured from real scenes: Fig. 1(a) is a striped low-rank texture, Fig. 1(b) a checkerboard low-rank texture, and Fig. 1(c) a square low-rank texture.
Fig. 2 is a schematic diagram of the coordinate systems required for the solution. Here 1 is the world coordinate system O_wX_wY_wZ_w, used to describe the positions of points in the real world and established as needed. 2 is the image coordinate system OUV, with the CCD center as origin O and the horizontal axis OU and vertical axis OV parallel to the two perpendicular edges of the CCD. 3 is the camera coordinate system, with the camera optical center O_c as origin, horizontal axis O_cX_c and vertical axis O_cY_c parallel to the OU and OV axes of the image coordinate system, and axis O_cZ_c perpendicular to the X_cO_cY_c plane. P is a point on a low-rank texture in the scene, and p is its image in the image plane.
Fig. 3 is a schematic diagram of frontal and tilted shooting. Here 4 is the world coordinate system established as needed, 5 is the square texture target being shot, 6 is the camera coordinate system for the frontal shot, and 7 the camera coordinate system for the tilted shot.
Fig. 4 is a schematic diagram of solving the camera's 3D spatial coordinates using the geometric relations of a low-rank texture in the scene. Labels 1, 2, 3 have the same meaning as above. P1, P2 are two feature points of the low-rank texture, and p1, p2 are their images in the image plane.
Fig. 5 is a schematic diagram of the experimental scene in the embodiment. Here 8 is the world coordinate system selected for the experiment, 9 the camera coordinate system selected for the experiment, and 10 the starting point of the measurement. During the experiment the camera shoots from the point indicated by 10, moving along the arrow direction and shooting at the positions marked *.
Embodiment
The specific embodiment of the invention is described in detail below in conjunction with the technical scheme and the accompanying drawings. The procedure for solving the camera attitude from low-rank texture images can be summarized as follows:
Step 1: select a low-rank texture usable for measurement. The selected texture should be symmetric or repetitive, and should generally contain features such as points or straight lines so that it can be identified conveniently in the image. In an indoor scene, ceiling grid textures, wall grid textures, and floor grid textures are all suitable choices.
After selecting the low-rank texture, place the camera in the indoor scene. The position and angle of the camera need not be fixed, but it must be able to photograph the low-rank texture in the scene. Adjust the camera focus until the texture is sharp. Calibrate the camera at this focal length with a calibration method (such as Zhang's calibration method) to obtain the focal length and the principal point. After calibration the focal length must not be adjusted again. Use the calibrated camera to shoot the low-rank texture with symmetric, repetitive features in the indoor scene.
Step 2: process the captured image and solve for the shooting attitude of the camera. Choose a rectangular region containing the low-rank texture in the captured image as input, denoted I. Select the initial value of the parameter set τ as τ_0 = (0, 0, 0, 0, 0, 1) and a weight λ > 0. To obtain the parameter set τ representing the camera attitude, apply the following steps to the input:
1. Accept the input: the image I, the initial value τ_0, and the weight λ.
2. While the objective function f = ||I_0||_* + λ||E||_1 has not converged globally, repeat the following loop:
S1: Normalize the image I, assign the value back to I, and compute the Jacobian ∇I_τ(u_ij(τ), v_ij(τ)) of I with respect to τ:

$$I(u_{ij}(\tau), v_{ij}(\tau)) \leftarrow \frac{I(u_{ij}(\tau), v_{ij}(\tau))}{\|I(u_{ij}(\tau), v_{ij}(\tau))\|_F}$$

$$\nabla_\tau I(u_{ij}(\tau), v_{ij}(\tau)) \leftarrow \frac{\partial}{\partial\tau}\left(\frac{I(u_{ij}(\tau), v_{ij}(\tau))}{\|I(u_{ij}(\tau), v_{ij}(\tau))\|_F}\right)$$

where I(u_ij(τ), v_ij(τ)) is the image I expressed through its pixels and ||I(u_ij(τ), v_ij(τ))||_F is its F-norm.
S2: Solve the following problem, obtaining the optimal solutions I_0*, E*, Δτ*:

$$(I_0^*, E^*, \Delta\tau^*) \leftarrow \min_{I_0, E, \Delta\tau}\; \|I_0\|_* + \lambda\|E\|_1 \quad \text{s.t.}\quad I(u_{ij}(\tau), v_{ij}(\tau)) + \nabla I_\tau(u_{ij}(\tau), v_{ij}(\tau))\,\Delta\tau = I_0 + E$$

S3: Update the parameter τ: τ ← τ + Δτ*.
3. Output: the final optimal solution τ* of the optimization problem; take the first three values of τ* as the solved Euler angles.
The problem in step S2 above is solved with the augmented Lagrange multiplier (ALM) method, whose algorithmic steps are summarized as follows:
1. Accept the input: the image I∘τ obtained from I by the ∘ operation, the Jacobian ∇I of I with respect to τ, and the weight λ > 0; here τ is the local optimum of the current iteration round.
2. Substitute the initial values: Y_0 = 0, E_0 = 0, Δτ_0 = 0, μ_0 > 0, ρ > 1, k = 0, where Y_0 is the Lagrange multiplier, E_0 the initial value of E, Δτ_0 the initial update step of τ, μ_0 the initial value of the penalty coefficient μ in the augmented Lagrangian function, ρ the update factor of μ, and k the iteration count.
3. Iterate the following updates in turn until ||I∘τ - I_0 - E + ∇I Δτ||_F converges:
$$I_{k+1}^0 \leftarrow U_k\, S_{\mu_k^{-1}}[\Sigma_k]\, V_k^T$$
$$\mu_{k+1} = \rho\,\mu_k$$
In the above, SVD(·) denotes singular value decomposition of the bracketed expression; Σ_k is the matrix of singular values at the kth iteration, and S_{μ_k^{-1}}[·] is the soft-thresholding (shrinkage) operator with threshold μ_k^{-1}; Y_k, Y_{k+1} are the kth and (k+1)th iterates of the Lagrange multiplier; E_k, E_{k+1} the iterates of the random noise; Δτ_k, Δτ_{k+1} the iterates of the step of τ; and μ_k, μ_{k+1} the iterates of the penalty coefficient in the augmented Lagrangian function.
4. Output: the local optimum I_0 of the low-rank image, the local optimum E of the random noise, and the local optimum Δτ of the step.
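The inner update I0_{k+1} ← U_k S_{μ_k^-1}[Σ_k] V_k^T is singular-value soft-thresholding, the standard ALM step for a nuclear-norm term. A minimal sketch of that operator in isolation (threshold and test matrix are illustrative, not from the patent):

```python
import numpy as np

def sv_threshold(M, tau):
    """Soft-threshold the singular values of M by tau (the S_tau[.] operator)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink each singular value, clip at zero
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding suppresses the noise
# directions and returns a (numerically) low-rank estimate.
rng = np.random.default_rng(0)
low_rank = np.outer(np.ones(6), np.arange(6.0))
noisy = low_rank + 0.01 * rng.standard_normal((6, 6))
recovered = sv_threshold(noisy, tau=0.5)
print(np.linalg.matrix_rank(recovered, tol=1e-6))  # 1
```

The noise singular values sit far below the threshold, so only the dominant rank-1 component survives, which is the mechanism by which the ALM iteration drives I_0 toward a low-rank texture.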
A supplementary note: if the deflection angle of the camera relative to the world coordinate system is small when shooting the low-rank texture, the area of the low-rank texture in the image can be assumed constant before and after projection, and the condition ∇S(τ) = 0 can be added to the optimization problem of Step 2, min_{I_0,E,Δτ} ||I_0||_* + λ||E||_1 s.t. I(u_ij(τ), v_ij(τ)) + ∇I_τ(u_ij(τ), v_ij(τ))Δτ = I_0 + E, to make the solution more accurate. Through the above steps, the Euler angle parameters representing the camera attitude are solved.
Step 3: extract features from the captured image, relate the geometric features of the texture in the scene to their counterparts in the image coordinate system through the imaging relations, and find the coordinates of the camera optical center in the world coordinate system, i.e., the camera's position in space. The concrete steps for calculating the optical center coordinates are as follows:
1. Choose two feature points for measurement on the real low-rank texture. The points are selected so that their images are convenient to extract; for a square texture the vertices are generally chosen. Measure the actual length between the feature points on the texture with a ruler, and derive their coordinates in the world coordinate system from it. At least one pair of feature points is chosen; several pairs may also be taken. When multiple pairs are chosen, the camera position is solved from each pair separately, and the results of all pairs are integrated to give the final camera position.
2. Using image processing, extract from the image the pixel positions corresponding to the selected texture feature points, obtaining their pixel coordinates. The image processing method can be chosen according to the texture: in general, corner points are extracted with the Harris corner detector, straight lines with the Hough transform, and edges with operators such as Canny. For a square texture whose vertices serve as feature points, the Hough transform can extract the lines, and the intersections of the lines give the pixel coordinates of the feature points.
3. Substitute the Euler angle parameters of the τ solved in Step 2 into formula (3) to compute the rotation matrix; then substitute the world coordinates of the selected feature points and their pixel coordinates into formulas (15) and (17) to compute the camera's position in space.
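For square textures, extracting a vertex reduces to intersecting two Hough-extracted lines. With lines in the normal form ρ = u·cosθ + v·sinθ, the corner pixel solves a 2x2 linear system; a sketch (the line parameters are illustrative, not from the patent):

```python
import numpy as np

def line_intersection(rho1, theta1, rho2, theta2):
    """Intersect two lines given in normal form rho = u*cos(theta) + v*sin(theta)."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    return np.linalg.solve(A, b)  # raises LinAlgError for parallel lines

# A vertical line u = 320 and a horizontal line v = 240 should
# intersect at (approximately) the corner pixel (320, 240).
u, v = line_intersection(320.0, 0.0, 240.0, np.pi / 2)
print(u, v)
```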
Embodiment:
An embodiment using the square texture on a ceiling for actual measurement is given below. The experiment uses an MCV1000SAM-HD camera with a 1/2-inch sensor, a pixel size of 5.2 x 5.2 μm², an image resolution of 1280 x 1024, a gigabit Ethernet interface, and a 4-10 mm zoom lens. In the experiment the ceiling texture is a square of side length 600 mm, the ceiling is 3330 mm above the ground, and the camera is mounted at an unfixed angle on a tripod 425 mm above the ground. Before the experiment the camera intrinsics are calibrated, giving the focal length and the image coordinates of the principal point. The experimental scene is shown schematically in Fig. 5; the camera moves through the positions labelled 1 to 24 in Fig. 5, shooting the ceiling. The vertices of the ceiling grid are chosen as feature points; edge extraction, morphological opening and closing, and the Harris corner detector are applied in turn to extract the ceiling texture feature points from each captured image, giving their pixel coordinates, and the camera optical center position corresponding to each picture is computed from these together with their world coordinates. The measured Euler angles of the camera at each shooting point, the measured coordinates (x_i, y_i, z_i) of each shooting point in the world coordinate system, and the standard reference values of each shooting point are shown in Table 1; the data in the table are means of repeated measurements. Here i is the index of the measurement point; d_{i,i-1} is the measured distance between point i and point i-1, computed by formula (18); d0_{i,i-1} is the actual distance between point i and point i-1, obtained by ruler measurement; and the absolute error is the absolute value of the difference between d_{i,i-1} and d0_{i,i-1}.
$$ d_{i,i-1} = \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 + (z_i - z_{i-1})^2} \tag{18} $$
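The distance check of formula (18) can be reproduced directly from the measured shooting-point coordinates. A minimal sketch; the two positions below are made-up illustrative values, not data from Table 1:

```python
import math

def pairwise_distance(p_i, p_prev):
    # Euclidean distance d_{i,i-1} of formula (18) between
    # consecutive shooting-point coordinates (x, y, z).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_i, p_prev)))

# Hypothetical photocentre positions in mm (not from Table 1).
p1 = (0.0, 0.0, 425.0)
p2 = (300.0, 400.0, 425.0)
d = pairwise_distance(p2, p1)  # 3-4-5 triangle scaled by 100 -> 500.0
```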
Table 1: Experimental results of measuring camera position and attitude using the ceiling texture

Claims (2)

1. A camera position and attitude measurement method based on low-rank textures, characterized in that the steps are as follows:
Step 1: acquire an image containing a low-rank texture
Photograph the low-rank texture in the scene with a camera whose focal length and principal point position have been calibrated, obtaining an image that contains the low-rank texture;
Step 2: solve for the camera's shooting attitude from the captured low-rank texture image
Establish the image coordinate system at the CCD centre, the world coordinate system on the low-rank texture plane, and the camera coordinate system at the camera's optical centre, and derive from these coordinate systems the relation among the coordinates of any point on the low-rank texture in the three coordinate systems, as in formula (2):
$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = N \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = N \begin{bmatrix} R_w^c & T_w^c \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \quad N = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad R_w^c = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad T_w^c = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix} \tag{2} $$
In formula (2), (u, v) is the pixel coordinate of any point on the low-rank texture, (x_c, y_c, z_c) is that point's coordinate in the camera coordinate system, and (x_w, y_w, z_w) is its coordinate in the world coordinate system; s is a non-zero scale factor; N is the intrinsic matrix of the camera, and f is the camera's focal length; dx and dy denote the pixel dimensions in the U and V directions respectively; (u_0, v_0) is the coordinate of the camera's optical centre in the image coordinate system, i.e. the principal point coordinate; R_w^c and T_w^c denote, respectively, the rotation matrix and translation matrix of the camera relative to the world coordinate system;
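The mapping of formula (2) can be sketched numerically: apply the extrinsic transform, multiply by the intrinsic matrix N, and divide out the scale factor s. The intrinsic values below (focal length in pixels, principal point) are illustrative only, not the calibration of the embodiment's camera:

```python
import numpy as np

def project_point(p_w, N, R, T):
    # Formula (2): s*[u, v, 1]^T = N (R p_w + T).
    p_c = R @ p_w + T        # world -> camera coordinates
    uvw = N @ p_c            # homogeneous image coordinates
    return uvw[:2] / uvw[2]  # divide by the scale factor s

# Illustrative intrinsics: f/dx = f/dy = 800 px, principal point (640, 512);
# camera axes aligned with world axes, texture plane 1000 units away.
N = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 512.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 1000.0])
uv = project_point(np.array([100.0, 0.0, 0.0]), N, R, T)
# u = 800*100/1000 + 640 = 720, v = 512
```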
The projection matrix at the moment the camera photographs the low-rank texture is expressed in terms of Euler angles, and the low-rank texture imaging model of formula (2) is rewritten as an Euler-angle-based imaging model, which converts the problem of solving the camera attitude into an optimization problem:
$$ \min_{I_0, E, \tau} \|I_0\|_* + \lambda \|E\|_0 \quad \text{s.t.} \quad I \circ \tau = I_0 + E $$
where I_0 denotes the original low-rank texture image; I denotes the actually captured image of the low-rank texture; τ denotes the parameter set on which the projective transformation depends, comprising 6 parameters: the three Euler angles θ_x, θ_y, θ_z through which the axes of the camera coordinate system rotate about the axes of the world coordinate system, and the normalized displacements t'_1, t'_2, t'_3 characterizing the camera's optical centre relative to the world coordinate system origin; the symbol ∘ denotes the operation that applies the projective transformation to I; E denotes random noise; \|E\|_0 denotes the number of non-zero entries in E; and λ is the weight on E;
Solve this optimization problem with the augmented Lagrange multiplier method to obtain the Euler angle parameters representing the camera attitude. The concrete steps for solving the Euler angles are as follows:
S1. Take the captured low-rank texture image as the input image. Set the initial value of τ to τ_0 = (0, 0, 0, 0, 0, 1), where the first three parameters are the initial Euler angles and the last three are the initial normalized displacements. Choose a convergence precision ε > 0 and a weight λ > 0;
S2. Take a rectangular window containing the low-rank texture on the input image, and denote the resulting image by I;
S3. For the image I, repeat the following iterative steps until the objective function f = \|I_0\|_* + \lambda\|E\|_1 converges globally:
Normalize the image I and assign the normalized values back to I;
Compute the Jacobian matrix of I with respect to the Euler angles and the normalized displacements;
Solve the problem stated in formula (13) with the augmented Lagrange multiplier method to obtain a local optimum τ' of τ:
$$ \min_{I_0, E, \Delta\tau} \|I_0\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad I\big(u_{ij}(\tau), v_{ij}(\tau)\big) + \nabla I_\tau\big(u_{ij}(\tau), v_{ij}(\tau)\big)\,\Delta\tau = I_0 + E \tag{13} $$
where \|I_0\|_* denotes the nuclear norm of I_0; \|E\|_1 denotes the l1-norm of the random noise E, the convex relaxation of the count of its non-zero entries; I(u_{ij}(τ), v_{ij}(τ)) denotes the image I expressed through its pixel coordinates (u, v), both of which are functions of τ; ∇I_τ(u_{ij}(τ), v_{ij}(τ)) is the Jacobian matrix of I with respect to τ; Δτ is the increment of τ; and i, j denote the row and column indices of I, respectively;
Apply the ∘ operation: act on each point of the image I with the projective transformation corresponding to the local optimum τ', transforming I into a new image, and assign the resulting new image back to I;
S4. Output the global optimum τ*; its first three values are the Euler angles representing the camera attitude;
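The convergence test in S3 monitors the objective f = \|I_0\|_* + λ\|E\|_1. A minimal sketch of evaluating it (nuclear norm via SVD, l1-norm as the sum of absolute values); the rank-1 "texture" and sparse noise below are made-up illustrations:

```python
import numpy as np

def tilt_objective(I0, E, lam):
    # f = ||I0||_* + lam * ||E||_1: nuclear norm of the recovered
    # low-rank texture plus the weighted l1-norm of the sparse noise.
    nuclear = np.linalg.svd(I0, compute_uv=False).sum()
    return nuclear + lam * np.abs(E).sum()

# Rank-1 "texture" (nuclear norm = 2*sqrt(14)) plus two corrupted pixels.
I0 = np.outer(np.ones(4), [1.0, 2.0, 3.0])
E = np.zeros((4, 3))
E[0, 0], E[2, 1] = 0.5, -0.5
f = tilt_objective(I0, E, lam=0.1)  # = 2*sqrt(14) + 0.1
```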
Step 3: solve for the camera's position in space using the imaging principle and the geometric relations of the low-rank texture
Choose two feature points on the low-rank texture, measure their geometric relations in space, and calculate their coordinates in the world coordinate system; at the same time, extract the pixels corresponding to the feature points from the image captured by the camera, obtaining their pixel coordinates; using the pinhole imaging principle, derive the solution formulas for the camera position:
$$ t_x^c = \frac{(u_{p1} - u_0)(t_z^c + C_1)}{f_d} - A_1, \quad t_y^c = \frac{(v_{p1} - v_0)(t_z^c + C_1)}{f_d} - B_1, \quad t_z^c = \frac{f_d(A_1 - A_2) - u_{p1} C_1 + u_{p2} C_2 + u_0 (C_1 - C_2)}{u_{p1} - u_{p2}} \tag{15} $$
$$ \begin{bmatrix} p_x^w \\ p_y^w \\ p_z^w \end{bmatrix} = -\left(R_w^c\right)^{-1} T_w^c = - \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix} \begin{bmatrix} t_x^c \\ t_y^c \\ t_z^c \end{bmatrix} \tag{17} $$
where (t_x^c, t_y^c, t_z^c) denotes the coordinates of the world coordinate system origin O_w in the camera coordinate system; (p_x^w, p_y^w, p_z^w) denotes the coordinates of the camera's optical centre O_c in the world coordinate system; (u_{pi}, v_{pi}) denotes the pixel coordinates of the feature points; (u_0, v_0) is the camera principal point, and f_d is the camera's focal length in pixel units; A_i = r_{11} x_i^w + r_{12} y_i^w + r_{13} z_i^w, B_i = r_{21} x_i^w + r_{22} y_i^w + r_{23} z_i^w, C_i = r_{31} x_i^w + r_{32} y_i^w + r_{33} z_i^w, with i = 1, 2; R_{mn} denotes an entry of the inverse of the projection matrix, and r_{mn} denotes an entry of the projection matrix, m, n = 1, 2, 3, as in formula (3);
Substitute the Euler angle values, the coordinates of the feature points in the world coordinate system, and the coordinates of the feature points' images in the image coordinate system into formulas (15) and (17) to calculate the spatial position of the camera.
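The final step of formula (17), recovering the photocentre in world coordinates from the attitude and the camera-frame translation, reduces to a single matrix operation. A sketch with illustrative values (identity rotation, world origin 1000 units along the optical axis):

```python
import numpy as np

def camera_centre_world(R_wc, T_wc):
    # Formula (17): p_w = -(R_w^c)^{-1} T_w^c.
    # For a proper rotation matrix, the inverse equals the transpose.
    return -np.linalg.inv(R_wc) @ T_wc

# Illustrative: camera axes aligned with world axes, world origin
# 1000 mm along the camera's optical axis -> camera sits at z = -1000.
p_w = camera_centre_world(np.eye(3), np.array([0.0, 0.0, 1000.0]))
# -> [0, 0, -1000]
```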
2. The camera position and attitude measurement method based on low-rank textures according to claim 1, characterized in that the ∘ operation in Step 2 transforms a point (u_1, v_1) on the low-rank texture I_0 into the corresponding point (u_2, v_2) on the image I through formula (6):
$$ u_2 = \frac{f_d\left(r_{11} u_1 + r_{12} v_1 + f_d \tfrac{t_1}{d}\right)}{r_{31} u_1 + r_{32} v_1 + f_d \tfrac{t_3}{d}} = \frac{f_d\left(r_{11} u_1 + r_{12} v_1 + f_d t'_1\right)}{r_{31} u_1 + r_{32} v_1 + f_d t'_3}, \quad v_2 = \frac{f_d\left(r_{21} u_1 + r_{22} v_1 + f_d \tfrac{t_2}{d}\right)}{r_{31} u_1 + r_{32} v_1 + f_d \tfrac{t_3}{d}} = \frac{f_d\left(r_{21} u_1 + r_{22} v_1 + f_d t'_2\right)}{r_{31} u_1 + r_{32} v_1 + f_d t'_3} \tag{6} $$
where t' = [t'_1, t'_2, t'_3] = [t_1/d, t_2/d, t_3/d] is the normalized displacement vector, t_1, t_2, t_3 form the real displacement vector of the camera, d is the translation depth of the camera, and f_d is the camera focal length; the parameters r_{mn} are entries of the projection matrix, m, n = 1, 2, 3, as in formula (3):
$$ R_w^c = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} \cos\theta_y \cos\theta_z & -\cos\theta_y \sin\theta_z & \sin\theta_y \\ \cos\theta_x \sin\theta_z + \sin\theta_x \sin\theta_y \cos\theta_z & \cos\theta_x \cos\theta_z - \sin\theta_x \sin\theta_y \sin\theta_z & -\sin\theta_x \cos\theta_y \\ \sin\theta_x \sin\theta_z - \cos\theta_x \sin\theta_y \cos\theta_z & \sin\theta_x \cos\theta_z + \cos\theta_x \sin\theta_y \sin\theta_z & \cos\theta_x \cos\theta_y \end{bmatrix} \tag{3} $$
where θ_x, θ_y, θ_z are the three Euler angles through which the axes of the camera coordinate system rotate about the axes of the world coordinate system.
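The ∘ operation of this claim can be sketched end-to-end: build R_w^c from the Euler angles via formula (3), then map a texture point through formula (6). At the initial value τ_0 = (0, 0, 0, 0, 0, 1) from step S1, the mapping should reduce to the identity, which gives a handy self-check; the point and focal length below are illustrative:

```python
import numpy as np

def rotation_from_euler(tx, ty, tz):
    # R_w^c of formula (3), from Euler angles theta_x, theta_y, theta_z (radians).
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    return np.array([
        [cy * cz,                -cy * sz,                 sy],
        [cx * sz + sx * sy * cz,  cx * cz - sx * sy * sz, -sx * cy],
        [sx * sz - cx * sy * cz,  sx * cz + cx * sy * sz,  cx * cy],
    ])

def warp_point(u1, v1, tau, fd):
    # Formula (6): texture point (u1, v1) -> image point (u2, v2).
    # tau = (theta_x, theta_y, theta_z, t1', t2', t3'); fd = focal length.
    r = rotation_from_euler(*tau[:3])
    t1, t2, t3 = tau[3:]
    denom = r[2, 0] * u1 + r[2, 1] * v1 + fd * t3
    u2 = fd * (r[0, 0] * u1 + r[0, 1] * v1 + fd * t1) / denom
    v2 = fd * (r[1, 0] * u1 + r[1, 1] * v1 + fd * t2) / denom
    return u2, v2

# Self-check: at tau_0 = (0, 0, 0, 0, 0, 1) the warp is the identity,
# so (100, 50) maps back to (100, 50).
u2, v2 = warp_point(100.0, 50.0, (0.0, 0.0, 0.0, 0.0, 0.0, 1.0), fd=800.0)
```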
CN201410777911.5A 2014-12-15 2014-12-15 Camera position and posture measuring method on basis of low-rank textures Active CN104504691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410777911.5A CN104504691B (en) 2014-12-15 2014-12-15 Camera position and posture measuring method on basis of low-rank textures

Publications (2)

Publication Number Publication Date
CN104504691A true CN104504691A (en) 2015-04-08
CN104504691B CN104504691B (en) 2017-05-24

Family

ID=52946085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410777911.5A Active CN104504691B (en) 2014-12-15 2014-12-15 Camera position and posture measuring method on basis of low-rank textures

Country Status (1)

Country Link
CN (1) CN104504691B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006082928A1 (en) * 2005-02-04 2006-08-10 Canon Kabushiki Kaisha Position posture measuring method and device
CN102122172A (en) * 2010-12-31 2011-07-13 中国科学院计算技术研究所 Image pickup system and control method thereof for machine motion control
CN103268612A (en) * 2013-05-27 2013-08-28 浙江大学 Single image fisheye camera calibration method based on low rank characteristic recovery

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NIANJUAN JIANG et al.: "Symmetric Architecture Modeling with a Single Image", ACM Transactions on Graphics *
MENG Xiaoqiao et al.: "A New Camera Self-Calibration Method Based on Circular Points", Journal of Software *
LEI Cheng et al.: "A New Camera Self-Calibration Method Based on an Active Vision System", Chinese Journal of Computers *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108475433A (en) * 2015-11-20 2018-08-31 奇跃公司 Method and system for determining RGBD camera postures on a large scale
CN108475433B (en) * 2015-11-20 2021-12-14 奇跃公司 Method and system for large scale determination of RGBD camera poses
US11838606B2 (en) 2015-11-20 2023-12-05 Magic Leap, Inc. Methods and systems for large-scale determination of RGBD camera poses
CN109936712A (en) * 2017-12-19 2019-06-25 陕西外号信息技术有限公司 Localization method and system based on optical label
CN109936712B (en) * 2017-12-19 2020-12-11 陕西外号信息技术有限公司 Positioning method and system based on optical label
CN108827300A (en) * 2018-04-17 2018-11-16 四川九洲电器集团有限责任公司 A kind of the equipment posture position measurement method and system of view-based access control model
WO2022021132A1 (en) * 2020-07-29 2022-02-03 上海高仙自动化科技发展有限公司 Computer device positioning method and apparatus, computer device, and storage medium
CN113221253A (en) * 2021-06-01 2021-08-06 山东贝特建筑项目管理咨询有限公司 Unmanned aerial vehicle control method and system for anchor bolt image detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant