CN110120101B - Cylinder augmented reality method, system and device based on three-dimensional vision - Google Patents


Info

Publication number
CN110120101B
CN110120101B
Authority
CN
China
Prior art keywords
image
cylinder
dimensional
camera
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910360629.XA
Other languages
Chinese (zh)
Other versions
CN110120101A (en)
Inventor
唐付林 (Tang Fulin)
吴毅红 (Wu Yihong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201910360629.XA priority Critical patent/CN110120101B/en
Publication of CN110120101A publication Critical patent/CN110120101A/en
Application granted granted Critical
Publication of CN110120101B publication Critical patent/CN110120101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of computer vision, and particularly relates to a cylinder augmented reality method, system and device based on three-dimensional vision, aiming to solve the problem that augmented reality is difficult to achieve on cylinders in the prior art. The method comprises the following steps: for each image in an acquired multi-view video image set of a cylinder, fitting the contour lines of the cylinder in the image by the Hough transform method and establishing a world coordinate system; calculating the camera pose corresponding to each image based on projective invariance and the imaging principle; reconstructing a three-dimensional model of the cylinder; acquiring video images based on the reconstructed three-dimensional model and performing inter-frame camera pose tracking to obtain the camera pose corresponding to each frame of image; and superimposing a virtual image onto the cylinder video image using the obtained camera poses to realize cylinder augmented reality. The method achieves high precision and high speed in both off-line reconstruction and on-line tracking, and the superimposed virtual object is stable, thereby achieving the purpose of augmented reality on cylinders.

Description

Cylinder augmented reality method, system and device based on three-dimensional vision
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a three-dimensional vision-based cylinder augmented reality method, system and device.
Background
In recent years, augmented reality has changed the traditional concept of interactive video applications, and it has great application prospects in fields such as medicine, the military, education and entertainment. Augmented reality has received a great deal of attention in both academia and industry. Initially, planar square markers were used for augmented reality, such as ARToolKit, ARTag, AprilTag, and the like. ARToolKit was the earliest and most popular; ARTag and AprilTag both improved on the ideas of ARToolKit. Later, planar circular markers became popular, such as Mono-spectra, CCTag, RuneTag, and the like. Whether the marker is a planar square or a planar circle, the virtual object is projected onto the real object according to the calculated camera pose to realize augmented reality. All of these planar markers share a common disadvantage: they must be printed and then placed in the scene to achieve augmented reality, which is inconvenient. More recently, markerless augmented reality technologies have appeared, but they are limited to augmenting planes; augmented reality on cylindrical curved surfaces remains difficult, and related work at home and abroad is scarce.
Disclosure of Invention
In order to solve the above problem in the prior art, namely the difficulty of performing augmented reality on a cylinder, the present invention provides a cylinder augmented reality method based on three-dimensional vision, comprising:
step S10, acquiring a cylindrical multi-view video image set as an input image set;
step S20, for each image in the input image set, fitting the contour line of the cylinder in the image by adopting a Hough transform method, and establishing a world coordinate system of the cylinder in the image;
step S30, calculating the camera pose corresponding to each image based on the contour line of the cylinder obtained by fitting each image in the input image set and the corresponding world coordinate system and based on projective invariance and the imaging principle;
step S40, extracting the feature point of each image in the input image set, reconstructing the space point corresponding to the feature point based on the camera posture corresponding to the image, and obtaining a reconstructed cylinder three-dimensional model;
step S50, acquiring a corresponding cylinder video image and initializing the image based on the reconstructed cylinder three-dimensional model to acquire an initial camera attitude and a three-dimensional-two-dimensional corresponding relation;
step S60, based on the initial camera pose and the three-dimensional-two-dimensional corresponding relation, carrying out interframe camera pose tracking calculation to obtain a camera pose corresponding to each frame of image;
and step S70, superimposing the input virtual image on the cylindrical video image by adopting the camera posture corresponding to each frame of image so as to realize the cylindrical augmented reality.
In some preferred embodiments, in step S20, "for each image in the input image set, fitting the contour lines of the cylinder in the image by the Hough transform method, and establishing the world coordinate system of the cylinder in the image" is performed by:
Step S201, for each image in the input image set, fitting the two edge straight lines l1 and l2 of the cylinder by the Hough transform method, and simultaneously fitting the two quadratic curves c1 and c2 of the cylinder;
Step S202, taking the space point corresponding to the center point o2 of the curve c2 as the origin of the world coordinate system, the straight line from the center point o2 of the curve c2 to a point on the curve c2 as the X-axis of the world coordinate system, the straight line from the center point o2 of the curve c2 to the center point o1 of the curve c1 as the Z-axis of the world coordinate system, and the space plane corresponding to the quadratic curve c2 as the X-Y plane of the world coordinate system, thereby completing the establishment of the world coordinate system.
In some preferred embodiments, in step S30, "calculating the camera pose corresponding to each image, based on the contour lines of the cylinder fitted in each image of the input image set and the corresponding world coordinate system, and based on projective invariance and the imaging principle" is performed by:
calculating, for each image, the rotation transformation matrix R and the translation vector t from the world coordinate system to the camera coordinate system, based on the two fitted straight lines, the two fitted curves and the world coordinate system; the rotation transformation matrix R and the translation vector t are the camera pose corresponding to the image.
In some preferred embodiments, after "extracting the feature point of each image in the input image set and reconstructing the spatial point corresponding to the feature point based on the camera pose corresponding to the image" in step S40, there is further provided a step of spatial point optimization, as follows:
step B10, according to the three-dimensional-two-dimensional correspondence, optimizing the pose of each frame of image and all the space points observed by the image by minimizing the reprojection error;
and step B20, based on the optimized pose of each frame of image and all the space points observed by the image, optimizing the space points of all images and the camera poses of all images by global bundle adjustment.
In some preferred embodiments, in step S50, "acquiring and initializing a corresponding cylinder video image based on the reconstructed cylinder three-dimensional model", the method includes:
step S501, processing the acquired cylinder video images using Linear P3P RANSAC, and continuously acquiring the camera poses corresponding to a preset number of frames;
step S502, judging whether the closeness of the camera poses corresponding to the preset number of frames exceeds a set threshold; if so, the initialization is completed; otherwise, step S501 is executed.
In some preferred embodiments, in step S60, "performing inter-frame camera pose tracking calculation based on the initial camera pose and the three-dimensional-two-dimensional correspondence, to obtain the camera pose corresponding to each frame image", the method includes:
step S601, detecting corner points in the region of interest of the current frame image and extracting binary descriptors;
step S602, matching the binary descriptors of the current frame image with the binary descriptors of the previous frame image to obtain the two-dimensional-two-dimensional relationship between the current frame image and the previous frame image;
step S603, acquiring the three-dimensional-two-dimensional relationship of the current frame image based on the three-dimensional-two-dimensional relationship of the previous frame image and the two-dimensional-two-dimensional relationship between the current frame image and the previous frame image;
and step S604, calculating the camera pose corresponding to the current frame image by the EPnP method, based on the three-dimensional-two-dimensional relationship of the current frame image.
In some preferred embodiments, before "superimposing the input virtual image onto the cylindrical video image using the camera pose corresponding to each frame image and performing the cylindrical augmented reality" in step S70, a step of eliminating instability of the camera pose is further provided, where the method includes:
and smoothing the camera attitude by adopting an extended Kalman filter to eliminate instability of the camera attitude.
On the other hand, the invention provides a cylinder augmented reality system based on three-dimensional vision, comprising an input module, a world coordinate system establishing module, a camera pose calculation module, a cylinder three-dimensional reconstruction module, a cylinder video image initialization module, an inter-frame camera pose tracking module, an augmented reality module and an output module;
the input module is configured to acquire and input a cylindrical video image set with multiple view angles;
the world coordinate system establishing module is configured to fit a cylinder contour line to each image in the multi-view video image set and establish a world coordinate system;
the camera attitude calculation module is configured to calculate a camera attitude corresponding to each image based on the fitted cylinder contour line and the world coordinate system by utilizing the projective invariance and the imaging principle;
the cylinder three-dimensional reconstruction module is configured to extract a feature point of each image in the multi-view video image set, reconstruct a space point corresponding to the feature point based on a camera posture corresponding to the image, and obtain a reconstructed cylinder three-dimensional model;
the cylinder video image initialization module is configured to acquire and initialize a corresponding cylinder video image based on the reconstructed cylinder three-dimensional model to acquire an initial camera attitude and a three-dimensional-two-dimensional corresponding relation;
the inter-frame camera pose tracking module is configured to perform inter-frame camera pose tracking calculation based on the initial camera pose and the three-dimensional-two-dimensional correspondence, to obtain the camera pose corresponding to each frame of image;
The augmented reality module is configured to adopt the camera posture corresponding to each frame of image to superimpose the input virtual image on the cylindrical video image to perform cylindrical augmented reality;
and the output module is configured to output the cylinder video image after augmented reality.
In a third aspect of the present invention, a storage device is provided, in which a plurality of programs are stored, the programs being adapted to be loaded and executed by a processor to implement the above-mentioned three-dimensional vision-based cylinder augmented reality method.
In a fourth aspect of the present invention, a processing apparatus is provided, which includes a processor, a storage device; the processor is suitable for executing various programs; the storage device is suitable for storing a plurality of programs; the program is adapted to be loaded and executed by a processor to implement the three-dimensional vision based cylinder augmented reality method described above.
The invention has the beneficial effects that:
the invention provides a cylinder augmented reality technology based on three-dimensional vision, which reconstructs a three-dimensional model of the cylinder in an off-line stage and performs on-line three-dimensional tracking of the cylinder in an on-line stage. In actual use, very high precision is obtained in both off-line reconstruction and on-line tracking. In addition, the on-line tracking speed exceeds 50 FPS, and when the virtual object is superimposed using the camera pose obtained by on-line tracking, the virtual object is very stable, achieving the purpose of cylinder augmented reality.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a system flow diagram of a three-dimensional vision-based cylinder augmented reality method according to the present invention;
FIG. 2 is an exemplary diagram of cylindrical objects according to an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention;
FIG. 3 is a schematic diagram of a world coordinate system established by an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention;
FIG. 4 is a diagram of an example of a cylinder reconstruction model according to an embodiment of the cylinder augmented reality method based on three-dimensional vision;
FIG. 5 is a schematic diagram of a reprojection error of each frame in camera pose interframe tracking according to an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention;
FIG. 6 is a schematic diagram of time consumed by each frame in camera pose interframe tracking of the three-dimensional vision-based cylinder augmented reality method according to the present invention;
FIG. 7 is a schematic diagram illustrating the system operation visualization of the three-dimensional vision-based cylinder augmented reality method according to the present invention;
FIG. 8 is an exemplary diagram of an augmented reality result of a superimposed virtual earth for one embodiment of a three-dimensional vision based cylinder augmented reality method of the present invention;
FIG. 9 is an exemplary diagram of an augmented reality result replacing textures around a cylinder according to an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention discloses a three-dimensional vision-based cylinder augmented reality method, which comprises the following steps:
step S10, acquiring a cylindrical multi-view video image set as an input image set;
step S20, for each image in the input image set, fitting the contour line of the cylinder in the image by adopting a Hough transform method, and establishing a world coordinate system of the cylinder in the image;
step S30, calculating the camera pose corresponding to each image based on the contour line of the cylinder obtained by fitting each image in the input image set and the corresponding world coordinate system and based on projective invariance and the imaging principle;
step S40, extracting the feature point of each image in the input image set, reconstructing the space point corresponding to the feature point based on the camera posture corresponding to the image, and obtaining a reconstructed cylinder three-dimensional model;
step S50, acquiring a corresponding cylinder video image and initializing the image based on the reconstructed cylinder three-dimensional model to acquire an initial camera attitude and a three-dimensional-two-dimensional corresponding relation;
step S60, based on the initial camera pose and the three-dimensional-two-dimensional corresponding relation, carrying out interframe camera pose tracking calculation to obtain a camera pose corresponding to each frame of image;
and step S70, superimposing the input virtual image on the cylindrical video image by adopting the camera posture corresponding to each frame of image so as to realize the cylindrical augmented reality.
In order to more clearly describe the three-dimensional vision-based cylinder augmented reality method of the present invention, the following will describe each step in the embodiment of the method of the present invention in detail with reference to fig. 1.
The three-dimensional vision-based cylinder augmented reality method of the invention comprises steps S10 to S70, which are described in detail as follows:
in step S10, a cylindrical multi-view video image set is obtained as an input image set.
Before the off-line three-dimensional reconstruction of the cylinder, the camera intrinsic parameter matrix K is first calibrated and the images are normalized with K; then images containing the cylinder are captured from multiple viewing angles.
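As a concrete illustration of this normalization, the sketch below maps pixel coordinates to normalized image coordinates with the inverse of K. The focal length and principal point values are hypothetical; the patent only assumes that K comes from a prior calibration.

```python
import numpy as np

def normalize_points(K, pixels):
    """Map pixel coordinates to normalized image coordinates with K^-1.

    K      : 3x3 camera intrinsic parameter matrix.
    pixels : (N, 2) array of pixel coordinates.
    Returns an (N, 3) array of homogeneous normalized coordinates (u, v, 1).
    """
    K_inv = np.linalg.inv(K)
    homo = np.hstack([pixels, np.ones((len(pixels), 1))])   # to homogeneous
    norm = (K_inv @ homo.T).T
    return norm / norm[:, 2:3]                              # third coordinate -> 1

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = normalize_points(K, np.array([[320.0, 240.0], [1120.0, 240.0]]))
```

After this step, all image points and fitted lines can be treated as if the camera had identity intrinsics, which is what the pose formulas below assume.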
Fig. 2 is a diagram illustrating examples of cylindrical objects according to an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention, showing, from left to right, a can, a cola bottle, a Sprite bottle and a mineral water bottle.
And step S20, fitting the contour line of the cylinder in the image by adopting a Hough transform method for each image in the input image set, and establishing a world coordinate system of the cylinder in the image.
The basic principle of the Hough transform is to use the duality of points and lines to map a given curve in the original image space to a point in a parameter space, so that the problem of detecting the given curve in the original image is converted into the problem of finding a peak in the parameter space; that is, the detection of a global characteristic is converted into the detection of a local characteristic, such as a straight line, an ellipse, a circle or an arc.
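The voting scheme described above can be sketched as a minimal line-detection accumulator; the bin counts, coordinate ranges and test points below are illustrative choices, not taken from the patent.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Minimal Hough voting for straight-line detection.

    Each point (x, y) votes, for every discretized angle theta, for the
    bin of rho = x*cos(theta) + y*sin(theta); a peak in the accumulator
    corresponds to a line supported by many points.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    rho = ri / (n_rho - 1) * 2 * rho_max - rho_max
    return thetas[ti], rho, acc

# 50 points on the vertical line x = 25 produce a sharp accumulator peak
pts = [(25.0, float(y)) for y in range(50)]
theta, rho, acc = hough_lines(pts)
```

Library implementations additionally run edge detection first and return several peaks; ellipse fitting for c1 and c2 works the same way in a higher-dimensional parameter space.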
Step S201, for each image in the input image set, fitting the two edge straight lines l1 and l2 of the cylinder by the Hough transform method, and simultaneously fitting the two quadratic curves c1 and c2 of the cylinder.
Step S202, taking the space point corresponding to the center point o2 of the curve c2 as the origin of the world coordinate system, the straight line from the center point o2 of the curve c2 to a point on the curve c2 as the X-axis of the world coordinate system, the straight line from the center point o2 of the curve c2 to the center point o1 of the curve c1 as the Z-axis of the world coordinate system, and the space plane corresponding to the quadratic curve c2 as the X-Y plane of the world coordinate system; the world coordinate system is thereby established.
Let the center of the fitted quadratic curve c1 be o1 and the center of the fitted quadratic curve c2 be o2. The space point corresponding to o2 (or o1) is selected as the origin of the world coordinate system, with homogeneous coordinates (0 0 0 1)^T. An image point M0 is selected on the quadratic curve c2, and the ray from o2 to M0 is taken as the X-axis of the space coordinate system; the space point corresponding to M0 then has homogeneous coordinates (r 0 0 1)^T, where r is the radius of the cylinder. The ray from o2 to o1 is taken as the Z-axis of the space coordinate system; the space point corresponding to o1 has homogeneous coordinates (0 0 h 1)^T, where h is the height of the cylinder. The space plane corresponding to the quadratic curve c2 is selected as the X-Y plane of the world coordinate system. The world coordinate system is thus established.
FIG. 3 is a schematic diagram of the world coordinate system established according to an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention, where o is the origin of the world coordinate system, the line from o to a point M0 on the curve c2 is the X-axis of the world coordinate system, the line from o to the center point o1 of the curve c1 is the Z-axis of the world coordinate system, and the space plane corresponding to the quadratic curve c2 is the X-Y plane of the world coordinate system. In practical applications, when the world coordinate system is established by this method, other points can be selected as the origin, and other straight lines passing through the origin can be selected as the X, Y and Z axes, which is not described in detail herein.
And step S30, calculating the camera pose corresponding to each image based on the contour line of the cylinder obtained by fitting each image in the input image set and the corresponding world coordinate system and based on projective invariance and the imaging principle.
Common projective invariants include: the projections of collinear points remain collinear, the projections of parallel straight lines intersect at one point (the vanishing point), and the cross ratio of collinear points is unchanged under projection.
Based on the fitted cylinder contour lines (comprising two straight lines and two curves) and the world coordinate system, the rotation transformation matrix R and the translation vector t from the world coordinate system to the camera coordinate system are calculated for each image; the rotation transformation matrix R and the translation vector t are the camera pose corresponding to the image.
The rotation transformation matrix R and the translation vector t are shown in formula (1) and formula (2):
R = (r1 r2 r3)   formula (1)
t = (t1 t2 t3)^T   formula (2)
where r1 = (r11 r21 r31)^T, r2 = (r12 r22 r32)^T and r3 = (r13 r23 r33)^T are the three columns of the rotation matrix R.
In the world coordinate system, the point at infinity on the Z-axis is Vz = (0 0 1 0)^T. The intersection point of the fitted straight lines l1 and l2 is defined as vz, the projection of Vz on the two-dimensional image. By the imaging principle, vz is calculated as shown in formula (3):
vz = l1 × l2 ≈ (r1 r2 r3 t)(0 0 1 0)^T   formula (3)
so that r3 can be calculated as shown in formula (4):
r3 = vz / ||vz||   formula (4)
According to projective invariance, the center point o1 of the fitted curve c1 and the center point o2 of the fitted curve c2 are calculated as shown in formulas (5) and (6):
o1 = (u1 v1 1)^T   formula (5)
o2 = (u2 v2 1)^T   formula (6)
where u1, v1 are the abscissa and ordinate of o1 on the two-dimensional image, and u2, v2 are the abscissa and ordinate of o2 on the two-dimensional image.
According to the homogeneous coordinates (0 0 h 1)^T of the space point corresponding to o1 and the homogeneous coordinates (0 0 0 1)^T of the space point corresponding to o2, and based on the imaging principle, the scale factors s1 and s2 satisfy formula (7) and formula (8):
s1·o1 = (r1 r2 r3 t)(0 0 h 1)^T = h·r3 + t   formula (7)
s2·o2 = (r1 r2 r3 t)(0 0 0 1)^T = t   formula (8)
t can then be calculated as shown in formula (9):
t = s1·o1 - h·r3   formula (9)
In the world coordinate system, the homogeneous coordinates of the point at infinity on the X-axis are (1 0 0 0)^T. According to the imaging principle, vx is calculated as shown in formula (10):
vx = (o2 × m0) × Vz ≈ (r1 r2 r3 t)(1 0 0 0)^T   formula (10)
so that r1 can be calculated as shown in formula (11):
r1 = vx / ||vx||   formula (11)
r2 is then calculated as shown in formula (12):
r2 = r3 × r1   formula (12)
In summary, the rotation transformation matrix R and the translation vector t are obtained, which are the camera pose corresponding to the image.
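The chain from formula (3) to formula (12) can be condensed into a small routine that assembles R from the imaged points at infinity of the world Z- and X-axes. The unit normalization used for formulas (4) and (11) is reconstructed from context (the original equation images are not recoverable), and the positive-scale sign choice is an assumption of this sketch.

```python
import numpy as np

def rotation_from_vanishing_points(vz, vx):
    """Assemble R = (r1 r2 r3) from the imaged points at infinity of the
    world Z-axis (vz, formula (3)) and X-axis (vx, formula (10)).

    Since vz ~ r3 and vx ~ r1 up to scale, each column is recovered by
    unit normalization (positive scale assumed; the projective sign
    ambiguity is not handled here), and r2 = r3 x r1 closes the frame.
    """
    r3 = vz / np.linalg.norm(vz)      # formula (4), reconstructed
    r1 = vx / np.linalg.norm(vx)      # formula (11), reconstructed
    r2 = np.cross(r3, r1)             # formula (12)
    return np.column_stack([r1, r2, r3])

# Synthetic check: scale the true columns, then recover the rotation
a, b = 0.4, 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a), np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(b), -np.sin(b)],
               [0.0, np.sin(b), np.cos(b)]])
R_true = Rx @ Rz
R_est = rotation_from_vanishing_points(2.5 * R_true[:, 2], 0.7 * R_true[:, 0])
```

For a rotation matrix the identity column3 × column1 = column2 holds, which is why formula (12) completes the right-handed frame.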
Step S40, extracting the feature points of each frame of image in the input image set, reconstructing the space points corresponding to the feature points based on the camera pose corresponding to each image, and obtaining the reconstructed cylinder three-dimensional model.
A feature point mi = (ui vi 1)^T is selected on the first frame image of the input image set, and the homogeneous coordinates of its corresponding space point are Mi = (Xi Yi Zi 1)^T. According to the imaging process, Mi is reconstructed as shown in formula (13):
(Xi Yi Zi)^T = s·R^T·mi - R^T·t   formula (13)
where s is a scale factor. The space point Mi lies on the cylinder, so it satisfies
Xi^2 + Yi^2 = r^2
where r is the cylinder radius.
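Formula (13) together with the cylinder constraint determines the scale s: substituting (X, Y, Z)^T = s·R^T·mi - R^T·t into X^2 + Y^2 = r^2 gives a quadratic in s. The sketch below solves it; taking the smallest positive root (the surface nearer the camera) is an assumption of this sketch, not a statement from the patent.

```python
import numpy as np

def backproject_to_cylinder(m, R, t, radius):
    """Reconstruct the space point of formula (13),
        (X, Y, Z)^T = s * R^T m - R^T t,
    choosing the scale s so that X^2 + Y^2 = radius^2. The cylinder
    constraint makes s the root of a quadratic; taking the smallest
    positive root (surface nearer the camera) is an assumption here.
    """
    a = R.T @ m
    b = R.T @ t
    A = a[0] ** 2 + a[1] ** 2
    B = -2.0 * (a[0] * b[0] + a[1] * b[1])
    C = b[0] ** 2 + b[1] ** 2 - radius ** 2
    disc = B * B - 4 * A * C
    if disc < 0:
        return None                       # viewing ray misses the cylinder
    roots = [(-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)]
    s = min(x for x in roots if x > 0)
    return s * a - b

# Synthetic check: project a point on a radius-1 cylinder, then recover it
R = np.eye(3)
t = np.array([0.2, -0.1, 5.0])
M_true = np.array([np.cos(0.5), np.sin(0.5), 0.3])
cam = R @ M_true + t
m = cam / cam[2]                          # normalized image point (u, v, 1)
M_rec = backproject_to_cylinder(m, R, t, 1.0)
```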
In step S40, after "extracting the feature point of each image in the input image set and reconstructing the spatial point corresponding to the feature point based on the camera pose corresponding to the image", a step of spatial point optimization is further provided, as follows:
step B10, according to the three-dimensional-two-dimensional correspondence, the pose of each frame of image and all the space points observed by the image are optimized by minimizing the reprojection error, as shown in formula (14):
min Σi E(i,j)   formula (14)
where the minimization is over the pose (Rj, tj) of image j and the space points Mi observed by image j;
step B20, based on the optimized pose of each frame of image and all the space points observed by the image, the space points of all images and the camera poses of all images are optimized by global bundle adjustment, as shown in formula (15):
min Σ(j∈Kl) Σ(i∈Pl) E(i,j)   formula (15)
where Kl represents all images, Pl represents all space points, and E(i,j) represents the reprojection error of space point Mi in image j, calculated according to formula (16):
E(i,j) = ||mi - [Rj, tj]·Mi||^2   formula (16)
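The objective of formulas (14) to (16) can be written down directly; dehomogenizing the projection before differencing is assumed here rather than stated in the patent.

```python
import numpy as np

def reprojection_error(m, M, R, t):
    """E(i, j) of formula (16): squared distance between the observed
    normalized image point m and the projection of space point M under
    pose (R, t). The projection is dehomogenized before differencing."""
    proj = R @ M + t
    proj = proj / proj[2]
    return float(np.sum((m[:2] - proj[:2]) ** 2))

def total_cost(observations, poses, points):
    """Sum of E(i, j) over all observed (frame j, point i) pairs: the
    objective minimized per frame in formula (14) and globally, over all
    images and space points, in the bundle adjustment of formula (15).
    `observations` maps (j, i) to the observed image point."""
    return sum(reprojection_error(m_obs, points[i], *poses[j])
               for (j, i), m_obs in observations.items())

# Exact projections give zero cost
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
M = np.array([0.5, -0.2, 1.0])
m_obs = R @ M + t
m_obs = m_obs / m_obs[2]
cost = total_cost({(0, 0): m_obs}, [(R, t)], [M])
```

An actual bundle adjustment would hand this cost to a nonlinear least-squares solver over all poses and points; only the objective is shown here.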
Fig. 4 is a diagram showing examples of cylinder reconstruction models according to an embodiment of the three-dimensional vision-based cylinder augmented reality method of the present invention, in which the top left is the can, the top right is the cola bottle, the bottom left is the Sprite bottle, and the bottom right is the mineral water bottle.
Cylinder three-dimensional reconstruction with the method of the invention achieves high precision and low error; the average reconstruction errors are shown in Table 1:

TABLE 1

Cylinder                            Can bottle   Cola bottle   Sprite bottle   Mineral water bottle
Mean reconstruction error (pixel)   1.92×10^-5   1.66×10^-5    1.60×10^-5      1.66×10^-5
Step S50, acquiring a corresponding cylinder video image based on the reconstructed cylinder three-dimensional model, and initializing it to obtain the initial camera pose and the three-dimensional-two-dimensional correspondence.
Pose solving is commonly encountered in computer vision, and P3P (Perspective-3-Points) provides one solution; it is a 3D-2D pose solving approach that requires known matched 3D points and 2D image points.
Step S501, based on the acquired cylinder video images, the images are processed using Linear P3P RANSAC, and the camera poses corresponding to a preset number of consecutive frames are acquired.
Step S502, it is judged whether the closeness of the camera poses corresponding to the preset number of frames exceeds a set threshold; if so, the initialization is completed; otherwise, step S501 is executed.
During the initialization process of the cylinder, Linear P3P RANSAC is adopted:
firstly, using Hough transform to detect and fit straight lines, eliminating straight lines and short straight lines close to the edge of an image, merging the same straight lines, and using a set Si={liAnd i ═ 1,2 … N } represents the remaining straight lines. In addition, by calculating the angle between two straight lines, many parallel straight line pairs are found, and the straight line l is recorded in paralleliSet of straight lines of STi. In set STiIn (2), two lines with the farthest distance are selected, other lines are eliminated, and the intersection point of the two lines with the farthest distance is recorded as v, then the third column of the rotation matrix R can be calculated, as shown in equation (17):
r₃ = K⁻¹v / ‖K⁻¹v‖    formula (17)

where K is the camera intrinsic matrix.
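This vanishing-point step can be illustrated as follows (a minimal sketch assuming formula (17) has the standard form r₃ = K⁻¹v/‖K⁻¹v‖ with K the camera intrinsic matrix; the homogeneous line parameterization, sign convention and example values are illustrative, not from the patent):

```python
import numpy as np

def axis_direction_from_vanishing_point(l1, l2, K):
    """r3 from the vanishing point v of two image lines parallel to the
    cylinder axis: v = l1 x l2 (homogeneous lines), r3 = K^-1 v / ||K^-1 v||."""
    v = np.cross(l1, l2)               # intersection of the two lines
    d = np.linalg.solve(K, v)          # back-project through the intrinsics
    r3 = d / np.linalg.norm(d)
    return r3 if r3[2] >= 0 else -r3   # fix the sign ambiguity

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
# Two near-vertical silhouette lines a*x + b*y + c = 0 given as (a, b, c):
l1 = np.array([1.0, -0.02, -300.0])
l2 = np.array([1.0,  0.02, -340.0])
r3 = axis_direction_from_vanishing_point(l1, l2, K)
print(r3, np.linalg.norm(r3))  # a unit vector
```

Because the two silhouette lines are parallel to the cylinder axis (the world Z-axis), their vanishing point gives the axis direction in the camera frame, which is exactly the third column of R.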
Corner points are then extracted from the image region bounded by the two farthest lines, and the extracted corner points are described with a binary descriptor.
Then, the 3D-2D correspondences of the current frame are found by matching the descriptors of the current frame against the descriptors of the reconstructed space points, and the set of correspondences is denoted {(mⱼ, Mⱼ), j = 1, 2, …, n}. Three pairs of 3D-2D correspondences, denoted (mᵢ, Mᵢ), i = 1, 2, 3, are selected arbitrarily from this set; according to the imaging process we obtain formula (18):
sᵢmᵢ = RMᵢ + t    formula (18)
where sᵢ is a scale factor and Mᵢ = (Xᵢ, Yᵢ, Zᵢ)ᵀ, i = 1, 2, 3.
The scale factors s₂ and s₃ can be expressed in terms of the scale factor s₁, as shown in formulas (19) and (20):
s₂ = (s₁·r₃ᵀm₁ + (Z₂ − Z₁)) / (r₃ᵀm₂)    formula (19)

s₃ = (s₁·r₃ᵀm₁ + (Z₃ − Z₁)) / (r₃ᵀm₃)    formula (20)

obtained by left-multiplying the difference of two instances of formula (18) by r₃ᵀ and using r₃ᵀr₁ = r₃ᵀr₂ = 0, r₃ᵀr₃ = 1.
further, a system of equations is obtained, as shown in equation (21):
s₂m₂ − s₁m₁ − (Z₂ − Z₁)r₃ = (X₂ − X₁)r₁ + (Y₂ − Y₁)r₂
s₃m₃ − s₁m₁ − (Z₃ − Z₁)r₃ = (X₃ − X₁)r₁ + (Y₃ − Y₁)r₂    formula (21)
Substituting formulas (19) and (20) into equation system (21) yields an equation system whose scale unknown is only s₁. The system is solved using SVD decomposition to obtain s₁, which is then substituted into formulas (19) and (20) to obtain s₂ and s₃. Substituting the computed s₁, s₂ and s₃ into formula (18), r₁ and r₂ can be solved linearly, as shown in formulas (22) and (23):
r₁ = (A₁ − A₂) / B    formula (22)

r₂ = (A₄ − A₃) / B    formula (23)
where A₁ = (s₂m₂ − s₁m₁ − (Z₂ − Z₁)r₃)(Y₃ − Y₁), A₂ = (s₃m₃ − s₁m₁ − (Z₃ − Z₁)r₃)(Y₂ − Y₁), A₃ = (s₂m₂ − s₁m₁ − (Z₂ − Z₁)r₃)(X₃ − X₁), A₄ = (s₃m₃ − s₁m₁ − (Z₃ − Z₁)r₃)(X₂ − X₁), and B = (X₂ − X₁)(Y₃ − Y₁) − (X₃ − X₁)(Y₂ − Y₁).
This yields the rotation matrix R = (r₁ r₂ r₃); the translation vector t is then calculated as shown in formula (24):
t = sᵢmᵢ − RMᵢ,  i = 1, 2, 3    formula (24)
From the camera pose R and t found above, the number of inliers is computed and saved. Three different pairs of 3D-2D correspondences are then reselected, and the above process is repeated to compute and save the inlier count. After all distinct triples of 3D-2D correspondences have been processed, the camera pose of the combination with the largest number of inliers is selected. To obtain a more accurate camera pose, the three pairs of 3D-2D correspondences and their inliers are pooled and the camera pose is recomputed using EPnP.
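This exhaustive hypothesize-and-verify loop can be sketched as follows (a minimal illustration: `solve_pose` is a stand-in for the linear P3P solver of formulas (18)-(24), the pixel threshold is illustrative, and the final EPnP refinement is omitted; none of the names come from the patent):

```python
import numpy as np
from itertools import combinations

def project(K, R, t, M):
    """Project a 3D point M into the image with intrinsics K and pose (R, t)."""
    p = K @ (R @ M + t)
    return p[:2] / p[2]

def count_inliers(K, R, t, pts3d, pts2d, thresh=2.0):
    """Indices of correspondences whose reprojection error is below thresh pixels."""
    errs = [np.linalg.norm(project(K, R, t, M) - m) for M, m in zip(pts3d, pts2d)]
    return [i for i, e in enumerate(errs) if e < thresh]

def ransac_pose(K, pts3d, pts2d, solve_pose, thresh=2.0):
    """Try every triple of 3D-2D correspondences and keep the pose hypothesis
    with the most inliers."""
    best_pose, best_inliers = None, []
    for idx in combinations(range(len(pts3d)), 3):
        pose = solve_pose([pts3d[i] for i in idx], [pts2d[i] for i in idx])
        if pose is None:
            continue
        R, t = pose
        inl = count_inliers(K, R, t, pts3d, pts2d, thresh)
        if len(inl) > len(best_inliers):
            best_pose, best_inliers = (R, t), inl
    return best_pose, best_inliers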
Finally, the camera pose and the corresponding inliers are computed for each set STᵢ, and the camera pose of the set with the largest number of inliers is selected. The camera pose is then further optimized over the corresponding inliers, as shown in formula (25):
(R, t) = argmin over (R, t) of Σⱼ ‖mⱼ − π(RMⱼ + t)‖²    formula (25)

where π(·) denotes perspective projection and the sum runs over the inlier correspondences (mⱼ, Mⱼ).
The method above is called "Linear P3P RANSAC". Three consecutive frames are processed with Linear P3P RANSAC; if their camera poses are sufficiently close to each other, initialization is considered successful. Otherwise, three new consecutive frames are selected and initialization is repeated until it succeeds.
In step S60, based on the initial camera pose and the three-dimensional-two-dimensional correspondence, inter-frame camera pose tracking is carried out to obtain the camera pose corresponding to each frame of image.
Step S601, detecting a corner point in the region of interest of the current frame image and extracting a binary descriptor.
Step S602, matching the feature points of the current frame image with the feature points of the previous frame image, and obtaining the two-dimensional-two-dimensional relationship between the current frame image and the previous frame image.
Step S603, a three-dimensional-two-dimensional relationship of the current frame image is obtained based on the three-dimensional-two-dimensional relationship of the previous frame image and the two-dimensional-two-dimensional relationship of the current frame image and the previous frame image.
Step S604, calculating the camera pose corresponding to the current frame image by adopting an EPnP method based on the three-dimensional-two-dimensional relation of the current frame image.
After successful initialization, inter-frame tracking is performed using information from the previous frame. Inter-frame tracking comprises tracking against the previous frame and tracking against the model. When tracking the previous frame, corner points are first detected and descriptors extracted in the region of interest of the current frame (the region where the cylinder is located); then the descriptors of the current frame are matched with those of the previous frame to find the 2D-2D correspondence between the two frames, from which the 3D-2D correspondence of the current frame is found; finally, the camera pose of the current frame is calculated with EPnP from this 3D-2D correspondence. If tracking of the previous frame fails, the camera pose of the current frame is predicted using a motion model. When tracking the model, the feature points of the current frame that were not matched to the previous frame are reconstructed through the projection equation s·mᵢ = R(Xᵢ, Yᵢ, Zᵢ)ᵀ + t together with the cylinder surface constraint; the newly reconstructed space points are then matched against the model to obtain more 3D-2D correspondences, and the camera pose of the current frame is optimized using all 3D-2D correspondences of the current frame, as shown in formula (26):
(R, t) = argmin over (R, t) of Σⱼ ‖mⱼ − π(RMⱼ + t)‖²    formula (26)

where the sum runs over all 3D-2D correspondences (mⱼ, Mⱼ) of the current frame.
Finally, the region of interest of the next frame is predicted from the optimized camera pose.
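The previous-frame tracking of steps S601-S603 can be sketched as follows (a simplified illustration: brute-force Hamming matching of binary descriptors and propagation of 3D-2D correspondences; the function names and the distance threshold are illustrative assumptions, not from the patent):

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_descriptors(desc_prev, desc_cur, max_dist=40):
    """Nearest-neighbour matching; returns (prev_idx, cur_idx) pairs (step S602)."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = [hamming(d, c) for c in desc_cur]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches

def propagate_3d2d(matches, prev_3d2d, cur_kpts):
    """Step S603: carry the previous frame's 3D points over to the matched
    2D keypoints of the current frame."""
    corr = []
    for i, j in matches:
        if i in prev_3d2d:                       # previous keypoint has a 3D point
            corr.append((prev_3d2d[i], cur_kpts[j]))
    return corr
```

For step S604, the resulting correspondences could then be passed to an EPnP solver, e.g. OpenCV's `cv2.solvePnP(..., flags=cv2.SOLVEPNP_EPNP)`.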
Fig. 5 is a schematic diagram of the per-frame reprojection error during inter-frame camera pose tracking according to an embodiment of the invention; from left to right the panels correspond to can (can bottle), cola (cola bottle), sprite (Sprite bottle) and water (mineral water bottle). Frames denotes all image frames in a video, reprojection error (pixel) denotes the reprojection error, and the horizontal line indicates that the reprojection error of most frames in the video is below a set threshold.
Fig. 6 is a schematic diagram of the time consumed per frame during inter-frame camera pose tracking of the three-dimensional vision based cylinder augmented reality method of the invention; Frames denotes all image frames in a video, time (ms) denotes the time consumed per frame during online tracking, and the horizontal line indicates that the online tracking time of most frames is below a set threshold.
When the method of the invention is adopted for augmented reality, the frame loss rate is low, the reprojection error is small, and the online tracking frame rate (FPS, frames per second) during online camera pose tracking is high, as shown in Table 2:
Video    Number of frames   Frame loss rate   Reprojection error (pixel)   FPS (Hz)
Can      3356               0.70%             0.90                         63
Cola     3507               1.64%             1.26                         59
Sprite   3386               1.27%             0.93                         63
Water    3449               0.19%             0.95                         56
In step S70, the camera pose corresponding to each frame of image is adopted to superimpose the input virtual image on the cylinder video image, thereby realizing cylinder augmented reality.
Before "adopting the camera pose corresponding to each frame of image to superimpose the input virtual image on the cylinder video image to implement the cylinder augmented reality" in step S70, a step of eliminating instability of the camera pose is also provided, namely:
and smoothing the camera attitude by adopting an extended Kalman filter to eliminate instability of the camera attitude.
When the video is being shot, the superimposed virtual object may be somewhat unstable due to shaking of the device, so the camera pose is smoothed using an extended Kalman filter. The virtual object is then projected onto the real cylinder using the smoothed camera pose, so that the virtual object is stably superimposed on the cylinder and the goal of augmented reality is achieved.
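The smoothing idea can be illustrated with the following simplified filter (the patent prescribes an extended Kalman filter but gives no state model; this sketch uses a linear constant-position Kalman filter on the translation only, with illustrative noise parameters — the rotation would be smoothed analogously, e.g. on a quaternion):

```python
import numpy as np

class PoseSmoother:
    """Per-axis constant-position Kalman filter on the camera translation."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x = None          # state estimate (smoothed translation)
        self.P = None          # state covariance (per axis)
        self.q, self.r = q, r  # process / measurement noise variances

    def update(self, z):
        """Fuse a new measured translation z; return the smoothed translation."""
        z = np.asarray(z, float)
        if self.x is None:                    # first measurement initializes state
            self.x, self.P = z.copy(), np.ones_like(z)
            return self.x
        P_pred = self.P + self.q              # predict with identity motion model
        K = P_pred / (P_pred + self.r)        # Kalman gain
        self.x = self.x + K * (z - self.x)    # correct toward the measurement
        self.P = (1.0 - K) * P_pred
        return self.x
```

Feeding the per-frame translations through `update` damps frame-to-frame jitter before the virtual object is projected onto the cylinder.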
Fig. 7 is a schematic diagram showing the system operation visualization of the cylinder augmented reality method based on three-dimensional vision according to the present invention, the left diagram of fig. 7 is a virtual earth superimposed on an image containing a mineral water bottle, the right diagram of fig. 7 is a model of an offline three-dimensional reconstruction of a cylinder, and a pyramid in the upper left corner of the right diagram represents the spatial position of a camera.
As shown in fig. 8, which is an exemplary diagram of an augmented reality result of superimposing a virtual earth according to an embodiment of the three-dimensional vision based cylinder augmented reality method of the invention, the effects of superimposing the virtual earth on can (can bottles), cola (cola bottles), sprite (Sprite bottles) and water (mineral water bottles) are shown from left to right, respectively.
As shown in fig. 9, which is an exemplary diagram of augmented reality result of replacing the surrounding texture of a cylinder according to an embodiment of the three-dimensional vision based cylinder augmented reality method of the present invention, the effects of replacing the surrounding texture of can-can bottles, cola-cola bottles, sprite-sprite bottles, and water-mineral water bottles are represented from left to right, respectively.
The cylinder augmented reality system based on three-dimensional vision of the second embodiment of the invention comprises an input module, a world coordinate system establishing module, a camera pose calculation module, a cylinder three-dimensional reconstruction module, a cylinder video image initialization module, an inter-frame camera pose tracking module, an augmented reality module and an output module;
the input module is configured to acquire and input a cylindrical video image set with multiple view angles;
the world coordinate system establishing module is configured to fit a cylinder contour line to each image in the multi-view video image set and establish a world coordinate system;
the camera attitude calculation module is configured to calculate a camera attitude corresponding to each image based on the fitted cylinder contour line and the world coordinate system by utilizing the projective invariance and the imaging principle;
the cylinder three-dimensional reconstruction module is configured to extract a feature point of each image in the multi-view video image set, reconstruct a space point corresponding to the feature point based on a camera posture corresponding to the image, and obtain a reconstructed cylinder three-dimensional model;
the cylinder video image initialization module is configured to acquire and initialize a corresponding cylinder video image based on the reconstructed cylinder three-dimensional model to acquire an initial camera attitude and a three-dimensional-two-dimensional corresponding relation;
the interframe camera attitude tracking module is configured to track and calculate the interframe camera attitude based on the initial camera attitude and the three-dimensional-two-dimensional corresponding relation to obtain the camera attitude corresponding to each frame of image
The augmented reality module is configured to adopt the camera posture corresponding to each frame of image to superimpose the input virtual image on the cylindrical video image to perform cylindrical augmented reality;
and the output module is configured to output the cylinder video image after augmented reality.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the three-dimensional vision-based cylinder augmented reality system provided in the above embodiment is only illustrated by dividing the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A storage device according to a third embodiment of the present invention stores a plurality of programs, and the programs are suitable for being loaded and executed by a processor to implement the three-dimensional vision-based cylinder augmented reality method.
A processing apparatus according to a fourth embodiment of the present invention includes a processor and a storage device; the processor is adapted to execute various programs; the storage device is adapted to store a plurality of programs; and the programs are adapted to be loaded and executed by the processor to implement the three-dimensional vision based cylinder augmented reality method described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art would appreciate that the various illustrative modules, method steps, and modules described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules, method steps may be located in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A cylinder augmented reality method based on three-dimensional vision is characterized by comprising the following steps:
step S10, acquiring a cylindrical multi-view video image set as an input image set;
step S20, for each image in the input image set, fitting the contour line of the cylinder in the image by adopting a Hough transform method, and establishing a world coordinate system of the cylinder in the image;
step S30, calculating the camera pose corresponding to each image based on the contour line of the cylinder obtained by fitting each image in the input image set and the corresponding world coordinate system and based on projective invariance and the imaging principle;
step S40, extracting the feature point of each image in the input image set, reconstructing the space point corresponding to the feature point based on the camera posture corresponding to the image, and obtaining a reconstructed cylinder three-dimensional model;
step S50, acquiring a corresponding cylinder video image and initializing the image based on the reconstructed cylinder three-dimensional model to acquire an initial camera attitude and a three-dimensional-two-dimensional corresponding relation;
step S60, based on the initial camera pose and the three-dimensional-two-dimensional corresponding relation, carrying out interframe camera pose tracking calculation to obtain a camera pose corresponding to each frame of image;
and step S70, superimposing the input virtual image on the cylindrical video image by adopting the camera posture corresponding to each frame of image so as to realize the cylindrical augmented reality.
2. The method of claim 1, wherein in step S20, "for each image in the input image set, fitting a contour line of a cylinder in the image by using a hough transform method, and establishing a world coordinate system of the cylinder in the image" is performed by:
step S201, for each image in the input image set, fitting two edge straight lines l₁ and l₂ of the cylinder by adopting a Hough transform method, and simultaneously fitting two quadratic curves c₁ and c₂ of the cylinder;
step S202, taking the space point corresponding to the center point o₂ of the curve c₂ as the origin of the world coordinate system, the line from the center point o₂ of the curve c₂ to a point on the curve c₂ as the X-axis of the world coordinate system, and the line from the center point o₂ of the curve c₂ to the center point o₁ of the curve c₁ as the Z-axis of the world coordinate system, the space plane corresponding to the quadratic curve c₂ being the X-Y plane of the world coordinate system, thereby completing the establishment of the world coordinate system.
3. The method of claim 1, wherein in step S30, "calculating a camera pose corresponding to each image based on a contour line of the cylinder fitted to each image in the input image set, a corresponding world coordinate system, and based on projective invariance and an imaging principle" comprises:
and respectively calculating a rotation transformation matrix R and a translational vector t of each image from the world coordinate system to the camera coordinate system based on the fitted two straight lines, the fitted two curves and the world coordinate system, wherein the rotation transformation matrix R and the translational vector t are the camera postures corresponding to the images.
4. The three-dimensional vision based cylinder augmented reality method according to claim 1, wherein a step of spatial point optimization is further provided after "extracting the feature points of each image in the input image set and reconstructing the spatial points corresponding to the feature points based on the camera pose corresponding to the image" in step S40, and the method comprises:
step B10, according to the three-dimensional-two-dimensional corresponding relation, adopting the reprojection error minimization to optimize the posture of each frame of image and all the space points observed by the image;
and step B20, optimizing the spatial points of all images and the camera poses of all images by adopting global binding adjustment based on the optimized pose of each frame of image and all the spatial points observed by the image.
5. The method for augmented reality of a cylinder based on three-dimensional vision according to claim 1, wherein in step S50, "acquiring and initializing a corresponding cylinder video image based on the reconstructed cylinder three-dimensional model" includes:
step S501, processing images by using Linear P3P RANSAC based on the acquired cylindrical video images, and continuously acquiring camera postures corresponding to images with preset frame numbers;
step S502, judging whether the proximity degree of the camera posture corresponding to the preset frame number image exceeds a set threshold value, if so, finishing initialization; if the determination result is no, step S501 is executed.
6. The method of claim 1, wherein in step S60, "based on the initial camera pose and the three-dimensional-two-dimensional correspondence, performing inter-frame camera pose tracking calculation to obtain a camera pose corresponding to each frame of image", the method includes:
step S601, detecting an angular point in the interested area of the current frame image and extracting a binary descriptor;
step S602, matching the binary descriptor of the current frame image with the binary descriptor of the previous frame image to obtain the two-dimensional-two-dimensional relationship between the current frame image and the previous frame image;
step S603, acquiring the three-dimensional-two-dimensional relationship of the current frame image based on the three-dimensional-two-dimensional relationship of the previous frame image and the two-dimensional-two-dimensional relationship of the current frame image and the previous frame image;
and step S604, calculating the camera attitude corresponding to the current frame image by adopting an EPnP method based on the three-dimensional-two-dimensional relation of the current frame image.
7. The three-dimensional vision based cylinder augmented reality method of claim 1, wherein before "superimposing the input virtual image on the cylinder video image by adopting the camera pose corresponding to each frame of image for cylinder augmented reality" in step S70, a step of eliminating instability of the camera pose is further provided, and the method comprises:
and smoothing the camera attitude by adopting an extended Kalman filter to eliminate instability of the camera attitude.
8. A cylinder augmented reality system based on three-dimensional vision is characterized by comprising an input module, a world coordinate system establishing module, a camera attitude calculating module, a cylinder three-dimensional reconstruction module, a cylinder video image initializing module, an inter-frame camera attitude tracking module, an augmented reality module and an output module;
the input module is configured to acquire and input a cylindrical video image set with multiple view angles;
the world coordinate system establishing module is configured to fit a cylinder contour line to each image in the multi-view video image set and establish a world coordinate system;
the camera attitude calculation module is configured to calculate a camera attitude corresponding to each image based on the fitted cylinder contour line and the world coordinate system by utilizing the projective invariance and the imaging principle;
the cylinder three-dimensional reconstruction module is configured to extract a feature point of each image in the multi-view video image set, reconstruct a space point corresponding to the feature point based on a camera posture corresponding to the image, and obtain a reconstructed cylinder three-dimensional model;
the cylinder video image initialization module is configured to acquire and initialize a corresponding cylinder video image based on the reconstructed cylinder three-dimensional model to acquire an initial camera attitude and a three-dimensional-two-dimensional corresponding relation;
the inter-frame camera attitude tracking module is configured to perform inter-frame camera attitude tracking calculation based on the initial camera attitude and the three-dimensional-two-dimensional corresponding relation to obtain a camera attitude corresponding to each frame of image;
the augmented reality module is configured to adopt the camera posture corresponding to each frame of image to superimpose the input virtual image on the cylindrical video image to perform cylindrical augmented reality;
and the output module is configured to output the cylinder video image after augmented reality.
9. A storage device having a plurality of programs stored therein, wherein the programs are adapted to be loaded and executed by a processor to implement the three-dimensional vision based cylinder augmented reality method of any one of claims 1 to 7.
10. A processing apparatus, comprising
A processor adapted to execute various programs; and
a storage device adapted to store a plurality of programs;
wherein the program is adapted to be loaded and executed by a processor to perform:
the three-dimensional vision based cylinder augmented reality method of any one of claims 1-7.
CN201910360629.XA 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision Active CN110120101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910360629.XA CN110120101B (en) 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910360629.XA CN110120101B (en) 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision

Publications (2)

Publication Number Publication Date
CN110120101A CN110120101A (en) 2019-08-13
CN110120101B true CN110120101B (en) 2021-04-02

Family

ID=67520319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910360629.XA Active CN110120101B (en) 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision

Country Status (1)

Country Link
CN (1) CN110120101B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113673283A (en) * 2020-05-14 2021-11-19 惟亚(上海)数字科技有限公司 Smooth tracking method based on augmented reality
CN112288823B (en) * 2020-10-15 2022-12-06 武汉工程大学 Calibration method of standard cylinder curved surface point measuring equipment
CN112734914A (en) * 2021-01-14 2021-04-30 温州大学 Image stereo reconstruction method and device for augmented reality vision
CN114549660B (en) * 2022-02-23 2022-10-21 北京大学 Multi-camera calibration method, device and equipment based on cylindrical self-identification marker
CN115115708B (en) * 2022-08-22 2023-01-17 荣耀终端有限公司 Image pose calculation method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
KR20170090165A (en) * 2016-01-28 2017-08-07 허상훈 Apparatus for realizing augmented reality using multiple projector and method thereof
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 A kind of three-dimensional rebuilding method, device, equipment and storage medium
CN109389634A (en) * 2017-08-02 2019-02-26 蒲勇飞 Virtual shopping system based on three-dimensional reconstruction and augmented reality
CN109472873A (en) * 2018-11-02 2019-03-15 北京微播视界科技有限公司 Generation method, device, the hardware device of threedimensional model
CN109685913A (en) * 2018-12-21 2019-04-26 西安电子科技大学 Augmented reality implementation method based on computer vision positioning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10282913B2 (en) * 2017-07-24 2019-05-07 Visom Technology, Inc. Markerless augmented reality (AR) system
WO2019032736A1 (en) * 2017-08-08 2019-02-14 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of an AR Marker for Cylindrical Surface; Asahi Suzuki et al.; 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR); 2013-12-31; pp. 293-294 *

Also Published As

Publication number Publication date
CN110120101A (en) 2019-08-13

Similar Documents

Publication Publication Date Title
CN110120101B (en) Cylinder augmented reality method, system and device based on three-dimensional vision
CN109076172B (en) Method and system for generating an efficient canvas view from an intermediate view
US8885920B2 (en) Image processing apparatus and method
CN108629843B (en) Method and equipment for realizing augmented reality
Kutulakos et al. Calibration-free augmented reality
US8077906B2 (en) Apparatus for extracting camera motion, system and method for supporting augmented reality in ocean scene using the same
Tang et al. 3D mapping and 6D pose computation for real time augmented reality on cylindrical objects
CN108028871A (en) The more object augmented realities of unmarked multi-user in mobile equipment
CN105701828B (en) A kind of image processing method and device
CN106296598B (en) 3 d pose processing method, system and camera terminal
KR101410273B1 (en) Method and apparatus for environment modeling for ar
JP2003533817A (en) Apparatus and method for pointing a target by image processing without performing three-dimensional modeling
da Silveira et al. Dense 3D scene reconstruction from multiple spherical images for 3-DoF+ VR applications
CN112348958A (en) Method, device and system for acquiring key frame image and three-dimensional reconstruction method
Pulli et al. Mobile panoramic imaging system
Sweeney et al. Structure from motion for panorama-style videos
Subramanyam Automatic image mosaic system using steerable Harris corner detector
CN114730482A (en) Device coordinate system in associated multi-person augmented reality system
CN109074658A (en) The method for carrying out the reconstruction of 3D multiple view by signature tracking and Model registration
CN112734628B (en) Projection position calculation method and system for tracking point after three-dimensional conversion
US11120606B1 (en) Systems and methods for image texture uniformization for multiview object capture
US11176636B2 (en) Method of plane tracking
JP2002008014A (en) Method and device for extracting three-dimensional shape, and recording medium
Wang et al. Dtf-net: Category-level pose estimation and shape reconstruction via deformable template field
JP2002094849A (en) Wide view image pickup device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant