CN104240289A - Three-dimensional digitalization reconstruction method and system based on single camera - Google Patents


Info

Publication number: CN104240289A (granted publication: CN104240289B)
Authority: CN (China)
Application number: CN201410339682.9A
Original language: Chinese (zh)
Inventor: 崔岩
Assignee: Individual (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)


Classification

  • Image Processing (AREA)

Abstract

The invention relates to a three-dimensional digital reconstruction method and system based on a single camera. A client obtains a set of two-dimensional images and sends them to a cloud server; the cloud server collects the RGB-channel data of the two-dimensional images and converts it into UVW-channel data; image feature points are selected and matched in the UVW color space; without knowing the internal parameters of the camera, the camera position is calibrated according to the corresponding triangle relationship, yielding the rotation matrix R and translation vector t of the camera position; the object is reconstructed in three dimensions from the matched image feature points together with R and t; and a color texture map is automatically generated on the object surface by three-dimensional object region analysis. With this method, the camera position can be determined without knowing the camera's internal parameters, which improves the flexibility of data acquisition, and the image data are processed in the UVW color space, which greatly reduces data loss and improves the stability of the computed result.

Description

Three-dimensional digital reconstruction method and system based on a single camera
Technical field
The present invention relates to the field of three-dimensional technology, and in particular to a three-dimensional digital reconstruction method and system based on a single camera.
Background technology
Three-dimensional digital modeling of the physical world is widely used in many fields, such as three-dimensional maps, film digitization, digital museums, e-commerce, and 3D printing. We are entering an era of big data, moving from two-dimensional images to three dimensions, and how to efficiently and accurately recover three-dimensional stereo information from two-dimensional digital photographs is a vital technical problem. Image-based three-dimensional reconstruction recovers the three-dimensional model of an object or scene from several pictures. It is an interdisciplinary method involving computer image processing, computer graphics, computer vision, and pattern recognition.
Compared with traditional approaches that use modeling software or a 3D scanner to obtain a stereoscopic model, image-based three-dimensional reconstruction is low-cost, highly realistic, and highly automated. In theory, it is the inverse problem of computer graphics. How to recover three-dimensional information from disturbed or incomplete two-dimensional information is a major difficulty of this technology, and of computer vision in general. Obtaining a three-dimensional model of an object from an image sequence, using results from machine vision to reconstruct the scene quickly, has long been an important research topic in computer vision.
In general, methods for obtaining a three-dimensional object model from an image sequence can be divided into two broad classes according to whether the camera's internal parameters are known.
(1) When the camera's internal parameters are known, the problem is called calibrated structure from motion. Early methods concentrated on pairs of images: by finding a series of corresponding point pairs in two images, the three-dimensional model of the object is obtained. Attention later shifted to long image sequences, because the redundancy of a long sequence yields a more accurate three-dimensional model. A representative method is the factorization approach of Tomasi and Kanade, which extracts a three-dimensional contour under an affine camera model. Its advantage is that an accurate three-dimensional model in the Euclidean sense can be obtained; its limitation is that the camera's intrinsic parameters must be known, i.e. the camera must be calibrated in advance, which is inconvenient in many applications.
(2) In practical applications, the camera's internal parameters are usually unknown. Obtaining a three-dimensional object model without them is called uncalibrated structure from motion. This is a harder problem, and so far no method yields fully satisfactory results, mainly because the information available at the outset is very limited: both the geometry of the scene and the position of the camera are unknown. The only available information consists of common assumptions, e.g. the scene is rigid, the grayscale variation of the object surface is continuous, and the camera follows a pinhole model.
The prior art has the following shortcomings:
(1) The camera's internal parameters must be calibrated with a calibration board: existing products require a prepared black-and-white checkerboard to calibrate the camera's internal parameters (including focal length, optical center position, image distortion, etc.) in advance. After calibration, the internal parameters, in particular the focal length, cannot be changed during data collection (photographing), which constrains the shooting process and is inconvenient in many applications;
(2) Sensitivity to the environment: existing products are affected by many environmental factors in the scene, including illumination and light-source conditions, the geometric shape and physical properties of the object (especially surface reflectivity), the characteristics of the camera, and the spatial relationships among light source, object, and camera. A change in any factor changes the image, so the result of three-dimensional digital reconstruction is very unstable;
(3) In the prior art, feature point selection and feature matching algorithms are all based on single-channel grayscale images. Even if the collected data are three-channel RGB color images, the three-channel RGB model is converted into a single-channel grayscale (Gray) model. Each pixel of a single-channel model has only one value between 0 and 255 to represent its gray level, so the data are simple and convenient to compute with, but the shortcoming is also apparent: a large amount of useful data is lost, making the result poor and unstable.
Summary of the invention
The technical problem to be solved by the present invention is to address the deficiencies of the prior art by providing a three-dimensional digital reconstruction method and system based on a single camera.
The technical scheme by which the present invention solves the above technical problem is as follows: a three-dimensional digital reconstruction method based on a single camera, comprising the following steps:
Step 1: the client obtains a group of two-dimensional pictures of the object to be reconstructed and sends them to the cloud server;
Step 2: the cloud server collects the RGB three-channel color image data of the two-dimensional pictures and converts it into UVW three-channel color image data;
Step 3: image feature points are selected and matched in the UVW three-channel color space;
Step 4: with the camera's internal parameters unknown, the camera position is calibrated according to the corresponding triangle relationship, yielding the rotation matrix R and translation vector t of the camera position;
Step 5: the object is reconstructed in three dimensions from the matched image feature points and the camera-position rotation matrix R and translation vector t;
Step 6: a color texture map is automatically generated on the object surface by three-dimensional object region analysis.
The beneficial effects of the invention are: the camera position can be located without knowing the camera's internal parameters, so the internal parameters need not be verified before each data collection, which improves the flexibility of data acquisition; and the image data are processed in the UVW three-channel color space, which greatly reduces data loss and improves the stability of the computed result.
On the basis of the above technical scheme, the present invention can be further improved as follows.
Further, in step 2, the conversion of three-channel RGB color image data into three-channel UVW color image data is implemented as follows:
Step 2.1: the nonlinear RGB three-channel color model is converted into the approximately linear XYZ three-channel color model, with the conversion formula

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \frac{1}{0.177} \begin{bmatrix} 0.49 & 0.31 & 0.20 \\ 0.177 & 0.812 & 0.011 \\ 0.00 & 0.01 & 0.99 \end{bmatrix} \begin{bmatrix} \gamma(R) \\ \gamma(G) \\ \gamma(B) \end{bmatrix};$$

where the gamma correction factor of the color space is γ = 2.0;
Step 2.2: the approximately linear XYZ three-channel color model is converted into the UVW three-channel color model, which has spatial three-dimensional geometric meaning, with the conversion formula

$$\vec{x} \mapsto F(\vec{x}) = A\,\widehat{\ln}(B\vec{x})$$

where F is the function from the XYZ three-channel color space to the UVW three-channel color space, $\vec{x}$ represents the XYZ color-space value of a pixel, $\widehat{\ln}$ takes the natural logarithm of each component, and A, B are constant matrices:

$$A = \begin{bmatrix} 27.07439 & -22.80783 & -1.806681 \\ -5.646736 & -7.722125 & 12.86503 \\ -4.163133 & -4.579428 & -4.578049 \end{bmatrix}, \quad B = \begin{bmatrix} 0.9465229 & 0.2946927 & -0.1313419 \\ -0.117917 & 0.9929960 & 0.007371554 \\ 0.0923046 & -0.04645794 & 0.9946464 \end{bmatrix}$$

The beneficial effect of this further scheme is: through the above calculation, RGB color model data can be converted into the UVW color model, in which a color point can be regarded as a three-dimensional point with geometric meaning. Processing the data in this way greatly reduces the loss of useful data, making the calculated result accurate and stable.
Further, in step 3, the selection and matching of feature points in the UVW three-channel color model is implemented as follows:
Step 3.1: redefine the neighborhood gradient in the UVW three-channel color model: the gradients in the X and Y directions are obtained by subtracting the vectors along the respective directions, giving G_x and G_y respectively, and θ is the angle between G_x and G_y;
Step 3.2: each feature point is represented by a triangle; when two triangles are identical or similar, the two feature points are identical or similar.
The beneficial effect of this further scheme is: if the two triangles are identical or similar, the two feature points are identical or similar; if the two triangles differ greatly, the two feature points are completely different. Because a translation in UVW space represents a change of illumination direction and color temperature, the feature comparison considers only the size and shape of the triangle itself, not its position, so the color feature resists interference from illumination and color-temperature changes. Matching under this further scheme is therefore simple, and resistance to changes in illumination and color temperature is achieved.
Further, in step 4, with the camera's internal parameters unknown, camera position calibration according to the corresponding triangle relationship is implemented as follows: one camera photographing at two different positions, possibly with different internal-parameter states, is equivalent to two cameras, a first camera and a second camera;
Step 4.1: using the fundamental matrix relation, compute the fundamental matrix F from the coordinates m and m' of corresponding points, where the fundamental matrix relation is

$$m'^T F m = 0$$

where F is the fundamental matrix and m and m' are the projections of the object point M at the two camera positions;
Step 4.2: let the first camera coordinate system coincide with the world coordinate system, and let R and t be the rotation matrix and translation vector of the second camera coordinate system relative to the world coordinate system; then the fundamental matrix F can also be expressed as

$$F = K'^{-T} [t]_\times R K^{-1}$$

where K is the internal parameter matrix of the first camera, K' is the internal parameter matrix of the second camera, both unknown, and $[t]_\times$ is the antisymmetric matrix of t:

$$[t]_\times = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}$$

Step 4.3: from the relation for the fundamental matrix F in step 4.2, compute the rotation matrix R and translation vector t.
The beneficial effect of this further scheme is: in general, an epipolar constraint relation exists between images of the same object under the same world coordinate system. In stereo vision, point matches can be used to recover this geometric relation; conversely, the relation can be used to constrain matching, reducing the search range for corresponding points from a two-dimensional plane to the corresponding one-dimensional epipolar line, which greatly improves both the robustness and the precision of matching.
Further, in step 5, the three-dimensional reconstruction of the object from the matched image feature points and the camera-position rotation matrix R and translation vector t is implemented as follows:
Step 5.1: reconstruct, from the matched image feature points and the camera position calibration, a sparse three-dimensional point cloud forming the skeleton of the object;
Step 5.2: expand the sparse three-dimensional point cloud into a dense three-dimensional point cloud that covers the whole object surface but contains erroneous surface points;
Step 5.3: filter the obtained object surface according to grayscale consistency and visibility;
Step 5.4: repeat steps 5.2 and 5.3 a predetermined number of times to obtain a dense three-dimensional point cloud representing the object surface without erroneous surfaces;
Step 5.5: on the basis of the dense three-dimensional point cloud of the object, perform surface reconstruction of the object.
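The alternation of steps 5.2 to 5.4 can be sketched as a simple iteration. The `expand` and `filter_points` callables below are hypothetical stand-ins for a patch-based multi-view-stereo implementation, not code from the patent:

```python
def densify(sparse_cloud, expand, filter_points, n_iters=3):
    """Repeat expand -> filter n_iters times, starting from the sparse cloud.

    expand:        step 5.2 - grow the cloud to cover the surface (may add
                   erroneous points)
    filter_points: step 5.3 - remove points failing photo-consistency or
                   visibility checks
    """
    cloud = list(sparse_cloud)
    for _ in range(n_iters):
        cloud = expand(cloud)
        cloud = filter_points(cloud)
    return cloud
```

The fixed iteration count plays the role of the "predetermined number" in step 5.4.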
Further, step 6 is implemented as follows:
Step 6.1: divide the three-dimensional object obtained by surface reconstruction in step 5 into regions, taking each region of relatively consistent color as a projection region, so that the three-dimensional object is divided into several projection regions;
Step 6.2: compute the angle between the normal direction of each projection region and each camera direction, and choose the color of the camera with the smallest angle as the color texture map of that projection region.
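Step 6.2 reduces to one angle comparison per projection region; a minimal sketch follows, assuming unit direction vectors from the region toward each camera (the function and variable names are ours, not the patent's):

```python
import numpy as np

def best_camera(normal, cam_dirs):
    """normal: (3,) region normal; cam_dirs: (N, 3) directions toward cameras.
    Returns the index of the camera whose direction makes the smallest
    angle with the region normal (its image supplies the texture color)."""
    n = normal / np.linalg.norm(normal)
    d = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(d @ n, -1.0, 1.0))  # angle per camera
    return int(np.argmin(angles))
```

A camera looking straight down the region normal (angle 0) always wins, which matches the intuition that the most frontal view gives the least-distorted texture.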
Another technical scheme by which the present invention solves the above technical problem is as follows: a three-dimensional digital reconstruction system based on a single camera, comprising a client and a cloud server, the cloud server comprising a data acquisition module, a feature point selection and matching module, a camera position and direction calibration module, a three-dimensional reconstruction module, and a color texture map module;
the client obtains a group of two-dimensional pictures taken by a single camera and sends them to the cloud server;
the data acquisition module receives the two-dimensional pictures sent by the client, collects the RGB three-channel color image data of the pictures, and passes the collected data to the feature point selection and matching module;
the feature point selection and matching module selects and matches image feature points in the UVW three-channel color space, and sends the selected feature points and matching information to the camera position and direction calibration module;
the camera position and direction calibration module, with the camera's internal parameters unknown, calibrates the camera position according to the corresponding triangle relationship, obtains the rotation matrix R and translation vector t of the camera position, and sends the camera position and direction calibration information to the three-dimensional reconstruction module;
the three-dimensional reconstruction module reconstructs the object in three dimensions from the matched image feature points and the camera-position rotation matrix R and translation vector t, and sends the dense three-dimensional point cloud obtained by the reconstruction to the color texture map module;
the color texture map module, on the basis of the dense three-dimensional point cloud, automatically generates a color texture map on the object surface by three-dimensional object region analysis.
On the basis of the above technical scheme, the present invention can be further improved as follows.
Further, the feature point selection and matching module comprises a spatial model conversion unit and a feature point selection and matching unit;
the spatial model conversion unit converts the RGB three-channel color image data into UVW three-channel color image data and sends the converted data to the feature point selection and matching unit;
the feature point selection and matching unit selects and matches image feature points in the UVW three-channel color space and sends the selected feature points and matching information to the camera position and direction calibration module.
Further, the three-dimensional reconstruction module comprises a sparse point reconstruction unit, a dense point reconstruction unit, a surface filtering unit, and a surface reconstruction unit;
the sparse point reconstruction unit reconstructs, from the matched image feature points and the camera position calibration, a sparse three-dimensional point cloud of sparse object points;
the dense point reconstruction unit expands the sparse three-dimensional point cloud into a dense three-dimensional point cloud covering the whole object surface;
the surface filtering unit filters the obtained object surface according to grayscale consistency and visibility, finally obtaining a dense three-dimensional point cloud representing the object surface without erroneous surfaces;
the surface reconstruction unit performs surface reconstruction of the object on the basis of the dense three-dimensional point cloud.
Further, the color texture map module comprises a projection division unit and a projection color selection unit;
the projection division unit divides the three-dimensional object obtained by surface reconstruction into regions, taking each region of relatively consistent color as a projection region, so that the three-dimensional object is divided into several projection regions;
the projection color selection unit computes the angle between the normal direction of each projection region and each camera direction, and chooses the color of the camera with the smallest angle as the color texture map of that projection region.
Description of the accompanying drawings
Fig. 1 is a flow chart of a three-dimensional digital reconstruction method based on a single camera according to the present invention;
Fig. 2 is a schematic diagram of feature representation and feature matching in the UVW color model;
Fig. 3 is a schematic diagram of the epipolar geometry relation between two images;
Fig. 4 is a block diagram of a three-dimensional digital reconstruction system based on a single camera according to the present invention;
Fig. 5 is a structural block diagram of the feature point selection and matching module of the present invention;
Fig. 6 is a structural block diagram of the three-dimensional reconstruction module of the present invention;
Fig. 7 is a structural block diagram of the color texture map module of the present invention.
In the drawings, the parts represented by the reference numerals are as follows:
1, client; 2, cloud server; 21, data acquisition module; 22, feature point selection and matching module; 23, camera position and direction calibration module; 24, three-dimensional reconstruction module; 25, color texture map module; 221, spatial model conversion unit; 222, feature point selection and matching unit; 241, sparse point reconstruction unit; 242, dense point reconstruction unit; 243, surface filtering unit; 244, surface reconstruction unit; 251, projection division unit; 252, projection color selection unit.
Embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples are given only to explain the present invention and are not intended to limit its scope.
In the present invention, the client takes 30 to 60 two-dimensional digital photos as required and passes them to the server, which processes the data automatically. For an input group of two-dimensional images, the camera positions are found first and sparse points are reconstructed; dense points are then reconstructed on the basis of the sparse points; finally, surface reconstruction and color texture mapping are performed. The final result is obtained within 1 to 2 minutes and returned to the client. The geometric accuracy of the resulting data can reach 1 mm, and the color texture map is generated automatically.
As shown in Fig. 1, a three-dimensional digital reconstruction method based on a single camera comprises the following steps:
Step 1: the client obtains a group of two-dimensional pictures of the object to be reconstructed and sends them to the cloud server;
Step 2: the cloud server collects the RGB three-channel color image data of the two-dimensional pictures and converts it into UVW three-channel color image data;
Step 3: image feature points are selected and matched in the UVW three-channel color space;
Step 4: with the camera's internal parameters unknown, the camera position is calibrated according to the corresponding triangle relationship, yielding the rotation matrix R and translation vector t of the camera position;
Step 5: the object is reconstructed in three dimensions from the matched image feature points and the camera-position rotation matrix R and translation vector t;
Step 6: a color texture map is automatically generated on the object surface by three-dimensional object region analysis.
In the second step, the cloud server collects the RGB three-channel color image data of the two-dimensional pictures and converts it into UVW three-channel color image data.
Previous feature point selection and matching algorithms (such as SIFT, SURF, etc.) are all based on single-channel grayscale images. Even if the collected data are three-channel RGB color images, the three-channel RGB model is converted into a single-channel grayscale (Gray) model by the formula Gray = R*0.3 + G*0.59 + B*0.11. Each pixel of a single-channel model has only one value between 0 and 255 to represent its gray level, so the data are simple and convenient to compute with, but the shortcoming is also apparent: a large amount of useful data is lost, making the result poor and unstable.
The present invention directly uses the three-channel color model to select and match feature points. First, the nonlinear RGB three-channel color space is converted into the approximately linear XYZ three-channel color model, with the conversion matrix

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \frac{1}{0.177} \begin{bmatrix} 0.49 & 0.31 & 0.20 \\ 0.177 & 0.812 & 0.011 \\ 0.00 & 0.01 & 0.99 \end{bmatrix} \begin{bmatrix} \gamma(R) \\ \gamma(G) \\ \gamma(B) \end{bmatrix}$$

where the gamma correction factor of the color space is γ = 2.0. The three-channel XYZ color model is then converted into the UVW color space, which has spatial three-dimensional geometric meaning. In the UVW color space, the three-channel color model can be regarded as a geometric xyz three-dimensional model, and in this space the sameness or similarity of features can be redefined: if the Euclidean distance between the UVW coordinates of two points is very small, their features are identical or similar. How is the corresponding UVW color space found? Suppose F is the desired function from the XYZ color space to the UVW color space, and $\vec{x}$ represents the XYZ color-space value of a pixel. According to the BRDF imaging principle, the color of an object is formed by light reflecting off the object's material. From a large amount of experimental data, the function F can be modeled as

$$\vec{x} \mapsto F(\vec{x}) = A\,\widehat{\ln}(B\vec{x})$$

where $\widehat{\ln}$ takes the natural logarithm of each component of the vector, and A and B are 3 × 3 matrices with the following values:

$$A = \begin{bmatrix} 27.07439 & -22.80783 & -1.806681 \\ -5.646736 & -7.722125 & 12.86503 \\ -4.163133 & -4.579428 & -4.578049 \end{bmatrix}, \quad B = \begin{bmatrix} 0.9465229 & 0.2946927 & -0.1313419 \\ -0.117917 & 0.9929960 & 0.007371554 \\ 0.0923046 & -0.04645794 & 0.9946464 \end{bmatrix}$$

Through the above calculation, RGB color model data can be converted into the UVW color model, in which a color point can be regarded as a three-dimensional point with geometric meaning.
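The RGB → XYZ → UVW conversion above can be sketched in a few lines of numpy. The matrices and γ = 2.0 are taken from the description; reading $\widehat{\ln}$ as an element-wise logarithm, and clamping its argument to stay positive, are our assumptions:

```python
import numpy as np

GAMMA = 2.0  # gamma correction factor from the description

M_RGB2XYZ = (1.0 / 0.177) * np.array([
    [0.49,  0.31,  0.20],
    [0.177, 0.812, 0.011],
    [0.00,  0.01,  0.99],
])

A = np.array([
    [27.07439, -22.80783, -1.806681],
    [-5.646736, -7.722125, 12.86503],
    [-4.163133, -4.579428, -4.578049],
])

B = np.array([
    [0.9465229,  0.2946927, -0.1313419],
    [-0.117917,  0.9929960,  0.007371554],
    [0.0923046, -0.04645794, 0.9946464],
])

def rgb_to_uvw(rgb):
    """Map an (..., 3) array of RGB values in [0, 1] to UVW coordinates."""
    linear = rgb ** GAMMA                 # gamma correction gamma(.)
    xyz = linear @ M_RGB2XYZ.T            # approximately linear XYZ space
    bx = np.clip(xyz @ B.T, 1e-6, None)   # guard the logarithm (our choice)
    return np.log(bx) @ A.T               # F(x) = A * ln(B x), element-wise ln
```

In the resulting UVW space, two pixels whose coordinates are close in Euclidean distance would be treated as having identical or similar features.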
In the third step, image feature points are selected and matched in the UVW three-channel color space.
Step 3.1: redefine the neighborhood gradient in the UVW three-channel color model: the gradients in the X and Y directions are obtained by subtracting the vectors along the respective directions, giving G_x and G_y respectively, and θ is the angle between G_x and G_y;
Step 3.2: each feature point is represented by a triangle; when two triangles are identical or similar, the two feature points are identical or similar.
Under the three-channel color model, the neighborhood gradient must be redefined. The gradient definition for a grayscale-image neighborhood is very simple: since there is a single channel, neighboring gray values are simply subtracted. The three-channel neighborhood gradient is a vector subtraction: the gradient in the x direction is obtained by subtracting the vectors along the x direction, giving G_x, and likewise G_y is obtained in the y direction. The angle θ is the angle between G_x and G_y.
As shown in Fig. 2, the two triangles represent two feature points. If the two triangles are identical or similar, the two feature points are identical or similar; if the two triangles differ greatly, the two feature points are completely different. Because a translation in UVW space represents a change of illumination direction and color temperature, our feature comparison considers only the size and shape of the triangle itself, not its position. Our color feature therefore resists interference from changes in illumination and color temperature.
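The redefined neighborhood gradient can be sketched by treating each UVW pixel as a 3-vector; the function name and the use of central differences are our assumptions, not details from the patent:

```python
import numpy as np

def uvw_gradients(img):
    """img: (H, W, 3) UVW image.
    Returns the vector gradients Gx, Gy (same shape as img) and the
    per-pixel angle theta between Gx and Gy."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Vector subtraction of neighbours along x and y (central differences).
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    # theta: angle between the two gradient vectors at each pixel.
    dot = np.sum(gx * gy, axis=-1)
    norm = np.linalg.norm(gx, axis=-1) * np.linalg.norm(gy, axis=-1)
    theta = np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))
    return gx, gy, theta
```

The magnitudes of G_x and G_y together with θ would then be used to build the triangle descriptor compared in Fig. 2.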
In the fourth step, with the camera's internal parameters unknown, the position and direction of the camera at the time each two-dimensional picture was taken are determined from the image feature points according to the epipolar geometry principle, and the correspondence between the camera and the object being reconstructed is established.
Calibration of the camera position and direction with unknown internal parameters: in general, an epipolar constraint relation exists between images of the same object under the same world coordinate system. In stereo vision, point matches can be used to recover this geometric relation; conversely, the relation can be used to constrain matching, reducing the search range for corresponding points from a two-dimensional plane to the corresponding one-dimensional epipolar line, which greatly improves both the robustness and the precision of matching.
The epipolar geometry relation can be expressed mathematically by the fundamental matrix F, so the epipolar geometry problem becomes the problem of estimating F. Computing F accurately is significant for calibration, for finding exact matches, and for three-dimensional reconstruction.
As shown in Fig. 3, suppose a stereo vision system has two cameras. Let C and C' be the optical centers of the two cameras and I and I' the images they acquire. Let M be any point in three-dimensional space, and let m and m' be the image points (projections) of M on the two images; m and m' are called a pair of corresponding points. The straight line connecting the optical centers C and C' is called the baseline. The spatial point M and the two optical centers C and C' are coplanar; the plane Π containing them is called the epipolar plane. The intersections l and l' of the epipolar plane with the image planes are called epipolar lines. Since m (m') lies both on the plane Π and on the image plane I (I'), l (l') must pass through m (m'); that is, the corresponding point m' (m) of m (m') must lie on l' (l). Hence, to find the corresponding point m' (m) of m (m'), there is no need to search the entire image I' (I); it suffices to search along the epipolar line of m (m') in I' (I). This provides an important epipolar constraint, reducing the search space for corresponding points from two dimensions to one. As the three-dimensional point M moves, all the epipolar lines generated pass through the epipole e (e'), the intersection of the baseline with the image plane.
Let the projection matrices of the two cameras be $P_1$ and $P_2$; the projection equations of the two cameras are

$$Z_c m = P_1 M = [P_{1A}\; P_{1B}]\,M \qquad (4.1)$$

$$Z_{c'} m' = P_2 M = [P_{2A}\; P_{2B}]\,M \qquad (4.2)$$

where M is the homogeneous coordinate of the three-dimensional point M in the world coordinate system, and m, m' are the homogeneous coordinates of the projections m, m' in image coordinates. The left 3 × 3 parts of the projection matrices $P_1$ and $P_2$ are denoted $P_{1A}$ and $P_{2A}$, and the right 3 × 1 parts are denoted $P_{1B}$ and $P_{2B}$. Writing $M = (X_w, Y_w, Z_w, 1)^T$ and $\bar{M} = (X_w, Y_w, Z_w)^T$, formulas (4.1) and (4.2) expand to

$$Z_c m = P_{1A}\bar{M} + P_{1B} \qquad (4.3)$$

$$Z_{c'} m' = P_{2A}\bar{M} + P_{2B} \qquad (4.4)$$

Eliminating $\bar{M}$ from the above formulas gives

$$Z_{c'} m' - Z_c P_{2A} P_{1A}^{-1} m = P_{2B} - P_{2A} P_{1A}^{-1} P_{1B} \qquad (4.5)$$
Let t be a three-dimensional vector, t = (t_x, t_y, t_z)^T. The following matrix is called the antisymmetric (skew-symmetric) matrix defined by t, denoted [t]_×:
          [  0    −t_z    t_y ]
[t]_× =   [  t_z    0    −t_x ]
          [ −t_y   t_x     0  ]
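The defining property of [t]_× is that it turns the cross product into a matrix multiplication: [t]_× v = t × v for any vector v, and in particular [t]_× t = 0, the property used below to eliminate p. A minimal pure-Python sketch (the helper names are illustrative, not part of the patent):

```python
def skew(t):
    """Antisymmetric (skew-symmetric) matrix [t]x defined by t = (tx, ty, tz)."""
    tx, ty, tz = t
    return [[0.0, -tz,  ty],
            [ tz, 0.0, -tx],
            [-ty,  tx, 0.0]]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

t = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]
# [t]x v equals the cross product t x v ...
print(matvec(skew(t), v))   # → [-3.0, 6.0, -3.0]
print(cross(t, v))          # → [-3.0, 6.0, -3.0]
# ... and [t]x t = 0, the property used with p in formula (4.7)
print(matvec(skew(t), t))   # → [0.0, 0.0, 0.0]
```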
Denote the right-hand side of formula (4.5) by the vector p, namely
p = P_2B − P_2A P_1A^−1 P_1B    (4.6)
The antisymmetric matrix defined by p is denoted [p]_×. Premultiplying both sides of formula (4.5) by [p]_× and using [p]_× p = 0 gives
[p]_× (Z_c' m' − Z_c P_2A P_1A^−1 m) = 0    (4.7)
Dividing both sides by Z_c' and writing Z = Z_c / Z_c' gives
[p]_× Z P_2A P_1A^−1 m = [p]_× m'    (4.8)
The right-hand side [p]_× m' = p × m' is evidently orthogonal to m'. Premultiplying both sides by m'^T and then dividing by Z yields the following important relation:
m'^T [p]_× P_2A P_1A^−1 m = 0    (4.9)
Formula (4.9) gives the relation that the homogeneous coordinates of m and m' (the projections of a three-dimensional point M in the two image planes) must satisfy. Note that for a given m (m'), formula (4.9) is a linear equation in m' (m), namely the equation of the epipolar line in image I' (I).
Letting F = [p]_× P_2A P_1A^−1, formula (4.9) can be rewritten as:
m'^T F m = 0    (4.10)
The matrix F plays a central role in stereo vision and motion vision; it is called the fundamental matrix.
If the first camera coordinate system is taken to coincide with the world coordinate system, K and K' are the intrinsic parameter matrices of the first and second cameras, and R and t are the rotation matrix and translation vector of the second camera coordinate system relative to the world coordinate system, then
P_1 = K[I  0],  P_2 = K'[R  t]    (4.11)
and the fundamental matrix can also be expressed as:
F = K'^−T [t]_× R K^−1    (4.12)
The importance of formula (4.10) is that it provides a way to determine the fundamental matrix: the camera projection matrices are not needed, only corresponding points between the two images. This makes it possible to recover the fundamental matrix F from point correspondences alone, a process known as weak calibration of the stereo camera. By the method set out above, the camera position and orientation can therefore be computed without knowing the camera intrinsics (focal length, principal-point offset, etc.).
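The consistency of formulas (4.10)–(4.12) can be checked numerically. The sketch below is not from the patent; all camera parameters are invented for illustration. It builds F from K, K', R and t via formula (4.12), projects a 3D point through P_1 = K[I 0] and P_2 = K'[R t], and verifies that the noise-free corresponding pair satisfies m'^T F m = 0:

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_vec(a, v):
    return [sum(a[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def inverse(a):
    # 3x3 inverse via the cofactor (adjugate) matrix
    c = [[a[(i + 1) % 3][(j + 1) % 3] * a[(i + 2) % 3][(j + 2) % 3] -
          a[(i + 1) % 3][(j + 2) % 3] * a[(i + 2) % 3][(j + 1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(a[0][j] * c[0][j] for j in range(3))
    return [[c[j][i] / det for j in range(3)] for i in range(3)]

def skew(t):
    tx, ty, tz = t
    return [[0.0, -tz, ty], [tz, 0.0, -tx], [-ty, tx, 0.0]]

# Invented intrinsics for the two views (K, K') and relative pose (R, t)
K1 = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
K2 = [[750.0, 0.0, 300.0], [0.0, 750.0, 260.0], [0.0, 0.0, 1.0]]
th = 0.1  # second camera rotated about the z-axis
R = [[math.cos(th), -math.sin(th), 0.0],
     [math.sin(th),  math.cos(th), 0.0],
     [0.0, 0.0, 1.0]]
t = [0.5, 0.1, 0.05]

# Formula (4.12): F = K'^-T [t]x R K^-1
F = mat_mul(transpose(inverse(K2)), mat_mul(skew(t), mat_mul(R, inverse(K1))))

# Project a 3D point with P1 = K[I 0] and P2 = K'[R t] (formula 4.11)
M = [0.3, -0.2, 5.0]
m1 = mat_vec(K1, M)                                             # image 1, homogeneous
m2 = mat_vec(K2, [mat_vec(R, M)[i] + t[i] for i in range(3)])   # image 2, homogeneous

# Epipolar constraint m'^T F m = 0 (formula 4.10)
residual = sum(m2[i] * mat_vec(F, m1)[i] for i in range(3))
print(abs(residual) < 1e-6)  # → True
```

In practice (weak calibration) F is estimated directly from point correspondences via m'^T F m = 0 rather than computed from known parameters as here; the synthetic construction only demonstrates that the two expressions for F agree.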
In the fifth step, the three-dimensional reconstruction of the object is carried out according to the matched image feature points and the camera rotation matrix R and translation vector t.
Dense reconstruction is performed with the PMVS algorithm. First, from the feature-point matches of the first step and the camera calibration of the second step, a sparse three-dimensional point cloud is reconstructed. This sparse cloud is a feature-based reconstruction containing only scattered points of the object; it is then expanded to obtain a dense point cloud.
Step 5.1: from the matched image feature points and the camera position calibration, reconstruct a sparse three-dimensional point cloud forming the skeleton of the object;
Step 5.2: expand the sparse point cloud into a dense three-dimensional point cloud that covers the entire object surface but may contain erroneous patches;
Step 5.3: filter the obtained surface according to gray-level consistency and visibility;
Step 5.4: repeat steps 5.2 and 5.3 a predetermined number of times, obtaining a dense three-dimensional point cloud representing the object surface without erroneous patches;
Step 5.5: on the basis of the dense point cloud of the object, perform surface reconstruction.
In step 5.1, for a feature f, let p be its corresponding patch and let the image containing it be the reference image R(p). Features f' of the same type satisfying the epipolar constraint with f are searched for in the other images, and the feature points found form the set V(f). By the triangulation principle, each match (f, f') yields the patch center c(p); the unit vector from c(p) toward the optical center of the camera of R(p) is taken as the patch normal n(p). With c(p) and n(p) as variables, the photometric discrepancy function g*(p) is minimized by the conjugate gradient method. All feature points matching the current feature point in the set F produce a group of candidate patches, from which the optimal one is selected as the final patch. The spatial points corresponding to all feature points constitute the sparse point cloud representing the object surface.
In steps 5.2-5.5, each patch p in the sparse point cloud is expanded: first its neighboring image blocks C(p) are found, then a neighboring patch p' is initialized from p, and c(p') and n(p') are optimized by the conjugate gradient method, yielding a new patch. When the patches cover the entire object surface, a dense point cloud is obtained. After expansion, however, the cloud may contain erroneous patches (if a feature match is wrong, the reconstructed three-dimensional point M is wrong, and the patch defined by M, C and C' is therefore erroneous), so the result is filtered according to gray-level consistency and visibility. Expansion and filtering are repeated n times (here n = 3), finally yielding the dense three-dimensional point cloud representing the object surface.
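The expand-filter iteration of steps 5.2-5.4 can be sketched schematically. In the toy sketch below (an illustration, not the PMVS implementation), patches are reduced to 3D points near a known surface, `expand` stands in for patch propagation to neighbouring image cells, and `photo_consistent` stands in for the gray-consistency and visibility test:

```python
import math
import random

random.seed(0)

def expand(patch):
    # stand-in for step 5.2: propose a neighbouring patch near an existing one
    return tuple(c + random.uniform(-0.05, 0.05) for c in patch)

def photo_consistent(patch):
    # stand-in for step 5.3: keep patches close to the true surface
    # (here the unit sphere plays the role of the object surface)
    r = math.sqrt(sum(c * c for c in patch))
    return abs(r - 1.0) < 0.04

# sparse seed cloud from step 5.1: a few exact surface points
cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
for _ in range(3):  # step 5.4: the predetermined number of repetitions (n = 3)
    proposals = [expand(p) for p in cloud for _ in range(4)]       # expansion
    cloud = cloud + [p for p in proposals if photo_consistent(p)]  # filtering
print(len(cloud))  # the cloud densifies; inconsistent patches were discarded
```

The real algorithm optimizes each patch's center and normal by conjugate gradient and tests photometric consistency across the images in which the patch is visible; only the loop structure is represented here.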
In the sixth step, three-dimensional object region analysis is used to automatically generate the color texture on the object surface.
Step 6.1: divide the three-dimensional object obtained by surface reconstruction in step 5 into regions, taking each region of relatively consistent color as a projection region, so that the object is divided into several projection regions;
Step 6.2: for each projection region, compute the angle between its normal direction and each camera direction, and choose the color from the camera with the smallest angle as the color texture of that region.
Since the position and orientation of the camera have been obtained, and the correspondence between the camera and the reconstructed object has been established, the projection relation from the two-dimensional images to the three-dimensional reconstructed object is also available. Every point on the surface of the reconstructed object can find its corresponding color information in the two-dimensional images, and this correspondence provides the color texture of the final object. Because of the multi-view projection, and without considering occlusion, each three-dimensional point is projected into at least five cameras. How should the appropriate color be chosen? A method based on three-dimensional object region analysis is adopted: region analysis finds regions of the object with relatively consistent color and, depending on the object, divides it into 50 to 500 such projection regions. For each region, the angle between the region normal and each camera direction is computed, and the color from the camera with the smallest angle is chosen as the color texture of that region. The color texture map is thus generated automatically.
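The selection rule of step 6.2 is a straightforward angle comparison. A minimal sketch (the region normal, the camera directions and the helper names are invented for illustration):

```python
import math

def angle(u, v):
    # angle between two 3D vectors, in radians
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def best_camera(region_normal, camera_dirs):
    # index of the camera whose direction makes the smallest angle
    # with the projection region's normal (step 6.2)
    return min(range(len(camera_dirs)),
               key=lambda i: angle(region_normal, camera_dirs[i]))

# an illustrative region normal and five candidate camera directions
# (the text notes each point is projected into at least 5 cameras)
n = (0.0, 0.0, 1.0)
cams = [(1.0, 0.0, 0.2), (0.0, 1.0, 0.5), (0.3, 0.3, 1.0),
        (-1.0, 0.0, 0.0), (0.1, 0.0, 2.0)]
print(best_camera(n, cams))  # → 4: this camera points most nearly along n
```

The winning region-camera pair then supplies the pixel colors for that entire projection region, which is what makes the texture assignment consistent within each region.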
As shown in Figure 4, a three-dimensional digital reconstruction system based on a single camera comprises a client 1 and a cloud server 2. The cloud server 2 comprises a data acquisition module 21, a feature point selection and matching module 22, a camera position and orientation calibration module 23, a three-dimensional reconstruction module 24 and a color texture module 25.
The client 1 is used to obtain a group of two-dimensional pictures taken by a single camera and send them to the cloud server 2.
The data acquisition module 21 receives the two-dimensional pictures sent by the client 1, collects their RGB three-channel color data, and passes the collected data to the feature point selection and matching module 22.
The feature point selection and matching module 22 selects and matches image feature points in the UVW three-channel color space, and sends the selected feature points and matching information to the camera position and orientation calibration module 23.
The camera position and orientation calibration module 23 calibrates the camera position according to the corresponding triangle relation without knowing the camera intrinsics, obtains the rotation matrix R and translation vector t of the camera position, and sends the calibration information to the three-dimensional reconstruction module 24.
The three-dimensional reconstruction module 24 carries out the three-dimensional reconstruction of the object according to the matched feature points and the camera rotation matrix R and translation vector t, and sends the resulting dense three-dimensional point cloud to the color texture module 25.
The color texture module 25 uses three-dimensional object region analysis to automatically generate the color texture on the object surface on the basis of the dense point cloud.
As shown in Figure 5, the feature point selection and matching module 22 comprises a color space model conversion unit 221 and a feature point selection and matching unit 222.
The color space model conversion unit 221 converts the RGB three-channel color data into UVW three-channel color data and sends the converted data to the feature point selection and matching unit 222.
The feature point selection and matching unit 222 selects and matches image feature points in the UVW three-channel color space and sends the feature points and matching information to the camera position and orientation calibration module.
As shown in Figure 6, the three-dimensional reconstruction module 24 comprises a sparse point reconstruction unit 241, a dense point reconstruction unit 242, a surface filtering unit 243 and a surface reconstruction unit 244. The sparse point reconstruction unit 241 reconstructs, from the feature point matches and the camera position calibration, the sparse three-dimensional point cloud formed by sparse points of the object. The dense point reconstruction unit 242 expands the sparse point cloud into a dense three-dimensional point cloud covering the entire object surface. The surface filtering unit 243 filters the obtained surface according to gray-level consistency and visibility, finally obtaining the dense three-dimensional point cloud representing the object surface without erroneous patches. The surface reconstruction unit 244 performs surface reconstruction of the object on the basis of the dense point cloud.
As shown in Figure 7, the color texture module 25 comprises a projection division unit 251 and a projection color selection unit 252. The projection division unit 251 divides the surface-reconstructed three-dimensional object into regions, taking each region of relatively consistent color as a projection region, so that the object is divided into several projection regions. The projection color selection unit 252 computes, for each projection region, the angle between its normal direction and each camera direction, and chooses the color from the camera with the smallest angle as the color texture of that region.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A three-dimensional digital reconstruction method based on a single camera, characterized by comprising the following steps:
Step 1: the client obtains a group of two-dimensional pictures of the object to be reconstructed and sends them to the cloud server;
Step 2: the cloud server collects the RGB three-channel color data of the pictures and converts it into UVW three-channel color data;
Step 3: image feature points are selected and matched in the UVW three-channel color space;
Step 4: without knowing the camera intrinsics, camera position calibration is carried out according to the corresponding triangle relation, obtaining the rotation matrix R and translation vector t of the camera position;
Step 5: the three-dimensional reconstruction of the object is carried out according to the matched feature points and the camera rotation matrix R and translation vector t;
Step 6: three-dimensional object region analysis is used to automatically generate the color texture on the object surface.
2. The three-dimensional digital reconstruction method based on a single camera according to claim 1, characterized in that the conversion of the RGB three-channel color data into UVW three-channel color data in step 2 is implemented as follows:
Step 2.1: the nonlinear RGB three-channel color model is converted into the approximately linear XYZ three-channel color model by the formula
[ X ]               [ 0.49   0.31   0.20  ] [ γ(R) ]
[ Y ] = (1/0.177) · [ 0.177  0.812  0.011 ] [ γ(G) ]
[ Z ]               [ 0.00   0.01   0.99  ] [ γ(B) ]
where the gamma correction factor of the color space is γ = 2.0;
Step 2.2: the approximately linear XYZ three-channel color model is converted into the UVW three-channel color model, which has a spatial three-dimensional geometric meaning, by the formula
x⃗ ↦ F(x⃗) = A(ln̂(B·x⃗))
where F is the mapping from the XYZ three-channel color space to the UVW three-channel color space, x⃗ denotes the XYZ three-channel value of a pixel, ln̂ denotes the componentwise natural logarithm, and A and B are the constant matrices
     [ 27.07439   −22.80783   −1.806681 ]
A =  [ −5.646736   −7.722125  12.86503  ]
     [ −4.163133   −4.579428  −4.578049 ]

     [  0.9465229   0.2946927  −0.1313419   ]
B =  [ −0.117917    0.9929960   0.007371554 ]
     [  0.0923046  −0.04645794  0.9946464   ]
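The two-stage conversion of claim 2 is plain matrix arithmetic. The sketch below implements it with the constants from the claim; the sample RGB value and function names are illustrative, and reading ln̂ as a componentwise logarithm is an assumption:

```python
import math

GAMMA = 2.0  # gamma correction factor from the claim

# constant matrices A and B from step 2.2 of the claim
A = [[27.07439, -22.80783, -1.806681],
     [-5.646736, -7.722125, 12.86503],
     [-4.163133, -4.579428, -4.578049]]
B = [[0.9465229, 0.2946927, -0.1313419],
     [-0.117917, 0.9929960, 0.007371554],
     [0.0923046, -0.04645794, 0.9946464]]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def rgb_to_xyz(rgb):
    # Step 2.1: gamma-corrected RGB -> approximately linear XYZ
    lin = [c ** GAMMA for c in rgb]
    m = [[0.49, 0.31, 0.20], [0.177, 0.812, 0.011], [0.00, 0.01, 0.99]]
    return [x / 0.177 for x in mat_vec(m, lin)]

def xyz_to_uvw(xyz):
    # Step 2.2: F(x) = A ln(B x), logarithm applied componentwise
    bx = mat_vec(B, xyz)
    return mat_vec(A, [math.log(c) for c in bx])

uvw = xyz_to_uvw(rgb_to_xyz([0.5, 0.4, 0.3]))  # an arbitrary sample pixel
print(uvw)
```

For RGB values in [0, 1] the intermediate vector B·x stays positive, so the logarithm is well defined; out-of-gamut inputs would need clamping before the log.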
3. The three-dimensional digital reconstruction method based on a single camera according to claim 1, characterized in that the selection and matching of feature points in the UVW three-channel color model in step 3 is implemented as follows:
Step 3.1: the neighborhood gradient is redefined in the UVW three-channel color model: the gradients in the X and Y directions are obtained by subtracting the vectors along the respective directions, giving G_x and G_y respectively, with θ the angle determined by G_x and G_y;
Step 3.2: each feature point is represented by a triangle; two feature points are the same or similar when their triangles are the same or similar.
4. The three-dimensional digital reconstruction method based on a single camera according to claim 1, characterized in that the camera position calibration in step 4, performed according to the corresponding triangle relation without knowing the camera intrinsics, is implemented as follows: one camera takes pictures at two different positions, possibly with different intrinsic states, which is equivalent to two cameras, a first camera and a second camera;
Step 4.1: using the fundamental matrix relation, compute the fundamental matrix F from the coordinates m and m' of corresponding points, where the fundamental matrix relation is
m'^T F m = 0
where F is the fundamental matrix and m and m' are the projections of the target three-dimensional point M at the two camera positions;
Step 4.2: take the first camera coordinate system to coincide with the world coordinate system, and let R and t be the rotation matrix and translation vector of the second camera coordinate system relative to the world coordinate system; the fundamental matrix F can then also be expressed as
F = K'^−T [t]_× R K^−1
where K is the intrinsic parameter matrix of the first camera and K' is that of the second camera, both unknown quantities, and [t]_× is the antisymmetric matrix of t,
          [  0    −t_z    t_y ]
[t]_× =   [  t_z    0    −t_x ]
          [ −t_y   t_x     0  ]
Step 4.3: compute the rotation matrix R and translation vector t from the expression for the fundamental matrix F in step 4.2.
5. The three-dimensional digital reconstruction method based on a single camera according to claim 1, characterized in that the three-dimensional reconstruction of the object in step 5, carried out according to the matched image feature points and the camera rotation matrix R and translation vector t, is implemented as follows:
Step 5.1: from the matched image feature points and the camera position calibration, reconstruct a sparse three-dimensional point cloud forming the skeleton of the object;
Step 5.2: expand the sparse point cloud into a dense three-dimensional point cloud that covers the entire object surface but may contain erroneous patches;
Step 5.3: filter the obtained surface according to gray-level consistency and visibility;
Step 5.4: repeat steps 5.2 and 5.3 a predetermined number of times, obtaining a dense three-dimensional point cloud representing the object surface without erroneous patches;
Step 5.5: on the basis of the dense point cloud of the object, perform surface reconstruction.
6. The three-dimensional digital reconstruction method based on a single camera according to claim 1, characterized in that step 6 is implemented as follows:
Step 6.1: divide the three-dimensional object obtained by surface reconstruction in step 5 into regions, taking each region of relatively consistent color as a projection region, so that the object is divided into several projection regions;
Step 6.2: for each projection region, compute the angle between its normal direction and each camera direction, and choose the color from the camera with the smallest angle as the color texture of that region.
7. A three-dimensional digital reconstruction system based on a single camera, characterized by comprising a client and a cloud server, the cloud server comprising a data acquisition module, a feature point selection and matching module, a camera position and orientation calibration module, a three-dimensional reconstruction module and a color texture module;
the client is used to obtain a group of two-dimensional pictures taken by a single camera and send them to the cloud server;
the data acquisition module receives the two-dimensional pictures sent by the client, collects their RGB three-channel color data, and passes the collected data to the feature point selection and matching module;
the feature point selection and matching module selects and matches image feature points in the UVW three-channel color space and sends the feature points and matching information to the camera position and orientation calibration module;
the camera position and orientation calibration module calibrates the camera position according to the corresponding triangle relation without knowing the camera intrinsics, obtains the rotation matrix R and translation vector t of the camera position, and sends the calibration information to the three-dimensional reconstruction module;
the three-dimensional reconstruction module carries out the three-dimensional reconstruction of the object according to the matched feature points and the camera rotation matrix R and translation vector t, and sends the resulting dense three-dimensional point cloud to the color texture module;
the color texture module uses three-dimensional object region analysis to automatically generate the color texture on the object surface on the basis of the dense point cloud.
8. The three-dimensional digital reconstruction system based on a single camera according to claim 7, characterized in that the feature point selection and matching module comprises a color space model conversion unit and a feature point selection and matching unit;
the color space model conversion unit converts the RGB three-channel color data into UVW three-channel color data and sends the converted data to the feature point selection and matching unit;
the feature point selection and matching unit selects and matches image feature points in the UVW three-channel color space and sends the feature points and matching information to the camera position and orientation calibration module.
9. The three-dimensional digital reconstruction system based on a single camera according to claim 7, characterized in that the three-dimensional reconstruction module comprises a sparse point reconstruction unit, a dense point reconstruction unit, a surface filtering unit and a surface reconstruction unit;
the sparse point reconstruction unit reconstructs, from the feature point matches and the camera position calibration, the sparse three-dimensional point cloud formed by sparse points of the object;
the dense point reconstruction unit expands the sparse point cloud into a dense three-dimensional point cloud covering the entire object surface;
the surface filtering unit filters the obtained surface according to gray-level consistency and visibility, finally obtaining the dense three-dimensional point cloud representing the object surface without erroneous patches;
the surface reconstruction unit performs surface reconstruction of the object on the basis of the dense point cloud.
10. The three-dimensional digital reconstruction system based on a single camera according to claim 7, characterized in that the color texture module comprises a projection division unit and a projection color selection unit;
the projection division unit divides the surface-reconstructed three-dimensional object into regions, taking each region of relatively consistent color as a projection region, so that the object is divided into several projection regions;
the projection color selection unit computes, for each projection region, the angle between its normal direction and each camera direction, and chooses the color from the camera with the smallest angle as the color texture of that region.
CN201410339682.9A 2014-07-16 2014-07-16 Three-dimensional digitalization reconstruction method and system based on single camera Active CN104240289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410339682.9A CN104240289B (en) 2014-07-16 2014-07-16 Three-dimensional digitalization reconstruction method and system based on single camera

Publications (2)

Publication Number Publication Date
CN104240289A true CN104240289A (en) 2014-12-24
CN104240289B CN104240289B (en) 2017-05-03

Family

ID=52228290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410339682.9A Active CN104240289B (en) 2014-07-16 2014-07-16 Three-dimensional digitalization reconstruction method and system based on single camera

Country Status (1)

Country Link
CN (1) CN104240289B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030137508A1 (en) * 2001-12-20 2003-07-24 Mirko Appel Method for three dimensional image reconstruction
KR100792172B1 (en) * 2006-06-30 2008-01-07 중앙대학교 산학협력단 Apparatus and method for estimating fundamental matrix using robust correspondence point
CN101398937A (en) * 2008-10-29 2009-04-01 北京航空航天大学 Three-dimensional reconstruction method based on fringe photograph collection of same scene
CN101515374A (en) * 2008-02-20 2009-08-26 中国科学院自动化研究所 Individualized realistic virtual character modeling method based on images
CN101581569A (en) * 2009-06-17 2009-11-18 北京信息科技大学 Calibrating method of structural parameters of binocular visual sensing system
CN101750029A (en) * 2008-12-10 2010-06-23 中国科学院沈阳自动化研究所 Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN103761768A (en) * 2014-01-22 2014-04-30 杭州匡伦科技有限公司 Stereo matching method of three-dimensional reconstruction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孔晓东 (Kong Xiaodong) et al., "基于极约束和边缘点检测的图像密集匹配" [Dense image matching based on epipolar constraint and edge-point detection], 《计算机工程》 (Computer Engineering) *
朱玲利 (Zhu Lingli) et al., "基于VTK的医学图像三维重建应用研究" [Research on the application of VTK-based three-dimensional reconstruction of medical images], 《洛阳师范学院学报》 (Journal of Luoyang Normal University) *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296797A (en) * 2015-06-10 2017-01-04 西安蒜泥电子科技有限责任公司 A kind of spatial digitizer characteristic point modeling data processing method
CN105303554B (en) * 2015-09-16 2017-11-28 东软集团股份有限公司 The 3D method for reconstructing and device of a kind of image characteristic point
CN105303554A (en) * 2015-09-16 2016-02-03 东软集团股份有限公司 Image feature point 3D reconstruction method and device
CN106651826A (en) * 2015-10-30 2017-05-10 财团法人工业技术研究院 Method for scanning object
CN105737802A (en) * 2016-01-26 2016-07-06 中国科学院水利部成都山地灾害与环境研究所 Accumulated profile space structure information analysis method based on motion sensing photographing technology
CN106408598A (en) * 2016-09-23 2017-02-15 邹建成 Three-dimensional portrait reconstruction printing device based on array lens
CN108961381A (en) * 2017-05-17 2018-12-07 富士通株式会社 Method and apparatus for the 3-D geometric model coloring to object
CN108182726A (en) * 2017-12-29 2018-06-19 努比亚技术有限公司 Three-dimensional rebuilding method, cloud server and computer readable storage medium
CN108510434B (en) * 2018-02-12 2019-08-20 中德(珠海)人工智能研究院有限公司 The method for carrying out three-dimensional modeling by ball curtain camera
CN108510434A (en) * 2018-02-12 2018-09-07 中德(珠海)人工智能研究院有限公司 The method for carrying out three-dimensional modeling by ball curtain camera
CN108961151A (en) * 2018-05-08 2018-12-07 中德(珠海)人工智能研究院有限公司 A method of the three-dimensional large scene that ball curtain camera obtains is changed into sectional view
WO2020006941A1 (en) * 2018-07-03 2020-01-09 上海亦我信息技术有限公司 Method for reconstructing three-dimensional space scene on basis of photography
US11200734B2 (en) 2018-07-03 2021-12-14 Shanghai Yiwo Information Technology Co., Ltd. Method for reconstructing three-dimensional space scene based on photographing
CN111353930A (en) * 2018-12-21 2020-06-30 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
WO2020124976A1 (en) * 2018-12-21 2020-06-25 北京市商汤科技开发有限公司 Image processing method and apparatus, and electronic device and storage medium
CN111353930B (en) * 2018-12-21 2022-05-24 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
CN109961505A (en) * 2019-03-13 2019-07-02 武汉零点视觉数字科技有限公司 A kind of ancient times coffin chamber architecture digital reconstructing system
CN109934786A (en) * 2019-03-14 2019-06-25 河北师范大学 A kind of color calibration method of image, system and terminal device
CN109934786B (en) * 2019-03-14 2023-03-17 河北师范大学 Image color correction method and system and terminal equipment
CN110276768A (en) * 2019-06-28 2019-09-24 京东方科技集团股份有限公司 Image partition method, image segmentation device, image segmentation apparatus and medium
US11367195B2 (en) 2019-06-28 2022-06-21 Beijing Boe Optoelectronics Technology Co., Ltd. Image segmentation method, image segmentation apparatus, image segmentation device
CN111815757A (en) * 2019-06-29 2020-10-23 浙江大学山东工业技术研究院 Three-dimensional reconstruction method for large component based on image sequence
CN110956083A (en) * 2019-10-21 2020-04-03 山东科技大学 Bohai sea ice drift remote sensing detection method based on high-resolution four-signal optical satellite
CN111242990A (en) * 2020-01-06 2020-06-05 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN111242990B (en) * 2020-01-06 2024-01-30 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN113538646A (en) * 2020-04-21 2021-10-22 中移(成都)信息通信科技有限公司 Object digitization system, method, server, client and storage medium
WO2022194145A1 (en) * 2021-03-15 2022-09-22 北京字跳网络技术有限公司 Photographing position determination method and apparatus, device, and medium
CN115086541B (en) * 2021-03-15 2023-12-22 北京字跳网络技术有限公司 Shooting position determining method, device, equipment and medium
CN115086541A (en) * 2021-03-15 2022-09-20 北京字跳网络技术有限公司 Shooting position determining method, device, equipment and medium

Also Published As

Publication number Publication date
CN104240289B (en) 2017-05-03

Similar Documents

Publication Publication Date Title
CN104240289A (en) Three-dimensional digitalization reconstruction method and system based on single camera
Aicardi et al. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach
CN107767442B (en) Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
Yang et al. Image-based 3D scene reconstruction and exploration in augmented reality
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
Zhang et al. A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection
Hoppe et al. Online Feedback for Structure-from-Motion Image Acquisition.
CN110135455A (en) Image matching method, device and computer readable storage medium
CN105931234A (en) Ground three-dimensional laser scanning point cloud and image fusion and registration method
JP7273927B2 (en) Image-based positioning method and system
CN110728671B (en) Dense reconstruction method of texture-free scene based on vision
CN106204731A (en) Multi-view three-dimensional reconstruction method based on a binocular stereo vision system
Maurer et al. Tapping into the Hexagon spy imagery database: A new automated pipeline for geomorphic change detection
CN108648194A (en) Three-dimensional target recognition and pose measurement method and device based on CAD model segmentation
Peña-Villasenín et al. 3-D modeling of historic façades using SFM photogrammetry metric documentation of different building types of a historic center
Mayer et al. Dense 3D reconstruction from wide baseline image sets
Kuschk Large scale urban reconstruction from remote sensing imagery
Lin et al. Vision system for fast 3-D model reconstruction
CN111105451A (en) Driving scene binocular depth estimation method for overcoming occlusion effect
Ruano et al. Aerial video georegistration using terrain models from dense and coherent stereo matching
Remondino 3D reconstruction of static human body with a digital camera
Wong et al. 3D object model reconstruction from image sequence based on photometric consistency in volume space
Zhang et al. Integrating smartphone images and airborne lidar data for complete urban building modelling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant