CN107067462A - Video-stream-based method for reconstructing the three-dimensional drape shape of fabric - Google Patents
Video-stream-based method for reconstructing the three-dimensional drape shape of fabric
- Publication number: CN107067462A
- Application number: CN201710141162.0A
- Authority: CN (China)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2200/08—Indexing scheme for image data processing or generation involving all processing steps from image acquisition to 3D model generation
Abstract
The invention provides a video-stream-based method for reconstructing the three-dimensional drape shape of fabric, characterised by comprising the following steps: the fabric is cut into a circle and placed on the tray of a drape tester; a top plate fitted with a checkerboard is then placed on it, so that the fabric centre lies between the tray and the top plate and the checkerboard centre coincides with the top-plate centre; a camera is moved at constant speed along an upper circular trajectory and a lower circular trajectory while recording; the video stream captured in the second step is converted into an image sequence; feature point detection is performed on the image sequence obtained in the third step using both the SIFT algorithm and the Harris algorithm; a three-dimensional point cloud is obtained; Poisson reconstruction is applied to the point cloud to obtain a reconstructed model, and texture mapping is applied to obtain the three-dimensional drape model. The reconstruction process of the video-stream-based method provided by the present invention is simple and stable, the reconstruction accuracy is high, and the three-dimensional drape shape of the fabric is reflected truthfully and completely.
Description
Technical field
The present invention relates to a method for obtaining a three-dimensional colour model of draped fabric by capturing video of the draped fabric from the side.
Background technology
The drape shape of a fabric refers mainly to the three-dimensional appearance of its draped surface. The prior art reflects the drape shape of fabric only indirectly, through the two-dimensional information of the vertical projection of the drape, which carries considerable limitations.
Content of the invention
The purpose of the present invention is to obtain the three-dimensional appearance of the draped fabric surface from a video stream.
To achieve the above purpose, the technical scheme of the present invention provides a video-stream-based method for reconstructing the three-dimensional drape shape of fabric, characterised by comprising the following steps:
First step: cut the fabric into a circle and place it on the tray of a drape tester, then place on it a top plate fitted with a checkerboard, so that the fabric centre lies between the tray and the top plate and the checkerboard centre coincides with the top-plate centre;
Second step: move the camera at constant speed along an upper circular trajectory and a lower circular trajectory while recording; the upper trajectory lies directly above the fabric, and the lower trajectory is level with the draped hem of the fabric;
Third step: convert the video stream captured in the second step into an image sequence;
Fourth step: perform feature point detection on the image sequence obtained in the third step using both the SIFT algorithm and the Harris algorithm;
Fifth step: after extracting the feature points on all images of the sequence, compute for each image the nearest-neighbour matches of its feature points with those of the image to be matched;
Sixth step: obtain the extrinsic matrix between each pair of images in the sequence, normalise to a common coordinate system, and compute the three-dimensional coordinates of the feature points, thereby obtaining a three-dimensional point cloud;
Seventh step: apply Poisson reconstruction to the point cloud obtained in the sixth step to obtain a reconstructed model;
Eighth step: apply texture mapping to the reconstructed model obtained in the seventh step to obtain the three-dimensional drape model.
Preferably, in the first step, if the fabric is a solid-colour fabric, grid lines are drawn on the fabric surface.
Preferably, in the third step, frames are extracted from the video captured in the second step at a preset sampling density, thereby converting the video stream into an image sequence.
Preferably, in the fourth step, feature point detection on the image sequence using the SIFT algorithm comprises the following steps:
Step 4A.1: for any two-dimensional image I(x, y) in the image sequence, convolve I(x, y) with the Gaussian kernel function G(x, y, σ) to obtain the scale-space images L(x, y, σ) at different scales, where σ is the width parameter of the function and controls its radial range of influence. Build the DoG pyramid of the image, D(x, y, σ) being the difference of two adjacent scale images; then:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ),
where k is the scale factor;
Step 4A.2: assign an orientation parameter to each feature point so that the operator possesses rotation invariance.
The gradient magnitude at pixel (x, y) is m(x, y):
m(x, y) = √((L(x+1, y, σ) − L(x−1, y, σ))² + (L(x, y+1, σ) − L(x, y−1, σ))²);
the gradient direction at pixel (x, y) is θ(x, y):
θ(x, y) = arctan((L(x, y+1, σ) − L(x, y−1, σ)) / (L(x+1, y, σ) − L(x−1, y, σ)));
Step 4A.3: feature point descriptor generation: rotate the coordinate axes to the orientation of the feature point and describe each keypoint with a 4 × 4 array of 16 seed points, so that each feature point yields 128 values, i.e. a 128-dimensional SIFT feature vector.
Preferably, in the fourth step, feature point detection on the image sequence using the Harris algorithm comprises the following steps:
Step 4B.1: a small window centred on pixel (x, y) is shifted by u in the X direction and by v in the Y direction; the analytic expression of the resulting grey-level change is:
E(x, y) = Σ(u,v) w(u, v)·(I(x+u, y+v) − I(x, y))² = Σ(u,v) w(u, v)·(u·Ix + v·Iy + O(u² + v²))²,
where E(x, y) is the grey-level change, w(u, v) is the window function and O(u² + v²) is a higher-order infinitesimal;
Step 4B.2: written as a quadratic form, E(x, y) ≅ (u, v)·M·(u, v)^T, where M is the real symmetric matrix
M = Σ(u,v) w(u, v)·[Ix², Ix·Iy; Ix·Iy, Iy²],
Ix being the gradient of the image I(x, y) in the X direction and Iy the gradient of I(x, y) in the Y direction;
Step 4B.3: the corner response function CRF is defined as:
CRF = det(M) − 0.04·trace²(M),
where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;
the local maxima of the corner response function CRF are the corner points.
Preferably, in the sixth step, the relation between a two-dimensional point p = [u0, v0]^T and the corresponding three-dimensional point Pw = [x, y, z]^T is:
p = K·[R|t]·Pw,
where [R|t] is the extrinsic matrix of the camera, representing the camera pose in the world coordinate system, and K is the intrinsic matrix of the camera, a preset lens parameter; for the same object point Pw, the corresponding points p1 and p2 in any two images satisfy:
p1^T·F·p2 = 0,
where F is the fundamental matrix, F = K^(−T)·[t]×·R·K^(−1).
Preferably, in the sixth step, the obtained three-dimensional coordinates are further optimised with the BA algorithm to obtain the three-dimensional point cloud, expressed as:
min Σi Σj ‖x̃ij − K·[Ri|ti]·Xj‖²,
where x̃ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K·[Ri|ti]·Xj is the reprojected coordinate of the corresponding three-dimensional point.
Preferably, the seventh step comprises the following steps:
Step 7.1: build an octree topology over the three-dimensional point cloud data obtained in the sixth step, adding all the point cloud data to the octree;
Step 7.2: for each node in the octree topology, set the space function Fc:
Fc(q) = F((q − Rc)/rw) · 1/rw³,
where Rc is the centre of the node, rw is the width of the node, F is the base function and q is an arbitrary data point. Writing the coordinates of an arbitrary point of the point cloud as (x, y, z), the function space F(x, y, z) is expressed as:
F(x, y, z) = (A(x)A(y)A(z))³,
where A is the filter function; with t as the variable of A,
A(t) = 1 for |t| < 0.5, and A(t) = 0 otherwise;
Step 7.3: in the case of uniform sampling, and assuming the partition blocks are constant, approximate the gradient of the indicator function by a vector field V; defining V as the approximation to the gradient field of the indicator function, we have:
∇χ̃ ≈ V(q) = Σ(s∈S) Σ(o∈NgbrD(s.p)) α(o,s)·Fo(q)·s.N,
where s is a point of the point cloud, S is the point cloud sample set, o is a node of the octree, NgbrD(s.p) are the eight depth-D nodes nearest to the sample position s.p, α(o,s) are the trilinear interpolation weights, Fo(q) is the node function and s.N is the normal of the point cloud sample;
Step 7.4: after the vector field V is obtained from the equation of step 7.3, solve the Poisson equation
Δχ̃ = ∇·V
by Laplacian matrix iteration, where Δ is the Laplacian operator, χ̃ is the estimated indicator function and ∇· is the divergence (vector differential) operator;
Step 7.5: extract the isosurface from the estimate χ̃ and its average value:
∂M = {q ∈ R³ | χ̃(q) = r}, with r = (1/|S|)·Σ(s∈S) χ̃(s.p),
where ∂M is the isosurface, q is a point cloud datum, χ̃ is the point cloud sample distribution (indicator) function, r is the average of χ̃ over the sample positions, and s.p is a sample position;
Step 7.6: splice the isosurfaces extracted in step 7.5 to obtain the reconstructed model.
Preferably, in the eighth step:
Let the obtained texture image sequence be I = {I1, I2, I3, …, In}, and let the set of projection matrices of the camera relative to the object at the moment each image was taken be P = {P1, P2, P3, …, Pn}; the texture mapping function (u, v) is then defined as:
(u, v) = F(x, y, z, I, P)
Back-projecting each three-dimensional point into the corresponding two-dimensional image gives:
y = Pi·Y,
where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and Pi is the projection matrix of the viewpoint of that image.
Preferably, the method further comprises, after the eighth step:
Ninth step: transform the three-dimensional drape model from the XCYCZC coordinate system to the XDYDZD coordinate system, where XCYCZC is the coordinate system whose origin is the circle centre of the top plate of the drape model, and XDYDZD is the coordinate system whose origin is a point selected by the user.
The reconstruction process of the video-stream-based method for reconstructing the three-dimensional drape shape of fabric provided by the present invention is simple and stable, the reconstruction accuracy is high, and the three-dimensional drape shape of the fabric is reflected truthfully and completely.
Brief description of the drawings
Fig. 1 is a schematic diagram of the drape tester used in the present invention;
Fig. 2 is a schematic diagram of the checkerboard;
Fig. 3 is a schematic diagram of the video acquisition of the present invention;
Fig. 4 shows the combined SIFT and Harris feature point detection;
Fig. 5 shows the pinhole camera model;
Fig. 6 is a schematic diagram of the coordinate conversion.
Embodiments
The present invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments are merely illustrative of the present invention and do not limit its scope. In addition, it should be understood that, after reading the teachings of the present invention, those skilled in the art can make various changes or modifications to the present invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
The present invention relates to a video-stream-based method for reconstructing the three-dimensional drape shape of fabric, comprising the following steps:
First step: cut the fabric into a circle and place it on the tray 2 of the drape tester 1 shown in Fig. 1, then place on it the top plate fitted with the checkerboard 3 shown in Fig. 2, so that the fabric centre lies between the tray 2 and the top plate and the centre of the checkerboard 3 coincides with the top-plate centre.
The tray 2 and the top plate are 12 cm in diameter, and the top-plate surface carries a textured pattern. The checkerboard 3 measures 6 cm × 4 cm, with a grid side length L of 1 cm.
The fabric is cut into a circular specimen 24 cm in diameter. For a solid-colour fabric, grid lines with a spacing of 3 cm must be drawn on the fabric surface with a marker pen of a colour different from that of the fabric.
Second step: as shown in Fig. 3, move the camera at constant speed along the upper circular trajectory and the lower circular trajectory while recording; the upper trajectory lies directly above the fabric, the lower trajectory is level with the draped hem of the fabric, and the recording along each trajectory lasts about 5 seconds.
Third step: extract frames from the captured video at a sampling density of 4 frames/second, thereby converting the video stream captured in the second step into an image sequence.
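A minimal sketch of this frame-extraction step, assuming OpenCV is available; only the 4 frames/second sampling density comes from the text, while the function name and structure are illustrative:

```python
import cv2

def video_to_frames(video_path, fps_out=4.0):
    """Sample a video at a fixed density (here 4 frames/second) into an image sequence."""
    cap = cv2.VideoCapture(video_path)
    fps_in = cap.get(cv2.CAP_PROP_FPS)
    step = max(int(round(fps_in / fps_out)), 1)  # keep every `step`-th frame
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames
```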
Fourth step: perform feature point detection on the image sequence obtained in the third step using both the SIFT algorithm and the Harris algorithm.
Feature point detection on the image sequence using the SIFT algorithm comprises the following steps:
Step 4A.1: for any two-dimensional image I(x, y) in the image sequence, convolve I(x, y) with the Gaussian kernel function G(x, y, σ) to obtain the scale-space images L(x, y, σ) at different scales, where σ is the width parameter of the function and controls its radial range of influence. Build the DoG (Difference of Gaussians) pyramid of the image, D(x, y, σ) being the difference of two adjacent scale images; then:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ),
where k is the scale factor;
Step 4A.2: assign an orientation parameter to each feature point so that the operator possesses rotation invariance.
The gradient magnitude at pixel (x, y) is m(x, y):
m(x, y) = √((L(x+1, y, σ) − L(x−1, y, σ))² + (L(x, y+1, σ) − L(x, y−1, σ))²);
the gradient direction at pixel (x, y) is θ(x, y):
θ(x, y) = arctan((L(x, y+1, σ) − L(x, y−1, σ)) / (L(x+1, y, σ) − L(x−1, y, σ)));
Step 4A.3: feature point descriptor generation: rotate the coordinate axes to the orientation of the feature point and describe each keypoint with a 4 × 4 array of 16 seed points, so that each feature point yields 128 values, i.e. a 128-dimensional SIFT feature vector.
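OpenCV's SIFT implementation covers steps 4A.1–4A.3 (DoG pyramid, orientation assignment, 128-dimensional descriptors); a minimal sketch, with the function name being illustrative:

```python
import cv2

def detect_sift(image_bgr):
    """Detect SIFT keypoints and their 128-dimensional descriptors on one frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()                       # builds the DoG pyramid internally
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors                  # descriptors: N x 128 array
```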
Feature point detection on the image sequence using the Harris algorithm comprises the following steps:
Step 4B.1: a small window centred on pixel (x, y) is shifted by u in the X direction and by v in the Y direction; the analytic expression of the resulting grey-level change is:
E(x, y) = Σ(u,v) w(u, v)·(I(x+u, y+v) − I(x, y))² = Σ(u,v) w(u, v)·(u·Ix + v·Iy + O(u² + v²))²,
where E(x, y) is the grey-level change, w(u, v) is the window function and O(u² + v²) is a higher-order infinitesimal;
Step 4B.2: written as a quadratic form, E(x, y) ≅ (u, v)·M·(u, v)^T, where M is the real symmetric matrix
M = Σ(u,v) w(u, v)·[Ix², Ix·Iy; Ix·Iy, Iy²],
Ix being the gradient of the image I(x, y) in the X direction and Iy the gradient of I(x, y) in the Y direction;
Step 4B.3: the corner response function CRF is defined as:
CRF = det(M) − 0.04·trace²(M),
where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;
the local maxima of the corner response function CRF are the corner points.
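A corresponding sketch for the Harris detector, using OpenCV's cornerHarris with the k = 0.04 from the CRF above; the blockSize, ksize and threshold values are illustrative assumptions:

```python
import cv2
import numpy as np

def detect_harris(image_bgr, k=0.04, thresh_ratio=0.01):
    """Harris corners: CRF = det(M) - k * trace(M)^2, keeping strong local responses."""
    gray = np.float32(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY))
    crf = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)  # k = 0.04 as in the text
    ys, xs = np.where(crf > thresh_ratio * crf.max())        # threshold the response map
    return np.stack([xs, ys], axis=1)                        # (x, y) corner coordinates
```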
As shown in Fig. 4, the star points are SIFT feature points and the circular dots are Harris feature points.
Fifth step: after extracting the feature points on all images of the sequence, compute for each image the nearest-neighbour matches of its feature points with those of the image to be matched.
The distance Uab between two feature vectors a = {a1, a2, …, an} and b = {b1, b2, …, bn} is expressed as:
Uab = √(Σ(i=1..n) (ai − bi)²).
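A minimal nearest-neighbour matching sketch over these descriptor distances; note that the ratio test against the second-best neighbour is a common robustness addition and is not stated in the text:

```python
import numpy as np

def match_nearest(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching on the Euclidean distance U_ab between descriptors."""
    matches = []
    for i, a in enumerate(desc_a):
        d = np.linalg.norm(desc_b - a, axis=1)   # U_ab to every candidate descriptor
        j1, j2 = np.argsort(d)[:2]               # best and second-best neighbour
        if d[j1] < ratio * d[j2]:                # accept only unambiguous matches
            matches.append((i, j1))
    return matches
```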
Sixth step: obtain the extrinsic matrix between each pair of images in the sequence, normalise to a common coordinate system, and compute the three-dimensional coordinates of the feature points, thereby obtaining a three-dimensional point cloud.
As shown in Fig. 5, according to the pinhole camera model, the relation between a two-dimensional point p = [u0, v0]^T on a photograph and the corresponding three-dimensional point Pw = [x, y, z]^T is:
p = K·[R|t]·Pw,
where [R|t] is the extrinsic matrix of the camera, representing the camera pose in the world coordinate system, and K is the intrinsic matrix of the camera, a preset lens parameter; for the same object point Pw, the corresponding points p1 and p2 in any two images satisfy:
p1^T·F·p2 = 0,
where F is the fundamental matrix, F = K^(−T)·[t]×·R·K^(−1).
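A sketch of recovering the pairwise extrinsics and triangulating matched feature points, assuming OpenCV and a known intrinsic matrix K; it goes through the essential matrix E = [t]×·R, which is equivalent to the fundamental-matrix relation above when K is known:

```python
import cv2
import numpy as np

def two_view_cloud(pts1, pts2, K):
    """Recover [R|t] between two views and triangulate matched points into 3D."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])    # first camera at the origin
    P2 = K @ np.hstack([R, t])                           # second camera's K[R|t]
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4 x N result
    return (X_h[:3] / X_h[3]).T                          # Euclidean 3D points
```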
The obtained three-dimensional coordinates are further optimised with the BA (bundle adjustment) algorithm to obtain the three-dimensional point cloud, expressed as:
min Σi Σj ‖x̃ij − K·[Ri|ti]·Xj‖²,
where x̃ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K·[Ri|ti]·Xj is the reprojected coordinate of the corresponding three-dimensional point.
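A sketch of the reprojection residuals that bundle adjustment minimises; in practice these would be fed to a nonlinear least-squares solver (e.g. scipy.optimize.least_squares) over a flattened vector of poses and points. The data layout (obs as a dict of observations) is an illustrative assumption:

```python
import numpy as np

def reprojection_residuals(X, K, R_list, t_list, obs):
    """Residuals between observed pixels x_ij and reprojections K[R_i|t_i]X_j.

    obs maps (image index i, point index j) -> observed pixel (u, v).
    """
    res = []
    for (i, j), uv in obs.items():
        p = K @ (R_list[i] @ X[j] + t_list[i])  # project point j into image i
        res.extend(p[:2] / p[2] - uv)           # pixel-space reprojection error
    return np.asarray(res)
```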
Seventh step: apply Poisson reconstruction to the three-dimensional point cloud obtained in the sixth step to obtain the reconstructed model, comprising the following steps:
Step 7.1: build an octree topology over the three-dimensional point cloud data obtained in the sixth step, adding all the point cloud data to the octree;
Step 7.2: for each node in the octree topology, set the space function Fc:
Fc(q) = F((q − Rc)/rw) · 1/rw³,
where Rc is the centre of the node, rw is the width of the node, F is the base function and q is an arbitrary data point. Writing the coordinates of an arbitrary point of the point cloud as (x, y, z), the function space F(x, y, z) is expressed as:
F(x, y, z) = (A(x)A(y)A(z))³,
where A is the filter function; with t as the variable of A,
A(t) = 1 for |t| < 0.5, and A(t) = 0 otherwise;
Step 7.3: in the case of uniform sampling, and assuming the partition blocks are constant, approximate the gradient of the indicator function by a vector field V; defining V as the approximation to the gradient field of the indicator function, we have:
∇χ̃ ≈ V(q) = Σ(s∈S) Σ(o∈NgbrD(s.p)) α(o,s)·Fo(q)·s.N,
where s is a point of the point cloud, S is the point cloud sample set, o is a node of the octree, NgbrD(s.p) are the eight depth-D nodes nearest to the sample position s.p, α(o,s) are the trilinear interpolation weights, Fo(q) is the node function and s.N is the normal of the point cloud sample;
Step 7.4: after the vector field V is obtained from the equation of step 7.3, solve the Poisson equation
Δχ̃ = ∇·V
by Laplacian matrix iteration, where Δ is the Laplacian operator, χ̃ is the estimated indicator function and ∇· is the divergence (vector differential) operator;
Step 7.5: extract the isosurface from the estimate χ̃ and its average value:
∂M = {q ∈ R³ | χ̃(q) = r}, with r = (1/|S|)·Σ(s∈S) χ̃(s.p),
where ∂M is the isosurface, q is a point cloud datum, χ̃ is the point cloud sample distribution (indicator) function, r is the average of χ̃ over the sample positions, and s.p is a sample position;
Step 7.6: splice the isosurfaces extracted in step 7.5 to obtain the reconstructed model.
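A library-based sketch of the seventh step, assuming Open3D, whose Poisson reconstruction implements the same octree/indicator-function algorithm outlined in steps 7.1–7.6; the octree depth of 8 is an illustrative choice:

```python
import open3d as o3d

def poisson_mesh(points, normals, depth=8):
    """Poisson surface reconstruction over an octree of the given depth."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)    # N x 3 sample positions s.p
    pcd.normals = o3d.utility.Vector3dVector(normals)  # oriented normals s.N
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh
```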
Eighth step: apply texture mapping to the reconstructed model obtained in the seventh step to obtain the three-dimensional drape model. In this step:
Let the obtained texture image sequence be I = {I1, I2, I3, …, In}, and let the set of projection matrices of the camera relative to the object at the moment each image was taken be P = {P1, P2, P3, …, Pn}; the texture mapping function (u, v) is then defined as:
(u, v) = F(x, y, z, I, P)
Back-projecting each three-dimensional point into the corresponding two-dimensional image gives:
y = Pi·Y,
where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and Pi is the projection matrix of the viewpoint of that image.
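A minimal sketch of the back-projection y = Pi·Y used to sample texture colours; bounds checking and view selection (deciding which image textures which part of the mesh) are omitted:

```python
import numpy as np

def back_project(points_3d, P_i, image_i):
    """Map 3D points through projection matrix P_i and sample texture colours."""
    X_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous coordinates
    y = (P_i @ X_h.T).T                                          # y = P_i Y for each point
    uv = (y[:, :2] / y[:, 2:3]).astype(int)                      # pixel coordinates
    return image_i[uv[:, 1], uv[:, 0]]                           # colour per 3D point
```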
Ninth step: transform the three-dimensional drape model from the XCYCZC coordinate system to the XDYDZD coordinate system, where XCYCZC is the coordinate system whose origin is the circle centre of the top plate of the drape model, and XDYDZD is the coordinate system whose origin is a point selected by the user.
With reference to Fig. 6, the ninth step comprises the following steps:
Step 9.1: from the checkerboard corner coordinates in the images, index the corresponding coordinates in the three-dimensional point cloud model;
Step 9.2: from the three-dimensional corner coordinates obtained in step 9.1, compute the plane normal vector;
Step 9.3: translate point O1 to the coordinate origin, obtaining the transformation matrix T1;
Step 9.4: rotate O1P1 clockwise about the YD axis by θy so that it coincides with the YDO0ZD plane, obtaining the transformation matrix T2;
Step 9.5: rotate O1P1 clockwise about the XD axis by θx so that it coincides with the ZD axis, obtaining the transformation matrix T3;
Step 9.6: the transition matrix from the calibration coordinate system to the drape coordinate system is T = T1 × T2 × T3;
Step 9.7: multiply the point coordinates of the three-dimensional drape model by 1/l, where l is the distance between adjacent three-dimensional corner points.
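A sketch of steps 9.3–9.7, assuming numpy and a column-vector convention (so the matrices compose as T3·T2·T1, which corresponds to the patent's T = T1 × T2 × T3 in the row-vector convention); the rotation sign conventions are assumptions:

```python
import numpy as np

def to_drape_frame(points, O1, theta_y, theta_x, l):
    """Translate O1 to the origin, rotate about Y_D then X_D, and rescale by 1/l."""
    T1 = np.eye(4); T1[:3, 3] = -O1                    # step 9.3: move O1 to the origin
    cy, sy = np.cos(-theta_y), np.sin(-theta_y)        # step 9.4: clockwise about Y_D
    T2 = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    cx, sx = np.cos(-theta_x), np.sin(-theta_x)        # step 9.5: clockwise about X_D
    T3 = np.array([[1, 0, 0, 0], [0, cx, -sx, 0], [0, sx, cx, 0], [0, 0, 0, 1]])
    T = T3 @ T2 @ T1                                   # step 9.6: combined transition
    X_h = np.hstack([points, np.ones((len(points), 1))])
    return (T @ X_h.T).T[:, :3] / l                    # step 9.7: metric scale via 1/l
```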
Claims (10)
1. A video-stream-based method for reconstructing the three-dimensional drape shape of fabric, characterised by comprising the following steps:
First step: cut the fabric into a circle and place it on the tray (2) of a drape tester (1), then place on it a top plate fitted with a checkerboard (3), so that the fabric centre lies between the tray (2) and the top plate and the centre of the checkerboard (3) coincides with the top-plate centre;
Second step: move the camera at constant speed along an upper circular trajectory and a lower circular trajectory while recording; the upper trajectory lies directly above the fabric, and the lower trajectory is level with the draped hem of the fabric;
Third step: convert the video stream captured in the second step into an image sequence;
Fourth step: perform feature point detection on the image sequence obtained in the third step using both the SIFT algorithm and the Harris algorithm;
Fifth step: after extracting the feature points on all images of the sequence, compute for each image the nearest-neighbour matches of its feature points with those of the image to be matched;
Sixth step: obtain the extrinsic matrix between each pair of images in the sequence, normalise to a common coordinate system, and compute the three-dimensional coordinates of the feature points, thereby obtaining a three-dimensional point cloud;
Seventh step: apply Poisson reconstruction to the point cloud obtained in the sixth step to obtain a reconstructed model;
Eighth step: apply texture mapping to the reconstructed model obtained in the seventh step to obtain the three-dimensional drape model.
2. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that, in the first step, if the fabric is a solid-colour fabric, grid lines are drawn on the fabric surface.
3. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that, in the third step, frames are extracted from the video captured in the second step at a preset sampling density, thereby converting the video stream into an image sequence.
4. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that, in the fourth step, feature point detection on the image sequence using the SIFT algorithm comprises the following steps:
Step 4A.1: for any two-dimensional image I(x, y) in the image sequence, convolve I(x, y) with the Gaussian kernel function G(x, y, σ) to obtain the scale-space images L(x, y, σ) at different scales, where σ is the width parameter of the function and controls its radial range of influence; build the DoG pyramid of the image, D(x, y, σ) being the difference of two adjacent scale images; then:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ),
where k is the scale factor;
Step 4A.2: assign an orientation parameter to each feature point so that the operator possesses rotation invariance.
The gradient magnitude at pixel (x, y) is m(x, y):
m(x, y) = √((L(x+1, y, σ) − L(x−1, y, σ))² + (L(x, y+1, σ) − L(x, y−1, σ))²);
the gradient direction at pixel (x, y) is θ(x, y):
θ(x, y) = arctan((L(x, y+1, σ) − L(x, y−1, σ)) / (L(x+1, y, σ) − L(x−1, y, σ)));
Step 4A.3: feature point descriptor generation: rotate the coordinate axes to the orientation of the feature point and describe each keypoint with a 4 × 4 array of 16 seed points, so that each feature point yields 128 values, i.e. a 128-dimensional SIFT feature vector.
5. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 4, characterised in that, in the fourth step, feature point detection on the image sequence using the Harris algorithm comprises the following steps:
Step 4B.1: a small window centred on pixel (x, y) is shifted by u in the X direction and by v in the Y direction; the analytic expression of the resulting grey-level change is:
E(x, y) = Σ(u,v) w(u, v)·(I(x+u, y+v) − I(x, y))² = Σ(u,v) w(u, v)·(u·Ix + v·Iy + O(u² + v²))²,
where E(x, y) is the grey-level change, w(u, v) is the window function and O(u² + v²) is a higher-order infinitesimal;
Step 4B.2: written as a quadratic form, E(x, y) ≅ (u, v)·M·(u, v)^T, where M is the real symmetric matrix
M = Σ(u,v) w(u, v)·[Ix², Ix·Iy; Ix·Iy, Iy²],
Ix being the gradient of the image I(x, y) in the X direction and Iy the gradient of I(x, y) in the Y direction;
Step 4B.3: the corner response function CRF is defined as:
CRF = det(M) − 0.04·trace²(M),
where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;
the local maxima of the corner response function CRF are the corner points.
6. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that, in the sixth step, the relation between a two-dimensional point p = [u0, v0]^T and the corresponding three-dimensional point Pw = [x, y, z]^T is:
p = K·[R|t]·Pw,
where [R|t] is the extrinsic matrix of the camera, representing the camera pose in the world coordinate system, and K is the intrinsic matrix of the camera, a preset lens parameter; for the same object point Pw, the corresponding points p1 and p2 in any two images satisfy:
p1^T·F·p2 = 0,
where F is the fundamental matrix, F = K^(−T)·[t]×·R·K^(−1).
7. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that, in the sixth step, the obtained three-dimensional coordinates are further optimised with the BA algorithm to obtain the three-dimensional point cloud, expressed as:
min Σi Σj ‖x̃ij − K·[Ri|ti]·Xj‖²,
where x̃ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K·[Ri|ti]·Xj is the reprojected coordinate of the corresponding three-dimensional point.
8. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that the seventh step comprises the following steps:
Step 7.1: build an octree topology over the three-dimensional point cloud data obtained in the sixth step, adding all the point cloud data to the octree;
Step 7.2: for each node in the octree topology, set the space function Fc:
Fc(q) = F((q − Rc)/rw) · 1/rw³,
where Rc is the centre of the node, rw is the width of the node, F is the base function and q is an arbitrary data point; writing the coordinates of an arbitrary point of the point cloud as (x, y, z), the function space F(x, y, z) is expressed as:
F(x, y, z) = (A(x)A(y)A(z))³,
where A is the filter function; with t as the variable of A,
A(t) = 1 for |t| < 0.5, and A(t) = 0 otherwise;
Step 7.3: in the case of uniform sampling, and assuming the partition blocks are constant, approximate the gradient of the indicator function by a vector field V; defining V as the approximation to the gradient field of the indicator function, we have:
∇χ̃ ≈ V(q) = Σ(s∈S) Σ(o∈NgbrD(s.p)) α(o,s)·Fo(q)·s.N,
where s is a point of the point cloud, S is the point cloud sample set, o is a node of the octree, NgbrD(s.p) are the eight depth-D nodes nearest to the sample position s.p, α(o,s) are the trilinear interpolation weights, Fo(q) is the node function and s.N is the normal of the point cloud sample;
Step 7.4: after the vector field V is obtained from the equation of step 7.3, solve the Poisson equation
Δχ̃ = ∇·V
by Laplacian matrix iteration, where Δ is the Laplacian operator, χ̃ is the estimated indicator function and ∇· is the divergence (vector differential) operator;
Step 7.5: extract the isosurface from the estimate χ̃ and its average value:
∂M = {q ∈ R³ | χ̃(q) = r}, with r = (1/|S|)·Σ(s∈S) χ̃(s.p),
where ∂M is the isosurface, q is a point cloud datum, χ̃ is the point cloud sample distribution (indicator) function, r is the average of χ̃ over the sample positions, and s.p is a sample position;
Step 7.6: splice the isosurfaces extracted in step 7.5 to obtain the reconstructed model.
9. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised in that, in the eighth step:
Let the obtained texture image sequence be I = {I1, I2, I3, …, In}, and let the set of projection matrices of the camera relative to the object at the moment each image was taken be P = {P1, P2, P3, …, Pn}; the texture mapping function (u, v) is then defined as:
(u, v) = F(x, y, z, I, P)
Back-projecting each three-dimensional point into the corresponding two-dimensional image gives:
y = Pi·Y,
where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and Pi is the projection matrix of the viewpoint of that image.
10. The video-stream-based method for reconstructing the three-dimensional drape shape of fabric as claimed in claim 1, characterised by further comprising, after the eighth step:
Ninth step: transform the three-dimensional drape model from the XCYCZC coordinate system to the XDYDZD coordinate system, where XCYCZC is the coordinate system whose origin is the circle centre of the top plate of the drape model, and XDYDZD is the coordinate system whose origin is a point selected by the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710141162.0A | 2017-03-10 | 2017-03-10 | Video-stream-based method for reconstructing the three-dimensional drape shape of fabric
Publications (1)
Publication Number | Publication Date |
---|---|
CN107067462A true CN107067462A (en) | 2017-08-18 |
Family
ID=59622371
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021062645A1 (en) * | 2019-09-30 | 2021-04-08 | Zte Corporation | File format for point cloud data |
TWI801193B (en) * | 2022-04-01 | 2023-05-01 | 適着三維科技股份有限公司 | Swiveling table system and method thereof |
CN117372608A (en) * | 2023-09-14 | 2024-01-09 | 成都飞机工业(集团)有限责任公司 | Three-dimensional point cloud texture mapping method, system, equipment and medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101587082A (en) * | 2009-06-24 | 2009-11-25 | 天津工业大学 | Quick three-dimensional reconstructing method applied for detecting fabric defect |
CN102867327A (en) * | 2012-09-05 | 2013-01-09 | 浙江理工大学 | Textile flexible movement reestablishing method based on neural network system |
CN103454276A (en) * | 2013-06-30 | 2013-12-18 | 上海工程技术大学 | Textile form and style evaluation method based on dynamic sequence image |
CN105279789A (en) * | 2015-11-18 | 2016-01-27 | 中国兵器工业计算机应用技术研究所 | A three-dimensional reconstruction method based on image sequences |
Non-Patent Citations (5)
Title
---
HARRIS C et al.: "A combined corner and edge detector", 《ALVEY VISION CONFERENCE》
侯建辉 et al.: "自适应的 Harris 棋盘格角点检测算法", 《计算机工程与设计》
刘为宏: "点云数据曲面重建算法及研究", 《中国优秀硕士学位论文全文数据库信息科技辑》
胡堃: "基于图像序列的织物悬垂形态重建及测量", 《中国优秀硕士学位论文全文数据库信息科技辑》
胡堃 et al.: "基于照片序列的织物悬垂形态重建及测量", 《东华大学学报(自然科学版)》
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170818