CN107067462A - Fabric three-dimensional draping shape method for reconstructing based on video flowing - Google Patents
- Publication number
- CN107067462A (application CN201710141162.0A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- point
- fabric
- image
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- fabric (title, claims, abstract, description: 48)
- method (title, claims, abstract, description: 22)
- detection (claims, abstract, description: 11)
- mapping (claims, abstract, description: 7)
- matrix (claims, description: 34)
- sampling (claims, description: 10)
- change (claims, description: 6)
- distribution function (claims, description: 5)
- extraction (claims, description: 3)
- installation (claims: 1)
- process (abstract, description: 2)
- transformation (description: 5)
- diagram (description: 4)
- response function (description: 4)
- marker (description: 1)
- modification (description: 1)
- suspension (description: 1)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides a video-stream-based method for reconstructing the three-dimensional drape shape of a fabric, characterized by comprising the following steps: cutting the fabric into a circle and placing it on the tray of a drape meter, then placing a top plate on which a checkerboard is mounted, so that the fabric is centered between the tray and the top plate and the checkerboard center coincides with the top-plate center; moving a camera at constant speed along an upper circular track and a lower circular track while recording; converting the captured video stream into an image sequence; detecting feature points in the image sequence using the SIFT algorithm and the Harris algorithm, respectively; obtaining a three-dimensional point cloud; performing Poisson reconstruction on the point cloud to obtain a reconstructed model; and applying texture mapping to obtain the three-dimensional drape model. The reconstruction process of the method is simple and stable, the reconstruction results are accurate, and they faithfully and completely reflect the three-dimensional drape shape of the fabric.
Description
Technical Field
The invention relates to a method for obtaining a three-dimensional color model of a draped fabric by capturing video of the fabric drape.
Background Art
The drape shape of a fabric refers chiefly to the three-dimensional appearance of its draped surface. Existing techniques mainly reflect the drape shape indirectly through two-dimensional information of the drape projection, which imposes considerable limitations.
Summary of the Invention
The object of the invention is to obtain the three-dimensional appearance of the draped fabric surface from a video stream.
To achieve the above object, the technical solution of the invention provides a video-stream-based method for reconstructing the three-dimensional drape shape of a fabric, characterized by comprising the following steps:
Step 1: cut the fabric into a circle and place it on the tray of a drape meter; then place the top plate, on which a checkerboard is mounted, so that the fabric is centered between the tray and the top plate and the checkerboard center coincides with the top-plate center;
Step 2: move the camera at constant speed along an upper circular track and a lower circular track while recording; the upper track lies directly above the fabric and the lower track is level with the hanging bottom edge of the fabric;
Step 3: convert the video stream captured in Step 2 into an image sequence;
Step 4: detect feature points in the image sequence obtained in Step 3 using the SIFT algorithm and the Harris algorithm, respectively;
Step 5: after extracting the feature points of all images in the sequence, compute the nearest-neighbor matches between the feature points of each image and those of the image to be matched;
Step 6: compute the extrinsic matrix between each pair of images in the sequence and normalize them to a common coordinate system to obtain the three-dimensional coordinates of the feature points, yielding a three-dimensional point cloud;
Step 7: perform Poisson reconstruction on the point cloud obtained in Step 6 to obtain a reconstructed model;
Step 8: perform texture mapping on the reconstructed model obtained in Step 7 to obtain the three-dimensional drape model.
Preferably, in Step 1, if the fabric is a solid color, grid lines are drawn on its surface.
Preferably, in Step 3, screenshots are extracted from the video captured in Step 2 at a preset sampling density, thereby converting the video stream into an image sequence.
Preferably, in Step 4, feature point detection on the image sequence with the SIFT algorithm comprises the following steps:
Step 4A.1: for any two-dimensional image I(x,y) in the sequence, convolve I(x,y) with the Gaussian kernel G(x,y,σ) to obtain the scale-space images L(x,y,σ) at different scales, where σ is the width parameter of the kernel and controls its radial extent. Build the DoG pyramid of the image; D(x,y,σ), the difference of two adjacent scale images, satisfies:
D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ), where k is the scale factor;
Step 4A.2: assign an orientation parameter to each feature point so that the operator is rotation-invariant:
the gradient magnitude m(x,y) at pixel (x,y) is m(x,y) = √[(L(x+1,y,σ) − L(x−1,y,σ))² + (L(x,y+1,σ) − L(x,y−1,σ))²];
the gradient direction θ(x,y) at pixel (x,y) is θ(x,y) = arctan[(L(x,y+1,σ) − L(x,y−1,σ)) / (L(x+1,y,σ) − L(x−1,y,σ))];
Step 4A.3: descriptor generation: rotate the coordinate axes to the orientation of the feature point and describe each keypoint with a 4×4 grid of 16 seed points, each accumulating gradients into 8 orientation bins; this yields 128 values per feature point, i.e., a 128-dimensional SIFT feature vector.
Preferably, in Step 4, feature point detection on the image sequence with the Harris algorithm comprises the following steps:
Step 4B.1: a small window centered at pixel (x,y) is shifted by u in the X direction and by v in the Y direction; the resulting grayscale change is
E(x,y) = Σ_w [I(x+u, y+v) − I(x,y)]² = Σ_w [u·Ix + v·Iy + o(√(u²+v²))]², where E(x,y) is the grayscale change, the sum runs over the window w, and o is the infinitesimal (little-o) operator;
Step 4B.2: writing E(x,y) as a quadratic form gives
E(x,y) ≈ [u, v]·M·[u, v]^T, where M = Σ_w [Ix², Ix·Iy; Ix·Iy, Iy²] is a real symmetric matrix, Ix is the gradient of the image I(x,y) in the X direction, and Iy is its gradient in the Y direction;
Step 4B.3: define the corner response function CRF as
CRF = det(M) − 0.04·trace²(M), where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;
points at local maxima of the corner response function CRF are the corners.
Preferably, in Step 6, the relationship between a two-dimensional point p = [u0, v0]^T and its corresponding three-dimensional point Pw = [x, y, z]^T is
p = K[R|t]Pw, where [R|t] is the extrinsic matrix of the camera, giving its pose in the world coordinate system, and K is the intrinsic matrix of the camera, a fixed property of the lens. For the same object point Pw, its corresponding points p1 and p2 in any two images satisfy
p1^T F p2 = 0, where F is the fundamental matrix, F = K^(−T)·[t]_×·R·K^(−1), with [t]_× the skew-symmetric matrix of t.
Preferably, in Step 6, the obtained three-dimensional coordinates are further refined with the BA algorithm to produce the point cloud, i.e., minimize
Σ_i Σ_j ‖x_ij − K[Ri|ti]Xj‖² over all camera poses and points, where x_ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K[Ri|ti]Xj is the reprojected coordinate of the corresponding three-dimensional point Xj.
Preferably, Step 7 comprises the following steps:
Step 7.1: build an octree topology over the three-dimensional point cloud data from Step 6, inserting all scattered points into the octree;
Step 7.2: for each node of the octree topology, set the spatial function Fc as
Fc(q) = F((q − Rc)/rw)·(1/rw³), where Rc is the node center, rw is the node width, F is the basis function, and q is any data point. Writing the coordinates of any point of the cloud as (x, y, z), the function space F(x, y, z) is expressed as
F(x, y, z) = (A(x)A(y)A(z))³, where A is the filter function; with t as its variable, A(t) = 1 for |t| < 0.5 and A(t) = 0 otherwise;
Step 7.3: under uniform sampling, assuming the partition blocks are constant, approximate the gradient of the indicator function by a vector field V; the approximation to the gradient field of the indicator function is defined as
V(q) = Σ_{s∈S} Σ_{o∈Ngbr_D(s)} α_{o,s}·F_o(q)·s.N, where s is a point of the cloud, S is the set of point-cloud samples, o is a node of the octree, Ngbr_D(s) are the eight depth-D nodes nearest to s.p, s.p is the point-cloud sample position, α_{o,s} are the trilinear interpolation weights, F_o(q) is the node function, and s.N is the normal of the point-cloud sample;
Step 7.4: having obtained the vector field V from the equation in Step 7.3, solve the Poisson equation Δχ = ∇·V by Laplacian-matrix iteration, where χ is the position estimate at the sampling points, Δ is the Laplacian, and ∇ is the vector differential operator;
Step 7.5: extract the isosurface using the mean of the position estimates χ at the sampling points:
∂M = {q | χ(q) = r}, with r = (1/|S|)·Σ_{s∈S} χ(s.p), where ∂M is the isosurface, q is a point of the cloud data, χ is the point-cloud sample distribution function, r is the mean of χ over the sample distribution coordinates, and s.p are the sample distribution coordinates;
Step 7.6: stitch the isosurfaces extracted in Step 7.5 to obtain the reconstructed model.
Preferably, in Step 8:
let the acquired texture image sequence be I = {I1, I2, I3, ..., In}, and let the set of projection matrices of the camera relative to the object when each image was acquired be P = {P1, P2, P3, ..., Pn}; the texture mapping function (u, v) is then defined as
(u, v) = F(x, y, z, I, P).
Back-projecting each three-dimensional point into its corresponding two-dimensional image gives
y = Pi·Y, where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and Pi is the projection matrix of the viewpoint of that image.
Preferably, after Step 8, the method further comprises:
Step 9: transform the three-dimensional drape model from the X_C Y_C Z_C coordinate system, whose origin is the center of the top plate of the drape model, to the X_D Y_D Z_D coordinate system, whose origin is a point selected by the user.
The reconstruction process of the video-stream-based method provided by the invention is simple and stable, the reconstruction results are accurate, and they faithfully and completely reflect the three-dimensional drape shape of the fabric.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the drape meter used in the invention;
Figure 2 is a schematic diagram of the checkerboard;
Figure 3 is a schematic diagram of the video acquisition of the invention;
Figure 4 shows joint SIFT and Harris feature point detection;
Figure 5 shows the pinhole camera model;
Figure 6 is a schematic diagram of the coordinate transformation.
Detailed Description
The invention is further described below with reference to specific embodiments. It should be understood that these embodiments only illustrate the invention and do not limit its scope. Furthermore, after reading the teachings of the invention, those skilled in the art may make various changes or modifications to it, and such equivalents likewise fall within the scope defined by the claims appended to this application.
The invention relates to a video-stream-based method for reconstructing the three-dimensional drape shape of a fabric, comprising the following steps.
Step 1: cut the fabric into a circle and place it on tray 2 of the drape meter 1 shown in Figure 1; then place the top plate, on which the checkerboard 3 shown in Figure 2 is mounted, so that the fabric is centered between tray 2 and the top plate and the center of checkerboard 3 coincides with the top-plate center.
Tray 2 and the top plate are both 12 cm in diameter, and the top-plate surface carries a patterned texture. Checkerboard 3 measures 6 cm × 4 cm, with a square side length L of 1 cm.
The fabric is cut into a circular specimen 24 cm in diameter. On solid-color fabrics, grid lines with 3 cm spacing are drawn on the surface with a marker pen whose color differs from that of the fabric.
Step 2: as shown in Figure 3, move the camera at constant speed along the upper and lower circular tracks while recording; the upper track lies directly above the fabric and the lower track is level with the hanging bottom edge of the fabric. Recording along each track takes about 5 seconds.
Step 3: extract screenshots from the captured video at a sampling density of 4 frames per second, converting the video stream captured in Step 2 into an image sequence.
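By way of illustration only, a minimal sketch of this frame extraction with OpenCV (the library, function name, and file name are assumptions, not part of the original disclosure; the 4 frames-per-second density is from this embodiment):

```python
import cv2

def extract_frames(video_path, fps_out=4.0):
    """Sample a video at fps_out frames per second and return the frames."""
    cap = cv2.VideoCapture(video_path)
    fps_in = cap.get(cv2.CAP_PROP_FPS)           # native frame rate of the recording
    step = max(int(round(fps_in / fps_out)), 1)  # keep every `step`-th frame
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

images = extract_frames("drape.mp4")  # hypothetical file name
```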
Step 4: detect feature points in the image sequence obtained in Step 3 using the SIFT algorithm and the Harris algorithm, respectively.
Feature point detection on the image sequence with the SIFT algorithm comprises the following steps:
Step 4A.1: for any two-dimensional image I(x,y) in the sequence, convolve I(x,y) with the Gaussian kernel G(x,y,σ) to obtain the scale-space images L(x,y,σ) at different scales, where σ is the width parameter of the kernel and controls its radial extent. Build the DoG (Difference of Gaussian) pyramid of the image; D(x,y,σ), the difference of two adjacent scale images, satisfies:
D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ), where k is the scale factor;
Step 4A.2: assign an orientation parameter to each feature point so that the operator is rotation-invariant:
the gradient magnitude m(x,y) at pixel (x,y) is m(x,y) = √[(L(x+1,y,σ) − L(x−1,y,σ))² + (L(x,y+1,σ) − L(x,y−1,σ))²];
the gradient direction θ(x,y) at pixel (x,y) is θ(x,y) = arctan[(L(x,y+1,σ) − L(x,y−1,σ)) / (L(x+1,y,σ) − L(x−1,y,σ))];
Step 4A.3: descriptor generation: rotate the coordinate axes to the orientation of the feature point and describe each keypoint with a 4×4 grid of 16 seed points, each accumulating gradients into 8 orientation bins; this yields 128 values per feature point, i.e., a 128-dimensional SIFT feature vector.
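As an illustration (not part of the original disclosure), OpenCV's SIFT detector carries out the DoG construction, orientation assignment, and 128-dimensional descriptor generation described above; the frame name is a placeholder:

```python
import cv2

img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # one frame from Step 3
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# descriptors has shape (num_keypoints, 128): one 128-D SIFT vector per feature point
```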
Feature point detection on the image sequence with the Harris algorithm comprises the following steps:
Step 4B.1: a small window centered at pixel (x,y) is shifted by u in the X direction and by v in the Y direction; the resulting grayscale change is
E(x,y) = Σ_w [I(x+u, y+v) − I(x,y)]² = Σ_w [u·Ix + v·Iy + o(√(u²+v²))]², where E(x,y) is the grayscale change, the sum runs over the window w, and o is the infinitesimal (little-o) operator;
Step 4B.2: writing E(x,y) as a quadratic form gives
E(x,y) ≈ [u, v]·M·[u, v]^T, where M = Σ_w [Ix², Ix·Iy; Ix·Iy, Iy²] is a real symmetric matrix, Ix is the gradient of the image I(x,y) in the X direction, and Iy is its gradient in the Y direction;
Step 4B.3: define the corner response function CRF as
CRF = det(M) − 0.04·trace²(M), where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;
points at local maxima of the corner response function CRF are the corners.
As shown in Figure 4, the star-shaped points are SIFT feature points and the circular points are Harris feature points.
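An illustrative NumPy/OpenCV sketch of the corner response that mirrors the definitions of M and CRF = det(M) − 0.04·trace²(M) above; the Gaussian window is an assumption in place of the unspecified window w:

```python
import cv2

def harris_crf(gray, ksize=3, sigma=1.0):
    """Corner response function; corners sit at its local maxima."""
    Ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)  # gradient in X
    Iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)  # gradient in Y
    # window-summed entries of the real symmetric matrix M
    Ixx = cv2.GaussianBlur(Ix * Ix, (0, 0), sigma)
    Iyy = cv2.GaussianBlur(Iy * Iy, (0, 0), sigma)
    Ixy = cv2.GaussianBlur(Ix * Iy, (0, 0), sigma)
    det_M = Ixx * Iyy - Ixy * Ixy
    trace_M = Ixx + Iyy
    return det_M - 0.04 * trace_M ** 2
```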
Step 5: after extracting the feature points of all images in the sequence, compute the nearest-neighbor matches between the feature points of each image and those of the image to be matched.
The distance U_ab between two feature vectors a = {a1, a2, ..., an} and b = {b1, b2, ..., bn} is computed as U_ab = √(Σ_{i=1}^{n} (a_i − b_i)²).
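A sketch of this nearest-neighbor matching under the Euclidean distance U_ab, using OpenCV's brute-force matcher; the frame names and the 0.75 ratio-test threshold are assumptions, not from the patent:

```python
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)     # Euclidean distance U_ab
raw = matcher.knnMatch(des1, des2, k=2)  # two nearest neighbors per descriptor
good = [m for m, n in raw if m.distance < 0.75 * n.distance]  # ratio test
```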
Step 6: compute the extrinsic matrix between each pair of images in the sequence and normalize them to a common coordinate system to obtain the three-dimensional coordinates of the feature points, yielding a three-dimensional point cloud.
As shown in Figure 5, according to the pinhole camera model, the relationship between a two-dimensional point p = [u0, v0]^T on a photograph and its corresponding three-dimensional point Pw = [x, y, z]^T is
p = K[R|t]Pw, where [R|t] is the extrinsic matrix of the camera, giving its pose in the world coordinate system, and K is the intrinsic matrix of the camera, a fixed property of the lens. For the same object point Pw, its corresponding points p1 and p2 in any two images satisfy
p1^T F p2 = 0, where F is the fundamental matrix, F = K^(−T)·[t]_×·R·K^(−1), with [t]_× the skew-symmetric matrix of t.
The obtained three-dimensional coordinates are further refined with the BA (bundle adjustment) algorithm to produce the point cloud, i.e., minimize
Σ_i Σ_j ‖x_ij − K[Ri|ti]Xj‖² over all camera poses and points, where x_ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K[Ri|ti]Xj is the reprojected coordinate of the corresponding three-dimensional point Xj.
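Continuing the matching sketch above, an illustrative OpenCV recovery of the relative extrinsics and triangulation for one image pair; the intrinsic matrix K shown is a hypothetical placeholder (in practice it would be calibrated, e.g., from the checkerboard), and the bundle-adjustment refinement is left to a dedicated solver:

```python
import cv2
import numpy as np

K = np.array([[1200.0, 0.0, 960.0],   # placeholder intrinsics, not calibrated values
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])  # matched pixels from Step 5
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)      # relative extrinsics [R|t]

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first view: K[I|0]
P2 = K @ np.hstack([R, t])                          # second view: K[R|t]
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (X_h[:3] / X_h[3]).T                    # homogeneous -> 3-D points
```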
Step 7: perform Poisson reconstruction on the three-dimensional point cloud obtained in Step 6 to obtain the reconstructed model, comprising the following steps:
Step 7.1: build an octree topology over the three-dimensional point cloud data from Step 6, inserting all scattered points into the octree;
Step 7.2: for each node of the octree topology, set the spatial function Fc as
Fc(q) = F((q − Rc)/rw)·(1/rw³), where Rc is the node center, rw is the node width, F is the basis function, and q is any data point. Writing the coordinates of any point of the cloud as (x, y, z), the function space F(x, y, z) is expressed as
F(x, y, z) = (A(x)A(y)A(z))³, where A is the filter function; with t as its variable, A(t) = 1 for |t| < 0.5 and A(t) = 0 otherwise;
Step 7.3: under uniform sampling, assuming the partition blocks are constant, approximate the gradient of the indicator function by a vector field V; the approximation to the gradient field of the indicator function is defined as
V(q) = Σ_{s∈S} Σ_{o∈Ngbr_D(s)} α_{o,s}·F_o(q)·s.N, where s is a point of the cloud, S is the set of point-cloud samples, o is a node of the octree, Ngbr_D(s) are the eight depth-D nodes nearest to s.p, s.p is the point-cloud sample position, α_{o,s} are the trilinear interpolation weights, F_o(q) is the node function, and s.N is the normal of the point-cloud sample;
Step 7.4: having obtained the vector field V from the equation in Step 7.3, solve the Poisson equation Δχ = ∇·V by Laplacian-matrix iteration, where χ is the position estimate at the sampling points, Δ is the Laplacian, and ∇ is the vector differential operator;
Step 7.5: extract the isosurface using the mean of the position estimates χ at the sampling points:
∂M = {q | χ(q) = r}, with r = (1/|S|)·Σ_{s∈S} χ(s.p), where ∂M is the isosurface, q is a point of the cloud data, χ is the point-cloud sample distribution function, r is the mean of χ over the sample distribution coordinates, and s.p are the sample distribution coordinates;
Step 7.6: stitch the isosurfaces extracted in Step 7.5 to obtain the reconstructed model.
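By way of illustration, this octree/Poisson pipeline is available off the shelf; a sketch using the Open3D library (an assumption — the patent names no library), estimating oriented normals first since the solver needs the sample normals s.N, and reusing points_3d from the triangulation sketch above:

```python
import open3d as o3d

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points_3d)   # cloud from Step 6
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)      # consistent normal directions

# `depth` controls the octree depth D used by the Poisson solver
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
```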
Step 8: perform texture mapping on the reconstructed model obtained in Step 7 to obtain the three-dimensional drape model. In this step:
let the acquired texture image sequence be I = {I1, I2, I3, ..., In}, and let the set of projection matrices of the camera relative to the object when each image was acquired be P = {P1, P2, P3, ..., Pn}; the texture mapping function (u, v) is then defined as
(u, v) = F(x, y, z, I, P).
Back-projecting each three-dimensional point into its corresponding two-dimensional image gives
y = Pi·Y, where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and Pi is the projection matrix of the viewpoint of that image.
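A NumPy sketch of this back-projection in homogeneous coordinates (illustrative only; points_3d and P2 are reused from the triangulation sketch above). Sampling the image at the returned pixel coordinates then yields the texture color for each vertex:

```python
import numpy as np

def back_project(points_3d, P_i):
    """y = Pi * Y: project 3-D points into the image at viewpoint i."""
    Y_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous coords
    y_h = (P_i @ Y_h.T).T                    # one 3-vector per point
    return y_h[:, :2] / y_h[:, 2:3]          # divide out the third coordinate

uv = back_project(points_3d, P2)             # texture coordinates from one view
```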
Step 9: transform the three-dimensional drape model from the X_C Y_C Z_C coordinate system, whose origin is the center of the top plate of the drape model, to the X_D Y_D Z_D coordinate system, whose origin is a point selected by the user.
With reference to Figure 6, Step 9 comprises the following steps:
Step 9.1: from the coordinates of the checkerboard corner points in the images, index their corresponding coordinates in the three-dimensional point cloud model;
Step 9.2: compute the normal vector of the top-plate plane from the three-dimensional corner coordinates obtained in Step 9.1;
Step 9.3: translate point O1 to the coordinate origin, giving the transformation matrix T1;
Step 9.4: rotate O1P1 clockwise about the Y_D axis by θy so that it lies in the Y_D O_0 Z_D plane, giving the transformation matrix T2;
Step 9.5: rotate O1P1 clockwise about the X_D axis by θx so that it coincides with the Z_D axis, giving the transformation matrix T3;
Step 9.6: the transformation matrix from the calibration coordinate system to the drape coordinate system is T = T1 × T2 × T3;
Step 9.7: multiply the point coordinates of the three-dimensional drape model by 1/l, where l is the distance between adjacent three-dimensional corner points.
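An illustrative NumPy sketch of this transform chain in homogeneous coordinates; O1, the angles θy and θx derived from the plane normal of Step 9.2, the corner spacing l, and the vertex array are all hypothetical placeholder values, and the sign conventions for the clockwise rotations are assumptions:

```python
import numpy as np

def translation(p):
    T = np.eye(4)
    T[:3, 3] = -np.asarray(p)        # T1: move O1 to the origin
    return T

def rot_y(theta):                    # T2: rotation about the Y axis
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 2], R[2, 0], R[2, 2] = c, s, -s, c
    return R

def rot_x(theta):                    # T3: rotation about the X axis
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[1, 1], R[1, 2], R[2, 1], R[2, 2] = c, -s, s, c
    return R

# hypothetical inputs from steps 9.1-9.2
O1 = np.array([0.12, -0.03, 0.48])
theta_y, theta_x = 0.21, 0.08
l = 0.01
model_points = np.random.rand(1000, 3)

# compose so that T1 acts first, then T2, then T3; finally scale by 1/l (step 9.7)
T = rot_x(theta_x) @ rot_y(theta_y) @ translation(O1)
model_h = np.hstack([model_points, np.ones((len(model_points), 1))])
model_drape = (T @ model_h.T).T[:, :3] / l
```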
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710141162.0A CN107067462A (en) | 2017-03-10 | 2017-03-10 | Fabric three-dimensional draping shape method for reconstructing based on video flowing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710141162.0A CN107067462A (en) | 2017-03-10 | 2017-03-10 | Fabric three-dimensional draping shape method for reconstructing based on video flowing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107067462A true CN107067462A (en) | 2017-08-18 |
Family
ID=59622371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710141162.0A Pending CN107067462A (en) | 2017-03-10 | 2017-03-10 | Fabric three-dimensional draping shape method for reconstructing based on video flowing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107067462A (en) |
- 2017-03-10: CN application CN201710141162.0A filed, published as CN107067462A, status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101587082A (en) * | 2009-06-24 | 2009-11-25 | 天津工业大学 | Quick three-dimensional reconstructing method applied for detecting fabric defect |
CN102867327A (en) * | 2012-09-05 | 2013-01-09 | 浙江理工大学 | Textile flexible movement reestablishing method based on neural network system |
CN103454276A (en) * | 2013-06-30 | 2013-12-18 | 上海工程技术大学 | Textile form and style evaluation method based on dynamic sequence image |
CN105279789A (en) * | 2015-11-18 | 2016-01-27 | 中国兵器工业计算机应用技术研究所 | A three-dimensional reconstruction method based on image sequences |
Non-Patent Citations (5)
Title |
---|
HARRIS C. et al.: "A combined corner and edge detector", Alvey Vision Conference *
Hou Jianhui et al.: "An adaptive Harris checkerboard corner detection algorithm", Computer Engineering and Design *
Liu Weihong: "Surface reconstruction algorithms for point cloud data and their study", China Masters' Theses Full-text Database, Information Science and Technology *
Hu Kun: "Fabric drape shape reconstruction and measurement based on image sequences", China Masters' Theses Full-text Database, Information Science and Technology *
Hu Kun et al.: "Fabric drape shape reconstruction and measurement based on photo sequences", Journal of Donghua University (Natural Science Edition) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021062645A1 (en) * | 2019-09-30 | 2021-04-08 | Zte Corporation | File format for point cloud data |
CN114365194A (en) * | 2019-09-30 | 2022-04-15 | 中兴通讯股份有限公司 | File format used for point cloud data |
US12217354B2 (en) | 2022-03-25 | 2025-02-04 | Zte Corporation | File format for point cloud data |
TWI801193B (en) * | 2022-04-01 | 2023-05-01 | 適着三維科技股份有限公司 | Swiveling table system and method thereof |
CN117372608A (en) * | 2023-09-14 | 2024-01-09 | 成都飞机工业(集团)有限责任公司 | Three-dimensional point cloud texture mapping method, system, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106651942B (en) | Three-dimensional rotating detection and rotary shaft localization method based on characteristic point | |
CN105825518B (en) | Sequence image quick three-dimensional reconstructing method based on mobile platform shooting | |
CN104992441B (en) | A kind of real human body three-dimensional modeling method towards individualized virtual fitting | |
CN113177977A (en) | Non-contact three-dimensional human body size measuring method | |
CN106997605B (en) | A method for obtaining three-dimensional foot shape by collecting foot shape video and sensor data through smart phones | |
CN109886961A (en) | Depth image-based volume measurement method for medium and large cargoes | |
CN101398886A (en) | Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision | |
Urban et al. | Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds | |
CN110070567A (en) | A kind of ground laser point cloud method for registering | |
CN110838115A (en) | Ancient cultural relic three-dimensional model change detection method by contour line extraction and four-dimensional surface fitting | |
CN112330813A (en) | A reconstruction method of 3D human model under clothing based on monocular depth camera | |
CN111862315A (en) | A method and system for multi-dimension measurement of human body based on depth camera | |
CN101794459A (en) | Seamless integration method of stereoscopic vision image and three-dimensional virtual object | |
CN106649747A (en) | Scenic spot identification method and system | |
CN108154531A (en) | A kind of method and apparatus for calculating body-surface rauma region area | |
CN107067462A (en) | Fabric three-dimensional draping shape method for reconstructing based on video flowing | |
CN115690138A (en) | Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud | |
CN114758061A (en) | Method for constructing temperature field based on three-dimensional model point cloud grid data | |
CN108010122B (en) | Method and system for reconstructing and measuring three-dimensional model of human body | |
CN106778649A (en) | A kind of image recognition algorithm of judgement sight spot marker | |
Tong et al. | 3D point cloud initial registration using surface curvature and SURF matching | |
CN102722906B (en) | Feature-based top-down image modeling method | |
Li et al. | Using laser measuring and SFM algorithm for fast 3D reconstruction of objects | |
CN114677474A (en) | Hyperspectral 3D reconstruction system, method and application based on SfM and deep learning | |
CN111915725A (en) | Human body measuring method based on motion reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2017-08-18 |