CN107067462A - Method for reconstructing the three-dimensional drape shape of fabric based on a video stream - Google Patents

Method for reconstructing the three-dimensional drape shape of fabric based on a video stream

Info

Publication number
CN107067462A
CN107067462A (application CN201710141162.0A)
Authority
CN
China
Prior art keywords
dimensional
point
fabric
image
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710141162.0A
Other languages
Chinese (zh)
Inventor
毋戈
钟跃崎
李端
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201710141162.0A priority Critical patent/CN107067462A/en
Publication of CN107067462A publication Critical patent/CN107067462A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for reconstructing the three-dimensional drape shape of fabric based on a video stream, characterized by comprising the following steps: cut the fabric into a circle and place it on the tray of a drape tester, then place on it a top disc fitted with a checkerboard, so that the fabric is centered between the tray and the top disc and the checkerboard center coincides with the top-disc center; move a camera at constant speed along an upper circular trajectory and a lower circular trajectory to record video; convert the video stream captured in the second step into an image sequence; detect feature points in the image sequence obtained in the third step with the SIFT algorithm and the Harris algorithm respectively; obtain a three-dimensional point cloud; perform Poisson reconstruction on the point cloud to obtain a reconstructed model, and perform texture mapping to obtain a three-dimensional drape model. The reconstruction process of the method provided by the invention is simple and stable, the reconstruction results are accurate, and the method faithfully and completely captures the three-dimensional drape shape of the fabric.

Description

Method for Reconstructing the Three-Dimensional Drape Shape of Fabric Based on a Video Stream

Technical Field

The invention relates to a method for obtaining a three-dimensional color model of fabric drape by capturing video of the draped fabric.

Background Art

The drape shape of a fabric refers mainly to the three-dimensional appearance of its draped surface. Existing techniques reflect the drape shape only indirectly, through two-dimensional information from the drape projection, which is a significant limitation.

Summary of the Invention

The object of the invention is to obtain the three-dimensional appearance of the draped fabric surface from a video stream.

To achieve the above object, the technical solution of the present invention provides a method for reconstructing the three-dimensional drape shape of fabric based on a video stream, characterized by comprising the following steps:

Step 1: cut the fabric into a circle and place it on the tray of the drape tester, then place on it a top disc fitted with a checkerboard; the fabric is centered between the tray and the top disc, and the checkerboard center coincides with the top-disc center.

Step 2: move the camera at constant speed along an upper circular trajectory and a lower circular trajectory to record video; the upper trajectory lies directly above the fabric, and the lower trajectory is level with the draped hem of the fabric.

Step 3: convert the video stream captured in step 2 into an image sequence.

Step 4: detect feature points in the image sequence obtained in step 3 with the SIFT algorithm and the Harris algorithm respectively.

Step 5: after extracting the feature points on all images in the sequence, compute the nearest-neighbor matches between the feature points of each image and those of the image to be matched.

Step 6: compute the extrinsic matrix between each pair of images in the sequence and normalize them to a common coordinate system; the three-dimensional coordinates of the feature points can then be computed, yielding a three-dimensional point cloud.

Step 7: perform Poisson reconstruction on the point cloud obtained in step 6 to obtain a reconstructed model.

Step 8: perform texture mapping on the reconstructed model obtained in step 7 to obtain the three-dimensional drape model.

Preferably, in step 1, if the fabric is a solid color, grid lines are drawn on the fabric surface.

Preferably, in step 3, screenshots are extracted from the video captured in step 2 at a preset sampling density, converting the video stream into an image sequence.

Preferably, in step 4, detecting feature points in the image sequence with the SIFT algorithm comprises the following steps:

Step 4A.1: for any two-dimensional image I(x, y) in the sequence, convolve I(x, y) with the Gaussian kernel G(x, y, σ) to obtain the scale-space images L(x, y, σ) at different scales; σ is the width parameter of the kernel and controls its radial extent. Build the DoG pyramid of the image, where D(x, y, σ) is the difference of two adjacent scale images:

D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ), where k is the scale factor;

Step 4A.2: assign an orientation parameter to each feature point so that the operator is rotation invariant:

The gradient magnitude at pixel (x, y) is m(x, y):

m(x, y) = √((L(x+1, y, σ) - L(x-1, y, σ))² + (L(x, y+1, σ) - L(x, y-1, σ))²)

The gradient direction at pixel (x, y) is θ(x, y):

θ(x, y) = arctan((L(x, y+1, σ) - L(x, y-1, σ)) / (L(x+1, y, σ) - L(x-1, y, σ)));

Step 4A.3: generate the feature-point descriptor: rotate the coordinate axes to the feature point's orientation and describe each keypoint with a 4×4 grid of 16 seed points, each holding an 8-direction gradient histogram; a feature point thus yields 128 values, i.e. a 128-dimensional SIFT feature vector.

Preferably, in step 4, detecting feature points in the image sequence with the Harris algorithm comprises the following steps:

Step 4B.1: a small window centered on pixel (x, y) is shifted by u in the X direction and by v in the Y direction; the resulting gray-level change is:

E(x, y) = Σ_(u,v) w(u, v)[I(x+u, y+v) - I(x, y)]² = Σ_(u,v) w(u, v)[u·Ix + v·Iy + o(√(u²+v²))]²

where E(x, y) is the gray-level change, w(u, v) is the window weighting function, and o denotes an infinitesimal of higher order;

Step 4B.2: writing E(x, y) as a quadratic form gives:

E(x, y) ≅ [u, v] M [u, v]^T, with M = Σ_(u,v) w(u, v) [Ix² IxIy; IxIy Iy²]

where M is a real symmetric matrix, Ix is the gradient of image I(x, y) in the X direction, and Iy is its gradient in the Y direction;

Step 4B.3: define the corner response function CRF as:

CRF = det(M) - 0.04·trace²(M), where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;

Corners are the points where the corner response function CRF attains a local maximum.

Preferably, in step 6, the relationship between a two-dimensional point p = [u0, v0]^T and its corresponding three-dimensional point Pw = [x, y, z]^T is:

p = K[R|t]Pw, where [R|t] is the extrinsic matrix of the camera, giving its position in the world coordinate system, and K is the intrinsic matrix of the camera, a fixed property of the lens; for the same object point Pw, its corresponding points p1 and p2 in any two images satisfy:

p1^T F p2 = 0, where F is the fundamental matrix, F = K^(-T)[t]×RK^(-1).

Preferably, in step 6, the obtained three-dimensional coordinates are further refined with the BA algorithm to give the three-dimensional point cloud, expressed as follows:

min Σ_i Σ_j ‖x_ij - K[R_i|t_i]X_j‖², where x_ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K[R_i|t_i]X_j is the reprojected coordinate of the corresponding three-dimensional point X_j.

Preferably, step 7 comprises the following steps:

Step 7.1: build an octree topology over the three-dimensional point-cloud data obtained in step 6, inserting all of the scattered points into the octree;

Step 7.2: for each node of the octree topology, set a node function Fc:

Fc(R) = F((R - Rc)/rw) · 1/rw³, where Rc is the node center, rw is the node width, F is the basis function and R is an arbitrary data point. Writing the coordinates of any point of the point cloud as (x, y, z), the function space F(x, y, z) is expressed as:

F(x, y, z) = (A(x)A(y)A(z))³, where A is the filter function; taking t as its variable, A(t) = 1 for |t| < 0.5 and A(t) = 0 otherwise.

Step 7.3: under uniform sampling, assume the partition blocks are constant; approximate the gradient of the indicator function by the vector field V, defining the approximation to its gradient field as:

∇χ(q) ≈ V(q) = Σ_(s∈S) Σ_(o∈Ngbr_D(s)) α_(o,s) F_o(q) s.N

where s is a point of the point cloud, S is the point-cloud sample set, o is an octree node, Ngbr_D(s) are the eight depth-D nodes nearest to s.p, s.p is a point-cloud sample, α_(o,s) are the trilinear-interpolation weights, F_o(q) is the node function, and s.N is the normal of the point-cloud sample;

Step 7.4: having obtained the vector field V from the equation of step 7.3, solve the Poisson equation Δχ = ∇·V by Laplacian-matrix iteration, where Δχ denotes the Laplacian of χ, χ is the indicator-function estimate at the sample positions, and ∇ is the vector differential operator.

Step 7.5: extract the isosurface at the mean of the indicator-function estimates at the sample positions:

∂M = {q | χ(q) = r}, with r = (1/|S|) Σ_(s∈S) χ(s.p)

where ∂M is the isosurface, q is a point-cloud datum, χ is the point-cloud sample distribution function, r is the mean of χ over the sample positions, and s.p are the sample coordinates;

Step 7.6: stitch the isosurfaces extracted in step 7.5 together to obtain the reconstructed model.

Preferably, in step 8:

Let the acquired texture image sequence be I = {I1, I2, I3, ..., In}, and let P = {P1, P2, P3, ..., Pn} be the projection matrices of the camera relative to the object when each image is acquired; the texture mapping function (u, v) is then defined as:

(u, v) = F(x, y, z, I, P)

Back-projecting each three-dimensional point into its corresponding two-dimensional image gives:

y = P_i Y, where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and P_i is the projection matrix of the viewpoint of that image.

Preferably, after step 8, the method further comprises:

Step 9: convert the three-dimensional drape model from the X_C Y_C Z_C coordinate system to the X_D Y_D Z_D coordinate system, where X_C Y_C Z_C is the coordinate system whose origin is the center of the top disc of the drape model and X_D Y_D Z_D is the coordinate system whose origin is a user-selected point.

The reconstruction process of the method provided by the invention is simple and stable, the reconstruction results are accurate, and the method faithfully and completely captures the three-dimensional drape shape of the fabric.

Brief Description of the Drawings

Figure 1 is a schematic view of the drape tester used in the invention;

Figure 2 is a schematic view of the checkerboard;

Figure 3 is a schematic view of the video acquisition of the invention;

Figure 4 shows the joint SIFT and Harris feature-point detection;

Figure 5 shows the pinhole camera model;

Figure 6 is a schematic view of the coordinate transformation.

Detailed Description

The invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments serve only to illustrate the invention, not to limit its scope. It should further be understood that, after reading the teachings of the invention, those skilled in the art may make various changes or modifications to it; such equivalent forms likewise fall within the scope defined by the claims appended to this application.

The invention relates to a method for reconstructing the three-dimensional drape shape of fabric based on a video stream, comprising the following steps:

Step 1: cut the fabric into a circle and place it on the tray 2 of the drape tester 1 shown in Figure 1, then place on it a top disc fitted with the checkerboard 3 shown in Figure 2; the fabric is centered between the tray 2 and the top disc, and the center of the checkerboard 3 coincides with the top-disc center.

The tray 2 and the top disc are both 12 cm in diameter, and the top-disc surface carries a multicolored texture pattern. The checkerboard 3 measures 6 cm × 4 cm, with a square side length L of 1 cm.

The fabric is cut into a circular sample 24 cm in diameter. For solid-color fabrics, grid lines with a spacing of 3 cm must be drawn on the fabric surface with a marker whose color differs from that of the fabric.

Step 2: as shown in Figure 3, move the camera at constant speed along an upper circular trajectory and a lower circular trajectory to record video; the upper trajectory lies directly above the fabric, the lower trajectory is level with the draped hem of the fabric, and each circular pass takes about 5 seconds to film.

Step 3: extract screenshots from the captured video at a sampling density of 4 frames per second, converting the video stream captured in step 2 into an image sequence.
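As a minimal sketch of this step (using OpenCV as an assumed tooling choice; the patent does not prescribe a library, and the function name is illustrative), the video can be subsampled to 4 frames per second as follows:

    import cv2

    def video_to_image_sequence(video_path, fps_out=4):
        # Sample a captured video at fps_out frames per second.
        cap = cv2.VideoCapture(video_path)
        fps_in = cap.get(cv2.CAP_PROP_FPS) or fps_out   # native frame rate
        step = max(int(round(fps_in / fps_out)), 1)     # keep every step-th frame
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                frames.append(frame)
            idx += 1
        cap.release()
        return frames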

Step 4: detect feature points in the image sequence obtained in step 3 with the SIFT algorithm and the Harris algorithm respectively.

Detecting feature points with the SIFT algorithm comprises the following steps:

Step 4A.1: for any two-dimensional image I(x, y) in the sequence, convolve I(x, y) with the Gaussian kernel G(x, y, σ) to obtain the scale-space images L(x, y, σ) at different scales; σ is the width parameter of the kernel and controls its radial extent. Build the DoG (Difference of Gaussians) pyramid of the image, where D(x, y, σ) is the difference of two adjacent scale images:

D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ), where k is the scale factor;

Step 4A.2: assign an orientation parameter to each feature point so that the operator is rotation invariant:

The gradient magnitude at pixel (x, y) is m(x, y):

m(x, y) = √((L(x+1, y, σ) - L(x-1, y, σ))² + (L(x, y+1, σ) - L(x, y-1, σ))²)

The gradient direction at pixel (x, y) is θ(x, y):

θ(x, y) = arctan((L(x, y+1, σ) - L(x, y-1, σ)) / (L(x+1, y, σ) - L(x-1, y, σ)));

Step 4A.3: generate the feature-point descriptor: rotate the coordinate axes to the feature point's orientation and describe each keypoint with a 4×4 grid of 16 seed points, each holding an 8-direction gradient histogram; a feature point thus yields 128 values, i.e. a 128-dimensional SIFT feature vector.
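As an illustration, a minimal sketch of steps 4A.1 to 4A.3 using OpenCV's built-in SIFT implementation (an assumed off-the-shelf substitute for the hand-rolled pipeline described above):

    import cv2

    def detect_sift(image_bgr):
        # OpenCV builds the DoG pyramid, assigns orientations, and
        # computes the 128-dimensional descriptors internally.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray, None)
        return keypoints, descriptors   # descriptors: N x 128 array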

Detecting feature points with the Harris algorithm comprises the following steps:

Step 4B.1: a small window centered on pixel (x, y) is shifted by u in the X direction and by v in the Y direction; the resulting gray-level change is:

E(x, y) = Σ_(u,v) w(u, v)[I(x+u, y+v) - I(x, y)]² = Σ_(u,v) w(u, v)[u·Ix + v·Iy + o(√(u²+v²))]²

where E(x, y) is the gray-level change, w(u, v) is the window weighting function, and o denotes an infinitesimal of higher order;

Step 4B.2: writing E(x, y) as a quadratic form gives:

E(x, y) ≅ [u, v] M [u, v]^T, with M = Σ_(u,v) w(u, v) [Ix² IxIy; IxIy Iy²]

where M is a real symmetric matrix, Ix is the gradient of image I(x, y) in the X direction, and Iy is its gradient in the Y direction;

Step 4B.3: define the corner response function CRF as:

CRF = det(M) - 0.04·trace²(M), where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;

Corners are the points where the corner response function CRF attains a local maximum.
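A minimal NumPy/OpenCV sketch of the corner response function CRF = det(M) - 0.04·trace²(M) of step 4B.3 follows; Gaussian weighting is an assumed choice for the window function w(u, v), and the function name is illustrative:

    import cv2
    import numpy as np

    def harris_crf(gray, sigma=1.0):
        gray = np.float64(gray)
        Ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # gradient in X
        Iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # gradient in Y
        # Entries of the windowed second-moment matrix M
        Sxx = cv2.GaussianBlur(Ix * Ix, (0, 0), sigma)
        Syy = cv2.GaussianBlur(Iy * Iy, (0, 0), sigma)
        Sxy = cv2.GaussianBlur(Ix * Iy, (0, 0), sigma)
        det_M = Sxx * Syy - Sxy * Sxy
        trace_M = Sxx + Syy
        return det_M - 0.04 * trace_M ** 2   # local maxima are corners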

In Figure 4, the star-shaped points are SIFT feature points and the circular points are Harris feature points.

Step 5: after extracting the feature points on all images in the sequence, compute the nearest-neighbor matches between the feature points of each image and those of the image to be matched.

The distance U_ab between two feature vectors a = {a1, a2, ..., an} and b = {b1, b2, ..., bn} is computed as:

U_ab = √(Σ_(i=1..n) (a_i - b_i)²)
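A minimal sketch of this nearest-neighbor matching under the distance U_ab; the ratio test against the second-nearest neighbor is an assumed, standard filter not spelled out in the patent:

    import numpy as np

    def match_nearest_neighbor(desc_a, desc_b, ratio=0.8):
        matches = []
        for i, a in enumerate(desc_a):
            d = np.sqrt(((desc_b - a) ** 2).sum(axis=1))   # U_ab to all candidates
            j1, j2 = np.argsort(d)[:2]                     # nearest, second nearest
            if d[j1] < ratio * d[j2]:                      # keep unambiguous matches
                matches.append((i, int(j1)))
        return matches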

Step 6: compute the extrinsic matrix between each pair of images in the sequence and normalize them to a common coordinate system; the three-dimensional coordinates of the feature points can then be computed, yielding a three-dimensional point cloud.

As shown in Figure 5, according to the pinhole camera model, the relationship between a two-dimensional point p = [u0, v0]^T in a photograph and its corresponding three-dimensional point Pw = [x, y, z]^T is:

p = K[R|t]Pw, where [R|t] is the extrinsic matrix of the camera, giving its position in the world coordinate system, and K is the intrinsic matrix of the camera, a fixed property of the lens; for the same object point Pw, its corresponding points p1 and p2 in any two images satisfy:

p1^T F p2 = 0, where F is the fundamental matrix, F = K^(-T)[t]×RK^(-1).

The obtained three-dimensional coordinates are further refined with the BA (bundle adjustment) algorithm to give the three-dimensional point cloud, expressed as follows:

min Σ_i Σ_j ‖x_ij - K[R_i|t_i]X_j‖², where x_ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K[R_i|t_i]X_j is the reprojected coordinate of the corresponding three-dimensional point X_j.
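The residual being minimized can be sketched as follows; feeding it to a nonlinear least-squares solver such as scipy.optimize.least_squares is an assumed implementation choice, and the function name is illustrative:

    import numpy as np

    def reprojection_residual(x_ij, K, R_i, t_i, X_j):
        # Reprojection error x_ij - K [R_i | t_i] X_j for one observation.
        P = K @ np.hstack([R_i, t_i.reshape(3, 1)])   # 3x4 projection matrix
        p = P @ np.append(X_j, 1.0)                   # homogeneous projection
        return p[:2] / p[2] - x_ij                    # 2D error after division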

Step 7: perform Poisson reconstruction on the point cloud obtained in step 6 to obtain the reconstructed model, comprising the following steps:

Step 7.1: build an octree topology over the three-dimensional point-cloud data obtained in step 6, inserting all of the scattered points into the octree;

Step 7.2: for each node of the octree topology, set a node function Fc:

Fc(R) = F((R - Rc)/rw) · 1/rw³, where Rc is the node center, rw is the node width, F is the basis function and R is an arbitrary data point. Writing the coordinates of any point of the point cloud as (x, y, z), the function space F(x, y, z) is expressed as:

F(x, y, z) = (A(x)A(y)A(z))³, where A is the filter function; taking t as its variable, A(t) = 1 for |t| < 0.5 and A(t) = 0 otherwise.

Step 7.3: under uniform sampling, assume the partition blocks are constant; approximate the gradient of the indicator function by the vector field V, defining the approximation to its gradient field as:

∇χ(q) ≈ V(q) = Σ_(s∈S) Σ_(o∈Ngbr_D(s)) α_(o,s) F_o(q) s.N

where s is a point of the point cloud, S is the point-cloud sample set, o is an octree node, Ngbr_D(s) are the eight depth-D nodes nearest to s.p, s.p is a point-cloud sample, α_(o,s) are the trilinear-interpolation weights, F_o(q) is the node function, and s.N is the normal of the point-cloud sample;

Step 7.4: having obtained the vector field V from the equation of step 7.3, solve the Poisson equation Δχ = ∇·V by Laplacian-matrix iteration, where Δχ denotes the Laplacian of χ, χ is the indicator-function estimate at the sample positions, and ∇ is the vector differential operator;

Step 7.5: extract the isosurface at the mean of the indicator-function estimates at the sample positions:

∂M = {q | χ(q) = r}, with r = (1/|S|) Σ_(s∈S) χ(s.p)

where ∂M is the isosurface, q is a point-cloud datum, χ is the point-cloud sample distribution function, r is the mean of χ over the sample positions, and s.p are the sample coordinates;

Step 7.6: stitch the isosurfaces extracted in step 7.5 together to obtain the reconstructed model.
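A minimal sketch of steps 7.1 to 7.6 using the Open3D library's Poisson surface reconstruction, an assumed off-the-shelf stand-in for the procedure described above; normal estimation supplies the sample normals s.N used by the vector field:

    import numpy as np
    import open3d as o3d

    def poisson_reconstruct(points_xyz, depth=8):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(np.asarray(points_xyz))
        pcd.estimate_normals()                 # normals for the vector field
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=depth)                  # depth = octree depth D
        return mesh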

Step 8: perform texture mapping on the reconstructed model obtained in step 7 to obtain the three-dimensional drape model. In this step:

Let the acquired texture image sequence be I = {I1, I2, I3, ..., In}, and let P = {P1, P2, P3, ..., Pn} be the projection matrices of the camera relative to the object when each image is acquired; the texture mapping function (u, v) is then defined as:

(u, v) = F(x, y, z, I, P)

Back-projecting each three-dimensional point into its corresponding two-dimensional image gives:

y = P_i Y, where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and P_i is the projection matrix of the viewpoint of that image.
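A minimal sketch of this back-projection y = P_i Y used for the texture lookup; homogeneous coordinates and a 3×4 projection matrix are assumed:

    import numpy as np

    def backproject(P_i, Y):
        y = P_i @ np.append(Y, 1.0)   # project the 3D point into image I_i
        return y[:2] / y[2]           # (u, v) position after perspective division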

Step 9: convert the three-dimensional drape model from the X_C Y_C Z_C coordinate system to the X_D Y_D Z_D coordinate system, where X_C Y_C Z_C is the coordinate system whose origin is the center of the top disc of the drape model and X_D Y_D Z_D is the coordinate system whose origin is a user-selected point.

With reference to Figure 6, step 9 comprises the following steps:

Step 9.1: from the coordinates of the checkerboard corners in the images, index their corresponding coordinates in the three-dimensional point-cloud model;

Step 9.2: from the three-dimensional corner coordinates obtained in step 9.1, compute the normal vector of the top-disc plane;

Step 9.3: translate the point O1 to the coordinate origin, obtaining the transformation matrix T1;

Step 9.4: rotate O1P1 clockwise about the Y_D axis by θy until it lies in the Y_D O_0 Z_D plane, obtaining the transformation matrix T2;

Step 9.5: rotate O1P1 clockwise about the X_D axis by θx until it coincides with the Z_D axis, obtaining the transformation matrix T3;

Step 9.6: the transformation matrix from the calibration coordinate system to the drape coordinate system is T = T1 × T2 × T3;

Step 9.7: multiply the point coordinates of the three-dimensional drape model by 1/l, where l is the distance between adjacent three-dimensional corner points.
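A minimal sketch of steps 9.3 to 9.7 with homogeneous 4×4 matrices; the row- versus column-vector convention behind T = T1 × T2 × T3 is an assumption, and here the matrices are composed so that the translation T1 acts first on a column vector:

    import numpy as np

    def drape_transform(O1, theta_y, theta_x):
        T1 = np.eye(4)
        T1[:3, 3] = -np.asarray(O1, dtype=float)       # step 9.3: O1 to origin
        cy, sy = np.cos(-theta_y), np.sin(-theta_y)    # step 9.4: clockwise about Y_D
        T2 = np.array([[cy, 0, sy, 0], [0, 1, 0, 0],
                       [-sy, 0, cy, 0], [0, 0, 0, 1]], dtype=float)
        cx, sx = np.cos(-theta_x), np.sin(-theta_x)    # step 9.5: clockwise about X_D
        T3 = np.array([[1, 0, 0, 0], [0, cx, -sx, 0],
                       [0, sx, cx, 0], [0, 0, 0, 1]], dtype=float)
        return T3 @ T2 @ T1                            # step 9.6 composition

    def to_drape_coords(T, points_xyz, l):
        # Step 9.7: apply T, then scale by 1/l (l = spacing of the 3D corners).
        Ph = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        return (Ph @ T.T)[:, :3] / l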

Claims (10)

1. A method for reconstructing the three-dimensional drape shape of fabric based on a video stream, characterized by comprising the following steps:
Step 1: cut the fabric into a circle and place it on the tray (2) of the drape tester (1), then place on it a top disc fitted with a checkerboard (3); the fabric is centered between the tray (2) and the top disc, and the center of the checkerboard (3) coincides with the top-disc center;
Step 2: move the camera at constant speed along an upper circular trajectory and a lower circular trajectory to record video; the upper trajectory lies directly above the fabric, and the lower trajectory is level with the draped hem of the fabric;
Step 3: convert the video stream captured in step 2 into an image sequence;
Step 4: detect feature points in the image sequence obtained in step 3 with the SIFT algorithm and the Harris algorithm respectively;
Step 5: after extracting the feature points on all images in the sequence, compute the nearest-neighbor matches between the feature points of each image and those of the image to be matched;
Step 6: compute the extrinsic matrix between each pair of images in the sequence and normalize them to a common coordinate system to compute the three-dimensional coordinates of the feature points, yielding a three-dimensional point cloud;
Step 7: perform Poisson reconstruction on the point cloud obtained in step 6 to obtain a reconstructed model;
Step 8: perform texture mapping on the reconstructed model obtained in step 7 to obtain the three-dimensional drape model.
2. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, in step 1, if the fabric is a solid color, grid lines are drawn on the fabric surface.
3. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, in step 3, screenshots are extracted from the video captured in step 2 at a preset sampling density, converting the video stream into an image sequence.
4. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, in step 4, detecting feature points in the image sequence with the SIFT algorithm comprises the following steps:
Step 4A.1: for any two-dimensional image I(x, y) in the sequence, convolve I(x, y) with the Gaussian kernel G(x, y, σ) to obtain the scale-space images L(x, y, σ) at different scales, σ being the width parameter of the kernel, which controls its radial extent; build the DoG pyramid of the image, where D(x, y, σ) is the difference of two adjacent scale images:
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ), where k is the scale factor;
Step 4A.2: assign an orientation parameter to each feature point so that the operator is rotation invariant:
the gradient magnitude at pixel (x, y) is m(x, y) = √((L(x+1, y, σ) - L(x-1, y, σ))² + (L(x, y+1, σ) - L(x, y-1, σ))²);
the gradient direction at pixel (x, y) is θ(x, y) = arctan((L(x, y+1, σ) - L(x, y-1, σ)) / (L(x+1, y, σ) - L(x-1, y, σ)));
Step 4A.3: generate the feature-point descriptor: rotate the coordinate axes to the feature point's orientation and describe each keypoint with a 4×4 grid of 16 seed points, so that a feature point yields 128 values, i.e. a 128-dimensional SIFT feature vector.
5. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 4, characterized in that, in step 4, detecting feature points in the image sequence with the Harris algorithm comprises the following steps:
Step 4B.1: a small window centered on pixel (x, y) is shifted by u in the X direction and by v in the Y direction; the resulting gray-level change is
E(x, y) = Σ_(u,v) w(u, v)[I(x+u, y+v) - I(x, y)]² = Σ_(u,v) w(u, v)[u·Ix + v·Iy + o(√(u²+v²))]²,
where E(x, y) is the gray-level change, w(u, v) is the window weighting function, and o denotes an infinitesimal of higher order;
Step 4B.2: writing E(x, y) as a quadratic form gives E(x, y) ≅ [u, v] M [u, v]^T, with M = Σ_(u,v) w(u, v)[Ix² IxIy; IxIy Iy²], where M is a real symmetric matrix, Ix is the gradient of image I(x, y) in the X direction and Iy its gradient in the Y direction;
Step 4B.3: define the corner response function CRF as
CRF = det(M) - 0.04·trace²(M), where det(M) is the determinant of the real symmetric matrix M and trace(M) is its trace;
corners are the points where the corner response function CRF attains a local maximum.
6. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, in step 6, the relationship between a two-dimensional point p = [u0, v0]^T and its corresponding three-dimensional point Pw = [x, y, z]^T is:
p = K[R|t]Pw, where [R|t] is the extrinsic matrix of the camera, giving its position in the world coordinate system, and K is the intrinsic matrix of the camera, a fixed property of the lens; for the same object point Pw, its corresponding points p1 and p2 in any two images satisfy:
p1^T F p2 = 0, where F is the fundamental matrix, F = K^(-T)[t]×RK^(-1).
7. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, in step 6, the obtained three-dimensional coordinates are further refined with the BA algorithm to give the three-dimensional point cloud, expressed as follows:
min Σ_i Σ_j ‖x_ij - K[R_i|t_i]X_j‖², where x_ij is the two-dimensional coordinate of the j-th feature point in the i-th image and K[R_i|t_i]X_j is the reprojected coordinate of the corresponding three-dimensional point X_j.
8. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that step 7 comprises the following steps:
Step 7.1: build an octree topology over the three-dimensional point-cloud data obtained in step 6, inserting all of the scattered points into the octree;
Step 7.2: for each node of the octree topology, set a node function Fc:
Fc(R) = F((R - Rc)/rw) · 1/rw³, where Rc is the node center, rw is the node width, F is the basis function and R is an arbitrary data point; writing the coordinates of any point of the point cloud as (x, y, z), the function space F(x, y, z) is expressed as:
F(x, y, z) = (A(x)A(y)A(z))³, where A is the filter function; taking t as its variable, A(t) = 1 for |t| < 0.5 and A(t) = 0 otherwise;
Step 7.3: under uniform sampling, assume the partition blocks are constant; approximate the gradient of the indicator function by the vector field V, defining the approximation to its gradient field as:
∇χ(q) ≈ V(q) = Σ_(s∈S) Σ_(o∈Ngbr_D(s)) α_(o,s) F_o(q) s.N
where s is a point of the point cloud, S is the point-cloud sample set, o is an octree node, Ngbr_D(s) are the eight depth-D nodes nearest to s.p, s.p is a point-cloud sample, α_(o,s) are the trilinear-interpolation weights, F_o(q) is the node function, and s.N is the normal of the point-cloud sample;
Step 7.4: having obtained the vector field V from the equation of step 7.3, solve the Poisson equation Δχ = ∇·V by Laplacian-matrix iteration, where Δχ denotes the Laplacian of χ, χ is the indicator-function estimate at the sample positions, and ∇ is the vector differential operator;
Step 7.5: extract the isosurface at the mean of the indicator-function estimates at the sample positions:
∂M = {q | χ(q) = r}, with r = (1/|S|) Σ_(s∈S) χ(s.p)
where ∂M is the isosurface, q is a point-cloud datum, χ is the point-cloud sample distribution function, r is the mean of χ over the sample positions, and s.p are the sample coordinates;
Step 7.6: stitch the isosurfaces extracted in step 7.5 together to obtain the reconstructed model.
9. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, in step 8:
letting the acquired texture image sequence be I = {I1, I2, I3, ..., In}, and letting P = {P1, P2, P3, ..., Pn} be the projection matrices of the camera relative to the object when each image is acquired, the texture mapping function (u, v) is defined as:
(u, v) = F(x, y, z, I, P)
back-projecting each three-dimensional point into its corresponding two-dimensional image gives:
y = P_i Y, where y = (x, y)^T is the corresponding point projected back onto the two-dimensional image, Y = (x, y, z)^T is a three-dimensional point of the point cloud, and P_i is the projection matrix of the viewpoint of that image.
10. The method for reconstructing the three-dimensional drape shape of fabric based on a video stream of claim 1, characterized in that, after step 8, the method further comprises:
Step 9: convert the three-dimensional drape model from the X_C Y_C Z_C coordinate system to the X_D Y_D Z_D coordinate system, where X_C Y_C Z_C is the coordinate system whose origin is the center of the top disc of the drape model and X_D Y_D Z_D is the coordinate system whose origin is a user-selected point.
CN201710141162.0A 2017-03-10 2017-03-10 Method for reconstructing the three-dimensional drape shape of fabric based on a video stream Pending CN107067462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710141162.0A 2017-03-10 2017-03-10 Method for reconstructing the three-dimensional drape shape of fabric based on a video stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710141162.0A 2017-03-10 2017-03-10 Method for reconstructing the three-dimensional drape shape of fabric based on a video stream

Publications (1)

Publication Number Publication Date
CN107067462A true CN107067462A (en) 2017-08-18

Family

ID=59622371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710141162.0A Pending Method for reconstructing the three-dimensional drape shape of fabric based on a video stream 2017-03-10 2017-03-10

Country Status (1)

Country Link
CN (1) CN107067462A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021062645A1 (en) * 2019-09-30 2021-04-08 Zte Corporation File format for point cloud data
TWI801193B (en) * 2022-04-01 2023-05-01 適着三維科技股份有限公司 Swiveling table system and method thereof
CN117372608A (en) * 2023-09-14 2024-01-09 成都飞机工业(集团)有限责任公司 Three-dimensional point cloud texture mapping method, system, equipment and medium
US12217354B2 (en) 2022-03-25 2025-02-04 Zte Corporation File format for point cloud data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587082A (en) * 2009-06-24 2009-11-25 天津工业大学 Quick three-dimensional reconstructing method applied for detecting fabric defect
CN102867327A (en) * 2012-09-05 2013-01-09 浙江理工大学 Textile flexible movement reestablishing method based on neural network system
CN103454276A (en) * 2013-06-30 2013-12-18 上海工程技术大学 Textile form and style evaluation method based on dynamic sequence image
CN105279789A (en) * 2015-11-18 2016-01-27 中国兵器工业计算机应用技术研究所 A three-dimensional reconstruction method based on image sequences

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587082A (en) * 2009-06-24 2009-11-25 天津工业大学 Quick three-dimensional reconstructing method applied for detecting fabric defect
CN102867327A (en) * 2012-09-05 2013-01-09 浙江理工大学 Textile flexible movement reestablishing method based on neural network system
CN103454276A (en) * 2013-06-30 2013-12-18 上海工程技术大学 Textile form and style evaluation method based on dynamic sequence image
CN105279789A (en) * 2015-11-18 2016-01-27 中国兵器工业计算机应用技术研究所 A three-dimensional reconstruction method based on image sequences

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Harris C et al.: "A combined corner and edge detector", Alvey Vision Conference *
Hou Jianhui et al.: "Adaptive Harris checkerboard corner detection algorithm", Computer Engineering and Design *
Liu Weihong: "Surface reconstruction algorithms for point cloud data and related research", China Masters' Theses Full-text Database, Information Science and Technology *
Hu Kun: "Reconstruction and measurement of fabric drape shape based on image sequences", China Masters' Theses Full-text Database, Information Science and Technology *
Hu Kun et al.: "Reconstruction and measurement of fabric drape shape based on photo sequences", Journal of Donghua University (Natural Science Edition) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021062645A1 (en) * 2019-09-30 2021-04-08 Zte Corporation File format for point cloud data
CN114365194A (en) * 2019-09-30 2022-04-15 中兴通讯股份有限公司 File format used for point cloud data
US12217354B2 (en) 2022-03-25 2025-02-04 Zte Corporation File format for point cloud data
TWI801193B (en) * 2022-04-01 2023-05-01 適着三維科技股份有限公司 Swiveling table system and method thereof
CN117372608A (en) * 2023-09-14 2024-01-09 成都飞机工业(集团)有限责任公司 Three-dimensional point cloud texture mapping method, system, equipment and medium

Similar Documents

Publication Publication Date Title
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN105825518B (en) Sequence image quick three-dimensional reconstructing method based on mobile platform shooting
CN104992441B (en) A kind of real human body three-dimensional modeling method towards individualized virtual fitting
CN113177977A (en) Non-contact three-dimensional human body size measuring method
CN106997605B (en) A method for obtaining three-dimensional foot shape by collecting foot shape video and sensor data through smart phones
CN109886961A (en) Depth image-based volume measurement method for medium and large cargoes
CN101398886A (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN110070567A (en) A kind of ground laser point cloud method for registering
CN110838115A (en) Ancient cultural relic three-dimensional model change detection method by contour line extraction and four-dimensional surface fitting
CN112330813A (en) A reconstruction method of 3D human model under clothing based on monocular depth camera
CN111862315A (en) A method and system for multi-dimension measurement of human body based on depth camera
CN101794459A (en) Seamless integration method of stereoscopic vision image and three-dimensional virtual object
CN106649747A (en) Scenic spot identification method and system
CN108154531A (en) A kind of method and apparatus for calculating body-surface rauma region area
CN107067462A (en) Method for reconstructing the three-dimensional drape shape of fabric based on a video stream
CN115690138A (en) Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud
CN114758061A (en) Method for constructing temperature field based on three-dimensional model point cloud grid data
CN108010122B (en) Method and system for reconstructing and measuring three-dimensional model of human body
CN106778649A (en) A kind of image recognition algorithm of judgement sight spot marker
Tong et al. 3D point cloud initial registration using surface curvature and SURF matching
CN102722906B (en) Feature-based top-down image modeling method
Li et al. Using laser measuring and SFM algorithm for fast 3D reconstruction of objects
CN114677474A (en) Hyperspectral 3D reconstruction system, method and application based on SfM and deep learning
CN111915725A (en) Human body measuring method based on motion reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170818