CN110599578A - Realistic three-dimensional color texture reconstruction method - Google Patents

Realistic three-dimensional color texture reconstruction method

Info

Publication number
CN110599578A
CN110599578A (application CN201910687176.1A)
Authority
CN
China
Prior art keywords
image
texture
dimensional
color texture
realistic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910687176.1A
Other languages
Chinese (zh)
Inventor
陈海龙
曹良才
吴佳琛
刘梦龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN ESUN DISPLAY CO Ltd
Tsinghua University
Original Assignee
SHENZHEN ESUN DISPLAY CO Ltd
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN ESUN DISPLAY CO Ltd, Tsinghua University filed Critical SHENZHEN ESUN DISPLAY CO Ltd
Priority to CN201910687176.1A
Publication of CN110599578A
Legal status: Pending

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T15/00 - 3D [Three Dimensional] image rendering
            • G06T15/04 - Texture mapping
          • G06T5/00 - Image enhancement or restoration
            • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T7/00 - Image analysis
            • G06T7/40 - Analysis of texture
              • G06T7/41 - Analysis of texture based on statistical description of texture
            • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
          • G06T2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T2207/10 - Image acquisition modality
              • G06T2207/10004 - Still image; Photographic image
              • G06T2207/10024 - Color image
            • G06T2207/20 - Special algorithmic details
              • G06T2207/20212 - Image combination
                • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a realistic three-dimensional color texture reconstruction method, comprising: pre-calibrating the system parameters of a three-dimensional sensor and a color texture camera; acquiring multi-view three-dimensional images and two-dimensional color texture images; generating a three-dimensional mesh model from the multi-view three-dimensional images; establishing, from the system parameters, the mapping relationship between the two-dimensional color texture images and the three-dimensional mesh model at each viewing angle; performing texture fusion based on the mapping relationship to obtain fused images and thereby reconstruct the color texture of the overall three-dimensional model; and generating the corresponding texture maps according to the mapping relationship. The invention introduces composite weight parameters to evaluate the confidence of texture colors. Texture discontinuities are removed by weighted averaging of the projected texture images. For imprecise geometry, a bidirectional similarity function describing the structural similarity of two images is introduced to correct inconsistencies and generate realistic textures.

Description

A Realistic Three-Dimensional Color Texture Reconstruction Method

Technical Field

The invention belongs to the field of electronic technology, and more specifically relates to a realistic three-dimensional color texture reconstruction method.

Background

Three-dimensional measurement technology is widely used across industries and disciplines such as urban surveying, anthropometry, and prototyping, and optical three-dimensional measurement provides a flexible way to acquire three-dimensional images. With the development of high-performance optoelectronic devices such as charge-coupled devices (CCD) and digital light processing (DLP) projectors, optical three-dimensional measurement can deliver high-sensitivity, high-speed data. Structured-light three-dimensional measurement with dynamic, spatially varying illumination patterns is widely used in three-dimensional measurement systems; the patterns can be periodic fringes, two-dimensional grids, or random speckles. The geometry of the object is encoded by the distorted structured-light pattern so that it can be accurately demodulated from the captured images.

However, three-dimensional geometric measurement alone does not recover color. In general, multi-view images are captured and mapped onto the geometric surface to generate color information independently of the geometric reconstruction. Any error in the geometry or the camera poses can cause misalignment of the mapping, and inconsistent illumination between views can produce unrealistic colors; both problems lead to texture artifacts such as blurring, ghosting, and color discontinuities.

A variety of texture reconstruction methods have been proposed to address these problems. Color consistency can be improved by image fusion in the spatial or frequency domain, but such methods sacrifice image sharpness. Image stitching with Markov-random-field optimization is another way to avoid image degradation, yet visible seams cannot be completely eliminated. Post-processing is commonly used to adjust the color of texture patches at seams, for example Poisson blending, heat diffusion, and other color-adjustment methods. Misalignment can be corrected by optimizing the camera poses, for example through manual camera calibration, mutual-information-based methods, and methods that maximize color consistency. Some methods handle deviations by correcting the input images with non-rigid calibration techniques; for example, optical-flow image warping has been introduced to address image deviation. In addition, super-resolution methods have been proposed to overcome blurring, and a recent approach proposes a patch-based optimization method for multi-image texture mapping. Although these methods employ various means to eliminate artifacts, they all require manual intervention, which reduces efficiency.

To solve at least one of the above problems, this document proposes a realistic three-dimensional color texture reconstruction method.

Summary of the Invention

To solve the above problems, the present invention provides a realistic three-dimensional color texture reconstruction method comprising: pre-calibrating the system parameters of a three-dimensional sensor and a color texture camera; acquiring multi-view three-dimensional images and two-dimensional color texture images; generating a three-dimensional mesh model from the multi-view three-dimensional images; establishing, from the system parameters, the mapping relationship between the two-dimensional color texture images and the three-dimensional mesh model at each viewing angle; performing texture fusion based on the mapping relationship to obtain fused images and thereby reconstruct the color texture of the overall three-dimensional model; and generating the corresponding texture maps according to the mapping relationship.

In some embodiments, the texture fusion evaluates the confidence of each texture pixel with composite weights defined from the depth data, and computes the fusion result as a weighted average over the confidences at the individual viewing angles. The composite weight is calculated as

$$f(x_k) = f_{norm}(x_k) \cdot f_{depth}(x_k) \cdot f_{edge}(x_k)$$

where the normal weight is $f_{norm}(x_k) = \frac{1}{1 + e^{a(\Delta\theta_k - b)}}$, the depth weight is $f_{depth}(x_k) = \frac{1}{1 + e^{a(D(d,\,d_0) - b)}}$, and the edge weight is $f_{edge}(x_k) = \frac{1}{1 + e^{a(D(x_k) - b)}}$, and each weight is normalized to the range [0, 1]. The coefficients in the normal weight take the values a = 0.1, b = 50°; the coefficients in the depth weight take the values a = 0.4, b = 50 mm, d_0 = 55 mm; and the coefficients in the edge weight take the values a = -0.08, b = 50 mm.

In some embodiments, target images are introduced between the original two-dimensional color texture images and the fused images. According to the displacement of the fused image at each viewing angle, a bidirectional similarity function $E_{BDS}(S, T)$ is applied to the reconstruction of the original two-dimensional color texture images to build an energy function; minimizing this energy function displaces the target images globally, reducing the image blur of the whole-model texture fusion and finally yielding new high-resolution target images.

In some embodiments, the energy function further comprises a photometric consistency function $E_C$:

$$E_C(i) = \sum_{x_k} \sum_{j=1}^{N} w_j(x_k) \left\| M_i(x_k) - P_i(T_j)(x_k) \right\|^2$$

where $M_i$ denotes the fused image at the $i$-th viewing angle, $x_k$ the pixel position of the image, $P(\cdot)$ the projection function, $N$ the number of viewing angles, and $w_j$ the composite weight of the image at the $j$-th viewing angle.

In some embodiments, the energy function E is finally constructed as

$$E = E_1 + \lambda E_2$$

where λ is the scale factor between the two energy terms $E_1$ and $E_2$.

In some embodiments, the objective function is solved with a two-step alternating optimization strategy, i.e., the following two steps are iterated: S1: fix the fused images $M_i$ and optimize the target images $T_i$; S2: fix the target images $T_i$ and optimize the fused images $M_i$.

In step S1 the target image $T_i$ is expressed as

$$T_i(x_k) = \frac{\frac{1}{L}\left(\alpha \sum_{u=1}^{U} S_i(y_u) + \sum_{v=1}^{V} S_i(y_v)\right) + \lambda \sum_{j=1}^{N} w_i(x_k)\, P_i(M_j)(x_k)}{\frac{\alpha U + V}{L} + \lambda \sum_{j=1}^{N} w_i(x_k)}$$

In step S2 the fused image $M_i$ is expressed as

$$M_i(x_k) = \frac{\sum_{j=1}^{N} w_j(x_k)\, P_i(T_j)(x_k)}{\sum_{j=1}^{N} w_j(x_k)}$$

In some embodiments, a multi-scale optimization scheme is used during the iterations: at the low-scale stage, all images are downsampled to a low resolution and the above iteration is carried out. Once the energy function E has converged, the target images and the fused images are upsampled to a larger scale while the original two-dimensional color texture images are downsampled anew, so that the high-frequency information of the original two-dimensional color texture images is injected into the target images and the fused images.

Beneficial effects of the invention: a realistic three-dimensional color texture reconstruction method is proposed that introduces composite weight parameters to evaluate the confidence of texture colors. Texture discontinuities are removed by weighted averaging of the projected texture images. For imprecise geometry, a bidirectional similarity (BDS) function describing the structural similarity of two images is introduced to correct inconsistencies and generate realistic textures.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of a realistic three-dimensional color texture reconstruction method according to an embodiment of the invention.

Fig. 2 shows the various weight curves according to an embodiment of the invention.

Fig. 3 is a flowchart of the BSF-based color texture fusion algorithm according to an embodiment of the invention.

Detailed Description

The invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its applications.

Fig. 1 is a schematic diagram of a realistic three-dimensional color texture reconstruction method according to an embodiment of the invention.

In step 101, the system parameters of the three-dimensional sensor and the color texture camera are pre-calibrated. The three-dimensional sensor acquires depth images, i.e., three-dimensional images; it may be a binocular-vision three-dimensional sensor based on structured light, for example one composed of a digital fringe projector and two cameras. Of course, any sensor capable of capturing three-dimensional images can be used, such as a monocular structured-light sensor or a time-of-flight sensor. The color texture camera captures high-resolution texture information of the object; a high-resolution SLR camera, for example, may be used. Calibration covers both the intrinsic parameters of the three-dimensional sensor and the color texture camera and the relative extrinsic parameters between the two.

In one embodiment, the three-dimensional sensor contains three cameras: two monochrome industrial cameras used to generate the depth data, and a third, color camera used to capture texture images. Calibration of the three-dimensional sensor is the prerequisite for the subsequent depth-data generation and texture fusion. Starting from the mathematical model of a single camera, the calibration of the binocular sensor formed by the monochrome cameras and the principle of color texture camera calibration are described below.

Camera model.

If the diffraction effects of the imaging system are ignored and the camera lens is assumed to strictly satisfy the paraxial condition, camera imaging is equivalent to pinhole imaging and the imaging process satisfies a perspective projection transformation. Let an object point be denoted $X_w = (X_w, Y_w, Z_w)^T$ in the world coordinate system and let its ideal image point in the image coordinate system be $m_c = (u, v)^T$; the imaging process is then expressed as

$$\lambda \tilde{m}_c = K_c [R_c \mid t_c] \tilde{X}_w = M_c \tilde{X}_w, \qquad K_c = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{1}$$

where the tilde denotes homogeneous coordinates; $X_c$ denotes the coordinates of the object point in the camera coordinate system and $x_c$ is the projection of $X_c$ onto the camera image plane; $R_c$ and $t_c$ are the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, called the extrinsic parameters of the camera; $K_c$ is the camera intrinsic matrix, comprising the equivalent focal lengths $(f_u, f_v)^T$ along the image coordinate axes, the projection of the optical center onto the image plane, i.e., the principal point $(u_0, v_0)^T$, and the skew factor $\gamma$; $M_c = K_c [R_c \mid t_c]$ is called the projection matrix and contains both the intrinsic and extrinsic parameters of the camera; and $\lambda = Z_c$ is a scale factor.
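As an illustration of this projection model, the following minimal sketch (Python with NumPy; the function name and the sample intrinsics are illustrative, not taken from the patent) projects a world point through $K_c[R_c \mid t_c]$:

```python
import numpy as np

def project_point(Xw, K, R, t):
    """Pinhole projection: lambda * m~ = K [R | t] Xw~, with lambda = Zc."""
    Xc = R @ Xw + t                      # world -> camera coordinates
    x = Xc[:2] / Xc[2]                   # normalized image point x_c
    m = K @ np.array([x[0], x[1], 1.0])  # apply intrinsics K_c
    return m[:2]                         # pixel coordinates (u, v)

# Illustrative intrinsics: fu = fv = 1200, principal point (640, 480), zero skew
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_point(np.array([0.1, -0.05, 1.0]), K, R, t))  # -> [760., 420.]
```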

In a real optical imaging system, factors such as the manufacturing process and mechanical assembly of the imaging lens inevitably cause a deviation between the actual image plane and the ideal image plane described above; this deviation is called camera lens distortion. The classic Brown-Conrady model is the most widely used lens distortion model, in which the distortion is expressed as

$$x'_c = x_c + \Delta(x_c) \tag{2}$$

where $x'_c = (x'_c, y'_c)^T$ denotes the distorted image point; $\Delta(x_c)$ denotes the distortion term, comprising radial distortion and decentering distortion; $r$ is the distance from the undistorted image point to the principal point; and $(k_1, k_2, k_3, \ldots)$ and $(p_1, p_2, p_3, \ldots)$ are the radial and decentering distortion parameters, respectively. Three radial terms and two decentering terms usually already meet the accuracy requirements. Letting $k = (k_1, k_2, k_3, p_1, p_2)^T$ denote the distortion parameter vector, the camera model including lens distortion is expressed as

$$\lambda \tilde{m}_c = K_c\, \tilde{x}'_c, \qquad x'_c = x_c + \Delta(x_c; k) \tag{3}$$

In the nonlinear camera model of Eq. (3), $k$ and $K_c$ are the intrinsic parameters of the camera, and $R_c$ and $t_c$ its extrinsic parameters. Camera calibration usually minimizes the reprojection error from the target fiducial points to the actual image points, i.e., $X_c \to m_c$, so the analytical expression $x'_c = x_c + \Delta(x_c; k)$ can be used directly to add distortion to the ideal points, i.e., to compute the distorted point $x'_c$ from the undistorted point $x_c$. The three-dimensional reconstruction process, in contrast, reconstructs the spatial coordinates of object points from the actual image points, i.e., $m_c \to X_c$, which requires removing the distortion: the undistorted point $x_c$ must be computed from the distorted actual point $x'_c$. Since Eq. (2) is a complex nonlinear function, an analytical expression for its inverse cannot be obtained; considering that the distortion term is relatively small, a numerical solution for the undistorted point can be obtained by recursive approximation:

$$x_c^{(0)} = x'_c, \qquad x_c^{(n+1)} = x'_c - \Delta\big(x_c^{(n)}; k\big) \tag{4}$$

Using the distorted point as the initial approximation of the undistorted point, a sufficiently accurate undistorted point is obtained by controlling the number of iterations.
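A minimal sketch of this recursive undistortion, assuming the standard Brown-Conrady expansion for $\Delta(x_c; k)$ with three radial and two decentering coefficients (sign conventions vary between implementations):

```python
import numpy as np

def distort(x, k):
    """Brown-Conrady distortion term Delta(x_c; k): three radial (k1, k2, k3)
    and two decentering (p1, p2) coefficients; r is measured from the
    principal point in normalized image coordinates."""
    k1, k2, k3, p1, p2 = k
    xd, yd = x
    r2 = xd * xd + yd * yd
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = xd * radial + p1 * (r2 + 2 * xd * xd) + 2 * p2 * xd * yd
    dy = yd * radial + p2 * (r2 + 2 * yd * yd) + 2 * p1 * xd * yd
    return np.array([dx, dy])

def undistort(x_dist, k, n_iter=5):
    """Recursive approximation: start from the distorted point and iterate
    x <- x' - Delta(x) toward the undistorted fixed point."""
    x_dist = np.asarray(x_dist, dtype=float)
    x = x_dist.copy()              # initial guess: the distorted point itself
    for _ in range(n_iter):
        x = x_dist - distort(x, k)
    return x
```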

Binocular sensor calibration.

A three-dimensional sensor based on binocular stereo vision is formed by a left and a right camera, and the world coordinate system is usually attached to one of the cameras. Taking the left camera as an example, the mathematical model of the binocular sensor is expressed as

$$X_l = R_l X_w + t_l \tag{5}$$

where $I$ is the identity matrix (with the world frame on the left camera, the extrinsics of the left camera are $[I \mid 0]$), and $R_s$ and $t_s$ are the rotation matrix and translation vector from the left camera coordinate system to the right camera coordinate system; $[R_s \mid t_s]$ represents the structural parameters of the sensor and satisfies

$$R_s = R_r R_l^{T}, \qquad t_s = t_r - R_r R_l^{T} t_l \tag{6}$$

The goal of binocular calibration is to determine the intrinsic parameters of the two cameras and the structural parameters between them.

The camera calibration process usually takes the reprojection error between the projections of the target fiducial points and the actually measured image points as the optimization objective, obtaining optimal estimates of the intrinsic and extrinsic parameters from the target image data. The objective function of binocular sensor calibration is expressed as

$$\min \sum \left( \left\| m'_l - \hat{m}_l \right\|^2 + \left\| m'_r - \hat{m}_r \right\|^2 \right) \tag{7}$$

where $m'_l$ and $m'_r$ are the measured image coordinates and $\hat{m}_l$, $\hat{m}_r$ are the reprojected image coordinates of the fiducial points computed from the model. Eq. (7) can be optimized with the Gauss-Newton or Levenberg-Marquardt algorithm, finally yielding the system parameters.
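This refinement can be sketched with SciPy's Levenberg-Marquardt solver; `unpack_params` and `project_with_distortion` are hypothetical helpers standing in for the parameter encoding and the distorted camera model of Eq. (3):

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, board_points, obs_left, obs_right):
    """Stacked left/right reprojection errors of the binocular objective.
    unpack_params (hypothetical) decodes the vector into K_l, k_l, K_r, k_r,
    R_s, t_s and one pose [R_l | t_l] per target view;
    project_with_distortion (hypothetical) applies the distorted pinhole model."""
    Kl, kl, Kr, kr, Rs, ts, poses = unpack_params(params)
    res = []
    for (R_l, t_l), m_l, m_r in zip(poses, obs_left, obs_right):
        for Xw, ml, mr in zip(board_points, m_l, m_r):
            Xl = R_l @ Xw + t_l                                   # left-camera frame
            res.append(ml - project_with_distortion(Xl, Kl, kl))
            res.append(mr - project_with_distortion(Rs @ Xl + ts, Kr, kr))
    return np.concatenate(res)

# Levenberg-Marquardt refinement, started from the linear-model estimate x0:
# result = least_squares(reprojection_residuals, x0, method='lm',
#                        args=(board_points, obs_left, obs_right))
```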

Color texture camera calibration.

The color texture camera captures the color two-dimensional image information of the object; by establishing the mapping from the three-dimensional geometric model to the two-dimensional color image, the color information of the three-dimensional mesh is obtained, finally realizing color three-dimensional imaging. To realize this mapping, the intrinsic and structural parameters of the color texture camera must be solved in advance, i.e., the color texture camera must be calibrated. In general, a color three-dimensional sensor can acquire color images of an object in two ways: (1) the two cameras of the binocular sensor are themselves color cameras and acquire depth information and color texture simultaneously, in which case the binocular calibration already determines the intrinsic and extrinsic parameters and no extra calibration is needed; (2) the binocular cameras are separate from the color camera, which then requires an additional calibration. Both approaches have advantages and disadvantages in practice. The first is structurally simple and inexpensive, but because the color camera is subject to the Bayer-filter imaging principle, a grayscale image converted from a color image has lower gray-level accuracy than one captured directly by a monochrome camera, which degrades the accuracy of the depth data; moreover, considering the three-dimensional measurement speed, the resolution of the industrial cameras in the binocular sensor cannot be too high, which limits the resolution of the color texture images. The second approach is structurally more complex and more expensive, but since it is not constrained by the depth-data acquisition, a dedicated color camera, such as a professional SLR, can be chosen as required, achieving very high resolution and color fidelity.

Let the left camera coordinate system be the coordinate system of the three-dimensional sensor; the structural parameters of the three cameras then satisfy

$$R_t = R_p R_l, \qquad t_t = R_p t_l + t_p \tag{8}$$

where $R_t$ and $t_t$ are the rotation matrix and translation vector from the world coordinate system to the color camera coordinate system, and $R_p$ and $t_p$ are the rotation matrix and translation vector between the left camera and the color texture camera. To obtain higher-accuracy structural parameters, the transformation matrices are incorporated into the nonlinear objective function of the three cameras, and the camera parameters are estimated by minimizing it with the Gauss-Newton or Levenberg-Marquardt method:

$$\min_{\tau} \sum \left( \left\| m'_l - \hat{m}_l \right\|^2 + \left\| m'_r - \hat{m}_r \right\|^2 + \left\| m'_c - \hat{m}_c \right\|^2 \right) \tag{9}$$

where $\tau = \{K_l, K_r, K_c, k_l, k_r, k_c, R_s, t_s, R_p, t_p\}$, and $K_c$, $k_c$ are the intrinsic parameters of the color camera.

General procedure of three-dimensional sensor calibration

Taking a planar target with circular fiducial points as an example, the calibration procedure is as follows:

(1) Fiducial extraction from target images: capture several groups of target images, extract the circle-center coordinates in each image, and match them to the known three-dimensional coordinates of the fiducial points; the three-dimensional and image coordinates of the fiducials serve as the input for solving and optimizing the system parameters of the sensor.

(2) Initial camera parameters: first ignore lens distortion and estimate the intrinsic and extrinsic parameters with the linear camera model; to prevent overfitting of the objective function in the next step and to accelerate its convergence, refine the estimates with a least-squares algorithm, and take the result as the initial values of the camera parameters and sensor structural parameters.

(3) Nonlinear optimization of the sensor parameters: add lens distortion to the camera model, construct the optimization objective from the nonlinear camera model and the sensor structural parameters, and obtain the optimal estimate of the sensor parameters by minimizing the objective.

Returning to Fig. 1, in step 102 the three-dimensional sensor and the color texture camera capture the object from multiple viewpoints to obtain the multi-view three-dimensional images and two-dimensional color texture images. In one embodiment, the measured object is placed on a turntable; while the turntable rotates, the sensor and the camera capture the object, so that three-dimensional images and two-dimensional color texture images covering 360 degrees of the object are collected from multiple viewpoints.

Step 103 generates a three-dimensional mesh model from the multi-view three-dimensional images.

Step 104 establishes, from the system parameters, the mapping relationship between the two-dimensional color texture images and the three-dimensional mesh model at each viewing angle, thereby parameterizing the mesh.

Step 105 performs texture fusion based on the mapping relationship to obtain fused images and thus reconstruct the color texture of the overall three-dimensional model. In one embodiment, the multi-view two-dimensional color texture images are projected onto the three-dimensional mesh model according to the mapping relationship; once the images from all viewing angles have been projected, the color three-dimensional texture model is obtained, i.e., the color texture of the overall model is reconstructed.
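A naive sketch of this mapping step: each mesh vertex is projected into every calibrated view with its projection matrix and the sampled colors are averaged. The function name is illustrative, and per-view visibility/occlusion testing, which a real pipeline needs, is omitted:

```python
import numpy as np

def map_vertex_colors(vertices, images, projections):
    """For each mesh vertex, project into every calibrated view with its
    3x4 projection matrix M_c and average the sampled colors
    (nearest-neighbour sampling, no occlusion handling)."""
    colors = np.zeros((len(vertices), 3))
    for n, Xw in enumerate(vertices):
        samples = []
        for img, M in zip(images, projections):
            m = M @ np.append(Xw, 1.0)           # homogeneous projection
            u, v = m[0] / m[2], m[1] / m[2]      # pixel coordinates
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
                samples.append(img[vi, ui])
        if samples:
            colors[n] = np.mean(samples, axis=0)
    return colors
```

Replacing this plain average with the composite-weight average described below is precisely what removes the visible color jumps.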

Step 106 generates the corresponding texture maps according to the mapping relationship. For convenient storage, several texture maps are finally generated from the mapping relationship, and the three-dimensional mesh model and texture maps are saved in formats such as obj, ply, or wrl. It should be understood that, in practice, the model frequently needs to be re-parameterized and new texture maps generated.

Averaging all texture images for which a mapping has been established is a direct and simple global texture fusion method, but in practice, owing to uneven illumination and variations in object shape, the brightness of the object-surface images captured by the color camera is inconsistent, which produces clearly visible color jumps in the fusion result. To achieve realistic three-dimensional color texture reconstruction, this patent proposes texture fusion based on composite weights as an effective way to eliminate texture color jumps and achieve natural transitions across the texture boundaries of different views: the confidence of each texture pixel is evaluated with composite weights defined from the depth data, and the fusion result is computed as a weighted average over the confidences of the individual viewing angles.

The Sigmoid kernel function (also called the logistic function) is introduced here to assign the composite weights. It is defined as

$$f(x) = \frac{1}{1 + e^{a(x - b)}} \tag{11}$$

where $f(\cdot) \in (0, 1)$, and the coefficients $a$ and $b$ are real numbers controlling the shape of the Sigmoid curve. The Sigmoid kernel is introduced into the composite weight so that the weight curves can be flexibly shaped through the coefficients to meet practical requirements. The composite weight comprises a normal weight, a depth weight, and an edge weight.

The normal weight assigns weights according to the angle between the surface normal of the object and the viewing direction of the camera. According to the classic bidirectional reflectance distribution function (BRDF) model, the brightness of the object surface captured by the camera depends on the incidence angle of the light source, the surface normal, the camera viewing direction, the surface reflectance, and other parameters. In practice, however, accurate values of these parameters are hard to obtain, such as the exact spatial position of the light source or the true reflectance of the object surface. The normal weight is therefore approximated by a Sigmoid function. Let $\Delta\theta_k$ be the angle between the surface normal and the camera viewing direction within an effective image region; the normal weight then satisfies

$$f_{norm}(x_k) = \frac{1}{1 + e^{a(\Delta\theta_k - b)}} \tag{12}$$

where $x_k$ is a pixel in the image; the larger the angle, the lower the weight. The normal weight curve is shown in Fig. 2(a), with coefficients a = 0.1 and b = 50°.

The depth weight assigns weights according to the distance from the object surface to the imaging plane of the camera. Since the camera imaging model is constrained by the depth of field (DOF), regions of the object surface that fall outside the lens's depth of field become blurred by defocus, which degrades the texture fusion quality. The depth weight is referenced to the optimal imaging distance: the larger the deviation, the smaller the weight; the weight decays slowly within the depth of field, decays quickly near the DOF boundary, and cuts off rapidly beyond it. The depth weight is defined as

$$f_{depth}(x_k) = \frac{1}{1 + e^{a(D(d,\,d_0) - b)}} \tag{13}$$

where $D(\cdot)$ denotes the shortest distance from the object surface point $d$ to the optimal imaging reference plane $d_0$. The depth weight curve is shown in Fig. 2(b), with coefficients a = 0.4, b = 50 mm, and d_0 = 55 mm.

The edge weight assigns weights according to the shortest Euclidean distance from the target pixel to the edge contour of the effective image region. Since texture fusion across different viewing angles easily produces brightness jumps at the edges, the closer the target pixel is to the edge contour of the effective region, the lower its weight. The edge weight is defined as

$$f_{edge}(x_k) = \frac{1}{1 + e^{a(D(x_k) - b)}} \tag{14}$$

where $D(\cdot)$ denotes the shortest distance from the pixel $x_k$ to the edge contour. The edge weight markedly reduces color jumps at texture boundaries and is a very important weight function in texture fusion. The edge weight curve is shown in Fig. 2(c), with coefficients a = -0.08 and b = 50 mm.

The composite weight is constructed by multiplying the three weights above:

$$f(x_k) = f_{norm}(x_k) \cdot f_{depth}(x_k) \cdot f_{edge}(x_k) \tag{15}$$

Each weight is normalized so that its value lies in [0, 1].
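A short sketch of the composite weight with the coefficients quoted above, assuming the Sigmoid kernel in the form $f(x) = 1/(1 + e^{a(x-b)})$ used in this description (the function names are illustrative):

```python
import numpy as np

def sigmoid_kernel(x, a, b):
    """f(x) = 1 / (1 + exp(a * (x - b))); a > 0 gives a decreasing curve,
    a < 0 an increasing one."""
    return 1.0 / (1.0 + np.exp(a * (x - b)))

def composite_weight(delta_theta_deg, depth_dev_mm, edge_dist_mm):
    """f = f_norm * f_depth * f_edge with the coefficients from the text."""
    f_norm  = sigmoid_kernel(delta_theta_deg, a=0.1,   b=50.0)  # view angle [deg]
    f_depth = sigmoid_kernel(depth_dev_mm,    a=0.4,   b=50.0)  # |d - d0| [mm]
    f_edge  = sigmoid_kernel(edge_dist_mm,    a=-0.08, b=50.0)  # dist. to edge [mm]
    return f_norm * f_depth * f_edge

# A frontal, well-focused pixel far from the region boundary gets a weight near 1:
print(composite_weight(10.0, 5.0, 200.0))  # ~0.98
```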

If the texture mapping in every view is sufficiently accurate and the reconstructed geometric model sufficiently fine, texture fusion based on composite weights achieves a good reconstruction. In practice, however, these assumptions are hard to satisfy because of the combined influence of system calibration, single-view depth reconstruction, ICP registration, and the other stages of three-dimensional imaging, which leads to texture misalignment and blurring and degrades the reconstruction quality. This patent therefore further proposes a texture fusion algorithm that combines composite weights with a bidirectional similarity (BDS) function. The main idea is to introduce target images between the original images and the fused images: according to the displacement of the fused image at each viewing angle, the original images are reconstructed with the bidirectional similarity function to build an energy function; minimizing this energy displaces the target images globally, reducing the image blur of the whole-model texture fusion and yielding new target images. The bidirectional similarity function is introduced so that, during target-image reconstruction, the target deforms toward the fused image while retaining as much of the original image information as possible.

In 2008, Simakov et al. defined the bidirectional similarity function as

$$E_{BDS}(S, T) = \frac{\alpha}{L} \sum_{s \subset S} \min_{t \subset T} D(s, t) + \frac{1}{L} \sum_{t \subset T} \min_{s \subset S} D(s, t) \tag{16}$$

where $S$ denotes the original image, $T$ the target image, $s$ and $t$ patches of the original and target images, $D(s, t)$ the sum of squared differences between patches $s$ and $t$ in RGB color space, $\alpha$ the ratio parameter between the two terms, and $L$ the number of pixels in each patch (e.g., $L = 49$ for a 7×7 patch). The first term on the right of Eq. (16) is the completeness term, measuring how completely the target image contains the information of the original image (the lower the value, the more complete); the second is the coherence term, measuring the new visual structure appearing in the target image relative to the original (for example, caused by artifacts; the lower the value, the less new structure). Minimizing this function makes the target image contain as much of the original image information as possible under the visual-coherence constraint.
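For small images, $E_{BDS}$ can be evaluated by brute force as below (an O(n²) reference sketch under the normalization assumed in Eq. (16); practical implementations use approximate nearest-neighbour patch search such as PatchMatch):

```python
import numpy as np

def bds_energy(S, T, patch=7, alpha=1.0):
    """Brute-force E_BDS(S, T): every patch of S is matched to its nearest
    patch of T (completeness) and vice versa (coherence); D(s, t) is the
    SSD in RGB space. Quadratic in the number of patches, so only suitable
    for small images."""
    def patches(img):
        H, W = img.shape[:2]
        return np.array([img[i:i + patch, j:j + patch].ravel()
                         for i in range(H - patch + 1)
                         for j in range(W - patch + 1)], dtype=float)
    Ps, Pt = patches(S), patches(T)
    d2 = ((Ps[:, None, :] - Pt[None, :, :]) ** 2).sum(axis=-1)  # pairwise SSD
    L = patch * patch
    completeness = d2.min(axis=1).sum() / L  # each S patch -> best T match
    coherence    = d2.min(axis=0).sum() / L  # each T patch -> best S match
    return alpha * completeness + coherence
```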

However, relying on the bidirectional similarity function alone does not sufficiently improve the texture fusion quality; the target images across views must also remain photometrically consistent with the fused images. Another energy term is therefore introduced:

$$E_C(i) = \sum_{x_k} \sum_{j=1}^{N} w_j(x_k) \left\| M_i(x_k) - P_i(T_j)(x_k) \right\|^2 \tag{17}$$

where $M_i$ denotes the fused image at the $i$-th viewing angle, $x_k$ the pixel position of the image, and $P(\cdot)$ the projection function; for example, $P_i(T_j)$ projects the target image of the $j$-th view into the $i$-th view. $N$ is the number of views, and $w_j$ is the composite weight of the image at the $j$-th view, generated according to Eq. (15). Extending Eqs. (16) and (17) from a single view to all views, the final energy function is constructed:

$$E = \sum_{i=1}^{N} E_{BDS}(S_i, T_i) + \lambda \sum_{i=1}^{N} E_C(i) \equiv E_1 + \lambda E_2 \tag{18}$$

where λ is the scale factor between the two energy terms $E_1$ and $E_2$. By minimizing the energy E, the target images $T_i$ for all viewing angles are generated such that each target image satisfies two constraints: a similarity constraint, i.e., it contains as much information of the original image as possible (corresponding to $E_1$); and a consistency constraint, i.e., it remains consistent with the fused images (corresponding to $E_2$).

Texture alignment and fusion

As the energy function of Eq. (18) shows, the target images $T_1, \ldots, T_N$ and the fused images $M_1, \ldots, M_N$ are all variables. To find the optimal solution of Eq. (18), a two-step alternating optimization strategy is used: when the target images are optimized, all fused images are held fixed, and when the fused images are generated, all target images are held fixed. The original images are first taken as the initialization of both the target images and the fused images, i.e., $T_i = S_i$, $M_i = S_i$. The two alternating steps are as follows:

Step S1: fix the fused images $M_i$ and optimize the target images $T_i$. At this stage the fused images $M_1, \ldots, M_N$ are treated as known. According to Eq. (18), the target image is related to both energy terms $E_1$ and $E_2$, so its two contributions are solved separately. For Eq. (16), following Simakov's method, a patch search minimizing $D(s, t)$ determines, for every patch of the target image, the corresponding patch of the original image with the smallest error. To describe the solution more clearly, Eq. (16) is rewritten per pixel:

$$E_1(i, x_k) = \frac{\alpha}{L} \sum_{u=1}^{U} \big(T_i(x_k) - S_i(y_u)\big)^2 + \frac{1}{L} \sum_{v=1}^{V} \big(T_i(x_k) - S_i(y_v)\big)^2 \tag{19}$$

where $E_1(i, x_k)$ is the energy at pixel $x_k$ of the target image of the $i$-th view; $s_u$ and $s_v$ are the original-image patches, determined by the patch search, corresponding to the patches of the completeness and coherence terms that cover pixel $x_k$; $y_u$ and $y_v$ are the pixels of $s_u$ and $s_v$ at the position corresponding to $x_k$ within the target patch; and $U$ and $V$ are the numbers of such patches in the completeness and coherence terms, respectively (for 7×7 patches, $U, V \le 49$). Eq. (19) shows that the energy $E_1(i, x_k)$ is a quadratic function of $T_i(x_k)$; differentiating Eq. (19) and setting the derivative to zero gives

$$\frac{2\alpha}{L} \sum_{u=1}^{U} \big(T_i(x_k) - S_i(y_u)\big) + \frac{2}{L} \sum_{v=1}^{V} \big(T_i(x_k) - S_i(y_v)\big) = 0 \tag{20}$$

from which the expression of the target image is obtained:

$$T_i(x_k) = \frac{\frac{1}{L}\left(\alpha \sum_{u=1}^{U} S_i(y_u) + \sum_{v=1}^{V} S_i(y_v)\right)}{\frac{\alpha U + V}{L}} \tag{21}$$

Eq. (21) shows that this contribution to the target image is reconstructed from the information of the original image. Note that the factor $1/L$ is retained in the expression so that it can later be merged with Eq. (24).

The minimization of the energy function $E_2$ is solved similarly. Considering that $P_j(P_i(T_j)) = T_j$, the expression of $E_2$ is rewritten by collecting, per pixel, the terms that contain the target image of one view:

$$E_2(j, x_k) = \sum_{i=1}^{N} w_j(x_k) \big(T_j(x_k) - P_j(M_i)(x_k)\big)^2 \tag{22}$$

Differentiating Eq. (22) and setting the derivative to zero gives

$$T_j(x_k) = \frac{\sum_{i=1}^{N} w_j(x_k)\, P_j(M_i)(x_k)}{\sum_{i=1}^{N} w_j(x_k)} \tag{23}$$

To stay consistent with the expression derived from $E_1$, the symbols $i$ and $j$ are exchanged to obtain the expression of $T_i$:

$$T_i(x_k) = \frac{\sum_{j=1}^{N} w_i(x_k)\, P_i(M_j)(x_k)}{\sum_{j=1}^{N} w_i(x_k)} \tag{24}$$

Again, $w_i(x_k)$ is not cancelled so that the expression can be merged with Eq. (21). Eq. (24) shows that the target image is solved as a weighted average of the texture images projected from all currently related views; this constraint makes the target images align themselves with the fused result.

Finally, differentiating the total energy E and setting the derivative to zero, i.e., $\partial E / \partial T_i(x_k) = 0$, and combining Eqs. (21) and (24), the expression of the target image is obtained:

$$T_i(x_k) = \frac{\frac{1}{L}\left(\alpha \sum_{u=1}^{U} S_i(y_u) + \sum_{v=1}^{V} S_i(y_v)\right) + \lambda \sum_{j=1}^{N} w_i(x_k)\, P_i(M_j)(x_k)}{\frac{\alpha U + V}{L} + \lambda \sum_{j=1}^{N} w_i(x_k)} \tag{25}$$

Step S2: fix the target images $T_i$ and optimize the fused images $M_i$. At this stage the fused images $M_1, \ldots, M_N$ are the optimization variables. According to Eq. (18), the fused images are related only to the energy term $E_2$, so a similar derivation gives the generation formula of the fused image:

$$M_i(x_k) = \frac{\sum_{j=1}^{N} w_j(x_k)\, P_i(T_j)(x_k)}{\sum_{j=1}^{N} w_j(x_k)} \tag{26}$$

Eq. (26) shows that the fused image is the weighted average of the target images from all related views. At the start of the iteration, if the target images of the different views are misaligned, the fused images exhibit ghosting and blurring; during the iteration, the target images of each view are aligned against the fused images, so ghosting and blurring are progressively reduced as the fused images are rebuilt, until the energy function E falls below a preset threshold c, at which point the iteration is judged to have converged.
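The overall alternation can be sketched as follows; `update_target`, `fuse_weighted`, and `total_energy` are hypothetical helpers standing in for the closed-form per-pixel updates of Eqs. (25) and (26) and the energy of Eq. (18):

```python
def optimize_all_views(S, P, w, lam=0.1, eps=1e-3, max_iter=50):
    """Two-step alternating optimization over all N views.
    S: original images, P: projection functions, w: composite weight maps."""
    T = [s.copy() for s in S]    # target images, initialized to the originals
    M = [s.copy() for s in S]    # fused images, initialized to the originals
    for _ in range(max_iter):
        # S1: fix M, update every target image T_i (patch search + Eq. (25))
        T = [update_target(i, S, T, M, P, w, lam) for i in range(len(S))]
        # S2: fix T, refresh every fused image M_i (weighted average, Eq. (26))
        M = [fuse_weighted(i, T, P, w) for i in range(len(S))]
        if total_energy(S, T, M, P, w, lam) < eps:   # convergence threshold c
            break
    return T, M
```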

To avoid local optima and accelerate convergence, a multi-scale optimization scheme is used during the iterations. At the low-scale stage, all images are downsampled to a low resolution and the above iteration is carried out. Once the energy function E has converged, the target images and fused images are upsampled to a larger scale while the original images are downsampled anew, so that the high-frequency information of the original images is injected into the target images and the fused images. In some embodiments, the fused images are blurred at the initial stage and the gray-level profiles show clear ripples; as the iterations proceed across scales, the images become sharper and the contrast of the gray-level profiles is enhanced. In the iteration at the highest scale, the resolution of all images is restored to the initial resolution of the original images, yielding the target images $T_1, \ldots, T_N$ and fused images $M_1, \ldots, M_N$ of all views. In one embodiment, ten scale levels are used for the multi-scale optimization, where the image size $l_i$ along either dimension at level $i$ is computed as

$$l_i = (l_0 / 8) \cdot 8^{(i-1)/9} \tag{27}$$

where $l_0$ is the initial image size of the original image; for example, if the initial resolution of the original image is 5520 pixel × 3680 pixel, the image resolution at scale level 1 is 690 pixel × 460 pixel.
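A quick check of the schedule of Eq. (27):

```python
def level_sizes(l0, levels=10):
    """Per-level image size from Eq. (27): l_i = (l0 / 8) * 8**((i - 1) / 9)."""
    return [round((l0 / 8) * 8 ** ((i - 1) / 9)) for i in range(1, levels + 1)]

print(level_sizes(5520))  # level 1 -> 690, level 10 -> 5520 (full resolution)
print(level_sizes(3680))  # level 1 -> 460, level 10 -> 3680
```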

After the iterative two-step procedure above, new highest-resolution target images are finally obtained for all views, and these target images are already aligned and optimized; the composite-weight fusion algorithm is then applied to fuse all target images, producing the final texture maps of the three-dimensional model. The overall flow of the BSF-based color texture fusion algorithm is shown in Fig. 3.

The above is a further detailed description of the invention in connection with specific/preferred embodiments; it should not be concluded that the specific implementation of the invention is limited to these descriptions. A person of ordinary skill in the art to which the invention belongs may make several substitutions or modifications to the described embodiments without departing from the concept of the invention, and such substitutions or modifications shall be deemed to fall within the protection scope of the invention.

Claims (10)

1. A realistic three-dimensional color texture reconstruction method, comprising: pre-calibrating the system parameters of a three-dimensional sensor and a color texture camera; acquiring multi-view three-dimensional images and two-dimensional color texture images; generating a three-dimensional mesh model from the multi-view three-dimensional images; establishing, from the system parameters, the mapping relationship between the two-dimensional color texture images and the three-dimensional mesh model at each viewing angle; performing texture fusion based on the mapping relationship to obtain fused images and thereby reconstruct the color texture of the overall three-dimensional model; and generating the corresponding texture maps according to the mapping relationship.

2. The realistic three-dimensional color texture reconstruction method according to claim 1, wherein the texture fusion evaluates the confidence of each texture pixel with composite weights defined from the depth data, and computes the fusion result as a weighted average over the confidences at the individual viewing angles.

3. The realistic three-dimensional color texture reconstruction method according to claim 2, wherein the composite weight is calculated as

$$f(x_k) = f_{norm}(x_k) \cdot f_{depth}(x_k) \cdot f_{edge}(x_k)$$

where the normal weight is $f_{norm}(x_k) = \frac{1}{1 + e^{a(\Delta\theta_k - b)}}$, the depth weight is $f_{depth}(x_k) = \frac{1}{1 + e^{a(D(d,\,d_0) - b)}}$, and the edge weight is $f_{edge}(x_k) = \frac{1}{1 + e^{a(D(x_k) - b)}}$, and each weight is normalized to the range [0, 1].

4. The realistic three-dimensional color texture reconstruction method according to claim 3, wherein the coefficients in the normal weight take the values a = 0.1, b = 50°; the coefficients in the depth weight take the values a = 0.4, b = 50 mm, d_0 = 55 mm; and the coefficients in the edge weight take the values a = -0.08, b = 50 mm.

5. The realistic three-dimensional color texture reconstruction method according to claim 1, wherein target images are introduced between the original two-dimensional color texture images and the fused images; according to the displacement of the fused image at each viewing angle, a bidirectional similarity function $E_{BDS}(S, T)$ is applied to the reconstruction of the original two-dimensional color texture images to build an energy function, and minimizing the energy function displaces the target images globally, reducing the image blur of the whole-model texture fusion and finally yielding new high-resolution target images.

6. The realistic three-dimensional color texture reconstruction method according to claim 5, wherein the energy function further comprises a photometric consistency function $E_C$:

$$E_C(i) = \sum_{x_k} \sum_{j=1}^{N} w_j(x_k) \left\| M_i(x_k) - P_i(T_j)(x_k) \right\|^2$$

where $M_i$ denotes the fused image at the $i$-th viewing angle, $x_k$ the pixel position of the image, $P(\cdot)$ the projection function, $N$ the number of viewing angles, and $w_j$ the composite weight of the image at the $j$-th viewing angle.

7. The realistic three-dimensional color texture reconstruction method according to claim 6, wherein the energy function E is finally constructed as

$$E = E_1 + \lambda E_2$$

where λ is the scale factor between the two energy terms $E_1$ and $E_2$.

8. The realistic three-dimensional color texture reconstruction method according to claim 7, wherein the objective function is solved with a two-step alternating optimization strategy, i.e., the following two steps are iterated: S1: fixing the fused images $M_i$ and optimizing the target images $T_i$; S2: fixing the target images $T_i$ and optimizing the fused images $M_i$.

9. The realistic three-dimensional color texture reconstruction method according to claim 8, wherein in step S1 the target image $T_i$ is expressed as

$$T_i(x_k) = \frac{\frac{1}{L}\left(\alpha \sum_{u=1}^{U} S_i(y_u) + \sum_{v=1}^{V} S_i(y_v)\right) + \lambda \sum_{j=1}^{N} w_i(x_k)\, P_i(M_j)(x_k)}{\frac{\alpha U + V}{L} + \lambda \sum_{j=1}^{N} w_i(x_k)}$$

and in step S2 the fused image $M_i$ is expressed as

$$M_i(x_k) = \frac{\sum_{j=1}^{N} w_j(x_k)\, P_i(T_j)(x_k)}{\sum_{j=1}^{N} w_j(x_k)}$$

10. The realistic three-dimensional color texture reconstruction method according to claim 9, wherein a multi-scale optimization scheme is used during the iterations: at the low-scale stage, all images are downsampled to a low resolution and the above iteration is carried out; once the energy function E has converged, the target images and the fused images are upsampled to a larger scale while the original two-dimensional color texture images are downsampled anew, so that the high-frequency information of the original two-dimensional color texture images is injected into the target images and the fused images.
CN201910687176.1A 2019-07-29 2019-07-29 Realistic three-dimensional color texture reconstruction method Pending CN110599578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910687176.1A CN110599578A (en) 2019-07-29 2019-07-29 Realistic three-dimensional color texture reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910687176.1A CN110599578A (en) 2019-07-29 2019-07-29 Realistic three-dimensional color texture reconstruction method

Publications (1)

Publication Number Publication Date
CN110599578A 2019-12-20

Family

ID=68852938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910687176.1A Pending CN110599578A (en) 2019-07-29 2019-07-29 Realistic three-dimensional color texture reconstruction method

Country Status (1)

Country Link
CN (1) CN110599578A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539251A (en) * 2020-03-16 2020-08-14 重庆特斯联智慧科技股份有限公司 Security check article identification method and system based on deep learning
CN113487729A (en) * 2021-07-30 2021-10-08 上海联泰科技股份有限公司 Surface data processing method and system of three-dimensional model and storage medium
CN113538649A (en) * 2021-07-14 2021-10-22 深圳信息职业技术学院 A super-resolution three-dimensional texture reconstruction method, device and equipment
CN114004935A (en) * 2021-11-08 2022-02-01 优奈柯恩(北京)科技有限公司 Method and device for three-dimensional modeling through three-dimensional modeling system
CN114049423A (en) * 2021-10-13 2022-02-15 北京师范大学 An Automatic Texture Mapping Method for Photorealistic 3D Models
CN115546379A (en) * 2022-11-29 2022-12-30 思看科技(杭州)股份有限公司 Data processing method and device and computer equipment
CN115601490A (en) * 2022-11-29 2023-01-13 思看科技(杭州)股份有限公司(Cn) Texture image pre-replacement method and device based on texture mapping and storage medium
CN116758205A (en) * 2023-08-24 2023-09-15 先临三维科技股份有限公司 Data processing method, device, equipment and medium
CN117830392A (en) * 2024-03-05 2024-04-05 季华实验室 Environmental object recognition method and imaging system
CN113592698B (en) * 2021-08-16 2024-04-26 齐鲁工业大学 Sixteen-element moment-based multi-view color image zero watermark processing method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170046868A1 (en) * 2015-08-14 2017-02-16 Samsung Electronics Co., Ltd. Method and apparatus for constructing three dimensional model of object
CN106530395A (en) * 2016-12-30 2017-03-22 碰海科技(北京)有限公司 Depth and color imaging integrated handheld three-dimensional modeling device
CN108470370A (en) * 2018-03-27 2018-08-31 北京建筑大学 The method that three-dimensional laser scanner external camera joint obtains three-dimensional colour point clouds
CN109118578A (en) * 2018-08-01 2019-01-01 浙江大学 A kind of multiview three-dimensional reconstruction texture mapping method of stratification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170046868A1 (en) * 2015-08-14 2017-02-16 Samsung Electronics Co., Ltd. Method and apparatus for constructing three dimensional model of object
CN106530395A (en) * 2016-12-30 2017-03-22 碰海科技(北京)有限公司 Depth and color imaging integrated handheld three-dimensional modeling device
CN108470370A (en) * 2018-03-27 2018-08-31 北京建筑大学 The method that three-dimensional laser scanner external camera joint obtains three-dimensional colour point clouds
CN109118578A (en) * 2018-08-01 2019-01-01 浙江大学 A kind of multiview three-dimensional reconstruction texture mapping method of stratification

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539251A (en) * 2020-03-16 2020-08-14 重庆特斯联智慧科技股份有限公司 Security check article identification method and system based on deep learning
CN113538649A (en) * 2021-07-14 2021-10-22 深圳信息职业技术学院 A super-resolution three-dimensional texture reconstruction method, device and equipment
CN113487729A (en) * 2021-07-30 2021-10-08 上海联泰科技股份有限公司 Surface data processing method and system of three-dimensional model and storage medium
CN113592698B (en) * 2021-08-16 2024-04-26 齐鲁工业大学 Sixteen-element moment-based multi-view color image zero watermark processing method and system
CN114049423A (en) * 2021-10-13 2022-02-15 北京师范大学 An Automatic Texture Mapping Method for Photorealistic 3D Models
CN114004935A (en) * 2021-11-08 2022-02-01 优奈柯恩(北京)科技有限公司 Method and device for three-dimensional modeling through three-dimensional modeling system
CN115546379A (en) * 2022-11-29 2022-12-30 思看科技(杭州)股份有限公司 Data processing method and device and computer equipment
CN115601490A (en) * 2022-11-29 2023-01-13 思看科技(杭州)股份有限公司(Cn) Texture image pre-replacement method and device based on texture mapping and storage medium
CN116758205A (en) * 2023-08-24 2023-09-15 先临三维科技股份有限公司 Data processing method, device, equipment and medium
CN116758205B (en) * 2023-08-24 2024-01-26 先临三维科技股份有限公司 Data processing method, device, equipment and medium
CN117830392A (en) * 2024-03-05 2024-04-05 季华实验室 Environmental object recognition method and imaging system

Similar Documents

Publication Publication Date Title
CN110599578A (en) Realistic three-dimensional color texture reconstruction method
Maier et al. Intrinsic3D: High-quality 3D reconstruction by joint appearance and geometry optimization with spatially-varying lighting
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN106289106B (en) The stereo vision sensor and scaling method that a kind of line-scan digital camera and area array cameras are combined
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
US6750873B1 (en) High quality texture reconstruction from multiple scans
Zhang et al. Projection defocus analysis for scene capture and image display
CN108520537B (en) A binocular depth acquisition method based on photometric parallax
Bimber et al. Enabling view-dependent stereoscopic projection in real environments
CN110458932B (en) Image processing method, device, system, storage medium and image scanning apparatus
CN106780726A (en) The dynamic non-rigid three-dimensional digital method of fusion RGB D cameras and colored stereo photometry
US9147279B1 (en) Systems and methods for merging textures
KR100686952B1 (en) Image Synthesis Method, Apparatus, and Recording Media, and Rendering Method, Apparatus, and Recording Media of Stereoscopic Model
Elstrom et al. Stereo-based registration of ladar and color imagery
CN109945841A (en) An Industrial Photogrammetry Method Without Code Points
Aliaga et al. A self-calibrating method for photogeometric acquisition of 3D objects
KR100681320B1 (en) Three-Dimensional Shape Modeling of Objects Using Level Set Solution of Partial Differential Equations Derived from Helmholtz Exchange Conditions
Lin Automatic 3D color shape measurement system based on a stereo camera
Kim et al. Textureme: High-quality textured scene reconstruction in real time
Inzerillo et al. High quality texture mapping process aimed at the optimization of 3D structured light models
CN108898550B (en) Image splicing method based on space triangular patch fitting
Tyle_ek et al. Refinement of surface mesh for accurate multi-view reconstruction
Wu et al. Unsupervised texture reconstruction method using bidirectional similarity function for 3-D measurements
CN117876562A (en) An Optimized Approach for Texture Mapping of Non-Lambertian Surfaces Based on Consumer-Grade RGB-D Sensors
CN118247429A (en) A method and system for rapid three-dimensional modeling in air-ground collaboration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination