CN112907748B - A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering - Google Patents


Info

Publication number: CN112907748B
Authority: CN (China)
Prior art keywords: image, class, depth, depth image, downsampling
Legal status: Active
Application number: CN202110345048.6A
Other languages: Chinese (zh)
Other versions: CN112907748A
Inventor: 闫涛 (Yan Tao)
Current Assignee: Chuangbai Technology Transfer Shanxi Co ltd
Original Assignee: Shanxi University
Application filed by Shanxi University; priority to CN202110345048.6A; application published as CN112907748A, granted and published as CN112907748B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional (3D) topography reconstruction method based on the non-downsampling shearlet transform (NSST) and clustering of depth image texture features. The method comprises the following steps: Step 1, acquire an image sequence of the scene to be measured; Step 2, set the parameters of the NSST and of the clustering algorithm; Step 3, use the NSST to decompose each image of the sequence into high-frequency coefficients at multiple scales and directions; Step 4, map all high-frequency coefficients into a set of depth images; Step 5, take the five-dimensional vector formed by the contrast, correlation, energy, inverse variance (homogeneity) and entropy of each depth image's gray-level co-occurrence matrix as that image's texture feature; Step 6, obtain K clusters with the K-means algorithm; Step 7, among the clustering results, select the class whose depth images have the smallest average gradient; Step 8, average the depth images of that class to obtain the 3D topography reconstruction of the scene. The invention can achieve a scene-adaptive, optimal 3D topography reconstruction result.

Description

A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Technical Field

The invention belongs to the field of three-dimensional reconstruction, and in particular relates to a 3D topography reconstruction method based on the non-downsampling shearlet transform and clustering of depth image texture features.

Background Art

Methods that measure the 3D topography of a scene from image focus information generally offer low dependence on hardware, easy parallelization of the reconstruction algorithm, and high portability of the reconstruction system. They have been widely applied to part-defect detection in micro-manufacturing and to intelligent zoom in mobile imaging devices, among other fields.

Current 3D topography reconstruction methods based on image focus information concentrate on two aspects: the design of the image focus measure and the construction of the topography reconstruction algorithm. The focus measure is the core of such methods; the accuracy with which it extracts focus information directly determines the quality of the reconstruction. Typical focus measures fall into two broad categories, spatial-domain and frequency-domain. Spatial-domain methods determine, at the pixel level, whether each pixel lies in the in-focus region and then aggregate the positions of all focused pixels into the 3D reconstruction of the scene; they can be roughly divided into Laplacian-based, gradient-based, and statistics-based measures. Frequency-domain methods first decompose each image into high- and low-frequency components and then obtain the reconstruction by mining the relationship between those components and the depth image; they mainly comprise Fourier-transform and wavelet-transform approaches. The topography reconstruction algorithm itself mainly compensates for the discontinuity that the sampling interval of the image sequence introduces into the result; its main representative is Gaussian fitting.

Surveying the state of the art, we consider the main challenge in this field to be the following: existing 3D topography reconstruction methods can usually reconstruct only a single type of scene and do not transfer to reconstruction tasks in other scenes; that is, the reconstruction quality for a given scene depends on how accurately the focus measure was chosen for it. How to devise a scene-adaptive image focus measure is therefore an important open problem in 3D topography reconstruction.

In summary, we believe that selecting the focus measure according to the image characteristics of the scene is the key to the above problem. This patent introduces the non-downsampling shearlet transform to overcome the one-measure-fits-all limitation of traditional 3D topography reconstruction methods: the transform yields multiple focus measures covering arbitrary directions and scales of the image, from which depth images at different scales and directions are obtained; a clustering method over depth image texture features is then proposed to obtain the optimal 3D reconstruction representing the scene under measurement.

Summary of the Invention

To overcome the problems in the above technologies, the object of the present invention is to provide a 3D topography reconstruction method based on the non-downsampling shearlet transform and clustering of depth image texture features.

The technical scheme adopted by the present invention is a 3D topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering, comprising the following steps:

Step 1. Adjust the distance between the camera and the scene to be measured at equal intervals to acquire an image sequence $\{I_i(x,y)\}_{i=1}^{N}$ of the scene at different depths of field, where $i$ indexes the images, $1 \le i \le N$, and $(x,y)$ is the pixel coordinate with $0 \le x, y \le M-1$.

Step 2. Set the maximum decomposition scale of the non-downsampling shearlet transform to $J$ and the maximum number of directions to $L$; choose the filters of the transform; in the clustering algorithm, set the number of clusters to $K$ and take Euclidean distance as the distance metric.

Step 3. Apply the non-downsampling shearlet transform (NSST) to the image sequence $\{I_i(x,y)\}_{i=1}^{N}$ of Step 1, as in Eq. (1); each image yields $J \times L$ high-frequency decomposition coefficients at different scales and directions:

$$F_{i_{high}}^{j,l}(x,y) = \mathrm{NSST}\big(I_i(x,y)\big) \qquad (1)$$

where $j$ is the scale index with $1 \le j \le J$; $l$ is the direction index with $1 \le l \le L$; $F_{i_{high}}^{j,l}$ is the high-frequency decomposition coefficient of the $i$-th image at scale $j$ and direction $l$; $i_{high}$, with $1 \le i_{high} \le N$, is the subscript of the high-frequency coefficient $F_{i_{high}}^{j,l}$; and NSST denotes the non-downsampling shearlet transform.

Step 4. According to Eq. (2), map the $J \times L$ high-frequency coefficients $F_{i_{high}}^{j,l}$ into $J \times L$ depth images $D^{j,l}(x,y)$ at different scales and directions:

$$D^{j,l}(x,y) = \mathop{\arg\max}_{i_{high}} \; \mathrm{abs}\big(F_{i_{high}}^{j,l}(x,y)\big) \qquad (2)$$

where $i_{high}$, with $1 \le i_{high} \le N$, is the index of the high-frequency coefficient corresponding to the $i$-th image; $\arg\max$ solves for the high-frequency coefficient subscript $i_{high}$; and $\mathrm{abs}(\cdot)$ is the absolute-value function.
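Given a coefficient stack, the per-pixel index selection of Eq. (2) is a single argmax over the frame axis. A minimal numpy sketch, over a synthetic stack (the coefficient values here are not from a real NSST):

```python
import numpy as np

def depth_from_coeffs(F):
    """Eq. (2): F has shape (N, J, L, M, M); the depth image for channel
    (j, l) stores, per pixel, the frame index i maximizing abs(F[i, j, l])."""
    return np.argmax(np.abs(F), axis=0)     # shape (J, L, M, M)

# Synthetic stack where frame 2 dominates everywhere in channel (0, 0)
N, J, L, M = 5, 2, 3, 8
F = np.random.default_rng(1).normal(size=(N, J, L, M, M))
F[2, 0, 0] = 10.0
D = depth_from_coeffs(F)
print(D.shape)                  # (2, 3, 8, 8)
print(bool(np.all(D[0, 0] == 2)))   # True: channel (0, 0) picks frame 2
```

The winning frame index is a proxy for depth because, in a focus sequence, the frame with the strongest high-frequency response at a pixel is the one in which that pixel is in focus.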

Step 5. Compute the gray-level co-occurrence matrix of each depth image $D^{j,l}(x,y)$ and, per Eq. (3), take the matrix's contrast $r_{Con}$, correlation $r_{Cor}$, energy $r_{Ene}$, inverse variance (homogeneity) $r_{Hom}$ and entropy $r_{Ent}$ as the five-dimensional feature vector of that depth image; the $J \times L$ depth images yield $J \times L$ five-dimensional feature vectors:

$$V_{j,l} = \big[\,r_{Con},\; r_{Cor},\; r_{Ene},\; r_{Hom},\; r_{Ent}\,\big] = \mathrm{GLCM}\big(D^{j,l}(x,y)\big) \qquad (3)$$

where $\mathrm{GLCM}(\cdot)$ denotes the computation of the gray-level co-occurrence matrix and its statistics, and $V_{j,l}$ is the feature vector of the depth image at scale $j$ and direction $l$.
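The five GLCM statistics of Eq. (3) can be sketched in plain numpy for a single horizontal offset. Two assumptions are made here: $r_{Hom}$ is read as homogeneity (the inverse difference moment), and the image is quantized to 8 gray levels; the patent does not fix either choice.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Five GLCM statistics of Eq. (3) for one horizontal offset (0, 1).
    Assumption: r_Hom is computed as homogeneity (inverse difference moment)."""
    # Quantize to `levels` gray levels
    if img.max() > 0:
        q = np.floor(img / img.max() * (levels - 1)).astype(int)
    else:
        q = np.zeros(img.shape, dtype=int)
    # Co-occurrence counts for horizontally adjacent pixel pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    P = P + P.T                       # make symmetric
    P /= P.sum()                      # normalize to a joint probability
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    con = ((i - j) ** 2 * P).sum()                        # contrast  r_Con
    cor = (((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j)
           if sd_i * sd_j > 0 else 0.0)                   # correlation r_Cor
    ene = (P ** 2).sum()                                  # energy    r_Ene
    hom = (P / (1.0 + (i - j) ** 2)).sum()                # homogeneity r_Hom
    ent = -(P[P > 0] * np.log(P[P > 0])).sum()            # entropy   r_Ent
    return np.array([con, cor, ene, hom, ent])

v = glcm_features(np.arange(64).reshape(8, 8).astype(float))
print(v.shape)   # (5,): the five-dimensional texture vector V_{j,l}
```

A production implementation would typically accumulate the matrix over several offsets and directions; the single-offset version keeps the mapping from formula to code visible.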

Step 6. Cluster the $J \times L$ five-dimensional feature vectors of Step 5 with the K-means algorithm of Eq. (4), obtaining $K$ clustering results $\{C_1, C_2, \ldots, C_K\}$:

$$\{C_1, C_2, \ldots, C_K\} = \mathrm{Kmeans}\big(\{V_{j,l}\}\big) \qquad (4)$$

where $\mathrm{Kmeans}(\cdot)$ denotes the K-means clustering algorithm; class $C_1$ contains a set of $n_1$ depth images $\{D_1^{(1)}, \ldots, D_{n_1}^{(1)}\}$, and so on up to class $C_K$, which contains a set of $n_K$ depth images $\{D_1^{(K)}, \ldots, D_{n_K}^{(K)}\}$, with $n_1 + \cdots + n_K = J \times L$.

Step 7. Compute the average gradient of the depth images in every class obtained in Step 6, and, per Eq. (5), select the class $C_s$ with the smallest average gradient as the final depth-image class:

$$s = \mathop{\arg\min}_{m} \; \mathrm{Gradient}(C_m) \qquad (5)$$

where $\arg\min$ solves for the depth-image class subscript $m$, with $1 \le m \le K$; $\mathrm{Gradient}(\cdot)$ is the gradient function; and $s$ is the index of the class with the smallest average gradient.

Step 8. According to Eq. (6), average all depth images in the minimum-average-gradient class $C_s$ of Step 7 to obtain the final 3D topography reconstruction result $Z(x,y)$ of the scene to be measured:

$$Z(x,y) = \frac{1}{n_s} \sum_{D \in C_s} D(x,y) \qquad (6)$$

where $n_s$ is the number of depth images in the minimum-average-gradient class $C_s$.
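Steps 6 to 8 can be sketched end to end: a minimal K-means with Euclidean distance (as Step 2 specifies) clusters the 5-D texture vectors, the class with the smallest average gradient is chosen per Eq. (5), and its depth maps are averaged per Eq. (6). The depth maps and feature vectors below are synthetic toy data, not outputs of the patent's pipeline.

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    """Minimal K-means with Euclidean distance (Eq. 4); returns labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):               # skip empty clusters
                centers[k] = X[labels == k].mean(0)
    return labels

def reconstruct(depth_maps, feats, K=2):
    """Steps 6-8: cluster the 5-D texture vectors, pick the class with the
    smallest average gradient (Eq. 5), average its depth maps (Eq. 6)."""
    labels = kmeans(feats, K)
    def avg_grad(k):
        ds = depth_maps[labels == k].astype(float)
        if len(ds) == 0:
            return np.inf                         # empty class never wins
        gy, gx = np.gradient(ds, axis=(1, 2))
        return np.abs(gy).mean() + np.abs(gx).mean()
    s = min(range(K), key=avg_grad)               # arg min over classes, Eq. (5)
    return depth_maps[labels == s].mean(0)        # Z(x, y), Eq. (6)

# Toy data: 6 depth maps (one per scale/direction channel) with 5-D features
rng = np.random.default_rng(2)
depth_maps = rng.integers(0, 10, size=(6, 16, 16))
feats = rng.random((6, 5))
Z = reconstruct(depth_maps, feats, K=2)
print(Z.shape)   # (16, 16)
```

Choosing the minimum-gradient class reflects the patent's reasoning that, among the candidate depth images, the smoothest class best represents the scene's true topography; averaging within the class then suppresses the residual noise of individual channels.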

The method of the invention obtains, for each scene to be measured, the optimal 3D topography reconstruction result suited to that scene.

Brief Description of the Drawings

Fig. 1 is a flowchart of the 3D topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering;

Fig. 2 is a schematic diagram of the 3D topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering.

Detailed Description of the Embodiments

As shown in Fig. 1 and Fig. 2, the 3D topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering of this embodiment comprises Steps 1 to 8 exactly as set forth in the Summary of the Invention above; the steps are not repeated here.

Claims (1)

1. A 3D topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering, characterized by comprising the following steps:

(1) adjusting the distance between the camera and the scene to be measured at equal intervals to acquire an image sequence $\{I_i(x,y)\}_{i=1}^{N}$ of the scene at different depths of field, where $i$ indexes the images, $1 \le i \le N$, and $(x,y)$ is the pixel coordinate with $0 \le x, y \le M-1$;

(2) setting the maximum decomposition scale of the non-downsampling shearlet transform to $J$ and the maximum number of directions to $L$, choosing the filters of the transform, and setting, in the clustering algorithm, the number of clusters to $K$ with Euclidean distance as the distance metric;

(3) applying the non-downsampling shearlet transform (NSST) to the image sequence of step (1), as in Eq. (1), each image yielding $J \times L$ high-frequency decomposition coefficients at different scales and directions:

$$F_{i_{high}}^{j,l}(x,y) = \mathrm{NSST}\big(I_i(x,y)\big) \qquad (1)$$

where $j$ is the scale index with $1 \le j \le J$; $l$ is the direction index with $1 \le l \le L$; $F_{i_{high}}^{j,l}$ is the high-frequency decomposition coefficient of the $i$-th image at scale $j$ and direction $l$; and $i_{high}$, with $1 \le i_{high} \le N$, is the subscript of the high-frequency coefficient $F_{i_{high}}^{j,l}$;

(4) mapping, according to Eq. (2), the $J \times L$ high-frequency coefficients $F_{i_{high}}^{j,l}$ into $J \times L$ depth images $D^{j,l}(x,y)$ at different scales and directions:

$$D^{j,l}(x,y) = \mathop{\arg\max}_{i_{high}} \; \mathrm{abs}\big(F_{i_{high}}^{j,l}(x,y)\big) \qquad (2)$$

where $i_{high}$, with $1 \le i_{high} \le N$, is the index of the high-frequency coefficient corresponding to the $i$-th image, $\arg\max$ solves for the subscript $i_{high}$, and $\mathrm{abs}(\cdot)$ is the absolute-value function;

(5) computing the gray-level co-occurrence matrix of each depth image $D^{j,l}(x,y)$ and, per Eq. (3), taking the matrix's contrast $r_{Con}$, correlation $r_{Cor}$, energy $r_{Ene}$, inverse variance (homogeneity) $r_{Hom}$ and entropy $r_{Ent}$ as the five-dimensional feature vector of that depth image, the $J \times L$ depth images yielding $J \times L$ five-dimensional feature vectors:

$$V_{j,l} = \big[\,r_{Con},\; r_{Cor},\; r_{Ene},\; r_{Hom},\; r_{Ent}\,\big] = \mathrm{GLCM}\big(D^{j,l}(x,y)\big) \qquad (3)$$

where $\mathrm{GLCM}(\cdot)$ denotes the computation of the gray-level co-occurrence matrix and its statistics, and $V_{j,l}$ is the feature vector of the depth image at scale $j$ and direction $l$;

(6) clustering the $J \times L$ five-dimensional feature vectors of step (5) with the K-means algorithm of Eq. (4), obtaining $K$ clustering results $\{C_1, C_2, \ldots, C_K\}$:

$$\{C_1, C_2, \ldots, C_K\} = \mathrm{Kmeans}\big(\{V_{j,l}\}\big) \qquad (4)$$

where $\mathrm{Kmeans}(\cdot)$ denotes the K-means clustering algorithm, class $C_1$ contains a set of $n_1$ depth images $\{D_1^{(1)}, \ldots, D_{n_1}^{(1)}\}$, and so on up to class $C_K$, which contains a set of $n_K$ depth images $\{D_1^{(K)}, \ldots, D_{n_K}^{(K)}\}$, with $n_1 + \cdots + n_K = J \times L$;

(7) computing the average gradient of the depth images in every class obtained in step (6), and, per Eq. (5), selecting the class $C_s$ with the smallest average gradient as the final depth-image class:

$$s = \mathop{\arg\min}_{m} \; \mathrm{Gradient}(C_m) \qquad (5)$$

where $\arg\min$ solves for the depth-image class subscript $m$, with $1 \le m \le K$, $\mathrm{Gradient}(\cdot)$ is the gradient function, and $s$ is the index of the class with the smallest average gradient;

(8) averaging, according to Eq. (6), all depth images in the minimum-average-gradient class $C_s$ of step (7), to obtain the final 3D topography reconstruction result $Z(x,y)$ of the scene to be measured:

$$Z(x,y) = \frac{1}{n_s} \sum_{D \in C_s} D(x,y) \qquad (6)$$

where $n_s$ is the number of depth images in the minimum-average-gradient class $C_s$.
CN202110345048.6A 2021-03-31 2021-03-31 A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering Active CN112907748B (en)

Priority Applications (1)

Application Number: CN202110345048.6A (granted as CN112907748B); Priority Date: 2021-03-31; Filing Date: 2021-03-31; Title: A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Applications Claiming Priority (1)

Application Number: CN202110345048.6A (granted as CN112907748B); Priority Date: 2021-03-31; Filing Date: 2021-03-31; Title: A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Publications (2)

Publication Number Publication Date
CN112907748A CN112907748A (en) 2021-06-04
CN112907748B true CN112907748B (en) 2022-07-19

Family

ID=76109565

Family Applications (1)

Application Number: CN202110345048.6A; Priority Date: 2021-03-31; Filing Date: 2021-03-31; Status: Active (CN112907748B); Title: A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Country Status (1)

Country Link
CN (1) CN112907748B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971717A (en) * 2021-10-25 2022-01-25 杭州图谱光电科技有限公司 Microscopic three-dimensional reconstruction method based on Markov random field constraint
CN116012607B * 2022-01-27 2023-09-01 South China University of Technology Image weak-texture feature extraction method and device, equipment, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354804A (en) * 2015-10-23 2016-02-24 广州高清视信数码科技股份有限公司 Maximization self-similarity based image super-resolution reconstruction method
CN106228601A (en) * 2016-07-21 2016-12-14 山东大学 Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation
CN107240073A (en) * 2017-05-12 2017-10-10 杭州电子科技大学 A kind of 3 d video images restorative procedure merged based on gradient with clustering
CN108038905A (en) * 2017-12-25 2018-05-15 北京航空航天大学 A kind of Object reconstruction method based on super-pixel
CN109903372A (en) * 2019-01-28 2019-06-18 中国科学院自动化研究所 Depth map super-resolution completion method and high-quality 3D reconstruction method and system
US10405005B1 (en) * 2018-11-28 2019-09-03 Sherman McDermott Methods and systems for video compression based on dynamic vector wave compression
CN112489196A (en) * 2020-11-30 2021-03-12 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Three-dimensional Video Inpainting Combined with Gradient Fusion and Cluster";Lai Yili等;《Journal of Computer Aided Design & Computer Graphics》;20180331;第30卷(第3期);477-484 *
"医学图像中血管的三维重建的研究与应用";胡泽龙;《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》;20160215(第02期);I138-1716 *

Also Published As

Publication number Publication date
CN112907748A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN107945161B (en) Road surface defect detection method based on texture feature extraction
CN109949349B (en) Multi-mode three-dimensional image registration and fusion display method
CN103475898B (en) No-reference image quality assessment method based on information entropy features
CN101539629B (en) Change Detection Method of Remote Sensing Image Based on Multi-Feature Evidence Fusion and Structural Similarity
CN111932468B (en) Bayesian image denoising method based on noise-containing image distribution constraint
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN112907748B (en) A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering
CN105512670B (en) HRCT peripheral nerve segmentation based on KECA feature dimension reduction and clustering
CN104268833A (en) New image fusion method based on shift-invariant shearlet transform
CN107610118B (en) dM-based image segmentation quality assessment method
CN108257093B (en) Single-frame image super-resolution method based on controllable kernel and Gaussian process regression
CN108550146A (en) ROI-based image quality evaluation method
CN113160265A (en) Method for constructing prediction images for corpus callosum segmentation in corpus callosum state evaluation
CN114529519B (en) Image compressed sensing reconstruction method and system based on multi-scale deep dilated residual network
Chen et al. Direction-guided and multi-scale feature screening for fetal head–pubic symphysis segmentation and angle of progression calculation
CN112734683A (en) Multi-scale SAR and infrared image fusion method based on target enhancement
CN103198456B (en) Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model
CN114331989B (en) Full-reference 3D point cloud quality assessment method based on point feature histogram geodesic distance
CN111681272A (en) A SAR Image Processing Method Based on Singularity Power Spectrum
CN103971125A (en) Super-resolution algorithm based on vibration signal of laser echo
CN112149728B (en) Rapid multi-mode image template matching method
CN107219483B (en) Radial kurtosis anisotropy quantification method based on diffusion kurtosis imaging
CN109191437A (en) Clarity evaluation method based on wavelet transformation
CN102298768A (en) High-resolution image reconstruction method based on sparse samples
CN116228520A (en) Image compressed sensing reconstruction method and system based on Transformer generative adversarial network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231120

Address after: East Area, 6th Floor, Qilian Building, No. 200 Nanzhonghuan Street, Xiaodian District, Taiyuan City, Shanxi Province, 030000

Patentee after: Chuangbai Technology Transfer (Shanxi) Co., Ltd.

Address before: No. 92, Wucheng Road, Xiaodian District, Taiyuan, Shanxi, 030006

Patentee before: Shanxi University