CN112907748B - A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering - Google Patents
- Publication number
- CN112907748B (application number CN202110345048.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- class
- depth
- depth image
- downsampling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Remote Sensing (AREA)
- Probability & Statistics with Applications (AREA)
- Computer Graphics (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The invention belongs to the field of three-dimensional (3D) reconstruction and in particular relates to a 3D topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering.
Background Art
Methods that measure the 3D topography of a scene from image focus information generally offer low dependence on hardware, reconstruction algorithms that are easy to parallelise, and highly portable reconstruction systems. They have been widely applied to part defect detection in micro-manufacturing, intelligent zooming in mobile imaging devices, and other fields.
Current 3D topography reconstruction methods based on image focus information concentrate on two aspects: the design of the image focus evaluation index and the construction of the topography reconstruction algorithm. The image focus evaluation index is the core of such methods; how accurately it extracts focus information directly determines the quality of the reconstruction. Typical indices fall into two categories, spatial domain and frequency domain. Spatial-domain methods decide at the pixel level whether a pixel lies in the in-focus region and then aggregate the positions of all in-focus pixels into the 3D topography of the scene; they can be roughly divided into Laplacian-based operators, gradient-based operators, and statistical estimators. Frequency-domain methods first decompose the image into high- and low-frequency components and then obtain the topography by mining the relationship between these components and the depth image; their main representatives are Fourier-transform and wavelet-transform methods. The topography reconstruction algorithm, in turn, compensates for the discontinuity that the sampling interval of the image sequence introduces into the result; its main representative is Gaussian fitting.
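For illustration, the following minimal Python sketch shows one classical spatial-domain focus indicator of the kind described above — the variance of the Laplacian — together with the naive per-pixel depth-from-focus rule it induces. The function names and the use of SciPy's laplace filter are assumptions of the sketch, not the evaluation index proposed here.

```python
import numpy as np
from scipy.ndimage import laplace

def laplacian_focus_measure(image: np.ndarray) -> float:
    """Scalar focus score for one image: variance of its Laplacian response."""
    return float(np.var(laplace(image.astype(np.float64))))

def naive_depth_from_focus(stack: np.ndarray) -> np.ndarray:
    """Per-pixel depth index for an (N, H, W) focal stack: the frame with the
    strongest Laplacian response at each pixel wins."""
    responses = np.stack([np.abs(laplace(f.astype(np.float64))) for f in stack])
    return np.argmax(responses, axis=0)
```

Such a fixed operator works well only when its scale and orientation happen to match the scene texture, which is precisely the limitation addressed below.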
A review of the current research shows that methods in this field face the following main challenge: existing 3D topography reconstruction methods can usually reconstruct only a single scene and do not carry over to other scenes, because the quality of the reconstruction for a given scene depends on how accurately the image focus evaluation index was chosen for it. How to devise a scene-adaptive image focus evaluation index is therefore an important open problem in 3D topography reconstruction.
In summary, the key to the above problem is to select the image focus evaluation index according to the image characteristics of the scene. This patent introduces the non-downsampling shearlet transform to overcome the fixed, single focus evaluation index of traditional 3D topography reconstruction methods: the transform provides multiple image focus evaluation indices covering arbitrary directions and scales of the image, from which multiple depth images at different scales and directions are obtained. A clustering method based on the texture features of these depth images is then proposed to obtain the optimal 3D reconstruction of the scene under test.
Summary of the Invention
To overcome the problems in the above technologies, the object of the present invention is to provide a 3D topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering.
The technical scheme adopted by the present invention is a 3D topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering, comprising the following steps:
Step 1: Adjust the distance between the camera and the scene under test in equal increments to acquire an image sequence I_i(x, y) of the scene at different depths of field, where i is the image index with 1 ≤ i ≤ N and (x, y) is the pixel coordinate with 0 ≤ x, y ≤ M−1;
Step 2: Set the maximum decomposition scale of the non-downsampling shearlet transform to J and the maximum number of directions to L, choose the filters of the non-downsampling shearlet transform, and set the number of clusters K of the clustering algorithm, with Euclidean distance as the distance metric;
Step 3: Apply the non-downsampling shearlet transform (NSST) to the image sequence from Step 1. As shown in formula (1), each image yields J × L high-frequency decomposition coefficients at different scales and directions:

H_{i_high}^{j,l}(x, y) = NSST( I_i(x, y) )    (1)

where j is the scale index with 1 ≤ j ≤ J, l is the direction index with 1 ≤ l ≤ L, H_{i_high}^{j,l}(x, y) is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, i_high is the index of that coefficient with 1 ≤ i_high ≤ N, and NSST denotes the non-downsampling shearlet transform;
Step 4: According to formula (2), map the J × L groups of high-frequency coefficients H_{i_high}^{j,l}(x, y) to J × L depth images D^{j,l}(x, y) at different scales and directions:

D^{j,l}(x, y) = argmax_{1 ≤ i_high ≤ N} abs( H_{i_high}^{j,l}(x, y) )    (2)

where i_high is the index of the high-frequency coefficient corresponding to the i-th image, with 1 ≤ i_high ≤ N, argmax(·) is the function that returns the coefficient index i_high maximising its argument, and abs(·) is the absolute-value function;
Step 5: Compute the grey-level co-occurrence matrix of every depth image D^{j,l}(x, y) and, according to formula (3), take the matrix's contrast r_Con, correlation r_Cor, energy r_Ene, homogeneity (inverse difference moment) r_Hom and entropy r_Ent as the five-dimensional feature vector of that depth image, so that the J × L depth images yield J × L five-dimensional feature vectors:

V_{j,l} = [ r_Con, r_Cor, r_Ene, r_Hom, r_Ent ], computed from GLCM( D^{j,l}(x, y) )    (3)

where GLCM(·) computes the grey-level co-occurrence matrix and V_{j,l} is the feature vector of the depth image at scale j and direction l;
Step 6: Cluster the J × L five-dimensional feature vectors obtained in Step 5 with the K-means algorithm of formula (4), obtaining K clusters {C_1, C_2, …, C_K}:

{ C_1, C_2, …, C_K } = Kmeans( { V_{j,l} : 1 ≤ j ≤ J, 1 ≤ l ≤ L }, K )    (4)

where Kmeans(·) denotes the K-means clustering algorithm; class C_1 contains n_1 depth images, and so on, up to class C_K, which contains n_K depth images, with n_1 + … + n_K = J × L;
Step 7: Compute the average gradient of the depth images in every class obtained in Step 6 and, according to formula (5), select the class C_s with the smallest average gradient as the final depth-image class:

s = argmin_{1 ≤ m ≤ K} (1 / n_m) Σ_{D ∈ C_m} Gradient(D)    (5)

where argmin(·) is the function that returns the class index m minimising the average gradient, 1 ≤ m ≤ K, Gradient(·) is the gradient function, n_m is the number of depth images in class C_m, and s is the index of the class with the smallest average gradient;
Step 8: According to formula (6), compute the pixel-wise average of all depth images in the minimum-average-gradient class C_s obtained in Step 7, which gives the final 3D topography reconstruction R(x, y) of the scene under test:

R(x, y) = (1 / n_s) Σ_{D ∈ C_s} D(x, y)    (6)

where n_s is the number of depth images in the minimum-average-gradient class C_s.
The method of the invention obtains, for each scene under test, the optimal 3D topography reconstruction suited to that scene.
Brief Description of the Drawings
FIG. 1 is a flowchart of the 3D topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering;
FIG. 2 is a schematic diagram of the 3D topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering.
Detailed Description of the Embodiments
As shown in FIG. 1 and FIG. 2, the 3D topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering of this embodiment comprises the following steps:
Step 1: Adjust the distance between the camera and the scene under test in equal increments to acquire an image sequence I_i(x, y) of the scene at different depths of field, where i is the image index with 1 ≤ i ≤ N and (x, y) is the pixel coordinate with 0 ≤ x, y ≤ M−1;
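One possible way to assemble the image sequence I_i(x, y) of Step 1 into a single array is sketched below; the file pattern, the grayscale conversion, and the use of scikit-image are assumptions of the sketch, not requirements of the method.

```python
import glob
import numpy as np
from skimage import io, color

def load_focal_stack(pattern: str = "focal_stack/*.png") -> np.ndarray:
    """Read N equally sized frames and return them as an (N, M, M) grayscale array."""
    paths = sorted(glob.glob(pattern))
    frames = []
    for p in paths:
        img = io.imread(p)
        if img.ndim == 3:                       # colour frame -> grayscale
            img = color.rgb2gray(img[..., :3])
        frames.append(img.astype(np.float64))
    return np.stack(frames)                     # shape (N, M, M)
```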
Step 2: Set the maximum decomposition scale of the non-downsampling shearlet transform to J and the maximum number of directions to L, choose the filters of the non-downsampling shearlet transform, and set the number of clusters K of the clustering algorithm, with Euclidean distance as the distance metric;
Step 3: Apply the non-downsampling shearlet transform (NSST) to the image sequence from Step 1. As shown in formula (1), each image yields J × L high-frequency decomposition coefficients at different scales and directions:

H_{i_high}^{j,l}(x, y) = NSST( I_i(x, y) )    (1)

where j is the scale index with 1 ≤ j ≤ J, l is the direction index with 1 ≤ l ≤ L, H_{i_high}^{j,l}(x, y) is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, i_high is the index of that coefficient with 1 ≤ i_high ≤ N, and NSST denotes the non-downsampling shearlet transform;
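A sketch of Step 3 follows. Because formula (1) relies on an NSST implementation that is not reproduced here, nsst_decompose is a hypothetical placeholder to be backed by any non-downsampling shearlet toolbox; only the bookkeeping of coefficient shapes is shown, and the values of J and L are assumptions.

```python
import numpy as np

J, L = 3, 8   # assumed maximum scale and direction counts (fixed in Step 2)

def nsst_decompose(image: np.ndarray, scales: int, directions: int) -> np.ndarray:
    """Hypothetical placeholder: return the NSST high-frequency coefficients of one
    image as an array of shape (scales, directions, M, M)."""
    raise NotImplementedError("plug an NSST implementation in here")

def decompose_stack(stack: np.ndarray) -> np.ndarray:
    """Formula (1) applied frame by frame: returns H with shape (N, J, L, M, M)."""
    return np.stack([nsst_decompose(frame, J, L) for frame in stack])
```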
Step 4: According to formula (2), map the J × L groups of high-frequency coefficients H_{i_high}^{j,l}(x, y) to J × L depth images D^{j,l}(x, y) at different scales and directions:

D^{j,l}(x, y) = argmax_{1 ≤ i_high ≤ N} abs( H_{i_high}^{j,l}(x, y) )    (2)

where i_high is the index of the high-frequency coefficient corresponding to the i-th image, with 1 ≤ i_high ≤ N, argmax(·) is the function that returns the coefficient index i_high maximising its argument, and abs(·) is the absolute-value function;
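Given a coefficient tensor H of shape (N, J, L, M, M) as in the previous sketch, formula (2) reduces to a per-pixel argmax over the frame index:

```python
import numpy as np

def depth_images_from_coefficients(H: np.ndarray) -> np.ndarray:
    """Formula (2): H has shape (N, J, L, M, M); returns D of shape (J, L, M, M),
    where each pixel holds the index i_high maximising |H[i_high, j, l, x, y]|."""
    return np.argmax(np.abs(H), axis=0)
```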
Step 5: Compute the grey-level co-occurrence matrix of every depth image D^{j,l}(x, y) and, according to formula (3), take the matrix's contrast r_Con, correlation r_Cor, energy r_Ene, homogeneity (inverse difference moment) r_Hom and entropy r_Ent as the five-dimensional feature vector of that depth image, so that the J × L depth images yield J × L five-dimensional feature vectors:

V_{j,l} = [ r_Con, r_Cor, r_Ene, r_Hom, r_Ent ], computed from GLCM( D^{j,l}(x, y) )    (3)

where GLCM(·) computes the grey-level co-occurrence matrix and V_{j,l} is the feature vector of the depth image at scale j and direction l;
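A sketch of Step 5 using scikit-image's graycomatrix and graycoprops for four of the five descriptors, with entropy computed directly from the normalised co-occurrence matrix; the quantisation to 32 grey levels and the single (distance 1, angle 0) offset are assumptions of the sketch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_feature_vector(depth: np.ndarray, levels: int = 32) -> np.ndarray:
    """Formula (3): [contrast, correlation, energy, homogeneity, entropy] of one
    depth image's grey-level co-occurrence matrix."""
    rng = depth.max() - depth.min()
    d = np.round((depth - depth.min()) / max(rng, 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(d, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([graycoprops(glcm, "contrast")[0, 0],
                     graycoprops(glcm, "correlation")[0, 0],
                     graycoprops(glcm, "energy")[0, 0],
                     graycoprops(glcm, "homogeneity")[0, 0],
                     entropy])

def feature_matrix(D: np.ndarray) -> np.ndarray:
    """Stack the J*L feature vectors V_{j,l} into a (J*L, 5) matrix."""
    J, L = D.shape[:2]
    return np.array([glcm_feature_vector(D[j, l]) for j in range(J) for l in range(L)])
```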
Step 6: Cluster the J × L five-dimensional feature vectors obtained in Step 5 with the K-means algorithm of formula (4), obtaining K clusters {C_1, C_2, …, C_K}:

{ C_1, C_2, …, C_K } = Kmeans( { V_{j,l} : 1 ≤ j ≤ J, 1 ≤ l ≤ L }, K )    (4)

where Kmeans(·) denotes the K-means clustering algorithm; class C_1 contains n_1 depth images, and so on, up to class C_K, which contains n_K depth images, with n_1 + … + n_K = J × L;
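Step 6 maps directly onto scikit-learn's KMeans, whose default distance is the Euclidean metric required in Step 2; the fixed random_state below is only for reproducibility of the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_depth_images(features: np.ndarray, k: int) -> np.ndarray:
    """Formula (4): cluster the (J*L, 5) feature matrix into k classes and return
    one label in {0, ..., k-1} per depth image."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
```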
Step 7: Compute the average gradient of the depth images in every class obtained in Step 6 and, according to formula (5), select the class C_s with the smallest average gradient as the final depth-image class:

s = argmin_{1 ≤ m ≤ K} (1 / n_m) Σ_{D ∈ C_m} Gradient(D)    (5)

where argmin(·) is the function that returns the class index m minimising the average gradient, 1 ≤ m ≤ K, Gradient(·) is the gradient function, n_m is the number of depth images in class C_m, and s is the index of the class with the smallest average gradient;
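A sketch of Step 7. The text does not fix the form of Gradient(·), so the sketch assumes the mean magnitude of the numerical gradient as the per-image score and guards against empty clusters.

```python
import numpy as np

def mean_gradient(depth: np.ndarray) -> float:
    """Mean gradient magnitude of one depth image (one possible Gradient(.))."""
    gy, gx = np.gradient(depth.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def select_smoothest_cluster(depth_list: list, labels: np.ndarray, k: int) -> int:
    """Formula (5): index s of the cluster with the smallest average gradient."""
    scores = []
    for m in range(k):
        members = [mean_gradient(d) for d, lab in zip(depth_list, labels) if lab == m]
        scores.append(np.mean(members) if members else np.inf)  # guard empty clusters
    return int(np.argmin(scores))
```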
Step 8: According to formula (6), compute the pixel-wise average of all depth images in the minimum-average-gradient class C_s obtained in Step 7, which gives the final 3D topography reconstruction R(x, y) of the scene under test:

R(x, y) = (1 / n_s) Σ_{D ∈ C_s} D(x, y)    (6)

where n_s is the number of depth images in the minimum-average-gradient class C_s.
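Finally, a sketch of Step 8, formula (6), followed by a commented outline of how the sketches above chain into the full pipeline; the function names and the value K = 4 are assumptions carried over from the earlier sketches, not the patent's API.

```python
import numpy as np

def reconstruct_topography(depth_list: list, labels: np.ndarray, s: int) -> np.ndarray:
    """Formula (6): pixel-wise mean of all depth images assigned to class C_s."""
    members = [d for d, lab in zip(depth_list, labels) if lab == s]
    return np.mean(np.stack(members), axis=0)

# Outline of the whole pipeline:
# stack      = load_focal_stack("focal_stack/*.png")              # Step 1
# H          = decompose_stack(stack)                             # Step 3 (needs an NSST backend)
# D          = depth_images_from_coefficients(H)                  # Step 4
# depth_list = [D[j, l] for j in range(D.shape[0]) for l in range(D.shape[1])]
# labels     = cluster_depth_images(feature_matrix(D), k=4)       # Steps 5-6
# s          = select_smoothest_cluster(depth_list, labels, k=4)  # Step 7
# topography = reconstruct_topography(depth_list, labels, s)      # Step 8
```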
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110345048.6A CN112907748B (en) | 2021-03-31 | 2021-03-31 | A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110345048.6A CN112907748B (en) | 2021-03-31 | 2021-03-31 | A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112907748A CN112907748A (en) | 2021-06-04 |
CN112907748B true CN112907748B (en) | 2022-07-19 |
Family
ID=76109565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110345048.6A Active CN112907748B (en) | 2021-03-31 | 2021-03-31 | A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112907748B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113971717A (en) * | 2021-10-25 | 2022-01-25 | 杭州图谱光电科技有限公司 | Microscopic three-dimensional reconstruction method based on Markov random field constraint |
CN116012607B (en) * | 2022-01-27 | 2023-09-01 | 华南理工大学 | Image weak texture feature extraction method and device, equipment, storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105354804A (en) * | 2015-10-23 | 2016-02-24 | 广州高清视信数码科技股份有限公司 | Maximization self-similarity based image super-resolution reconstruction method |
CN106228601A (en) * | 2016-07-21 | 2016-12-14 | 山东大学 | Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation |
CN107240073A (en) * | 2017-05-12 | 2017-10-10 | 杭州电子科技大学 | A kind of 3 d video images restorative procedure merged based on gradient with clustering |
CN108038905A (en) * | 2017-12-25 | 2018-05-15 | 北京航空航天大学 | A kind of Object reconstruction method based on super-pixel |
US10405005B1 (en) * | 2018-11-28 | 2019-09-03 | Sherman McDermott | Methods and systems for video compression based on dynamic vector wave compression |
CN109903372A (en) * | 2019-01-28 | 2019-06-18 | 中国科学院自动化研究所 | Depth map super-resolution completion method and high-quality 3D reconstruction method and system |
CN112489196A (en) * | 2020-11-30 | 2021-03-12 | 太原理工大学 | Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation |
Non-Patent Citations (2)
Title |
---|
"Three-dimensional Video Inpainting Combined with Gradient Fusion and Cluster";Lai Yili等;《Journal of Computer Aided Design & Computer Graphics》;20180331;第30卷(第3期);477-484 * |
"医学图像中血管的三维重建的研究与应用";胡泽龙;《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》;20160215(第02期);I138-1716 * |
Also Published As
Publication number | Publication date |
---|---|
CN112907748A (en) | 2021-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107945161B (en) | Detection method of road surface defect based on texture feature extraction | |
CN109949349B (en) | Multi-mode three-dimensional image registration and fusion display method | |
CN103475898B (en) | Non-reference image quality assessment method based on information entropy characters | |
CN101539629B (en) | Change Detection Method of Remote Sensing Image Based on Multi-Feature Evidence Fusion and Structural Similarity | |
CN111932468B (en) | Bayesian image denoising method based on noise-containing image distribution constraint | |
CN104299232B (en) | SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM | |
CN112907748B (en) | A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering | |
CN105512670B (en) | Divided based on KECA Feature Dimension Reduction and the HRCT peripheral nerve of cluster | |
CN104268833A (en) | New image fusion method based on shift invariance shearlet transformation | |
CN107610118B (en) | A dM-based image segmentation quality assessment method | |
CN108257093B (en) | Single-frame image super-resolution method based on controllable kernel and Gaussian process regression | |
CN108550146A (en) | A kind of image quality evaluating method based on ROI | |
CN113160265A (en) | Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation | |
CN114529519B (en) | Image compressed sensing reconstruction method and system based on multi-scale depth cavity residual error network | |
Chen et al. | Direction-guided and multi-scale feature screening for fetal head–pubic symphysis segmentation and angle of progression calculation | |
CN112734683A (en) | Multi-scale SAR and infrared image fusion method based on target enhancement | |
CN103198456B (en) | Remote sensing image fusion method based on directionlet domain hidden Markov tree (HMT) model | |
CN114331989B (en) | Full-reference 3D point cloud quality assessment method based on point feature histogram geodesic distance | |
CN111681272A (en) | A SAR Image Processing Method Based on Singularity Power Spectrum | |
CN103971125A (en) | Super-resolution algorithm based on vibration signal of laser echo | |
CN112149728B (en) | Rapid multi-mode image template matching method | |
CN107219483B (en) | A kind of radial kurtosis anisotropic quantitative approach based on diffusion kurtosis imaging | |
CN109191437A (en) | Clarity evaluation method based on wavelet transformation | |
CN102298768A (en) | High-resolution image reconstruction method based on sparse samples | |
CN116228520A (en) | Image Compression Sensing Reconstruction Method and System Based on Transformer Generative Adversarial Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2023-11-20
Address after: East Area, 6th Floor, Qilian Building, No. 200 Nanzhonghuan Street, Xiaodian District, Taiyuan City, Shanxi Province, 030000
Patentee after: Chuangbai technology transfer (Shanxi) Co.,Ltd.
Address before: No. 92, Wucheng Road, Xiaodian District, Taiyuan, Shanxi, 030006
Patentee before: SHANXI University