CN112907748A - Three-dimensional topography reconstruction method based on non-downsampling shearlet transform and depth image texture feature clustering - Google Patents


Info

Publication number
CN112907748A
CN112907748A (application CN202110345048.6A)
Authority
CN
China
Prior art keywords
depth image
image
equal
class
clustering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110345048.6A
Other languages
Chinese (zh)
Other versions
CN112907748B (en)
Inventor
闫涛 (Yan Tao)
Current Assignee
Chuangbai Technology Transfer Shanxi Co ltd
Original Assignee
Shanxi University
Priority date
Filing date
Publication date
Application filed by Shanxi University filed Critical Shanxi University
Priority to CN202110345048.6A priority Critical patent/CN112907748B/en
Publication of CN112907748A publication Critical patent/CN112907748A/en
Application granted granted Critical
Publication of CN112907748B publication Critical patent/CN112907748B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention is a three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering. It comprises: step 1, collecting an image sequence of the scene to be measured; step 2, setting the parameters of the non-downsampling shearlet transform and of the clustering algorithm; step 3, using the non-downsampling shearlet transform to decompose the image sequence into high-frequency coefficients at multiple scales and directions; step 4, mapping all high-frequency coefficients to multiple depth images; step 5, taking the contrast, correlation, energy, inverse variance and entropy of the grey-level co-occurrence matrix of each depth image as its five-dimensional texture feature vector; step 6, applying the K-means clustering algorithm to obtain K clusters; step 7, selecting the cluster whose depth images have the smallest average gradient; step 8, computing the mean of the depth images in that cluster to obtain the three-dimensional topography reconstruction of the scene to be measured. The invention achieves the optimal reconstruction result for each scene.

Description

Three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering
Technical Field
The invention belongs to the field of three-dimensional reconstruction, and particularly relates to a three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering.
Background
Methods that measure the three-dimensional topography of a scene from image focus information generally depend little on hardware, parallelise easily, and port well between reconstruction systems; they are therefore widely used for part defect detection in micro-manufacturing and for intelligent zooming in mobile imaging devices.
At present, three-dimensional topography reconstruction from image focus information concentrates on two aspects: the design of image focus evaluation indices and the construction of the reconstruction algorithm. The focus evaluation index is the core link of the method, because how accurately it extracts focus information directly determines the quality of the reconstruction. Typical indices fall into spatial-domain and frequency-domain families. Spatial-domain methods decide, pixel by pixel, whether a point lies within the focused region, then aggregate the positions of all focused pixels into the reconstruction of the scene to be measured; these indices divide roughly into Laplacian, gradient and statistical-estimation categories. Frequency-domain methods first decompose the image into high- and low-frequency components, then obtain the reconstruction by mining the relation between those components and the depth image; they divide mainly into Fourier-transform and wavelet-transform approaches. The reconstruction algorithm itself chiefly compensates for the discontinuity that the sampling interval of the image sequence introduces into the result, with Gaussian fitting as the main representative method.
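The spatial-domain indices described above can be illustrated with a minimal sketch. The following is not the patent's method but a generic Laplacian focus measure with depth-from-focus aggregation, using only NumPy; the toy stack, sizes and function names are illustrative assumptions:

```python
import numpy as np

def laplacian_focus_measure(img):
    """Spatial-domain focus index: magnitude of a discrete Laplacian.

    A sharply focused region has strong second derivatives, so the
    absolute Laplacian response is a common per-pixel focus measure.
    """
    img = img.astype(float)
    # 4-neighbour discrete Laplacian via shifted copies
    # (periodic boundary; accurate for interior pixels).
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def depth_from_focus(stack):
    """For each pixel, return the index of the frame most in focus."""
    measures = np.stack([laplacian_focus_measure(f) for f in stack])
    return np.argmax(measures, axis=0)

# Toy stack: frame 1 contains a sharp step edge, the others are flat.
stack = np.zeros((3, 8, 8))
stack[1, :, 4:] = 1.0
depth = depth_from_focus(stack)
print(depth[4, 4])  # pixel on the edge: frame 1 wins
```

Aggregating the winning frame index per pixel is exactly the "aggregate the positions of all focused pixels" step; the frequency-domain family replaces the Laplacian with transform coefficients.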
The main limitation of the current state of the art is that existing three-dimensional topography reconstruction methods usually reconstruct only a single kind of scene well and cannot be applied to the reconstruction tasks of other scenes: the quality of the reconstruction for a given scene depends on whether the chosen image focus evaluation index suits that scene. How to provide a scene-adaptive image focus evaluation index is therefore an important open problem in the field of three-dimensional topography reconstruction.
In summary, selecting the image focus evaluation index according to the image characteristics of the scene is the key to solving the above problem. The invention introduces the non-downsampling shearlet transform to overcome the single, fixed focus evaluation index of traditional methods: the transform yields a family of focus evaluation indices covering arbitrary directions and scales in the image, from which a set of depth images of different scales and directions is obtained; a clustering method based on depth image texture features then selects the optimal three-dimensional reconstruction of the scene to be measured.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering.
The technical scheme adopted by the invention is as follows: a three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering, comprising the following steps:
step 1, adjusting the distance between the camera and the scene to be measured at equal intervals to obtain an image sequence of the scene at different depths of field,

$$\{I_i(x,y)\}, \quad 1 \le i \le N$$

wherein i is the image index and (x, y) is the pixel coordinate, with $0 \le x, y \le M-1$;
step 2, setting the maximum decomposition scale of the non-downsampling shearlet transform to J and the maximum number of directions to L, choosing the shearlet filter, and setting the number of clusters K and the Euclidean distance metric for the clustering algorithm;
step 3, applying the non-downsampling shearlet transform (NSST) to the image sequence $\{I_i(x,y)\}$ of step 1; each image yields J × L high-frequency decomposition coefficients at different scales and directions, as shown in formula (1):

$$H^{j,l}_{i_{high}}(x,y) = \mathrm{NSST}\big(I_i(x,y)\big) \qquad (1)$$

wherein j is the scale index with $1 \le j \le J$, l is the direction index with $1 \le l \le L$, $H^{j,l}_{i_{high}}$ is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, the subscript satisfies $1 \le i_{high} \le N$, and NSST denotes the non-downsampling shearlet transform;
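NSST has no implementation in the standard scientific Python stack, so the sketch below is only a loosely hedged stand-in: it produces a J × L family of high-frequency responses from finite differences at several step sizes along two axes. It mimics the shape of the decomposition (one coefficient map per scale and direction), not real non-subsampled pyramid and shearing filters:

```python
import numpy as np

def directional_highpass_stack(img, scales=2, directions=2):
    """Hypothetical stand-in for NSST: J x L high-frequency responses
    from finite differences at step size 2**(j-1) (scale j) along
    axis l-1 (direction l). A real NSST uses non-subsampled pyramid
    and shearing filters instead."""
    img = img.astype(float)
    coeffs = {}
    for j in range(1, scales + 1):          # scale index j
        step = 2 ** (j - 1)
        for l in range(1, directions + 1):  # direction index l
            coeffs[(j, l)] = img - np.roll(img, step, axis=l - 1)
    return coeffs

rng = np.random.default_rng(3)
c = directional_highpass_stack(rng.random((8, 8)))
print(len(c))  # J * L = 4 coefficient maps, one per (scale, direction)
```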
step 4, mapping the J × L high-frequency coefficients $H^{j,l}_{i_{high}}$ of different scales and directions to J × L depth images $D^{j,l}(x,y)$ of different scales and directions according to formula (2):

$$D^{j,l}(x,y) = \mathop{\arg\max}_{i_{high}} \; \mathrm{abs}\big(H^{j,l}_{i_{high}}(x,y)\big) \qquad (2)$$

wherein $i_{high}$, with $1 \le i_{high} \le N$, is the index of the high-frequency coefficient corresponding to the i-th image, $\arg\max$ returns the subscript $i_{high}$ that maximises its argument, and abs(·) is the absolute value function;
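At each scale and direction, formula (2) reduces to an arg-max over the absolute coefficient stack. A minimal NumPy sketch, with seeded random data standing in for real NSST coefficients (N, M and the array names are illustrative assumptions):

```python
import numpy as np

# Hypothetical coefficient stack: N frames of high-frequency coefficients
# at one fixed scale j and direction l (shape N x M x M).
rng = np.random.default_rng(0)
N, M = 5, 4
H = rng.standard_normal((N, M, M))

# Formula (2): the depth at each pixel is the frame index whose
# high-frequency coefficient has the largest absolute value.
D = np.argmax(np.abs(H), axis=0)

print(D.shape)  # one frame index per pixel
```

Repeating this for every (j, l) pair yields the J × L depth images that feed the texture-feature clustering of the following steps.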
step 5, computing for each depth image $D^{j,l}(x,y)$ the contrast $r_{Con}$, correlation $r_{Cor}$, energy $r_{Ene}$, inverse variance $r_{Hom}$ and entropy $r_{Ent}$ of its grey-level co-occurrence matrix according to formula (3), and taking this five-dimensional vector as the texture feature of the depth image, giving J × L five-dimensional feature vectors in total:

$$V_{j,l} = \big[r_{Con},\, r_{Cor},\, r_{Ene},\, r_{Hom},\, r_{Ent}\big] = \mathrm{GLCM}\big(D^{j,l}(x,y)\big) \qquad (3)$$

wherein GLCM(·) denotes the computation of the grey-level co-occurrence matrix statistics and $V_{j,l}$ is the feature vector of the depth image at scale j and direction l;
step 6, clustering the J × L five-dimensional feature vectors obtained in step 5 with the K-means clustering algorithm of formula (4) to obtain K clustering results $\{C_1, C_2, \ldots, C_K\}$:

$$\{C_1, C_2, \ldots, C_K\} = \mathrm{Kmeans}\big(\{V_{j,l}\}\big) \qquad (4)$$

wherein Kmeans(·) denotes the K-means clustering algorithm, class $C_1$ consists of $n_1$ depth images $\{D^{C_1}_1, \ldots, D^{C_1}_{n_1}\}$, and so on up to class $C_K$, which consists of $n_K$ depth images $\{D^{C_K}_1, \ldots, D^{C_K}_{n_K}\}$;
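Formula (4) is standard K-means with Euclidean distance. A self-contained NumPy sketch of Lloyd's algorithm, applied to two artificial, well-separated blobs of hypothetical 5-D feature vectors:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means (Lloyd's algorithm) with Euclidean distance."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest centre.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centres; keep the old centre if a cluster empties.
        new = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated blobs standing in for the J*L GLCM feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (10, 5)), rng.normal(5, 0.1, (10, 5))])
labels, _ = kmeans(X, k=2)
print(labels[0] != labels[10])  # the two blobs land in different clusters
```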
step 7, calculating the average gradient within each depth image class obtained in step 6 and selecting the class $C_s$ with the smallest average gradient as the final depth image class according to formula (5):

$$s = \mathop{\arg\min}_{m} \; \frac{1}{n_m}\sum_{d=1}^{n_m} \mathrm{Gradient}\big(D^{C_m}_d\big) \qquad (5)$$

wherein $\arg\min$ returns the class subscript m, with $1 \le m \le K$, Gradient(·) is the average-gradient function, and s is the index of the class with the smallest average gradient;
step 8, calculating according to formula (6) the mean of all depth images in the minimum-average-gradient class $C_s$ obtained in step 7, to obtain the final three-dimensional topography reconstruction result of the scene to be measured:

$$D_{final}(x,y) = \frac{1}{n_s}\sum_{d=1}^{n_s} D^{C_s}_d(x,y) \qquad (6)$$

wherein $n_s$ is the number of depth images in the minimum-average-gradient class $C_s$.
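Steps 7 and 8 together select the cluster with the smallest average gradient and fuse it by a pixel-wise mean. A NumPy sketch under the assumption that Gradient(·) is the mean finite-difference magnitude (the patent does not define it precisely); the toy depth maps and labels are illustrative:

```python
import numpy as np

def average_gradient(img):
    """Assumed Gradient(.): mean magnitude of finite differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))

def fuse_best_class(depth_images, labels):
    """Steps 7-8: pick the cluster whose depth images have the smallest
    mean average-gradient, then fuse that cluster by pixel-wise mean."""
    classes = sorted(set(labels))
    scores = [np.mean([average_gradient(d)
                       for d, c in zip(depth_images, labels) if c == k])
              for k in classes]
    s = classes[int(np.argmin(scores))]
    members = [d for d, c in zip(depth_images, labels) if c == s]
    return np.mean(members, axis=0), s

# Cluster 0 holds two smooth (zero-gradient) depth maps, cluster 1 a
# steep one; the smooth cluster should be selected and averaged.
smooth = [np.full((4, 4), 2.0), np.full((4, 4), 4.0)]
steep = [np.arange(16, dtype=float).reshape(4, 4) * 10]
fused, s = fuse_best_class(smooth + steep, labels=[0, 0, 1])
print(s)            # 0 — the smooth cluster wins
print(fused[0, 0])  # 3.0 — mean of the two smooth maps
```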
For different scenes to be measured, the method obtains the optimal three-dimensional topography reconstruction result suited to each scene.
Drawings
FIG. 1 is a flow chart of the three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering;
FIG. 2 is a schematic diagram of the three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering.
Detailed Description
As shown in FIG. 1 and FIG. 2, the three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering of this embodiment comprises the following steps:
step 1, adjusting the distance between the camera and the scene to be measured at equal intervals to obtain an image sequence of the scene at different depths of field,

$$\{I_i(x,y)\}, \quad 1 \le i \le N$$

wherein i is the image index and (x, y) is the pixel coordinate, with $0 \le x, y \le M-1$;
step 2, setting the maximum decomposition scale of the non-downsampling shearlet transform to J and the maximum number of directions to L, choosing the shearlet filter, and setting the number of clusters K and the Euclidean distance metric for the clustering algorithm;
step 3, applying the non-downsampling shearlet transform (NSST) to the image sequence $\{I_i(x,y)\}$ of step 1; each image yields J × L high-frequency decomposition coefficients at different scales and directions, as shown in formula (1):

$$H^{j,l}_{i_{high}}(x,y) = \mathrm{NSST}\big(I_i(x,y)\big) \qquad (1)$$

wherein j is the scale index with $1 \le j \le J$, l is the direction index with $1 \le l \le L$, $H^{j,l}_{i_{high}}$ is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, the subscript satisfies $1 \le i_{high} \le N$, and NSST denotes the non-downsampling shearlet transform;
step 4, mapping the J × L high-frequency coefficients $H^{j,l}_{i_{high}}$ of different scales and directions to J × L depth images $D^{j,l}(x,y)$ of different scales and directions according to formula (2):

$$D^{j,l}(x,y) = \mathop{\arg\max}_{i_{high}} \; \mathrm{abs}\big(H^{j,l}_{i_{high}}(x,y)\big) \qquad (2)$$

wherein $i_{high}$, with $1 \le i_{high} \le N$, is the index of the high-frequency coefficient corresponding to the i-th image, $\arg\max$ returns the subscript $i_{high}$ that maximises its argument, and abs(·) is the absolute value function;
step 5, computing for each depth image $D^{j,l}(x,y)$ the contrast $r_{Con}$, correlation $r_{Cor}$, energy $r_{Ene}$, inverse variance $r_{Hom}$ and entropy $r_{Ent}$ of its grey-level co-occurrence matrix according to formula (3), and taking this five-dimensional vector as the texture feature of the depth image, giving J × L five-dimensional feature vectors in total:

$$V_{j,l} = \big[r_{Con},\, r_{Cor},\, r_{Ene},\, r_{Hom},\, r_{Ent}\big] = \mathrm{GLCM}\big(D^{j,l}(x,y)\big) \qquad (3)$$

wherein GLCM(·) denotes the computation of the grey-level co-occurrence matrix statistics and $V_{j,l}$ is the feature vector of the depth image at scale j and direction l;
step 6, clustering the J × L five-dimensional feature vectors obtained in step 5 with the K-means clustering algorithm of formula (4) to obtain K clustering results $\{C_1, C_2, \ldots, C_K\}$:

$$\{C_1, C_2, \ldots, C_K\} = \mathrm{Kmeans}\big(\{V_{j,l}\}\big) \qquad (4)$$

wherein Kmeans(·) denotes the K-means clustering algorithm, class $C_1$ consists of $n_1$ depth images $\{D^{C_1}_1, \ldots, D^{C_1}_{n_1}\}$, and so on up to class $C_K$, which consists of $n_K$ depth images $\{D^{C_K}_1, \ldots, D^{C_K}_{n_K}\}$;
step 7, calculating the average gradient within each depth image class obtained in step 6 and selecting the class $C_s$ with the smallest average gradient as the final depth image class according to formula (5):

$$s = \mathop{\arg\min}_{m} \; \frac{1}{n_m}\sum_{d=1}^{n_m} \mathrm{Gradient}\big(D^{C_m}_d\big) \qquad (5)$$

wherein $\arg\min$ returns the class subscript m, with $1 \le m \le K$, Gradient(·) is the average-gradient function, and s is the index of the class with the smallest average gradient;
step 8, calculating according to formula (6) the mean of all depth images in the minimum-average-gradient class $C_s$ obtained in step 7, to obtain the final three-dimensional topography reconstruction result of the scene to be measured:

$$D_{final}(x,y) = \frac{1}{n_s}\sum_{d=1}^{n_s} D^{C_s}_d(x,y) \qquad (6)$$

wherein $n_s$ is the number of depth images in the minimum-average-gradient class $C_s$.

Claims (1)

1. A three-dimensional topography reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering, characterized by comprising the following steps:
(1) obtaining an image sequence of the scene to be measured at different depths of field by adjusting the distance between the camera and the scene at equal intervals,

$$\{I_i(x,y)\}, \quad 1 \le i \le N$$

wherein i is the image index and (x, y) is the pixel coordinate, with $0 \le x, y \le M-1$;
(2) setting the maximum decomposition scale of the non-downsampling shearlet transform to J and the maximum number of directions to L, choosing the shearlet filter, and setting the number of clusters K and the Euclidean distance metric for the clustering algorithm;
(3) applying the non-downsampling shearlet transform (NSST) to the image sequence $\{I_i(x,y)\}$ of step 1; each image yields J × L high-frequency decomposition coefficients at different scales and directions, as shown in formula (1):

$$H^{j,l}_{i_{high}}(x,y) = \mathrm{NSST}\big(I_i(x,y)\big) \qquad (1)$$

wherein j is the scale index with $1 \le j \le J$, l is the direction index with $1 \le l \le L$, $H^{j,l}_{i_{high}}$ is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, the subscript satisfies $1 \le i_{high} \le N$, and NSST denotes the non-downsampling shearlet transform;
(4) mapping the J × L high-frequency coefficients $H^{j,l}_{i_{high}}$ of different scales and directions to J × L depth images $D^{j,l}(x,y)$ of different scales and directions according to formula (2):

$$D^{j,l}(x,y) = \mathop{\arg\max}_{i_{high}} \; \mathrm{abs}\big(H^{j,l}_{i_{high}}(x,y)\big) \qquad (2)$$

wherein $i_{high}$, with $1 \le i_{high} \le N$, is the index of the high-frequency coefficient corresponding to the i-th image, $\arg\max$ returns the subscript $i_{high}$ that maximises its argument, and abs(·) is the absolute value function;
(5) computing for each depth image $D^{j,l}(x,y)$ the contrast $r_{Con}$, correlation $r_{Cor}$, energy $r_{Ene}$, inverse variance $r_{Hom}$ and entropy $r_{Ent}$ of its grey-level co-occurrence matrix according to formula (3), and taking this five-dimensional vector as the texture feature of the depth image, giving J × L five-dimensional feature vectors in total:

$$V_{j,l} = \big[r_{Con},\, r_{Cor},\, r_{Ene},\, r_{Hom},\, r_{Ent}\big] = \mathrm{GLCM}\big(D^{j,l}(x,y)\big) \qquad (3)$$

wherein GLCM(·) denotes the computation of the grey-level co-occurrence matrix statistics and $V_{j,l}$ is the feature vector of the depth image at scale j and direction l;
(6) clustering the J × L five-dimensional feature vectors obtained in step 5 with the K-means clustering algorithm of formula (4) to obtain K clustering results $\{C_1, C_2, \ldots, C_K\}$:

$$\{C_1, C_2, \ldots, C_K\} = \mathrm{Kmeans}\big(\{V_{j,l}\}\big) \qquad (4)$$

wherein Kmeans(·) denotes the K-means clustering algorithm, class $C_1$ consists of $n_1$ depth images $\{D^{C_1}_1, \ldots, D^{C_1}_{n_1}\}$, and so on up to class $C_K$, which consists of $n_K$ depth images $\{D^{C_K}_1, \ldots, D^{C_K}_{n_K}\}$;
(7) calculating the average gradient within each depth image class obtained in step 6 and selecting the class $C_s$ with the smallest average gradient as the final depth image class according to formula (5):

$$s = \mathop{\arg\min}_{m} \; \frac{1}{n_m}\sum_{d=1}^{n_m} \mathrm{Gradient}\big(D^{C_m}_d\big) \qquad (5)$$

wherein $\arg\min$ returns the class subscript m, with $1 \le m \le K$, Gradient(·) is the average-gradient function, and s is the index of the class with the smallest average gradient;
(8) calculating according to formula (6) the mean of all depth images in the minimum-average-gradient class $C_s$ obtained in step 7, to obtain the final three-dimensional topography reconstruction result of the scene to be measured:

$$D_{final}(x,y) = \frac{1}{n_s}\sum_{d=1}^{n_s} D^{C_s}_d(x,y) \qquad (6)$$

wherein $n_s$ is the number of depth images in the minimum-average-gradient class $C_s$.
CN202110345048.6A 2021-03-31 2021-03-31 A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering Active CN112907748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345048.6A CN112907748B (en) 2021-03-31 2021-03-31 A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110345048.6A CN112907748B (en) 2021-03-31 2021-03-31 A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Publications (2)

Publication Number Publication Date
CN112907748A true CN112907748A (en) 2021-06-04
CN112907748B CN112907748B (en) 2022-07-19

Family

ID=76109565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345048.6A Active CN112907748B (en) 2021-03-31 2021-03-31 A 3D Topography Reconstruction Method Based on Non-downsampling Shearlet Transform and Depth Image Texture Feature Clustering

Country Status (1)

Country Link
CN (1) CN112907748B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971717A (en) * 2021-10-25 2022-01-25 杭州图谱光电科技有限公司 Microscopic three-dimensional reconstruction method based on Markov random field constraint
CN116012607A (en) * 2022-01-27 2023-04-25 华南理工大学 Image weak texture feature extraction method and device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354804A (en) * 2015-10-23 2016-02-24 广州高清视信数码科技股份有限公司 Maximization self-similarity based image super-resolution reconstruction method
CN106228601A (en) * 2016-07-21 2016-12-14 山东大学 Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation
CN107240073A (en) * 2017-05-12 2017-10-10 杭州电子科技大学 A kind of 3 d video images restorative procedure merged based on gradient with clustering
CN108038905A (en) * 2017-12-25 2018-05-15 北京航空航天大学 A kind of Object reconstruction method based on super-pixel
CN109903372A (en) * 2019-01-28 2019-06-18 中国科学院自动化研究所 Depth map super-resolution completion method and high-quality 3D reconstruction method and system
US10405005B1 (en) * 2018-11-28 2019-09-03 Sherman McDermott Methods and systems for video compression based on dynamic vector wave compression
CN112489196A (en) * 2020-11-30 2021-03-12 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LAI YILI et al.: "Three-dimensional Video Inpainting Combined with Gradient Fusion and Cluster", Journal of Computer Aided Design & Computer Graphics *
Hu Zelong: "Research and Application of Three-dimensional Reconstruction of Blood Vessels in Medical Images", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971717A (en) * 2021-10-25 2022-01-25 杭州图谱光电科技有限公司 Microscopic three-dimensional reconstruction method based on Markov random field constraint
CN116012607A (en) * 2022-01-27 2023-04-25 华南理工大学 Image weak texture feature extraction method and device, equipment and storage medium
CN116012607B (en) * 2022-01-27 2023-09-01 华南理工大学 Image weak texture feature extraction method and device, equipment, storage medium

Also Published As

Publication number Publication date
CN112907748B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN111932468B (en) Bayesian image denoising method based on noise-containing image distribution constraint
CN101539629B (en) Change Detection Method of Remote Sensing Image Based on Multi-Feature Evidence Fusion and Structural Similarity
CN109949349B (en) Multi-mode three-dimensional image registration and fusion display method
CN109872305B (en) No-reference stereo image quality evaluation method based on quality map generation network
CN105006001B (en) A kind of method for evaluating quality for having ginseng image based on nonlinear organization similarity deviation
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN111160176A (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN110070574B (en) Binocular vision stereo matching method based on improved PSMAT net
CN112907748A (en) Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering
CN112308873B (en) Edge detection method for multi-scale Gabor wavelet PCA fusion image
CN107610118B (en) A dM-based image segmentation quality assessment method
CN111340702A (en) A Sparse Reconstruction Method for High-Frequency Ultrasound Microscopic Imaging of Small Defects Based on Blind Estimation
CN107944497A (en) Image block method for measuring similarity based on principal component analysis
CN108550146A (en) A kind of image quality evaluating method based on ROI
CN110766657B (en) A method for evaluating the quality of laser interference images
CN110223331B (en) Brain MR medical image registration method
CN104008386A (en) Method and system for identifying type of tumor
CN112070717A (en) Power transmission line icing thickness detection method based on image processing
CN116188458B (en) Intelligent recognition method for abnormal deformation of surface of die-casting die of automobile part
CN114331989B (en) Full-reference 3D point cloud quality assessment method based on point feature histogram geodesic distance
CN111681272A (en) A SAR Image Processing Method Based on Singularity Power Spectrum
CN116400724A (en) Intelligent inspection method for unmanned aerial vehicle of power transmission line
CN109447952B (en) Semi-reference image quality evaluation method based on Gabor differential box weighting dimension
CN116596922B (en) Production quality detection method of solar water heater
CN112581453A (en) Depth, structure and angle-based non-reference light field image quality evaluation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231120

Address after: East Area, 6th Floor, Qilian Building, No. 200 Nanzhonghuan Street, Xiaodian District, Taiyuan City, Shanxi Province, 030000

Patentee after: Chuangbai technology transfer (Shanxi) Co.,Ltd.

Address before: 030006, No. 92, Hollywood Road, Xiaodian District, Shanxi, Taiyuan

Patentee before: SHANXI University
