CN112907748B - Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering - Google Patents
Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering
- Publication number
- CN112907748B CN112907748B CN202110345048.6A CN202110345048A CN112907748B CN 112907748 B CN112907748 B CN 112907748B CN 202110345048 A CN202110345048 A CN 202110345048A CN 112907748 B CN112907748 B CN 112907748B
- Authority
- CN
- China
- Prior art keywords
- equal
- image
- depth image
- shear wave
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a three-dimensional shape reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering. The method comprises the following steps: step 1, acquiring an image sequence of the scene to be measured; step 2, setting the parameters of the non-downsampling shear wave transformation and of the clustering algorithm; step 3, decomposing the image sequence with the non-downsampling shear wave transformation into high-frequency coefficients of multiple scales and directions; step 4, mapping all the high-frequency coefficients into multiple depth images; step 5, taking the contrast, correlation, energy, inverse variance and entropy of the gray-level co-occurrence matrix of each depth image as its five-dimensional texture feature vector; step 6, obtaining K clusters with the K-means clustering algorithm; step 7, among the clustering results, selecting the class whose depth images have the minimum average gradient; and step 8, computing the mean of the depth images in that class to obtain the three-dimensional shape reconstruction result of the scene to be measured. The invention can achieve an optimal three-dimensional shape reconstruction result adapted to the scene.
Description
Technical Field
The invention belongs to the field of three-dimensional reconstruction, and particularly relates to a three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering.
Background
Methods that measure the three-dimensional topography of a scene from image focus information generally offer low dependence on hardware, easy parallelization of the three-dimensional reconstruction algorithm, and strong portability of the reconstruction system; they are therefore widely applied in fields such as part defect detection in micro-manufacturing and intelligent zooming of mobile imaging devices.
At present, three-dimensional shape reconstruction based on image focus information mainly concerns two aspects: the design of image focus evaluation indexes and the construction of shape reconstruction algorithms. The image focus evaluation index is the core link of the method, since the accuracy of the extracted focus information directly determines the quality of the three-dimensional reconstruction result. Typical indexes fall into the spatial domain and the frequency domain. Spatial-domain methods decide, at the pixel level, whether the current pixel lies within the focused region, and then obtain the three-dimensional shape of the scene by aggregating the position information of all focused pixels; these indexes can be roughly divided into three categories: Laplacian-based, gradient-based and statistics-based. Frequency-domain methods first transform the image into high- and low-frequency components and then obtain the reconstruction result by mining the relation between these components and the depth image; they mainly comprise Fourier-transform and wavelet-transform approaches. The shape reconstruction algorithm chiefly serves to overcome the discontinuity introduced into the reconstruction result by the sampling interval of the image sequence, Gaussian fitting being the main representative method.
From the current state of the art, the main challenge in this field is that existing three-dimensional shape reconstruction methods can generally only reconstruct a single type of scene and cannot be transferred to three-dimensional reconstruction tasks of other scenes; that is, the reconstruction quality for different scenes depends on choosing the right image focus evaluation index. How to provide a scene-adaptive image focus evaluation index is therefore an important problem in the field of three-dimensional topography reconstruction.
In summary, selecting the image focus evaluation index according to the image characteristics of the scene is the key to solving the above problem. The invention introduces the non-downsampling shear wave transformation to overcome the single, fixed focus evaluation index of traditional three-dimensional shape reconstruction methods: the transformation yields multiple image focus evaluation indexes covering arbitrary directions and scales of the image, from which multiple depth images of different scales and directions are obtained; a clustering method based on depth image texture features is then provided to obtain the optimal three-dimensional reconstruction result representing the scene to be measured.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a three-dimensional shape reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering.
The technical scheme adopted by the invention is as follows: a three-dimensional shape reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering, comprising the following steps:
step 1, acquiring image sequences of different depths of field of the scene to be measured by adjusting the distance between the camera and the scene at equal intervals, obtaining {I_i(x, y)}, wherein i denotes the image index with 1 ≤ i ≤ N, and (x, y) denotes the pixel coordinates with 0 ≤ x, y ≤ M − 1;
step 2, setting the maximum decomposition scale of the non-downsampling shear wave transformation to J, the maximum number of directions to L, the filters of the transformation, the number of clusters K in the clustering algorithm, and the distance metric to the Euclidean distance;
step 3, performing the non-downsampling shear wave transformation (NSST) on the image sequence {I_i(x, y)} of step 1; as shown in formula (1), each image yields J × L high-frequency decomposition coefficients of different scales and directions:

C_i^{j,l}(x, y) = NSST(I_i(x, y))    (1)

wherein j denotes the scale index with 1 ≤ j ≤ J, l denotes the direction index with 1 ≤ l ≤ L, C_i^{j,l}(x, y) denotes the high-frequency decomposition coefficient of the i-th image at scale j and direction l, the subscript ihigh of a high-frequency coefficient C_ihigh^{j,l} ranges over 1 ≤ ihigh ≤ N, and NSST(·) denotes the non-downsampling shear wave transformation;
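Step 3 can be illustrated in code. There is no single standard NSST implementation in common Python stacks, so the sketch below uses FFT-domain radial-annulus and angular-wedge masks as a hedged stand-in that only reproduces the J × L multi-scale, multi-direction, non-decimated layout of the coefficients; `directional_bandpass_stack` and all its parameters are illustrative assumptions, not the patent's shearing filters.

```python
import numpy as np

def directional_bandpass_stack(img, J=2, L=4):
    """Hedged stand-in for NSST: undecimated FFT-domain band-pass
    filters split into J dyadic radial scales and L angular wedges.
    Real NSST uses shearing filters; this only mimics the J x L
    multi-scale, multi-direction, same-size (non-downsampled) output."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[-(M // 2):M - M // 2, -(N // 2):N - N // 2]
    radius = np.hypot(yy, xx) / (min(M, N) / 2)   # normalized frequency radius
    angle = np.mod(np.arctan2(yy, xx), np.pi)     # orientation folded into [0, pi)
    bands = np.empty((J, L, M, N))
    for j in range(J):
        lo, hi = 0.5 ** (j + 1), 0.5 ** j         # dyadic annulus for scale j
        ring = (radius > lo) & (radius <= hi)
        for l in range(L):
            wedge = (angle >= l * np.pi / L) & (angle < (l + 1) * np.pi / L)
            bands[j, l] = np.real(np.fft.ifft2(np.fft.ifftshift(F * ring * wedge)))
    return bands  # shape (J, L, M, N): same spatial size as the input

# toy image sequence of N = 5 images, each decomposed into J x L bands
imgs = [np.random.default_rng(i).random((32, 32)) for i in range(5)]
coeffs = np.stack([directional_bandpass_stack(im) for im in imgs])
print(coeffs.shape)
```

The key property carried over from formula (1) is the bookkeeping: every image keeps its full resolution in every band, giving an (N, J, L, M, N) coefficient stack to feed into step 4.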
step 4, mapping, according to formula (2), the J × L sets of high-frequency coefficients C_ihigh^{j,l}(x, y) of different scales and directions into J × L depth images D^{j,l}(x, y) of different scales and directions:

D^{j,l}(x, y) = argmax_{ihigh} abs(C_ihigh^{j,l}(x, y))    (2)

wherein ihigh denotes the subscript of the high-frequency coefficient corresponding to the ihigh-th image, with 1 ≤ ihigh ≤ N; argmax_{ihigh}(·) denotes the function that returns the coefficient subscript ihigh maximizing its argument, and abs(·) denotes the absolute value function;
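The mapping of step 4 — per pixel, take the index of the image whose high-frequency response has the largest magnitude — reduces to a single `argmax` over the image axis. A minimal numpy sketch with toy data:

```python
import numpy as np

# coeffs: the N images' high-frequency responses at one fixed (j, l)
# band, stacked along axis 0 -> shape (N, H, W).  Formula (2) picks,
# per pixel, the image index with the largest response magnitude.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((10, 16, 16))   # toy stand-in for C_ihigh^{j,l}
depth = np.argmax(np.abs(coeffs), axis=0)    # 0-based index of sharpest image
depth_1based = depth + 1                     # the patent indexes images from 1
print(depth.shape, int(depth_1based.min()) >= 1)
```

Because the images were captured at equal distance intervals (step 1), this per-pixel index is directly proportional to depth, which is why the result can be read as a depth image.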
step 5, calculating for each depth image D^{j,l}(x, y) its gray-level co-occurrence matrix and, according to formula (3), taking the contrast r_Con, correlation r_Cor, energy r_Ene, inverse variance r_Hom and entropy r_Ent of that matrix as the five-dimensional feature vector of the depth image, obtaining J × L five-dimensional feature vectors:

V^{j,l} = [r_Con, r_Cor, r_Ene, r_Hom, r_Ent] = GLCM(D^{j,l}(x, y))    (3)

wherein GLCM(·) denotes the computation function of the gray-level co-occurrence matrix features, and V^{j,l} denotes the feature vector of the depth image at scale j and direction l;
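The five GLCM features of step 5 can be computed directly in numpy. The sketch below uses a single co-occurrence offset (one pixel to the right) and an 8-level quantization; both are assumptions, since the patent does not fix these parameters:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Five GLCM texture features for one depth image: contrast,
    correlation, energy, inverse variance (homogeneity), entropy.
    Single offset (1 px to the right) and 8-level quantization are
    illustrative choices, not the patent's exact settings."""
    q = np.clip((img * levels / (img.max() + 1e-12)).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count pairs
    p = glcm / glcm.sum()                                      # normalize
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()                               # r_Con
    correlation = (((i - mu_i) * (j - mu_j) * p).sum()
                   / (s_i * s_j + 1e-12))                             # r_Cor
    energy = (p ** 2).sum()                                           # r_Ene
    homogeneity = (p / (1 + (i - j) ** 2)).sum()                      # r_Hom
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()                   # r_Ent
    return np.array([contrast, correlation, energy, homogeneity, entropy])

v = glcm_features(np.random.default_rng(1).random((32, 32)))
print(v.shape)  # one five-dimensional feature vector V^{j,l}
```

A library such as scikit-image (`graycomatrix`/`graycoprops`) offers the first four properties directly; entropy would still need the manual sum shown above.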
step 6, clustering the J × L five-dimensional feature vectors obtained in step 5 with the K-means clustering algorithm according to formula (4), obtaining K clustering results {C_1, C_2, …, C_K}:

{C_1, C_2, …, C_K} = Kmeans({V^{j,l}}, K)    (4)

wherein Kmeans(·) denotes the K-means clustering algorithm; class C_1 contains a set of n_1 depth images, and by analogy class C_K contains a set of n_K depth images, with n_1 + n_2 + … + n_K = J × L;
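Step 6 is plain K-means with the Euclidean distance metric fixed in step 2. A minimal Lloyd's-algorithm sketch (a library call such as `sklearn.cluster.KMeans` would serve equally well; this stand-alone version just makes the mechanics explicit):

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    """Minimal K-means (Lloyd's algorithm) with Euclidean distance.
    A sketch of step 6, not a production implementation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]  # init from data points
    for _ in range(iters):
        # distance of every vector to every center, then nearest-center labels
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers; keep the old one if a cluster went empty
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# J*L five-dimensional feature vectors (e.g. J = 3, L = 4) -> K clusters
feats = np.random.default_rng(2).random((12, 5))
labels = kmeans(feats, K=3)
print(sorted(set(int(l) for l in labels)))
```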
step 7, calculating the average gradient within each depth image class obtained in step 6 and, according to formula (5), selecting the class C_s with the minimum average gradient as the final depth image class:

s = argmin_m (1 / n_m) Σ_{D ∈ C_m} Gradient(D)    (5)

wherein argmin_m(·) denotes the function that returns the class subscript m, with 1 ≤ m ≤ K; Gradient(·) is the gradient function, and s is the subscript of the class with the minimum average gradient;
step 8, calculating, according to formula (6), the mean of all depth images in the minimum-average-gradient class C_s obtained in step 7, giving the final three-dimensional shape reconstruction result Z(x, y) of the scene to be measured:

Z(x, y) = (1 / n_s) Σ_{D ∈ C_s} D(x, y)    (6)
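Steps 7 and 8 can be sketched together: compute each class's mean gradient magnitude, pick the minimum, and average that class's depth images. The patent does not spell out Gradient(·), so using `np.gradient` for it is an assumption of this sketch:

```python
import numpy as np

def avg_gradient(img):
    """Mean gradient magnitude of one depth image; np.gradient as a
    stand-in for the patent's unspecified Gradient(.) function."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()

rng = np.random.default_rng(3)
depths = rng.random((12, 16, 16))   # J*L toy depth images D^{j,l}
labels = np.arange(12) % 3          # toy cluster labels from step 6 (K = 3)

# step 7 / formula (5): class whose members have the smallest mean gradient
class_grad = [np.mean([avg_gradient(d) for d in depths[labels == k]])
              for k in range(3)]
s = int(np.argmin(class_grad))

# step 8 / formula (6): reconstruction = pixel-wise mean of class C_s
Z = depths[labels == s].mean(axis=0)
print(Z.shape)
```

The minimum-gradient criterion favors the smoothest cluster of depth maps, i.e. the one least contaminated by focus-measure noise, before the final averaging.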
For different scenes to be measured, the method obtains the optimal three-dimensional shape reconstruction result suited to each scene.
Drawings
FIG. 1 is a flow chart of a three-dimensional topography reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering;
FIG. 2 is a schematic diagram of a three-dimensional topography reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering.
Detailed Description
As shown in fig. 1 and fig. 2, the three-dimensional shape reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering of this embodiment comprises the following steps:
step 1, acquiring image sequences of different depths of field of the scene to be measured by adjusting the distance between the camera and the scene at equal intervals, obtaining {I_i(x, y)}, wherein i denotes the image index with 1 ≤ i ≤ N, and (x, y) denotes the pixel coordinates with 0 ≤ x, y ≤ M − 1;
step 2, setting the maximum decomposition scale of the non-downsampling shear wave transformation to J, the maximum number of directions to L, the filters of the transformation, the number of clusters K in the clustering algorithm, and the distance metric to the Euclidean distance;
step 3, performing the non-downsampling shear wave transformation (NSST) on the image sequence {I_i(x, y)} of step 1; as shown in formula (1), each image yields J × L high-frequency decomposition coefficients of different scales and directions:

C_i^{j,l}(x, y) = NSST(I_i(x, y))    (1)

wherein j denotes the scale index with 1 ≤ j ≤ J, l denotes the direction index with 1 ≤ l ≤ L, C_i^{j,l}(x, y) denotes the high-frequency decomposition coefficient of the i-th image at scale j and direction l, the subscript ihigh of a high-frequency coefficient C_ihigh^{j,l} ranges over 1 ≤ ihigh ≤ N, and NSST(·) denotes the non-downsampling shear wave transformation;
step 4, mapping, according to formula (2), the J × L sets of high-frequency coefficients C_ihigh^{j,l}(x, y) of different scales and directions into J × L depth images D^{j,l}(x, y) of different scales and directions:

D^{j,l}(x, y) = argmax_{ihigh} abs(C_ihigh^{j,l}(x, y))    (2)

wherein ihigh denotes the subscript of the high-frequency coefficient corresponding to the ihigh-th image, with 1 ≤ ihigh ≤ N; argmax_{ihigh}(·) denotes the function that returns the coefficient subscript ihigh maximizing its argument, and abs(·) denotes the absolute value function;
step 5, calculating for each depth image D^{j,l}(x, y) its gray-level co-occurrence matrix and, according to formula (3), taking the contrast r_Con, correlation r_Cor, energy r_Ene, inverse variance r_Hom and entropy r_Ent of that matrix as the five-dimensional feature vector of the depth image, obtaining J × L five-dimensional feature vectors:

V^{j,l} = [r_Con, r_Cor, r_Ene, r_Hom, r_Ent] = GLCM(D^{j,l}(x, y))    (3)

wherein GLCM(·) denotes the computation function of the gray-level co-occurrence matrix features, and V^{j,l} denotes the feature vector of the depth image at scale j and direction l;
step 6, clustering the J × L five-dimensional feature vectors obtained in step 5 with the K-means clustering algorithm according to formula (4), obtaining K clustering results {C_1, C_2, …, C_K}:

{C_1, C_2, …, C_K} = Kmeans({V^{j,l}}, K)    (4)

wherein Kmeans(·) denotes the K-means clustering algorithm; class C_1 contains a set of n_1 depth images, and by analogy class C_K contains a set of n_K depth images, with n_1 + n_2 + … + n_K = J × L;
step 7, calculating the average gradient within each depth image class obtained in step 6 and, according to formula (5), selecting the class C_s with the minimum average gradient as the final depth image class:

s = argmin_m (1 / n_m) Σ_{D ∈ C_m} Gradient(D)    (5)

wherein argmin_m(·) denotes the function that returns the class subscript m, with 1 ≤ m ≤ K; Gradient(·) is the gradient function, and s is the subscript of the class with the minimum average gradient;
step 8, calculating, according to formula (6), the mean of all depth images in the minimum-average-gradient class C_s obtained in step 7, giving the final three-dimensional shape reconstruction result Z(x, y) of the scene to be measured:

Z(x, y) = (1 / n_s) Σ_{D ∈ C_s} D(x, y)    (6)
Claims (1)
1. A three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering is characterized by comprising the following steps:
(1) acquiring image sequences of different depths of field of the scene to be measured by adjusting the distance between the camera and the scene at equal intervals, obtaining {I_i(x, y)}, wherein i denotes the image index with 1 ≤ i ≤ N, and (x, y) denotes the pixel coordinates with 0 ≤ x, y ≤ M − 1;
(2) setting the maximum decomposition scale of non-downsampling shear wave transformation as J, setting the maximum direction number as L, setting a filter of the non-downsampling shear wave transformation, setting the clustering number K in a clustering algorithm, and setting the distance measurement standard as Euclidean distance;
(3) performing the non-downsampling shear wave transformation NSST on the image sequence {I_i(x, y)} of step (1); as shown in formula (1), each image yields J × L high-frequency decomposition coefficients of different scales and directions:

C_i^{j,l}(x, y) = NSST(I_i(x, y))    (1)

wherein j denotes the scale index with 1 ≤ j ≤ J, l denotes the direction index with 1 ≤ l ≤ L, C_i^{j,l}(x, y) denotes the high-frequency decomposition coefficient of the i-th image at scale j and direction l, the subscript ihigh of a high-frequency coefficient C_ihigh^{j,l} ranges over 1 ≤ ihigh ≤ N, and NSST denotes the non-downsampling shear wave transformation;
(4) mapping, according to formula (2), the J × L sets of high-frequency coefficients C_ihigh^{j,l}(x, y) of different scales and directions into J × L depth images D^{j,l}(x, y) of different scales and directions:

D^{j,l}(x, y) = argmax_{ihigh} abs(C_ihigh^{j,l}(x, y))    (2)

wherein ihigh denotes the subscript of the high-frequency coefficient corresponding to the ihigh-th image, with 1 ≤ ihigh ≤ N; argmax_{ihigh}(·) denotes the function that returns the coefficient subscript ihigh maximizing its argument, and abs(·) denotes the absolute value function;
(5) calculating for each depth image D^{j,l}(x, y) its gray-level co-occurrence matrix and, according to formula (3), taking the contrast r_Con, correlation r_Cor, energy r_Ene, inverse variance r_Hom and entropy r_Ent of that matrix as the five-dimensional feature vector of the depth image, obtaining J × L five-dimensional feature vectors:

V^{j,l} = [r_Con, r_Cor, r_Ene, r_Hom, r_Ent] = GLCM(D^{j,l}(x, y))    (3)

wherein GLCM(·) denotes the computation function of the gray-level co-occurrence matrix features, and V^{j,l} denotes the feature vector of the depth image at scale j and direction l;
(6) clustering the J × L five-dimensional feature vectors obtained in step (5) with the K-means clustering algorithm according to formula (4), obtaining K clustering results {C_1, C_2, …, C_K}:

{C_1, C_2, …, C_K} = Kmeans({V^{j,l}}, K)    (4)

wherein Kmeans(·) denotes the K-means clustering algorithm; class C_1 contains a set of n_1 depth images, and by analogy class C_K contains a set of n_K depth images, with n_1 + n_2 + … + n_K = J × L;
(7) calculating the average gradient within each depth image class obtained in step (6) and, according to formula (5), selecting the class C_s with the minimum average gradient as the final depth image class:

s = argmin_m (1 / n_m) Σ_{D ∈ C_m} Gradient(D)    (5)

wherein argmin_m(·) denotes the function that returns the class subscript m, with 1 ≤ m ≤ K; Gradient(·) is the gradient function, and s is the subscript of the class with the minimum average gradient;
(8) calculating, according to formula (6), the mean of all depth images in the minimum-average-gradient class C_s obtained in step (7), giving the final three-dimensional shape reconstruction result Z(x, y) of the scene to be measured:

Z(x, y) = (1 / n_s) Σ_{D ∈ C_s} D(x, y)    (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110345048.6A CN112907748B (en) | 2021-03-31 | 2021-03-31 | Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112907748A CN112907748A (en) | 2021-06-04 |
CN112907748B true CN112907748B (en) | 2022-07-19 |
Family
ID=76109565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110345048.6A Active CN112907748B (en) | 2021-03-31 | 2021-03-31 | Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112907748B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113971717A (en) * | 2021-10-25 | 2022-01-25 | 杭州图谱光电科技有限公司 | Microscopic three-dimensional reconstruction method based on Markov random field constraint |
CN116012607B (en) * | 2022-01-27 | 2023-09-01 | 华南理工大学 | Image weak texture feature extraction method and device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105354804A (en) * | 2015-10-23 | 2016-02-24 | 广州高清视信数码科技股份有限公司 | Maximization self-similarity based image super-resolution reconstruction method |
CN106228601A (en) * | 2016-07-21 | 2016-12-14 | 山东大学 | Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation |
CN107240073A (en) * | 2017-05-12 | 2017-10-10 | 杭州电子科技大学 | A kind of 3 d video images restorative procedure merged based on gradient with clustering |
CN108038905A (en) * | 2017-12-25 | 2018-05-15 | 北京航空航天大学 | A kind of Object reconstruction method based on super-pixel |
CN109903372A (en) * | 2019-01-28 | 2019-06-18 | 中国科学院自动化研究所 | Depth map super-resolution complementing method and high quality three-dimensional rebuilding method and system |
US10405005B1 (en) * | 2018-11-28 | 2019-09-03 | Sherman McDermott | Methods and systems for video compression based on dynamic vector wave compression |
CN112489196A (en) * | 2020-11-30 | 2021-03-12 | 太原理工大学 | Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation |
Non-Patent Citations (2)
Title |
---|
"Three-dimensional Video Inpainting Combined with Gradient Fusion and Cluster"; Lai Yili et al.; Journal of Computer Aided Design & Computer Graphics; 20180331; Vol. 30, No. 3; 477-484 *
"Research and Application of Three-dimensional Reconstruction of Blood Vessels in Medical Images"; Hu Zelong; China Master's Theses Full-text Database, Information Science and Technology; 20160215 (No. 02); I138-1716 *
Also Published As
Publication number | Publication date |
---|---|
CN112907748A (en) | 2021-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103049892B (en) | Non-local image denoising method based on similar block matrix rank minimization | |
CN112907748B (en) | Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering | |
Starovoytov et al. | Comparative analysis of the SSIM index and the pearson coefficient as a criterion for image similarity | |
CN104933678B (en) | A kind of image super-resolution rebuilding method based on image pixel intensities | |
CN110135438B (en) | Improved SURF algorithm based on gradient amplitude precomputation | |
CN110070574B (en) | Binocular vision stereo matching method based on improved PSMAT net | |
CN112308873B (en) | Edge detection method for multi-scale Gabor wavelet PCA fusion image | |
WO2019062595A1 (en) | Method for quality evaluation of photoplethysmogram | |
CN113259288B (en) | Underwater sound modulation mode identification method based on feature fusion and lightweight hybrid model | |
CN104268833A (en) | New image fusion method based on shift invariance shearlet transformation | |
CN110516525A (en) | SAR image target recognition method based on GAN and SVM | |
CN112070717A (en) | Power transmission line icing thickness detection method based on image processing | |
CN107944497A (en) | Image block method for measuring similarity based on principal component analysis | |
Jin et al. | Perceptual Gradient Similarity Deviation for Full Reference Image Quality Assessment. | |
CN108428221A (en) | A kind of neighborhood bivariate shrinkage function denoising method based on shearlet transformation | |
CN110223331B (en) | Brain MR medical image registration method | |
CN112598711B (en) | Hyperspectral target tracking method based on joint spectrum dimensionality reduction and feature fusion | |
CN113033602B (en) | Image clustering method based on tensor low-rank sparse representation | |
JP3507083B2 (en) | Polynomial filters for high-order correlation and multi-input information integration | |
Wang et al. | A new method of denoising crop image based on improved SVD in wavelet domain | |
Wu et al. | Research on crack detection algorithm of asphalt pavement | |
CN109447952B (en) | Semi-reference image quality evaluation method based on Gabor differential box weighting dimension | |
CN112489196B (en) | Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation | |
CN113610906B (en) | Multi-parallax image sequence registration method based on fusion image guidance | |
CN102298768A (en) | High-resolution image reconstruction method based on sparse samples |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231120 Address after: East Area, 6th Floor, Qilian Building, No. 200 Nanzhonghuan Street, Xiaodian District, Taiyuan City, Shanxi Province, 030000 Patentee after: Chuangbai technology transfer (Shanxi) Co.,Ltd. Address before: 030006, No. 92, Hollywood Road, Xiaodian District, Shanxi, Taiyuan Patentee before: SHANXI University |
TR01 | Transfer of patent right |