CN112907748B - Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering - Google Patents

Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering

Info

Publication number
CN112907748B
CN112907748B · Application CN202110345048.6A
Authority
CN
China
Prior art keywords
equal
image
depth image
shear wave
class
Prior art date
Legal status
Active
Application number
CN202110345048.6A
Other languages
Chinese (zh)
Other versions
CN112907748A (en
Inventor
闫涛
Current Assignee
Chuangbai Technology Transfer Shanxi Co ltd
Original Assignee
Shanxi University
Priority date
Filing date
Publication date
Application filed by Shanxi University filed Critical Shanxi University
Priority to CN202110345048.6A priority Critical patent/CN112907748B/en
Publication of CN112907748A publication Critical patent/CN112907748A/en
Application granted granted Critical
Publication of CN112907748B publication Critical patent/CN112907748B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a three-dimensional shape reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering. The method comprises the following steps: step 1, collecting an image sequence of the scene to be measured; step 2, setting the parameters of the non-downsampling shearlet transform and of the clustering algorithm; step 3, converting the image sequence into high-frequency coefficients of several different scales and directions by the non-downsampling shearlet transform; step 4, mapping all the high-frequency coefficients into depth images; step 5, taking the contrast, correlation, energy, inverse variance and entropy of the gray-level co-occurrence matrix of each depth image as its five-dimensional texture feature vector; step 6, obtaining K clusters with the K-means clustering algorithm; step 7, selecting, among the clustering results, the class whose depth images have the minimum average gradient; and step 8, computing the mean of the depth images in that class to obtain the three-dimensional shape reconstruction result of the scene to be measured. The invention can achieve the three-dimensional shape reconstruction result best suited to the scene at hand.

Description

Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering
Technical Field
The invention belongs to the field of three-dimensional reconstruction, and particularly relates to a three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering.
Background
Measuring the three-dimensional shape of a scene from image focusing information generally has low dependence on hardware, lends itself to parallel three-dimensional reconstruction algorithms, and yields highly portable reconstruction systems; it is therefore widely applied to part defect detection in micro-manufacturing, intelligent zooming of mobile imaging devices, and related fields.
At present, three-dimensional shape reconstruction based on image focusing information mainly concerns two aspects: the design of image focus evaluation indexes and the construction of shape reconstruction algorithms. The focus evaluation index is the core link of the method, and the accuracy of focus information extraction directly determines the quality of the three-dimensional reconstruction result. Typical focus evaluation indexes fall into the spatial domain and the frequency domain. Spatial-domain methods determine from the pixel values whether a given pixel lies within a focused region, then aggregate the position information of all focused pixels to obtain the three-dimensional shape of the scene; these indexes can be roughly divided into three categories: Laplacian transforms, gradient transforms, and statistical estimates. Frequency-domain methods first transform the image into high- and low-frequency components and then obtain the reconstruction by mining the relation between those components and the depth image; they mainly comprise Fourier-transform and wavelet-transform approaches. The shape reconstruction algorithm chiefly compensates the discontinuity introduced into the result by the sampling interval of the image sequence; its main representative is Gaussian fitting.
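A minimal spatial-domain focus measure of the Laplacian category mentioned above can be sketched as follows (an illustrative Python/NumPy sketch, not part of the patent; the function name and array sizes are assumptions):

```python
import numpy as np
from scipy import ndimage

def laplacian_focus_map(image):
    """Per-pixel focus measure: squared Laplacian response.

    In-focus regions carry high-frequency detail, so the
    Laplacian responds strongly there."""
    lap = ndimage.laplace(image.astype(np.float64))
    return lap ** 2

# Toy check: a blurred copy of an image should score lower on average.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = ndimage.gaussian_filter(sharp, sigma=3)

f_sharp = laplacian_focus_map(sharp).mean()
f_blur = laplacian_focus_map(blurred).mean()
print(f_sharp > f_blur)  # True
```

Aggregating, per pixel, the index of the image in the sequence that maximizes such a measure is what yields the depth map in these spatial-domain methods.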
From the current state of the art, the main challenge in this field is the following: existing three-dimensional shape reconstruction methods generally handle only a single scene and do not transfer to reconstruction tasks in other scenes; in other words, the reconstruction quality across different scenes depends on how accurately the image focus evaluation index is chosen. How to provide a scene-adaptive image focus evaluation index is therefore an important open problem in three-dimensional topography reconstruction.
In summary, selecting the image focus evaluation index according to the image characteristics of the scene is the key to solving the above problem. The invention introduces the non-downsampling shearlet transform to overcome the singleness of the focus evaluation index in traditional three-dimensional shape reconstruction: the transform supplies a family of image focus evaluation indexes covering arbitrary directions and scales in the image, from which depth images of different scales and directions are obtained; a clustering method based on depth image texture features then yields the optimal three-dimensional reconstruction result representing the scene to be measured.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a three-dimensional shape reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering.
The technical scheme adopted by the invention is as follows: a three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering comprises the following steps:
step 1, adjusting the distance between the camera and the scene to be measured at equal intervals to obtain an image sequence of the scene at different depths of field, {I_i(x, y), 1 ≤ i ≤ N}, where i is the image index and (x, y) is the pixel coordinate with 0 ≤ x ≤ M−1 and 0 ≤ y ≤ M−1;
step 2, setting the maximum decomposition scale of the non-downsampling shearlet transform (NSST) to J, the maximum number of directions to L, choosing the filters of the non-downsampling shearlet transform, setting the number of clusters K of the clustering algorithm, and setting the distance metric to the Euclidean distance;
step 3, applying the non-downsampling shearlet transform (NSST) to the image sequence {I_i(x, y)} of step 1; each image yields J × L high-frequency decomposition coefficients of different scales and directions, as in formula (1):

C_i^{j,l}(x, y) = NSST(I_i(x, y))    (1)

where j is the scale index with 1 ≤ j ≤ J, l is the direction index with 1 ≤ l ≤ L, C_i^{j,l}(x, y) is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, ihigh (with 1 ≤ ihigh ≤ N) denotes the image subscript of a high-frequency coefficient C_ihigh^{j,l}, and NSST denotes the non-downsampling shearlet transform;
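No off-the-shelf NSST ships with the common Python scientific stack, so the sketch below substitutes an undecimated bank of scale- and orientation-selective high-pass filters purely to illustrate the J × L coefficient layout of formula (1); the function name, the filter design, and the defaults are illustrative assumptions, not the patent's transform:

```python
import numpy as np
from scipy import ndimage

def directional_highpass_stack(image, J=2, L=4):
    """Return a dict {(j, l): coefficients} of J*L high-frequency
    responses at scales j and orientations l.
    Stand-in for NSST: scale = difference of Gaussians,
    orientation = first derivative along a rotated axis."""
    img = image.astype(np.float64)
    coeffs = {}
    for j in range(1, J + 1):
        sigma = 2.0 ** (j - 1)
        # High-frequency band at scale j (difference of Gaussians).
        band = (ndimage.gaussian_filter(img, sigma)
                - ndimage.gaussian_filter(img, 2 * sigma))
        gy, gx = np.gradient(band)
        for l in range(1, L + 1):
            theta = np.pi * (l - 1) / L
            # Directional derivative of the band along angle theta.
            coeffs[(j, l)] = np.cos(theta) * gx + np.sin(theta) * gy
    return coeffs

img = np.random.default_rng(1).random((64, 64))
c = directional_highpass_stack(img, J=2, L=4)
print(len(c))           # 8 coefficient arrays (J*L)
print(c[(1, 1)].shape)  # (64, 64): undecimated, same size as input
```

Each of the J × L arrays plays the role of one C^{j,l} coefficient plane; a faithful NSST would replace the filter bank while keeping the same layout.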
step 4, mapping the J × L high-frequency coefficients C_ihigh^{j,l}(x, y) of different scales and directions into J × L depth images D^{j,l}(x, y) according to formula (2):

D^{j,l}(x, y) = argmax_{ihigh} abs(C_ihigh^{j,l}(x, y)),  1 ≤ ihigh ≤ N    (2)

where ihigh is the index of the high-frequency coefficient corresponding to the i-th image, argmax_{ihigh}(·) is the function returning the subscript ihigh that maximizes its argument, and abs(·) is the absolute-value function;
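The per-pixel argmax of formula (2) can be sketched in a few lines of NumPy (illustrative only; the convention of stacking the N coefficient planes along axis 0 is an assumption, not from the patent):

```python
import numpy as np

def depth_from_coefficients(coeff_stack):
    """coeff_stack: (N, H, W) high-frequency coefficients of the N
    images at one scale/direction pair. Returns an (H, W) depth image
    holding, per pixel, the index ihigh that maximizes
    |C_ihigh(x, y)|, as in formula (2)."""
    return np.argmax(np.abs(coeff_stack), axis=0)

# Toy example: 3 images, 2x2 pixels.
stack = np.array([
    [[0.1, -0.9], [0.2, 0.0]],
    [[0.5,  0.1], [0.1, 0.3]],
    [[-0.2, 0.4], [0.9, -0.1]],
])
print(depth_from_coefficients(stack))
# [[1 0]
#  [2 1]]
```

The resulting index map is a depth image because the image sequence was captured at equally spaced camera distances, so the winning index encodes depth directly.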
step 5, computing the gray-level co-occurrence matrix of each depth image D^{j,l}(x, y) and taking its contrast r_Con, correlation r_Cor, energy r_Ene, inverse variance r_Hom and entropy r_Ent as the five-dimensional feature vector of that depth image according to formula (3), so that the J × L depth images yield J × L five-dimensional feature vectors:

V_{j,l} = [r_Con, r_Cor, r_Ene, r_Hom, r_Ent] = GLCM(D^{j,l}(x, y))    (3)

where GLCM(·) is the function computing the gray-level co-occurrence matrix features, and V_{j,l} is the feature vector of the depth image at scale j and direction l;
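The five GLCM statistics of step 5 can be sketched directly in NumPy. This is a hedged stand-in: the quantization level count, the horizontal distance-1 neighbor offset, and the exact "energy"/"inverse variance" formulas follow common GLCM conventions and are assumptions, since the patent does not fix them:

```python
import numpy as np

def glcm_feature_vector(depth_image, levels=16):
    """Five-dimensional texture vector of a depth image from its
    gray-level co-occurrence matrix (symmetric, horizontal neighbor):
    [contrast, correlation, energy, inverse variance, entropy]."""
    d = depth_image.astype(np.float64)
    # Quantize to a small number of gray levels.
    q = np.floor((d - d.min()) / (np.ptp(d) + 1e-12) * (levels - 1)).astype(int)

    # Co-occurrence counts of horizontally adjacent level pairs.
    p = np.zeros((levels, levels))
    np.add.at(p, (q[:, :-1], q[:, 1:]), 1)
    p = p + p.T          # symmetric
    p /= p.sum()         # normalize to a joint distribution

    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())

    contrast = (((i - j) ** 2) * p).sum()
    correlation = (((i - mu_i) * (j - mu_j) * p).sum()) / (sd_i * sd_j + 1e-12)
    energy = (p ** 2).sum()
    inv_variance = (p / (1.0 + (i - j) ** 2)).sum()
    entropy = -(p * np.log2(p + 1e-12)).sum()
    return np.array([contrast, correlation, energy, inv_variance, entropy])

depth = np.random.default_rng(2).integers(0, 100, size=(32, 32))
v = glcm_feature_vector(depth)
print(v.shape)  # (5,)
```

Smooth depth maps concentrate mass near the GLCM diagonal (low contrast, high inverse variance), which is exactly the property the later clustering and gradient-selection steps exploit.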
step 6, clustering the J × L five-dimensional feature vectors obtained in step 5 with the K-means clustering algorithm according to formula (4), thereby partitioning the corresponding depth images into K clusters {C_1, C_2, …, C_K}:

{C_1, C_2, …, C_K} = Kmeans({V_{j,l} | 1 ≤ j ≤ J, 1 ≤ l ≤ L}, K)    (4)

where Kmeans(·) denotes the K-means clustering algorithm; writing n_k for the number of depth images in class C_k, class C_1 contains the n_1 depth images {D_1^{(1)}, …, D_{n_1}^{(1)}}, and so on up to class C_K, which contains the n_K depth images {D_1^{(K)}, …, D_{n_K}^{(K)}}, with n_1 + n_2 + … + n_K = J × L;
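Step 6 can be sketched with a minimal Lloyd-style K-means using the Euclidean metric fixed in step 2 (a self-contained illustration; the farthest-point initialization and all names are assumptions, not the patent's choices):

```python
import numpy as np

def kmeans_euclidean(X, K, iters=100):
    """Minimal K-means with Euclidean distance.
    X: (n, d) matrix of feature vectors. Returns (labels, centers)."""
    # Deterministic farthest-point initialization.
    centers = [X[0]]
    for _ in range(K - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each vector to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep old center if a cluster empties.
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated blobs of 5-D feature vectors.
rng = np.random.default_rng(3)
blob_a = rng.normal(0.0, 0.1, size=(10, 5))
blob_b = rng.normal(5.0, 0.1, size=(10, 5))
X = np.vstack([blob_a, blob_b])
labels, _ = kmeans_euclidean(X, K=2)
print(len(set(labels[:10])), len(set(labels[10:])))  # 1 1: one label per blob
```

The label array then partitions the J × L depth images into the classes C_1 … C_K used in steps 7 and 8.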
Step 7, calculating the average gradient in all the depth image classes obtained in the step 6, and selecting the class C with the minimum average gradient according to the formula (5)sAs a final depth image class;
Figure BDA0003000546680000039
wherein
Figure BDA00030005466800000310
Representing a function for solving the subscript m of the depth image class, wherein the value range of m is more than or equal to 1 and less than or equal to K, Gradient () is a Gradient function, and s is the serial number of the minimum class of the average Gradient;
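The class selection of formula (5) amounts to an argmin over per-class mean gradient magnitudes, sketched here with NumPy's finite-difference gradient (the magnitude formula is a common convention and an assumption; the patent does not specify the gradient operator):

```python
import numpy as np

def select_smoothest_class(classes):
    """classes: list of lists of 2-D depth images (the K clusters).
    Returns the index s of the class whose depth images have the
    smallest mean gradient magnitude, as in formula (5)."""
    def mean_gradient(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return np.mean(np.sqrt(gx ** 2 + gy ** 2))
    avg = [np.mean([mean_gradient(d) for d in cls]) for cls in classes]
    return int(np.argmin(avg))

# Toy clusters: one of smooth ramps, one of noisy depth maps.
rng = np.random.default_rng(4)
smooth = [np.tile(np.linspace(0, 1, 16), (16, 1)) for _ in range(3)]
noisy = [rng.random((16, 16)) for _ in range(3)]
print(select_smoothest_class([noisy, smooth]))  # 1: the smooth class wins
```

Preferring the low-gradient class reflects the assumption that a good depth map of a continuous surface varies smoothly, while misfocused indexes produce noisy, high-gradient maps.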
step 8, calculating, according to formula (6), the mean of all depth images in the minimum-average-gradient class C_s obtained in step 7, to obtain the final three-dimensional shape reconstruction result Z(x, y) of the scene to be measured:

Z(x, y) = (1/n_s) Σ_{D ∈ C_s} D(x, y)    (6)

where n_s is the number of depth images in the minimum-average-gradient class C_s.
The method can obtain the optimal three-dimensional shape reconstruction result suitable for the scene according to different scenes to be measured.
Drawings
FIG. 1 is a flow chart of a three-dimensional topography reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering;
FIG. 2 is a schematic diagram of a three-dimensional topography reconstruction method based on non-downsampling shear wave transformation and depth image texture feature clustering.
Detailed Description
As shown in fig. 1 and fig. 2, the three-dimensional shape reconstruction method based on the non-downsampling shearlet transform and depth image texture feature clustering of this embodiment comprises steps 1 to 8 exactly as set forth in the Disclosure of Invention above.

Claims (1)

1. A three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering is characterized by comprising the following steps:
(1) obtaining an image sequence of the scene to be measured at different depths of field, {I_i(x, y), 1 ≤ i ≤ N}, by adjusting the distance between the camera and the scene at equal intervals, where i is the image index and (x, y) is the pixel coordinate with 0 ≤ x ≤ M−1 and 0 ≤ y ≤ M−1;
(2) setting the maximum decomposition scale of the non-downsampling shearlet transform (NSST) to J, the maximum number of directions to L, choosing the filters of the non-downsampling shearlet transform, setting the number of clusters K of the clustering algorithm, and setting the distance metric to the Euclidean distance;
(3) applying the non-downsampling shearlet transform NSST to the image sequence {I_i(x, y)} of step (1); each image yields J × L high-frequency decomposition coefficients of different scales and directions, as in formula (1):

C_i^{j,l}(x, y) = NSST(I_i(x, y))    (1)

where j is the scale index with 1 ≤ j ≤ J, l is the direction index with 1 ≤ l ≤ L, C_i^{j,l}(x, y) is the high-frequency decomposition coefficient of the i-th image at scale j and direction l, ihigh (with 1 ≤ ihigh ≤ N) denotes the image subscript of a high-frequency coefficient C_ihigh^{j,l}, and NSST denotes the non-downsampling shearlet transform;
(4) mapping the J × L high-frequency coefficients C_ihigh^{j,l}(x, y) of different scales and directions into J × L depth images D^{j,l}(x, y) according to formula (2):

D^{j,l}(x, y) = argmax_{ihigh} abs(C_ihigh^{j,l}(x, y)),  1 ≤ ihigh ≤ N    (2)

where ihigh is the index of the high-frequency coefficient corresponding to the i-th image, argmax_{ihigh}(·) is the function returning the subscript ihigh that maximizes its argument, and abs(·) is the absolute-value function;
(5) computing the gray-level co-occurrence matrix of each depth image D^{j,l}(x, y) and taking its contrast r_Con, correlation r_Cor, energy r_Ene, inverse variance r_Hom and entropy r_Ent as the five-dimensional feature vector of that depth image according to formula (3), so that the J × L depth images yield J × L five-dimensional feature vectors:

V_{j,l} = [r_Con, r_Cor, r_Ene, r_Hom, r_Ent] = GLCM(D^{j,l}(x, y))    (3)

where GLCM(·) is the function computing the gray-level co-occurrence matrix features, and V_{j,l} is the feature vector of the depth image at scale j and direction l;
(6) clustering the J × L five-dimensional feature vectors obtained in step (5) with the K-means clustering algorithm according to formula (4), thereby partitioning the corresponding depth images into K clusters {C_1, C_2, …, C_K}:

{C_1, C_2, …, C_K} = Kmeans({V_{j,l} | 1 ≤ j ≤ J, 1 ≤ l ≤ L}, K)    (4)

where Kmeans(·) denotes the K-means clustering algorithm; writing n_k for the number of depth images in class C_k, class C_1 contains the n_1 depth images {D_1^{(1)}, …, D_{n_1}^{(1)}}, and so on up to class C_K, which contains the n_K depth images {D_1^{(K)}, …, D_{n_K}^{(K)}}, with n_1 + n_2 + … + n_K = J × L;
(7) calculating the average gradient of the depth images within each class obtained in step (6), and selecting the class C_s with the minimum average gradient as the final depth image class according to formula (5):

s = argmin_{m} (1/n_m) Σ_{D ∈ C_m} mean(Gradient(D)),  1 ≤ m ≤ K    (5)

where argmin_m(·) is the function returning the class subscript m that minimizes its argument, Gradient(·) is the gradient function, and s is the subscript of the class with the minimum average gradient;
(8) calculating, according to formula (6), the mean of all depth images in the minimum-average-gradient class C_s obtained in step (7), to obtain the final three-dimensional shape reconstruction result Z(x, y) of the scene to be measured:

Z(x, y) = (1/n_s) Σ_{D ∈ C_s} D(x, y)    (6)

where n_s is the number of depth images in the minimum-average-gradient class C_s.
CN202110345048.6A 2021-03-31 2021-03-31 Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering Active CN112907748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345048.6A CN112907748B (en) 2021-03-31 2021-03-31 Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110345048.6A CN112907748B (en) 2021-03-31 2021-03-31 Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering

Publications (2)

Publication Number Publication Date
CN112907748A CN112907748A (en) 2021-06-04
CN112907748B true CN112907748B (en) 2022-07-19

Family

ID=76109565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345048.6A Active CN112907748B (en) 2021-03-31 2021-03-31 Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering

Country Status (1)

Country Link
CN (1) CN112907748B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971717A (en) * 2021-10-25 2022-01-25 杭州图谱光电科技有限公司 Microscopic three-dimensional reconstruction method based on Markov random field constraint
CN116012607B (en) * 2022-01-27 2023-09-01 华南理工大学 Image weak texture feature extraction method and device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354804A (en) * 2015-10-23 2016-02-24 广州高清视信数码科技股份有限公司 Maximization self-similarity based image super-resolution reconstruction method
CN106228601A (en) * 2016-07-21 2016-12-14 山东大学 Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation
CN107240073A (en) * 2017-05-12 2017-10-10 杭州电子科技大学 A kind of 3 d video images restorative procedure merged based on gradient with clustering
CN108038905A (en) * 2017-12-25 2018-05-15 北京航空航天大学 A kind of Object reconstruction method based on super-pixel
CN109903372A (en) * 2019-01-28 2019-06-18 中国科学院自动化研究所 Depth map super-resolution complementing method and high quality three-dimensional rebuilding method and system
US10405005B1 (en) * 2018-11-28 2019-09-03 Sherman McDermott Methods and systems for video compression based on dynamic vector wave compression
CN112489196A (en) * 2020-11-30 2021-03-12 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354804A (en) * 2015-10-23 2016-02-24 广州高清视信数码科技股份有限公司 Maximization self-similarity based image super-resolution reconstruction method
CN106228601A (en) * 2016-07-21 2016-12-14 山东大学 Multiple dimensioned pyramidal CT image quick three-dimensional reconstructing method based on wavelet transformation
CN107240073A (en) * 2017-05-12 2017-10-10 杭州电子科技大学 A kind of 3 d video images restorative procedure merged based on gradient with clustering
CN108038905A (en) * 2017-12-25 2018-05-15 北京航空航天大学 A kind of Object reconstruction method based on super-pixel
US10405005B1 (en) * 2018-11-28 2019-09-03 Sherman McDermott Methods and systems for video compression based on dynamic vector wave compression
CN109903372A (en) * 2019-01-28 2019-06-18 中国科学院自动化研究所 Depth map super-resolution complementing method and high quality three-dimensional rebuilding method and system
CN112489196A (en) * 2020-11-30 2021-03-12 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Three-dimensional Video Inpainting Combined with Gradient Fusion and Cluster"; Lai Yili et al.; Journal of Computer Aided Design & Computer Graphics; March 2018; Vol. 30, No. 3; pp. 477-484 *
"Research and Application of Three-dimensional Reconstruction of Blood Vessels in Medical Images"; Hu Zelong; China Masters' Theses Full-text Database, Information Science and Technology Series; Feb. 15, 2016; No. 02; I138-1716 *

Also Published As

Publication number Publication date
CN112907748A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN103049892B (en) Non-local image denoising method based on similar block matrix rank minimization
CN112907748B (en) Three-dimensional shape reconstruction method based on non-down-sampling shear wave transformation and depth image texture feature clustering
Starovoytov et al. Comparative analysis of the SSIM index and the pearson coefficient as a criterion for image similarity
CN104933678B (en) A kind of image super-resolution rebuilding method based on image pixel intensities
CN110135438B (en) Improved SURF algorithm based on gradient amplitude precomputation
CN110070574B (en) Binocular vision stereo matching method based on improved PSMAT net
CN112308873B (en) Edge detection method for multi-scale Gabor wavelet PCA fusion image
WO2019062595A1 (en) Method for quality evaluation of photoplethysmogram
CN113259288B (en) Underwater sound modulation mode identification method based on feature fusion and lightweight hybrid model
CN104268833A (en) New image fusion method based on shift invariance shearlet transformation
CN110516525A (en) SAR image target recognition method based on GAN and SVM
CN112070717A (en) Power transmission line icing thickness detection method based on image processing
CN107944497A (en) Image block method for measuring similarity based on principal component analysis
Jin et al. Perceptual Gradient Similarity Deviation for Full Reference Image Quality Assessment.
CN108428221A (en) A kind of neighborhood bivariate shrinkage function denoising method based on shearlet transformation
CN110223331B (en) Brain MR medical image registration method
CN112598711B (en) Hyperspectral target tracking method based on joint spectrum dimensionality reduction and feature fusion
CN113033602B (en) Image clustering method based on tensor low-rank sparse representation
JP3507083B2 (en) Polynomial filters for high-order correlation and multi-input information integration
Wang et al. A new method of denoising crop image based on improved SVD in wavelet domain
Wu et al. Research on crack detection algorithm of asphalt pavement
CN109447952B (en) Semi-reference image quality evaluation method based on Gabor differential box weighting dimension
CN112489196B (en) Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation
CN113610906B (en) Multi-parallax image sequence registration method based on fusion image guidance
CN102298768A (en) High-resolution image reconstruction method based on sparse samples

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231120

Address after: East Area, 6th Floor, Qilian Building, No. 200 Nanzhonghuan Street, Xiaodian District, Taiyuan City, Shanxi Province, 030000

Patentee after: Chuangbai technology transfer (Shanxi) Co.,Ltd.

Address before: 030006, No. 92, Hollywood Road, Xiaodian District, Shanxi, Taiyuan

Patentee before: SHANXI University

TR01 Transfer of patent right