CN111260776B - Three-dimensional shape reconstruction method for adaptive normal analysis - Google Patents

Three-dimensional shape reconstruction method for adaptive normal analysis

Info

Publication number
CN111260776B
CN111260776B (application CN202010082939.2A)
Authority
CN
China
Prior art keywords
sequence
reconstructed
formula
depth
image
Prior art date
Legal status
Active
Application number
CN202010082939.2A
Other languages
Chinese (zh)
Other versions
CN111260776A (en
Inventor
闫涛
胡治国
吴鹏
钱宇华
徐丽云
Current Assignee
Shanxi University
Original Assignee
Shanxi University
Priority date
Filing date
Publication date
Application filed by Shanxi University filed Critical Shanxi University
Priority to CN202010082939.2A
Publication of CN111260776A
Application granted
Publication of CN111260776B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a three-dimensional topography reconstruction method for adaptive normal analysis. The technical scheme is as follows: step 1, taking the current position of the object to be reconstructed as the center, analyze the variation trend of the focus measure information over the continuous image sequence and determine the corresponding gradient change sequence; step 2, determine a candidate depth sequence interval by measuring the distance between the position of the maximum of the focus measure sequence and the position of the maximum of the gradient sequence; step 3, select the position of the focus measure maximum within the candidate depth sequence interval as the depth result for the current position; step 4, traverse the object point by point to obtain the depth results for all positions. The method greatly reduces erroneous depth values in the reconstructed three-dimensional topography and effectively improves the reconstruction accuracy for objects in real scenes.

Description

Three-dimensional shape reconstruction method for adaptive normal analysis
Technical Field
The invention relates to the field of three-dimensional shape reconstruction, in particular to a three-dimensional shape reconstruction method for adaptive normal analysis.
Background
Estimating the depth of an object to be reconstructed from the depth of field of an image sequence has become an important component of the three-dimensional reconstruction field, and the consistency and continuity of the reconstructed three-dimensional topography are the most important evaluation indexes of such methods. In recent years, obtaining a high-precision three-dimensional topography of an object in an open environment has therefore received attention in both industry and academia, as it provides methods and theoretical foundations for applications in biomedicine, intelligent manufacturing, and other fields.
At present, image-sequence-based three-dimensional topography reconstruction comprises two main steps: selection of a focus measure function and design of a topography approximation algorithm. Focus measure functions are mainly represented by Laplacian-type operators and wavelet transform models. Laplacian-type operators compute the edge information of an image accurately, but depth information originates not only from edges but also from part of the high-frequency content, so a typical Laplacian-type focus measure cannot accurately reflect the depth of the object to be reconstructed. Wavelet transform models obtain the high- and low-frequency parts of the image sequence through time-frequency conversion and then use statistical characteristics of the high-frequency information as the basis for depth judgment; such methods achieve high reconstruction accuracy on image data with little noise interference, but in an open environment the image acquisition process is disturbed by many factors and the type of noise is difficult to determine accurately, so they are unsuitable for reconstructing real scenes with unknown noise distributions. The essence of a topography approximation algorithm is to restore the depth at deviating positions from the depth information of the surrounding area; this is a post-processing step of the reconstruction method, and a depth result obtained from the surrounding area obviously cannot accurately reflect the true depth of the current position.
From the current state of the art, methods in this area suffer from the following drawbacks: (1) although a typical focus measure function can reflect the change of depth information in most image sequences, it is easily disturbed by noise, which reduces reconstruction accuracy and prevents its use for three-dimensional topography reconstruction in real scenes; (2) topography approximation, as a post-processing method, cannot reflect the real depth information of the object to be reconstructed; (3) although existing methods address topography reconstruction under some typical noises (such as salt-and-pepper noise and Gaussian noise), high-precision three-dimensional reconstruction under unknown noise distributions and sparse texture detail has received little study. Establishing a higher-precision three-dimensional topography reconstruction method for real scenes remains a difficult problem.
In summary, in depth-of-field image-sequence-based topography reconstruction, the amount of information determined by the window size of the focus measure function plays a central and fundamental role in accurately estimating the depth of the object to be reconstructed. If a mapping model between the window size of the focus measure function and the depth information can be established, the trend of the focus measure analyzed, and the gradient change information further used to overcome the interference of unknown noise, this is of significant value for three-dimensional topography reconstruction in open scenes. The invention first analyzes the essential characteristics of three-dimensional topography reconstruction from a depth-of-field image sequence, jointly determines candidate depth intervals from the focus measure sequence and the corresponding gradient sequence, then applies a normality test to the data distribution of the candidate depth interval, yielding a new three-dimensional topography reconstruction model with adaptive normal analysis.
Disclosure of Invention
The invention aims to provide a three-dimensional topography reconstruction method for adaptive normal analysis that remedies the above defects.
The technical scheme adopted by the invention is as follows: a three-dimensional shape reconstruction method for adaptive normal analysis comprises the following steps:
step 1, firstly, using an image data acquisition platform, acquire image sequences of the object to be reconstructed at different depths of field from the same angle as input, by adjusting the distance between the camera of the platform and the object to be reconstructed; the step lengths between successive images are equal, and the sequence runs from all regions of the object being defocused, through partial regions coming into focus, until all regions are defocused again, yielding an image sequence of the object at different focus settings;
step 2, in the image sequence obtained in step 1, take the current position I_i(x, y), 1 ≤ i ≤ n, as the center of an m × m local window, where n is the total number of images and (x, y) is the pixel position, and obtain the focus measure sequence {FM_i(x, y)}_{i=1}^{n} of the n local image patches according to equation (1):

FM_i(x, y) = XSML(I_i(p, q))    (1)

wherein 1 ≤ i ≤ n, (p, q) ranges over the m × m window centered at (x, y), and XSML(·) is the focus measure function defined in equation (9);
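As a sketch of step 2, the focus measure sequence for one pixel can be computed by sliding the same local window through every image of the stack. The helper name `focus_measure_sequence` is an illustrative assumption, and the focus operator is passed in as a callable because the patent's XSML is only defined later in equation (9):

```python
import numpy as np

def focus_measure_sequence(stack, x, y, m, focus_fn):
    """Focus measure of the window of radius m around (x, y) in each image.

    stack:    (n, H, W) array, the depth-of-field image sequence.
    focus_fn: scalar focus measure applied to a local patch (an SML-style
              operator; stands in for the patent's XSML).
    """
    n = stack.shape[0]
    fm = np.empty(n)
    for i in range(n):
        patch = stack[i, y - m:y + m + 1, x - m:x + m + 1]
        fm[i] = focus_fn(patch)
    return fm
```

With a synthetic stack whose third image carries all the texture, the sequence peaks at that index, which is exactly the raw signal steps 3 to 6 then analyze.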
step 3, from the focus measure sequence obtained in step 2, obtain the corresponding gradient change sequence {G_i(x, y)}_{i=2}^{n} according to equation (2):

G_i(x, y) = FM_i(x, y) - FM_{i-1}(x, y),  2 ≤ i ≤ n    (2)

step 4, comparing the focus measure sequence of step 2 with the gradient change sequence of step 3, analyze according to equation (3) whether the distance d between the positions of the maxima of the two sequences is smaller than a distance threshold T:

d = | argmax_{1≤i≤n} FM_i(x, y) - argmax_{2≤i≤n} G_i(x, y) | < T    (3)
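Steps 3 and 4 reduce to a difference sequence and a distance check between two argmax positions. A minimal sketch follows; the helper name and the off-by-one handling of the gradient index are illustrative assumptions:

```python
import numpy as np

def passes_distance_test(fm, T):
    """Steps 3-4 (sketch): gradient sequence G_i = FM_i - FM_{i-1} of
    equation (2), then the distance check of equation (3),
    d = |argmax FM - argmax G| < T.  Returns (d < T, gradient peak)."""
    g = np.diff(fm)                 # g[j] = fm[j+1] - fm[j]
    p_f = int(np.argmax(fm))        # peak position of the focus measures
    p_g = int(np.argmax(g)) + 1     # shift by one so p_g indexes into fm
    return abs(p_f - p_g) < T, p_g
```

When the focus peak and the gradient peak coincide (a clean, noise-free peak), the test passes and p_g seeds the candidate interval of step 5.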
step 5, if inequality (3) of step 4 holds, intercept according to equation (4) the partial focus measure sequence centered at the position p_G of the gradient maximum with fixed half-length s, as the candidate depth sequence interval on which the normality test of equation (5) is performed:

C(x, y) = { FM_j(x, y) : p_G - s ≤ j ≤ p_G + s },  p_G = argmax_{2≤i≤n} G_i(x, y)    (4)

the published image of equation (5) and its accompanying definitions is not recoverable from this extraction; its role is to test whether the values in C(x, y) are normally distributed;

if inequality (3) of step 4 does not hold, enlarge the window to (m + 3) × (m + 3); if the new window size m + 3 is smaller than the maximum window size M, re-execute steps 2 to 4; otherwise output, according to equation (6), the position of the maximum of the focus measure sequence of equation (1) as the depth result of the current position:

D(x, y) = argmax_{1≤i≤n} FM_i(x, y)    (6)
step 6, if the candidate depth sequence interval satisfies a normal distribution, take the position of its maximum as the depth result of the current position according to equation (7):

D(x, y) = argmax_{p_G - s ≤ j ≤ p_G + s} FM_j(x, y)    (7)

if the candidate depth sequence interval does not satisfy a normal distribution, enlarge the window to (m + 3) × (m + 3); if m + 3 ≤ M, repeat the judgment of steps 2 to 5; otherwise output, according to equation (8), the position of the maximum of the focus measure sequence of equation (1) as the depth result of the current position:

D(x, y) = argmax_{1≤i≤n} FM_i(x, y)    (8)
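Steps 5 and 6 can be sketched as follows, with a Jarque-Bera statistic standing in for the patent's normality test, whose exact form is not preserved in this extraction. The function name and the 5.99 critical value (chi-squared with 2 degrees of freedom at roughly the 95% level) are illustrative assumptions:

```python
import numpy as np

def depth_at_pixel(fm, p_g, s, jb_crit=5.99):
    """Steps 5-6 (sketch): candidate interval of half-length s around the
    gradient peak p_g, a Jarque-Bera style normality check standing in
    for equation (5), then the depth decision of equations (6)-(8)."""
    lo, hi = max(p_g - s, 0), min(p_g + s + 1, len(fm))
    c = fm[lo:hi]                          # candidate depth sequence interval
    n = len(c)
    sd = c.std()
    if sd == 0:
        normal = False                     # degenerate interval: fall back
    else:
        z = (c - c.mean()) / sd
        skew = np.mean(z ** 3)
        kurt = np.mean(z ** 4)
        jb = n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
        normal = jb < jb_crit              # accept normality at ~95% level
    if normal:
        return lo + int(np.argmax(c))      # equation (7)
    return int(np.argmax(fm))              # fallback, equations (6)/(8)
```

A well-focused pixel produces a roughly bell-shaped focus measure curve, so the candidate interval passes the check and the peak inside it is returned; a flat or irregular curve falls back to the global maximum.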
step 7, traverse all positions of the object to be reconstructed in turn to obtain the corresponding three-dimensional topography.
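The point-by-point traversal of step 7, reduced to the plain shape-from-focus baseline that the method refines (argmax of the focus measure, without the adaptive window and normality logic), might look like this sketch; the function name `depth_map` and the use of variance as a stand-in focus measure are assumptions for illustration:

```python
import numpy as np

def depth_map(stack, m, focus_fn=np.var):
    """Step 7 (simplified sketch): visit every pixel and take the index of
    the focus-measure maximum as its depth.  The adaptive normal analysis
    of steps 4-6 is omitted; this is the baseline the patent improves on."""
    n, h, w = stack.shape
    depth = np.zeros((h, w), dtype=int)
    for y in range(m, h - m):
        for x in range(m, w - m):
            fm = [focus_fn(stack[i, y - m:y + m + 1, x - m:x + m + 1])
                  for i in range(n)]
            depth[y, x] = int(np.argmax(fm))
    return depth
```

On a toy stack where only one image has texture, every interior pixel is assigned that image's index, which is the depth surface the method then refines per pixel.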
Further, the focus measure function described in step 2 is calculated according to equation (9); the published equation image is not preserved in this extraction, and the form below is the standard sum-of-modified-Laplacian with step length s, consistent with the accompanying definitions:

XSML(I_i(p, q)) = Σ_{(p,q)} [ |2U(p, q) - U(p-s, q) - U(p+s, q)| + |2U(p, q) - U(p, q-s) - U(p, q+s)| ]    (9)

wherein: U(p, q) is a pixel point in the area around the image position (p, q), and s is the step length.
Experimental results show that the method is robust to the interference of unknown noise in real scenes and effectively improves reconstruction accuracy on samples with sparse texture detail. The method therefore greatly reduces erroneous depth values in the reconstructed three-dimensional topography and effectively improves the reconstruction accuracy for objects to be reconstructed in real scenes.
drawings
FIG. 1 is a schematic overall flow chart of a three-dimensional shape reconstruction method for adaptive normal analysis according to the present invention;
FIG. 2 is a general framework diagram of the three-dimensional topography reconstruction method of the adaptive normal analysis of the present invention;
FIG. 3 is a focus measure change sequence diagram of a certain position of an object to be reconstructed;
FIG. 4 is a sequence diagram of gradient changes corresponding to a certain position of an object to be reconstructed;
FIG. 5 is a candidate depth sequence interval diagram of a certain position of an object to be reconstructed;
FIG. 6 is a three-dimensional topography reconstruction gray scale result diagram of an object to be reconstructed;
fig. 7 is a schematic structural diagram of a three-dimensional topography reconstruction result of an object to be reconstructed.
Detailed description of the preferred embodiment
As shown in fig. 1 and fig. 2, the three-dimensional topography reconstruction method for adaptive normal analysis in this embodiment includes the following steps:
step 1, firstly, using an image data acquisition platform, acquire image sequences of the object to be reconstructed at different depths of field from the same angle as input, by adjusting the distance between the camera of the platform and the object to be reconstructed; the step lengths between successive images are equal, and the sequence runs from all regions of the object being defocused, through partial regions coming into focus, until all regions are defocused again, yielding an image sequence of the object at different focus settings;
As shown in FIG. 3, in step 2, in the image sequence obtained in step 1, take the current position I_i(x, y), 1 ≤ i ≤ n, as the center of an m × m local window, where n is the total number of images and (x, y) is the pixel position, and obtain the focus measure sequence {FM_i(x, y)}_{i=1}^{n} of the n local image patches according to equation (1):

FM_i(x, y) = XSML(I_i(p, q))    (1)

wherein 1 ≤ i ≤ n, (p, q) ranges over the m × m window centered at (x, y), and XSML(·) is the focus measure function defined in equation (9);
As shown in FIG. 4, in step 3, from the focus measure sequence obtained in step 2, obtain the corresponding gradient change sequence {G_i(x, y)}_{i=2}^{n} according to equation (2):

G_i(x, y) = FM_i(x, y) - FM_{i-1}(x, y),  2 ≤ i ≤ n    (2)

step 4, comparing the focus measure sequence of step 2 with the gradient change sequence of step 3, analyze according to equation (3) whether the distance d between the positions of the maxima of the two sequences is smaller than a distance threshold T:

d = | argmax_{1≤i≤n} FM_i(x, y) - argmax_{2≤i≤n} G_i(x, y) | < T    (3)

As shown in FIG. 5, in step 5, if inequality (3) of step 4 holds, intercept according to equation (4) the partial focus measure sequence centered at the position p_G of the gradient maximum with fixed half-length s, as the candidate depth sequence interval on which the normality test of equation (5) is performed:

C(x, y) = { FM_j(x, y) : p_G - s ≤ j ≤ p_G + s },  p_G = argmax_{2≤i≤n} G_i(x, y)    (4)

the published image of equation (5) and its accompanying definitions is not recoverable from this extraction; its role is to test whether the values in C(x, y) are normally distributed;
if inequality (3) of step 4 does not hold, enlarge the window to (m + 3) × (m + 3); if the new window size m + 3 is smaller than the maximum window size M, re-execute steps 2 to 4; otherwise output, according to equation (6), the position of the maximum of the focus measure sequence of equation (1) as the depth result of the current position:

D(x, y) = argmax_{1≤i≤n} FM_i(x, y)    (6)
step 6, if the candidate depth sequence interval satisfies a normal distribution, take the position of its maximum as the depth result of the current position according to equation (7):

D(x, y) = argmax_{p_G - s ≤ j ≤ p_G + s} FM_j(x, y)    (7)
if the candidate depth sequence interval does not satisfy a normal distribution, enlarge the window to (m + 3) × (m + 3); if m + 3 ≤ M, repeat the judgment of steps 2 to 5; otherwise output, according to equation (8), the position of the maximum of the focus measure sequence of equation (1) as the depth result of the current position:

D(x, y) = argmax_{1≤i≤n} FM_i(x, y)    (8)
step 7, traverse all positions of the object to be reconstructed in turn to obtain the corresponding three-dimensional topography.
Further, the focus measure function described in step 2 is calculated according to equation (9); the published equation image is not preserved in this extraction, and the form below is the standard sum-of-modified-Laplacian with step length s, consistent with the accompanying definitions:

XSML(I_i(p, q)) = Σ_{(p,q)} [ |2U(p, q) - U(p-s, q) - U(p+s, q)| + |2U(p, q) - U(p, q-s) - U(p, q+s)| ]    (9)

wherein: U(p, q) is a pixel point in the area around the image position (p, q), and s is the step length.
The gray-scale image and the three-dimensional structure of the reconstruction result obtained by the invention for a metal sample are shown in fig. 6 and fig. 7, respectively.
Experimental results show that the method can well overcome the interference of unknown noise in a real scene, and effectively improve the three-dimensional shape reconstruction precision of the sparse texture detail condition sample.

Claims (1)

1. A three-dimensional shape reconstruction method for adaptive normal analysis is characterized by comprising the following steps:
step 1, firstly, using an image data acquisition platform, and acquiring image sequences of the object to be reconstructed with different depths of field at the same angle as input by adjusting the distance between a camera in the image data acquisition platform and the object to be reconstructed, wherein the step lengths between the image sequences are equal, and the total number of the image sequences starts from virtual focus of all regions of the object to be reconstructed to partial region focusing until the virtual focus of all the regions is determined again, so as to obtain the image sequences of the object to be reconstructed with different focuses;
step 2, in the image sequence obtained in step 1, take the current position I_i(x, y), 1 ≤ i ≤ n, as the center of an m × m local window, where n is the total number of images and (x, y) is the pixel position, and obtain the focus measure sequence {FM_i(x, y)}_{i=1}^{n} of the n local image patches according to equation (1):

FM_i(x, y) = XSML(I_i(p, q))    (1)

wherein 1 ≤ i ≤ n, (p, q) ranges over the m × m window centered at (x, y), and XSML(·) is the focus measure function;
step 3, from the focus measure sequence obtained in step 2, obtain the corresponding gradient change sequence {G_i(x, y)}_{i=2}^{n} according to equation (2):

G_i(x, y) = FM_i(x, y) - FM_{i-1}(x, y),  2 ≤ i ≤ n    (2)

step 4, comparing the focus measure sequence of step 2 with the gradient change sequence of step 3, analyze according to equation (3) whether the distance d between the positions of the maxima of the two sequences is smaller than a distance threshold T:

d = | argmax_{1≤i≤n} FM_i(x, y) - argmax_{2≤i≤n} G_i(x, y) | < T    (3)
step 5, if inequality (3) of step 4 holds, intercept according to equation (4) the partial focus measure sequence centered at the position p_G of the gradient maximum with fixed half-length s, as the candidate depth sequence interval on which the normality test of equation (5) is performed:

C(x, y) = { FM_j(x, y) : p_G - s ≤ j ≤ p_G + s },  p_G = argmax_{2≤i≤n} G_i(x, y)    (4)

the published image of equation (5) and its accompanying definitions is not recoverable from this extraction; its role is to test whether the values in C(x, y) are normally distributed;

if inequality (3) of step 4 does not hold, enlarge the window to (m + 3) × (m + 3); if the new window size m + 3 is smaller than the maximum window size M, re-execute steps 2 to 4; otherwise output, according to equation (6), the position of the maximum of the focus measure sequence of equation (1) as the depth result of the current position:

D(x, y) = argmax_{1≤i≤n} FM_i(x, y)    (6)
step 6, if the candidate depth sequence interval satisfies a normal distribution, take the position of its maximum as the depth result of the current position according to equation (7):

D(x, y) = argmax_{p_G - s ≤ j ≤ p_G + s} FM_j(x, y)    (7)

if the candidate depth sequence interval does not satisfy a normal distribution, enlarge the window to (m + 3) × (m + 3); if m + 3 ≤ M, repeat the judgment of steps 2 to 5; otherwise output, according to equation (8), the position of the maximum of the focus measure sequence of equation (1) as the depth result of the current position:

D(x, y) = argmax_{1≤i≤n} FM_i(x, y)    (8)
step 7, traverse all positions of the object to be reconstructed in turn to obtain the corresponding three-dimensional topography;
the focus measure function described in step 2 is calculated according to equation (9); the published equation image is not preserved in this extraction, and the form below is the standard sum-of-modified-Laplacian with step length s, consistent with the accompanying definitions:

XSML(I_i(p, q)) = Σ_{(p,q)} [ |2U(p, q) - U(p-s, q) - U(p+s, q)| + |2U(p, q) - U(p, q-s) - U(p, q+s)| ]    (9)

wherein: U(p, q) is a pixel point in the area around the image position (p, q), and s is the step length.
CN202010082939.2A 2020-02-07 2020-02-07 Three-dimensional shape reconstruction method for adaptive normal analysis Active CN111260776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010082939.2A CN111260776B (en) 2020-02-07 2020-02-07 Three-dimensional shape reconstruction method for adaptive normal analysis


Publications (2)

Publication Number Publication Date
CN111260776A CN111260776A (en) 2020-06-09
CN111260776B true CN111260776B (en) 2023-04-18

Family

ID=70954415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010082939.2A Active CN111260776B (en) 2020-02-07 2020-02-07 Three-dimensional shape reconstruction method for adaptive normal analysis

Country Status (1)

Country Link
CN (1) CN111260776B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489196B (en) * 2020-11-30 2022-08-02 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation
CN113188474B (en) * 2021-05-06 2022-09-23 山西大学 Image sequence acquisition system for imaging of high-light-reflection material complex object and three-dimensional shape reconstruction method thereof
CN113421334B (en) * 2021-07-06 2022-05-20 山西大学 Multi-focus image three-dimensional reconstruction method based on deep learning

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101900536A (en) * 2010-07-28 2010-12-01 西安交通大学 Method for measuring object surface appearance based on digital picture method
CN104574369A (en) * 2014-12-19 2015-04-29 东北大学 Overall diffusion blurring depth obtaining method based on thermal diffusion
CN107909648A (en) * 2017-11-28 2018-04-13 山西大学 A kind of three-dimensional rebuilding method based on the fusion of more depth images
CN108038902A (en) * 2017-12-07 2018-05-15 合肥工业大学 A kind of high-precision three-dimensional method for reconstructing and system towards depth camera
CN108205821A (en) * 2016-12-20 2018-06-26 广东技术师范学院 Workpiece surface three-dimensional reconstruction method based on computer vision
KR20180071765A (en) * 2016-12-20 2018-06-28 (주) 대연아이앤티 Distance measuring system for atypical line using 3-dimension reconstructed image
CN109242959A (en) * 2018-08-29 2019-01-18 清华大学 Method for reconstructing three-dimensional scene and system

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP6257064B2 (en) * 2014-01-03 2018-01-10 インテル・コーポレーション Real-time 3D reconstruction using depth camera
US10610170B2 (en) * 2017-05-12 2020-04-07 Carestream Health, Inc. Patient position monitoring system based on 3D surface acquisition technique


Non-Patent Citations (6)

Title
Intuon Lertrusdachakul; Yohan D. Fougerolle; Olivier Laligant. Depth from dynamic (de)focused projection. 2010 25th International Conference of Image and Vision Computing New Zealand, 2010. *
Yang Jie. Research on a three-dimensional roughness measurement method based on multi-layer image sequences. China Master's Theses Full-text Database, Information Science and Technology, 2010. *
Wang Jinyan; Shi Wenhua; Jing Zhongliang. Three-dimensional image reconstruction based on Depth from Focus. Journal of Nanjing University of Aeronautics and Astronautics, 2007. *
Zhao Yijiao; Xiong Yuxue; Yang Huifang; et al. Quantitative evaluation of the measurement accuracy of two three-dimensional facial scanners. Journal of Practical Stomatology, 2016. *
Qian Yuhua. Granulation mechanisms and data modeling of complex data. China Doctoral Dissertations Full-text Database, Information Science and Technology, 2012. *
Yan Tao; Chen Bin; Liu Fengxian; et al. A microscopic three-dimensional reconstruction method based on a multi-depth-of-field fusion model. Journal of Computer-Aided Design & Computer Graphics, 2017. *

Also Published As

Publication number Publication date
CN111260776A (en) 2020-06-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant