AU2021102705A4 - THz-TDS image defocus processing method based on deep learning - Google Patents
- Publication number
- AU2021102705A4
- Authority
- AU
- Australia
- Prior art keywords
- image
- thz
- tds
- network
- imaging distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/28—Investigating the spectrum
- G01J3/42—Absorption spectrometry; Double beam spectrometry; Flicker spectrometry; Reflection spectrometry
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
- G01N21/31—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
- G01N21/35—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
- G01N21/3581—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using far infrared light; using Terahertz radiation
- G01N21/3586—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using far infrared light; using Terahertz radiation by Terahertz time domain spectroscopy [THz-TDS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
Abstract
This invention provides a THz-TDS image defocus processing method based on deep
learning, which relates to the field of THz time-domain spectral image processing. The
method comprises the following steps: S1. Building a THz-TDS image defocus processing
model that contains Inception modules and structural residuals. S2. Constructing a high-low
definition image pair data set, and dividing the data set into a training data set and a test data
set. S3. Using the training set to train the image defocus processing model, using the test set
to verify the training results, and obtaining and saving the trained network model. The
method can effectively process samples that require perspective imaging but cannot be placed
accurately at the focal plane of the imaging system, solving the problem of image blurring
caused by inaccurate focusing. Because it requires no additional detection and control
devices, the method has the advantages of low energy consumption and low cost, is
conducive to making the equipment miniaturized and portable, and therefore has good
development prospects.
Description
THz-TDS image defocus processing method based on deep learning
The invention relates to the technical field of THz-TDS image defocus processing, in
particular to a THz-TDS image defocus processing method based on deep learning.
THz-TDS stands for terahertz time-domain spectroscopy. THz time-domain spectral
imaging is a technology that forms images from the time-domain spectral information
transmitted or reflected by THz radiation pulses at different positions of a sample.
Because of the penetrability of terahertz waves, most nonpolar dielectric materials (clothes,
plastics, paper, etc.) can be imaged in perspective. Moreover, terahertz photon energy is
low (millielectronvolts), which avoids photoionization of the detected sample and makes
the technique highly safe. Therefore, terahertz time-domain spectral imaging is widely
used in biomedicine, chemical analysis, nondestructive testing, security inspection and other
fields, and has great development potential and application prospects.
In order to improve the spatial resolution of imaging, a terahertz time-domain
spectroscopy system usually uses a lens group to focus the terahertz waves. The focal length
of this lens group is generally fixed, so the test sample must be placed accurately in the focal
plane to avoid imaging blur caused by inaccurate focusing. However, the test sample usually
has a certain thickness and is covered by packaging, so in perspective imaging it is difficult
to ensure that the imaging plane coincides accurately with the focal plane; images collected
by a terahertz time-domain spectroscopy system are therefore often accompanied by some
degree of defocus blur. How to remove the defocus blur of terahertz time-domain spectral
images and improve image quality is one of the urgent problems to be solved in this field.
Existing methods mainly adjust the distance between the test sample and the lens by
introducing additional detection and control devices, so as to avoid defocus of terahertz
images. This not only increases equipment cost; more importantly, such methods can only
determine the focal plane position, which suits thin samples or samples that need surface
imaging. For samples of a certain thickness that require perspective imaging, they cannot
ensure that the imaging region inside the sample lies accurately in the focal plane of the
imaging system.
In view of the shortcomings of the prior art, the invention provides THz-TDS image
defocus processing method based on deep learning, which solves the image defocus blur
problem caused by inaccurate focusing of a terahertz time-domain spectral imaging system.
In order to achieve the above purpose, the present invention is realized by the following
technical scheme: THz-TDS image defocus processing method based on deep learning
includes the following steps:
S1. Building a THz-TDS image defocus processing model that contains Inception modules
and structural residuals. The model includes three sub-neural networks: the imaging distance
prediction network P, the imaging distance correction network C and the image restoration network F.
S2. Collecting 400 BSD, 800 DIV2K and 800 high-definition optical images of different
test samples respectively, generating corresponding low-definition images by using THz-TDS
image degradation model, constructing high-low-definition image pair data sets, and dividing
the data sets into training data sets and test data sets;
S3. Using the training set to train the built image defocus processing model, and using the
test set to verify the training result, then obtaining and saving the trained network model.
S4. Acquiring an image of test sample by using a THz-TDS imaging system, and
preprocessing the image.
S5. Processing the THz-TDS defocus image by using the model trained in S3, and
quantitatively evaluating the processing result.
Preferably, the image defocus processing model in S1 specifically comprises:
11) The imaging distance prediction network P consists of two 3×3 convolution layers, two
Inception modules and one pooling layer. Each Inception module includes four
1×1 convolution layers, one 3×3 convolution layer, one 5×5 convolution layer and one 3×3
pooling layer.
12) The imaging distance correction network C consists of two 3×3 convolution layers, three
1×1 convolution layers, two Inception modules, one pooling layer and two fully connected
layers.
13) The image restoration network F consists of four 3×3 convolution layers, sixteen Residual
modules, one SFT module and one sub-pixel translation layer. Each Residual module
includes two 3×3 convolution layers and two SFT modules; the SFT module contains two 3×3
convolution layers and one Sigmoid activation layer.
Preferably, the process of constructing the data set in S2 includes the following steps:
21) Low-definition images in the data set are generated from high-definition images by the
THz-TDS image degradation model, as shown in the following formula:

I_LR = (I_HR ⊗ PSF(x, y, z)) ↓s + n

Wherein, I_LR represents the low-definition image, PSF(x, y, z) represents the point spread
function, ⊗ represents the convolution operation, ↓s represents the down-sampling operation
with scale factor s, and n represents additive noise.
22) The above-mentioned point spread function can be expressed by the Gaussian beam
model, as shown in the following formula:

PSF(x, y, z) = [2 / (π·ω(z)²)] · exp[−2(x² + y²) / ω(z)²]

Wherein, z represents the imaging distance and ω(z) represents the radius of the terahertz
spot at imaging distance z, which can be obtained from the following formula:

ω(z) = ω₀ · [1 + (λz / (π·ω₀²))²]^(1/2)

Wherein, ω₀ represents the radius of the terahertz spot at the focal plane and λ represents the wavelength.
23) The 2000 pairs of high-low definition images are divided so that 70% form the training
data set and 30% form the test data set.
Preferably, training the image defocus processing model in S3 specifically includes:
31) Firstly, the image restoration network F is trained. The network input is the low-definition
image I_LR and the imaging distance z, and the network output is the corresponding
high-definition image I_HR. The loss function during training is defined as:

L_MSE = (1/N) · Σ_{j=1..N} ‖I_HR^(j) − Î_HR^(j)‖²

Wherein, N is the number of image pairs in the training set, I_HR represents the real
high-definition image, Î_HR represents the high-definition image predicted by the network,
and MSE denotes the mean square error.
32) The parameters of the trained image restoration network F are then fixed, and the imaging
distance prediction network P and the imaging distance correction network C are trained
alternately. Firstly, the low-definition image I_LR is input into the imaging distance prediction
network P, which outputs the preliminarily predicted imaging distance z₀. Then, z₀ and I_LR
are input into the image restoration network F to obtain a preliminary restored image Î_HR.
Finally, Î_HR and z₀ are input into the imaging distance correction network C to obtain the
corrected imaging distance z₁. This procedure is iterated, for a total of 7 iterations. During
training, the parameters of the imaging distance prediction network P can be obtained from
the following formula:

θ_P = argmin over θ_P of ‖z − P(I_LR; θ_P)‖²

Wherein, θ_P represents the parameters of the imaging distance prediction network P, z
is the real imaging distance, and I_LR represents the low-definition image.
Preferably, in S4 the THz-TDS imaging system acquires the test sample image and
pre-processes it as follows:
41) Scanning and imaging the test sample by using a THz-TDS imaging system to
obtain a THz time domain image of the test sample.
42) Transform the sample image into the frequency domain by Fourier transform.
43) Low-pass and high-pass filters with cut-off frequencies of 1.5 THz and 0.6 THz,
respectively, are used to process the image.
The THz-TDS image defocus processing method based on deep learning provided by the
invention has the following beneficial effects:
1. Samples of a certain thickness that require perspective imaging can be processed
effectively, so image blurring caused by inaccurate focusing no longer occurs.
2. The method requires no additional detection and control devices; it not only has the
advantages of low energy consumption and low cost, but is also conducive to making the
equipment miniaturized and portable, and therefore has good development prospects.
Fig. 1 is a processing flow chart of the present invention
Fig. 2 is a structural diagram of an imaging distance prediction network according to the
present invention
Fig. 3 is a structural diagram of an imaging distance correction network according to the
present invention
Fig. 4 is a structural diagram of an image restoration network according to the present
invention.
The technical scheme in the embodiments of the present invention will be described
clearly and completely with reference to the drawings. Obviously, the described embodiments
are only some, not all, of the embodiments of the present invention. All other embodiments
obtained by those of ordinary skill in the art based on these embodiments without creative
effort fall within the scope of protection of the present invention.
Embodiment
As shown in Fig. 1-4, an embodiment of the present invention provides THz-TDS image
defocus processing method based on deep learning, which comprises the following steps:
S1. Building a THz-TDS image defocus processing model that contains Inception modules
and structural residuals. The model includes three sub-neural networks: the imaging distance
prediction network P, the imaging distance correction network C and the image restoration network F.
S2. Collecting 400 BSD, 800 DIV2K and 800 high-definition optical images of different
test samples respectively, generating corresponding low-definition images by using THz-TDS
image degradation model, constructing high-low-definition image pair data sets, and dividing
the data sets into training data sets and test data sets;
S3. Using the training set to train the built image defocus processing model, and using the
test set to verify the training result, then obtaining and saving the trained network model.
S4. Acquiring an image of test sample by using a THz-TDS imaging system, and
preprocessing the image.
S5. Processing the THz-TDS defocus image by using the model trained in S3, and
quantitatively evaluating the processing result.
The image defocus processing model described in S1 specifically comprises:
11) The imaging distance prediction network P consists of two 3×3 convolution layers, two
Inception modules and one pooling layer. Each Inception module includes four
1×1 convolution layers, one 3×3 convolution layer, one 5×5 convolution layer and one 3×3
pooling layer.
12) The imaging distance correction network C consists of two 3×3 convolution layers, three
1×1 convolution layers, two Inception modules, one pooling layer and two fully connected
layers.
13) The image restoration network F consists of four 3×3 convolution layers, sixteen Residual
modules, one SFT module and one sub-pixel translation layer. Each Residual module
includes two 3×3 convolution layers and two SFT modules; the SFT module contains two 3×3
convolution layers and one Sigmoid activation layer.
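As an illustration of the two building blocks above, the sketch below shows the generic branch-and-concatenate pattern of an Inception module and the affine modulation performed by an SFT (spatial feature transform) layer. The branch functions, tensor sizes and the γ/β values are placeholder assumptions for illustration, not the patent's trained layers:

```python
import numpy as np

def inception_forward(x, branches):
    """Apply parallel branches to the same input and concatenate on channels."""
    return np.concatenate([b(x) for b in branches], axis=0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sft_modulate(features, gamma, beta):
    """Spatial feature transform: out = gamma * features + beta."""
    return gamma * features + beta

# Inception: four stand-in branches for the 1x1 / 3x3 / 5x5 convolutions
# and the 3x3 pooling branch (placeholders, not real convolutions).
x = np.ones((3, 8, 8))
branches = [
    lambda x: x,                              # stands in for the 1x1 conv branch
    lambda x: 2.0 * x,                        # stands in for the 3x3 conv branch
    lambda x: 3.0 * x,                        # stands in for the 5x5 conv branch
    lambda x: np.ones_like(x) * x.mean(),     # stands in for the pooling branch
]
y = inception_forward(x, branches)
print(y.shape)   # (12, 8, 8)

# SFT: in the network, gamma and beta would come from two 3x3 convolutions
# and a Sigmoid over the condition (e.g. the imaging distance).
gamma = sigmoid(np.zeros_like(y))   # = 0.5 everywhere
beta = np.full_like(y, 0.25)
out = sft_modulate(y, gamma, beta)
print(out[0, 0, 0])   # 0.5 * 1.0 + 0.25 = 0.75
```

Concatenating the four branch outputs along the channel axis is what lets the module mix receptive-field sizes without choosing one kernel size in advance.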
The described process of constructing the data set in S2 includes the following steps:
21) Low-definition images in the data set are generated from high-definition images by the
THz-TDS image degradation model, as shown in the following formula:

I_LR = (I_HR ⊗ PSF(x, y, z)) ↓s + n

Wherein, I_LR represents the low-definition image, PSF(x, y, z) represents the point spread
function, ⊗ represents the convolution operation, ↓s represents the down-sampling operation
with scale factor s, and n represents additive noise.
22) The above-mentioned point spread function can be expressed by the Gaussian beam
model, as shown in the following formula:

PSF(x, y, z) = [2 / (π·ω(z)²)] · exp[−2(x² + y²) / ω(z)²]

Wherein, z represents the imaging distance and ω(z) represents the radius of the terahertz
spot at imaging distance z, which can be obtained from the following formula:

ω(z) = ω₀ · [1 + (λz / (π·ω₀²))²]^(1/2)

Wherein, ω₀ represents the radius of the terahertz spot at the focal plane and λ represents the wavelength.
23) The 2000 pairs of high-low definition images are divided so that 70% form the training
data set and 30% form the test data set.
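The degradation pipeline of steps 21)-22) can be sketched in NumPy. The beam waist, wavelength, pixel pitch and kernel size below are illustrative assumptions of this sketch, not values stated in the patent:

```python
import numpy as np

def gaussian_beam_radius(z, w0=0.5e-3, wavelength=1e-3):
    """Beam radius w(z) of a Gaussian beam at distance z from the focal plane."""
    return w0 * np.sqrt(1.0 + (wavelength * z / (np.pi * w0**2))**2)

def psf_kernel(z, pixel_pitch=0.25e-3, size=15, w0=0.5e-3, wavelength=1e-3):
    """2-D Gaussian-beam PSF sampled on a size x size grid, normalized to sum 1."""
    w = gaussian_beam_radius(z, w0, wavelength)
    r = (np.arange(size) - size // 2) * pixel_pitch
    x, y = np.meshgrid(r, r)
    k = (2.0 / (np.pi * w**2)) * np.exp(-2.0 * (x**2 + y**2) / w**2)
    return k / k.sum()

def degrade(hr, z, scale=2, noise_std=0.01, rng=None):
    """I_LR = (I_HR convolved with PSF(x,y,z)), down-sampled by `scale`, plus noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    k = psf_kernel(z)
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode="reflect")
    blurred = np.zeros_like(hr)
    for i in range(hr.shape[0]):          # direct 2-D convolution (clarity over speed)
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    lr = blurred[::scale, ::scale]        # down-sampling with scale factor s
    return lr + rng.normal(0.0, noise_std, lr.shape)

hr = np.random.default_rng(1).random((32, 32))
lr = degrade(hr, z=5e-3)
print(lr.shape)   # (16, 16)
```

Normalizing the kernel to unit sum keeps the mean intensity of the blurred image equal to that of the high-definition input.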
Training the image defocus processing model in S3 specifically includes:
31) Firstly, the image restoration network F is trained. The network input is the low-definition
image I_LR and the imaging distance z, and the network output is the corresponding
high-definition image I_HR. The loss function during training is defined as:

L_MSE = (1/N) · Σ_{j=1..N} ‖I_HR^(j) − Î_HR^(j)‖²

Wherein, N is the number of image pairs in the training set, I_HR represents the real
high-definition image, Î_HR represents the high-definition image predicted by the network,
and MSE denotes the mean square error.
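The loss above is an ordinary mean-squared error averaged over the N image pairs; a minimal NumPy version:

```python
import numpy as np

def mse_loss(hr_true, hr_pred):
    """L_MSE = (1/N) * sum_j ||I_HR^(j) - I_HR_hat^(j)||^2 over N image pairs."""
    hr_true = np.asarray(hr_true, dtype=float)
    hr_pred = np.asarray(hr_pred, dtype=float)
    n = hr_true.shape[0]                        # N image pairs
    diff = (hr_true - hr_pred).reshape(n, -1)   # flatten each pair's residual
    return float(np.sum(diff**2) / n)

print(mse_loss([[1.0, 2.0]], [[1.0, 4.0]]))   # one pair, squared error 4.0
```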
32) The parameters of the trained image restoration network F are then fixed, and the imaging
distance prediction network P and the imaging distance correction network C are trained
alternately. Firstly, the low-definition image I_LR is input into the imaging distance prediction
network P, which outputs the preliminarily predicted imaging distance z₀. Then, z₀ and I_LR
are input into the image restoration network F to obtain a preliminary restored image Î_HR.
Finally, Î_HR and z₀ are input into the imaging distance correction network C to obtain the
corrected imaging distance z₁. This procedure is iterated, for a total of 7 iterations. During
training, the parameters of the imaging distance prediction network P can be obtained from
the following formula:

θ_P = argmin over θ_P of ‖z − P(I_LR; θ_P)‖²

Wherein, θ_P represents the parameters of the imaging distance prediction network P, z
is the real imaging distance, and I_LR represents the low-definition image.
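The predict-restore-correct loop described above can be sketched as follows. The three networks are replaced here by stand-in callables (the toy closures are illustrative assumptions, not the patent's trained P, F and C):

```python
import numpy as np

def refine_distance(i_lr, predict, restore, correct, n_iter=7):
    """Alternating inference: P gives z0, F restores, C corrects the distance.

    `predict`, `restore`, `correct` stand in for the trained networks P, F, C.
    """
    z = predict(i_lr)                     # preliminary imaging distance z0 from P
    for _ in range(n_iter):               # 7 correction iterations in total
        i_hr = restore(i_lr, z)           # preliminary restored image from F
        z = correct(i_hr, z)              # corrected imaging distance from C
    return z

# Toy stand-ins: the "true" distance is 5.0 and each call to C halves
# the remaining error, so z converges toward 5.0 over the 7 iterations.
true_z = 5.0
predict = lambda img: 0.0
restore = lambda img, z: img
correct = lambda img, z: z + 0.5 * (true_z - z)

z_final = refine_distance(np.zeros((4, 4)), predict, restore, correct)
print(round(z_final, 3))   # 4.961
```

The structure makes the design choice visible: the restoration network never estimates the distance itself; it only consumes the current estimate, so P and C can be retrained without touching F.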
In S4, the THz-TDS imaging system acquires the test sample image and pre-processes it as
follows:
41) Scanning and imaging the test sample by using a THz-TDS imaging system to obtain
a THz time domain image of the test sample.
42) Transform the sample image into the frequency domain by Fourier transform.
43) Low-pass and high-pass filters with cut-off frequencies of 1.5 THz and 0.6 THz,
respectively, are used to process the image.
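Steps 41)-43) amount to a frequency-domain band-pass on the per-pixel time traces. Assuming ideal (brick-wall) filters between 0.6 and 1.5 THz — the patent only states the cut-off frequencies, not the filter shape — a sketch:

```python
import numpy as np

def bandpass_trace(signal, dt, f_low=0.6e12, f_high=1.5e12):
    """Band-pass a THz time-domain trace via FFT: keep 0.6-1.5 THz, zero the rest."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    mask = (freqs >= f_low) & (freqs <= f_high)   # brick-wall pass band
    return np.fft.irfft(spectrum * mask, n=len(signal))

dt = 0.05e-12                               # 50 fs sampling step (assumed)
t = np.arange(200) * dt
# Test trace: 1 THz component (in band) plus 0.1 THz drift (out of band).
trace = np.sin(2 * np.pi * 1.0e12 * t) + np.sin(2 * np.pi * 0.1e12 * t)
filtered = bandpass_trace(trace, dt)        # ~ the 1 THz component alone
```

In practice one would apply this per pixel before forming the frequency-domain image; smoother filter windows (e.g. Tukey) would reduce ringing at the band edges.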
Although embodiments of the present invention have been shown and described, it will
be understood by those of ordinary skill in the art that various changes, modifications,
substitutions and variations can be made to these embodiments without departing from
the principles and spirit of the present invention, and that the scope of the present invention
is defined by the appended claims and their equivalents.
Claims (1)
- THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. THz-TDS image defocus processing method based on deep learning, characterized in comprising the following steps:
S1. Building a THz-TDS image defocus processing model that contains Inception modules and structural residuals. The model includes three sub-neural networks: the imaging distance prediction network P, the imaging distance correction network C and the image restoration network F.
S2. Collecting 400 BSD, 800 DIV2K and 800 high-definition optical images of different test samples respectively, generating corresponding low-definition images by using the THz-TDS image degradation model, constructing high-low-definition image pair data sets, and dividing the data sets into training data sets and test data sets.
S3. Using the training set to train the built image defocus processing model, and using the test set to verify the training result, then obtaining and saving the trained network model.
S4. Acquiring an image of the test sample by using a THz-TDS imaging system, and preprocessing the image.
S5. Processing the THz-TDS defocus image by using the model trained in S3, and quantitatively evaluating the processing result.
2. THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that the image defocus processing model in S1 specifically comprises:
11) The imaging distance prediction network P consists of two 3×3 convolution layers, two Inception modules and one pooling layer. Each Inception module includes four 1×1 convolution layers, one 3×3 convolution layer, one 5×5 convolution layer and one 3×3 pooling layer.
12) The imaging distance correction network C consists of two 3×3 convolution layers, three 1×1 convolution layers, two Inception modules, one pooling layer and two fully connected layers.
13) The image restoration network F consists of four 3×3 convolution layers, sixteen Residual modules, one SFT module and one sub-pixel translation layer. Each Residual module includes two 3×3 convolution layers and two SFT modules; the SFT module contains two 3×3 convolution layers and one Sigmoid activation layer.
3. THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that the process of constructing the data set in S2 includes the following steps:
21) Low-definition images in the data set are generated from high-definition images by the THz-TDS image degradation model, as shown in the following formula:
I_LR = (I_HR ⊗ PSF(x, y, z)) ↓s + n
Wherein, I_LR represents the low-definition image, PSF(x, y, z) represents the point spread function, ⊗ represents the convolution operation, ↓s represents the down-sampling operation with scale factor s, and n represents additive noise.
22) The above-mentioned point spread function can be expressed by the Gaussian beam model, as shown in the following formula:
PSF(x, y, z) = [2 / (π·ω(z)²)] · exp[−2(x² + y²) / ω(z)²]
Wherein, z represents the imaging distance and ω(z) represents the radius of the terahertz spot at imaging distance z, which can be obtained from the following formula:
ω(z) = ω₀ · [1 + (λz / (π·ω₀²))²]^(1/2)
Wherein, ω₀ represents the radius of the terahertz spot at the focal plane and λ represents the wavelength.
23) The 2000 pairs of high-low definition images are divided so that 70% form the training data set and 30% form the test data set.
4. THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that training the image defocus processing model in S3 specifically includes:
31) Firstly, the image restoration network F is trained. The network input is the low-definition image I_LR and the imaging distance z, and the network output is the corresponding high-definition image I_HR. The loss function during training is defined as:
L_MSE = (1/N) · Σ_{j=1..N} ‖I_HR^(j) − Î_HR^(j)‖²
Wherein, N is the number of image pairs in the training set, I_HR represents the real high-definition image, Î_HR represents the high-definition image predicted by the network, and MSE denotes the mean square error.
32) The parameters of the trained image restoration network F are then fixed, and the imaging distance prediction network P and the imaging distance correction network C are trained alternately. Firstly, the low-definition image I_LR is input into the imaging distance prediction network P, which outputs the preliminarily predicted imaging distance z₀. Then, z₀ and I_LR are input into the image restoration network F to obtain a preliminary restored image Î_HR. Finally, Î_HR and z₀ are input into the imaging distance correction network C to obtain the corrected imaging distance z₁. This procedure is iterated, for a total of 7 iterations. During training, the parameters of the imaging distance prediction network P can be obtained from the following formula:
θ_P = argmin over θ_P of ‖z − P(I_LR; θ_P)‖²
Wherein, θ_P represents the parameters of the imaging distance prediction network P, z is the real imaging distance, and I_LR represents the low-definition image.
5. THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that in S4 the THz-TDS imaging system acquires the test sample image and pre-processes it as follows:
41) Scanning and imaging the test sample by using a THz-TDS imaging system to obtain a THz time-domain image of the test sample.
42) The sample image is transformed into the frequency domain by Fourier transform.
43) Low-pass and high-pass filters with cut-off frequencies of 1.5 THz and 0.6 THz, respectively, are used to process the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2021102705A AU2021102705A4 (en) | 2021-05-20 | 2021-05-20 | THz-TDS image defocus processing method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2021102705A AU2021102705A4 (en) | 2021-05-20 | 2021-05-20 | THz-TDS image defocus processing method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2021102705A4 true AU2021102705A4 (en) | 2021-07-08 |
Family
ID=76662662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021102705A Ceased AU2021102705A4 (en) | 2021-05-20 | 2021-05-20 | THz-TDS image defocus processing method based on deep learning |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2021102705A4 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114034651A (en) * | 2021-11-10 | 2022-02-11 | 北京环境特性研究所 | Method and device for generating global earth surface spectrum basic data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |