AU2021102705A4 - THz-TDS image defocus processing method based on deep learning - Google Patents

THz-TDS image defocus processing method based on deep learning

Info

Publication number
AU2021102705A4
Authority
AU
Australia
Prior art keywords
image
thz
tds
network
imaging distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021102705A
Inventor
Yaohua Hu
Yao LU
Cixing Lv
Zirong ZHOU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan University of Technology
Original Assignee
Dongguan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan University of Technology filed Critical Dongguan University of Technology
Priority to AU2021102705A priority Critical patent/AU2021102705A4/en
Application granted granted Critical
Publication of AU2021102705A4 publication Critical patent/AU2021102705A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/42Absorption spectrometry; Double beam spectrometry; Flicker spectrometry; Reflection spectrometry
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G01N21/3581Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using far infrared light; using Terahertz radiation
    • G01N21/3586Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using far infrared light; using Terahertz radiation by Terahertz time domain spectroscopy [THz-TDS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details

Abstract

This invention provides a THz-TDS image defocus processing method based on deep learning, which relates to the field of THz time-domain spectral image processing. The method comprises the following steps: S1. Building a THz-TDS image defocus processing model that contains Inception modules and a residual structure. S2. Constructing a high-low definition image pair data set and dividing it into a training data set and a test data set. S3. Training the image defocus processing model with the training set, verifying the training results with the test set, and obtaining and saving the trained network model. The method can effectively process samples that require fluoroscopic imaging but are not placed accurately at the focal plane of the imaging system, and it solves the problem of image blurring caused by inaccurate focusing. The method requires no additional detection and control devices; it not only has the advantages of low energy consumption and low cost, but is also conducive to making the equipment miniaturized and portable, and therefore has good development prospects.

Description

THz-TDS image defocus processing method based on deep learning
TECHNICAL FIELD
The invention relates to the technical field of THz-TDS image defocus processing, and in
particular to a THz-TDS image defocus processing method based on deep learning.
BACKGROUND
THz-TDS stands for terahertz time-domain spectroscopy. THz time-domain spectroscopic imaging uses the time-domain spectral information transmitted or reflected by THz radiation pulses at different positions of a sample to form an image. Because of the penetrability of terahertz waves, most nonpolar dielectric materials (clothes, plastics, paper, etc.) can be imaged in perspective. Moreover, terahertz photon energy is low (on the order of millielectronvolts), so the photoionization hazard to the detected sample is avoided and safety is high. Therefore, terahertz time-domain spectral imaging technology is widely used in biomedicine, chemical analysis, nondestructive testing, security inspection and other fields, and has great development potential and application prospects.
In order to improve the spatial resolution of imaging, a terahertz time-domain spectroscopy system usually uses a lens group to focus the terahertz waves. The focal length of the lens group in a terahertz time-domain spectral imaging system is generally fixed, so the test sample must be placed accurately in the focal plane to avoid imaging blur caused by inaccurate focusing. However, the test sample usually has a certain thickness and is covered by packaging, so it is difficult to ensure that the imaging plane coincides accurately with the focal plane in perspective imaging; as a result, the images collected by a terahertz time-domain spectroscopy system are often accompanied by a certain degree of defocus blur. How to remove the defocus blur of terahertz time-domain spectral images and improve image quality is one of the urgent problems to be solved in this field.
Existing methods mainly adjust the distance between the test sample and the lens by introducing additional detection and control devices so as to avoid defocus of the terahertz images. This not only increases the equipment cost but, more importantly, can only determine the position of the focal plane, which makes it suitable only for thin samples or samples that require surface imaging. For samples with a certain thickness that require perspective imaging, it cannot ensure that the imaging area inside the sample is accurately located in the focal plane of the imaging system.
SUMMARY
In view of the shortcomings of the prior art, the invention provides a THz-TDS image defocus processing method based on deep learning, which solves the image defocus blur problem caused by inaccurate focusing of a terahertz time-domain spectral imaging system.
In order to achieve the above purpose, the present invention is realized by the following technical scheme: a THz-TDS image defocus processing method based on deep learning includes the following steps:
S1. Building a THz-TDS image defocus processing model that contains Inception modules and a residual structure. The model includes three sub-neural networks: an imaging distance prediction network P, an imaging distance correction network C and an image restoration network F.
S2. Collecting 400 BSD images, 800 DIV2K images and 800 high-definition optical images of different test samples, generating corresponding low-definition images with the THz-TDS image degradation model, constructing a high-low definition image pair data set, and dividing the data set into a training data set and a test data set.
S3. Training the image defocus processing model with the training set, verifying the training result with the test set, and obtaining and saving the trained network model.
S4. Acquiring an image of the test sample with a THz-TDS imaging system and preprocessing the image.
S5. Processing the THz-TDS defocus image with the model trained in S3 and quantitatively evaluating the processing result.
Preferably, the image defocus processing model in S1 specifically comprises:
11) The imaging distance prediction network P consists of two 3x3 convolution layers, two Inception modules and one pooling layer. The Inception module includes four 1x1 convolution layers, one 3x3 convolution layer, one 5x5 convolution layer and one 3x3 pooling layer.
12) The imaging distance correction network C consists of two 3x3 convolution layers, three 1x1 convolution layers, two Inception modules, one pooling layer and two fully connected layers.
13) The image restoration network F consists of four 3x3 convolution layers, sixteen Residual modules, one SFT module and one sub-pixel translation layer. The Residual module includes two 3x3 convolution layers and two SFT modules; the SFT module contains two 3x3 convolution layers and one Sigmoid activation layer.
Preferably, the process of constructing the data set in S2 includes the following steps:
21) The low-definition images in the data set are generated from the high-definition images by the THz-TDS image degradation model, as shown in the following formula:
$I_{LR} = \left( I_{HR} \otimes PSF(x, y, z) \right)\downarrow_s + n$
Wherein, $I_{LR}$ represents the low-definition image, $I_{HR}$ the high-definition image, $PSF(x, y, z)$ the point spread function, $\otimes$ the convolution operation, $\downarrow_s$ the downsampling operation with scale factor $s$, and $n$ additive noise.
22) The above-mentioned point spread function can be expressed by the Gaussian beam model, as shown in the following formula:
$PSF(x, y, z) = \frac{2}{\pi \omega^2(z)} \exp\left( -\frac{2\left( x^2 + y^2 \right)}{\omega^2(z)} \right)$
Wherein, $z$ represents the imaging distance and $\omega(z)$ represents the radius of the terahertz spot at imaging distance $z$, which can be obtained from the following formula:
$\omega(z) = \omega_0 \sqrt{1 + \left( \frac{\lambda z}{\pi \omega_0^2} \right)^2}$
Wherein, $\omega_0$ represents the radius of the terahertz spot at the focal plane and $\lambda$ the wavelength.
23) Of the 2000 high-low definition image pairs, 70% are divided into the training data set and 30% into the test data set.
Preferably, training the image defocus processing model with the training set in S3 specifically includes:
31) Firstly, the image restoration network F is trained. The network input is the low-definition image $I_{LR}$ and the imaging distance $z$, the network output is the corresponding high-definition image $I_{HR}$, and the loss function during training is defined as:
$l_{MSE} = \frac{1}{N} \sum_{j=1}^{N} \left\| I_{HR}^{(j)} - \hat{I}_{HR}^{(j)} \right\|^2$
Wherein, $N$ is the number of image pairs included in the training set, $I_{HR}$ represents the real high-definition image, $\hat{I}_{HR}$ represents the high-definition image predicted by the network, and MSE denotes the mean square error.
32) After training, the parameters of the image restoration network F are fixed, and the imaging distance prediction network P and the imaging distance correction network C are trained alternately. Firstly, the low-definition image $I_{LR}$ is input into the imaging distance prediction network P, which outputs the preliminarily predicted imaging distance $z_0$. Then, $z_0$ and $I_{LR}$ are input together into the image restoration network F to obtain a preliminary restored image $\hat{I}_{HR}$. Finally, $\hat{I}_{HR}$ and $z_0$ are input into the imaging distance correction network C to obtain the corrected imaging distance $z_1$, and this correction is iterated; a total of 7 iterations are performed. During training, the imaging distance prediction network P is obtained from the following formula:
$\theta_P = \arg\min_{\theta_P} \left\| z - P(I_{LR}; \theta_P) \right\|^2$
Wherein, $\theta_P$ represents the parameters of the imaging distance prediction network P, $z$ is the real imaging distance, and $I_{LR}$ represents the low-definition image.
Preferably, in S4 of the THz-TDS image defocus processing method based on deep learning, the THz-TDS imaging system collects the test sample image and pre-processes it as follows:
41) Scanning and imaging the test sample with the THz-TDS imaging system to obtain a THz time-domain image of the test sample.
42) Transforming the sample image into the frequency domain by Fourier transform.
43) Processing the image with a low-pass filter and a high-pass filter with cut-off frequencies of 1.5 THz and 0.6 THz, respectively.
The THz-TDS image defocus processing method based on deep learning provided by the invention has the following beneficial effects:
1. With this method, a sample of a certain thickness that requires fluoroscopic imaging can be processed effectively, so the problem of image blurring caused by inaccurate focusing does not occur.
2. The method requires no additional detection and control devices; it not only has the advantages of low energy consumption and low cost, but is also conducive to making the equipment miniaturized and portable, and therefore has good development prospects.
BRIEF DESCRIPTION OF THE FIGURES
Fig. 1 is a processing flow chart of the present invention.
Fig. 2 is a structural diagram of an imaging distance prediction network according to the present invention.
Fig. 3 is a structural diagram of an imaging distance correction network according to the present invention.
Fig. 4 is a structural diagram of an image restoration network according to the present invention.
DESCRIPTION OF THE INVENTION
The technical scheme in the embodiments of the present invention will be described clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment
As shown in Figs. 1-4, an embodiment of the present invention provides a THz-TDS image defocus processing method based on deep learning, which comprises the following steps:
S1. Building a THz-TDS image defocus processing model that contains Inception modules and a residual structure. The model includes three sub-neural networks: an imaging distance prediction network P, an imaging distance correction network C and an image restoration network F.
S2. Collecting 400 BSD images, 800 DIV2K images and 800 high-definition optical images of different test samples, generating corresponding low-definition images with the THz-TDS image degradation model, constructing a high-low definition image pair data set, and dividing the data set into a training data set and a test data set.
S3. Training the image defocus processing model with the training set, verifying the training result with the test set, and obtaining and saving the trained network model.
S4. Acquiring an image of the test sample with a THz-TDS imaging system and preprocessing the image.
S5. Processing the THz-TDS defocus image with the model trained in S3 and quantitatively evaluating the processing result.
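The metrics used for the quantitative evaluation in S5 are not specified above; the following is a minimal sketch assuming PSNR and SSIM, two common choices for comparing a restored image against its high-definition reference. The function name and the normalisation to [0, 1] are illustrative assumptions.

```python
# Minimal sketch of the quantitative evaluation in S5, assuming PSNR and SSIM
# as the metrics (the text only states that the result is evaluated quantitatively).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(restored: np.ndarray, reference: np.ndarray) -> dict:
    """Compare a restored THz-TDS image with its high-definition reference.

    Both inputs are 2-D float arrays normalised to [0, 1].
    """
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, data_range=1.0)
    return {"PSNR_dB": psnr, "SSIM": ssim}
```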
The image defocus processing model described in S1 specifically comprises:
11) The imaging distance prediction network P consists of two 3x3 convolution layers, two Inception modules and one pooling layer. The Inception module includes four 1x1 convolution layers, one 3x3 convolution layer, one 5x5 convolution layer and one 3x3 pooling layer.
12) The imaging distance correction network C consists of two 3x3 convolution layers, three 1x1 convolution layers, two Inception modules, one pooling layer and two fully connected layers.
13) The image restoration network F consists of four 3x3 convolution layers, sixteen Residual modules, one SFT module and one sub-pixel translation layer. The Residual module includes two 3x3 convolution layers and two SFT modules; the SFT module contains two 3x3 convolution layers and one Sigmoid activation layer.
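As an illustration of the layer counts in 11), the sketch below assembles an Inception module and the imaging distance prediction network P in PyTorch. The channel widths, activation functions and the final scalar regression head are assumptions made for the sketch; only the numbers and sizes of the layers follow the description above.

```python
# Sketch of the Inception module and the imaging distance prediction network P.
# Channel widths and the scalar output head are assumptions, not fixed above.
import torch
import torch.nn as nn

class Inception(nn.Module):
    """Four 1x1 convs, one 3x3 conv, one 5x5 conv and one 3x3 pooling layer,
    arranged as the four parallel branches of a classic Inception block."""
    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)                            # 1x1
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),             # 1x1
                                nn.Conv2d(branch_ch, branch_ch, 3, padding=1))  # 3x3
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),             # 1x1
                                nn.Conv2d(branch_ch, branch_ch, 5, padding=2))  # 5x5
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),       # 3x3 pool
                                nn.Conv2d(in_ch, branch_ch, 1))             # 1x1
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1))

class DistancePredictor(nn.Module):
    """Network P: two 3x3 convolutions, two Inception modules, one pooling layer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),   # 3x3 conv, single-channel THz image assumed
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),  # 3x3 conv
            Inception(32),            # 4 branches x 16 channels = 64 output channels
            Inception(64),
            nn.AdaptiveAvgPool2d(1),  # the single pooling layer
        )
        self.head = nn.Linear(64, 1)  # assumed scalar head producing the distance z0

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```

The imaging distance correction network C of 12) could be assembled in the same way, with the three additional 1x1 convolutions and the two fully connected layers appended after the Inception stages.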
The process of constructing the data set in S2 includes the following steps:
21) The low-definition images in the data set are generated from the high-definition images by the THz-TDS image degradation model, as shown in the following formula:
$I_{LR} = \left( I_{HR} \otimes PSF(x, y, z) \right)\downarrow_s + n$
Wherein, $I_{LR}$ represents the low-definition image, $I_{HR}$ the high-definition image, $PSF(x, y, z)$ the point spread function, $\otimes$ the convolution operation, $\downarrow_s$ the downsampling operation with scale factor $s$, and $n$ additive noise.
22) The above-mentioned point spread function can be expressed by the Gaussian beam model, as shown in the following formula:
$PSF(x, y, z) = \frac{2}{\pi \omega^2(z)} \exp\left( -\frac{2\left( x^2 + y^2 \right)}{\omega^2(z)} \right)$
Wherein, $z$ represents the imaging distance and $\omega(z)$ represents the radius of the terahertz spot at imaging distance $z$, which can be obtained from the following formula:
$\omega(z) = \omega_0 \sqrt{1 + \left( \frac{\lambda z}{\pi \omega_0^2} \right)^2}$
Wherein, $\omega_0$ represents the radius of the terahertz spot at the focal plane and $\lambda$ the wavelength.
23) Of the 2000 high-low definition image pairs, 70% are divided into the training data set and 30% into the test data set.
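For concreteness, the following sketch implements the degradation model of 21)-22): a high-definition image is blurred with the distance-dependent Gaussian PSF, downsampled by the scale factor s, and corrupted with additive noise. The waist radius, wavelength, kernel size and noise level used here are illustrative values, not values given above.

```python
# Sketch of the THz-TDS degradation model: I_LR = (I_HR * PSF(x, y, z)) downsampled + noise.
# Waist radius, wavelength, kernel size and noise level are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(z, w0=1.0, wavelength=0.3, size=31, pixel=0.25):
    """PSF(x, y, z) of a Gaussian beam at imaging distance z (lengths in mm; 0.3 mm ~ 1 THz)."""
    wz = w0 * np.sqrt(1.0 + (wavelength * z / (np.pi * w0**2)) ** 2)   # spot radius omega(z)
    ax = (np.arange(size) - size // 2) * pixel
    xx, yy = np.meshgrid(ax, ax)
    psf = (2.0 / (np.pi * wz**2)) * np.exp(-2.0 * (xx**2 + yy**2) / wz**2)
    return psf / psf.sum()  # normalise so the blur preserves total intensity

def degrade(i_hr, z, s=2, noise_sigma=0.01, seed=0):
    """Generate a low-definition image from a high-definition one at imaging distance z."""
    rng = np.random.default_rng(seed)
    blurred = fftconvolve(i_hr, gaussian_psf(z), mode="same")   # convolution with the PSF
    i_lr = blurred[::s, ::s]                                    # downsampling with scale factor s
    return i_lr + rng.normal(0.0, noise_sigma, i_lr.shape)      # additive noise n
```

Applying such a routine to the BSD, DIV2K and optical images at different imaging distances z yields the high-low definition pairs described in S2.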
In S3, training the image defocus processing model with the training set specifically includes:
31) Firstly, the image restoration network F is trained. The network input is the low-definition image $I_{LR}$ and the imaging distance $z$, the network output is the corresponding high-definition image $I_{HR}$, and the loss function during training is defined as:
$l_{MSE} = \frac{1}{N} \sum_{j=1}^{N} \left\| I_{HR}^{(j)} - \hat{I}_{HR}^{(j)} \right\|^2$
Wherein, $N$ is the number of image pairs included in the training set, $I_{HR}$ represents the real high-definition image, $\hat{I}_{HR}$ represents the high-definition image predicted by the network, and MSE denotes the mean square error.
32) After training, the parameters of the image restoration network F are fixed, and the imaging distance prediction network P and the imaging distance correction network C are trained alternately. Firstly, the low-definition image $I_{LR}$ is input into the imaging distance prediction network P, which outputs the preliminarily predicted imaging distance $z_0$. Then, $z_0$ and $I_{LR}$ are input together into the image restoration network F to obtain a preliminary restored image $\hat{I}_{HR}$. Finally, $\hat{I}_{HR}$ and $z_0$ are input into the imaging distance correction network C to obtain the corrected imaging distance $z_1$, and this correction is iterated; a total of 7 iterations are performed. During training, the imaging distance prediction network P is obtained from the following formula:
$\theta_P = \arg\min_{\theta_P} \left\| z - P(I_{LR}; \theta_P) \right\|^2$
Wherein, $\theta_P$ represents the parameters of the imaging distance prediction network P, $z$ is the real imaging distance, and $I_{LR}$ represents the low-definition image.
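A minimal training sketch for 31) and 32) is given below, assuming PyTorch modules for F, P and C and a data loader that yields (I_LR, I_HR, z) triples. The optimiser, learning rate and epoch count are placeholders; only the MSE loss, the freezing of F, and the 7 correction iterations follow the text above.

```python
# Sketch of the two-stage training in 31)-32). The restoration network F is
# trained first with the MSE loss; it is then frozen while the prediction
# network P and correction network C are trained on the imaging distance.
# Network classes, the loader and the hyper-parameters are placeholders.
import torch
import torch.nn.functional as F_  # aliased so it does not clash with network F

def train_restoration(restorer, loader, epochs=100, lr=1e-4):
    """Stage 1: F learns to map (I_LR, z) -> I_HR under the loss l_MSE."""
    opt = torch.optim.Adam(restorer.parameters(), lr=lr)
    for _ in range(epochs):
        for i_lr, i_hr, z in loader:          # low/high-definition pair + true distance
            loss = F_.mse_loss(restorer(i_lr, z), i_hr)
            opt.zero_grad()
            loss.backward()
            opt.step()

def train_distance_nets(predictor, corrector, restorer, loader, iters=7, lr=1e-4):
    """Stage 2: with F frozen, alternately refine the imaging distance with P and C."""
    restorer.requires_grad_(False)            # fix the trained restoration network
    params = list(predictor.parameters()) + list(corrector.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for i_lr, _, z_true in loader:
        z_true = z_true.view(-1, 1).float()
        z = predictor(i_lr)                   # preliminary distance z0
        loss = F_.mse_loss(z, z_true)         # || z - P(I_LR; theta_P) ||^2
        for _ in range(iters):                # the 7 correction iterations
            restored = restorer(i_lr, z)      # preliminary restored image
            z = corrector(restored, z)        # corrected distances z1, z2, ...
            loss = loss + F_.mse_loss(z, z_true)
        opt.zero_grad()
        loss.backward()
        opt.step()
```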
In S4 of the THz-TDS image defocus processing method based on deep learning, the THz-TDS imaging system collects the test sample image and pre-processes it as follows:
41) Scanning and imaging the test sample with the THz-TDS imaging system to obtain a THz time-domain image of the test sample.
42) Transforming the sample image into the frequency domain by Fourier transform.
43) Processing the image with a low-pass filter and a high-pass filter with cut-off frequencies of 1.5 THz and 0.6 THz, respectively.
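As an illustration of steps 41)-43), the sketch below converts a THz time-domain data cube to the frequency domain along the time axis and keeps only the 0.6-1.5 THz band when forming the amplitude image. The cube layout, the sampling step and the use of a band-limited amplitude sum are assumptions, since the text does not detail how the filtered spectrum is mapped back to an image.

```python
# Sketch of the preprocessing in 41)-43): Fourier transform of each pixel's THz
# time-domain waveform, followed by a 0.6-1.5 THz band selection.
# Cube shape (time samples x rows x cols) and sampling step dt are assumptions.
import numpy as np

def preprocess(cube, dt):
    """cube: THz time-domain data, shape (n_t, H, W); dt: time step in seconds."""
    spectrum = np.fft.rfft(cube, axis=0)              # 42) Fourier transform along the time axis
    freqs = np.fft.rfftfreq(cube.shape[0], d=dt)      # frequency axis in Hz
    band = (freqs >= 0.6e12) & (freqs <= 1.5e12)      # 43) high-pass 0.6 THz, low-pass 1.5 THz
    # Assumed mapping back to an image: sum the spectral amplitude over the pass band per pixel.
    return np.abs(spectrum[band]).sum(axis=0)
```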
Although embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the present invention, and that the scope of the present invention is defined by the appended claims and their equivalents.

Claims (1)

  1. THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
    A THz-TDS image defocus processing method based on deep learning, characterized by comprising the following steps:
    S1. Building a THz-TDS image defocus processing model that contains Inception modules and a residual structure. The model includes three sub-neural networks: an imaging distance prediction network P, an imaging distance correction network C and an image restoration network F.
    S2. Collecting 400 BSD images, 800 DIV2K images and 800 high-definition optical images of different test samples, generating corresponding low-definition images with the THz-TDS image degradation model, constructing a high-low definition image pair data set, and dividing the data set into a training data set and a test data set.
    S3. Training the image defocus processing model with the training set, verifying the training result with the test set, and obtaining and saving the trained network model.
    S4. Acquiring an image of the test sample with a THz-TDS imaging system and preprocessing the image.
    S5. Processing the THz-TDS defocus image with the model trained in S3 and quantitatively evaluating the processing result.
    2. The THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that the image defocus processing model in S1 specifically comprises:
    11) The imaging distance prediction network P consists of two 3x3 convolution layers, two Inception modules and one pooling layer. The Inception module includes four 1x1 convolution layers, one 3x3 convolution layer, one 5x5 convolution layer and one 3x3 pooling layer.
    12) The imaging distance correction network C consists of two 3x3 convolution layers, three 1x1 convolution layers, two Inception modules, one pooling layer and two fully connected layers.
    13) The image restoration network F consists of four 3x3 convolution layers, sixteen Residual modules, one SFT module and one sub-pixel translation layer. The Residual module includes two 3x3 convolution layers and two SFT modules; the SFT module contains two 3x3 convolution layers and one Sigmoid activation layer.
    3. The THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that the process of constructing the data set in S2 includes the following steps:
    21) The low-definition images in the data set are generated from the high-definition images by the THz-TDS image degradation model, as shown in the following formula:
    $I_{LR} = \left( I_{HR} \otimes PSF(x, y, z) \right)\downarrow_s + n$
    Wherein, $I_{LR}$ represents the low-definition image, $I_{HR}$ the high-definition image, $PSF(x, y, z)$ the point spread function, $\otimes$ the convolution operation, $\downarrow_s$ the downsampling operation with scale factor $s$, and $n$ additive noise.
    22) The above-mentioned point spread function can be expressed by the Gaussian beam model, as shown in the following formula:
    $PSF(x, y, z) = \frac{2}{\pi \omega^2(z)} \exp\left( -\frac{2\left( x^2 + y^2 \right)}{\omega^2(z)} \right)$
    Wherein, $z$ represents the imaging distance and $\omega(z)$ represents the radius of the terahertz spot at imaging distance $z$, which can be obtained from the following formula:
    $\omega(z) = \omega_0 \sqrt{1 + \left( \frac{\lambda z}{\pi \omega_0^2} \right)^2}$
    Wherein, $\omega_0$ represents the radius of the terahertz spot at the focal plane and $\lambda$ the wavelength.
    23) Of the 2000 high-low definition image pairs, 70% are divided into the training data set and 30% into the test data set.
    4. The THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that training the image defocus processing model with the training set in S3 specifically includes:
    31) Firstly, the image restoration network F is trained. The network input is the low-definition image $I_{LR}$ and the imaging distance $z$, the network output is the corresponding high-definition image $I_{HR}$, and the loss function during training is defined as:
    $l_{MSE} = \frac{1}{N} \sum_{j=1}^{N} \left\| I_{HR}^{(j)} - \hat{I}_{HR}^{(j)} \right\|^2$
    Wherein, $N$ is the number of image pairs included in the training set, $I_{HR}$ represents the real high-definition image, $\hat{I}_{HR}$ represents the high-definition image predicted by the network, and MSE denotes the mean square error.
    32) After training, the parameters of the image restoration network F are fixed, and the imaging distance prediction network P and the imaging distance correction network C are trained alternately. Firstly, the low-definition image $I_{LR}$ is input into the imaging distance prediction network P, which outputs the preliminarily predicted imaging distance $z_0$. Then, $z_0$ and $I_{LR}$ are input together into the image restoration network F to obtain a preliminary restored image $\hat{I}_{HR}$. Finally, $\hat{I}_{HR}$ and $z_0$ are input into the imaging distance correction network C to obtain the corrected imaging distance $z_1$, and this correction is iterated; a total of 7 iterations are performed. During training, the imaging distance prediction network P is obtained from the following formula:
    $\theta_P = \arg\min_{\theta_P} \left\| z - P(I_{LR}; \theta_P) \right\|^2$
    Wherein, $\theta_P$ represents the parameters of the imaging distance prediction network P, $z$ is the real imaging distance, and $I_{LR}$ represents the low-definition image.
    5. The THz-TDS image defocus processing method based on deep learning according to claim 1, characterized in that in S4 the THz-TDS imaging system collects the test sample image and pre-processes it as follows:
    41) Scanning and imaging the test sample with the THz-TDS imaging system to obtain a THz time-domain image of the test sample.
    42) Transforming the sample image into the frequency domain by Fourier transform.
    43) Processing the image with a low-pass filter and a high-pass filter with cut-off frequencies of 1.5 THz and 0.6 THz, respectively.
AU2021102705A 2021-05-20 2021-05-20 THz-TDS image defocus processing method based on deep learning Ceased AU2021102705A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021102705A AU2021102705A4 (en) 2021-05-20 2021-05-20 THz-TDS image defocus processing method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021102705A AU2021102705A4 (en) 2021-05-20 2021-05-20 THz-TDS image defocus processing method based on deep learning

Publications (1)

Publication Number Publication Date
AU2021102705A4 true AU2021102705A4 (en) 2021-07-08

Family

ID=76662662

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021102705A Ceased AU2021102705A4 (en) 2021-05-20 2021-05-20 THz-TDS image defocus processing method based on deep learning

Country Status (1)

Country Link
AU (1) AU2021102705A4 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114034651A (en) * 2021-11-10 2022-02-11 北京环境特性研究所 Method and device for generating global earth surface spectrum basic data

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry