CN110674835B - Terahertz imaging method and system and nondestructive testing method and system - Google Patents


Info

Publication number
CN110674835B
CN110674835B (application CN201910220319.8A)
Authority
CN
China
Prior art keywords
image
terahertz
imaging
images
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910220319.8A
Other languages
Chinese (zh)
Other versions
CN110674835A (en)
Inventor
蒋英兰
郑佳春
李铁军
刘建军
邵桂芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jimei University
Original Assignee
Jimei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jimei University filed Critical Jimei University
Priority to CN201910220319.8A priority Critical patent/CN110674835B/en
Publication of CN110674835A publication Critical patent/CN110674835A/en
Application granted granted Critical
Publication of CN110674835B publication Critical patent/CN110674835B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20048: Transform domain processing
    • G06T2207/20064: Wavelet transform [DWT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention discloses a terahertz imaging method and system and a nondestructive testing method and system. In the imaging method and system, a terahertz image library comprising terahertz time-domain images and frequency-domain images is built. A similarity measurement method is used to select, as source images, the several images with the highest similarity to a standard image, and the selected source images are then fused with an image fusion method based on wavelet transformation. The fused image has moderate brightness, essentially no loss of image information, enhanced definition and contrast, and a good overall imaging effect. In addition, during similarity measurement, the multi-dimensional vector calculation over the large number of SIFT feature points of two images is reduced to a measurement between two K-dimensional vectors, which lowers the computation amount and complexity and improves the real-time efficiency and implementation cost of the imaging method. Nondestructive testing performed on this basis can quickly and accurately identify tiny features of the detected object, with high detection speed and high detection precision.

Description

Terahertz imaging method and system and nondestructive testing method and system
Technical Field
The invention relates to the field of image processing, and in particular to a terahertz imaging method and system and a nondestructive testing method and system.
Background
Terahertz spectral images have low resolution, which makes it very difficult to identify tiny features of a sample target. At present, for terahertz time-domain spectroscopy imaging systems (THz-TDS), spatial resolution is usually improved by technical means such as increasing the precision of the optical hardware to obtain a smaller terahertz spot, or reducing the scanning step of the two-dimensional scanning platform to obtain more scanned pixel points. However, reducing the scanning step sharply increases the amount of spectral data and makes the scanning time excessively long, seriously degrading the real-time performance of the system; this is especially pronounced when detecting large components. In addition, because the diffraction limit is closely tied to the terahertz wavelength, shrinking the spot and the scanning step indefinitely is pointless, so the achievable improvement in imaging effect is limited.
Disclosure of Invention
The invention aims to provide a terahertz imaging method and system and a nondestructive testing method and system, which have the advantages of high imaging speed and good imaging effect; the nondestructive testing speed is high, and the testing precision is high.
In order to achieve the purpose, the invention provides the following scheme:
a terahertz imaging method, the method comprising:
acquiring a plurality of time domain spectral information file matrixes of a detection object;
performing terahertz time-domain information imaging and terahertz frequency-domain information imaging according to the time-domain spectral information file matrix to obtain a terahertz time-domain image and a terahertz frequency-domain image;
constructing an image library of the detection object according to the terahertz time domain image and the terahertz frequency domain image;
calculating the information entropy of the images with the frequency smaller than the terahertz spectrum cut-off frequency in the image library;
screening out an image with the largest information entropy as a standard image;
extracting SIFT feature points of each image in the image library;
clustering each SIFT feature point by adopting a K-means clustering method, and clustering each SIFT feature point into K clustering clusters;
counting the number of SIFT feature points of each image in each cluster, and generating a K-dimensional feature vector corresponding to each image;
respectively calculating similarity values of the K-dimensional feature vectors of the standard images and the K-dimensional feature vectors of the images in the image library;
selecting N images with the highest similarity value as super-resolution recognition source images, wherein N represents the number of preset source images;
and carrying out image fusion on the N super-resolution recognition source images by adopting an image fusion method based on wavelet transformation to obtain a fused terahertz image.
Optionally, the specific method for obtaining the terahertz time-domain image includes:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and determining the imaging parameters of each pixel point according to the type of the time domain information imaging method;
constructing a time domain imaging parameter matrix according to each imaging parameter;
carrying out quantitative mapping on each imaging parameter in the time domain imaging parameter matrix, proportionally scaling each imaging parameter into the range [0, 255] to obtain a time domain imaging gray matrix;
and generating a terahertz time-domain image according to the time-domain imaging gray matrix.
Optionally, the specific method for obtaining the terahertz frequency-domain image includes:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and transforming the time domain spectrum of each pixel point into a frequency domain spectrum by adopting fast Fourier transform;
calculating the energy parameter of each pixel point by using Parseval's theorem according to the spectral density of the frequency domain spectrum;
constructing a frequency domain imaging parameter matrix according to the energy parameters of all the pixel points;
performing quantitative mapping on each energy parameter in the frequency domain imaging parameter matrix, proportionally scaling each energy parameter into the range [0, 255] to obtain a frequency domain imaging gray matrix;
and generating a terahertz frequency domain image according to the frequency domain imaging gray matrix.
Optionally, the fast Fourier transform is a Blackman-windowed fast Fourier transform.
Optionally, after the image with the largest information entropy is screened out as the standard image, the method further includes:
judging whether the number of the screened standard images is greater than or equal to 2;
if so, calculating the average gradient of each screened standard image;
and taking the image with the maximum average gradient as a final standard image.
Optionally, the K-dimensional feature vector corresponding to the image is an ordered set of the number of SIFT feature points of the image in each cluster.
Optionally, the calculating the similarity value between the K-dimensional feature vector of the standard image and the K-dimensional feature vector of each image in the image library respectively specifically includes:
and respectively calculating the similarity value of the K-dimensional feature vector of the standard image and the K-dimensional feature vector of each image in the image library by adopting a cosine similarity calculation method.
A terahertz imaging system, the system comprising:
the acquisition module is used for acquiring a plurality of time domain spectral information file matrixes of the detection object;
the imaging module is used for carrying out terahertz time domain information imaging and terahertz frequency domain information imaging according to the time domain spectrum information file matrix to obtain a terahertz time domain image and a terahertz frequency domain image;
the image library construction module is used for constructing an image library of the detection object according to the terahertz time domain image and the terahertz frequency domain image;
the information entropy calculation module is used for calculating the information entropy of the images with the frequency smaller than the terahertz spectrum cut-off frequency in the image library;
the screening module is used for screening the image with the largest information entropy as a standard image;
the characteristic point extraction module is used for extracting SIFT characteristic points of each image in the image library;
the clustering module is used for clustering all SIFT feature points by adopting a K-means clustering method and clustering all the SIFT feature points into K clustering clusters;
the feature vector generation module is used for counting the number of SIFT feature points of each image in each cluster and generating a K-dimensional feature vector corresponding to each image;
the similarity calculation module is used for calculating similarity values of the K-dimensional feature vectors of the standard images and the K-dimensional feature vectors of the images in the image library respectively;
the source image screening module is used for selecting N images with the highest similarity value as super-resolution recognition source images, and N represents the number of preset source images;
and the image fusion module is used for carrying out image fusion on the N super-resolution recognition source images by adopting an image fusion method based on wavelet transformation to obtain a fused terahertz image.
A non-destructive inspection method, comprising:
acquiring a terahertz image of a detection object, wherein the terahertz image is a fused terahertz image generated according to the method;
and carrying out nondestructive detection on the detection object according to the terahertz image.
A non-destructive inspection system, comprising:
a fused image acquisition module for acquiring a terahertz image of the detection object, wherein the terahertz image is a fused terahertz image generated according to the method;
and the nondestructive testing module is used for carrying out nondestructive testing on the detection object according to the terahertz image.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
according to the terahertz imaging method and system, terahertz time-domain information imaging and terahertz frequency-domain information imaging are performed based on the time-domain spectral information file matrix of the detection object. And forming a terahertz image library of the detection sample through time-frequency domain multi-information image reconstruction, wherein the terahertz image library comprises a terahertz time-domain image and a terahertz frequency-domain image. Because the terahertz image has large influence on high-frequency-band noise, the image with the largest information entropy is selected as the standard image in the frequency band below the terahertz spectrum cut-off frequency, so that the selected standard image has a large information content, and the imaging effect of the image is ensured. On the basis, a plurality of images with the highest similarity value with the standard image are selected as super-resolution recognition source images, and the quality of the source images for super-resolution recognition is ensured. And finally, performing image fusion on the selected super-resolution recognition source image by adopting an image fusion method based on wavelet transformation, wherein the fused image has moderate brightness, no loss of image information basically, certain enhancement of definition and contrast and good imaging effect.
When similarity measurement is carried out, the SIFT feature points of all images are first extracted; the SIFT feature points are then clustered with the K-means algorithm, and a K-dimensional feature vector is generated for each image. Finally, similarity measurement is performed on the K-dimensional feature vectors, so that the multi-dimensional vector calculation over the thousands of SIFT feature points of two images is reduced to a measurement between two K-dimensional vectors, which effectively reduces the computation amount and complexity, improves the real-time efficiency of the imaging method, and lowers its implementation cost.
On the basis, the nondestructive testing method and the nondestructive testing system provided by the invention can quickly and accurately identify the tiny characteristics of the tested object, and have the advantages of high testing speed and high testing precision.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
Fig. 1 is a flowchart of a terahertz imaging method according to an embodiment of the present invention;
fig. 2 is a block diagram of a terahertz imaging system according to an embodiment of the present invention;
FIG. 3 is a flow chart of a non-destructive testing method according to an embodiment of the present invention;
FIG. 4 is a block diagram of a nondestructive testing system according to an embodiment of the present invention;
FIG. 5 is a flow chart of time domain information imaging;
FIG. 6 is a terahertz time-domain information image;
FIG. 7 is a terahertz frequency domain information image;
FIG. 8 is a flow chart of frequency domain information imaging;
FIG. 9 is a flow of image SIFT feature extraction and matching;
FIG. 10 is a comparison of conventional SIFT model similarity matching and improved SIFT-Kmeans model similarity matching;
FIG. 11 is a flow chart of a wavelet transform-based image fusion method;
FIG. 12 is a transmission frequency domain spectrum of four samples;
FIG. 13 is a graph showing the results of quantitative analysis of alumina in different imaging modes;
FIG. 14 is a graph showing the results of quantitative analysis of aluminum nitride in different imaging modes;
FIG. 15 is a graph showing the results of quantitative analysis of beryllium oxide under different imaging modalities;
FIG. 16 is a graph showing the results of quantitative analysis of zirconia in different imaging modes;
FIG. 17 is a source image of different frequency energy imaging of an alumina sample;
FIG. 18 is a source image of different frequency energy imaging of an aluminum nitride sample;
FIG. 19 is a source image of beryllium oxide sample imaging at different frequency energies;
FIG. 20 is a source image of different frequency energy imaging of a zirconia sample;
fig. 21 is a diagram of wavelet fusion imaging results of two samples.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a terahertz imaging method and system and a nondestructive testing method and system, which have the advantages of high imaging speed and good imaging effect; the nondestructive testing speed is high, and the testing precision is high.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1:
fig. 1 is a flowchart of a terahertz imaging method according to an embodiment of the present invention. As shown in fig. 1, a terahertz imaging method includes:
step 101: and acquiring a plurality of time domain spectral information file matrixes of the detection object.
Step 102: and performing terahertz time domain information imaging and terahertz frequency domain information imaging according to the time domain spectrum information file matrix to obtain a terahertz time domain image and a terahertz frequency domain image.
Step 103: and constructing an image library of the detection object according to the terahertz time domain image and the terahertz frequency domain image.
Step 104: and calculating the information entropy of the images with the frequency less than the terahertz spectrum cut-off frequency in the image library.
Step 105: and screening the image with the maximum information entropy as a standard image.
Step 106: and extracting SIFT feature points of each image in the image library.
Step 107: and clustering each SIFT feature point by adopting a K-means clustering method, and clustering each SIFT feature point into K clustering clusters.
Step 108: and counting the number of SIFT feature points of each image in each cluster to generate a K-dimensional feature vector corresponding to each image. In this embodiment, the K-dimensional feature vector corresponding to an image is the ordered set of the numbers of its SIFT feature points in each cluster.
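Step 108 can be sketched in a few lines: given the cluster label that K-means assigned to each SIFT feature point of one image, the K-dimensional feature vector is simply the per-cluster count of that image's points. The function name and sample labels below are illustrative, not from the patent:

```python
from collections import Counter

def kdim_feature_vector(cluster_labels, k):
    """Build the K-dimensional feature vector of one image:
    entry j = number of the image's SIFT feature points assigned to cluster j."""
    counts = Counter(cluster_labels)
    return [counts.get(j, 0) for j in range(k)]

# Hypothetical cluster assignments (output of K-means over all images' SIFT points)
labels_img_a = [0, 2, 2, 1, 0, 2]            # 6 feature points, K = 3 clusters
print(kdim_feature_vector(labels_img_a, 3))  # [2, 1, 3]
```

An image with no feature points in some cluster simply gets a zero in that entry, so vectors of different images remain directly comparable.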
Step 109: and respectively calculating the similarity value of the K-dimensional feature vector of the standard image and the K-dimensional feature vector of each image in the image library.
Step 110: and selecting N images with the highest similarity value as super-resolution recognition source images, wherein N represents the number of preset source images. In this embodiment, a cosine similarity calculation method is adopted to calculate similarity values between the K-dimensional feature vector of the standard image and the K-dimensional feature vectors of the images in the image library.
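The cosine similarity measure used in this embodiment can be sketched as follows (a minimal pure-Python version; the function name is ours):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two K-dimensional feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Identical vectors score 1.0 and vectors with no shared clusters score 0.0, so ranking the library by this value directly yields the N source images of step 110.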
Step 111: and carrying out image fusion on the N super-resolution recognition source images by adopting an image fusion method based on wavelet transformation to obtain a fused terahertz image.
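Step 111 relies on wavelet-based fusion, but the patent does not fix the wavelet basis or fusion rule at this point. The sketch below therefore assumes a single-level Haar transform with one common rule: average the approximation (LL) band and keep the larger-magnitude coefficient in each detail band. All function names are illustrative:

```python
def haar2d(x):
    """Single-level 2-D Haar transform of an even-sized grayscale matrix.
    Returns the (LL, LH, HL, HH) sub-bands."""
    h, w = len(x) // 2, len(x[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = x[2 * i][2 * j], x[2 * i][2 * j + 1]
            c, d = x[2 * i + 1][2 * j], x[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 2
            LH[i][j] = (a - b + c - d) / 2
            HL[i][j] = (a + b - c - d) / 2
            HH[i][j] = (a - b - c + d) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d."""
    h, w = len(LL), len(LL[0])
    x = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            x[2 * i][2 * j]         = (ll + lh + hl + hh) / 2
            x[2 * i][2 * j + 1]     = (ll - lh + hl - hh) / 2
            x[2 * i + 1][2 * j]     = (ll + lh - hl - hh) / 2
            x[2 * i + 1][2 * j + 1] = (ll - lh - hl + hh) / 2
    return x

def fuse(img1, img2):
    """Fuse two registered source images: average the approximation band,
    keep the larger-magnitude coefficient in each detail band."""
    b1, b2 = haar2d(img1), haar2d(img2)
    pick = lambda p, q: p if abs(p) >= abs(q) else q
    LL = [[(p + q) / 2 for p, q in zip(r1, r2)] for r1, r2 in zip(b1[0], b2[0])]
    details = [[[pick(p, q) for p, q in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
               for s1, s2 in zip(b1[1:], b2[1:])]
    return ihaar2d(LL, *details)
```

Fusing an image with itself reconstructs the original exactly, which gives a quick sanity check of the transform pair; fusing N source images can be done by folding `fuse` pairwise.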
Specifically, the specific method for obtaining the terahertz time-domain image in step 102 includes:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and determining the imaging parameters of each pixel point according to the type of the time domain information imaging method; the time domain information imaging method comprises amplitude information imaging and phase information imaging; the imaging parameter of the amplitude information imaging can be selected from a maximum value imaging parameter, a minimum value imaging parameter, a peak-to-peak value imaging parameter or a mean value imaging parameter; the phase information imaging parameter can be selected as a maximum peak delay time imaging parameter or a minimum peak delay time imaging parameter;
constructing a time domain imaging parameter matrix according to each imaging parameter;
carrying out quantitative mapping on each imaging parameter in the time domain imaging parameter matrix, proportionally scaling each imaging parameter into the range [0, 255] to obtain a time domain imaging gray matrix;
and generating a terahertz time-domain image according to the time-domain imaging gray matrix.
The specific method for obtaining the terahertz frequency-domain image in step 102 includes:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and transforming the time domain spectrum of each pixel point into a frequency domain spectrum by fast Fourier transform; in this embodiment, the fast Fourier transform is a Blackman-windowed fast Fourier transform.
Calculating the energy parameter of each pixel point by using Parseval's theorem according to the spectral density of the frequency domain spectrum;
constructing a frequency domain imaging parameter matrix according to the energy parameters of all the pixel points;
performing quantitative mapping on each energy parameter in the frequency domain imaging parameter matrix, proportionally scaling each energy parameter into the range [0, 255] to obtain a frequency domain imaging gray matrix;
and generating a terahertz frequency domain image according to the frequency domain imaging gray matrix.
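The windowed transform and Parseval-based energy parameter of the steps above can be sketched as follows. A plain O(N²) DFT stands in for the FFT, and the classic Blackman window coefficients are assumed; function names are ours:

```python
import cmath
import math

def blackman(n, N):
    """Classic Blackman window coefficient for sample n of an N-point signal."""
    return (0.42 - 0.5 * math.cos(2 * math.pi * n / (N - 1))
                 + 0.08 * math.cos(4 * math.pi * n / (N - 1)))

def windowed_spectrum(x):
    """Blackman-windowed DFT of one pixel's time-domain waveform
    (a plain O(N^2) DFT stands in for the FFT here)."""
    N = len(x)
    xw = [x[n] * blackman(n, N) for n in range(N)]
    return [sum(xw[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def energy_parameter(x):
    """Per-pixel energy via Parseval's theorem:
    sum_n |xw[n]|^2 == (1/N) * sum_k |X[k]|^2 for the windowed signal."""
    X = windowed_spectrum(x)
    return sum(abs(Xk) ** 2 for Xk in X) / len(x)
```

By Parseval's theorem the energy computed from the spectrum equals the time-domain energy of the windowed waveform, which makes the per-pixel computation easy to verify.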
As a preferred scheme, after step 105 (screening out the image with the largest information entropy as the standard image), the method further includes:
judging whether the number of the screened standard images is greater than or equal to 2;
if so, calculating the average gradient of each screened standard image;
and taking the image with the maximum average gradient as a final standard image.
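The entropy-based screening and average-gradient tie-break above can be sketched as follows for 8-bit grayscale images stored as lists of rows. This is a minimal illustration; the patent does not spell out its gradient definition, so a common average-gradient formula is assumed, and the function names are ours:

```python
import math

def information_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist = [0] * 256
    for row in gray:
        for g in row:
            hist[g] += 1
    total = sum(hist)
    return -sum((h / total) * math.log2(h / total) for h in hist if h)

def average_gradient(gray):
    """Mean local gradient magnitude, a common sharpness measure."""
    H, W = len(gray), len(gray[0])
    acc = 0.0
    for i in range(H - 1):
        for j in range(W - 1):
            dx = gray[i][j + 1] - gray[i][j]
            dy = gray[i + 1][j] - gray[i][j]
            acc += math.sqrt((dx * dx + dy * dy) / 2)
    return acc / ((H - 1) * (W - 1))

def pick_standard_image(images):
    """Keep the image(s) of maximum entropy; break ties by largest average gradient."""
    ents = [information_entropy(im) for im in images]
    best = max(ents)
    cands = [im for im, e in zip(images, ents) if e == best]
    return max(cands, key=average_gradient) if len(cands) > 1 else cands[0]
```

A flat image has entropy 0, while an image split evenly between two gray levels has entropy 1 bit, so higher-information images win the screening as intended.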
Example 2:
fig. 2 is a structural block diagram of a terahertz imaging system provided in an embodiment of the present invention. As shown in fig. 2, a terahertz imaging system includes:
the acquiring module 201 is configured to acquire a plurality of time domain spectral information file matrices of the detection object.
And the imaging module 202 is configured to perform terahertz time-domain information imaging and terahertz frequency-domain information imaging according to the time-domain spectral information file matrix to obtain a terahertz time-domain image and a terahertz frequency-domain image.
An image library construction module 203, configured to construct an image library of the detection object according to the terahertz time-domain image and the terahertz frequency-domain image.
And the information entropy calculation module 204 is used for calculating the information entropy of the images with the frequency smaller than the terahertz spectrum cut-off frequency in the image library.
And the screening module 205 is configured to screen out an image with the largest information entropy as a standard image.
And the feature point extraction module 206 is configured to extract SIFT feature points of each image in the image library.
The clustering module 207 is configured to cluster each SIFT feature point by using a K-means clustering method, and cluster each SIFT feature point into K cluster clusters.
The feature vector generation module 208 is configured to count the number of SIFT feature points of each image in each cluster, and generate a K-dimensional feature vector corresponding to each image.
And the similarity calculation module 209 is configured to calculate similarity values between the K-dimensional feature vector of the standard image and the K-dimensional feature vectors of the images in the image library, respectively.
And the source image screening module 210 is configured to select N images with the highest similarity value as super-resolution recognition source images, where N represents a preset number of source images.
And the image fusion module 211 is configured to perform image fusion on the N super-resolution recognition source images by using an image fusion method based on wavelet transformation, so as to obtain a fused terahertz image.
Example 3:
fig. 3 is a flowchart of a nondestructive testing method according to an embodiment of the present invention. As shown in fig. 3, a nondestructive testing method includes:
step 301: acquiring a terahertz image of a detection object, wherein the terahertz image is a fused terahertz image generated by adopting the method in embodiment 1;
step 302: and carrying out nondestructive detection on the detection object according to the terahertz image.
Fig. 4 is a block diagram of a nondestructive testing system according to an embodiment of the present invention. As shown in fig. 4, a non-destructive inspection system, comprising:
a fused image acquisition module 401, configured to acquire a terahertz image of a detection object, where the terahertz image is a fused terahertz image generated according to the method described in embodiment 1;
and the nondestructive testing module 402 is used for performing nondestructive testing on the detection object according to the terahertz image.
In the following, ceramic matrix composites (CMC) are taken as the research and verification object, and several common CMC samples with preset defects are subjected to nondestructive testing. The specific implementation process is as follows:
(1) The customized samples are imaged and scanned with a T-Ray 5000 time-domain spectral imaging scanning system from the American company API (Advanced Photonix, Inc.). The system differs from a single-point spectral measurement system mainly in the addition of a two-dimensional stepping scanning platform: scanning control software steps the sample along the X-axis and Y-axis directions, the time-domain spectrum recorded at each step serves as one spectral pixel point of the sample, and a row-column matrix of terahertz spectra of the scanned pixel points is finally formed. A corresponding imaging algorithm is then used to obtain the spectral image of the sample indirectly.
In the scanning process, the measured sample is placed on the two-dimensional moving platform, with the surface carrying the preset defects facing the terahertz lens of the receiving end and located at the terahertz pulse focus. The scanning step of the platform is set to 0.25 mm, and the transverse line-scanning speed to 100 mm/s.
The four CMC samples are an alumina (Al2O3) ceramic wafer, an aluminum nitride (AlN) ceramic wafer, a beryllium oxide (BeO) ceramic wafer, and a zirconia (ZrO2) ceramic wafer. Defects are preset in all four samples; the specific specification parameters, defect descriptions, and other information are shown in Table 1.
TABLE 1 ceramic matrix composite sample information
(2) Terahertz time-domain information imaging and terahertz frequency-domain information imaging
Unlike single-intensity imaging, terahertz spectral imaging directly acquires, for each scanned pixel point of the measured object, a time-domain waveform signal over the whole terahertz band, which contains a large amount of time-domain information such as the electric field strength and phase of the measured sample. A Fourier transform of the terahertz time-domain spectrum then yields the frequency-domain spectrum of each spatial scanning point.
Terahertz spectral imaging is mainly divided into a time domain information imaging mode and a frequency domain information imaging mode. Through the selection of different imaging physical parameters of a time domain and a frequency domain, a two-dimensional image reflecting that the measured sample has different spectral resolution characteristics can be obtained.
(2.1) Time-domain information imaging
The time-domain information imaging mode uses physical parameters of the terahertz time-domain waveform; the selected physical quantities generally include amplitude and phase. Amplitude imaging uses information such as the field-intensity peak value and mean value of the time-domain waveform, and reflects the strength of the measured sample's absorption of terahertz waves. Fig. 5 is a flow chart of time-domain information imaging. Fig. 6 shows terahertz time-domain information images.
As shown in fig. 5, the specific imaging steps of the time domain information imaging are as follows:
1) Obtain the terahertz time-domain waveforms of all pixels of the sample by terahertz time-domain spectral imaging scanning, forming a matrix of pixel time-domain spectral information files;
2) Read the spectral data files of all row pixels in the first column of the sample, compute the required imaging parameter values, and store them in a time-domain imaging parameter matrix. Different time-domain information imaging methods use different imaging physical parameters; the corresponding imaging parameter values are determined and calculated according to Table 2.
3) Repeat step 2) until the last column, determining the imaging parameter values of all pixels of the sample.
4) Quantization mapping. For imaging, the imaging parameter of each pixel is stretched proportionally to [0, 255] to form the pixel's imaging gray value:

M = (m - m_min) / (m_max - m_min) × (M_max - M_min) + M_min    (1)

where m is the imaging parameter value, m_min and m_max are the minimum and maximum of the imaging parameter values, M is the stretched gray value, M_min = 0, and M_max = 255.
5) And outputting the terahertz time-domain image according to the imaging gray matrix.
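As an illustrative sketch of the quantization mapping in step 4) (a NumPy version under the stated convention M_min = 0 and M_max = 255; the peak-value parameter matrix is hypothetical, not data from the patent):

```python
import numpy as np

def quantize_to_gray(params):
    """Linearly stretch an imaging-parameter matrix to gray values in [0, 255], formula (1)."""
    p = np.asarray(params, dtype=float)
    m_min, m_max = p.min(), p.max()
    # formula (1) with M_min = 0 and M_max = 255
    gray = (p - m_min) / (m_max - m_min) * 255.0
    return np.round(gray).astype(np.uint8)

# Hypothetical peak-amplitude parameters for a 2 x 3 pixel grid
peaks = np.array([[0.2, 0.5, 0.8],
                  [0.2, 1.0, 0.6]])
gray = quantize_to_gray(peaks)
```

Applying the same mapping column by column, as in steps 2) and 3), fills the full imaging gray matrix.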
Table 2 time domain mode imaging information parameter selection table
As shown in fig. 6, part (a) of fig. 6 is the time-domain maximum-peak information. Phase imaging uses the time information corresponding to positions such as the minimum or maximum peak of the amplitude, and reflects characteristics such as refraction and scattering of the sample; for example, part (b) of fig. 6 is the time information corresponding to the minimum-peak position.
(2.2) frequency domain information imaging
The frequency domain information imaging mode mainly utilizes physical parameters in various frequency domain spectrums to perform imaging, mainly comprises information such as amplitude, phase, energy, absorption coefficient and the like of a specific frequency point, and reflects the physical characteristic difference of a sample under different frequencies. Fig. 7 is a terahertz frequency domain information image, and B in fig. 7 is power spectral density parameter information corresponding to a frequency point 2.2 THz. Fig. 8 is a flow chart of frequency domain information imaging.
Generally speaking, in the high-frequency band the terahertz wavelength is short, the scattering effect is small, and the image resolution is high, so more detailed information such as sample edges and textures can be obtained. However, the terahertz wave energy in this band is small and easily disturbed by background noise, so the images often contain large noise. In the low-frequency band the terahertz wavelength is long and carries large energy, so the image noise is small; but the scattering influence of terahertz waves dominates in this band, the image resolution is low, and the image mainly expresses the external contour of the measured object rather than its internal details. Images of higher quality mainly come from imaging with mid-band information, which comprehensively reflects both the contour and the detail information of the sample.
Because the terahertz energy spectrum of the sample comprehensively reflects the absorption condition of the sample on terahertz waves, the invention mainly researches an imaging method based on energy information in an energy density spectrum, and the energy parameter calculation method comprises the following steps:
Let the terahertz time-domain signal of the sample be s(t), its energy E, and its spectral density after Fourier transform S(f). By Parseval's theorem, the energy parameter E is

E = ∫ |s(t)|^2 dt = ∫ |S(f)|^2 df    (2)

where |S(f)|^2 is the energy spectral density of the signal; the integral of |S(f)|^2 over the frequency axis equals the signal energy.
As shown in fig. 8, the specific imaging steps of frequency domain information imaging are as follows:
1) acquiring a time domain spectral information file matrix of each pixel point of a sample;
2) reading spectrum data files of all rows of pixel points in a first column of a sample, and transforming a time domain spectrum into a frequency domain spectrum through Blackman windowed fast Fourier transform;
3) Calculate the energy parameter by Parseval's theorem, formula (2);
4) repeating the steps 2) and 3) until the last column, and determining the energy parameter information of all pixel points in the sample;
5) and selecting energy parameter values of all pixel points under a certain frequency to construct a frequency domain imaging parameter matrix. Carrying out quantitative mapping according to the formula (1), and expanding each energy parameter value to be between [0,255] to obtain a frequency domain imaging gray matrix;
6) and outputting the terahertz frequency domain image according to the frequency domain imaging gray matrix.
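The energy-parameter calculation in the steps above can be sketched numerically (an assumed NumPy version for a uniformly sampled waveform; the discrete Parseval identity replaces the integrals of formula (2), and the pulse shown is a toy signal, not measured data):

```python
import numpy as np

def energy_parameter(signal, dt):
    """Signal energy in the time domain and, via the discrete Parseval identity,
    in the frequency domain."""
    e_time = np.sum(np.abs(signal) ** 2) * dt        # approximates the integral of |s(t)|^2 dt
    spectrum = np.fft.fft(signal)
    n = len(signal)
    e_freq = np.sum(np.abs(spectrum) ** 2) / n * dt  # discrete Parseval identity
    return e_time, e_freq

# Hypothetical terahertz-like pulse sampled at interval dt
dt = 0.01
t = np.arange(256) * dt
pulse = np.exp(-((t - 1.0) ** 2) / 0.02) * np.cos(2 * np.pi * 5 * t)
e_t, e_f = energy_parameter(pulse, dt)
```

The two values agree to floating-point precision, which is the identity the imaging step relies on.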
The time-domain spectral signal can be decomposed and represented in the frequency domain through the FFT. When the frequency of the input signal is not an integer multiple of the FFT resolution, the signal energy spreads over the whole frequency domain, masking frequency points with smaller amplitudes so that they cannot be observed. The invention adopts a frequency-domain windowed interpolation method to improve this situation and recover the characteristics of small-amplitude frequency points.
The invention adopts an interpolation method based on the Blackman window function to prevent energy leakage. Meanwhile, windowing effectively improves the frequency-domain spectral resolution. For example, in the API T-Ray 5000 THz-TDS system used in this embodiment, the native frequency-domain spectral resolution is 12.5 GHz; Blackman-windowed FFT processing improves it by a factor of 4, so the frequency-point interval of the frequency-domain spectrum becomes 3.125 GHz. The specific implementation is as follows:
① Blackman window function design
The Blackman window is a cosine window given by

w(n) = 0.42 - 0.5cos(2πn/N) + 0.08cos(4πn/N),  n = 0, 1, 2, ..., N

with corresponding spectrum W(2πf).
The first side lobe of the Blackman window decays rapidly, which suppresses spectral leakage. Although the main lobe of this window is wide, that shortcoming can be compensated by increasing the number of sampling points per period.
② windowed FFT
Let the discrete signal obtained after sampling the single frequency signal x (t) be:
Figure GDA0002923439850000151
The windowed FFT then gives

X(f) = (A/2j)·[e^(jθ)·W(2π(f - f0)/fs) - e^(-jθ)·W(2π(f + f0)/fs)]

By symmetry, the negative-frequency part of the spectrum is discarded and only the spectrum at positive frequencies is kept:

X+(f) = (A/2j)·e^(jθ)·W(2π(f - f0)/fs)

where A denotes the amplitude, f0 the center frequency, fs the sampling frequency, n the discrete sequence index, W the window-function spectrum, θ the phase, e the base of the natural logarithm, and j the imaginary unit.
③ Discrete sampling
Discrete sampling of X+(f) gives the discrete Fourier expression

X+(kΔf) = (A/2j)·e^(jθ)·W(2π(kΔf - f0)/fs),  Δf = fs/N

where N is the data truncation length.
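The windowed interpolation can be illustrated with a toy example (an assumed NumPy sketch, not the API system's implementation): applying a Blackman window and zero-padding the FFT by a factor of 4 reduces the frequency-point spacing from fs/N to fs/(4N), the 12.5 GHz to 3.125 GHz improvement described above.

```python
import numpy as np

def blackman_windowed_fft(x, fs, pad_factor=4):
    """FFT of a Blackman-windowed sequence, zero-padded to refine the bin spacing."""
    n = len(x)
    xw = x * np.blackman(n)                       # suppress spectral leakage
    spectrum = np.fft.rfft(xw, n=pad_factor * n)  # zero-padded FFT
    freqs = np.fft.rfftfreq(pad_factor * n, d=1.0 / fs)
    return freqs, np.abs(spectrum)

# Toy numbers chosen so that fs/N = 12.5 and fs/(4N) = 3.125, mirroring the GHz
# figures in the text (units here are arbitrary).
fs = 100.0
n = 8
x = np.sin(2 * np.pi * 25.0 * np.arange(n) / fs)
freqs, mag = blackman_windowed_fft(x, fs)
```

Zero-padding interpolates the spectrum; it is the window that controls leakage, as the Blackman-window discussion above explains.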
(2.3) multimodal imaging
At present, terahertz images for nondestructive testing analysis are mainly generated directly by the software supplied with the imaging system, which generally images with a single physical quantity. In fact, owing to factors such as the power and functional differences of the detection equipment and the detection environment, the image quality obtained by single-information imaging alone cannot meet the requirements of detection analysis. An imaging method using multiple modes is therefore required.
The main advantage of terahertz spectral imaging is that the spectral image integrates spectral information. Taking the time-domain signal of each scanned pixel as the basic experimental data, the invention therefore proposes a multimode imaging method based on terahertz time-domain and frequency-domain information. According to the specific detection requirements of the samples, a large number of terahertz images are obtained by time-frequency-domain multi-information image reconstruction, forming a terahertz image library for each detection sample. The library contains terahertz time-domain images and terahertz frequency-domain images, and provides basic image data for comprehensive and accurate identification of sample targets from multiple angles, for multiple purposes, and with multiple features.
(3) Calculating the fusion evaluation index of the image, and screening out the standard image
To select a high-quality image from the large terahertz image library as the image to be retrieved by the SIFT-Kmeans model, i.e., the standard image, the invention introduces five objective evaluation indexes of digital image quality: standard deviation (SD), information entropy (H), average gradient (AG), spatial frequency (SF), and peak signal-to-noise ratio (PSNR). For an image of size m × n, the objective quality evaluation indexes are calculated as follows:
① Standard deviation
The standard deviation reflects the dispersion of the image gray values relative to the mean. The smaller the standard deviation, the more concentrated the gray-value distribution of the image; conversely, the larger the standard deviation, the more dispersed the gray-value distribution and the more information the image contains. The standard deviation of the image is

SD = sqrt( (1/(m·n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} (P(i,j) - μ)^2 )

where P(i,j) is the gray value of the image at (i,j); m is the image height and n the image width; μ is the gray-value mean.
② Information entropy
Information entropy measures the average information content of an image. The larger the information entropy, the more information the image contains and the better the imaging effect; conversely, the smaller the entropy, the more information was lost during imaging and the worse the effect. The information entropy of the image is

H = - Σ_{l=0}^{L-1} p(l)·log2 p(l)

where L is the number of gray levels of the image (256 levels are used here), and p(l) is the probability of gray value l appearing in the image, i.e., the ratio of the number of pixels with gray value l to the total number of pixels.
③ Average gradient
The average gradient reflects the expressive power of the image's detail information. The larger the average gradient, the richer the image layers, the higher the image definition, and the better the imaging effect; conversely, the smaller the average gradient, the poorer the definition and the worse the imaging effect. The average gradient of the image is

AG = (1/(m·n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} sqrt( ((∂P/∂x)^2 + (∂P/∂y)^2) / 2 )

where ∂P/∂x and ∂P/∂y are the gray-value differences of the image in the horizontal and vertical directions.
④ Spatial frequency
The spatial frequency reflects the spatial variation of the image's pixel gray values and is divided into a spatial row frequency and a spatial column frequency. The larger the spatial frequency, the stronger the high-frequency components of the image, the weaker the low-frequency components, and the more rapidly the gray values vary in space; conversely, the smaller the spatial frequency, the stronger the low-frequency components, the weaker the high-frequency components, and the flatter the gray-value distribution. The spatial frequency of the image is

SF = sqrt(RF^2 + CF^2)
RF = sqrt( (1/(m·n)) · Σ_{i=1}^{m} Σ_{j=2}^{n} (P(i,j) - P(i,j-1))^2 )
CF = sqrt( (1/(m·n)) · Σ_{i=2}^{m} Σ_{j=1}^{n} (P(i,j) - P(i-1,j))^2 )

where RF is the spatial row frequency and CF is the spatial column frequency.
⑤ Peak signal-to-noise ratio (PSNR)
The peak signal-to-noise ratio is an objective standard for measuring image distortion or noise level; the evaluation result is in dB, and the higher the PSNR value, the less the image distortion. It can be calculated by

PSNR = 10·log10( MAX^2 / MSE )
MSE = (1/(m·n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} (P(i,j) - P0(i,j))^2

where MSE is the mean square error; P0(i,j) is the gray value of the reference original image at (i,j), the image directly acquired by the imaging system being taken as the original; and MAX is the maximum gray value of a pixel.
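The five indexes can be sketched as follows (an illustrative NumPy version under the stated conventions of 256 gray levels and the system-acquired image as the PSNR reference; the gradient uses simple forward differences, which is one common discretization among several):

```python
import numpy as np

def std_dev(img):
    """Standard deviation of the gray values about the mean."""
    p = img.astype(float)
    return float(np.sqrt(np.mean((p - p.mean()) ** 2)))

def info_entropy(img):
    """Information entropy over 256 gray levels."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    prob = prob[prob > 0]
    return float(-np.sum(prob * np.log2(prob)))

def avg_gradient(img):
    """Average gradient from horizontal/vertical forward differences."""
    p = img.astype(float)
    gx = p[:-1, 1:] - p[:-1, :-1]
    gy = p[1:, :-1] - p[:-1, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def psnr(img, ref):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return float(10 * np.log10(255.0 ** 2 / mse))
```

The spatial frequency follows the same pattern as `avg_gradient`, with row and column differences combined as sqrt(RF^2 + CF^2).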
The information entropy and average gradient indexes reflect the key detail information of the sample's features. These two indexes are used as the main weighted indexes and combined with analysis of the cutoff frequency of the sample's terahertz spectrum to select a higher-quality terahertz image from the image library as the standard image. The order of the index weights is: the first weight is the terahertz spectral cutoff frequency, the second weight is the information entropy, and the third weight is the average gradient. The steps are as follows:
calculating the information entropy of the images with the frequency less than the terahertz spectrum cut-off frequency in the image library;
screening out an image with the largest information entropy as a standard image;
judging whether the number of screened standard images is greater than or equal to 2;
if so, calculating the average gradient of each screened standard image;
and taking the image with the largest average gradient as the final standard image.
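The screening logic above can be sketched as follows (hypothetical image records with illustrative fields `freq` in THz, `entropy`, and `grad`; none of these names appear in the original):

```python
def select_standard_image(images, cutoff_thz):
    """Screen the standard image: cutoff-frequency filter first, then maximum
    information entropy, then maximum average gradient as the tie-breaker."""
    candidates = [im for im in images if im["freq"] < cutoff_thz]
    best_entropy = max(im["entropy"] for im in candidates)
    top = [im for im in candidates if im["entropy"] == best_entropy]
    if len(top) >= 2:                  # two or more candidates tie on entropy
        return max(top, key=lambda im: im["grad"])
    return top[0]

library = [
    {"name": "img_a", "freq": 0.4, "entropy": 6.1, "grad": 2.0},
    {"name": "img_b", "freq": 1.2, "entropy": 7.3, "grad": 3.5},
    {"name": "img_c", "freq": 1.4, "entropy": 7.3, "grad": 4.1},
    {"name": "img_d", "freq": 2.8, "entropy": 7.9, "grad": 5.0},  # above cutoff, excluded
]
best = select_standard_image(library, cutoff_thz=1.5)
```

With the illustrative numbers above, the highest-entropy image beyond the cutoff frequency is excluded, and the entropy tie is broken by the average gradient.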
In practical application, the priority of each evaluation index can be set according to actual needs so as to screen out images meeting application requirements.
(4) SIFT feature extraction
Fig. 9 shows the image SIFT feature extraction and matching process. To compare the similarity of two images, the Scale-Invariant Feature Transform (SIFT) feature points of each image must be extracted. For the thousands of SIFT feature points of two images, the traditional method performs similarity matching between the multidimensional feature vectors of the two feature-point sets; the flow of typical image SIFT feature-point extraction and matching is shown in fig. 9.
When this method is used for image matching retrieval with a large number of images, the matching computation over multidimensional vectors is very large and the algorithm efficiency is low. To improve the efficiency of the algorithm, the invention adopts the SIFT-Kmeans model for similar-image matching.
The implementation process of matching similar images by using the SIFT-Kmeans model is as follows:
firstly, extracting SIFT feature points of all images; then clustering SIFT feature points by adopting a K-means algorithm to generate a new K-dimensional feature vector of each image; and finally, matching and calculating the K-dimensional feature vector of each image to measure the similarity of the two images.
Fig. 10 compares similarity matching with the conventional SIFT model and with the improved SIFT-Kmeans model. As shown in fig. 10, the similarity calculation method provided by the invention reduces the multidimensional vector computation over thousands of SIFT feature points of two images to a measurement between two K-dimensional vectors, which greatly reduces the computation amount and algorithm complexity and improves the real-time performance of the algorithm. The specific implementation is as follows:
(4.1) image SIFT feature extraction
The SIFT feature description of an image is a local feature description operator based on invariance. The operator is based on scale space and remains invariant under operations such as image scaling, spatial rotation, brightness change, and affine transformation. The core of the SIFT algorithm is to search for extreme points in different scale spaces, calculate their magnitude, orientation, and scale, and describe the feature points by the key points formed from this information.
The key points extracted by SIFT are "stable" feature points that are not altered by factors such as illumination, affine transformation, and noise, for example corner points, edge points, bright points in dark regions, and dark points in bright regions. Extracting and matching image feature points is the process of comparing these feature points.
The image SIFT feature extraction is mainly realized by four steps, and the specific process is as follows:
1) constructing a Gaussian difference pyramid scale space
① Scale space
The scale space (scale space) of a two-dimensional image is defined as:
L(x, y, σ) = G(x, y, σ) * I(x, y)    (8)

where I(x, y) is the input image and * denotes convolution; G(x, y, σ) is a scale-variable Gaussian function; x, y are spatial coordinates; σ is the scale parameter of the Gaussian function, whose size determines the degree of smoothing of the image.
The scale space is implemented with Gaussian blur: a blur template is computed from the normal distribution and convolved with the original image to blur it. The normal distribution equation of N-dimensional space is

G(r) = (1/(2πσ'^2)^(N/2)) · e^(-r^2/(2σ'^2))    (9)

where σ' is the standard deviation of the normal distribution and r is the blur radius. Assuming a two-dimensional (blur) template of size m × n, with m and n the length and width of the template in pixels, the Gaussian kernel function corresponding to element (x, y) of the template is

G(x, y) = (1/(2πσ^2)) · e^(-((x - m/2)^2 + (y - n/2)^2)/(2σ^2))    (10)
② Gaussian difference pyramid
The image multi-scale features are described by an image pyramid, and the basis of the scale space construction is a Difference of Gaussian (DOG). The extraction of the SIFT feature points is carried out on a DOG pyramid, and the features contained in the DOG images are 'stable' feature points existing under different fuzzy degrees and different scales. For an image with the original size of m × n, the calculation formula of the number of layers of the gaussian pyramid is as follows:
g=log2{min(m,n)}-t,t∈[0,log2{min(m,n)}] (11)
where t is the logarithmic value of the smallest dimension of the tower-top image. The Gaussian pyramid applies Gaussian blur with a different σ at each layer, and a group of images forms an octave.
Stable image features are produced by calculating the extrema of the Laplacian-of-Gaussian function σ^2·∇^2G, but this extremum calculation is expensive, so the difference calculation of the Gaussian DOG operator is used to approximate the differential:

G(x, y, kσ) - G(x, y, σ) ≈ (k - 1)·σ^2·∇^2G    (12)

where ∇^2 denotes the Laplacian operator, and k - 1 is a constant that does not affect the calculation of the extreme-point position.
③ Construction of the DOG scale space
Stable extreme points are detected by convolution of the image with Gaussian difference kernels of different scales, constructing the DOG scale space, which is defined as

D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)    (13)
All values of the scale are

2^(i-1)·(σ, kσ, k^2σ, ..., k^(n-1)σ),  k = 2^(1/S)    (14)

where i is the ordinal number of the octave and S is the number of layers in each group.
2) Spatial extremum point search and location
In the two-dimensional image space, a local extreme point is detected by comparing the central point with its 26 neighbors in the adjacent region of the same image and in the adjacent layers above and below. Because the DOG value is sensitive to noise, the DOG function is curve-fitted in scale space to improve the localization of extreme points. The Taylor expansion of the DOG function in scale space is

D(x) = D + (∂D^T/∂x)·x + (1/2)·x^T·(∂^2D/∂x^2)·x    (15)

and the extreme point is then

x̂ = -(∂^2D/∂x^2)^(-1) · (∂D/∂x)    (16)
3) Orientation assignment for stable extreme points
For any extreme point, the gradient magnitude and orientation are

m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )
θ(x, y) = arctan( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )    (17)
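The gradient magnitude and orientation assignment above can be sketched directly (assuming L is a smoothed image array and (x, y) an interior pixel; the indexing convention here is illustrative):

```python
import numpy as np

def keypoint_gradient(L, x, y):
    """Gradient magnitude and orientation at pixel (x, y) via central differences."""
    dx = float(L[x + 1, y] - L[x - 1, y])
    dy = float(L[x, y + 1] - L[x, y - 1])
    return np.hypot(dx, dy), np.arctan2(dy, dx)

# A ramp rising by 3 per step along the first axis: central difference gives 6
L = 3.0 * np.arange(5)[:, None] * np.ones((1, 5))
mag, theta = keypoint_gradient(L, 2, 2)
```

`np.arctan2` is used instead of a bare arctan so the orientation covers the full circle, which a practical implementation needs.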
4) description of characteristic points
According to the method, each key point is characterized by adopting descriptors of 128-dimensional SIFT feature vectors in 8 directions of a traditional 4 x 4 image block.
(5) Feature vector K-means clustering
The common matching method for SIFT feature points computes the Euclidean distances between the 128-dimensional vectors of the feature points in the two SIFT feature-point sets of two images. With the nearest-neighbor (NN) distance algorithm, the Euclidean distances from an image feature point to its nearest-neighbor and second-nearest-neighbor feature points are computed and their ratio is taken. The ratio is compared with a set threshold; if it is smaller than the threshold, the two feature points are considered matched. Since each image can generate thousands of SIFT feature points, this method is computationally expensive when the image data set is large.
To improve the efficiency of image retrieval and matching, the traditional SIFT feature-point matching method is improved. First, the SIFT feature points of all images are extracted. Then the feature points are clustered by K-means, and the number of each image's SIFT feature points falling at each cluster center is counted to generate a new K-dimensional feature vector. Because one image has many SIFT feature points, the SIFT feature points of all images finally form K clusters; counting the number of one image's SIFT feature points in each cluster forms a K-dimensional vector, which is used as the feature vector for measuring the similarity of two images. Finally, the vector cosine similarity matching method computes the cosine of the angle between the K-dimensional feature vectors of two images, realizing the image similarity measurement. The essence of the K-means clustering algorithm here is a quantitative statistical analysis of the images' SIFT feature points.
The specific implementation steps of adopting a K-means clustering algorithm to carry out SIFT feature point clustering are as follows:
(5.1) Extract the SIFT feature points of all images (l frames) in the image library, n feature points in total, and construct the feature-point data set X = (x1^d, x2^d, ..., xn^d), where xi^d (i = 1, 2, ..., n) denotes the i-th feature point, whose dimension is d = 128, i.e., each feature point is a 128-dimensional vector;
(5.2) selecting the number K of proper clustering clusters, and randomly selecting K SIFT points from the data set as initial clustering centers (centroids);
(5.3) calculating the distance from each SIFT feature point to the cluster center, wherein the distance metric function adopts the Euclidean distance. According to the Euclidean distance nearest criterion, sequentially distributing each SIFT feature point in the data set to the nearest mass center to form K clustering clusters;
and (5.4) iteratively updating the clustering center.
Considering that Euclidean distance is used, the sum of squared errors (SSE) is taken as the objective function of the clustering:

SSE = Σ_{i=1}^{k} Σ_{x∈Ci} dist(x, ci)^2    (18)

where k denotes the number of cluster centers, Ci the i-th cluster, ci the i-th cluster center, and dist the Euclidean distance.
Solving for the k-th centroid ck that minimizes the SSE gives

ck = (1/mk) · Σ_{x∈Ck} x    (19)

where mk is the number of feature points in the k-th cluster.
According to formula (19), the mean of all feature points in each cluster is calculated to generate the corresponding mean vector, which is taken as the cluster center of that cluster, and the objective function value is calculated. K-means is run twice to generate two different cluster sets, and the mean vectors with the smaller objective function value are taken as the new centroids.
And (5.5) judging convergence.
Judge whether the cluster centers and the objective function value have changed; if not, output the clustering result of the K clusters, otherwise return to (5.3). The convergence judgment proceeds as follows: first judge whether the clusters have changed; if not, no new feature points have been reassigned, and the process ends. If the clusters have changed, judge whether the maximum number of iterations has been reached; if so, the process ends, otherwise return to (5.3).
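Steps (5.1) to (5.5) can be sketched with a plain NumPy K-means (an illustrative stand-in: real SIFT descriptors are 128-dimensional, while the toy points here are 2-dimensional):

```python
import numpy as np

def kmeans(points, k, max_iters=50, seed=0):
    """K-means with Euclidean distance and an SSE objective, as in formula (19)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iters):
        # (5.3) assign each point to the nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # (5.4) update each centroid to the mean of its cluster, formula (19)
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # (5.5) converged when the centroids stop changing
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    sse = float(np.sum((points - centers[labels]) ** 2))  # SSE objective
    return labels, centers, sse

# Two well-separated toy "feature point" groups
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers, sse = kmeans(pts, k=2)
```

A production version would also rerun with different initializations and keep the result with the smallest SSE, as the text describes.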
(6) And counting the number of SIFT feature points of each image in each cluster to generate a K-dimensional feature vector corresponding to each image.
As described above, the traditional SIFT feature-point matching method is improved to raise the efficiency of image retrieval and matching. Specifically:
According to the result output by the K-means clustering, count the number of each image's SIFT feature points in each cluster, thereby generating a K-dimensional feature vector of centroid feature-point counts. All images whose SIFT feature points have been extracted form a new data set for image measurement Y = (y1^K, y2^K, ..., yl^K), where yi^K (i = 1, 2, ..., l) denotes the K-dimensional feature vector of the i-th image and K is the number of clusters;
(7) Match the image similarity, and select the N images with the highest similarity values to the standard image as super-resolution recognition source images.
The invention adopts cosine similarity to measure the similarity of two images. Cosine similarity is measured by the cosine value of the included angle between two vectors, and for the vectors A and B, the calculation formula is as follows:
cosθ = (A·B) / (‖A‖·‖B‖)    (20)

The cosine similarity lies in [-1, 1]; the larger the cosine similarity, the more similar the two images.
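Steps (6) and (7) can be sketched together as follows (illustrative names only; `labels` and `image_ids` are assumed outputs of the K-means step giving each SIFT point's cluster and source image):

```python
import numpy as np

def image_feature_vector(labels, image_ids, k, image_id):
    """Count one image's SIFT points per cluster -> its K-dimensional feature vector."""
    mask = np.asarray(image_ids) == image_id
    return np.bincount(np.asarray(labels)[mask], minlength=k)

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy data: 6 SIFT points from 2 images, clustered into K = 3 clusters
labels    = [0, 0, 1, 0, 1, 2]
image_ids = [0, 0, 0, 1, 1, 1]
va = image_feature_vector(labels, image_ids, 3, 0)   # image 0 -> [2, 1, 0]
vb = image_feature_vector(labels, image_ids, 3, 1)   # image 1 -> [1, 1, 1]
sim = cosine_similarity(va, vb)
```

Ranking the library by this similarity against the standard image's vector and keeping the top N yields the super-resolution recognition source images.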
(8) Wavelet fusion super-resolution reconstruction
Through the quantitative analysis of the SIFT-Kmeans model, several high-quality representative images of each sample are obtained, serving as the sample's super-resolution recognition source images. Image fusion is a method that combines two or more images into a new image. From low to high, image fusion is divided into three levels: pixel-level fusion, feature-level fusion, and decision-level fusion. Pixel-level fusion, the bottom-level method, directly processes the collected image data to obtain the fused image; its advantages are that it retains the original data characteristics to the greatest extent and provides detailed information for the two higher-level fusion methods. According to the processing domain, pixel-level image fusion is divided into spatial-domain image fusion and frequency-domain image fusion.
The frequency-domain image fusion method first applies a wavelet transform or pyramid transform to the original images, then fuses the transformed coefficients of the images to be fused, and finally applies the inverse transform to the fused coefficients to obtain the final fused image. The frequency-domain method can therefore treat the image information of the images to be fused differently in different frequency bands, achieving a better image fusion effect.
Fig. 11 is a flowchart of an image fusion method based on wavelet transform. As shown in fig. 11, the image fusion method based on wavelet transform includes:
① Read the gray-value matrices of the n images to be fused, denoting the gray value of the k-th image to be fused at (i, j) by Pk(i, j);
② Set the decomposition level dim and the wavelet basis function wname of the wavelet fusion; the decomposition level is generally 2 or 3, and the wavelet basis functions include haar, db2, db3, sym2, sym3, etc.;
③ Delete part of the data of the gray-value matrices of the images to be fused so that the row and column sizes of the matrices are divisible by 2^dim;
④ Call the wavedec2 function in MATLAB to perform wavelet decomposition of the images to be fused;
⑤ Fuse the low-frequency band by taking the mean of the low-frequency coefficients at corresponding positions of the source images as the new low-frequency coefficient; fuse the high-frequency band by taking the coefficient with the largest absolute value at corresponding positions of the source images as the new high-frequency coefficient;
⑥ Call the waverec2 function in MATLAB to reconstruct the image from the processed high- and low-frequency coefficients, and call the imshow function to redisplay it, obtaining the fused image.
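The fusion rule of steps ⑤ and ⑥ can be sketched without MATLAB using a one-level 2-D Haar transform (a deliberate simplification of the multi-level wavedec2/waverec2 flow: low-frequency coefficients are averaged, high-frequency coefficients are taken by maximum absolute value):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) sub-bands."""
    x = x.astype(float)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row transform
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)       # column transform
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse one-level 2-D Haar transform."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def wavelet_fuse(img_a, img_b):
    """Fuse two images: mean of low-frequency, max-absolute-value of high-frequency."""
    ca, cb = haar2(img_a), haar2(img_b)
    ll = (ca[0] + cb[0]) / 2
    highs = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *highs)
```

Image sizes must be even here, mirroring the divisibility-by-2^dim trimming of step ③; a multi-level version simply recurses on the LL band.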
The invention adopts the image fusion method based on wavelet transformation to perform image fusion, and has the following three advantages:
first, in performing wavelet transform, not only low-frequency information of an image but also three sets of high-frequency information in horizontal, vertical, and diagonal directions can be acquired at the same time.
Secondly, the effects of reducing noise, clearing edges and the like can be well achieved by reasonably selecting the wavelet basis and the number of decomposition layers.
Finally, wavelet transforms have higher independence at different resolutions than pyramid transforms.
(9) Non-destructive testing of the test object from the fused image
The transmission frequency-domain spectra of the four samples obtained in this example are shown in fig. 12, where part (a) is the alumina sample, part (b) the aluminum nitride sample, part (c) the beryllium oxide sample, and part (d) the zirconia sample. As can be seen from fig. 12, the terahertz pulse penetrates all four ceramic matrix composites well. From the sample transmission frequency-domain spectra, the effective bandwidth of the terahertz pulse measurement is about 0.2 to 3.5 THz; the zirconia cutoff frequency is about 0.5 THz, the alumina cutoff frequency is about 1.5 THz, and the aluminum nitride and beryllium oxide cutoff frequencies are about 2 THz.
The invention adopts a multi-mode imaging processing method to generate a spectral image library for each tested sample. Each library contains 1 terahertz image obtained directly from the scanning system, 4 time-domain electric-field-intensity images, and 1120 energy images, one per frequency point (3.125 GHz spacing) of the frequency-domain energy density spectrum, giving 1125 images in total.
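The per-frequency-point energy imaging that populates most of this library can be sketched as follows. This is a minimal numpy sketch under stated assumptions (the array layout, the Blackman window, and the min-max grayscale mapping are illustrative, not the embodiment's exact code):

```python
import numpy as np

def energy_image(traces, k):
    """traces: (rows, cols, nt) array of per-pixel time-domain waveforms.
    k: index of the frequency point of interest.
    Returns an 8-bit grayscale energy image at that frequency point."""
    # windowed FFT of every pixel's waveform (Blackman window, as in the patent)
    spec = np.fft.rfft(traces * np.blackman(traces.shape[-1]), axis=-1)
    energy = np.abs(spec[..., k]) ** 2          # spectral energy at frequency bin k
    # quantitative mapping: scale each energy value proportionally into [0, 255]
    lo, hi = energy.min(), energy.max()
    gray = (energy - lo) / (hi - lo + 1e-12) * 255.0
    return gray.astype(np.uint8)
```

Calling this once per frequency bin over the effective bandwidth would generate the stack of energy images described above.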
Fig. 13 is a result of quantitative analysis of alumina in different imaging modes, where part (a) of fig. 13 is a result of quantitative analysis of alumina in an optical imaging mode, part (b) of fig. 13 is a result of quantitative analysis of alumina in a system software imaging mode, part (c) of fig. 13 is a result of quantitative analysis of alumina in an amplitude peak imaging mode, part (d) of fig. 13 is a result of quantitative analysis of alumina in an amplitude mean imaging mode, part (e) of fig. 13 is a result of quantitative analysis of 0.20THz energy imaging, part (f) of fig. 13 is a result of quantitative analysis of 0.34THz energy imaging, part (g) of fig. 13 is a result of quantitative analysis of 0.76THz energy imaging, and part (h) of fig. 13 is a result of quantitative analysis of 1.30THz energy imaging. Parts (c) and (d) of fig. 13 are time domain field intensity parametric imaging, and parts (e) to (h) of fig. 13 are frequency domain energy parametric imaging.
Fig. 14 is a graph showing the results of quantitative analysis of aluminum nitride in different imaging modes, where part (a) of fig. 14 shows the results of quantitative analysis of aluminum nitride in an optical imaging mode, part (b) of fig. 14 shows the results of quantitative analysis of aluminum nitride in a system software imaging mode, part (c) of fig. 14 shows the results of quantitative analysis of aluminum nitride in an amplitude peak imaging mode, part (d) of fig. 14 shows the results of quantitative analysis of aluminum nitride in an amplitude mean imaging mode, part (e) of fig. 14 shows the results of quantitative analysis in a 0.36THz energy imaging mode, part (f) of fig. 14 shows the results of quantitative analysis in a 0.83THz energy imaging mode, part (g) of fig. 14 shows the results of quantitative analysis in a 1.48THz energy imaging mode, and part (h) of fig. 14 shows the results of quantitative analysis in a 1.80THz energy imaging mode. Parts (c) and (d) in fig. 14 are time domain field intensity parametric imaging, and parts (e) to (h) in fig. 14 are frequency domain energy parametric imaging.
Fig. 15 is a graph showing results of quantitative analysis of beryllium oxide in different imaging modes, where part (a) of fig. 15 shows results of quantitative analysis of beryllium oxide in an optical imaging mode, part (b) of fig. 15 shows results of quantitative analysis of beryllium oxide in a system software imaging mode, part (c) of fig. 15 shows results of quantitative analysis of beryllium oxide in an amplitude peak imaging mode, part (d) of fig. 15 shows results of quantitative analysis of beryllium oxide in an amplitude mean imaging mode, part (e) of fig. 15 shows results of quantitative analysis of 0.36THz energy imaging, part (f) of fig. 15 shows results of quantitative analysis of 0.67THz energy imaging, part (g) of fig. 15 shows results of quantitative analysis of 1.48THz energy imaging, and part (h) of fig. 15 shows results of quantitative analysis of 1.82THz energy imaging. Parts (c) and (d) of fig. 15 are time domain field intensity parametric imaging, and parts (e) to (h) of fig. 15 are frequency domain energy parametric imaging.
Fig. 16 is a graph showing the results of quantitative analysis of zirconia in different imaging modes, where part (a) of fig. 16 shows the results of quantitative analysis of zirconia in an optical imaging mode, part (b) of fig. 16 shows the results of quantitative analysis of zirconia in a system software imaging mode, part (c) of fig. 16 shows the results of quantitative analysis of zirconia in an amplitude peak imaging mode, part (d) of fig. 16 shows the results of quantitative analysis of zirconia in an amplitude mean imaging mode, part (e) of fig. 16 shows the results of quantitative analysis of 0.24THz energy imaging, part (f) of fig. 16 shows the results of quantitative analysis of 0.29THz energy imaging, part (g) of fig. 16 shows the results of quantitative analysis of 0.37THz energy imaging, and part (h) of fig. 16 shows the results of quantitative analysis of 0.46THz energy imaging. Parts (c) and (d) of fig. 16 are time domain field intensity parametric imaging, and parts (e) to (h) of fig. 16 are frequency domain energy parametric imaging.
The quantitative analysis shows that time-domain parametric imaging yields poor image quality; direct imaging by the system software is slightly better than time-domain information imaging; and energy imaging at individual frequency points of the frequency domain is generally superior to both, with clear, higher-resolution images obtainable in the middle frequency band.
To analyze the experimental results further, five objective image-quality evaluation indexes are adopted and the terahertz image library of each sample is traversed to evaluate image quality objectively. Because terahertz images are strongly affected by noise in the high frequency band, the cutoff frequency is determined from the sample's transmission frequency-domain spectrum, and a representative image is comprehensively selected from the middle frequency band below the cutoff frequency as the image to be matched. Representative partial objective evaluation results are extracted and shown in Table 3.
Combining the objective evaluation results with the transmission-spectrum cutoff frequency of each sample, high-quality representative images of the 4 samples are selected in order of three weighted criteria: spectral cutoff frequency, information entropy, and average gradient. These serve as the images to be matched for the SIFT-Kmeans model: 0.76 THz energy imaging for the alumina sample, 1.48 THz energy imaging for the aluminum nitride and beryllium oxide samples, and 0.24 THz energy imaging for the zirconia sample. Detailed information of the object can be clearly observed in these high-quality images.
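The entropy-then-gradient selection of the standard image can be sketched as follows (a minimal numpy sketch; the entropy base and the particular average-gradient definition are assumptions for illustration):

```python
import numpy as np

def entropy(gray):
    # Shannon information entropy (bits) of an 8-bit grayscale image
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_gradient(gray):
    # mean magnitude of forward differences: a simple sharpness measure
    g = gray.astype(float)
    dx = np.diff(g, axis=1)[:-1, :]
    dy = np.diff(g, axis=0)[:, :-1]
    return float(np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean())

def pick_standard(images):
    # keep the image(s) of maximal entropy; break ties by average gradient
    ents = [entropy(im) for im in images]
    best = [i for i, e in enumerate(ents) if e == max(ents)]
    return max(best, key=lambda i: avg_gradient(images[i]))
```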
TABLE 3 comparison of objective evaluation of image quality
[Table 3 is reproduced as images in the original patent publication.]
Using the SIFT-Kmeans model, the representative image selected by comprehensive evaluation is taken as the standard image, and the images in the library are retrieved and matched to obtain a group of images with higher similarity as the source images of the sample. In the simulation experiment, the Gaussian scale value σ for SIFT feature-point extraction is set to 1.6, and the K-means cluster number K is set to 500. Specific model parameter settings are shown in Table 4.
TABLE 4 model parameters
[Table 4 is reproduced as an image in the original patent publication.]
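The retrieval step of the SIFT-Kmeans model ranks library images against the standard image by the cosine of the angle between their K-dimensional feature vectors. A minimal numpy sketch (the vectors themselves are assumed already built from the clustered SIFT points):

```python
import numpy as np

def cosine_sim(u, v):
    # cosine of the included angle between two feature vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def retrieve(standard_vec, library_vecs, n=8):
    # rank library images by similarity to the standard image's
    # K-dimensional bag-of-features vector; return the top-n indices
    sims = [cosine_sim(standard_vec, v) for v in library_vecs]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:n]
```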
Matching retrieval over the 1125 terahertz images of each sample finally yields the 8 most similar images, as shown in figs. 17-20. FIG. 17 shows source images of the alumina sample under energy imaging at different frequencies, ordered by decreasing cosine of the angle between feature vectors, i.e., decreasing similarity: part (a) 0.86562 THz; (b) 0.88125 THz; (c) 0.91562 THz; (d) 0.85 THz; (e) 0.85312 THz; (f) 0.91875 THz; (g) 0.88437 THz; (h) 0.96875 THz energy imaging.
Fig. 18 likewise shows source images of the aluminum nitride sample at different frequencies, in order of decreasing similarity: part (a) 1.4844 THz; (b) 1.4969 THz; (c) 1.5 THz; (d) 1.5937 THz; (e) 1.4781 THz; (f) 1.5906 THz; (g) 1.3375 THz; (h) 1.1687 THz energy imaging.
Fig. 19 shows source images of the beryllium oxide sample at different frequencies, in order of decreasing similarity: part (a) 1.4781 THz; (b) 1.4719 THz; (c) 1.1969 THz; (d) 1.1937 THz; (e) 1.475 THz; (f) 1.1875 THz; (g) 1.4656 THz; (h) 1.4687 THz energy imaging.
Fig. 20 shows source images of the zirconia sample at different frequencies, in order of decreasing similarity: part (a) 0.31562 THz; (b) 0.38437 THz; (c) 0.2625 THz; (d) 0.32187 THz; (e) 0.35625 THz; (f) 0.3875 THz; (g) 0.39062 THz; (h) 0.325 THz energy imaging.
Through quantitative analysis with the SIFT-Kmeans model, 8 high-quality representative images of each sample are obtained to serve as the super-resolution recognition source images of that sample. The clustering and matching results show that the proposed method can effectively retrieve representative high-quality source images of a sample from a large amount of image data. The retrieved images are all frequency-domain energy images; apart from slight differences in contrast, they represent other image details consistently, clearly showing the sample outline and detail information such as internal and external defects.
In order to verify the effect of super-resolution image processing, the invention carries out wavelet transformation fusion processing on two representative images of an aluminum oxide sample and a beryllium oxide sample, and adopts objective image quality evaluation indexes to evaluate the processing effect.
The result of the fusion is shown in fig. 21, in which part (a) of fig. 21 is alumina 0.7625THz energy imaging; (b) part of the energy imaging is aluminum oxide 0.86562 THz; (c) part is alumina wavelet fusion imaging; (d) part is beryllium oxide 1.4875THz energy imaging; (e) part is beryllium oxide 1.4781THz energy imaging; (f) and the part is beryllium oxide wavelet fusion imaging.
The objective evaluation results are shown in Table 5. According to the fusion results and the objective indexes, wavelet fusion of the source images yields a fused image of moderate brightness with essentially no loss of image information and with some enhancement of definition and contrast, demonstrating the effectiveness of the fusion processing.
TABLE 5 Objective evaluation of image fusion
[Table 5 is reproduced as an image in the original patent publication.]
Based on the characteristic of terahertz spectral imaging that image and spectrum are obtained together, the invention provides a method for selecting multi-source images for nondestructive testing. Image processing methods such as super-resolution identification fundamentally address the problem of image-source selection. First, the frequency-domain resolution of the spectrum is improved by a windowed FFT (fast Fourier transform), and a time-frequency-domain multi-mode imaging method is used to construct an image library of the test sample; second, the standard image for retrieval is selected through fusion of image evaluation indexes; then a group of representative sample images is matched and identified by the SIFT-Kmeans model; finally, image fusion is verified by wavelet transform. The results show that the proposed method can quickly and effectively select high-quality source images of the test sample, on the basis of which the accuracy and speed of nondestructive testing can be effectively improved.
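The leakage-suppression effect of the Blackman-windowed FFT used in the first step can be illustrated numerically. A small numpy demonstration with assumed sampling parameters (not data from the embodiment):

```python
import numpy as np

fs, n = 100.0, 256                          # assumed sampling rate and record length
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 10.3 * t)            # tone between FFT bins -> spectral leakage
rect = np.abs(np.fft.rfft(x))               # plain (rectangular-window) FFT
wind = np.abs(np.fft.rfft(x * np.blackman(n)))  # Blackman-windowed FFT
far = slice(60, 120)                        # bins far from the tone (near bin 26)
leak_ratio = wind[far].max() / rect[far].max()
# far from the tone, the windowed spectrum leaks far less energy
```

The Blackman window's rapidly decaying sidelobes shrink `leak_ratio` well below 1, at the cost of a wider main lobe, as stated in the claims.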
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and its core concept; meanwhile, a person skilled in the art may, following the idea of the present invention, modify the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (7)

1. A terahertz imaging method, comprising:
acquiring a plurality of time domain spectral information file matrixes of a detection object;
performing terahertz time-domain information imaging and terahertz frequency-domain information imaging according to the time-domain spectral information file matrix to obtain a terahertz time-domain image and a terahertz frequency-domain image;
the specific method for obtaining the terahertz frequency domain image comprises the following steps:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and transforming the time domain spectrum of each pixel point into a frequency domain spectrum by adopting fast Fourier transform;
calculating the energy parameter of each pixel point by adopting the Parseval theorem formula according to the spectral density of the frequency domain spectrum;
constructing a frequency domain imaging parameter matrix according to the energy parameters of all the pixel points;
performing quantitative mapping on each energy parameter in the frequency domain imaging parameter matrix, and expanding and transforming each energy parameter to the range between [0,255] according to a proportion to obtain a frequency domain imaging gray matrix;
generating a terahertz frequency domain image according to the frequency domain imaging gray matrix;
the fast Fourier transform is a Blackman-windowed fast Fourier transform;
the fast Fourier transform processing of Blackman windowing is specifically realized by the following steps:
① Blackman window function design
The Blackman window is a cosine window:
ω(n) = 0.42 − 0.5cos(2πn/N) + 0.08cos(4πn/N), where n = 0,1,2,···,N, with corresponding spectrum W(2πf);
the first sidelobe of the Blackman window decays rapidly, which helps suppress spectral leakage; although the main lobe of this window is wide, the shortcoming of the overly wide main lobe can be compensated by increasing the number of sampling points per period;
② Windowed FFT
Let the discrete signal obtained by sampling the single-frequency signal x(t) be:
x(n) = Acos(2πf0n/fs + θ)
The windowed FFT gives:
X(f) = (A/2j)[e^(jθ)W(2π(f−f0)/fs) − e^(−jθ)W(2π(f+f0)/fs)]
due to symmetry, the negative-frequency spectrum is discarded and only the spectrum at the positive frequency is retained:
X+(f) = (A/2j)e^(jθ)W(2π(f−f0)/fs), where A is the amplitude, f0 the center frequency, fs the sampling frequency, n the discrete-sequence length, W the window-function spectrum, θ the phase, e the base of the natural logarithm, and j the imaginary unit;
③ Discrete sampling
Discrete sampling of X+(f) gives the discrete Fourier expression:
X+(kΔf) = (A/2j)e^(jθ)W(2π(kΔf−f0)/fs), where Δf = fs/N and N is the data truncation length;
constructing an image library of the detection object according to the terahertz time domain image and the terahertz frequency domain image;
calculating the information entropy of the images with the frequency smaller than the terahertz spectrum cut-off frequency in the image library;
screening out an image with the largest information entropy as a standard image;
after the image with the maximum information entropy is screened out and used as a standard image, the method further comprises the following steps:
judging whether the number of the screened standard images is greater than or equal to 2, to obtain a judgment result;
when the judgment result is yes, calculating the average gradient of each screened standard image;
taking the image with the maximum average gradient as the final standard image;
extracting SIFT feature points of each image in the image library;
the image SIFT feature extraction is realized by four steps, and the specific process is as follows:
1) constructing a Gaussian difference pyramid scale space
① Scale space
The scale space of a two-dimensional image is defined as:
L(x,y,σ)=G(x,y,σ)·I(x,y)
wherein I (x, y) is an image region; g (x, y, sigma) is a scale-variable Gaussian function, and x, y are spatial coordinates; σ is a gaussian scale value, the size of which determines the degree of smoothness of the image;
the scale space is realized by Gaussian blur, a blur template is calculated through normal distribution, and convolution operation is carried out on the blur template and the original image to blur the image; the normal distribution equation of the N-dimensional space is:
G(r) = (1/(2πσ'²)^(N/2)) · e^(−r²/(2σ'²))
where σ' is the standard deviation of a normal distribution; r is the blur radius; assuming that the two-dimensional blur template is m × n, where m and n are respectively expressed as the length and width of an image, and the unit is a pixel, the gaussian kernel function corresponding to the element (x, y) on the two-dimensional blur template is:
G(x,y) = (1/(2πσ'²)) · e^(−((x−m/2)² + (y−n/2)²)/(2σ'²))
② Gaussian difference pyramid
The image multi-scale features are described by using an image pyramid, and the basis of the scale space construction is a Gaussian difference pyramid DOG; extracting SIFT feature points on a DOG pyramid, wherein the features contained in the DOG images are stable feature points with different fuzzy degrees and different scales; for an image with the original size of m × n, the calculation formula of the number of layers of the gaussian pyramid is as follows:
g=log2{min(m,n)}-t,t∈[0,log2{min(m,n)}]
where t is the logarithm of the minimum dimension of the tower-top image; the Gaussian pyramid applies Gaussian blur with a different σ at each layer, and one group of such images forms an octave;
Stable image features can be generated by computing extrema of the scale-normalized Laplacian of Gaussian σ²∇²G, but the extremum computation is expensive, so the difference-of-Gaussian (DOG) operator is used to approximate the differential:
G(x,y,kσ) − G(x,y,σ) ≈ (k−1)σ²∇²G
where ∇² is the Laplacian operator; (k−1) is a constant and does not affect the location of the extreme points;
③ Construction of the DOG scale space
Detecting stable extreme points by utilizing Gaussian difference kernels with different scales and image convolution calculation, and constructing a DOG scale space, wherein the DOG scale space is defined as:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))·I(x,y)
=L(x,y,kσ)-L(x,y,σ)
all scale values are:
2^(i−1)·(σ, kσ, k²σ, …, k^(n−1)σ), k = 2^(1/S)
where i is the octave index and S is the number of layers per group;
2) spatial extremum point search and location
In a two-dimensional image space, comparing a central point with 26 points of the neighborhood and the upper and lower adjacent layers of images of the central point, and detecting a local extreme point; because the DOG value is sensitive to noise, curve fitting needs to be carried out on the DOG function in the scale space, and the matching capability of extreme points is improved; the taylor expansion of the DOG function in scale space is:
D(x) = D + (∂D/∂x)ᵀx + (1/2)xᵀ(∂²D/∂x²)x
The extreme point is then:
x̂ = −(∂²D/∂x²)⁻¹(∂D/∂x)
3) direction information distribution of stable extreme points
For any extreme point, the gradient magnitude and direction are as follows:
m(x,y) = √[(L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²]
θ(x,y) = arctan[(L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y))]
4) description of characteristic points
For each key point, the traditional 4 × 4 array of image sub-blocks with 8 orientation bins each is adopted, giving a 128-dimensional SIFT feature-vector descriptor to represent the key point;
clustering each SIFT feature point by adopting a K-means clustering method, and clustering each SIFT feature point into K clustering clusters;
the specific implementation steps of adopting a K-means clustering algorithm to carry out SIFT feature point clustering are as follows:
(5.1) extracting the SIFT feature points of all images in the image library, n points in total, and constructing the data set X = (x1, x2, ..., xn), where xi (i = 1,2,...,n) denotes the ith feature point and each feature point has dimension d = 128, i.e., each feature point is a 128-dimensional vector;
(5.2) selecting the number K of proper clustering clusters, and randomly selecting K SIFT points from the data set as initial clustering centers;
(5.3) calculating the distance from each SIFT feature point to the clustering center, wherein the distance measurement function adopts the Euclidean distance; according to the Euclidean distance nearest criterion, sequentially distributing each SIFT feature point in the data set to the nearest mass center to form K clustering clusters;
(5.4) iteratively updating the clustering center
Using the Euclidean distance, the sum of squared errors (SSE) is taken as the clustering objective function:
SSE = Σ_{i=1}^{k} Σ_{x∈Ci} dist(ci, x)²
where k denotes the number of cluster centers, ci denotes the ith cluster center, and dist is the Euclidean distance;
Minimizing the SSE with respect to the kth centroid ck gives:
ck = (1/mk) Σ_{x∈Ck} x    (19)
where mk is the number of feature points in the kth cluster;
calculating the mean of all feature points in each cluster according to formula (19) to generate the corresponding mean vector, taking the mean vector as the cluster center of that cluster, and computing the objective function value; K-means is run twice to generate two different cluster sets, and the mean vectors with the smaller objective function value are taken as the new centroids;
(5.5) convergence judgment
Judging whether the clustering center and the objective function value are changed, if not, outputting the clustering results of K clusters, and if so, returning to (5.3); the convergence judgment process is as follows: firstly, judging whether the cluster is changed or not, if not, no new characteristic point is redistributed, and ending; if the cluster is changed, judging whether the maximum iteration times is reached, if so, ending, otherwise, returning to (5.3);
counting the number of SIFT feature points of each image in each cluster, and generating a K-dimensional feature vector corresponding to each image;
respectively calculating similarity values of the K-dimensional feature vectors of the standard images and the K-dimensional feature vectors of the images in the image library;
selecting N images with the highest similarity value as super-resolution recognition source images, wherein N represents the number of preset source images;
and carrying out image fusion on the N super-resolution recognition source images by adopting an image fusion method based on wavelet transformation to obtain a fused terahertz image.
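The K-means loop of steps (5.1)-(5.5) and the subsequent K-dimensional feature-vector construction in claim 1 can be sketched as follows. This is a plain-numpy Lloyd's-algorithm sketch with toy data and a toy K (the patent's K = 500 over 128-dimensional SIFT descriptors works identically); function names are illustrative assumptions:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm) over SIFT-like descriptors X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centroids
    for _ in range(iters):
        # assign each point to the nearest centroid (Euclidean distance)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update each centroid to the mean of its cluster
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):                   # convergence: centroids unchanged
            break
        centers = new
    return labels, centers

def bow_vector(labels_of_image, k):
    # K-dimensional feature vector of one image:
    # per-cluster count of that image's SIFT feature points
    return np.bincount(labels_of_image, minlength=k)
```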
2. The terahertz imaging method according to claim 1, wherein the specific method for obtaining the terahertz time-domain image comprises:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and determining the imaging parameters of each pixel point according to the type of the time domain information imaging method;
constructing a time domain imaging parameter matrix according to each imaging parameter;
carrying out quantitative mapping on each imaging parameter in the time domain imaging parameter matrix, and expanding and transforming each imaging parameter to be between [0,255] according to a proportion to obtain a time domain imaging gray matrix;
and generating a terahertz time-domain image according to the time-domain imaging gray matrix.
3. The terahertz imaging method as claimed in claim 1, wherein the K-dimensional feature vector corresponding to the image is an ordered set of the number of SIFT feature points of the image in each cluster.
4. The terahertz imaging method according to claim 1, wherein the calculating the similarity values of the K-dimensional feature vectors of the standard images and the K-dimensional feature vectors of the images in the image library respectively includes:
and respectively calculating the similarity value of the K-dimensional feature vector of the standard image and the K-dimensional feature vector of each image in the image library by adopting a cosine similarity calculation method.
5. A terahertz imaging system, the system comprising:
the acquisition module is used for acquiring a plurality of time domain spectral information file matrixes of the detection object;
the imaging module is used for carrying out terahertz time domain information imaging and terahertz frequency domain information imaging according to the time domain spectrum information file matrix to obtain a terahertz time domain image and a terahertz frequency domain image;
the specific method for obtaining the terahertz frequency domain image comprises the following steps:
reading the spectrum data files of all rows of each column of pixel points in the time domain spectrum information file matrix, and transforming the time domain spectrum of each pixel point into a frequency domain spectrum by adopting fast Fourier transform;
calculating the energy parameter of each pixel point by adopting the Parseval theorem formula according to the spectral density of the frequency domain spectrum;
constructing a frequency domain imaging parameter matrix according to the energy parameters of all the pixel points;
performing quantitative mapping on each energy parameter in the frequency domain imaging parameter matrix, and expanding and transforming each energy parameter to the range between [0,255] according to a proportion to obtain a frequency domain imaging gray matrix;
generating a terahertz frequency domain image according to the frequency domain imaging gray matrix;
the fast Fourier transform is a Blackman-windowed fast Fourier transform;
the fast Fourier transform processing of Blackman windowing is specifically realized by the following steps:
① Blackman window function design
The Blackman window is a cosine window:
ω(n) = 0.42 − 0.5cos(2πn/N) + 0.08cos(4πn/N), where n = 0,1,2,···,N, with corresponding spectrum W(2πf);
the first sidelobe of the Blackman window decays rapidly, which helps suppress spectral leakage; although the main lobe of this window is wide, the shortcoming of the overly wide main lobe can be compensated by increasing the number of sampling points per period;
② Windowed FFT
Let the discrete signal obtained by sampling the single-frequency signal x(t) be:
x(n) = Acos(2πf0n/fs + θ)
The windowed FFT gives:
X(f) = (A/2j)[e^(jθ)W(2π(f−f0)/fs) − e^(−jθ)W(2π(f+f0)/fs)]
due to symmetry, the negative-frequency spectrum is discarded and only the spectrum at the positive frequency is retained:
X+(f) = (A/2j)e^(jθ)W(2π(f−f0)/fs), where A is the amplitude, f0 the center frequency, fs the sampling frequency, n the discrete-sequence length, W the window-function spectrum, θ the phase, e the base of the natural logarithm, and j the imaginary unit;
③ Discrete sampling
Discrete sampling of X+(f) gives the discrete Fourier expression:
X+(kΔf) = (A/2j)e^(jθ)W(2π(kΔf−f0)/fs), where Δf = fs/N and N is the data truncation length;
the image library construction module is used for constructing an image library of the detection object according to the terahertz time domain image and the terahertz frequency domain image;
the information entropy calculation module is used for calculating the information entropy of the images with the frequency smaller than the terahertz spectrum cut-off frequency in the image library;
the screening module is used for screening the image with the largest information entropy as the standard image; judging whether the number of the screened standard images is greater than or equal to 2, to obtain a judgment result; when the judgment result is yes, calculating the average gradient of each screened standard image; and taking the image with the maximum average gradient as the final standard image;
the characteristic point extraction module is used for extracting SIFT characteristic points of each image in the image library;
the image SIFT feature extraction is realized by four steps, and the specific process is as follows:
1) constructing a Gaussian difference pyramid scale space
① Scale space
The scale space of a two-dimensional image is defined as:
L(x,y,σ)=G(x,y,σ)·I(x,y)
wherein I (x, y) is an image region; g (x, y, sigma) is a scale-variable Gaussian function, and x, y are spatial coordinates; σ is a gaussian scale value, the size of which determines the degree of smoothness of the image;
the scale space is realized by Gaussian blur, a blur template is calculated through normal distribution, and convolution operation is carried out on the blur template and the original image to blur the image; the normal distribution equation of the N-dimensional space is:
G(r) = (1/(2πσ'²)^(N/2)) · e^(−r²/(2σ'²))
where σ' is the standard deviation of a normal distribution; r is the blur radius; assuming that the two-dimensional blur template is m × n, where m and n are respectively expressed as the length and width of an image, and the unit is a pixel, the gaussian kernel function corresponding to the element (x, y) on the two-dimensional blur template is:
G(x,y) = (1/(2πσ'²)) · e^(−((x−m/2)² + (y−n/2)²)/(2σ'²))
② Gaussian difference pyramid
The image multi-scale features are described by using an image pyramid, and the basis of the scale space construction is a Gaussian difference pyramid DOG; extracting SIFT feature points on a DOG pyramid, wherein the features contained in the DOG images are stable feature points with different fuzzy degrees and different scales; for an image with the original size of m × n, the calculation formula of the number of layers of the gaussian pyramid is as follows:
g=log2{min(m,n)}-t,t∈[0,log2{min(m,n)}]
where t is the logarithm of the minimum dimension of the tower-top image; the Gaussian pyramid applies Gaussian blur with a different σ at each layer, and one group of such images forms an octave;
Stable image features can be produced by computing the extrema of the scale-normalized Laplacian-of-Gaussian function
σ²∇²G
but this computation is expensive, so the difference-of-Gaussian (DOG) operator is used to approximate the differential by a finite difference:
G(x, y, kσ) − G(x, y, σ) ≈ (k − 1)σ²∇²G
where ∇² denotes the Laplacian operator, and k − 1 is a constant factor that does not affect the location of the extreme points;
Third, constructing the DOG scale space
Stable extreme points are detected by convolving the image with Gaussian difference kernels of different scales, constructing the DOG scale space, which is defined as:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
           = L(x, y, kσ) − L(x, y, σ)
The full set of scale values is:
2^(i−1) · (σ, kσ, k²σ, …, k^(n−1)σ),  k = 2^(1/S)
where i is the index of the octave and S is the number of layers in each group;
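The layer-count formula and the per-octave scale schedule above can be sketched in Python (an illustrative reading; the base scale σ = 1.6 and the choice of S + 3 blurred images per octave are conventional SIFT defaults we assume, not values fixed by the claim):

```python
import numpy as np

def num_octaves(m, n, t=3):
    """Number of Gaussian-pyramid groups: g = log2(min(m, n)) - t."""
    return int(np.log2(min(m, n))) - t

def octave_scales(i, sigma=1.6, S=3, n_layers=None):
    """Scale values of octave i: 2**(i-1) * (sigma, k*sigma, ..., k**(n-1)*sigma),
    with k = 2**(1/S); n_layers defaults to S + 3 blurred images per octave."""
    k = 2.0 ** (1.0 / S)
    n_layers = n_layers or (S + 3)
    return [2.0 ** (i - 1) * sigma * k ** j for j in range(n_layers)]
```

Note that k = 2^(1/S) makes the scale double after S steps, so consecutive octaves tile the scale axis without gaps.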
2) Searching for and locating spatial extreme points
In the two-dimensional image space, each central point is compared with its 26 neighbours, namely 8 in the same layer and 9 in each of the adjacent layers above and below, to detect local extreme points; because the DOG values are sensitive to noise, the DOG function is curve-fitted in the scale space to improve the localization of the extreme points; the Taylor expansion of the DOG function in scale space, with offset X = (x, y, σ)ᵀ, is:
D(X) = D + (∂Dᵀ/∂X) · X + (1/2) · Xᵀ · (∂²D/∂X²) · X
The extreme point is then:
X̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X)
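The 26-neighbour comparison that precedes the Taylor refinement can be sketched in Python (illustrative only; the function name and the strict-inequality tie-breaking rule are our assumptions):

```python
import numpy as np

def is_local_extremum(dog, o, i, j):
    """True if DoG sample (o, i, j) is strictly greater or smaller than all
    of its 26 neighbours: 8 in its own layer plus 9 in each adjacent layer."""
    cube = dog[o - 1:o + 2, i - 1:i + 2, j - 1:j + 2].ravel()
    v = dog[o, i, j]
    others = np.delete(cube, 13)          # drop the centre sample itself
    return bool(v > others.max() or v < others.min())
```

Requiring a strict inequality rejects plateaus, which would otherwise produce duplicate candidate keypoints.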
3) Orientation assignment for stable extreme points
For any extreme point, the gradient magnitude and orientation are:
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]
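The finite-difference gradient formulas above translate directly into Python (a minimal sketch; using `arctan2` for a full-circle angle is our choice, the claim's arctan form leaves the quadrant handling unspecified):

```python
import numpy as np

def gradient_mag_ori(L, x, y):
    """Gradient magnitude m(x, y) and orientation theta(x, y) of the
    smoothed image L, via the central differences in the formulas above."""
    dx = L[x + 1, y] - L[x - 1, y]
    dy = L[x, y + 1] - L[x, y - 1]
    mag = float(np.hypot(dx, dy))
    theta = float(np.arctan2(dy, dx))   # radians; later binned into a histogram
    return mag, theta
```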
4) Describing the feature points
For each keypoint, the conventional 4 × 4 image sub-blocks with 8 orientation bins each are adopted, giving a 128-dimensional SIFT feature vector descriptor that represents the keypoint;
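The 4 × 4 × 8 = 128-dimensional descriptor assembly can be sketched in Python (illustrative; the 16 × 16 patch size, uniform weighting, and final L2 normalization are standard SIFT conventions we assume, since the claim states only the block and bin counts):

```python
import numpy as np

def sift_descriptor(mags, oris):
    """Build a 128-d descriptor from a 16x16 patch of gradient magnitudes
    and orientations: 4x4 sub-blocks, 8 orientation bins per sub-block."""
    hist = np.zeros((4, 4, 8))
    bins = np.floor((oris % (2 * np.pi)) / (2 * np.pi) * 8).astype(int) % 8
    for i in range(16):
        for j in range(16):
            hist[i // 4, j // 4, bins[i, j]] += mags[i, j]
    v = hist.ravel()                      # 4 * 4 * 8 = 128 dimensions
    n = np.linalg.norm(v)
    return v / n if n > 0 else v          # normalize against illumination changes
```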
the clustering module is used for clustering all SIFT feature points into K clusters by the K-means clustering method;
the K-means clustering of the SIFT feature points is implemented in the following steps:
(5.1) Extract the SIFT feature points of all images in the image library, n points in total, and construct the data set X = (X₁ᵈ, X₂ᵈ, …, Xₙᵈ), where Xᵢᵈ (i = 1, 2, …, n) denotes the i-th feature point and d = 128 is its dimension, i.e., each feature point is a 128-dimensional vector;
(5.2) Select an appropriate number of clusters K, and randomly select K SIFT feature points from the data set as the initial cluster centers;
(5.3) Compute the distance from each SIFT feature point to each cluster center, using the Euclidean distance as the metric; by the nearest-Euclidean-distance criterion, assign each SIFT feature point in the data set to its nearest centroid, forming K clusters;
(5.4) Iteratively update the cluster centers
Based on the Euclidean distance, the sum of squared errors (SSE) is used as the clustering objective function:
SSE = Σᵢ₌₁ᵏ Σ_{x∈cᵢ} dist(cᵢ, x)²
where k denotes the number of cluster centers, cᵢ denotes the i-th cluster center, and dist is the Euclidean distance;
Minimizing the SSE with respect to the k-th centroid cₖ gives:
cₖ = (1/mₖ) Σ_{x∈cₖ} x
where mₖ is the number of feature points in the k-th cluster;
Compute the mean of all feature points in each cluster according to the centroid formula above, generate the corresponding mean vector, take it as the cluster center of that cluster, and compute the objective function value; run K-means twice to generate two different cluster sets, and take the mean vector with the smaller objective function value as the new centroid;
(5.5) Convergence judgment
Judge whether the cluster centers and the objective function value have changed; if not, output the clustering result of the K clusters, otherwise return to (5.3); the convergence judgment proceeds as follows: first judge whether the clusters have changed; if not, no new feature points have been reassigned and the procedure ends; if the clusters have changed, judge whether the maximum number of iterations has been reached; if so, end, otherwise return to (5.3);
the feature vector generation module is used for counting the number of SIFT feature points of each image that fall in each cluster and generating the K-dimensional feature vector of each image;
the similarity calculation module is used for calculating the similarity between the K-dimensional feature vector of the standard image and the K-dimensional feature vector of each image in the image library;
the source image screening module is used for selecting the N images with the highest similarity values as the super-resolution recognition source images, where N is the preset number of source images;
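The similarity calculation and source-image screening modules can be sketched together in Python; note that the claim does not fix the similarity metric, so the cosine similarity used here is an illustrative assumption, as are both function names:

```python
import numpy as np

def cosine_similarity(a, b):
    """One plausible similarity measure between two K-dimensional
    bag-of-words feature vectors (an assumption; the claim names no metric)."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(np.dot(a, b) / (na * nb)) if na > 0 and nb > 0 else 0.0

def select_source_images(standard_vec, library_vecs, N):
    """Indices of the N library images most similar to the standard image."""
    sims = [cosine_similarity(standard_vec, v) for v in library_vecs]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:N]
```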
and the image fusion module is used for fusing the N super-resolution recognition source images with an image fusion method based on the wavelet transform to obtain the fused terahertz image.
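A minimal sketch of wavelet-domain fusion, assuming a one-level Haar transform with mean fusion of the low-frequency band and maximum-magnitude selection in the detail bands (the claim names wavelet-based fusion but not the wavelet family, decomposition depth, or fusion rules, so all three choices here are our assumptions):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0       # row averages
    d = (img[0::2] - img[1::2]) / 2.0       # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse_images(images):
    """Fuse source images in the wavelet domain: average the LL band,
    keep the largest-magnitude coefficient in each detail band."""
    bands = [haar2d(im.astype(float)) for im in images]
    LL = np.mean([b[0] for b in bands], axis=0)
    detail = []
    for idx in (1, 2, 3):
        stack = np.stack([b[idx] for b in bands])
        pick = np.abs(stack).argmax(axis=0)
        detail.append(np.take_along_axis(stack, pick[None], axis=0)[0])
    return ihaar2d(LL, *detail)
```

Fusing identical inputs reconstructs the input exactly, which is a quick sanity check that the forward and inverse transforms match.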
6. A non-destructive inspection method, comprising:
acquiring a terahertz image of a detection object, wherein the terahertz image is a fused terahertz image generated according to the method of any one of claims 1-4;
and carrying out nondestructive detection on the detection object according to the terahertz image.
7. A non-destructive inspection system, comprising:
a fused image acquisition module for acquiring a terahertz image of the detection object, the terahertz image being a fused terahertz image generated according to the method of any one of claims 1 to 4;
and the nondestructive testing module is used for carrying out nondestructive testing on the detection object according to the terahertz image.
CN201910220319.8A 2019-03-22 2019-03-22 Terahertz imaging method and system and nondestructive testing method and system Active CN110674835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910220319.8A CN110674835B (en) 2019-03-22 2019-03-22 Terahertz imaging method and system and nondestructive testing method and system


Publications (2)

Publication Number Publication Date
CN110674835A CN110674835A (en) 2020-01-10
CN110674835B true CN110674835B (en) 2021-03-16

Family

ID=69068543


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110553998A (en) * 2019-07-31 2019-12-10 西安交通大学 nondestructive testing method for blade test piece of aero-engine based on terahertz technology
CN111553877B (en) * 2020-03-20 2022-04-01 西安交通大学 Damage identification and service life evaluation method based on terahertz ceramic matrix composite blade
CN112508113B (en) * 2020-12-14 2022-11-29 中国科学院空天信息创新研究院 Method and device for detecting passive terahertz human body image hidden target
CN113538405B (en) * 2021-07-30 2023-03-31 吉林大学 Nondestructive testing method and system for glass fiber composite material based on image fusion
CN113744163B (en) * 2021-11-03 2022-02-08 季华实验室 Integrated circuit image enhancement method and device, electronic equipment and storage medium
CN116434289A (en) * 2021-12-30 2023-07-14 同方威视技术股份有限公司 Millimeter wave or terahertz wave identity verification method and device, millimeter wave or terahertz wave security inspection equipment and electronic equipment
CN116818704B (en) * 2023-03-09 2024-02-02 苏州荣视软件技术有限公司 High-precision full-automatic detection method, equipment and medium for semiconductor flaw AI
CN116385296B (en) * 2023-04-04 2024-02-27 西安电子科技大学 Terahertz imaging method based on adaptive soft threshold shrinkage

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109214424A (en) * 2018-08-03 2019-01-15 大连理工大学 A method of the new-energy automobile charging time is predicted using regression analysis and clustering method
CN109255726A (en) * 2018-09-07 2019-01-22 中国电建集团华东勘测设计研究院有限公司 A kind of ultra-short term wind power prediction method of Hybrid Intelligent Technology

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20070257194A1 (en) * 2005-03-22 2007-11-08 Mueller Eric R Terahertz heterodyne tomographic imaging system
CN103576002B (en) * 2013-11-11 2016-01-20 华北电力大学(保定) A kind of computing method of capacitive insulator arrangement dielectric loss angle
CN105512469A (en) * 2015-12-01 2016-04-20 江苏省电力公司淮安供电公司 Charging pile harmonic wave detection algorithm based on windowing interpolation FFT and wavelet packet
CN107356599B (en) * 2017-06-23 2020-02-18 厦门大学 Terahertz nondestructive testing method for ceramic matrix composite material
CN108197649A (en) * 2017-12-29 2018-06-22 厦门大学 A kind of Terahertz image clustering analysis method and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant