CN117670698A - Ultra-micro target imaging method and system - Google Patents

Ultra-micro target imaging method and system

Publication number: CN117670698A
Application number: CN202311603387.5A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending
Inventors: 陈致蓬; 韩杰; 王珲荣; 潘果文; 肖鹏
Applicants: Hunan Aochuangpu Technology Co., Ltd.; Central South University
Filing date: 2023-11-28
Publication date: 2024-03-08

Abstract

The invention discloses an ultra-micro target imaging method and system. A full-band reflected light field, produced when a full-frequency light source irradiates an ultra-micro target, is received; the reflected light in the full-band field undergoes high-frequency and low-frequency switching filtering, and the light intensity information of the filtered reflected light is collected. Fourier transform and wavelet transform are applied to the intensity information to obtain a global image and a local image, respectively; global features are extracted from the global image and local features from the local image; and the two feature sets are fused, with an enhanced image of the ultra-micro target obtained from the fused features. This solves the technical problem of the low imaging precision of existing ultra-micro targets, achieves ultra-high resolution, strong noise immunity, and high imaging speed, and can effectively solve the chip imaging problem in the eutectic die-bonding process, while the overall system is simple in structure, low in cost, and easy to maintain.

Description

Ultra-micro target imaging method and system
Technical Field
The invention mainly relates to the technical field of optical imaging, in particular to an ultra-micro target imaging method and an ultra-micro target imaging system.
Background
With the continuous development of integrated circuit manufacturing processes, chip integration is higher and higher, and critical dimensions have entered deep submicron or even nanometer levels. In order to further improve the performance and reliability of the chip, it is desirable to be able to perform ultra-high resolution non-destructive inspection and imaging of critical structures inside the chip.
In recent years, super-resolution optical microscopy based on structured illumination, saturated laser excitation, and similar principles has made significant progress. These techniques can break through the diffraction limit of the optical system to achieve higher-resolution imaging. In parallel, dedicated image processing algorithms, the so-called super-resolution image reconstruction techniques, recover high-resolution images from low-resolution ones.
Organically combining these two technologies with an optimized optical path design promises rapid, accurate, and non-destructive ultra-high-resolution imaging of key chip regions, making it a powerful tool for chip defect detection, process control, and quality control. This represents an important development opportunity for super-resolution chip imaging technology.
The invention patent with publication number CN114137005B proposes a distributed multimode diffraction imaging method that uses multiple distributed sub-diffraction systems to acquire images of different fields of view and spectral bands and obtains high-resolution results through image processing algorithms. However, the matching relationship between the subsystems is not considered; the image processing pipeline is incomplete and insufficiently customized and optimized for diffraction imaging, so the results fall short of expectations. The algorithm complexity is also high, placing heavy demands on hardware computing capacity. In addition, the distributed structure suffers from optical-axis collinearity and stability problems and is relatively complex to realize.
The invention patent application with publication number CN116559427A proposes a lateral-flow immunoassay chip and device based on lens-free CMOS imaging. The method images directly with a lens-free CMOS image sensor and, combined with a printable immunochromatographic membrane, realizes lateral-flow immunodetection. However, its ability to control and correct imaging quality in the optical path is weak, with improvement relying mainly on back-end image processing; its adaptability to complex scenes is poor; the resolution reconstruction achievable with the CMOS chip it uses is limited; and its application range is narrow, being aimed mainly at lateral-flow chips.
Disclosure of Invention
The invention provides an ultra-micro target imaging method and an ultra-micro target imaging system, which solve the technical problem of low imaging precision of the existing ultra-micro target.
In order to solve the technical problems, the invention provides an ultra-micro target imaging method which comprises the following steps:
Receiving a full-band reflected light field reflected after a full-frequency light source irradiates the ultra-micro target, wherein the ultra-micro target is a chip.
Performing high-frequency and low-frequency switching filtering on the reflected light in the full-band reflected light field, and collecting the light intensity information of the filtered reflected light.
Performing Fourier transform and wavelet transform on the light intensity information, respectively, to obtain a global image and a local image.
Extracting global features of the global image and local features of the local image.
Fusing the global features and the local features, and obtaining an enhanced image of the ultra-micro target according to the fused features.
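For orientation, the following is a minimal Python sketch of these five steps applied to a single captured intensity frame. It is an illustrative reading of the method, not the disclosed implementation: the circular frequency-plane mask standing in for the switching filter, the Morlet wavelet, the single-row CWT, and the multiplicative fusion are all assumptions.

```python
import numpy as np
import pywt

def enhance(intensity: np.ndarray, cutoff: float = 0.1) -> np.ndarray:
    """Illustrative sketch of the five method steps on one intensity frame."""
    # High/low-frequency switching filter: a circular mask in the frequency
    # plane stands in for the bidirectional frequency divider.
    spec = np.fft.fftshift(np.fft.fft2(intensity))
    h, w = intensity.shape
    yy, xx = np.ogrid[-h // 2 : h - h // 2, -w // 2 : w - w // 2]
    lowpass = (np.hypot(yy, xx) < cutoff * min(h, w)).astype(float)
    structure = np.fft.ifft2(np.fft.ifftshift(spec * lowpass)).real          # low-frequency part
    texture = np.fft.ifft2(np.fft.ifftshift(spec * (1.0 - lowpass))).real    # high-frequency part

    # Fourier transform -> global image; continuous wavelet transform -> local image.
    global_img = np.fft.fft2(structure)
    local_img, _ = pywt.cwt(texture[h // 2], np.arange(1, 17), "morl")

    # Placeholder fusion of global and local features, then inverse Fourier transform.
    fused = global_img * (1.0 + local_img.mean())
    return np.abs(np.fft.ifft2(fused))

enhanced = enhance(np.random.rand(64, 64))
```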
Further, receiving the full-band reflected light field reflected after the full-frequency light source irradiates the ultra-micro target comprises:
Receiving the light beam reflected through the light guide after the full-frequency illumination light source irradiates the ultra-micro target.
Establishing a reflected light field model according to the reflected light beam, the reflected light field model being:

$$R_1(r,t)=\int_{\lambda_{min}}^{\lambda_{max}} R_i(r,\theta_1,\lambda)\,O(r,\theta_1,\theta_2,\lambda)\,e^{i(\omega t-k_r\cdot r)}\,d\lambda$$

where $R_1(r,t)$ denotes the full-band reflected light field model at time $t$ and position vector $r$; $R(r,\theta_1,\theta_2,\lambda)=R_i(r,\theta_1,\lambda)\,O(r,\theta_1,\theta_2,\lambda)$ denotes the reflected light field for position vector $r$, incident angle $\theta_1$, reflection angle $\theta_2$, and light wavelength $\lambda$; $R_i(r,\theta_1,\lambda)$ denotes the incident light field for position vector $r$, incident angle $\theta_1$, and wavelength $\lambda$; $O(r,\theta_1,\theta_2,\lambda)$ denotes the modulation function of the imaged object for position vector $r$, incident angle $\theta_1$, reflection angle $\theta_2$, and wavelength $\lambda$; $[\lambda_{min},\lambda_{max}]$ is the wavelength range of the light; $\omega$ denotes the angular frequency of the light; $k_r$ denotes the wave vector corresponding to position vector $r$; and $i$ denotes the imaginary unit.
Obtaining a full-band reflected light field according to the reflected light field model.
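A short numerical sketch of the model as reconstructed above: the integral over the wavelength band is approximated by a Riemann sum, and the angular dependence of $R_i$ and $O$ is folded into two user-supplied callables. All concrete values here are illustrative assumptions.

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def reflected_field(r, t, incident, modulation,
                    lam_min=400e-9, lam_max=760e-9, n=256):
    """Numerically evaluate R1(r, t) by integrating over the wavelength band.

    `incident(lam)` and `modulation(lam)` stand in for R_i and O at a fixed
    geometry (the angles theta_1, theta_2 are folded into the callables).
    """
    lam = np.linspace(lam_min, lam_max, n)
    omega = 2.0 * np.pi * C / lam        # angular frequency per wavelength
    k_r = 2.0 * np.pi / lam              # wave number along the position vector
    phase = np.exp(1j * (omega * t - k_r * r))
    integrand = incident(lam) * modulation(lam) * phase
    return np.sum(integrand) * (lam[1] - lam[0])   # simple Riemann sum

# Unit-amplitude incidence on a weakly reflecting point of the chip surface.
r1 = reflected_field(1.0e-6, 0.0,
                     incident=lambda lam: np.ones_like(lam),
                     modulation=lambda lam: 0.8 * np.ones_like(lam))
```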
Further, extracting global features of the global image includes:
the method comprises the steps of constructing a multi-level graph convolution network to extract global features of a global image, wherein a calculation formula of feature vector extraction of each level of graph convolution network in the multi-level graph convolution network is as follows:
wherein G is j Representing the feature vector output by the jth level graph convolution network, M representing the number of pixel points in the global image, G 0 Representing inputs to a multi-level graph convolutional network, Y 1 A global image is represented and is displayed,and->The characteristic of the kth point in the characteristic vector output by the jth and the (j+1) th level graph convolution network is respectively represented by +.>Representing the calculation of the j-th level of graph convolution network output, at +.>Surrounding created local area with K elements, < >>The characteristic of the nth point in the characteristic vector output by the jth graph convolution network is represented, wherein K is more than or equal to 1 and less than or equal to M, K is more than or equal to 1 and less than or equal to M, n is more than or equal to 1 and less than or equal to K, sigma represents a nonlinear activation function, w is a learnable parameter of the multilayer sensor, and Max-Pool is the maximum pooling operation.
The global features of the global image are obtained from the feature vectors output by each level of the graph convolution network:

$$\hat{G} = \text{MLP}(G_1, G_2, \ldots, G_j)$$

where $\hat{G}$ denotes the global features; $G_1$, $G_2$, and $G_j$ denote the feature vectors output by the 1st-, 2nd-, and $j$-th-level graph convolution networks, respectively; and MLP denotes the perceptual function of the graph convolution network.
Further, creating the local region $\Omega_k^j$ with $K$ elements around $g_k^j$ when computing the output of the $j$-th-level graph convolution network comprises:
Calculating the feature distance between the $k$-th point and every other point in the feature vector output by the $j$-th-level graph convolution network:

$$d_{k,m}^j = \left\| g_k^j - g_m^j \right\|_2^2$$

where $g_k^j$ and $g_m^j$ denote the features of the $k$-th and $m$-th points in the feature vector $G_j$ output by the $j$-th-level graph convolution network, $d_{k,m}^j$ denotes the feature distance between the $k$-th and $m$-th points, $\|\cdot\|_2^2$ denotes the squared two-norm, and $1 \le k \le M$, $1 \le m \le M$.
Selecting the $K$ elements whose distance is smaller than a preset threshold as the local region $\Omega_k^j$ with $K$ elements created around $g_k^j$ when computing the $j$-th-level output, with $1 < K < M$.
Further, extracting the local features of the local image includes:
the convolution operation of the preset layer threshold value is carried out on the local image, and the specific formula is as follows:
F (l) =ReLU(W (l) *F (l-1) +b (l) ),
wherein F is (l) And F (l-1) Representing the local output characteristics, W, of the first convolution layer and the first-1 convolution layer, respectively (l) And b (l) Representing the weights and offsets of the first convolution layer, respectively, and ReLU represents the activation function, N th Represents a preset layer threshold value, and l is more than or equal to 0 and less than or equal to N th
The local output characteristic of the last convolution layer is taken as the local characteristic of the local image.
Further, fusing the global features and the local features and obtaining the enhanced image of the ultra-micro target according to the fused features comprises:
Performing feature fusion on the global features and the local features.
Performing an inverse Fourier transform on the fused features.
Obtaining an enhanced image of the ultra-micro target according to the result of the inverse Fourier transform.
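A minimal sketch of this fusion-and-inversion step. Only the final inverse Fourier transform follows the text directly; the learned feature-fusion module is replaced by an assumed multiplicative gate.

```python
import numpy as np

def fuse_and_reconstruct(global_feat: np.ndarray, local_feat: np.ndarray) -> np.ndarray:
    """Fuse frequency-domain global features with local features, then invert."""
    gate = np.resize(local_feat, global_feat.shape)  # broadcast local map to global shape
    fused = global_feat * (1.0 + gate)               # stand-in for the learned fusion module
    return np.abs(np.fft.ifft2(fused))               # enhanced spatial-domain image

img = fuse_and_reconstruct(np.fft.fft2(np.random.rand(64, 64)), np.random.rand(16, 16))
```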
Further, the full-frequency light source comprises, connected in sequence, a laser diode, a collimating lens, a polarizer, an aspherical mirror, an annular laser, and first, second, third, and fourth output lens groups, wherein:
The laser diode serves as the pump source.
The collimating lens adjusts the divergence angle and beam quality of the light beam and converts the divergent beam into parallel light.
The polarizer selectively controls the polarization state of the polarized light.
The aspherical mirror converts the Gaussian beam into an annular beam.
The annular laser suppresses diffraction loss and realizes super-resolution focusing and imaging.
The first, second, third, and fourth output lens groups expand the irradiation area.
Further, the light guide comprises, connected in sequence, a first plano-convex lens, a relay lens group, and a second plano-convex lens, wherein:
The first plano-convex lens converges the light output by the full-frequency light source.
The relay lens group, composed of a series of cylindrical lenses, converges the light at the focus of the second plano-convex lens.
The second plano-convex lens converges the parallel light and emits it.
The invention provides an ultra-micro target imaging system, which comprises:
the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the ultra-micro target imaging method provided by the invention.
According to the ultra-micro target imaging method and system provided by the invention, a full-band reflected light field, reflected after a full-frequency light source irradiates the ultra-micro target, is received; the reflected light in the full-band field undergoes high-frequency and low-frequency switching filtering, and the light intensity information of the filtered light is collected; Fourier transform and wavelet transform are applied to the intensity information to obtain a global image and a local image, respectively; global features are extracted from the global image and local features from the local image; and the two feature sets are fused, with the enhanced image of the ultra-micro target obtained from the fused features. This solves the technical problem of the low imaging precision of existing ultra-micro targets; achieves ultra-high resolution, strong noise immunity, and high imaging speed; effectively solves the chip imaging problem in the eutectic die-bonding process; and keeps the overall system simple in structure, low in cost, and easy to maintain. In addition, the invention replaces conventional optical imaging with a novel imaging method and designs a complete super-resolution chip imaging system that obtains high-quality chip microstructure images at high speed, effectively resolving the imaging difficulties of the eutectic die-bonding process.
The beneficial effects of the invention include:
(1) Through a finely designed optical path structure, the invention captures richer light field information, including light waves of different frequencies and angles, laying a foundation for subsequent high-resolution image reconstruction.
(2) Advanced mathematical modeling: by using a sophisticated mathematical model of the reflected light field, the invention describes the interaction of light with micro-targets more accurately, which is the key to realizing nanometer-scale resolution imaging.
(3) The wave-Fourier imaging algorithm based on the cross-domain image generation network combines the global feature capture of the Fourier transform with the local feature analysis of the wavelet transform, extracting image information across multiple dimensions and thereby markedly improving the resolution of the reconstructed image.
(4) Through deep learning, the invention effectively reduces noise during image reconstruction and improves the signal-to-noise ratio of the image, which is particularly important for imaging under low-light or high-noise conditions.
(5) Parallel computing framework: parallel processing at both the hardware and algorithm levels markedly improves the speed of image processing.
Drawings
FIG. 1 is a flowchart of an ultra-micro target imaging method according to the second embodiment of the invention;
FIG. 2 is a schematic diagram of a full-frequency light source and light guide lens assembly according to the second embodiment of the invention;
FIG. 3 is a schematic structural diagram of an ultra-micro target imaging apparatus according to the second embodiment of the invention;
FIG. 4 is a diagram of a network architecture according to the second embodiment of the invention;
FIG. 5 is a flowchart of the wave-Fourier imaging algorithm of the cross-domain image generation network according to the second embodiment of the invention;
FIG. 6 is a block diagram of an ultra-micro target imaging system according to an embodiment of the invention.
Reference numerals:
1. laser diode; 2. collimating lens; 3. polarizer; 4. aspherical mirror; 5. annular laser; 6. first output lens group; 7. second output lens group; 8. third output lens group; 9. fourth output lens group; 10. first plano-convex lens; 11. relay lens group; 12. second plano-convex lens; 13. full-frequency illumination light source; 14. light guide; 15. observation chip; 16. beam splitter; 17. bidirectional frequency divider; 18. CCD; 19. wave-Fourier imaging algorithm based on the cross-domain image generation network; 20. memory; 30. processor.
Detailed Description
The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments are shown, for the purpose of illustrating the invention, but the scope of the invention is not limited to the specific embodiments shown.
Embodiments of the invention are described in detail below with reference to the attached drawings, but the invention can be implemented in a number of different ways, which are defined and covered by the claims.
Example 1
The method for imaging the ultra-micro target provided by the embodiment of the invention comprises the following steps:
step S101, receiving a full-frequency band reflected light field reflected after the full-frequency light source irradiates the ultra-micro target, wherein the ultra-micro target is a chip.
Step S102, high-frequency and low-frequency switching filtering is carried out on reflected light in the full-frequency band reflected light field, and light intensity information of the filtered reflected light is collected.
And step S103, performing Fourier transform and wavelet transform on the light intensity information respectively to obtain a global image and a local image.
Step S104, extracting the global features of the global image and extracting the local features of the local image.
Step S105, fusing the global features and the local features, and obtaining an enhanced image of the ultrafine target according to the fused features.
According to the ultra-micro target imaging method provided by this embodiment of the invention, a full-band reflected light field, reflected after a full-frequency light source irradiates the ultra-micro target, is received; the reflected light undergoes high-frequency and low-frequency switching filtering, and the light intensity information of the filtered light is collected; Fourier transform and wavelet transform are applied to the intensity information to obtain a global image and a local image, respectively; global and local features are extracted and fused; and the enhanced image of the ultra-micro target is obtained from the fused features. This solves the technical problem of the low imaging precision of existing ultra-micro targets, achieves ultra-high resolution, strong noise immunity, and high imaging speed, and effectively solves the chip imaging problem in the eutectic die-bonding process, while the overall system is simple in structure, low in cost, and easy to maintain. In addition, this embodiment replaces conventional optical imaging with a novel imaging method and designs a complete super-resolution chip imaging system that obtains high-quality chip microstructure images at high speed, effectively resolving the imaging difficulties of the eutectic die-bonding process.
Example two
In the eutectic die-bonding process, chips are often defective, or the die bonder does not operate according to the preset process, causing unnecessary production loss and raw material waste. It is therefore critical to design a super-resolution chip imaging system that provides clear, observable images for questions arising during bonding, such as whether the relative position of the chip and the wafer has deviated and how far the solder has liquefied.
Conventional optical imaging systems struggle to image details such as chips and their small defects. Limited by the optical diffraction limit, a conventional optical microscope can hardly achieve nanometer-scale high-resolution imaging; scanning tunneling microscopes and atomic force microscopes based on scanning probe technology can achieve sub-nanometer resolution, but their small scanning range and low imaging speed make them unsuitable for large-scale chip imaging inspection. Developing a novel super-resolution chip imaging technology has therefore become a key technical problem.
Referring to fig. 1, a method for implementing ultra-micro target imaging according to an embodiment of the present invention mainly includes:
(1) Full-frequency illumination light source:
Specifically, referring to fig. 2 and 3, the system designs the full-frequency illumination light source 13 according to the strong-coherence requirement of the illumination light. The laser diode 1 serves as the pump source; the collimating lens 2 adjusts the divergence angle and beam quality of the beam, converting the divergent beam into parallel light; and the polarizer 3 selectively controls the polarization state of the polarized light. The aspherical mirror 4 then acts as an annular-laser generator, converting the Gaussian beam into an annular beam; the annular laser 5 has a larger numerical aperture and a smaller diffraction limit, so diffraction loss can be suppressed and super-resolution focusing and imaging realized. Finally, the illumination area is expanded by the first output lens group 6, second output lens group 7, third output lens group 8, and fourth output lens group 9 to meet the illumination requirement of the super-resolution imaging system; for the lens group design and the actual optical path structure, refer to fig. 2.
(2) Light guide:
the design of the light guide 14 in this embodiment refers to fig. 2 and 3, and since the light intensity and coherence can be affected by attenuation and degradation when the laser propagates in the scattering medium, and the depth network hybrid optimization algorithm reconstruction requires small transmission loss of the diffraction light field spectrum image, and the light wave information is real and reliable, the system designs the light guide 14 independently to ensure the quality of the reconstructed image. The light is converged by the first plano-convex lens 10, the relay lens group 11 is composed of a series of cylindrical lenses, so that the light is converged at the focus of the second plano-convex lens 12, and finally the parallel light is converged by the second plano-convex lens 12 and emitted, so that higher light transmission efficiency and coherence can be ensured, the quality of a reconstructed image is obviously improved, and the design of the lens group and the actual light path structure are as shown in fig. 2.
(3) Modeling the reflection and reflection light field functions:
In this embodiment, the light emitted by the full-frequency illumination light source 13 passes through the light guide, irradiates the observation chip 15, and is reflected to form a reflected light field. The reflected light field is modeled below so that its description is more comprehensive and finer physical characteristics are captured. Let R(r, λ, t) denote the reflected light field function. The model accounts for possible instability of the illumination light source during the eutectic die-bonding process and for the precise imaging requirements of chip-surface circuit patterns, transistors, and other micro-components whose sizes and shapes are typically at the micrometer or nanometer scale; it enhances edge-feature texture, enables multi-scale analysis, strengthens subsequent image processing, and enriches the neural network training data. The specific model is:

$$R_1(r,t)=\int_{\lambda_{min}}^{\lambda_{max}} R_i(r,\theta_1,\lambda)\,O(r,\theta_1,\theta_2,\lambda)\,e^{i(\omega t-k_r\cdot r)}\,d\lambda \tag{1}$$

where $R_1(r,t)$ denotes the full-band reflected light field model at time $t$ and position vector $r$; $R(r,\theta_1,\theta_2,\lambda)=R_i(r,\theta_1,\lambda)\,O(r,\theta_1,\theta_2,\lambda)$ denotes the reflected light field for position vector $r$, incident angle $\theta_1$, reflection angle $\theta_2$, and light wavelength $\lambda$; $R_i(r,\theta_1,\lambda)$ denotes the incident light field for position vector $r$, incident angle $\theta_1$, and wavelength $\lambda$; $O(r,\theta_1,\theta_2,\lambda)$ denotes the modulation function of the imaged object for position vector $r$, incident angle $\theta_1$, reflection angle $\theta_2$, and wavelength $\lambda$; $[\lambda_{min},\lambda_{max}]$ is the wavelength range of the light; $\omega$ denotes the angular frequency of the light; $k_r$ denotes the wave vector corresponding to position vector $r$; and $i$ denotes the imaginary unit.
In this embodiment, such reflected-field modeling enhances imaging capability and captures the micro-features of the chip more accurately; in high-resolution imaging in particular, the enhanced edge-feature texture and multi-scale analysis provide richer information for subsequent image processing, which is important for improving final image quality. The comprehensiveness of the model also lets the imaging system adapt to different imaging requirements and conditions, especially in the complex and demanding field of chip imaging.
(4) Beam splitter and bidirectional frequency divider:
Referring to fig. 3, the beam splitter 16 splits the reflected light into two beams, both of which pass through the bidirectional frequency divider 17, which filters the beams. It should be noted that this is not simple frequency screening: the bidirectional filter uses 500 Hz as the boundary to realize the switching between texture light and structured light, where the texture light carries the high-frequency component of the beam and the structured light the low-frequency component. The switching of the bidirectional filter is simultaneous, so that the cross-domain image generation network (XGSN) in the deep wave-Fourier imaging algorithm achieves a better image generation effect. The modulation effect of the bidirectional frequency divider can be expressed as:

$$R_2(r,t) = M(r,t) \cdot R_1(r,t) \tag{2}$$

where $M(r,t)$ separates the texture light and the structured light: at time $t_0$ the bidirectional frequency divider is a high-frequency filter that separates the texture component of the reflected light, and at $t_0 + \Delta t$ it is a low-frequency filter that separates the low-frequency component, i.e., the structural features.
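The switching behaviour can be sketched as a pair of Butterworth filters around the 500 Hz boundary; the filter order and the detector sampling rate below are assumptions, since the text does not specify how the divider is realized.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 10_000   # assumed detector sampling rate (Hz)
F_CUT = 500   # the divider's high/low boundary given in the description (Hz)

SOS_HIGH = butter(4, F_CUT, btype="highpass", fs=FS, output="sos")
SOS_LOW = butter(4, F_CUT, btype="lowpass", fs=FS, output="sos")

def bidirectional_divide(r1: np.ndarray, at_t0: bool) -> np.ndarray:
    """R2 = M * R1: at t0, high-pass (texture); at t0 + dt, low-pass (structure)."""
    sos = SOS_HIGH if at_t0 else SOS_LOW
    return sosfilt(sos, r1)

signal = np.random.randn(4096)
texture = bidirectional_divide(signal, at_t0=True)     # high-frequency component
structure = bidirectional_divide(signal, at_t0=False)  # low-frequency component
```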
(5) CCD light field detection:
As shown in fig. 3, the CCD 18 captures the light-field intensity information. When photons strike the surface of the CCD sensor, they are absorbed by its semiconductor material and, through the photoelectric effect, generate electrons (the energy of a photon being inversely proportional to its wavelength). The electrons in each pixel (x, y) gradually accumulate into a charge whose amount is proportional to the intensity of the incident light; brighter areas generate more charge.
By transferring and reading these charges row by row or column by column, a charge image can be obtained reflecting the intensity distribution of the reflected light field. The output of the CCD sensor is an image of the charge, typically represented as a two-dimensional array. Each array element corresponds to a pixel whose value represents the intensity of the light incident at that location, the overall detected reflected light field intensity being denoted Y.
(6) Wave-Fourier imaging algorithm based on the cross-domain image generation network:
Existing imaging algorithms, particularly in super-resolution imaging, typically rely on structured-light illumination, saturated-laser techniques, or post-processing-based image reconstruction. These techniques aim to improve resolution, but they are severely limited in processing speed, image quality (especially in noisy environments), and adaptability to complex scenes; existing tiny-object imaging techniques are, more precisely, picture-sharpening techniques, and cannot meet the requirement of real-time monitoring of the eutectic die-bonding process.
The wave-Fourier imaging algorithm 19 based on the cross-domain image generation network provided by this embodiment optimizes a deep learning model for the specific task of eutectic die-bonding; by continuously training and adjusting parameters during training, it can complete complex image processing tasks in a short time in actual use. Such models are typically trained to process specific types of data quickly and efficiently. The algorithm adopts parallelized feature extraction: the Fourier transform and the wavelet transform process image data in parallel, and the bidirectional frequency divider realizes isolated learning of texture information and structural information, improving the overall processing speed. This parallel processing greatly improves the operating efficiency of the algorithm. The wave-Fourier imaging algorithm of this embodiment mainly comprises:
and I, carrying out Fourier transform and wavelet transform on the light intensity information:
the parameters of the reflected light field can be initialized by the formulas (1) (2), wherein lambda min =400nm,λ max Other parameters can be initialized to 760nm, and brought into equations (1) (2) to obtain the initial light field R 2 (r,t)。
And carrying out Fourier transform on the beam intensity information processed by the CCD to obtain global image information. The light intensity information of the other processed light beam is firstly divided into n partial light intensity information graphs according to the region, the n partial light intensity information graphs are respectively processed, and one of the n partial light intensity information graphs is taken as an example, continuous Wavelet Transform (CWT) is firstly carried out to serve as partial accurate image information.
Wherein Y is 1 Representing Fourier global image information, Y 2 Representing wavelet local accurate information, a is a scale parameter for controlling the expansion and contraction of a wavelet function, b is a translation parameter, and by changing b, the characteristics of signals at different positions can be analyzed, P (t) is a wavelet basis function, and P * (t) is the complex conjugate of P (t), which is in the form:
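A minimal NumPy/PyWavelets sketch of this step: one FFT for the global information Y1 and one CWT per sub-region for Y2. The row-wise region split, the 16 scales, and the Morlet basis are assumptions.

```python
import numpy as np
import pywt

def branch_inputs(intensity: np.ndarray, n: int = 4):
    """Compute Y1 (Fourier global information) and the n regional CWTs (Y2)."""
    y1 = np.fft.fft2(intensity)                        # global image information
    regions = np.array_split(intensity, n, axis=0)     # n local intensity maps
    scales = np.arange(1, 17)
    y2 = [pywt.cwt(region.ravel(), scales, "morl")[0] for region in regions]
    return y1, y2

y1, y2 = branch_inputs(np.random.rand(64, 64))
```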
II. Cross-domain image generation network:
The cross-domain image generation network is divided into two branches: a global-image branch, the Fourier branch, and a local-image branch, the wavelet branch. The Fourier branch captures the global features of the image using a multi-level GCN, while the wavelet branch extracts the accurate features of each part. After feature extraction, the global features and the local accurate features are fed into a feature fusion module, and the enhanced image is finally output through an inverse Fourier transform. The Fourier and wavelet branches are described in detail below.
During network training, the Fourier branch takes as input a eutectic die-bonding image of sufficiently high resolution. Assuming the original image is 256x256 pixels, the Fourier transform yields 256x256 complex values, each representing a specific frequency component; these complex values serve as the initial feature information. To gather information about surrounding pixel points from the discrete points, a multi-level graph convolution feature-extraction scheme progressively extracts the features of the global image, i.e., the spatial correlation of the image. The features extracted by the $j$-th-level feature extractor are denoted $G_j$, with:

$$G_0 = Y_1 \tag{5}$$

where the number of levels is $j$; $M$ denotes the number of pixel points in the global image; $G_j$ denotes the feature vector output by the $j$-th-level graph convolution network; and $G_0$, the input of the multi-level graph convolution network, is the Fourier-transformed global image $Y_1$, in which each result of the Fourier transform (i.e., a specific frequency component) is treated as a node, so each node represents one frequency component of the image. $g_k^j$ denotes the feature of the $k$-th point in the feature vector output by the $j$-th-level network. To compute the feature $g_k^{j+1}$, a local region $\Omega_k^j$ with $K$ elements is created around $g_k^j$ via the feature distances:

$$d_{k,m}^j = \left\| g_k^j - g_m^j \right\|_2^2 \tag{6}$$

where $g_k^j$ and $g_m^j$ denote the features of the $k$-th and $m$-th points in the feature vector $G_j$, $d_{k,m}^j$ denotes their feature distance, $\|\cdot\|_2^2$ denotes the squared two-norm, and $1 \le k \le M$, $1 \le m \le M$. All frequency components are traversed; for the $k$-th point, the $K$ elements with the smallest distances $d_{k,m}^j$, other than $g_k^j$ itself, form $\Omega_k^j$, and the new feature $g_k^{j+1}$ is then computed as:

$$g_k^{j+1} = \text{Max-Pool}_{\,n \in \Omega_k^j}\; \sigma\!\left(w \cdot g_n^j\right) \tag{7}$$

where $g_n^j$ denotes the feature of the $n$-th point in the feature vector output by the $j$-th-level network, with $1 \le k \le M$, $1 \le K \le M$, and $1 \le n \le K$; $\sigma$ denotes a nonlinear activation function; $w$ is a learnable parameter of the multilayer perceptron; and Max-Pool is the max-pooling operation.
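A compact PyTorch sketch of one level of this scheme, treating each frequency component as a node: neighbourhoods are chosen by the squared-L2 feature distance of formula (6), and formula (7) is realized as a shared linear map, activation, and max-pooling over each neighbourhood. The sizes, the number of levels, and the plain concatenation feeding the final MLP are assumptions.

```python
import torch

def gcn_level(g: torch.Tensor, w: torch.nn.Linear, k: int = 8) -> torch.Tensor:
    """One GCN level over (M, d) node features, one node per frequency component."""
    dist = torch.cdist(g, g) ** 2                          # d_km = ||g_k - g_m||_2^2, eq. (6)
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]   # K closest points, excluding self
    neighbours = g[knn]                                    # (M, K, d) local regions Omega
    return torch.relu(w(neighbours)).max(dim=1).values     # sigma, then Max-Pool, eq. (7)

# Two illustrative levels on M = 256 frequency-component nodes.
g0 = torch.randn(256, 16)                                  # assumed 16-dim initial features
w1, w2 = torch.nn.Linear(16, 32), torch.nn.Linear(32, 32)
g1 = gcn_level(g0, w1)
g2 = gcn_level(g1, w2)
global_feature = torch.cat([g1, g2], dim=1)  # concatenated levels feeding the MLP of eq. (8)
```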
The quality of feature extraction directly affects the quality of the reconstructed image. Existing frequency-domain feature extraction commonly takes one of two forms: directly using the Fourier transform result, or processing the frequency-domain data with a fully connected network; but such networks cannot effectively model the relations between frequency-domain points. These methods have the following shortcomings: (1) the correlation between frequency-domain information cannot be modeled, and local features are insufficient; (2) the global structural information of the image is lost, giving insufficient support to reconstruction; (3) the expressiveness and hierarchy of the extracted features are weak; (4) the final features typically yield only a limited improvement in image quality.
The feature extraction method proposed in this embodiment re-establishes the correlation between the points of the frequency domain through the multi-level graph convolution network of formulas (5), (6), and (7), yielding more abstract, higher-order feature expressions and thereby extracting richer global image features, which is a highly innovative approach. Other existing feature extraction structures are not adopted here because most of them target spatial-domain information, whereas this embodiment operates in the frequency domain; the multi-level network iteratively aggregates surrounding information and fuses the global frequency-domain structure, remedying the shortcomings above. Compared with other feature extraction methods, the method of this embodiment therefore achieves better results and provides stronger support for subsequent image reconstruction. The finally output frequency-domain features fully fuse the global information and preserve the overall structure and details of the image, which is an important value of this design.
The finally extracted Fourier feature is $\hat{G}$:

$$\hat{G} = \text{MLP}(G_1, G_2, \ldots, G_j) \tag{8}$$

where $G_1, G_2, G_3, \ldots, G_j$ denote the feature information extracted by the 1st-, 2nd-, 3rd-, ..., $j$-th-level networks, and MLP denotes the perceptual function in the GCN feature-extraction module, which takes the feature vectors as input and outputs the extracted feature $\hat{G}$ through multi-layer transfer and nonlinear transformation. The complete flow is shown in the Fourier-branch feature-extraction part of fig. 5; physically this part consists of $j$ GCN modules, each functioning as in formulas (5), (6), and (7), with the $j$-th module outputting the final feature $\hat{G}$.
For the wavelet branch, the wavelet-transformed $Y_2$ likewise serves as the initial input. The series of wavelet-transformed local information maps share the same convolution kernels across the convolution network layers, and convolution feature extraction proceeds as:

$$F^{(l)} = \text{ReLU}\!\left(W^{(l)} * F^{(l-1)} + b^{(l)}\right) \tag{9}$$

where $*$ denotes the convolution operation; $W^{(l)}$ and $b^{(l)}$ denote the weights and biases of the $l$-th convolution layer; ReLU denotes the activation function; $F^{(l)}$ is the output of the $l$-th convolution layer; and $F^{(l-1)}$ is the output of the $(l-1)$-th convolution layer, with the initial $F^{(0)}$ being the continuous wavelet transform result $Y_2$ of formula (3). The full convolution-layer operation is expressed as:

$$F^{(l)} = \text{ReLU}\!\left(\text{Conv}\!\left(F^{(l-1)};\, W^{(l)}, b^{(l)}\right)\right) \tag{10}$$

where Conv denotes the convolution operation and ReLU the activation function. The wavelet feature extraction of one local map is completed according to formula (4); processing all n local images yields n two-dimensional features.
It should be noted that because each image is a local image, and the complexity of the two-dimensional features keeps decreasing during feature extraction, the repeated wavelet feature extraction over multiple images does not affect imaging speed: the main time cost lies in the training stage, and once the network parameters are tuned, the algorithm complexity in the practical application stage is greatly reduced. The output sizes of convolution layers 1 through 6 are set to 8-8-24-24-16-16; the specific network architecture is shown in fig. 4, and the complete flow in the wavelet-branch part of fig. 5. A sketch of this branch follows.
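The sketch below uses the stated 8-8-24-24-16-16 channel plan; the 3x3 kernels, the padding, and the single input channel are assumptions.

```python
import torch
import torch.nn as nn

class WaveletBranch(nn.Module):
    """Six convolution layers with the 8-8-24-24-16-16 output channels
    given in the description; kernel size and input channels are assumed."""
    def __init__(self):
        super().__init__()
        channels = [1, 8, 8, 24, 24, 16, 16]
        self.layers = nn.ModuleList(
            nn.Conv2d(channels[i], channels[i + 1], kernel_size=3, padding=1)
            for i in range(6)
        )

    def forward(self, f0: torch.Tensor) -> torch.Tensor:
        f = f0                              # F(0): the CWT result Y2 of formula (3)
        for conv in self.layers:
            f = torch.relu(conv(f))         # F(l) = ReLU(W(l) * F(l-1) + b(l))
        return f                            # local features of one region

branch = WaveletBranch()                    # shared weights across all n regions
local_features = [branch(torch.randn(1, 1, 64, 64)) for _ in range(4)]
```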
Finally, based on the original image $S$, the Fourier branch and the wavelet branch are feature-fused and an inverse Fourier transform is applied to obtain the final enhanced image $\hat{S}$. During training, a loss function measuring the quality of the generated image is introduced, yielding the loss $\mathcal{L}$. In the initial training stage, when the loss is too large, each parameter is adjusted:

$$\hat{\Gamma} = \Gamma - \eta\, \frac{\partial \mathcal{L}}{\partial \Gamma}$$

where $\hat{\Gamma}$ is the parameter after adjustment; $\Gamma$ is the parameter before adjustment, including the parameters of the light-field modeling process and the network-structure parameters of the Fourier and wavelet branches; and $\eta$ is the learning rate, set in this algorithm to $6.33 \times 10^{-2}$. Training iterates until the loss $\mathcal{L}$ falls below $10^{-5}$, at which point the enhanced image can be considered to approximate the original image; the parameters obtained are the final network parameters, and training is complete. The input image is then changed to a truly acquired image for image generation.
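A minimal sketch of this training loop under the stated learning rate and stopping threshold; the L2 loss against the original image S is an assumption, since the loss formula itself is not reproduced in the text.

```python
import torch

def train(net: torch.nn.Module, batches, lr: float = 6.33e-2, tol: float = 1e-5):
    """Iterate the update (new params = params - lr * grad) until loss < tol."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for x, s in batches:                           # s: the original image S
        loss = torch.mean((net(x) - s) ** 2)       # assumed L2 distance to S
        if loss.item() < tol:
            break                                  # training is considered complete
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```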
In summary, the advantages of the wave-Fourier imaging algorithm based on the cross-domain image generation network in improving imaging speed come mainly from its efficient parallel processing capability, algorithm optimization, and use of modern hardware acceleration. These features make it particularly useful for applications requiring fast, high-quality imaging in eutectic die-bonding.
The embodiment of the invention designs a complete super-resolution imaging method: a full-frequency illuminator and a light guide provide strongly coherent illumination light; a reflected-light-field function model comprehensively describes the light field information; a beam splitter and a bidirectional frequency divider process the reflected light field to extract structured light and texture light; the CCD captures the light field intensity information and converts it into two-dimensional intensity distribution data; and a deep-learning-based wave-Fourier imaging algorithm, comprising the Fourier transform, the wavelet transform, and the cross-domain image generation network, processes the intensity data to generate a super-resolution image. The cross-domain image generation network consists of a Fourier branch and a wavelet branch that extract global and local image information respectively, with the enhanced image finally output after feature fusion. The method offers ultra-high resolution, strong noise immunity, and high imaging speed, effectively solving the chip imaging problem in the eutectic die-bonding process, and the system is simple in structure, low in cost, and easy to maintain.
The embodiment of the invention replaces conventional optical imaging with a novel imaging method and designs a complete super-resolution chip imaging system that obtains high-quality chip microstructure images at high speed, effectively resolving the imaging difficulties of the eutectic die-bonding process.
Referring to fig. 6, an ultra-micro target imaging system according to an embodiment of the invention includes a memory 20, a processor 30, and a computer program stored in the memory 20 and executable on the processor 30, wherein the processor 30 implements the steps of the ultra-micro target imaging method of the embodiment when executing the computer program.
For the specific working process and principle of the ultra-micro target imaging system of this embodiment, refer to the working process and principle of the ultra-micro target imaging method described above.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method of ultra-micro target imaging, the method comprising:
receiving a full-band reflected light field reflected after a full-frequency light source irradiates an ultra-micro target, wherein the ultra-micro target is a chip;
performing high-frequency and low-frequency switching filtering on the reflected light in the full-band reflected light field, and collecting light intensity information of the filtered reflected light;
performing Fourier transform and wavelet transform on the light intensity information, respectively, to obtain a global image and a local image;
extracting global features of the global image and local features of the local image; and
fusing the global features and the local features, and obtaining an enhanced image of the ultra-micro target according to the fused features.
2. The method of claim 1, wherein receiving the full-band reflected light field reflected after the full-frequency light source irradiates the ultra-micro target comprises:
receiving the light beam reflected through the light guide after the full-frequency illumination light source irradiates the ultra-micro target;
establishing a reflected light field model according to the reflected light beam, the reflected light field model being:

$$R_1(r,t)=\int_{\lambda_{min}}^{\lambda_{max}} R_i(r,\theta_1,\lambda)\,O(r,\theta_1,\theta_2,\lambda)\,e^{i(\omega t-k_r\cdot r)}\,d\lambda$$

where $R_1(r,t)$ denotes the full-band reflected light field model at time $t$ and position vector $r$; $R(r,\theta_1,\theta_2,\lambda)=R_i(r,\theta_1,\lambda)\,O(r,\theta_1,\theta_2,\lambda)$ denotes the reflected light field for position vector $r$, incident angle $\theta_1$, reflection angle $\theta_2$, and light wavelength $\lambda$; $R_i(r,\theta_1,\lambda)$ denotes the incident light field for position vector $r$, incident angle $\theta_1$, and wavelength $\lambda$; $O(r,\theta_1,\theta_2,\lambda)$ denotes the modulation function of the imaged object for position vector $r$, incident angle $\theta_1$, reflection angle $\theta_2$, and wavelength $\lambda$; $[\lambda_{min},\lambda_{max}]$ is the wavelength range of the light; $\omega$ denotes the angular frequency of the light; $k_r$ denotes the wave vector corresponding to position vector $r$; and $i$ denotes the imaginary unit; and
obtaining a full-band reflected light field according to the reflected light field model.
3. The method of claim 1 or 2, wherein extracting global features of the global image comprises:
constructing a multi-level graph convolution network to extract the global features of the global image, where the feature-vector update of each level of the graph convolution network is:

$$G_0 = Y_1, \qquad g_k^{j+1} = \text{Max-Pool}_{\,n \in \Omega_k^j}\; \sigma\!\left(w \cdot g_n^j\right)$$

where $G_j$ denotes the feature vector output by the $j$-th-level graph convolution network; $M$ denotes the number of pixel points in the global image; $G_0$ denotes the input of the multi-level graph convolution network; $Y_1$ denotes the global image; $g_k^j$ and $g_k^{j+1}$ denote the features of the $k$-th point in the feature vectors output by the $j$-th- and $(j+1)$-th-level graph convolution networks, respectively; $\Omega_k^j$ denotes the local region with $K$ elements created around $g_k^j$ when computing the $j$-th-level output; $g_n^j$ denotes the feature of the $n$-th point in the feature vector output by the $j$-th-level network, with $1 \le k \le M$, $1 \le K \le M$, and $1 \le n \le K$; $\sigma$ denotes a nonlinear activation function; $w$ is a learnable parameter of the multilayer perceptron; and Max-Pool is the max-pooling operation; and
obtaining the global features of the global image from the feature vectors output by each level of the graph convolution network:

$$\hat{G} = \text{MLP}(G_1, G_2, \ldots, G_j)$$

where $\hat{G}$ denotes the global features; $G_1$, $G_2$, and $G_j$ denote the feature vectors output by the 1st-, 2nd-, and $j$-th-level graph convolution networks, respectively; and MLP denotes the perceptual function of the graph convolution network.
4. The method of claim 3, wherein creating the local region $\Omega_k^j$ with $K$ elements around $g_k^j$ when computing the output of the $j$-th-level graph convolution network comprises:
calculating the feature distance between the $k$-th point and every other point in the feature vector output by the $j$-th-level graph convolution network:

$$d_{k,m}^j = \left\| g_k^j - g_m^j \right\|_2^2$$

where $g_k^j$ and $g_m^j$ denote the features of the $k$-th and $m$-th points in the feature vector $G_j$ output by the $j$-th-level graph convolution network, $d_{k,m}^j$ denotes the feature distance between the $k$-th and $m$-th points, $\|\cdot\|_2^2$ denotes the squared two-norm, and $1 \le k \le M$, $1 \le m \le M$; and
selecting the $K$ elements whose distance is smaller than a preset threshold as the local region $\Omega_k^j$ with $K$ elements created around $g_k^j$ when computing the $j$-th-level output, with $1 < K < M$.
5. The method of claim 4, wherein extracting local features of the local image comprises:
performing a convolution operation up to a preset layer threshold on the local image:

$$F^{(l)} = \text{ReLU}\!\left(W^{(l)} * F^{(l-1)} + b^{(l)}\right)$$

where $F^{(l)}$ and $F^{(l-1)}$ denote the local output features of the $l$-th and $(l-1)$-th convolution layers, respectively; $W^{(l)}$ and $b^{(l)}$ denote the weights and biases of the $l$-th convolution layer; ReLU denotes the activation function; $N_{th}$ denotes the preset layer threshold; and $0 \le l \le N_{th}$; and
taking the local output feature of the last convolution layer as the local feature of the local image.
6. The method of claim 5, wherein fusing the global features and the local features and obtaining an enhanced image of the ultra-micro target based on the fused features comprises:
performing feature fusion on the global features and the local features;
performing an inverse Fourier transform on the fused features; and
obtaining an enhanced image of the ultra-micro target according to the result of the inverse Fourier transform.
7. The method of claim 6, wherein the full-frequency light source comprises, connected in sequence, a laser diode, a collimating lens, a polarizer, an aspherical mirror, an annular laser, and first, second, third, and fourth output lens groups, wherein:
the laser diode is used as the pump source;
the collimating lens is used for adjusting the divergence angle and beam quality of the light beam and converting the divergent beam into parallel light;
the polarizer is used for selectively controlling the polarization state of the polarized light;
the aspherical mirror is used for converting the Gaussian beam into an annular beam;
the annular laser is used for suppressing diffraction loss and realizing super-resolution focusing and imaging; and
the first, second, third, and fourth output lens groups are used for expanding the irradiation area.
8. The method of claim 7, wherein the light guide comprises, connected in sequence, a first plano-convex lens, a relay lens group, and a second plano-convex lens, wherein:
the first plano-convex lens is used for converging the light output by the full-frequency light source;
the relay lens group is composed of a series of cylindrical lenses, so that the light converges at the focus of the second plano-convex lens; and
the second plano-convex lens is used for converging the parallel light and emitting it.
9. An ultra-micro target imaging system, the system comprising:
a memory (20), a processor (30), and a computer program stored on the memory (20) and executable on the processor (30), characterized in that the processor (30) implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
Application CN202311603387.5A, filed 2023-11-28: Ultra-micro target imaging method and system (pending)
Publication: CN117670698A, published 2024-03-08
Family ID: 90069203
Country: CN (China)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination