CN112697751B - Multi-angle illumination lens-free imaging method, system and device - Google Patents

Multi-angle illumination lens-free imaging method, system and device Download PDF

Info

Publication number
CN112697751B
CN112697751B CN202011418841.6A CN202011418841A CN112697751B CN 112697751 B CN112697751 B CN 112697751B CN 202011418841 A CN202011418841 A CN 202011418841A CN 112697751 B CN112697751 B CN 112697751B
Authority
CN
China
Prior art keywords
function
imaging
target sample
illumination
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011418841.6A
Other languages
Chinese (zh)
Other versions
CN112697751A (en
Inventor
赵巨峰
吴小辉
崔光茫
毛海锋
张培伟
林彬彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202011418841.6A priority Critical patent/CN112697751B/en
Publication of CN112697751A publication Critical patent/CN112697751A/en
Application granted granted Critical
Publication of CN112697751B publication Critical patent/CN112697751B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/47Scattering, i.e. diffuse reflection
    • G01N21/4788Diffraction

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention discloses a multi-angle illumination lens-free imaging method, a system and a device, wherein the method comprises the following steps: s100, acquiring a plurality of diffraction patterns, wherein the diffraction patterns are images acquired by an image sensor when light waves with different illumination angles irradiate a sample surface through a diaphragm; s200, performing iterative reconstruction based on the diffraction pattern to obtain an imaging result, specifically, calculating relative displacement corresponding to each diffraction image; performing analog calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to a sample surface; calculating a current target estimation error, and updating a target sample function corresponding to a sample surface based on light intensity data and the current target estimation error; and repeating the steps until a preset iteration condition is reached, and outputting the updated target sample function as a corresponding imaging result. The invention carries out phase recovery and laminated imaging through the process of simulated imaging, and iterates, thereby improving the reconstruction resolution and optimizing the imaging result.

Description

Multi-angle illumination lens-free imaging method, system and device
Technical Field
The invention relates to the field of lens-free imaging, in particular to a multi-angle illumination lens-free imaging method, system and device.
Background
The lensless on-chip microscopy is a novel computational imaging technology, does not need focusing of an imaging lens, directly clings a target sample glass slide to a sensor to record an image, and realizes reconstruction of a clear image by combining a corresponding image recovery algorithm. Lensless microscopy has been widely accepted as an alternative and cost-effective solution for many applications, such as pathology, cell counting, etc., and the main advantages of lensless microscopy are its compact overall architecture, simplicity, and scalability in terms of spatial resolution and field of view, however, the effectiveness of lensless imaging relies on the high signal-to-noise ratio of the captured images, which is closely related to the measured signal-to-noise ratio, but in the optical systems of lensless microscopes, the high signal-to-noise ratio condition is difficult to guarantee.
The noise sources include photon noise, electronic noise from the image sensor, quantization noise generated by factor-to-analog conversion, and speckle noise of the coherent light source; in addition, some stained biological samples with high absorbance and low light transmittance will result in resulting images with low signal-to-noise ratios; if the recorded original image is corrupted by noise, the restored image results can be degraded by the generation of artifacts. In addition, in the lens-free imaging structure, since the sensor itself does not have the magnifying power, the imaging resolution is often low. Conventional lensless holographic imaging tends to use coherent light for illumination and to place the target slide as close as possible to the light source for sufficient magnification. However, this method has a fundamental contradiction between the magnification and the size of the field of view, and when the magnification becomes larger, the field of view becomes smaller, and vice versa. The former limits the ultimate resolution achievable by the system, while the latter determines the range over which it is imaged.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a multi-angle illumination lens-free imaging method, a system and a device capable of optimizing an imaging result.
In order to solve the technical problem, the invention is solved by the following technical scheme:
a multi-angle illumination lensless imaging method, comprising the steps of:
s100, obtaining a plurality of diffraction patterns, wherein the diffraction patterns are images collected by an image sensor when light waves with different illumination angles irradiate onto a sample surface through a diaphragm;
s200, carrying out iterative reconstruction based on the diffraction pattern to obtain an imaging result, and specifically comprising the following steps:
extracting a diffraction pattern corresponding to the light wave positioned at the optical axis as a reference pattern, and performing cross correlation on the reference pattern and each diffraction image to obtain relative displacement;
performing analog calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to a sample surface;
calculating a current target estimation error, and updating a target sample function corresponding to a sample surface based on light intensity data and the current target estimation error;
and repeating the steps until a preset iteration condition is reached, and acquiring and outputting the updated target sample function as a corresponding imaging result.
As an implementable embodiment:
obtaining corresponding plane waves based on the relative displacement simulation, wherein the plane waves correspond to the light waves one by one;
calculating and obtaining an illumination function based on the diaphragm function and the plane wave;
calculating and obtaining corresponding outlet wave estimation based on the illumination function and the target sample function to obtain first outlet wave estimation;
performing simulation calculation on the process of the first outlet wave estimation and the propagation to the image sensor to obtain a corresponding diffraction pattern estimation;
performing simulation calculation on the process of backward propagation of the diffraction pattern estimation to the sample surface based on a preset constraint condition to obtain a second exit wave estimation;
calculating a current target estimation error, updating the target sample function and the illumination function based on the first exit wave estimate, the second exit wave estimate, and the current target estimation error; and updating the diaphragm function based on the updated illumination function.
As an implementable embodiment:
calculating and obtaining a corresponding current target estimation error based on the second outlet wave estimation, the target sample function and the illumination function;
calculating to obtain a corresponding loss value based on the gradient of the current target estimation error relative to the target sample function;
calculating and obtaining a first updating weight and a second updating weight based on the loss value;
calculating to obtain a first adaptive adjustment step length based on the first update weight and a target sample function;
calculating to obtain a second self-adaptive adjusting step length based on the second updating weight and the illumination function;
updating the target sample function based on the first adaptive adjustment step length, the first outlet wave estimation, the second outlet wave estimation and the illumination function to obtain an updated target sample function;
updating the illumination function based on the second adaptive adjustment step length, the first outlet wave estimation, the second outlet wave estimation and the target sample function to obtain an updated illumination function; and updating the diaphragm function based on the updated illumination function.
As an implementable embodiment:
target sample function O obtained after nth iteration updatingn+1(r) is:
Figure BDA0002821351610000021
wherein, On(r) represents the target sample function for the nth iteration,
Figure BDA0002821351610000031
representing the illumination function corresponding to the jth light wave in the nth iteration,
Figure BDA0002821351610000032
representing a first exit wave estimate corresponding to the jth light wave in the nth iteration,
Figure BDA0002821351610000033
representing a second exit wave estimate corresponding to the jth lightwave in the nth iteration, representing a complex conjugate, aORepresenting a first adaptation step size;
illumination function obtained after nth iteration updating
Figure BDA0002821351610000034
Comprises the following steps:
Figure BDA0002821351610000035
wherein, apIndicating a second adaptation step size.
As an implementable embodiment:
the first adaptive adjustment step size is:
Figure BDA0002821351610000036
wherein,
Figure BDA0002821351610000037
a first update weight representing the nth iteration is calculated as:
Figure BDA0002821351610000038
the second adaptive adjustment step length is:
Figure BDA0002821351610000039
wherein,
Figure BDA00028213516100000310
a second update weight representing the nth iteration is calculated as:
Figure BDA00028213516100000311
above formula, enAnd (3) representing a corresponding loss value in the nth iteration, wherein eta is a constant.
As an implementable embodiment:
loss value e at nth iterationnThe calculation formula of (2) is as follows:
Figure BDA00028213516100000312
order:
Figure BDA0002821351610000041
above-mentioned [ Delta ] [ epsilon ]j(On(r)) is the gradient of the current target estimation error with respect to the target sample function.
As an implementable manner, before performing analog computation on the imaging process based on the relative displacement, the method further comprises a relative displacement correction step, and the specific steps are as follows:
calculating to obtain a corresponding correction value based on the corresponding diffraction pattern and the diffraction pattern estimation;
and correcting the relative displacement based on the correction value, and performing analog calculation on the imaging process based on the corrected relative displacement.
As an implementable embodiment:
when the iteration times reach a preset time threshold value or the convergence of the target sample function is judged based on the loss value, outputting the obtained updated target sample function as an imaging result;
the invention also provides a multi-angle illumination lens-free imaging system, which comprises:
the acquisition module is used for acquiring a plurality of diffraction patterns, wherein the diffraction patterns are images acquired by the image sensor when light waves with different illumination angles irradiate on the sample surface through the diaphragm;
the reconstruction module is used for carrying out iterative reconstruction based on the diffraction pattern to obtain an imaging result and comprises a phase recovery unit, a laminated imaging unit and an output unit;
the phase recovery unit is used for extracting a diffraction pattern corresponding to the light wave positioned at the optical axis as a reference pattern, and performing cross correlation on the reference pattern and each diffraction image to obtain relative displacement; the system is also used for carrying out simulation calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to the sample surface;
the laminated imaging unit is used for calculating a current target estimation error and updating a target sample function corresponding to a sample surface based on light intensity data and the current target estimation error;
and the reconstruction module is used for acquiring the updated target sample function as a corresponding imaging result and outputting the imaging result when a preset iteration condition is reached.
The invention also provides a multi-angle illumination lensless imaging device, which comprises a multi-angle luminous source, a diaphragm, a target glass slide and an image sensor which are arranged along the optical axis in sequence; and light beams generated by the multi-angle light emitting source pass through the diaphragm and then irradiate onto the target glass slide, and the image sensor performs imaging acquisition to obtain a plurality of corresponding diffraction patterns.
Due to the adoption of the technical scheme, the invention has the remarkable technical effects that:
according to the invention, through the design of multi-angle illumination, a plurality of diffraction images with low resolution can be obtained at each angle, and through the process of simulated imaging, phase recovery and laminated imaging are carried out on the obtained diffraction images to obtain corresponding imaging results;
according to the method, the target sample function is updated based on the current estimation error through a self-adaptive step length method, so that the problem that the iteration process falls into the local optimal solution can be effectively avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of a multi-angle illumination lensless imaging apparatus of the present invention;
fig. 2 is a schematic phase diagram of the imaging result, wherein the left diagram is a schematic phase diagram of the imaging result obtained in the first iteration, and the right diagram is a schematic phase diagram of the imaging result obtained in the 15 th iteration;
fig. 3 is a schematic amplitude diagram of the imaging result, wherein the left diagram is a schematic amplitude diagram of the imaging result obtained in the first iteration, and the right diagram is a schematic amplitude diagram of the imaging result obtained in the 15 th iteration.
In the figure:
1 denotes a multi-angle light emitting source, 2 denotes a diffuser, 3 denotes a diaphragm, 4 denotes a target slide, and 5 denotes an image sensor.
Detailed Description
The present invention will be described in further detail with reference to examples, which are illustrative of the present invention and are not to be construed as being limited thereto.
Due to the particle-duality of light, when light emitted from a light source having weak coherence passes through the diaphragm 3, a phenomenon in which the light travels away from a straight line, called diffraction of light, occurs. According to the different propagation distances of the diffracted light waves, the three regions can be divided into a Rayleigh-Sovifimi diffraction region, a Fresnel diffraction region and a Fraunhofer diffraction region, and one skilled in the art can select the diffraction formula of the corresponding region to express the complex amplitude of the corresponding light wave according to the propagation distance.
Embodiment 1, a multi-angle illumination lens-free imaging device, comprising a multi-angle light source 1, a diaphragm 3, a target glass slide 4 and an image sensor 5 sequentially arranged along an optical axis;
the distance d0 between the multi-angle luminous source 1 and the diaphragm 3 is 100-600mm, and the distance d1 between the diaphragm 3 and the target slide 4 and the distance d2 between the target slide 4 and the image sensor 5 are far smaller than d 0.
The multi-angle luminous source 1 is used for generating low-coherence light beams, so that the image sensor 5 obtains a plurality of diffraction patterns with low resolution, and the angles corresponding to the diffraction patterns are different, thereby facilitating the subsequent reconstruction based on the collected diffraction patterns to obtain an optimized imaging result;
in this embodiment, the multi-angle light emitting source 1 employs an LED matrix, the number of the obtained diffraction patterns is consistent with the number of LEDs in the LED matrix, and theoretically, the more the diffraction patterns employed in the reconstruction process are, the better the result is, but with the increase of the number of LEDs, the larger the measurement angle of the light beam generated by the peripheral LEDs to the target sample is, on the contrary, the poor imaging effect is caused, so that a person skilled in the art can set and appropriately use the LED matrix according to actual needs, for example, an LED matrix of 5 × 5 to 10 × 10 can be selected, and the distance between the LEDs can be, for example, 2cm to 5 cm.
The diaphragm 3 is used for filtering stray light to generate circular light waves, the light waves irradiate on a target glass slide 4 filled with a target sample, the light waves are imaged and collected by the image sensor 5, and the light waves are expressed by using an illumination function in the reconstruction process.
The radius of the micropore of the diaphragm 3 in the embodiment is 100-300 um.
The image sensor 5 may employ, for example, a Charge-coupled device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS).
Further, a diffuser 2 is arranged between the multi-angle luminous source 1 and the diaphragm 3;
when the light beam emitted by the multi-angle light emitting source 1 passes through the diffuser 2, a series of light waves with random angles are introduced into the light beam by the diffuser 2, and the imaging effect of the image can be further improved by adding the diffuser 2 in the embodiment.
Case (2):
the multi-angle illumination lens-free imaging device comprises a multi-angle luminous source 1, a diffuser 2, a diaphragm 3, a target glass slide 4 and an image sensor 5 which are sequentially arranged along an optical axis;
wherein the distance d0 between the multi-angle luminous source 1 and the diaphragm 3 is 300mm, the distance d1 between the diaphragm 3 and the target slide 4 is 5mm, and the distance d2 between the target slide 4 and the image sensor 5 is 0.5 mm.
The multi-angle light emitting source 1 is a 10 × 10 LED matrix, wherein the distance between the LEDs is 3.5 cm.
The radius of the micropore of the diaphragm 3 is 150 um;
the image sensor 5 employs a charge coupled element;
the distance d0 between the multi-angle light emission source 1 and the stop 3, the distance d1 between the stop 3 and the target slide 4, and the distance d2 between the target slide 4 and the image sensor 5 are shown.
After light beams emitted by each LED pass through the diffuser 2, stray light is filtered by the diaphragm 3 to form circular light waves; the light waves irradiate onto a target glass slide 4 filled with a target sample, and are imaged and collected by the image sensor 5, so that 100 diffraction patterns are obtained, wherein the diffraction patterns correspond to the LEDs one by one.
Aiming at the problem of contradiction between a field range and an imaging resolution, Feng et al utilize Ronchi gratings to realize Talbot grating illumination, use dark stripe light intensity of the gratings on an object plane as a support domain of the object plane, and restore reconstructed light intensity after iteration, but the method has higher requirements on samples and limited reconstruction effect. The Ozcan laboratory proposes a phase recovery algorithm for a multi-sample-sensor spacing, but the requirement on hardware is high, a mechanical displacement structure is required, and an iterative algorithm of the algorithm is easy to fall into a local optimal solution.
In the embodiment, through the design of the multi-angle illumination lens-free imaging device, a group of diffraction patterns with low resolution can be obtained through multi-angle illumination, so that the subsequent laminated imaging based on the diffraction patterns is facilitated, the quality of an imaging result is improved, and the contradiction between the field range and the resolution in the existing lens-free microscopic imaging technology can be solved on the premise of not needing a mechanical displacement structure.
Embodiment 2, a multi-angle illumination lens-free imaging method, comprising the steps of:
s100, obtaining a plurality of diffraction patterns, wherein the diffraction patterns are images collected by an image sensor 5 when light waves with different illumination angles irradiate onto a sample surface through a diaphragm 3;
in this embodiment, the diffraction pattern is the image collected by the lens-free imaging device described in embodiment 1.
And S200, carrying out iterative reconstruction based on the diffraction pattern to obtain an imaging result.
The method comprises the following specific steps:
s210, extracting a diffraction pattern corresponding to the light wave positioned at the optical axis as a reference pattern, and performing cross correlation on the reference pattern and each diffraction image to obtain relative displacement;
the optical axis is an optical axis of the lensless imaging device.
S220, performing analog calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to a sample surface;
s230, calculating a current target estimation error, and updating a target sample function corresponding to a sample surface based on the light intensity data and the current target estimation error;
the target sample function is a pixel matrix representing an imaging result, and the step is to perform laminated imaging based on the light intensity data to update the imaging result.
And S240, repeating the step S220 and the step S230 until a preset iteration condition is reached, and acquiring and outputting an updated target sample function as a corresponding imaging result.
The method for reconstructing based on the diffraction image in the embodiment includes two parts, namely phase recovery and stacked imaging, wherein the phase recovery is a relative displacement simulation imaging process obtained through calculation, and light intensity information transmitted to the sample surface is updated based on a preset constraint condition. And the stacked imaging superposes each updated diffraction pattern, and because the LEDs are at different positions, each diffraction pattern is partially overlapped and superposed together to form a complete image.
In the embodiment, a plurality of low-resolution diffraction patterns are obtained at each angle, illumination light at different angles is simulated to irradiate a target sample, and overlapping apertures are combined for reconstruction, so that the reconstruction resolution is effectively improved, the resolution of reconstruction can be improved and the convergence speed can be further improved by automatically adjusting the step length based on the current target estimation error, and the current target estimation error is a global error in the embodiment, so that the situation that a local optimal solution is trapped in an iteration process can be effectively avoided.
In step S210, the diffraction pattern measured by the LED corresponding to the central optical axis of the lens-less imaging device is selected as a reference pattern, and the reference pattern is cross-linked with other diffraction patterns, corresponding to the relative displacement of the diffraction pattern, i.e. the relative displacement Δ d between the reference pattern and the diffraction pattern generated by the jth LED in this embodimentjComprises the following steps:
Figure BDA0002821351610000071
where d represents the coordinate of the diffraction plane, j is the coordinate position of the LED array (representing the position of the diffraction pattern generated by the jth LED), Iref(d) Denotes a reference pattern, Ij(d + Δ d) represents the diffraction pattern produced by the jth LED.
Step S220 is used for implementing phase recovery, and includes the specific steps of:
s221, obtaining corresponding plane waves based on the relative displacement simulation, wherein the plane waves correspond to the light waves one by one;
s222, calculating to obtain an illumination function based on the diaphragm function and the plane wave;
s223, calculating and obtaining corresponding outlet wave estimation based on the illumination function and the target sample function, and obtaining first outlet wave estimation;
s224, carrying out simulation calculation on the process of the first outlet wave estimation transmitted to the image sensor 5 to obtain a corresponding diffraction pattern estimation;
s225, performing simulation calculation on the process that the diffraction pattern estimation is reversely propagated back to the sample surface based on a preset constraint condition to obtain a second outlet wave estimation;
the light intensity data includes the first exit wave estimate and the second exit wave estimate obtained by the above calculation, in this embodiment, the imaging process is simulated and calculated by the relative displacement to obtain the first exit wave estimate, and the first exit wave estimate is updated based on the preset constraint condition to obtain the second exit wave estimate.
In step S221, a plane wave corresponding to the LED is simulated based on the relative displacement, and a plane wave Γ corresponding to the jth LEDj(r) is:
Figure BDA0002821351610000081
where r is the coordinate position of the LED in real space and M is the image of the entire windowElemental value,. DELTA.djThe relative displacement corresponding to the jth LED.
Step S222 is to simulate a process of transmitting the plane wave to the diaphragm 3 and then irradiating the plane wave to a sample surface, so as to obtain a corresponding illumination function; the illumination function corresponding to the jth LED in the nth iteration is
Figure BDA0002821351610000082
Figure BDA0002821351610000083
Wherein n represents the number of iterations;
Figure BDA0002821351610000084
represents the propagation function of the diaphragm 3 to the sample face, i.e. the propagation function of the micro-holes in the diaphragm 3 to the target slide 4; a. then(r) a stop function representing the nth iteration;
in this embodiment, the diaphragm function represents rough estimates of the diffuser 2 and the diaphragm 3, and in the first iteration, the diaphragm function uses a preset initial diaphragm function, and in this embodiment, the initial diaphragm function uses an all-zero value having the same size as the diffraction pattern, otherwise, the diaphragm function updated in the last iteration is used, and the updating method is as described in step S238.
Propagation function in the present embodiment
Figure BDA0002821351610000085
The method adopts angular spectrum propagation, and comprises the following specific processes:
Input=F(Γj(r)An(r))
Output=Input*CTF
Figure BDA0002821351610000097
wherein F is a Fourier transform, F-1For inverse Fourier transform, Input is a function of the Fourier transform of a plane wave into the frequency domain, through the Coherent transfer function (Coherent T)The ransfer Function, CTF) to obtain the illumination Function of the sample plane in the frequency domain.
In step S223, the exit wave estimate when leaving the target sample is calculated and obtained by the product of the illumination function and the target sample function, so as to obtain a corresponding first exit wave estimate, and a first exit wave estimate corresponding to the jth LED at the nth iteration
Figure BDA0002821351610000091
Comprises the following steps:
Figure BDA0002821351610000092
wherein, On(r) represents a target sample function which is an updated target sample function for the last iterative reconstruction, and the target sample function is a preset initial target sample function when the iteration is performed for the first time.
Step S224 simulates a process of propagating the first exit wave estimate to the image sensor 5 for imaging, so as to obtain a corresponding diffraction image estimate;
in this embodiment, the diffraction image estimate is calculated based on angular spectrum propagation, and the diffraction image estimate corresponding to the jth LED at the nth iteration
Figure BDA0002821351610000093
Comprises the following steps:
Figure BDA0002821351610000094
where F is the fourier transform, h is the distance from the sample face to the image sensor 5, i.e., the distance from the target slide 4 to the image sensor 5 (d2), λ is the wavelength of the light, and r is the coordinate position of the LED in real space.
In step S225, the obtained diffraction image estimate is propagated back to the sample plane using a preset constraint condition, the first exit wave estimate is updated to obtain a corresponding second exit wave estimate, and the second exit wave estimate corresponding to the jth LED at the nth iteration is obtained
Figure BDA0002821351610000095
Comprises the following steps:
Figure BDA0002821351610000096
the preset constraint conditions in this embodiment are the constraint of angular spectrum propagation and the constraint of the initially acquired diffraction pattern.
As can be seen from the above, in the present embodiment, the phase is restored by switching the space domain and the frequency domain a plurality of times, which not only enables reconstruction of the complex amplitude distribution of the object on the focal plane, but also enables further improvement of the reconstruction resolution of the object.
Further, before the imaging process is simulated and calculated based on the relative displacement, the method also comprises a relative displacement correction step, and the specific steps are as follows:
calculating to obtain a corresponding correction value based on the corresponding diffraction pattern and the diffraction pattern estimation; correction value corresponding to jth LED in nth iteration
Figure BDA0002821351610000101
Comprises the following steps:
Figure BDA0002821351610000102
wherein, Ij(d) Representing the diffraction image corresponding to the jth LED,
Figure BDA0002821351610000103
diffraction pattern estimation of d position at nth iteration.
Correcting the relative displacement based on the correction value, performing analog calculation on the imaging process based on the corrected relative displacement, and performing analog imaging relative displacement during the (n + 1) th iteration
Figure BDA0002821351610000104
Is as follows.
Figure BDA0002821351610000105
That is, a corresponding correction value is estimated and generated based on the diffraction pattern at the current iteration, the relative displacement is updated based on the correction value, the updated relative displacement is adopted for imaging simulation at the next iteration, and the initial relative displacement is the relative displacement Δ d calculated and obtained in step S210j
Relative displacement Δ d obtained due to prior calculationjThe correlation is low, and because the diffraction pattern is smoothed due to the low coherence condition of the LED, there is a certain error, so the accuracy is improved by designing the correction value to be correlated with each other in this embodiment.
Step S230 is a step of stacked imaging, in this embodiment, the current target error is an error based on the global, and the step size is automatically adjusted according to the current target estimation error, so that the problem that the iterative process falls into the global optimal solution can be effectively avoided.
The method comprises the following specific steps:
s231, calculating to obtain a corresponding current target estimation error epsilon based on the second outlet wave estimation, the target sample function and the illumination function;
the error function is:
Figure BDA0002821351610000106
wherein,
Figure BDA0002821351610000107
representing a second exit wave estimate corresponding to the jth LED in the current iteration,
Figure BDA0002821351610000108
representing the illumination function, O, corresponding to the jth LED in the current iterationn(r) represents the target sample function in the current iteration (i.e., the updated target sample function in the last iteration).
S232, calculating to obtain a corresponding loss value based on the gradient of the current target estimation error relative to the target sample function;
loss value e at current iterationnThe calculation formula of (2) is as follows:
Figure BDA0002821351610000111
wherein, Delta epsilonj(On(r)) is the gradient of the current target estimation error with respect to the target sample function, which can be expressed as:
Figure BDA0002821351610000112
in the above formula, denotes a complex conjugate.
S233, calculating and obtaining a first updating weight and a second updating weight based on the loss value;
first update weight at current iteration
Figure BDA0002821351610000113
The calculation formula of (2) is as follows:
Figure BDA0002821351610000114
wherein η is a constant, and those skilled in the art can set the value of η according to actual needs, and the value of η in this embodiment is set to 0.0001;
that is, when the calculated loss value is close to convergence, the step size is made to be self-adaptive attenuation.
As above, the first update weight at the current iteration
Figure BDA0002821351610000115
The calculation formula of (2) is as follows:
Figure BDA0002821351610000116
the first update weight and the second update weight in the first iteration are both preset initial values, and those skilled in the art can set the initial values according to actual needs, where the initial values of the first update weight and the second update weight are both 0.001 in this embodiment.
S234, calculating and obtaining a first adaptive adjustment step length based on the first updating weight and the target sample function;
first self-adaptive adjustment step length a in current iterationOComprises the following steps:
Figure BDA0002821351610000117
s235, calculating to obtain a second self-adaptive adjusting step length based on the second updating weight and the illumination function;
second adaptive adjustment step length a in current iterationpComprises the following steps:
Figure BDA0002821351610000118
s236, updating the target sample function based on the first adaptive adjustment step length, the first outlet wave estimation, the second outlet wave estimation and the illumination function to obtain an updated target sample function;
target sample function O obtained after updating of current iteration (i.e. nth iteration)n+1(r) is:
Figure BDA0002821351610000121
wherein, On(r) represents the target sample function for the nth iteration,
Figure BDA0002821351610000122
representing the illumination function corresponding to the jth lightwave (i.e., the jth LED) in the nth iteration,
Figure BDA0002821351610000123
represents the first exit wave estimate corresponding to the jth lightwave (i.e., the jth LED) in the nth iteration,
Figure BDA0002821351610000124
represents the second exit wave estimate corresponding to the jth lightwave (i.e., jth LED) in the nth iteration, representing the complex conjugate, aoIndicating the first adaptation step size.
As can be seen from the above, the process of updating the target sample function is the process of performing the stack imaging based on the updated diffraction yuyang.
S237, updating the illumination function based on the second adaptive adjustment step size, the first exit wave estimation, the second exit wave estimation and the target sample function to obtain an updated illumination function;
the illumination function obtained after the current iteration (i.e. the nth iteration) is updated
Figure BDA0002821351610000125
Comprises the following steps:
Figure BDA0002821351610000126
and S238, updating the diaphragm function based on the updated illumination function.
In this embodiment, the updated illumination function is used to update the diaphragm function after removing the inclined plane wave, so that the corresponding illumination function is obtained by calculation based on the updated diaphragm function in the next iteration;
the updated diaphragm function A of the current iteration (i.e. the nth iteration)n+1(r) is:
Figure BDA0002821351610000127
the iteration condition in step S240 can be set by those skilled in the art according to actual needs, for example:
and when the iteration times reach a preset time threshold value or the convergence of the target sample function is judged based on the loss value, judging that the iteration is terminated, and outputting the obtained updated target sample function as an imaging result.
In the above calculation method of the reported loss value, a person skilled in the art can set the condition for determining the convergence of the target sample function based on the loss value according to actual needs, so the method is not limited thereto.
Based on the method provided by the embodiment, iterative reconstruction is performed on the corresponding diffraction pattern, the reconstruction result is shown in fig. 2 and fig. 3, and as can be seen from comparison between fig. 2 and fig. 3, the resolution of the imaging result output after 15 iterations is much higher than the imaging result output after 1 iteration.
Embodiment 3, a multi-angle illumination lensless imaging system, comprising:
the acquisition module is used for acquiring a plurality of diffraction patterns, wherein the diffraction patterns are images acquired by the image sensor 5 when light waves with different illumination angles irradiate onto the sample surface through the diaphragm 3;
the reconstruction module is used for carrying out iterative reconstruction based on the diffraction pattern to obtain an imaging result and comprises a phase recovery unit, a laminated imaging unit and an output unit;
the phase recovery unit is used for extracting a diffraction pattern corresponding to the light wave positioned at the optical axis as a reference pattern, and performing cross correlation on the reference pattern and each diffraction image to obtain relative displacement; the device is also used for carrying out analog calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to the sample surface;
the laminated imaging unit is used for calculating a current target estimation error and updating a target sample function corresponding to a sample surface based on light intensity data and the current target estimation error;
and the output unit is used for acquiring the updated target sample function as a corresponding imaging result and outputting the imaging result when a preset iteration condition is reached.
This embodiment is a product embodiment corresponding to embodiment 2, and since it is basically similar to method embodiment 2, the description is relatively simple, and for the relevant points, refer to the partial description of method embodiment 2.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
In addition, it should be noted that the specific embodiments described in the present specification may differ in the shape of the components, the names of the components, and the like. All equivalent or simple changes of the structure, the characteristics and the principle of the invention which are described in the patent conception of the invention are included in the protection scope of the patent of the invention. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.

Claims (9)

1. A multi-angle illumination lens-free imaging method is characterized by comprising the following steps:
s100, obtaining a plurality of diffraction patterns, wherein the diffraction patterns are images collected by an image sensor when light waves with different illumination angles irradiate onto a sample surface through a diaphragm;
s200, carrying out iterative reconstruction based on the diffraction pattern to obtain an imaging result, and specifically comprising the following steps:
s210, extracting a diffraction pattern corresponding to the light wave positioned at the optical axis as a reference pattern, and performing cross correlation on the reference pattern and each diffraction image to obtain relative displacement;
s220, performing analog calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to a sample surface;
s230, calculating a current target estimation error, and updating a target sample function corresponding to a sample surface based on the light intensity data and the current target estimation error;
and S230, repeating the step S220 and the step S230 until a preset iteration condition is reached, and acquiring and outputting an updated target sample function as a corresponding imaging result.
2. The multi-angle illuminated lensless imaging method of claim 1, wherein:
obtaining corresponding plane waves based on the relative displacement simulation, wherein the plane waves correspond to the light waves one by one;
calculating and obtaining an illumination function based on the diaphragm function and the plane wave;
calculating and obtaining corresponding outlet wave estimation based on the illumination function and the target sample function to obtain first outlet wave estimation;
performing simulation calculation on the process of the first outlet wave estimation propagating to the image sensor to obtain a corresponding diffraction pattern estimation;
performing simulation calculation on the process of the diffraction pattern estimation back propagating to the sample surface based on a preset constraint condition to obtain a second exit wave estimation;
calculating a current target estimation error, updating the target sample function and the illumination function based on the first exit wave estimate, the second exit wave estimate, and the current target estimation error; and updating the diaphragm function based on the updated illumination function.
3. The multi-angle illuminated lensless imaging method of claim 2, wherein:
calculating and obtaining a corresponding current target estimation error based on the second outlet wave estimation, the target sample function and the illumination function;
calculating to obtain a corresponding loss value based on the gradient of the current target estimation error relative to the target sample function;
calculating and obtaining a first updating weight and a second updating weight based on the loss value;
calculating to obtain a first adaptive adjustment step length based on the first update weight and a target sample function;
calculating to obtain a second self-adaptive adjusting step length based on the second updating weight and the illumination function;
updating the target sample function based on the first adaptive adjustment step length, the first outlet wave estimation, the second outlet wave estimation and the illumination function to obtain an updated target sample function;
updating the illumination function based on the second adaptive adjustment step length, the first outlet wave estimation, the second outlet wave estimation and the target sample function to obtain an updated illumination function; and updating the diaphragm function based on the updated illumination function.
4. The multi-angle illuminated lensless imaging method of claim 3, wherein:
target sample function O obtained after nth iteration updatingn+1(r) is:
Figure FDA0003561490320000021
wherein, On(r) represents the target sample function for the nth iteration,
Figure FDA0003561490320000022
representing the illumination function corresponding to the jth light wave in the nth iteration,
Figure FDA0003561490320000023
representing a first exit wave estimate corresponding to the jth light wave in the nth iteration,
Figure FDA0003561490320000024
representing a second exit wave estimate corresponding to the jth lightwave in the nth iteration, representing a complex conjugate, aORepresenting a first adaptation step size;
illumination function obtained after nth iteration updating
Figure FDA0003561490320000025
Comprises the following steps:
Figure FDA0003561490320000026
wherein, apIndicating a second adaptation step size.
5. The multi-angle illuminated lensless imaging method of claim 4, wherein:
the first adaptive adjustment step size is:
Figure FDA0003561490320000027
wherein,
Figure FDA0003561490320000028
a first update weight representing the nth iteration is calculated as:
Figure FDA0003561490320000029
the second adaptive adjustment step size is:
Figure FDA00035614903200000210
wherein,
Figure FDA00035614903200000211
a second update weight representing the nth iteration is calculated as:
Figure FDA00035614903200000212
above formula, enAnd representing the corresponding loss value in the nth iteration, wherein eta is a constant.
6. The multi-angle illuminated lensless imaging method of claim 5, wherein:
loss value e at nth iterationnThe calculation formula of (2) is as follows:
Figure FDA0003561490320000031
order:
Figure FDA0003561490320000032
above-mentioned [ Delta ] [ epsilon ]j(On(r)) is the gradient of the current target estimation error with respect to the target sample function.
7. The multi-angle illumination lens-free imaging method according to any one of claims 2 to 6, further comprising a relative displacement correction step before analog computation is performed on the imaging process based on the relative displacement, the specific steps being:
calculating to obtain a corresponding correction value based on the corresponding diffraction pattern and the diffraction pattern estimation;
and correcting the relative displacement based on the correction value, and performing analog calculation on the imaging process based on the corrected relative displacement.
8. The multi-angle illumination lensless imaging method of any one of claims 3-6, wherein:
and when the iteration times reach a preset time threshold value or the convergence of the target sample function is judged based on the loss value, outputting the obtained updated target sample function as an imaging result.
9. A multi-angle illumination lensless imaging system, comprising:
the acquisition module is used for acquiring a plurality of diffraction patterns, wherein the diffraction patterns are images acquired by the image sensor when light waves with different illumination angles irradiate onto the sample surface through the diaphragm;
the reconstruction module is used for carrying out iterative reconstruction on the basis of the diffraction pattern to obtain an imaging result and comprises a phase recovery unit, a laminated imaging unit and an output unit;
the phase recovery unit is used for extracting a diffraction pattern corresponding to the light wave positioned at the optical axis as a reference pattern, and performing cross correlation on the reference pattern and each diffraction image to obtain relative displacement; the device is also used for carrying out analog calculation on the imaging process based on the relative displacement to obtain light intensity data transmitted to the sample surface;
the laminated imaging unit is used for calculating a current target estimation error and updating a target sample function corresponding to a sample surface based on light intensity data and the current target estimation error;
and the output unit is used for acquiring the updated target sample function as a corresponding imaging result and outputting the imaging result when a preset iteration condition is reached.
CN202011418841.6A 2020-12-07 2020-12-07 Multi-angle illumination lens-free imaging method, system and device Active CN112697751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011418841.6A CN112697751B (en) 2020-12-07 2020-12-07 Multi-angle illumination lens-free imaging method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011418841.6A CN112697751B (en) 2020-12-07 2020-12-07 Multi-angle illumination lens-free imaging method, system and device

Publications (2)

Publication Number Publication Date
CN112697751A CN112697751A (en) 2021-04-23
CN112697751B true CN112697751B (en) 2022-05-03

Family

ID=75506335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011418841.6A Active CN112697751B (en) 2020-12-07 2020-12-07 Multi-angle illumination lens-free imaging method, system and device

Country Status (1)

Country Link
CN (1) CN112697751B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110927115B (en) * 2019-12-09 2022-05-13 杭州电子科技大学 Lens-free dual-type fusion target detection device and method based on deep learning
CN113189101B (en) * 2021-04-27 2024-01-30 浙江大学 Lens-free imaging method with negative feedback adjustment
CN114202612A (en) * 2021-11-24 2022-03-18 北京理工大学 Method and device for calculating illumination imaging
CN114757915A (en) * 2022-04-15 2022-07-15 杭州电子科技大学 Oral cavity detection device, oral cavity image processing method and system
CN117974454A (en) * 2022-10-30 2024-05-03 珠海乘数信息科技有限公司 Method, system and storage medium for recovering transmission function and incident wave of object in high resolution imaging
CN118294414B (en) * 2024-06-03 2024-10-18 南方科技大学 Coherent diffraction imaging method and device
CN118295225B (en) * 2024-06-06 2024-09-13 南昌大学 Double-channel priori constrained multi-distance lens-free digital holographic reconstruction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108780218A (en) * 2016-03-24 2018-11-09 分子装置有限公司 Use the imaging system of the assistant images detector for sample position
CN111670358A (en) * 2017-12-26 2020-09-15 彼得·佩纳 Device and method for monitoring yarn quality

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201017108Y (en) * 2007-02-02 2008-02-06 中国科学院上海光学精密机械研究所 Intensity-correlated diffraction imaging apparatus
CN107655405B (en) * 2017-08-29 2020-01-24 南京理工大学 Method for eliminating axial distance error between object and CCD by using self-focusing iterative algorithm
CN107861360B (en) * 2017-12-20 2020-01-10 清华大学 Single-exposure lens-free imaging system and method based on multi-angle illumination multiplexing
CN111861850B (en) * 2020-07-21 2022-04-29 中国科学院大学 Information hiding method and system for laminated imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108780218A (en) * 2016-03-24 2018-11-09 分子装置有限公司 Use the imaging system of the assistant images detector for sample position
CN111670358A (en) * 2017-12-26 2020-09-15 彼得·佩纳 Device and method for monitoring yarn quality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于多距离相位恢复的无透镜计算成像技术;刘正君等;《红外与激光工程》;20181031;第47卷(第10期);图1 *

Also Published As

Publication number Publication date
CN112697751A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN112697751B (en) Multi-angle illumination lens-free imaging method, system and device
CN110441271B (en) Light field high-resolution deconvolution method and system based on convolutional neural network
Du et al. Three dimensions, two microscopes, one code: Automatic differentiation for x-ray nanotomography beyond the depth of focus limit
JP5130311B2 (en) System and method for recovering wavefront phase information
CN100402996C (en) Phase determination of radiation wave field
US9448160B2 (en) Method and apparatus for providing image data for constructing an image of a region of a target object
EP2206008B1 (en) Light microscope with novel digital method to achieve super-resolution
CN102227751B (en) Device and method for providing image date of region of target object
CN106094487B (en) Terahertz in-line holographic imaging method based on multiple recording distances
CN106845024B (en) Optical satellite in-orbit imaging simulation method based on wavefront inversion
CN111366557A (en) Phase imaging method based on thin scattering medium
CN114241072B (en) Laminated imaging reconstruction method and system
CN109581849A (en) A kind of in-line holographic method for reconstructing and system
CN111273533B (en) Coaxial digital holographic automatic focusing method and system
US11867624B2 (en) System for reconstructing and outputting a structure of an estimation sample using refractive index distribution of updated estimation sample
CN102323721A (en) Method for obtaining space image of non-ideal lithography system based on Abbe vector imaging model
Chen et al. Fast-converging algorithm for wavefront reconstruction based on a sequence of diffracted intensity images
Maiden et al. WASP: weighted average of sequential projections for ptychographic phase retrieval
CN112837390A (en) Reconstruction method and system of low-quality digital holographic image
Bos et al. Simulation of extended scenes imaged through turbulence over horizontal paths
WO2019012796A1 (en) Information processing device, information processing method, program, and cell observation system
WO2024128253A1 (en) Image generation device, image generation method, and program
CN102495535A (en) Method for obtaining mask three-dimensional vector space image based on Abbe vector imaging model
CN110411983B (en) High-resolution diffraction imaging method and device
CN118817643A (en) Stacked diffraction imaging method based on relaxation alternate multiplier method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant