US12298524B2 - Multi-lens system for imaging in low light conditions and method - Google Patents
- Publication number: US12298524B2 (application US 17/314,895)
- Authority: US (United States)
- Prior art keywords: vortex, neural network, imaging device, image, images
- Legal status: Active, expires (an assumption by Google Patents, not a legal conclusion)
Classifications
- G02B27/0905—Beam shaping: dividing and/or superposing multiple light beams
- G02B27/4205—Diffraction optics: diffractive optical element [DOE] contributing to image formation, e.g. whereby modulation transfer function MTF or optical aberrations are relevant
- G02B27/283—Polarising optics used for beam splitting or combining
- G02B3/0006—Simple or compound lenses: arrays
- G06N3/04—Neural networks: architecture, e.g. interconnection topology
- G06N3/045—Neural networks: combinations of networks
- G06N3/0499—Neural networks: feedforward networks
- G06N3/08—Neural networks: learning methods
- G06N3/09—Learning methods: supervised learning
- B82Y20/00—Nanooptics, e.g. quantum optics or photonic crystals
Definitions
- Embodiments described herein generally relate to imaging devices and methods.
- CNNs: deep-learning convolutional neural networks
- CNNs and other algorithms with higher computational complexity are vulnerable to adversarial attacks and have other disadvantages.
- Improved imaging devices and methods are desired that reduce computational needs and improve resolution in low light conditions.
- FIG. 1 (a) The general schematic of the technique: a coherent light source (e.g. laser) illuminates the object. Transmitted light is phase modulated with the multi-vortex lens array. Its Fourier image is collected in the sensor plane. The vortex Fourier intensity patterns are fed to a neural net that reconstructs the original image with real and imaginary parts. Vortex-Fourier patterns for a (b) centered and (c) shifted MNIST handwritten ‘5’ showing that the intensity pattern of the vortex-Fourier representation is not translation invariant.
- PSNR: peak signal-to-noise ratio
- CNN: convolutional neural network
- the linear activation produces a shallow, dense neural network that is more generalizable and transferable under high noise.
- the shallow, dense neural network, while more accurate with low noise, performs less of the “inverse” mapping, as seen by a reconstruction bias toward highlighting edges.
- FIG. 7 (a) Categorical Classification accuracy of the MNIST handwritten dataset as a function of PSNR for 1, 3, 5, 7 multi-vortex arrays without deep learning (DL) or with dense neural networks.
- the categorical accuracy is higher with a convolutional neural network (CNN) when the PSNR is greater than 3.
- a black dotted line denotes a PSNR of ~3 dB, where (b) we plot the corresponding confusion plot.
- SLM: spatial light modulator (liquid crystal matrix)
- FIGS. 10a-10k (a) Schematic illustration of the synthesis. SEM images of PANI-IOFs with different pore sizes: (b) S1, top view, (c) S2, top view, (d) S3, cross-section, and (e) S3, top view. High-magnification TEM (f) and SEM (g) images of S3 showing the vertical alignment of nanofibers. (h) TEM image of PANI nanofibers formed in solution. Digital images of PANI-IOFs on (i) a 1 cm² glass slide, (j) a 1 cm² silicon wafer and (k) a convex lens (6 mm in diameter). The transmitted and reflected color is shown by placing the sample on a white and black substrate. The scale bars are 2 μm in (b-d), 500 nm in (f), and 200 nm in (g, h and the inset of f).
- FIG. 11 SEM images of PS assemblies formed with (a) 0, (b) 0.1, (c) 0.3, and (d) 0.5 mM Triton X100.
- PCF: pair correlation function
- Calculated PCF from the samples shown in (2e and 2f) and the PCF for a perfect lattice, as a function of r/2r₀, where r₀ is the mean radius of the pore.
- the inset shows the Fourier Transforms (black line) and fitted curve (red line) of the PCF for the perfect lattice and PANI-IOFs.
- FIG. 12 Reflectance spectra of S1 (black), S2 (red) and S3 (blue): (a) experimental data, and (b) simulated data. (c) Dispersion of transmittance showing the structural color of S3. The detector is fixed along the sample normal, while the incident angle (θ_i) increases. The angles in (c) are values of θ_i shown in the inset. (d) Simulated electric field distribution of S3 at 544 nm, with hot spots above each void. Optical microscopic images of S3 (e) at the bottom of the pores in the substrate and (f) above this location. The scale bars are 20 μm in (e and f).
- FIG. 12 g shows dimensions of example structures.
- FIGS. 13a-13c (a) Schematic illustration of the setup for optical dispersion characterizations. Angular-resolved reflectivity spectra of the PANI-IOFs supported on a glass substrate: (b) experimental data, and (c) simulation data. The white and black dotted lines are guides for the eye and represent the modes and Brewster angles, calculated from interpolated values of the complex refractive index. The intensity of the spectra (I) is normalized by the intensity of the incident light (I₀). The scale bars show the values of log[(I/I₀)·100%].
- FIGS. 14a-14f (a) Schematic illustration of the polarization-dependent transmissive spectral measurement. θ_i and θ_D represent the incident angle and detector angle. Transmissive spectra dispersion of S3 illuminated by (b) s- and (c) p-polarized light. θ_i changes from 0° to 60° with θ_D fixed at 15°. The spectra are normalized by the intensity of the incident light. The black line in (b) represents the position of diffraction peaks by simulation. (d) Change of diffraction intensity with the angle of polarization for a PS nanosphere opal, a PS nanosphere opal coated with PANI and a PANI IOF (S3).
- FIG. 16 Reconstructed images from (a, b, c) MNIST handwritten and (d, e, f) fashion MNIST datasets with random, Fourier and vortex bases, respectively.
- the vortex basis provides edge enhancement for object detection.
- FIG. 17 (a)-(c) Sample training images X_R, X_F, and X_V for random, Fourier, and vortex training sets. (d-f) Corresponding training and validation curves.
- FIG. 18 (a) Single “hot” pixel response of the random model and (b) single-pixel response of vortex model, which demonstrates sharp edges and resolves high-contrast objects. (c) Comparison of reconstruction error for different levels of noise given high-entropy random UTS and random mask and lower SVD-entropy vortex UTS and vortex mask. This error corresponds to the scenario in which shot noise dominates the background noise.
- Topological phase: a specific vortex example of a “topological phase” is used for illustration; however, the invention is not so limited, and other topological phases may be used in addition to vortices. Other examples include, but are not limited to, off-axis singularities and edge dislocations in the phase.
- polarization may be manipulated topologically. For example, the birefringence or optical activity (changes in the refractive index for different polarizations) can vary spatially through the phase plate so that the polarization produces a vortex (azimuthal or radial) or some other spatially varying pattern.
- the topological phases (whether from a point singularity-vortex or edge dislocations) lead to Fourier-plane interference patterns, where a simple neural network can deconvolve and reconstruct the original image.
- monochromatic laser light is used as an example; however, the invention is not so limited, and non-monochromatic light may be used in other examples.
- MNIST images in a vortex Fourier representation are reconstructed at a rate of several thousand frames per second on a 15 W central processing unit, two orders of magnitude faster than convolutional neural net schemes.
- the dense neural network is trained at a rate 20-times faster with the vortex encoding compared to random encoding schemes.
- the first (to the authors' knowledge) application of CNNs for image reconstruction is presented in, where a phase-encoded image on a spatial light modulator is reconstructed via CNNs using intensity data from the camera.
- “Non-line-of-sight” imaging is achieved with CNNs using albedo autocorrelation patterns obtained from the speckle image using a 300-mW continuous-wave laser or pulsed laser.
- deep-learning neural networks offer additional functionality in the process of reconstructing the object, for example, simultaneous autofocusing with phase recovery, or super-resolution in pixel-limited or diffraction-limited systems. With sets of training and testing diffusers, the phase information encoded through controlled speckle patterns can be leveraged to predict the outputs from previously unseen diffusers.
- the non-exhaustive list of important applications includes profilometry, imaging through smoke, LIDAR that leverages multiple point cloud and time-of-flight information. Additional examples of “nonlinear reservoir learning” are presented in, which employs caustic patterns for original object reconstruction.
- the challenge with deep learning methods is that the neural network requires large training sets and long training times. These neural networks have higher degrees of computational complexity that render them vulnerable to adversarial network attacks.
- a topological vortex-based lenslet-array design contains multiple vortex phases in a lenslet pattern, which yields orthogonal, edge-enhanced representations in the Fourier plane.
- the presence of the vortex provides spatial encoding to break the translation invariance of the measured Fourier pattern, which is part of the Phase Problem.
- Image reconstruction is performed with dense neural nets or shallow neural nets. Again, we refer to this few-hidden-layer approach that does not require deep learning as a “small-brain”. Experimentally, our approach is robust to noise.
- in addition to quick reconstruction, we also show quick training of the neural network. The speed is achieved because the vortices provide feature extraction that trains the neural network quickly, 20 times faster than a random encoding scheme.
- FIG. 1 depicts our imaging scheme, where multiple images of the object F(r, ⁇ ) are collected in the Fourier domain; the light transmitted through each lenslet is modulated by different vortex and lens mask patterns M m (r, ⁇ ).
- the camera detects the scaled, modulus-squared image of the Fresnel-propagated, vortex-Fourier-transformed electric field, |F{F(r, θ) M_m(r, θ)}|², where
- m is the vortex topological charge
- r and ⁇ are the real domain cylindrical coordinates
- u and v are the Fourier-plane Cartesian coordinates
- F is the Fourier transform operator.
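As an illustrative sketch of this forward model (a simplified assumption, not the patent's exact optics: the lens scaling, Fresnel propagation factor, and Gaussian aperture are omitted, and the mask is taken as a pure vortex phase exp(imθ)), the vortex-modulated Fourier intensity can be computed with a plain FFT:

```python
import numpy as np

def vortex_fourier_intensity(obj, m):
    """Modulus-squared Fourier pattern of an object modulated by a
    charge-m vortex phase mask: |F{F(r, theta) * exp(i*m*theta)}|^2.
    `obj` is a 2-D array sampled on a square grid."""
    n = obj.shape[0]
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    theta = np.arctan2(Y, X)            # azimuthal coordinate
    mask = np.exp(1j * m * theta)       # vortex phase M_m(r, theta)
    field = np.fft.fftshift(np.fft.fft2(obj * mask))
    return np.abs(field) ** 2
```

For m = 0 the intensity pattern is invariant to (circular) translations of the object, while any m ≠ 0 breaks this invariance, which is the spatial-encoding role of the vortex described above.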
- the vortex Fourier intensity patterns F̃ are concentrated in a relatively small area but are typically donut-shaped, with a wider donut as m increases [FIG. 1(b)].
- the role of the vortex phase in the ‘real domain’ is to spatially encode and break the translational invariance of the Fourier-transformed intensity pattern [FIG. 1(c)].
- φ₀ is the dynamic range of the object phase-shift. This mapping is convenient because the signal power is invariant with our choice of Y. We have also considered opaque objects where Y blocks or absorbs the signal, i.e., F(u, v)·Y, which yields similar trends.
- Equation 4 is a modal decomposition at the phase mask over the aperture of radius a, where L_p are the generalized Laguerre polynomials
- w_m is the waist of the beam, which we assume is significantly larger than the features of the object F.
- the detected intensity patterns [Eq. 12] are composed of differentials of the Fourier-transformed components of the Gaussian-apertured object F, mixed with various weights W_p.
- This differential scheme provides feature extraction of the Fourier transform and also mixes the real and imaginary parts in a manner that can be reconstructed as long as different m are used.
- a differential scheme for image reconstruction is deployed in the HERALDO method; however, this older technique employs iterative algorithms instead of dense neural networks. Without loss of information from the finite aperture a or focal length f, only 2 vortex-modulated images are needed to achieve optimal inverse mapping.
- the inputs X are the modulus-squared vortex Fourier-transforms of Gaussian-apertured Y.
- a dense neural net with 2 hidden layers is trained with mean-squared error (MSE) loss function.
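A minimal sketch of such a “small-brain” dense network is below. The layer widths, the purely linear activations, and the input size (two 28×28 vortex-Fourier images flattened) are illustrative assumptions, not the patent's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: two 28x28 vortex-Fourier intensity images in,
# one 28x28 reconstruction out.
n_in, n_h, n_out = 2 * 28 * 28, 512, 28 * 28

# Two hidden layers of width n_h.
W1 = rng.normal(0, 0.01, (n_in, n_h))
W2 = rng.normal(0, 0.01, (n_h, n_h))
W3 = rng.normal(0, 0.01, (n_h, n_out))

def forward(x):
    """Dense net with 2 hidden layers; linear activations keep the
    network shallow and, per the text, more generalizable."""
    return x @ W1 @ W2 @ W3

def mse(pred, target):
    """Mean-squared error loss used for training."""
    return np.mean((pred - target) ** 2)
```

The weights would be trained by gradient descent on the MSE loss; with linear activations the whole map collapses to a single matrix, which is one reason such networks train quickly.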
- The importance of spatial encoding for reconstruction is shown in FIG. 2.
- the neural network does not have enough information to inverse-map F̃ to F.
- For the Arabic dataset [FIGS. 2(e-f)], the reconstructed letters are impressive, since we limited our training to 40 types of handwritten marks that deviate substantially from the formal Arabic letters. This illustration is one approach to testing our intuition about the Phase Problem with neural networks.
- FIG. 4(a) shows the validation set. Even though the neural network has not seen the validation set before, unlike in the previous example, it has been trained with a similar set of images that fall into various categories (shirt, shoes, dress, etc.).
- Table 1 illustrates the convergence of the reconstructed images of the Fashion MNIST dataset to the original, both in terms of SSIM and MSE, as the number of vortices increases.
- SSIM: Structural Similarity Index Metric
- Table 1 also shows the SSIM and MSE for three-layer CNN-trained reconstruction with single and dual vortex datasets. This comparison suggests that our proposed architecture achieves the same quality while yielding much lower computational overhead (more than 3000 FPS for the proposed network versus less than 50 FPS for a three-layer CNN with a 3×3 kernel).
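For reference, a single-window (global) SSIM, a common simplification of the usual sliding-window index used in comparisons like Table 1, can be sketched as follows; the constants follow the standard choice c1 = (0.01·L)², c2 = (0.03·L)²:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window (global) SSIM between images x and y with
    dynamic range L; a simplification of the windowed index."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))   # cross-covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give an SSIM of 1; anticorrelated images drive the index toward negative values.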
- PSNR = 10 log₁₀(L²/MSE),  (13)
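Eq. (13) can be evaluated directly; here L is the dynamic range of the signal:

```python
import numpy as np

def psnr(reference, reconstruction, L=1.0):
    """Eq. (13): PSNR = 10 * log10(L^2 / MSE), with L the dynamic range."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range signal gives an MSE of 0.01 and hence a PSNR of 20 dB.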
- FIG. 4 illustrates the tradeoff between resolution and robustness to noise.
- the Fourier-plane pattern covers a larger area, so that spectral features are sampled with better resolution limited by the aperture function and mask resolution [Eq. B].
- we compare our results to a random spatial encoding pattern where vortices are replaced with a diffuser.
- the SSIM or accuracy from reconstruction using the random phase patterns approaches the level of performance of the vortex schemes in the noise-free case.
- Different numbers have varying levels of classification output robustness to noise, which is related to our choice of m.
- certain digit geometries are more clearly mapped by certain m.
- Transmutation is a process of vortex-charge migration, determined by a combination of the object's group symmetry and m. We expect that vortex transmutation not only governs the breakup of the vortex patterns in the Fourier transformation or propagation [Eq. 12], but also that the neural network gains information from transmutation for classification.
- Lensless focusing, where the quadratic phase is placed on the spatial light modulator, is used to mimic data collection with the lenslet array.
- the laser power is tuned with the computer-controlled rotation of crossed polarizers and laser repetition rate.
- the reflected light is collected by the CCD camera: 2/3-inch sensor, pixel dimensions of 4.65 μm × 4.65 μm, 8-bit dynamic range and 0.7 MP (1024×768 pixels) resolution.
- the data are acquired in batches of 5000 images via an automated procedure using the MATLAB software package.
- CCD images are cropped to 28 ⁇ 28 pixel size for each of the vortices.
- the vortex phase imprinted by the SLM represents reflection from an MNIST-dataset image [FIG. 8(b)].
- Examples of the vortex patterns imaged by the camera are shown in FIG. 8(c).
- Example reconstruction using 2 of the experimentally measured vortex-Fourier images is shown in FIG. 9.
- the required light intensity level is low: less than 50 μW with an exposure time of 2.8 ms.
- our approach is limited only by the sensitivity of the camera. Even though we are capable of placing 6 vortices on our SLM and imaging the result at once, the reconstruction achieves good results with only two vortex patterns and starts to saturate at three vortex patterns.
- the system behaves as a camera and is robust with low-light-levels or with noise.
- the small-brain machine-learning algorithm reduces the computational overhead with training and also reduces computational complexity, resulting in images being less vulnerable to adversarial attacks.
- optical preprocessing with a topological phase mask in the Fourier domain is an imaging approach that is:
- Compact in memory; the vortex Fourier transform provides a compressed representation that minimizes the number of pixels that carry data forward.
- the potential applications of the technique are numerous: imaging in low-signal conditions (e.g., with lack of illumination), driver-assist systems, microscopy of delicate photosensitive biological samples, and high-frame-rate imaging, among others. Given its low power requirements and reconstruction speed, it can find applications in computer vision systems for unmanned vehicles, especially in harsh environments, as well as in security, microscopy and many other areas. Specifically, given the possibility of working with pulsed lasers, one envisions energy-efficient and spectroscopic applications of the proposed approach. It should be specifically noted that the technique does not require a CW laser, and thus it is not limited by the coherence length.
- a number of photonic structures are possible to use as a topological phase modulator in example configurations. Although the invention is not so limited, selected examples of suitable photonic structures are described in examples below.
- Multi-scale structures have been developed for a variety of energy- and sensing-related applications. In this communication, we show how such structures, due to their capacity to filter color and polarization, are particularly useful in computer vision. While either material anisotropy or surface patterning alone is capable of shaping the properties of light, the combination of both, in multi-scale structures, may enable sharp spectral and polarization filtering and enhance resolution and multimodal imaging in computer vision applications. To our knowledge, the production of multi-scale, monolayer photonic structures has not yet been demonstrated in a synthetic material fabricated by bottom-up methods.
- multi-scale conducting polymer assemblies provide tremendous potential for compact computer vision and imaging applications, especially for larger-surface applications: e.g., surfaces with areas of the size of camera sensors (~cm²).
- Conducting polymer assemblies are analogous to metasurfaces with sub-wavelength metallic domains but with low transmission losses and numerous opportunities for inexpensive, large-area fabrication over flexible and non-flat substrates.
- the proposed implementation of multi-level bottom-up nanostructured materials in photonics applications is a challenge not only because of the high precision required, but also because of the synthesis. Given the nature of self-assembly, multi-scaled domains form simultaneously and may not be controlled independently. Meanwhile, to be effective, the optical response from anisotropy and patterning must work in concert.
- PANI: polyaniline
- This material's low cost and low toxicity, ease of synthesis, higher stability compared to other conducting polymers, and good compatibility with a variety of material types make it a promising candidate for energy storage, optoelectronics, spintronics, sensing, and biomedical applications.
- Its physical properties are also flexible: they can be changed dramatically by varying the synthesis environment. We harness its capacity to form nanofibers when the monomers are distributed freely in solution. When aligned, it has been shown that the nanofibers respond actively to switch the absorbed polarization of light.
- the nanoscale anisotropy is generated by PANI's fibrous structure and the photonic structure is achieved by filling the interstitial voids of close-packed polystyrene (PS) nanosphere assemblies with PANI.
- PS: polystyrene
- IOFs: two-dimensional PANI inverse-opal films
- the ordered structures yield strong, polarization-dependent diffraction and sharp, angular separation of colors.
- the strength of the diffraction is surprising: although the dispersion of this mode follows that of a Bragg feature, which is associated with a multi-layered crystal, the scattering occurs through interaction with only a monolayer structure.
- PANI IOFs (as well as other IOF structures) are generally synthesized through hard-templating strategies, which involve the fabrication of PS assemblies and polymerization of PANI through either chemical or electrochemical deposition.
- the chemical oxidative polymerization at the air/water interface leads to the open-pore structures of the IOFs and confines the regular growth of the PANI chains.
- A schematic illustration of the synthesis is shown in FIG. 10a. The entire process is divided into interfacial assembly and chemical oxidation polymerization.
- FIGS. 10b-e show the morphology of the PANI-IOFs with pore sizes of 220, 485 and 670 nm (using 250, 531 and 727 nm PS nanospheres as the templates).
- the SEM image shows that the PANI-IOF is composed of a monolayer of nanobowls ( FIG. 10 d ).
- the nanofibers themselves maintain a similar shape compared to the nanofibers in solution ( FIG. 10 h ).
- the cross-section of the fibers indicates that the nanofibers are vertically aligned with the sample plane. Pore-adjacent nanofibers are much thinner than those in solution as their growth is confined by the voids between the PS. Their alignment is also observable from the SEM images ( FIG. 10 g ), which show the spiny protrusions pointing from the nanobowl to the substrate. This formation of the nanofibers is due to the low concentration of the monomers absorbed on the PS surface, which does not support secondary growth.
- two degrees of order are achieved: IOFs composed of spherical nanovoids and the packing of nanofibers around the pores. This multi-scaled ordering works in concert to filter the transverse-electric (TE) polarization.
- TE: transverse-electric
- FIGS. 10i, 10j and 10k show the coating of S2 on a 1 cm² glass slide, a 1 cm² silicon wafer, and a convex lens (6 mm in diameter); these examples illustrate the versatility of the synthesis method.
- Another advantage of this synthesis approach is that the ordering of the IOFs is tunable, determined by the surfactant-controlled ordering of the PS monolayer film.
- the attractive and repulsive forces between nanospheres are balanced by adjusting the assembly conditions, such as the solvent type, pH of the water, and concentrations of the PS nanospheres and surfactants. It is important to note that for PS nanospheres, the strong hydrophobic-hydrophobic interaction that brings nanospheres together plays a negative role during assembly.
- FIG. 11 a - d shows the assemblies of 250 nm PS nanospheres with 0-0.5 mM of Triton X100.
- the ordering of the PS nanospheres increases as the concentration of Triton X100 increases from 0 to 0.3 mM, and remains unchanged when the concentration is further increased to 0.5 mM.
- g(r) = (1/ρ) ⟨dn(r, r + dr) / da(r, r + dr)⟩
- da is the shell area and dn is the number of holes that lie within the shell.
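A sketch of this pair-correlation estimate for 2-D pore centers follows; the bin width, the bounding-box density estimate, and the neglect of edge corrections are simplifying assumptions of this illustration:

```python
import numpy as np

def pair_correlation(points, dr, r_max):
    """Radial pair correlation g(r) ~ (1/rho) <dn / da> for 2-D
    pore centers; edge effects are ignored in this sketch."""
    points = np.asarray(points, float)
    n = len(points)
    # Mean pore density rho from the bounding-box area.
    area = ((points[:, 0].max() - points[:, 0].min()) *
            (points[:, 1].max() - points[:, 1].min()))
    rho = n / area
    # All pairwise distances, self-pairs dropped.
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d = d[d > 0]
    edges = np.arange(dr, r_max + dr, dr)
    g = []
    for r_lo, r_hi in zip(edges[:-1], edges[1:]):
        dn = np.count_nonzero((d >= r_lo) & (d < r_hi)) / n  # per particle
        da = np.pi * (r_hi ** 2 - r_lo ** 2)                 # shell area
        g.append(dn / (rho * da))
    return edges[:-1], np.array(g)
```

For a perfect square lattice, g(r) shows a sharp first peak at the lattice spacing and vanishes between shells; the broadening of that first peak relative to the perfect lattice is what the Δ/Δ₀ ordering metric in the text quantifies.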
- the inset of FIG. 11 g shows the Fourier transform of g(r). For both samples, we compare the full width at half maximum (FWHM), ⁇ , for the first peak of g(r) ⁇ 1 to that of a perfect lattice ( ⁇ O ). We use the ratio of ⁇ / ⁇ 0 to quantitively determine the ordering of the photonic structure. A structure with ⁇ / ⁇ 0 ⁇ 1.5 is considered very highly ordered.
- the Δ/Δ₀ values for samples 2e and 2f are 1.25 and 1.98, respectively, indicating the tunable range of the PANI-IOFs.
- the highly ordered IOFs define crystalline planes (or lines) in three different directions that form angles of 120° between each other.
- the PANI IOF coating increases the field of view. If we include the possibility of imaging with the diffracted mode, then the effect of the PANI IOF on the field of view is dramatic. Since our structure behaves like a grating, the diffracted mode depends on the color. The structural color of S3 is evident where the detector is placed on the axis of a flat sample (FIG. 13c). When the incident angle increases from 45° to 75°, the peak redshifts in the visible range. We have no difficulty imaging an object projected at an incident angle of 75° using the diffracted mode. We estimate the effective angular field of view to be over 170 degrees, which would not be uncommon for arthropod-inspired imaging systems.
- Measurements of the specular reflectance of the samples are shown in FIG. 14 for angles of incidence between 18° and 78°.
- a schematic illustration of the optical setup for the measurement is shown in FIG. 14 a .
- the sample on a glass slide was mounted on a rotation stage and excited by a white light source.
- a polarizer is placed in front of the light source to illuminate the sample with TE (or s-) and TM (or p-) polarization.
- the PANI-IOFs show higher reflectivity in TE than in TM polarized light. No diffraction effects are expected within the range of parameters explored in this figure.
- the features of sample S1 are too small to produce significant details in the reflectivity data, which is similar to the flat-surface response.
- for TE polarization, we see a monotonic increase in the reflectivity as a function of both angle and wavelength; this behavior seems to be dominated by the changes in the refractive index of PANI.
- for TM polarization, we note the presence of a Brewster-angle minimum whose dependence on the wavelength follows the dispersion properties of PANI.
- the maps are distorted.
- Particularly interesting is the modification of the Brewster angle position and its shift to smaller angles for wavelengths around 700 nm. Discrepancies between simulations and experiments are more prominent with thicker layers and wider pore sizes, which indicates greater influence from the material anisotropy or pattern disorder in the PANI IOF.
- the PANI-IOFs filter TE-polarized light in the diffraction patterns in a manner similar to a resonant polarization-filtering linear grating. Simulations and experiments help us assign about 3× polarization anisotropy to the inverse-opal structure and an additional 6× higher transmission associated with the material anisotropy, or nanowire alignment.
- a schematic illustration of the measurement is shown in FIG. 14a and the corresponding spectra are presented in FIGS. 14b-c.
- θ_i = sin⁻¹(sin θ_D + λ/T)
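Assuming this expression is the first-order grating condition relating the incident angle θ_i, the detector angle θ_D, the wavelength λ, and the grating period T (an interpretation of the garbled source equation), it can be evaluated as:

```python
import numpy as np

def incident_angle_deg(theta_D_deg, wavelength_nm, period_nm):
    """theta_i = arcsin(sin(theta_D) + lambda/T); returns None when
    no propagating diffracted order exists for this geometry."""
    s = np.sin(np.radians(theta_D_deg)) + wavelength_nm / period_nm
    if abs(s) > 1:
        return None
    return float(np.degrees(np.arcsin(s)))
```

For instance, with the detector on axis (θ_D = 0°) and λ/T = 0.5, the diffracted order arrives from θ_i = 30°; large λ/T combined with oblique detector angles pushes sin θ_i past 1, i.e. the order becomes evanescent.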
- A comparison of the polarization-dependent transmission of the PS assembly, the PS assembly coated with PANI, and the PANI IOFs (FIG. 14d) indicates that the PANI-IOF exhibits contrary behavior. Furthermore, it is important to note that TM-polarized light excites almost no diffraction peaks.
- the polarization-dependent diffraction is useful for polarimetric imaging.
- the spatial light modulator rotates the linear polarization of the input beam, as shown in the upper-left corner of FIG. 14 f , where the letter “U” is s- and p-polarized for the letter area and the background, respectively.
- What is remarkable about our setup is that no additional color or polarization filters are used to distinguish the SLM shape with the naked eye. This experiment confirms the active and sensitive response of the PANI-IOFs to polarized light.
- we attribute the polarization-dependent structural color of the PANI-IOF to its multi-scale structure: the ordered nanovoids and the alignment of the PANI nanofibers.
- the former contributes to Bragg-like modes and the latter enhances the polarization sensitivity.
- the PANI IOF carries a multi-scale geometry similar to the resonant, linear structures that others have fabricated with top-down methods, where in-plane anisotropy in a grating structure produces strong TM-polarized diffraction. Meanwhile, our samples with out-of-plane nanofiber alignment exhibit the opposite, TE-polarized diffraction.
- a SiO₂ IOF simulated in COMSOL shows similar polarization-dependent diffraction, which is much weaker than that observed for PANI.
- Imaging devices utilizing neural networks are described above, along with selected examples of photonic structures that serve as topological phase modulators. Additional examples below describe generalized training of simple neural networks for use in the above described imaging systems and other systems.
- a UTS-trained model overcomes the challenges associated with the “stereotypes” that generally arise from training by a specific image set.
- one disadvantage is that the neural network is too simple to reconstruct images when nonlinear transformations are required. Nevertheless, our results provide insight for training generalizable neural networks and computational cameras that operate at high speeds. Our proposed method can readily be used for the initialization of alternating minimization problems or for downstream image analysis tasks.
- FIG. 15 b shows a schematic of the hybrid machine vision system, which encodes the image prior to the neural network with either a random or vortex phase pattern.
- the fields from the object at the diffractive encoder plane are F(x,y).
- the encoder plane is imprinted with two diffractive element patterns M(x,y), as shown in FIG. 15 ( c ) .
- a sensor or detector captures the intensity pattern of the electric fields F′(u,v).
- each object produces two images, each with a different diffractive element M(x,y).
- while the mask pattern may imprint vector (i.e., polarization-dependent) or spectral (i.e., time-dependent) delays, here we assume a homogeneous polarization, a linear encoder, and monochromatic, continuous-wave light.
- All-optical neural networks have been demonstrated previously, notably with several diffractive layers in the THz regime, with nonlinear activations via saturable-absorbing nonlinearities, and with nano-interferometric etalons in the visible regime. All-optical methods maximize speed and minimize energy loss in the neural computation.
- all-optical systems require nonlinear interactions as proxies for the electronic neural network layer activations. These nonlinearities occur at small length scales in order to confine light sufficiently, so all-optical computing may be more sensitive to environmental conditions and less suitable for autonomous-vehicle computational cameras.
- the Fourier-plane intensity patterns Y are the inputs to a neural network.
- the neural network estimates X (size 28 ⁇ 28) given Y (size 28 ⁇ 28 ⁇ 2).
- To train the neural network we use the TensorFlow library with the mean-squared-error loss and the Adam optimization algorithm. Convergence is achieved with similar results using either "linear" or "ReLU" activation. Our approach is simple and shows promising opportunities for generalized image reconstruction with "small brain" neural networks.
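A minimal sketch of this training setup: a single dense layer mapping the two 28×28 Fourier-plane intensity patterns Y to the 28×28 image estimate X, compiled with MSE loss and Adam as described above. The random arrays merely stand in for real (Y, X) pairs:

```python
import numpy as np
import tensorflow as tf

# Single-dense-layer "small brain": flattens Y (28x28x2), applies one
# dense layer, and reshapes to the 28x28 image estimate.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(28 * 28, activation="linear"),  # or "relu"
    tf.keras.layers.Reshape((28, 28)),
])
model.compile(optimizer="adam", loss="mse")

Y = np.random.rand(64, 28, 28, 2).astype("float32")  # stand-in measurements
X = np.random.rand(64, 28, 28).astype("float32")     # stand-in targets
model.fit(Y, X, epochs=1, batch_size=16, verbose=0)
X_hat = model.predict(Y, verbose=0)
print(X_hat.shape)  # (64, 28, 28)
```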
- This Gaussian function G j,k,n (x,y) represents a scanning light beam that illuminates the training images. All image patterns are positively-valued and normalized to have a peak value of 1.
- FIG. 16 shows a representative set of images reconstructed from models trained with X F , X V , and X R and a vortex mask.
- With a vortex mask, 20,000 training images are used. Error with thresholding is as low as 10% on test datasets. While the overall error is similar, models trained with the vortex-phase datasets X V generally have the lowest error and strongly highlighted edges. Meanwhile, models trained with a Fourier basis X F have the highest error, and models trained with a random basis X R fall in between, with error distributed over the area of the image. Additional differences are explained in the following section.
- FIGS. 17 a - c show samples from 20 k-image X F , X V , and X R UTS with the vortex mask M V . Some pairings converge with minimal overfitting, while others do not provide enough information in Y to calculate the inverse of the nonlinear mapping H(X) ( FIGS. 17 d - f ).
- a Fourier basis is the most well-known spectral basis for decomposing an image.
- the validation loss stops decreasing after a certain number of epochs, which signals that the neural network struggles to extract information about the mapping given this orthogonal set of images. This result is rather unintuitive with respect to the span, or basis, of image reconstruction with neural networks.
- the images are less effectively learned by the neural net because there is minimal overlap between them; the correlations between Fourier modes are less visible to the neural net.
- the random UTS also unreliably converges when the dataset is smaller than 2 k, and its loss generally shows a “hill,” where the loss plateaus before dropping. Meanwhile, the vortex-based UTS is less prone to such behavior.
- This combination of trends tells us that neither orthogonality nor randomness is ideal for training a neural network.
- the structured pattern of our vortex-based UTS X V is a better candidate for generalized training compared to random X R or Fourier X F patterns. In our discussion, we provide some measures related to the UTS image analysis and trained model robustness.
- FIGS. 18 a and 18 b illustrate example images reconstructed with just one “hot” pixel in the camera sensor plane. These patterns are the building blocks of the reconstruction scheme and these patterns change depending on how the model is trained. Depending on the training set, the model is tuned to pay attention to different features of the image, which may depend on the task at hand.
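For a purely linear "small brain," the pattern reconstructed from a single "hot" sensor pixel is simply that pixel's column of the trained weight matrix. The sketch below uses random weights as a hypothetical stand-in for a trained model:

```python
import numpy as np

# W stands in for trained weights of a linear model x_hat = W @ y,
# with y flattened from 28x28x2 and x_hat flattened from 28x28.
rng = np.random.default_rng(0)
W = rng.standard_normal((28 * 28, 28 * 28 * 2))

def elementary_pattern(W, hot_pixel):
    """Reconstruction produced by one bright pixel on the sensor plane;
    by linearity, this equals the hot pixel's column of W, reshaped."""
    y = np.zeros(W.shape[1])
    y[hot_pixel] = 1.0
    return (W @ y).reshape(28, 28)

pattern = elementary_pattern(W, hot_pixel=400)
print(pattern.shape)  # (28, 28)
```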
- FIG. 18 c provides a simple noise analysis that shows the additional advantage of robustness when the neural network is trained with a low-entropy UTS.
- Some trends related to the SVD-entropy are illustrated in FIG. 19 . If images in the set have a few dominant singular values σ i , the images may be reconstructed using fewer "elementary" patterns; those with higher entropy require many more patterns to achieve sufficient reconstruction accuracy. Low SVD-entropy images are smoother with fewer edges. On the other hand, images with many discontinuities exhibit a high degree of SVD-entropy.
- the SVD-entropy scales logarithmically with the edge steps or dislocations in an image ( FIGS. 19 a and 19 b ).
- the measure of 2D SVD-entropy aids our analysis of the UTS.
- the vortex UTS has a broad range and lower values of SVD-entropy in contrast to the random UTS ( FIG. 19 c ).
- a low SVD-entropy training set like that with structured patterns X V allows us to extract the structured (low SVD-entropy) information from the data ( FIGS. 16 c , 16 f , 16 g , and 16 h ). This effectively acts as a filter for salient features of the image.
- This low SVD-entropy training would be useful for some specific tasks, especially when, e.g., we are less interested in the image's background information than in the foreground object.
- Single-layer neural networks are capable of approximating the inverse mapping from phaseless Fourier-plane intensity patterns after basic training.
- Such moderate-accuracy generalizable image reconstruction achieves high speeds (we achieve 15 k fps on a 15 W laptop CPU).
- the UTS approach may extend to super-resolution, phase retrieval from multiple phase-coded diffraction patterns, and depth detection.
- Example 1 includes an imaging device.
- the imaging device includes a light source, a plurality of topological phase modulators, and a neural network coupled to an output of the plurality of topological phase modulators.
- Example 2 includes the imaging device of example 1, wherein the plurality of topological phase modulators includes an array of topological phase modulators.
- Example 3 includes the imaging device of any one of examples 1-2, wherein the plurality of topological phase modulators includes a plurality of spiral vortex phase modulators.
- Example 4 includes the imaging device of any one of examples 1-3, wherein the light source includes a laser light source.
- Example 5 includes the imaging device of any one of examples 1-4, wherein the neural network includes a shallow dense neural network.
- Example 6 includes the imaging device of any one of examples 1-5, wherein the plurality of topological phase modulators includes an array of nanobowls.
- Example 7 includes the imaging device of any one of examples 1-6, wherein the array of nanobowls includes nanofibers on a convex surface of nanobowls in the array of nanobowls.
- Although the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure.
- Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.
- As used herein, the term "or" may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
- Although the terms "first," "second," and so forth may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present example embodiments. The first contact and the second contact are both contacts, but they are not the same contact.
- As used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context.
- Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Description
|F̃_m(u,v)|² = |ℱ{F(r,ϕ)M_m(r,ϕ)}|², (1)
F(r,ϕ) = e^{iαX(r,ϕ)}G(r,ϕ),
M_m(r,ϕ) = e^{imϕ}e^{iπr²/(λf)},
- where λ is the wavelength and f is an effective focal length. This pattern may be generalized and does not require m to be an integer.
- where the weighted coefficients are
W_p = ∫₀^a 2πr^(m+1) L_p^|m|(r²/w_m²) dr. (5)
- where the unit-power normalizing coefficients depend on m. The combination of radial magnitude and azimuthal phase V_m yields an interesting operator in the Fourier domain,
V_m(r,ϕ) = r^m e^{imϕ} = (r cos(ϕ) + i r sin(ϕ))^m, (10)
- since r cos(ϕ) and r sin(ϕ) are the canonical x and y variables in Cartesian coordinates, the Fourier transform of V_m is
ℱ{V_m(r,ϕ)F(r,ϕ)} ∝ (∂/∂u + i ∂/∂v)^m F̃(u,v), (11)
- while the radial quadratic phase R(r) yields a scaling factor in the Fourier-plane image. With Eq. 11, the detected camera image is:
TABLE 1. SSIM and MSE for Fashion-MNIST reconstruction. The table shows that acceptable quality is achieved with 2 vortices.

| | 1 vortex, linear | 2 vortices, linear | 3 vortices | | CNN, 1 vortex | CNN, 2 vortices |
|---|---|---|---|---|---|---|
| SSIM | 0.45 | 0.62 | 0.84 | 0.88 | 0.61 | 0.84 |
| MSE | 0.0280 | 0.0242 | 0.0140 | 0.0122 | 0.0235 | 0.0145 |
C. Speed and Robustness to Noise
PSNR = 20 log₁₀(L/√MSE),
- where L is the dynamic range of the camera (e.g. 8 bits or 12 bits), and MSE is defined as
MSE = (1/N) Σᵢ (xᵢ − yᵢ)²,
- where N is the number of pixels, xᵢ is the noiseless and yᵢ is the noisy pixel value. We consider both sensor shot noise and dark noise to be generated with a Poisson distribution.
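A minimal sketch of this noise analysis; the photon budget and dark rate below are illustrative assumptions, with pixel values expressed on the camera's [0, L] scale:

```python
import numpy as np

def psnr(x, y, bits=8):
    """PSNR = 20*log10(L/sqrt(MSE)), with L = 2**bits - 1 the camera's
    dynamic range and pixel values on the [0, L] scale."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return 20.0 * np.log10((2 ** bits - 1) / np.sqrt(mse))

def add_sensor_noise(x, photons=1000, dark=5, rng=None):
    """Shot noise on the signal plus dark counts, both Poisson-distributed;
    the photon budget and dark rate are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, float)
    return (rng.poisson(x * photons) + rng.poisson(dark, size=x.shape)) / photons

clean = np.random.default_rng(1).random((28, 28)) * 255
noisy = np.clip(add_sensor_noise(clean / 255) * 255, 0, 255)
print(round(float(psnr(clean, noisy)), 1), "dB")
```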
T = (d + r) sin(60°),
- where d and r are the sizes of the pore and the edge, respectively ( FIG. 12 g ). This leads to periods of 235 nm, 508 nm, and 695 nm for samples S1, S2, and S3, respectively. For normal incidence, the first order disappears for wavelengths greater than the period T. We have carefully analyzed the optical properties of the samples. FIG. 12 a shows the normal-incidence reflectance of the PANI-IOFs. S2 and S3 exhibit sharper reflection peaks than S1, and the peak redshifts 35 nm with the increase of the pore sizes from S2 to S3. By comparison with the reflection of a nanofiber film, which only shows broad absorption from the PANI emeraldine-salt polaron structures, we attribute the sharp peaks in S2 and S3 to the periodic voids in the IOFs.
-
- 1. Small, sometimes undetectable perturbations in the input (both image and sampling domain) can cause severe artifacts in the image reconstruction.
- 2. Small structural changes can be left undetected.
- 3. More samples in the training set can lead to a deterioration of the results (as a result of the “memory” effect described above). Subsequently, algorithms themselves can stall or experience instabilities.
F′(u,v) = ℱ[M(x,y)F(x,y)]. (1)
F(x,y)M(x,y) = e^{iαX}G(x,y)M(x,y), (2)
- where G(x,y) is the Gaussian beam pattern illuminating the object and X is the positively-valued original image. This Gaussian pattern represents a smooth pupil function or the illuminating beam. In our study, we fix α = π and find that the reconstruction quality does not change significantly when α varies from π/4 to 3π/2.
Y = H(X) + N, (3)
- or for our specific case,
Y = |ℱ[e^{iαX}G(x,y)M(x,y)]|² + N, (4)
- where Y is the positively-valued sensor measurement, H(·) is a nonlinear transform operator that includes the transfer function of the optics, light scattering, and the sensitivity curve of the detector, and N is the measurement noise.
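The noiseless form of this measurement model can be simulated directly. The grid size, beam width, and random-phase mask below are illustrative assumptions, with α = π as in the text:

```python
import numpy as np

# Numerical sketch of Y = |FFT{ e^{i*alpha*X} * G * M }|^2 with N = 0.
n = 28
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
G = np.exp(-(xx ** 2 + yy ** 2) / 0.5 ** 2)  # Gaussian pupil/illumination
rng = np.random.default_rng(0)
M = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # random phase in [0, 2*pi)

def encode(X, alpha=np.pi):
    """Phase-encode the positively-valued image X and record the
    Fourier-plane intensity."""
    field = np.exp(1j * alpha * X) * G * M
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

X = np.zeros((n, n))
X[10:18, 10:18] = 1.0  # toy object
Y = encode(X)
print(Y.shape)  # (28, 28)
```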
The vortex encoder combines an on-axis vortex phase with a radial quadratic phase,
M_V(r,ϕ) = e^{imϕ}e^{iπr²/(λf)},
- where f is the effective focal length of the radial quadratic phase, λ is the wavelength of light, m is an on-axis topological charge, and w is the width of the Gaussian beam illuminating the mask. FIGS. 15(b,c) show diffractive elements with m = 1, 3. The second pair is composed of random masks, where each pixel of the transmitted pattern is encoded with a random phase from 0 to 2π. The mask is also illuminated with the same Gaussian beam. On the training side, we work with a range of images composed of 28×28 patterns that are random X_R, Fourier-based X_F, or shapes related to a vortex phase X_V.
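Such a vortex diffractive element can be sketched as the azimuthal phase e^{imϕ} multiplied by the radial quadratic phase e^{iπr²/(λf)} under Gaussian illumination. All dimensions below (grid, pixel pitch, focal length, beam width) are illustrative assumptions, not values from the source:

```python
import numpy as np

def vortex_mask(n=256, m=3, wavelength=532e-9, f=0.1, w=2e-3, pitch=10e-6):
    """Vortex phase e^{i*m*phi} times the radial quadratic phase
    e^{i*pi*r^2/(lambda*f)}, windowed by a Gaussian beam of width w."""
    c = (np.arange(n) - n / 2) * pitch
    xx, yy = np.meshgrid(c, c)
    r2 = xx ** 2 + yy ** 2
    phi = np.arctan2(yy, xx)
    phase = m * phi + np.pi * r2 / (wavelength * f)
    return np.exp(1j * phase) * np.exp(-r2 / w ** 2)

M_V = vortex_mask(m=1)
print(M_V.shape)  # (256, 256)
```

The on-axis topological charge shows up as a 2πm phase winding around the mask center.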
X_F(s_j, s_k, n)(x, y) = G_n(x, y)[1 + cos(s_j x + s_k y + φ_k)]/2, (6)
- where combinations of s_j = 2πj/d_x and s_k = 2πk/d_y span the Fourier space intended to reproduce any arbitrary image, and G_n represents a scanning Gaussian beam with varied width and center,
G_n(x, y) = e^{−[(x−x_n)² + (y−y_n)²]/w_n²}, (7)
- where x_n, y_n, and w_n tune the size of the UTS to be comparable to the others. The size of the dataset also changes the phase shift φ_k = 2πk/N, where N is the number of uniquely-valued wave fringes with wave numbers s_j, s_k in X_F.
X_V patterns combine shapes related to a vortex phase with the scanning Gaussian beam
G_{j,k,n}(x, y) = e^{−[(x−x_j)² + (y−y_k)²]/w_n²}. (8)
E = −(1/log K) Σ_{i=1}^{K} σ̄_i log σ̄_i,
- where the argument σ̄_i is the normalized magnitude of the singular values, or the modal coefficients of the image, given as
σ̄_i = σ_i / Σ_{j=1}^{K} σ_j,
- where K is the number of singular values and σ_i are the singular values.
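The SVD-entropy measure is straightforward to compute. The sketch below normalizes the singular values by their sum and scales by 1/log K so the result lies in [0, 1]; the example images (a smooth rank-1 window versus pixelwise noise) are illustrative:

```python
import numpy as np

def svd_entropy(img):
    """2D SVD-entropy: Shannon entropy of the normalized singular values,
    scaled by 1/log(K). Smooth, low-rank images score near 0; images with
    many discontinuities score near 1."""
    s = np.linalg.svd(np.asarray(img, float), compute_uv=False)
    s = s[s > 1e-12]                    # drop numerically-zero modes
    p = s / s.sum()                     # normalized singular values
    K = len(p)
    return float(-(p * np.log(p)).sum() / np.log(K)) if K > 1 else 0.0

rank1 = np.outer(np.hanning(28), np.hanning(28))   # smooth, low-entropy
noise = np.random.default_rng(0).random((28, 28))  # discontinuous, high-entropy
print(svd_entropy(rank1) < svd_entropy(noise))     # True
```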
Claims (17)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/314,895 US12298524B2 (en) | 2020-05-08 | 2021-05-07 | Multi-lens system for imaging in low light conditions and method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202062704412P | 2020-05-08 | 2020-05-08 | |
| US17/314,895 US12298524B2 (en) | 2020-05-08 | 2021-05-07 | Multi-lens system for imaging in low light conditions and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210349324A1 US20210349324A1 (en) | 2021-11-11 |
| US12298524B2 true US12298524B2 (en) | 2025-05-13 |
Family
ID=78412565
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/314,895 Active 2043-10-11 US12298524B2 (en) | 2020-05-08 | 2021-05-07 | Multi-lens system for imaging in low light conditions and method |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US12298524B2 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180284025A1 (en) * | 2017-03-31 | 2018-10-04 | Richard Gozali | Oam microscope for edge enhancement of biomedical and condensed matter samples and objects |
| US20200351454A1 (en) * | 2019-04-30 | 2020-11-05 | William Marsh Rice University | Wish: wavefront imaging sensor with high resolution |
Non-Patent Citations (1)
| Title |
|---|
| Novak, K., "Compact vortex wavefront coding camera", Proc. SPIE 11396, Computational Imaging V, 113960O, (Apr. 12, 2020), 10 pgs. |
Legal Events
| Code | Title | Description |
|---|---|---|
| FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| FEPP | Fee payment procedure | ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| AS | Assignment | Owner: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIFORNIA. ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: VUONG, LUAT; MUMINOV, BAURZHAN; FENG, JI; SIGNING DATES FROM 20240717 TO 20240725; REEL/FRAME: 068923/0859 |
| AS | Assignment | Owner: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIFORNIA. CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER 17314894 PREVIOUSLY RECORDED AT REEL: 68923 FRAME: 859. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT; ASSIGNORS: VUONG, LUAT; MUMINOV, BAURZHAN; FENG, JI; SIGNING DATES FROM 20240717 TO 20240725; REEL/FRAME: 069238/0433 |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STCF | Information on status: patent grant | PATENTED CASE |