CN113298700B - High-resolution image reconstruction method in scattering scene - Google Patents

High-resolution image reconstruction method in scattering scene

Info

Publication number
CN113298700B
CN113298700B (application CN202110598361.0A)
Authority
CN
China
Prior art keywords
scattering
scene
scenes
image reconstruction
resolution image
Prior art date
Legal status
Active
Application number
CN202110598361.0A
Other languages
Chinese (zh)
Other versions
CN113298700A (en)
Inventor
程雪岷
陈棵
高子琪
王安琪
郝群
Current Assignee
Beijing Institute of Technology BIT
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Beijing Institute of Technology BIT
Shenzhen International Graduate School of Tsinghua University
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT and Shenzhen International Graduate School of Tsinghua University
Priority to CN202110598361.0A
Publication of CN113298700A
Application granted
Publication of CN113298700B
Legal status: Active
Anticipated expiration

Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

A method of high-resolution image reconstruction in a scattering scene, comprising the steps of: S1, for different scattering scenes, measuring the point spread function (PSF) of each scattering scene with a PSF measuring device, and extracting the line spread function (LSF) from the measured PSF by dimension reduction to characterize each scattering scene, thereby classifying the different scattering scenes; S2, reconstructing an image from the one-dimensional signal acquired by a computational ghost imaging system in a scattering scene according to the scene classification result, wherein an image-reconstruction deep convolutional neural network is adaptively employed to obtain a high-resolution image, and the training set of the reconstruction network is derived from one-dimensional data acquired in the scattering scene that best matches the classification features of the current scene, so that adaptive high-resolution image reconstruction in different scattering scenes is achieved. The method can quickly classify different scattering scenes and achieves fast, high-resolution imaging in each of them.

Description

High-resolution image reconstruction method in scattering scene
Technical Field
The application relates to an optical imaging technology, in particular to a high-resolution image reconstruction method in a scattering scene.
Background
To date, computational ghost imaging combined with deep learning has been one of the most prominent techniques for achieving high-resolution imaging in the field of scattering imaging. Imaging in scattering scenes is of great research value and has wide applications in underwater imaging, imaging in extreme weather, biological tissue imaging and the like. In a conventional optical imaging system, the spherical wavefronts emitted by the object must not undergo severe distortion before they enter the entrance pupil of the imaging system; otherwise the lens imaging system cannot capture a clear image of the object. Conventional imaging methods therefore fail when non-uniform media such as fog, clouds or biological tissue are present in the imaging path. Ghost imaging acquires light intensity signals in the light field containing the target and then recovers the target image with an intensity-correlation algorithm; because the fluctuation trend of the light intensity signal collected in a scattering scene is preserved, ghost imaging can be applied well in scattering environments. Over more than twenty years ghost imaging has developed towards practical use: the reference light path is no longer required, which greatly reduces the burden on the hardware system, and the technique has evolved into today's computational ghost imaging. Its reconstruction algorithms mainly include correlation reconstruction, compressed-sensing reconstruction and deep-learning reconstruction; compared with the other two, the deep-learning algorithm can sample the light intensity signal in a scattering scene at a rate below the Nyquist sampling frequency and thus achieve fast imaging, and at the same time neither the modulation patterns used for sampling nor their order needs to be known, which greatly reduces the cost of image reconstruction and improves the quality of the reconstructed image. However, when the actual application scene differs significantly from the scenes contained in the acquired data set, it is difficult for deep-learning-based computational ghost imaging to reconstruct a high-quality image. In practice, facing different scattering scenes is very common, so improving the generality of deep-learning-based computational ghost imaging across different scattering scenes is an important research direction for moving it toward practical application. To address this, some attempts train the network on a data set containing a large number of different scattering scenes; however, the amount of data required is very large and difficult to obtain, the training time becomes excessive, and because the data set is too complex it remains difficult to reconstruct high-quality images.
It should be noted that the information disclosed in the above background section is only intended to aid understanding of the background of the application, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The main object of the present application is to overcome the above-mentioned drawbacks of the prior art, and to provide a method for reconstructing a high-resolution image in a scattering scene, so as to achieve adaptive high-resolution image reconstruction in different scattering scenes.
In order to achieve the above purpose, the present application adopts the following technical scheme:
a method of high resolution image reconstruction in a scattering scene, comprising the steps of:
s1, aiming at different scattering scenes, measuring the Point Spread Function (PSF) of each scattering scene by using a point spread function (Point Spread Function, PSF) measuring device, and extracting a line spread function (Line Spread Function, LSF) according to the measured point spread function PSF dimension reduction to represent each scattering scene so as to realize the rapid and accurate classification of different scattering scenes;
s2, carrying out image reconstruction on one-dimensional signals acquired by a computing ghost imaging system in a scattering scene according to a classification result of the scattering scene, wherein the image reconstruction is carried out by adopting an image reconstruction depth convolution neural network in a self-adaption mode so as to obtain a high-resolution image, and a training set of the reconstruction network is derived from one-dimensional data acquired in the scattering scene which is most matched with the classification characteristic of the scattering scene, so that self-adaption high-resolution image reconstruction in different scattering scenes is realized.
The method provided by the application can rapidly classify different scattering scenes and achieves fast, high-resolution imaging in them.
Further:
In step S1, Gaussian fitting is performed on the line spread function, and the peak value and standard deviation of the fitted curve are extracted to characterize each scattering scene.
In step S1, the scattering media representing the different scattering scenes are placed in the point spread function PSF measuring device for measurement. To reflect the change and divergence of the overall light intensity of the parallel laser beam after it passes through the scattering medium, the light intensity values of the obtained PSF data along the Y direction are accumulated onto the X direction to obtain the line spread function LSF; Gaussian fitting is then performed on the LSF with a Gaussian function to obtain an LSF fitting curve that follows a Gaussian distribution, and the characteristics of the different scattering scenes are represented by the peak value and standard deviation of the LSF fitting curve. These two curve parameters robustly reflect the overall change and divergence of the laser intensity after it passes through the scattering scene.
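As a minimal sketch of this LSF extraction and Gaussian fitting step (the array names, image orientation and initial-guess values below are illustrative assumptions, not taken from the patent):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, b, c):
    # a: peak value, b: abscissa of the peak, c: standard deviation
    return a * np.exp(-(x - b) ** 2 / (2.0 * c ** 2))

def characterize_scene(psf_image):
    """Reduce a measured 2-D PSF to a 1-D LSF and fit a Gaussian to it.

    psf_image: 2-D array of measured intensities (rows = Y, columns = X).
    Returns the fitted parameters (a, b, c) and the LSF itself.
    """
    lsf = psf_image.sum(axis=0)            # accumulate all Y intensities onto the X axis
    x = np.arange(lsf.size)
    p0 = [lsf.max(), float(np.argmax(lsf)), max(lsf.size / 10.0, 1.0)]  # rough initial guess
    (a, b, c), _ = curve_fit(gaussian, x, lsf, p0=p0)
    return (a, b, c), lsf
```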
In order to improve the classification speed and to avoid the excessively long processing time of the PSF data measured for a large number of scattering scenes, a scattering-scene parameter extraction network with Gaussian fitting characteristics, obtained by training on a simulated data set, is used to extract the peak and standard deviation of the corresponding LSF directly and rapidly from the PSF measured by the PSF measuring device, so that fast feature extraction and classification of different scattering scenes are realized.
The fitting function used for feature fitting of the line spread function is:

f(x) = a·exp(−(x − b)²/(2c²)) (1)

where the parameter a is the peak value of the Gaussian function and also represents the maximum value of the collected light intensity, b is the abscissa corresponding to the peak value, and c is the standard deviation of the Gaussian function;

classification is performed according to the parameters a and c of the fitting function obtained after fitting the scattering medium.
The fit determination coefficient (R-square) is not less than 0.99.
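Building on the fitting sketch above, classification by the parameters a and c could, for example, look like the following; the Euclidean distance over (a, c) and the reference values are illustrative assumptions:

```python
import numpy as np

def r_square(y, y_fit):
    """Determination coefficient of a fit; the application requires it to be >= 0.99."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def classify_scene(a, c, known_scenes):
    """known_scenes: dict mapping a scene label to its reference (a, c) parameters.
    Returns the label whose parameters are closest to the measured (a, c)."""
    return min(known_scenes,
               key=lambda k: (known_scenes[k][0] - a) ** 2 + (known_scenes[k][1] - c) ** 2)

# Example (hypothetical reference values), reusing gaussian/characterize_scene from above:
# (a, b, c), lsf = characterize_scene(psf_image)
# assert r_square(lsf, gaussian(np.arange(lsf.size), a, b, c)) >= 0.99
# label = classify_scene(a, c, {"scene_1": (520.0, 11.8), "scene_2": (370.0, 23.4)})
```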
The PSF measuring device comprises a laser, a focusing lens, a pinhole, a collimating lens, an axicon and a camera which are sequentially arranged on an optical path, wherein the scattering medium is arranged between the pinhole and the collimating lens to measure.
Step S1 further includes: simulating PSF images of different scattering scenes to obtain simulated data sets of the PSFs of different scattering scenes, which are used to train a network to obtain the scattering-scene parameter extraction network with Gaussian fitting characteristics.
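Purely as an illustration of what such a parameter extraction network might look like, a sketch follows; the layer widths, input resolution and loss are assumptions, not the architecture actually used by the application:

```python
import torch
import torch.nn as nn

class PSFParamNet(nn.Module):
    """Regresses the LSF peak value and standard deviation directly from a PSF image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # outputs (peak a, standard deviation c)

    def forward(self, psf):          # psf: (batch, 1, H, W)
        return self.head(self.features(psf).flatten(1))

# Training on the simulated PSF data set would minimize, e.g., an L2 loss between the
# predicted and the ground-truth (a, c) parameters:
# loss = nn.functional.mse_loss(PSFParamNet()(psf_batch), param_batch)
```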
In step S2, high-resolution image reconstruction is carried out by a deep-learning-based computational ghost imaging method; the computational ghost imaging system used for imaging comprises a laser, a digital micromirror array, a plane mirror, a scattering scene, a target image, a converging prism and a single-photon camera which are sequentially arranged on an optical path.
In computational ghost imaging, for a target T(x, y) to be imaged, where (x, y) are the coordinates of the target in the light field, the light incident on the target is modulated by a speckle pattern displayed on the digital micromirror array, denoted I_m(x, y) with m = 1, …, M; the light intensity signal emerging from the target is collected by a single-pixel camera, yielding the light intensity value:

S_m = ∫T(x, y)I_m(x, y)dxdy (2)

The speckle pattern is a Fourier pattern, with which the spectrum of the target can be reconstructed directly from the collected light intensity sequence, and the Fourier basis is used to reconstruct the low-frequency features of the target. The Fourier basis pattern is expressed as:

T(x, y; f_x, f_y; θ) = a + b·cos(2πf_x x + 2πf_y y + θ) (3)

where a, b are the translation and scaling coefficients, (f_x, f_y) are the frequency-domain coordinates, the value at the frequency-domain point (f_x, f_y) is denoted T(x, y; f_x, f_y; θ), and each pair of frequency-domain points is controlled by four different phases θ = 0, π/2, π, 3π/2.
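A sketch of how the Fourier basis patterns of equation (3) and the bucket measurements of equation (2) could be simulated; the grid size and the a, b values are placeholder assumptions:

```python
import numpy as np

def fourier_pattern(fx, fy, theta, size=64, a=0.5, b=0.5):
    """Fourier basis pattern of equation (3) on a size x size grid (fx, fy in cycles per frame)."""
    y, x = np.mgrid[0:size, 0:size] / float(size)
    return a + b * np.cos(2 * np.pi * fx * x + 2 * np.pi * fy * y + theta)

def measure(target, fx, fy, theta):
    """Single-pixel (bucket) value of equation (2): total intensity transmitted by the target."""
    return float(np.sum(target * fourier_pattern(fx, fy, theta, size=target.shape[0])))
```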
The target is sampled to obtain a one-dimensional 4096×1 light intensity signal, which is input as the input signal into a trained image-reconstruction deep convolutional neural network for reconstruction; the reconstructed image is expressed as:

T_DLCGI(x, y) = argmin||T − R{S_m}||² + λΨ(T) (4)

where T represents the original image of the target, R{S_m} represents the output of the neural network, Ψ(T) represents the regularization term, λ represents the weight, and T_DLCGI(x, y) represents the finally acquired high-resolution image.
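One way to read equation (4) as a training objective is sketched below; the patent only states that Ψ(T) is a regularization term, so the use of total variation for Ψ and the value of λ are assumptions made here for illustration:

```python
import torch

def reconstruction_loss(t_pred, t_true, lam=1e-4):
    """Data term ||T - R{S_m}||^2 plus lambda * Psi(T), with Psi taken as total variation."""
    data_term = torch.mean((t_true - t_pred) ** 2)
    tv = torch.mean(torch.abs(t_pred[:, :, 1:, :] - t_pred[:, :, :-1, :])) + \
         torch.mean(torch.abs(t_pred[:, :, :, 1:] - t_pred[:, :, :, :-1]))
    return data_term + lam * tv   # t_pred, t_true: (batch, 1, H, W) tensors
```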
The application has the following beneficial effects:
the application provides a self-adaptive high-resolution image reconstruction method based on point spread function classification in different scattering scenes, and high-precision and high-quality rapid high-resolution imaging in different scattering scenes is realized. By the method, a scattering characteristic scheme corresponding to various scattering scenes such as underwater, extreme weather, biological tissues and the like can be established, and the method has self-adaptability and is mainly characterized in that: and calculating the point spread function of different scattering scenes of the ghost imaging system by measuring the point spread function of the scattering scene and further obtaining a line spread function, wherein the line spread function can more embody the integral light intensity change and the divergence degree of laser after passing through a scattering medium, is more accurate relative to the point spread function and is less influenced by external interference. Further, the obtained line diffusion function has a distribution rule of Gaussian distribution, the Gaussian fitting function is adopted to fit the line diffusion function, and the obtained peak value and standard deviation of the fitting curve are used for characterizing and classifying scattering scenes. In order to improve the classification speed, a scattering scene parameter extraction network with good Gaussian fitting characteristics is utilized to directly extract parameters of the measured point spread function image, and the peak value and standard deviation parameters of the corresponding line spread function are directly and rapidly extracted. According to the classification result of the scattering scene parameter extraction network on the scattering scene, the one-dimensional signal acquired by the ghost imaging system is calculated in the scattering scene, the image reconstruction depth convolution neural network obtained by training the classification characteristic and the data acquired in the closest scattering scene is adaptively adopted, and the image reconstruction is carried out to obtain a high-resolution image, so that the self-adaptive high-resolution image reconstruction in different scattering scenes is realized.
The image-reconstruction deep convolutional neural network can reconstruct high-resolution images directly from the acquired one-dimensional signals, which increases the imaging speed while meeting the requirement for high resolution. Compared with other methods, the method provided by the application can rapidly classify different scattering scenes and achieves fast, high-resolution imaging in scattering scenes.
Drawings
Fig. 1 is a schematic view of a PSF measuring apparatus according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a scattering scene parameter extraction network according to an embodiment of the application.
FIG. 3 is a schematic diagram of a computed ghost imaging system in one embodiment of the application.
Fig. 4 is a schematic diagram of an image reconstruction depth convolution neural network according to an embodiment of the present application.
Fig. 5A to 5F are LSF fitting curves of different scattering scenes obtained by fitting in an embodiment of the present application, respectively.
Fig. 6 is a reconstructed image of signals acquired in five different scattering scenarios in accordance with an embodiment of the present application.
FIG. 7 is a reconstructed image of signals acquired in an unknown scattering scene according to an embodiment of the present application.
Detailed Description
The following describes embodiments of the present application in detail. It should be emphasized that the following description is merely exemplary in nature and is in no way intended to limit the scope of the application or its applications.
In some embodiments, a method for adaptive high-resolution image reconstruction in different scattering scenes based on point spread function classification comprises the steps of:
s1, aiming at different scattering scenes, measuring the point spread function PSF of each scattering scene by using a built point spread function measuring device, and extracting the line spread function LSF according to the measured PSF dimension reduction to characterize each scattering scene in order to reflect the change and the divergence degree of the overall light intensity of the parallel laser beam after passing through the scattering medium. In order to realize rapid feature extraction, a scattering scene parameter extraction network with Gaussian fitting characteristics obtained through simulation data set training can be used for directly extracting a measured PSF image so as to rapidly obtain the peak value and standard deviation of an LSF fitting curve, and rapid and accurate classification of different scattering scenes is realized.
S2, reconstructing an image from the one-dimensional signal acquired by a computational ghost imaging system in a scattering scene according to the scene classification result, wherein an image-reconstruction deep convolutional neural network is adaptively employed to obtain a high-resolution image, and the training set of the reconstruction network is derived from one-dimensional data acquired in the scattering scene that best matches the classification features of the current scene, so that adaptive high-resolution image reconstruction in different scattering scenes is achieved.
With the method provided by this embodiment of the application, a scattering-characterization scheme can be established for various scattering scenes such as underwater environments, extreme weather and biological tissue, and the method is adaptive. The point spread function of each scattering scene of the computational ghost imaging system is measured and the line spread function is derived from it; the line spread function better reflects the overall change and divergence of the laser intensity after passing through the scattering medium, is more accurate than the point spread function, and is less affected by external interference. The obtained line spread function follows a Gaussian distribution, so it is fitted with a Gaussian fitting function, and the peak value and standard deviation of the fitted curve characterize the scattering scene for classification. To improve the classification speed, a scattering-scene parameter extraction network with good Gaussian fitting characteristics, trained on a simulated data set, is used to extract parameters directly from the measured point spread function image and quickly obtain the peak value and standard deviation of the corresponding line spread function. According to the classification result of this network, the one-dimensional signal acquired by the computational ghost imaging system in the scattering scene is reconstructed with the image-reconstruction deep convolutional neural network trained on data acquired in the scattering scene whose classification features are closest, producing a high-resolution image and thus achieving adaptive high-resolution image reconstruction in different scattering scenes. The image-reconstruction deep convolutional neural network reconstructs high-resolution images directly from the acquired one-dimensional signals, which increases the imaging speed while meeting the requirement for high resolution.
Specific embodiments of the present application are described further below.
Establishing a simulated data set to achieve rapid classification of different scattering scenes
To obtain the PSF of a scattering scene, a corresponding PSF measurement device was designed; its schematic diagram is shown in fig. 1. The PSF measurement device includes a laser 1 (e.g., a He-Ne laser), a focusing lens 2, a pinhole 3, a collimating lens 4, an axicon 5 and a camera 6 (e.g., a CCD camera) arranged in order on an optical path, and the scattering medium 7 is placed between the pinhole 3 and the collimating lens 4 for the measurement. The scattering medium represents a scattering scene; it is placed at the designated position in the PSF measurement device for a quick measurement, and the measured data are fitted in MATLAB. To reflect the change and divergence of the overall light intensity of the parallel laser beam after it passes through the scattering medium, all light intensity values in the Y direction of the measured three-dimensional PSF image are accumulated onto the X direction to obtain the line spread function, and a Gaussian function is used to perform feature fitting on this line spread function. The fitting function is:

f(x) = a·exp(−(x − b)²/(2c²)) (1)
The parameter a represents the peak value of the Gaussian function and also the maximum value of the light intensity we collect, b is the abscissa corresponding to the peak value, and c represents the standard deviation of the Gaussian function. Classification is performed according to the two parameters a and c of the function fitted for each scattering medium, and the determination coefficient (R-square) of the fit is not less than 0.99. Scattering media whose a and c values are close to each other are regarded as scattering scenes with similar scattering power. In practical applications, however, we often have to face a large number of scattering scenes, and measuring and fitting each of them is time-consuming. Therefore, we simulate PSF images of different scattering scenes to obtain a simulated data set of the PSFs of different scattering scenes; this data set is used to train a network, yielding a scattering-scene parameter extraction network with Gaussian fitting characteristics. The measured PSF image is input directly into this network to obtain the peak value and standard deviation of the line spread function, achieving fast and accurate extraction. A schematic diagram of the scattering-scene parameter extraction network is shown in fig. 2, in which the numbers denote the sizes of the input and output data, respectively.
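A minimal sketch of how such a simulated PSF data set could be generated is given below; the parameter ranges, image size and noise level are illustrative assumptions:

```python
import numpy as np

def simulate_psf(peak, sigma, size=128, noise=0.01, rng=None):
    """Synthesize a 2-D Gaussian PSF image whose LSF has the given peak value and standard deviation."""
    rng = rng if rng is not None else np.random.default_rng()
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = size / 2.0
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    psf *= peak / psf.sum(axis=0).max()          # scale so that the LSF peak equals `peak`
    return psf + noise * rng.standard_normal((size, size))

# Hypothetical data set of (PSF image, (peak, sigma)) pairs for training the extraction network:
# dataset = [(simulate_psf(p, s), (p, s))
#            for p in np.linspace(100, 1000, 10)
#            for s in np.linspace(2, 30, 10)]
```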
Fast high-resolution image reconstruction method for computational ghost imaging based on deep learning
For high-resolution imaging of a target in a scattering scene, we built a computational ghost imaging system; its schematic diagram is shown in fig. 3. It comprises a laser 8 (e.g., a He-Ne laser), a digital micromirror array 9, a plane mirror 10, a scattering scene 11, a target image 12, a converging prism 13 and a single-photon camera 14 arranged in order on an optical path. In computational ghost imaging we assume the target to be imaged is T(x, y), where (x, y) are the coordinates of the target in the light field; the light incident on the target is modulated by a speckle pattern displayed on the digital micromirror array, denoted I_m(x, y) with m = 1, …, M, and the light intensity signal emerging from the target is collected by a single-pixel camera, yielding the light intensity value:

S_m = ∫T(x, y)I_m(x, y)dxdy (2)

The speckle pattern adopted here is a Fourier pattern, with which the spectrum of the target can be reconstructed directly from the collected light intensity sequence; because about 90% of the target's main effective information lies in the low-frequency part, the Fourier basis can accurately reconstruct the low-frequency features of the target. The Fourier basis pattern is expressed in equation (3):

T(x, y; f_x, f_y; θ) = a + b·cos(2πf_x x + 2πf_y y + θ) (3)

where a, b are the translation and scaling coefficients, (f_x, f_y) are the frequency-domain coordinates, the value at the frequency-domain point (f_x, f_y) is denoted T(x, y; f_x, f_y; θ), and each pair of frequency-domain points is controlled by four different phases θ = 0, π/2, π, 3π/2.
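In conventional Fourier single-pixel imaging, the four phase-shifted measurements at each sampled frequency are combined into one complex Fourier coefficient and the image is recovered by an inverse Fourier transform; the sketch below follows that common practice and is an assumption for illustration, not a step stated by the patent:

```python
import numpy as np

def assemble_spectrum(measurements, freqs, size=64):
    """measurements: dict mapping (fx, fy, theta) -> bucket value S_m (theta in {0, pi/2, pi, 3pi/2}),
    freqs: list of sampled integer (fx, fy) frequency pairs.
    Builds the complex spectrum with the four-step formula and returns its inverse FFT."""
    spectrum = np.zeros((size, size), dtype=complex)
    for fx, fy in freqs:
        coeff = (measurements[(fx, fy, 0.0)] - measurements[(fx, fy, np.pi)]) \
                + 1j * (measurements[(fx, fy, np.pi / 2)] - measurements[(fx, fy, 3 * np.pi / 2)])
        spectrum[fy % size, fx % size] = coeff
    return np.real(np.fft.ifft2(spectrum))
```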
To achieve fast image reconstruction, the size of the target image is 64×64 and the target is sampled at a sampling rate of 25%, yielding a one-dimensional 4096×1 light intensity signal that is input as the input signal into the trained deep neural network for reconstruction. In the neural-network reconstruction, the reconstruction is treated as an ill-posed problem, and the reconstructed image is expressed as:

T_DLCGI(x, y) = argmin||T − R{S_m}||² + λΨ(T) (4)

where T represents the original image of the target, R{S_m} represents the output of the neural network, Ψ(T) represents the regularization term used to prevent overfitting, λ represents the weight, and T_DLCGI(x, y) represents the high-resolution image finally obtained by the deep-learning-based fast computational ghost imaging reconstruction method. The designed deep neural network is shown in fig. 4, in which the numbers denote the sizes of the input and output data, respectively.
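The actual architecture is the one shown in fig. 4; purely as an illustration of the input and output shapes (4096×1 signal in, 64×64 image out), a network of this kind could be sketched as follows, with the layer widths being assumptions:

```python
import torch
import torch.nn as nn

class GhostImagingNet(nn.Module):
    """Maps a 4096x1 bucket-signal vector to a 64x64 reconstructed image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, s):                   # s: (batch, 4096) light intensity signals
        x = self.fc(s).view(-1, 1, 64, 64)  # reshape to a 64x64 feature map
        return self.decoder(x)
```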
Test results
The rapid classification method for different scattering scenes based on the simulated data set and the deep-learning-based fast high-resolution computational ghost imaging reconstruction method described above were applied to different scattering scenes measured in real experiments, giving the test results shown in fig. 5A to 5F. Six scattering media with different Gaussian fitting parameters were prepared to represent the scattering scenes; the fitted curves are shown in fig. 5A to 5F, where (1), (2), (3), (4), (5) and (6) denote the LSF Gaussian fitting curves of the six different scattering media measured with the PSF measurement device, and the corresponding parameters are extracted and provided by the network. Data were acquired in the five scattering scenes (1), (2), (3), (5) and (6) to train the networks and obtain the corresponding image-reconstruction neural networks. One-dimensional data acquired in these five scattering scenes but not contained in the training sets were then input to the networks to examine the reconstruction performance; the experimental results are shown in fig. 6. Compared with the original images, the PSNR values of the images reconstructed by the networks from the one-dimensional data acquired in the five scattering scenes are all greater than 20 dB, so they can be regarded as high-quality images very close to the originals. To further verify the feasibility of the method, the unknown scattering scene (4) was introduced and its parameters were extracted with the network; because the parameters of scene (4) are closest to those of scene (5), the data acquired in the scattering scene represented by (4) were reconstructed with the image-reconstruction neural network corresponding to (5). The experimental result is shown in fig. 7, and the PSNR is still above 20 dB, which shows that the method can still reconstruct a high-quality image when facing an unknown scene. At the same time, because the classification parameters are quantified and visualizable, the accuracy of the classification is ensured, and fast classification and fast image reconstruction are achieved by means of the scattering-scene parameter extraction network and the image-reconstruction neural network.
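The PSNR figure of merit quoted above can be computed with the standard definition below (assuming images normalized to a peak value of 1):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between the original and the reconstructed image."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```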
Test results show that, compared with other methods, the method provided by the application can rapidly and accurately classify different scattering scenes and achieves fast, high-resolution imaging in different scattering scenes.
The background section of the present application may contain background information about the problems or environments of the present application and is not necessarily descriptive of the prior art. Accordingly, inclusion in the background section is not an admission of prior art by the applicant.
The foregoing is a further detailed description of the application in connection with specific/preferred embodiments, and it is not intended that the application be limited to such description. It will be apparent to those skilled in the art that several alternatives or modifications can be made to the described embodiments without departing from the spirit of the application, and these alternatives or modifications should be considered to be within the scope of the application. In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "preferred embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art may combine the features of the different embodiments or examples described in this specification without contradiction. Although embodiments of the present application and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the application as defined by the appended claims.

Claims (8)

1. A method for reconstructing a high resolution image in a scattering scene, comprising the steps of:
S1, for different scattering scenes, measuring the point spread function PSF of each scattering scene with a point spread function PSF measuring device, and extracting the line spread function LSF from the measured PSF by dimension reduction to characterize each scattering scene, thereby classifying the different scattering scenes; wherein Gaussian fitting is performed on the line spread function and the peak value and standard deviation of the fitted curve are extracted to characterize each scattering scene; and wherein a scattering-scene parameter extraction network with Gaussian fitting characteristics, obtained by training on a simulated data set of PSFs of different scattering scenes, is used to extract the peak and standard deviation of the corresponding LSF directly from the PSF measured by the PSF measuring device, thereby achieving feature extraction and classification of the different scattering scenes;
S2, reconstructing an image from the one-dimensional signal acquired by a computational ghost imaging system in a scattering scene according to the scene classification result, wherein an image-reconstruction deep convolutional neural network is adaptively employed to obtain a high-resolution image, and the training set of the reconstruction network is derived from one-dimensional data acquired in the scattering scene that best matches the classification features of the current scene, thereby achieving adaptive high-resolution image reconstruction in different scattering scenes.
2. The method for reconstructing a high-resolution image as recited in claim 1, wherein in step S1, the scattering media representing different scattering scenes are measured in the point spread function PSF measuring device, the light intensity values of the obtained PSF data along the Y direction are accumulated onto the X direction to obtain the line spread function LSF, Gaussian fitting is then performed on the LSF with a Gaussian function to obtain an LSF fitting curve following a Gaussian distribution, and the characteristics of the different scattering scenes are characterized by the peak value and standard deviation of the LSF fitting curve.
3. The high resolution image reconstruction method as set forth in claim 1, wherein the fitting function for feature fitting of the line spread function is:
f(x) = a·exp(−(x − b)²/(2c²)) (1)
where the parameter a is the peak value of the Gaussian function and also represents the maximum value of the collected light intensity, b is the abscissa corresponding to the peak value, and c is the standard deviation of the Gaussian function;
classification is performed according to the parameters a and c of the fitting function obtained after fitting the scattering medium.
4. A high resolution image reconstruction method as defined in any one of claims 1 to 3, wherein the PSF measurement device comprises a laser, a focusing lens, a pinhole, a collimating lens, an axicon and a camera disposed in order on an optical path, wherein a scattering medium is interposed between the pinhole and the collimating lens to perform the measurement.
5. A high resolution image reconstruction method as claimed in any one of claims 1 to 3, wherein step S1 further comprises: and simulating PSF images of different scattering scenes to obtain simulation data sets of PSFs of different scattering scenes, and training a network to obtain the scattering scene parameter extraction network with the Gaussian fitting characteristic.
6. A high-resolution image reconstruction method according to any one of claims 1 to 3, wherein in step S2, high-resolution image reconstruction is performed by a deep-learning-based computational ghost imaging method; the computational ghost imaging system used for imaging comprises a laser, a digital micromirror array, a plane mirror, a scattering scene, a target image, a converging prism and a single-photon camera which are sequentially arranged on an optical path.
7. A high resolution image reconstruction method as set forth in claim 6, wherein in computational ghost imaging, for a target T(x, y) to be imaged, where (x, y) are the coordinates of the target in the light field, the light incident on the target is modulated by a speckle pattern displayed on the digital micromirror array, denoted I_m(x, y) with m = 1, …, M, and the light intensity signal emerging from the target is collected by a single-pixel camera, yielding the light intensity value:
S_m = ∫T(x, y)I_m(x, y)dxdy (2)
the speckle pattern is a Fourier pattern, with which the spectrum of the target can be reconstructed directly from the collected light intensity sequence, and the Fourier basis is used to reconstruct the low-frequency features of the target, the Fourier basis pattern being expressed as:
T(x, y; f_x, f_y; θ) = a + b·cos(2πf_x x + 2πf_y y + θ) (3)
where a, b are the translation and scaling coefficients, (f_x, f_y) are the frequency-domain coordinates, the value at the frequency-domain point (f_x, f_y) is denoted T(x, y; f_x, f_y; θ), and each pair of frequency-domain points is controlled by four different phases θ = 0, π/2, π, 3π/2.
8. The method for reconstructing a high-resolution image as recited in claim 6, wherein the target is sampled to obtain a one-dimensional 4096×1 light intensity signal as an input signal, the light intensity signal is input into a trained image-reconstruction deep convolutional neural network for reconstruction, and the reconstructed image is expressed as:
T_DLCGI(x, y) = argmin||T − R{S_m}||² + λΨ(T) (4)
where T represents the original image of the target, R{S_m} represents the output of the neural network, Ψ(T) represents the regularization term, λ represents the weight, and T_DLCGI(x, y) represents the finally acquired high-resolution image.
CN202110598361.0A 2021-05-31 2021-05-31 High-resolution image reconstruction method in scattering scene Active CN113298700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110598361.0A CN113298700B (en) 2021-05-31 2021-05-31 High-resolution image reconstruction method in scattering scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110598361.0A CN113298700B (en) 2021-05-31 2021-05-31 High-resolution image reconstruction method in scattering scene

Publications (2)

Publication Number Publication Date
CN113298700A CN113298700A (en) 2021-08-24
CN113298700B true CN113298700B (en) 2023-09-05

Family

ID=77326173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110598361.0A Active CN113298700B (en) 2021-05-31 2021-05-31 High-resolution image reconstruction method in scattering scene

Country Status (1)

Country Link
CN (1) CN113298700B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393392A (en) * 2021-06-11 2021-09-14 清华大学深圳国际研究生院 Dynamic target ghost imaging system and method based on neural network
CN114518654B (en) * 2022-02-11 2023-05-09 南京大学 High-resolution large-depth-of-field imaging method


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6295392B1 (en) * 1998-05-20 2001-09-25 Itt Manufacturing Enterprises, Inc. Super resolution methods for electro-optical systems
CN107545549A (en) * 2017-07-21 2018-01-05 南京航空航天大学 Point spread function method of estimation is defocused based on one-dimensional spectrum curve
CN110443882A (en) * 2019-07-05 2019-11-12 清华大学 Light field microscopic three-dimensional method for reconstructing and device based on deep learning algorithm
CN111479097A (en) * 2020-03-25 2020-07-31 清华大学 Scattering lens imaging system based on deep learning
CN112200264A (en) * 2020-10-26 2021-01-08 北京理工大学 High-flux imaging-free classification method and device based on scattering multiplexing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Computational Ghost Imaging in Scattering Media Using Simulation-Based Deep Learning";Ziqi Gao 等;《IEEE Photonics Journal》;20201031;1-16 *

Also Published As

Publication number Publication date
CN113298700A (en) 2021-08-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20221122
Address after: Second floor, building a, Tsinghua campus, Shenzhen University Town, Xili street, Nanshan District, Shenzhen City, Guangdong Province
Applicant after: Shenzhen International Graduate School of Tsinghua University
Applicant after: BEIJING INSTITUTE OF TECHNOLOGY
Address before: Second floor, building a, Tsinghua campus, Shenzhen University Town, Xili street, Nanshan District, Shenzhen City, Guangdong Province
Applicant before: Shenzhen International Graduate School of Tsinghua University
GR01 Patent grant