CN116699554A - Moving target scattering imaging method and device in extremely low light environment based on deep learning - Google Patents

Moving target scattering imaging method and device in extremely low light environment based on deep learning

Info

Publication number
CN116699554A
CN116699554A (application CN202310675567.8A)
Authority
CN
China
Prior art keywords
neural network
imaging
light
scattering
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310675567.8A
Other languages
Chinese (zh)
Inventor
Shi Jianhong (石剑虹)
Sun Hao (孙浩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202310675567.8A priority Critical patent/CN116699554A/en
Publication of CN116699554A publication Critical patent/CN116699554A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method and a device for scattering imaging of a moving target in an extremely low light environment based on deep learning, relating to the field of imaging moving targets in extremely low light environments. Prior information and the recovered speckle are used to obtain accurate position information of the target for track recovery. Targets are placed at N fixed positions within the detection range to obtain sampling charts of the fixed positions for training. A neural network is trained using the sampled speckle of the N fixed positions; after training is completed, its fixed parameters become the neural network for low-photon imaging position classification. A neural network is also trained using the sampled speckle of each fixed position; after training is completed, its fixed parameters become the neural network for reconstructing low-photon images at that fixed position. The invention uses the neural networks of multiple positions to recover the speckle jointly, acquires more environmental information, and thereby improves detection-imaging accuracy under complex target motion.

Description

Moving target scattering imaging method and device in extremely low light environment based on deep learning
Technical Field
The invention relates to the field of imaging, in particular to a method and a device for scattering imaging of a moving target in a very low light environment based on deep learning.
Background
Scattering-imaging recovery under few-photon conditions has important real-world applications, notably in the fields of night detection, biomedical imaging, and satellite observation.
Conventional imaging devices typically require about 10^12 photons per pixel to acquire a high-quality picture, and a high-quality array detector captures on average about 10^5 photons per pixel. In practical applications, however, this photon-count requirement is difficult to meet. In particular, under extremely low light and limited exposure time, the effective photon number can drop to only a few photons.
Under sufficient photon numbers, the signal detected by an ordinary detector is an analog quantity: the optical signal contains a large number of photons whose contributions superpose into the detected light intensity. Acquiring an image by recording the light intensity at each position on the detection target is called the analog mode. However, as the light intensity from the detection target decays, the signal gradually becomes a pulse signal; when the intensity decays to the single-photon level, it becomes a discrete pulse signal with only a small number of pulses. A single photon is generally regarded as the limit of detection, the smallest unit of energy that cannot be further divided. Thus, under low photon-count conditions, the signal exhibits particle characteristics. In this case, light is recorded as single photons: each detected photon's spatial position is determined at the moment of detection, and two-dimensional photon-counting detection is performed. This is the basis of photon-counting imaging.
The single photon camera (SPC) is a two-dimensional array detector in which each pixel is equipped with a single photon avalanche diode (SPAD), equivalent to an independent point detector. In an extremely weak light environment, an arriving photon triggers avalanche breakdown in the SPAD and is counted. Notably, the image formed by such a camera is not a gray-scale image: each pixel records the number of photons falling on it per unit time, and the dominant noise source in extremely weak light is Poisson noise caused by the granularity of light. The required gray-scale image can be obtained by normalizing the photon counts. In addition, the first-photon technique can image a target using the times of flight of the photon sequence from a single-photon detector, further improving its resolution. Because its sensitivity can break through the shot-noise limit, and because of its good signal-to-noise ratio, low power consumption, high quantum efficiency, and small size, the single photon camera is widely used in the field of extremely weak light imaging.
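The photon-counting picture above can be sketched numerically. The snippet below is a minimal illustration (not the patent's actual pipeline): it draws per-pixel Poisson counts whose mean matches the 0.2 photons-per-pixel regime discussed later in the description, then normalizes the counts to a gray-scale image as the text describes. All function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_photon_counts(intensity, mean_photons=0.2):
    # Scale the intensity so the average count per pixel equals mean_photons,
    # then draw per-pixel Poisson counts (the shot-noise model from the text).
    scale = mean_photons * intensity.size / max(intensity.sum(), 1e-12)
    return rng.poisson(scale * intensity)

def counts_to_gray(counts):
    # Normalize integer photon counts into a [0, 1] gray-scale image.
    peak = counts.max()
    return counts / peak if peak > 0 else counts.astype(float)

speckle = rng.random((32, 32))           # stand-in for a speckle intensity pattern
frame = simulate_photon_counts(speckle)  # sparse integer photon counts
gray = counts_to_gray(frame)
```

At 0.2 photons per pixel most pixels record zero counts, which is why the description treats the signal as discrete pulses rather than an analog intensity.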
Combining conventional physics problems with deep learning has proved advantageous, especially in image processing. Traditional scattering imaging focuses only on the single problem of scattering and does not address the complex situation in which three limitations coexist: few-photon counting, scattering, and a moving target. Speckle detected in a low-photon environment contains a large amount of shot noise and has a reduced signal-to-noise ratio, so it is difficult to recover with scattering-imaging methods based on the wave nature of light. A moving target not only changes the light path but also introduces motion-blur errors into traditional long-exposure imaging, further reducing imaging quality. Imaging methods based on learning the scattering medium, or on modeling the transmission process, likewise cannot resolve this dilemma in the complex case. Strong-light methods require scanning and detecting the target multiple times, and scattering imaging under relative motion between detector and target in the few-photon regime is made still more complicated by the coexistence of the motion process and the scattering process in the light path. In recent years, deep-learning methods have been widely applied to the field of extremely low light imaging with good imaging results. However, prior-art deep-learning methods have several shortcomings, such as requiring a large amount of sampled training data for pre-training before the neural network can learn the characteristics of the scattering pattern and thus acquire imaging capability. Moreover, when the target moves, the scattering-medium area involved is large, the correlation between different positions is weak, and effective information cannot be learned.
Therefore, when the relative change of the target's position is large, prior-art deep-learning methods must process a large amount of data during training, and the learning capacity of the neural network cannot be guaranteed. Accordingly, those skilled in the art have been working to develop a method and apparatus for scattering imaging of moving targets in extremely low light environments based on deep learning to solve the problems of the prior art.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides an apparatus and a method for detection imaging in an extremely low light environment with a moving target. It addresses the problem that traditional scattering imaging focuses only on the single problem of scattering and does not address the complex situation in which three limitations coexist: few-photon counting, scattering, and detector-target relative motion.
In order to achieve the above object, the present invention provides a method for scattering imaging of a moving object in a very low light environment based on deep learning, which is characterized in that the method comprises:
step 1, constructing a light path with a scattering medium, adjusting illumination intensity and exposure time, and acquiring detection imaging under extremely weak light;
step 2, placing targets at N fixed positions in a detection range, and acquiring a target scattering imaging sampling chart at each fixed position;
step 3, training a neural network respectively by using the scattering imaging sampling graphs of each fixed position, and forming the neural network after training, wherein the neural network comprises a first neural network for imaging position classification and a second neural network for reconstructing an original sampling graph at a corresponding imaging position in an extremely weak light environment;
and step 4, acquiring a scattering imaging sampling image of the moving target, and acquiring a reconstructed image by using the neural network training acquired in the step 3.
Further, the method comprises the steps of,
the second neural network in the step 3 is optimized, wherein the optimization takes a weak light scattering image as an input of the neural network, outputs a clear image reconstructed by a model, and automatically adjusts training parameters by minimizing a mean square error between the clear image reconstructed by the model and an original clear image, so as to optimize the neural network model.
Further, the method comprises the steps of,
the step 4 specifically comprises the following steps:
step 4-1, inputting the moving object scattering imaging sampling graph into the first neural network obtained in the step 3, and obtaining the position classification information of the moving object;
step 4-2, correcting the position classification information acquired in the step 4-1, wherein the correction refers to prior information that the moving target cannot jump in a non-adjacent interval;
and 4-3, inputting the moving object scattering imaging sampling graph and the position classification information acquired in the step 4-2 into the second neural network acquired in the step 3 to acquire a reconstructed moving object sequence.
Further, the second neural network is a U-shaped network with additional skip connections: each downsampling level of the network is skip-connected to the corresponding upsampling level, and skip connections are also arranged between the convolution layers of the network.
Further, the first neural network is a fully connected network with additional optimizers and random discard connections.
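The two networks just claimed can be sketched as follows. This is a minimal, hypothetical PyTorch sketch assuming 32 x 32 inputs and six position classes (consistent with the embodiment later in the description); the patent does not disclose layer sizes, so every dimension here is illustrative, not the inventors' actual architecture.

```python
import torch
import torch.nn as nn

class PositionClassifier(nn.Module):
    """Fully connected network with dropout ("random discard connections")
    mapping a 32x32 speckle frame to one of n_positions classes."""
    def __init__(self, n_positions=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_positions),
        )

    def forward(self, x):
        return self.net(x)

class TinyUNet(nn.Module):
    """Minimal U-shaped reconstruction network: one down/up level with a
    skip connection concatenating the matching resolutions."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.down(e))
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

x = torch.rand(4, 1, 32, 32)          # toy batch of speckle frames
logits = PositionClassifier()(x)      # per-frame position scores
recon = TinyUNet()(x)                 # per-frame reconstruction
```

A full implementation would use several encoder/decoder levels with a skip at each, as the claim describes, but the concatenation pattern is the same.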
The invention provides a moving target scattering imaging device under a very low light environment based on deep learning, which is characterized by comprising the following components:
a light source for generating an extremely low light source;
a spatial light modulator for loading a plurality of targets using mirror flipping; the light source emits light to be incident to the target of the spatial light modulator;
a focusing assembly for performing focusing on the target reflected light;
at least one primary scattering light path for performing scattering processing on the focused extremely weak light;
the detector is used for collecting the extremely weak light processed by the scattered light path;
and the processor module is used for receiving the optical signals of the detector and performing analysis and executing the moving target scattering imaging method under the extremely low light environment based on the deep learning.
Further, the detector collects light field intensity values at the same frequency as the light source.
Further, the light intensity of the light source is adjustable.
Further, the at least one stage of scattering light path comprises a first frosted glass and a second frosted glass which are sequentially arranged along the light path.
Further, the detector is a single photon camera.
Technical effects
The conception, specific structure, and technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, features, and effects of the present invention.
And establishing a neural network to acquire position information and original image restoration information for the speckle image acquired by the detector. The prior information and the recovered speckle are used to obtain the accurate position information of the target for track recovery.
And placing targets at N fixed positions in the detection range to obtain a sampling chart of the fixed positions for training. The neural network is trained using sampling speckle of N fixed locations, and after training is completed, the fixed parameters become the neural network for low photon imaging location classification. The neural network is trained by using the sampling speckle of each fixed position, and the fixed parameters become the neural network for reconstructing the low-photon imaging at the fixed positions after training is completed.
And correcting the information obtained by the position classification network by considering prior information that the moving target sequence cannot jump in the non-adjacent interval. And then the network is restored through the speckle at the corresponding position to obtain the reconstructed target sequence.
And calculating the cross-correlation of the reconstructed target sequence and the original target at different positions, and taking the maximum value of the cross-correlation value as a track recovery result.
And placing targets at N fixed positions in the detection range to obtain a sampling chart of the fixed positions for training. The neural network is trained using the sampled speckle of the N fixed positions; after training is completed, its fixed parameters become the neural network for low-photon imaging position classification, which is a fully connected network with additional optimizers and random discard (dropout) connections. The classification network limits subsequent training to a limited number of batches, reducing the amount of training required.
A neural network is also trained using the sampled speckle of each fixed position; after training is completed, its fixed parameters become the neural network for reconstructing low-photon images at that fixed position. This neural network is a U-shaped network with additional skip connections: each downsampling level is skip-connected to the corresponding upsampling level, and skip connections are also used between the layers of the convolution stages to increase correlation between levels.
A unique two-step neural network is built to solve this problem: the speckle images acquired by the detector are first classified by position and then restored. Both networks include fully connected layers with additional optimizers and random discard (dropout) connections to expand the information that can be learned.
Drawings
Fig. 1 is a schematic view of a constitution of an image forming apparatus according to an embodiment of the present invention;
FIG. 2 is a flow chart of an imaging method according to an embodiment of the invention;
FIG. 3 is a diagram of one of the target originals in the training set, with pixels 32 x 32, according to one embodiment of the invention;
FIG. 4 shows speckle patterns obtained from the image of FIG. 3 with the scattering device of FIG. 1 under extremely low light conditions: the speckle patterns of the same target original of the training set at six different positions, with 32 x 32 pixels, where the average number of photons received by the detector after scattering is 0.2;
FIG. 5 shows the speckle recovery results for the images of FIG. 4, i.e. the same target original of the training set at six different positions, obtained with the scattering device of FIG. 1 under extremely low light conditions, with 32 x 32 pixels;
FIG. 6 is a graph of a trace of a test moving object versus sample points in the trace in accordance with one embodiment of the present invention;
FIG. 7 is a graph of the position relationship between a trajectory of a test moving object and six positions of a training set in an embodiment of the present invention;
FIG. 8 is a schematic diagram of recovery of a test moving object under the motion sequence of FIG. 6 in one embodiment of the invention.
Reference numerals in the embodiment of the present invention are described below:
1-light source, 2-spatial light modulator, 3-lens, 4-grating, 5-lens, 6-frosted glass one, 7-frosted glass two,
8-detector, 9-processor module.
Detailed Description
The following description of the preferred embodiments of the present invention refers to the accompanying drawings, which make the technical contents thereof more clear and easy to understand. The present invention may be embodied in many different forms of embodiments and the scope of the present invention is not limited to only the embodiments described herein.
In the drawings, like structural elements are referred to by like reference numerals and components having similar structure or function are referred to by like reference numerals. The dimensions and thickness of each component shown in the drawings are arbitrarily shown, and the present invention is not limited to the dimensions and thickness of each component. The thickness of the components is exaggerated in some places in the drawings for clarity of illustration.
The extremely low light environment defined according to the invention refers in particular to the photon-counting regime, in which the average number of photons per pixel is on the order of one or fewer.
As shown in fig. 1, the scatter imaging apparatus used in this embodiment includes, in order from the optical path, a light source 1, a spatial light modulator 2, a lens 3, a grating 4, a lens 5, frosted glass one 6, frosted glass two 7, a detector 8, and a processor module 9.
Light is emitted from the light source 1, passes in sequence through the spatial light modulator 2, the lens 3, the grating 4, the lens 5, the frosted glass one 6, and the frosted glass two 7, and reaches the detector 8. The lens 3 converges the light reflected by the spatial light modulator 2; the grating 4 filters clutter from the light converged by the lens 3; the lens 5 diverges the filtered modulated light and forms a double-lens system together with the lens 3; the frosted glass one 6 scatters the light diverged by the lens 5; the frosted glass two 7 scatters the light scattered by the frosted glass one 6; and the detector 8 detects and counts the scattered light intensity. The detector 8 has a data connection to the processor module 9, which performs moving-target imaging recovery on the data acquired by the detector 8.
Preferably, the detector 8 may acquire light-field intensity values at the same frequency as the spatial light modulator 2. Preferably, the light intensity of the light source 1 is adjustable.
Preferably, the detector 8 is a single photon camera for counting photons, whose sensitivity can break through the shot-noise limit.
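The optical chain above sends light through two frosted-glass diffusers in series before detection. A common simplified way to simulate such a diffuser is a random phase screen followed by an FFT; the sketch below uses that assumed model (it is not derived from the patent, which describes physical hardware) to produce a toy speckle intensity from a toy object.

```python
import numpy as np

rng = np.random.default_rng(2)

def diffuse(field):
    # One frosted-glass pass modeled as a random phase screen followed by
    # free-space-like mixing (a unitary FFT) -- a standard simplified model.
    screen = np.exp(1j * rng.uniform(0, 2 * np.pi, field.shape))
    return np.fft.fft2(field * screen, norm="ortho")

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0                       # toy object (bright square)
field = diffuse(diffuse(target.astype(complex))) # two diffusers in series
speckle_intensity = np.abs(field) ** 2           # what an intensity detector sees
```

Because both the phase screen and the orthonormal FFT are energy-preserving, the total intensity of the speckle equals that of the object, mirroring the fact that the diffusers scramble rather than absorb the light.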
The specific steps of the scattering imaging device for acquiring the scattering map are as follows:
(1) The spatial light modulator 2 loads the target using mirror flipping, and light emitted by the light source 1 strikes the target on the spatial light modulator 2;
(2) The lens 3 condenses the light reflected by the target on the spatial light modulator 2;
(3) The grating 4 filters stray reflected light out of the light condensed by the lens 3;
(4) The lens 5 appropriately diverges the light filtered by the grating 4 and forms a double-lens system with the lens 3;
(5) The frosted glass one 6 scatters the light diverged by the lens 5;
(6) The frosted glass two 7 scatters the light scattered by the frosted glass one 6;
(7) The processor module 9 collects the photon counts of the light detected by the detector 8;
(8) The gray-scale image is obtained by normalizing the photon counts in the processor module 9.
In this embodiment, the exposure time is 2 μs, the average photon number received by the single-photon detector is controlled to be 0.2, and 13,000 scattering images corresponding to 2,600 original clear targets are acquired for 6 different positions of the moving target.
As shown in fig. 2, the present embodiment provides one implementation method of a moving object scattering imaging method in a very low light environment based on deep learning, which specifically includes the following steps:
step 1, constructing a light path with a scattering medium, adjusting illumination intensity and exposure time, and acquiring detection imaging under extremely weak light;
step 2, placing targets at N fixed positions in a detection range, and acquiring a target scattering imaging sampling chart at each fixed position;
step 3, respectively training a neural network by using the scattering imaging sampling graph of each fixed position, forming the neural network after training, wherein the neural network comprises a first neural network for imaging position classification and a second neural network for reconstructing an original sampling graph at a corresponding imaging position in an extremely weak light environment;
and 4, acquiring a scattering imaging sampling image of the moving target, and acquiring a reconstructed image by using the neural network training acquired in the step 3.
The second neural network in the step 3 is optimized as follows: the weak-light scattering image is taken as the input of the neural network, the output is the clear image reconstructed by the model, and the training parameters are adjusted automatically by minimizing the mean square error between the reconstructed clear image and the original clear image, thereby optimizing the neural network model.
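The MSE-minimization loop described above can be sketched as follows. This is a toy illustration only: a hypothetical linear model and random tensors stand in for the real reconstruction network and the speckle/clear-image dataset, which the patent does not publish.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the reconstruction network (1024 -> 1024 linear map).
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(32 * 32, 32 * 32),
                      nn.Unflatten(1, (1, 32, 32)))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                 # mean square error, as the text describes

speckle = torch.rand(16, 1, 32, 32)    # toy batch of low-photon speckle inputs
clear = torch.rand(16, 1, 32, 32)      # corresponding original clear images

losses = []
for _ in range(20):                    # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = loss_fn(model(speckle), clear)
    loss.backward()                    # training parameters adjusted automatically
    optimizer.step()
    losses.append(float(loss))
```

After training, the parameters would be frozen ("fixed parameters"), yielding the reconstruction network used at inference time.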
The step 4 is specifically as follows:
step 4-1, inputting a moving target scattering imaging sampling graph into the first neural network obtained in the step 3, and obtaining position classification information of the moving target;
step 4-2, correcting the position classification information acquired in the step 4-1, where the correction refers to the prior information that the moving target cannot jump across non-adjacent intervals;
and step 4-3, inputting the moving object scattering imaging sampling graph and the position classification information acquired in the step 4-2 into the second neural network acquired in the step 3 to acquire a reconstructed moving object sequence.
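The patent states the prior (a moving target cannot jump between non-adjacent intervals) but not the correction rule itself. One minimal heuristic consistent with that prior, shown purely as an illustrative sketch, replaces any isolated label that is unreachable from both of its neighbors:

```python
def correct_positions(labels):
    # Heuristic: a label differing by more than one interval from BOTH
    # neighbors is an isolated, physically impossible jump; replace it
    # with the previous (reachable) label.
    out = list(labels)
    for i in range(1, len(out) - 1):
        if abs(out[i] - out[i - 1]) > 1 and abs(out[i] - out[i + 1]) > 1:
            out[i] = out[i - 1]
    return out

corrected = correct_positions([2, 2, 5, 3, 3])  # the lone 5 is an impossible jump
```

More elaborate schemes (e.g. Viterbi decoding over an adjacency-constrained transition matrix) would encode the same prior; the key point is that classifier outputs violating physical reachability are overridden before reconstruction.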
Training with datasets classified by position effectively promotes the learning of environmental information and improves the training speed and overall training effect of the network. FIGS. 3-8 show imaging results of one embodiment carried out with the apparatus and method described above.
Traditional scattering imaging focuses only on the single problem of scattering and does not address the complex situation in which the three limitations of few-photon counting, scattering, and a moving target coexist; in that case, track recovery is extremely difficult.
And establishing a neural network to acquire position information and original image restoration information for the speckle image acquired by the detector. The prior information and the recovered speckle are used to obtain the accurate position information of the target for track recovery.
A unique two-step neural network is built to solve this problem: the speckle images acquired by the detector are first classified by position and then restored. Both networks include fully connected layers with additional optimizers and random discard (dropout) connections to expand the information that can be learned.
As another embodiment of the present invention:
step 1, constructing a light path with a scattering medium, and adjusting illumination intensity and exposure time to enable a single photon detector to obtain a scattering diagram of a target under weak light.
And 2, placing targets at N fixed positions in the detection range to obtain a sampling chart of the fixed positions for training.
In this embodiment, the exposure time is 2 μs, the average photon number received by the single-photon detector is controlled to be 0.2, and 13,000 scattering images corresponding to 2,600 original clear targets are acquired for 6 different positions.
And step 3, training the neural network by using sampling speckle of N fixed positions, wherein after training is finished, the fixed parameters become the neural network for classifying the low-photon imaging positions.
In the present embodiment, preferably, the weak-light scattering image is taken as the input of the neural network, and the output is one of the six position classifications.
And 4, respectively training the neural network by using the sampling speckle of each fixed position, and forming the neural network for reconstructing the low-photon imaging at the fixed position by using the fixed parameters after training.
In this embodiment, the weak light scattering image is preferably used as an input of the neural network, and is output as a clear image reconstructed by the model, and parameters are automatically adjusted by minimizing a mean square error between the clear image reconstructed by the model and the original clear image, so as to optimize the neural network model.
Step 5: move the target within the detection range along a set trajectory, and obtain sampling images of the moving positions for testing.
Step 6: reconstruct the low-photon moving-target speckle to be recovered, obtained in step 5, using the neural networks for low-photon imaging position classification and fixed-position reconstruction.
Step 6.1: pass the sampling images obtained in step 5 through the neural network of step 3 to obtain position classification information.
Step 6.2: correct the position classification information obtained in step 6.1 using the prior information that the moving-target sequence cannot jump between non-adjacent intervals.
Step 6.3: input the sampling images obtained in step 5, together with the position classification information obtained in step 6.2, into the corresponding neural network of step 4 to obtain the reconstructed target sequence.
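The adjacency correction of step 6.2 can be sketched in plain Python. The specific clamping rule below (snapping a non-adjacent jump to the nearest adjacent interval) is an illustrative assumption; the description states only that non-adjacent jumps are corrected using the prior.

```python
def correct_positions(predicted):
    """Enforce the prior that a moving target cannot jump between
    non-adjacent position intervals: whenever the classifier's output
    differs from the previous position by more than one interval, the
    jump is treated as a misclassification and clamped to the nearest
    adjacent interval. (Illustrative rule, not the exact correction
    used in the embodiment.)"""
    if not predicted:
        return []
    corrected = [predicted[0]]
    for p in predicted[1:]:
        prev = corrected[-1]
        if abs(p - prev) > 1:
            p = prev + (1 if p > prev else -1)
        corrected.append(p)
    return corrected

# A sequence with an implausible jump from position 1 to position 5:
print(correct_positions([0, 1, 5, 2, 3]))  # [0, 1, 2, 2, 3]
```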
In summary, the invention is applicable to low photon-count conditions, where light exhibits particle characteristics yet useful information can still be obtained with a single-photon detector. It also handles moving targets, acquiring effective information while the target moves within the detection range. The algorithm reduces the number of samples required to train any single neural network, avoids resampling speckle at a new position for deep learning when the target moves, and is therefore more robust. Because more training samples are used overall, the deep-learning method improves the quality of scattering-imaging recovery and tracking in extremely low-light environments.
The foregoing describes preferred embodiments of the present invention in detail. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions obtainable by a person skilled in the art through logical analysis, reasoning, or limited experimentation on the basis of the prior art and the inventive concept shall fall within the scope of protection defined by the claims.

Claims (10)

1. A method for moving-target scattering imaging in an extremely low light environment based on deep learning, characterized by comprising the following steps:
step 1, constructing a light path with a scattering medium, adjusting the illumination intensity and exposure time, and acquiring detection imaging under extremely weak light;
step 2, placing targets at N fixed positions within the detection range, and acquiring a target scattering-imaging sampling image at each fixed position;
step 3, training neural networks using the scattering-imaging sampling images of each fixed position, the trained networks comprising a first neural network for imaging position classification and a second neural network for reconstructing the original sampling image at the corresponding imaging position in an extremely weak light environment;
step 4, acquiring a scattering-imaging sampling image of the moving target, and obtaining a reconstructed image using the neural networks trained in step 3.
2. The method for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 1, wherein the second neural network in step 3 is optimized by taking the weak-light scattering image as the network input, outputting the clear image reconstructed by the model, and automatically adjusting the training parameters by minimizing the mean squared error between the reconstructed clear image and the original clear image.
3. The method for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 2, wherein step 4 specifically comprises:
step 4-1, inputting the moving-target scattering-imaging sampling image into the first neural network obtained in step 3 to acquire the position classification information of the moving target;
step 4-2, correcting the position classification information acquired in step 4-1 using the prior information that the moving target cannot jump between non-adjacent intervals;
step 4-3, inputting the moving-target scattering-imaging sampling image and the position classification information acquired in step 4-2 into the second neural network obtained in step 3 to acquire the reconstructed moving-target sequence.
4. The method for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 3, wherein the second neural network is a U-shaped network with additional skip connections: each downsampling stage of the network is skip-connected to the upsampling stage of the corresponding level, and the convolution layers of the network are skip-connected.
5. The method for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 4, wherein the first neural network is a fully connected network with an added optimizer and dropout connections.
6. A device for moving-target scattering imaging in an extremely low light environment based on deep learning, characterized in that the device comprises:
a light source (1) for generating extremely weak light;
a spatial light modulator (2) for loading a plurality of targets by flipping micromirrors, the light source (1) emitting light incident on the target loaded on the spatial light modulator (2);
a focusing assembly (3, 4, 5) for focusing the light reflected from the target;
at least one stage of scattering light path (6, 7) for scattering the focused extremely weak light;
a detector (8) for collecting the extremely weak light processed by the scattering light path;
a processor module (9) for receiving the optical signal of the detector (8) and performing analysis, executing the method for moving-target scattering imaging in an extremely low light environment based on deep learning according to any one of claims 1 to 5.
7. The device for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 6, wherein the detector (8) acquires light-field intensity values at the same frequency as the light source (1).
8. The device for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 7, wherein the light intensity of the light source (1) is adjustable.
9. The device for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 6, wherein the at least one stage of scattering light path (6, 7) comprises a first ground glass (6) and a second ground glass (7) arranged sequentially along the light path.
10. The device for moving-target scattering imaging in an extremely low light environment based on deep learning according to claim 6, wherein the detector (8) is a single-photon camera.
CN202310675567.8A 2023-06-08 2023-06-08 Moving target scattering imaging method and device in extremely low light environment based on deep learning Pending CN116699554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310675567.8A CN116699554A (en) 2023-06-08 2023-06-08 Moving target scattering imaging method and device in extremely low light environment based on deep learning

Publications (1)

Publication Number Publication Date
CN116699554A true CN116699554A (en) 2023-09-05

Family

ID=87832104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310675567.8A Pending CN116699554A (en) 2023-06-08 2023-06-08 Moving target scattering imaging method and device in extremely low light environment based on deep learning

Country Status (1)

Country Link
CN (1) CN116699554A (en)

Similar Documents

Publication Publication Date Title
Lindell et al. Single-photon 3D imaging with deep sensor fusion.
Peng et al. Photon-efficient 3d imaging with a non-local neural network
US10302424B2 (en) Motion contrast depth scanning
US9866773B2 (en) System and method for using filtering and pixel correlation to increase sensitivity in image sensors
Pei et al. Dynamic non-line-of-sight imaging system based on the optimization of point spread functions
CN102759408A (en) Single-photon counting imaging system and method of same
CN102510282A (en) Time-resolved single-photon counting two-dimensional imaging system and method
CN102494663A (en) Measuring system of swing angle of swing nozzle and measuring method of swing angle
Seets et al. Motion adaptive deblurring with single-photon cameras
Zhang et al. Photon-starved snapshot holography
CN112461360B (en) High-resolution single photon imaging method and system combined with physical noise model
US20220100094A1 (en) Quantum-limited Extreme Ultraviolet Coherent Diffraction Imaging
CN116699554A (en) Moving target scattering imaging method and device in extremely low light environment based on deep learning
Gruber et al. Learning super-resolved depth from active gated imaging
CN116366952A (en) Detector movement scattering imaging device and method based on deep learning under extremely low light environment
US11539895B1 (en) Systems, methods, and media for motion adaptive imaging using single-photon image sensor data
Shin Computational imaging with small numbers of photons
CN114037771A (en) Few-photon imaging method based on deep learning
CN107643289A (en) A kind of transparent material micro devices bonding quality detecting system
FR3050300A1 (en) METHOD AND DEVICE FOR AUTOMATIC DETECTION OF POLLUTION ZONES ON A WATER SURFACE
CN111445507A (en) Data processing method for non-visual field imaging
CN114972104A (en) Low-photon image recovery method based on deep learning
EP2409276B1 (en) Image processing method for the analysis of integrated circuits, and system for implementing said method
US11927700B1 (en) Systems, methods, and media for improving signal-to-noise ratio in single-photon data
US20240161319A1 (en) Systems, methods, and media for estimating a depth and orientation of a portion of a scene using a single-photon detector and diffuse light source

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination