CN116704070A - Method and system for jointly optimized image reconstruction

Method and system for jointly optimized image reconstruction

Info

Publication number
CN116704070A
CN116704070A (Application No. CN202310982653.3A)
Authority
CN
China
Prior art keywords
imaging
reconstruction
image
neural network
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310982653.3A
Other languages
Chinese (zh)
Other versions
CN116704070B (en)
Inventor
Liheng Bian (边丽蘅)
Rifa Zhao (赵日发)
Xuyang Chang (常旭阳)
Jun Yan (闫军)
Pengyu Guo (郭鹏宇)
Tong Qin (秦同)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority: CN202310982653.3A
Publication of CN116704070A
Application granted
Publication of CN116704070B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computational photography and discloses a method and a system for jointly optimized image reconstruction. The method comprises constructing a joint optimization network comprising a physical layer and a reconstruction layer. The physical layer is based on a propagation model with learnable imaging parameters: its input is a natural image and its output is a set of diffraction intensities at different propagation distances. The reconstruction layer is based on a neural network that takes the set of diffraction intensities output by the physical layer as input and outputs the reconstructed natural image. The joint network is trained to optimize the imaging parameters and the reconstruction network parameters; an imaging system is then constructed with the optimal imaging parameters and used to collect diffraction intensity data of a sample; finally, the diffraction intensity maps are assigned the optimal weights and input into the converged neural network to obtain a predicted reconstructed image. The method and system jointly optimize imaging parameters such as the propagation distances and weights together with the target reconstruction quality, improving image reconstruction quality through joint software-hardware design and optimization and thereby realizing robust, high-precision computational imaging.

Description

Method and system for jointly optimized image reconstruction
Technical Field
The invention relates to the technical field of computational photography, and in particular to a method and a system for jointly optimized image reconstruction.
Background
The phase of a light wave contains rich target feature and structure information, and phase measurement is of great significance in astronomy, chemistry, biomedicine and other fields. However, in actual camera measurement the response speed of the sensor cannot keep up with the optical frequency, so only the intensity information of the wavefront can be acquired; the missing phase makes the reconstruction result inaccurate.
Computational microscopy is an imaging technique that has emerged in recent years; examples such as synthetic aperture interference microscopy, Fourier ptychographic microscopy and lens-free on-chip microscopy obtain high-throughput, wide field-of-view images without mechanical scanning and stitching. A lens-free on-chip microscope places the sample as close to the imaging sensor as possible, with no lens or other optical components in between; this greatly simplifies the imaging system and effectively avoids the optical aberration and chromatic aberration problems of lens-based systems. Of the two typical lens-free on-chip designs, the propagation distance of the contact-mode shadow imaging microscope is limited by the thickness of the cover glass, whereas in a lens-free on-chip digital holographic microscope the distance between the object and the sensor chip can be quite small. The diffraction intensity pattern is generated by interference between the light scattered by each object and the unscattered background light, and the diffraction pattern is digitally processed to reconstruct the sample, which requires a computational phase recovery algorithm to remove, or partially remove, the associated twin-image artifacts.
For lens-free on-chip image reconstruction, conventional approaches are numerical iterative algorithms such as the GS (Gerchberg-Saxton) algorithm, which recovers the phase by alternating projections between spatial-domain and frequency-domain intensity constraints; their dependence on prior knowledge leads to poor reconstruction results. In recent years deep learning, with its strong optimization capability, has been widely applied to computational imaging: reconstruction is achieved simply by learning the mapping between acquired intensity diffraction patterns and the target, which simplifies the reconstruction process. However, existing deep learning approaches are usually end-to-end and ignore the imaging process. When data are acquired, factors such as illumination, sensor parameters, environmental noise and focusing distance introduce measurement errors, so the quality of the acquired intensity patterns varies. Because end-to-end deep learning ignores these influences and performs no joint optimization of imaging parameters and reconstruction quality, the reconstruction accuracy of the algorithm decreases and its robustness is poor.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the related art to some extent.
Therefore, the invention provides a jointly optimized image reconstruction method in which parameters such as the propagation distances and weights are introduced into an optimization network composed of a forward propagation model and a reconstruction neural network. While learning the image reconstruction process, the network automatically optimizes the imaging parameters required for acquiring the diffraction intensity patterns, guiding a multi-distance lens-free imaging system to acquire intensity diffraction patterns under suitable parameter settings. This realizes joint software-hardware design and optimization and further improves the robustness of the reconstruction result.
It is a further object of the invention to propose a jointly optimized image reconstruction system.
In order to achieve the above object, according to one aspect of the present invention, a joint optimization image reconstruction method is provided, including:
constructing a joint optimization network; the combined optimization network comprises an imaging physical model layer based on imaging parameters and a reconstruction layer based on a neural network;
inputting an image into the imaging physical model layer to obtain first diffraction intensity data, inputting the first diffraction intensity data into the reconstruction layer to obtain a reconstructed image, and training the joint optimization network to optimize the imaging parameters and neural network parameters, obtaining optimal imaging parameters and a trained neural network;
acquiring second diffraction intensity data of an image by using a lens-free imaging system constructed based on the optimal imaging parameters;
and inputting the second diffraction intensity data, weighted with the optimal weights, into the trained neural network to output a predicted reconstructed image.
In addition, the image reconstruction method for joint optimization according to the above embodiment of the present invention may further have the following additional technical features:
further, in one embodiment of the invention, the reconstruction layer is used to provide weight coefficients and construct a mapping of intensities at the sensor imaging plane to the original target.
Further, in one embodiment of the present invention, the original target is input to a joint optimization network, and a reconstruction result obtained by the reconstruction layer through linear combination according to the weight coefficient is taken as a final output to train the joint optimization network.
Further, in one embodiment of the present invention, the object wave after the light wave passes through the image is acquired by moving the sensor to an optimal preset distance to obtain the second diffraction intensity data.
Further, in one embodiment of the present invention, the propagation of light is simulated using an angular spectrum function, the light wave propagating from the object plane to the imaging plane by a distance of,/>For wavelength, < >>For the propagation direction, the propagation process is modeled as:
is the target to be reconstructed, i.e. the true value; wherein (1)>Represents->The distance of propagation of the light beam,/>is the two-dimensional space coordinate of the imaging plane, +.>Representing the propagation of a wavefront from an object plane to an imaging plane as a spatial propagation function, including both fraunhofer diffraction and fresnel diffraction processes,/->Is object light (I/O)>、/>Respectively representing the amplitude and phase distribution of the object light, < >>For noise distribution->An intensity map acquired for an imaging plane;
in the case of fraunhofer diffraction, the measured intensity is proportional to the magnitude of the fourier transform of the light wave in the object plane:
wherein ,representing a fourier transform;
in the case of fresnel diffraction, angular spectrometry is used to simulate the transmission of light waves:
wherein ,for the inverse Fourier transform, +.>For wave fronts from the initial object plane->Distance of propagation->The intensity and angular spectrum function collected later are:
wherein ,、/>the spatial frequencies of the two coordinate directions on the propagation plane are respectively.
Further, in one embodiment of the present invention, the first diffraction intensity data $I_{d_i}$ at the $N$ propagation distances are obtained through the forward propagation model of the imaging physical model layer, multiplied by the weight coefficients and then concatenated along the channel dimension as input to the neural network; features of the first diffraction intensity data are extracted and reconstructed by the neural network, whose output is the reconstruction result $\hat{O}_{d_i}$ for each intensity map.
Further, in one embodiment of the present invention, the output reconstruction results are linearly combined based on the weight coefficients $w_i$ to obtain the predicted reconstructed image:

$$\hat{O} = \sum_{i=1}^{N} w_i\, \hat{O}_{d_i}$$
further, in one embodiment of the invention, a predicted reconstructed image is calculatedAnd true value of input->Loss of (2), loss function->
wherein ,、/>representing the maximum value of the pixel of the object on both axes, minimizing the sum of the absolute differences of the output values and the true values,/->Norm loss function:
the sum of squares of the difference between the output value and the true value is minimized.
Further, in one embodiment of the invention, the node parameters of each layer of the neural network, the propagation distances and the weight parameters are updated through a gradient descent algorithm; the weight parameters are dynamically updated from the sub-target reconstruction results of the neural network, using the image evaluation function TOG:

$$\mathrm{TOG}_i = \frac{\sigma\big( \left| \nabla \hat{O}_{d_i} \right| \big)}{\mu\big( \left| \nabla \hat{O}_{d_i} \right| \big)}$$

where $\sigma$ represents the standard deviation, $\mu$ the mean, and $\nabla$ is the gradient operator; the weight coefficients are:

$$w_i = \frac{\mathrm{TOG}_i}{\sum_{k=1}^{N} \mathrm{TOG}_k}$$
to achieve the above object, another aspect of the present invention proposes a joint optimization image reconstruction system, including:
the network construction module is used for constructing a joint optimization network; the combined optimization network comprises an imaging physical model layer based on imaging parameters and a reconstruction layer based on a neural network;
the network training module is used for inputting an image into the imaging physical model layer to obtain first diffraction intensity data, inputting the first diffraction intensity data into the reconstruction layer to obtain a reconstructed image, and training the joint optimization network to optimize the imaging parameters and neural network parameters, obtaining optimal imaging parameters and a trained neural network;
a data acquisition module for acquiring second diffraction intensity data of an image using a lens-less imaging system constructed based on the optimal imaging parameters;
and the image reconstruction module is used for inputting the second diffraction intensity data, weighted with the optimal weights, into the trained neural network to output a predicted reconstructed image.
According to the jointly optimized image reconstruction method and system of the invention, imaging parameters such as the propagation distances and weights are optimized jointly, and image reconstruction quality is improved through joint software-hardware design and optimization, thereby realizing robust, high-precision computational imaging.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a joint optimization image reconstruction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the multi-distance lens-free imaging principle according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an imaging physical layer model of a joint optimization network in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of a reconstructed layer model according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a joint optimized image reconstruction system according to an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
The image reconstruction method and system for joint optimization proposed according to the embodiments of the present invention are described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a joint optimization image reconstruction method according to an embodiment of the present invention.
As shown in fig. 1, the method includes, but is not limited to, the steps of:
s1, constructing a joint optimization network; the combined optimization network comprises an imaging physical model layer based on imaging parameters and a reconstruction layer based on a neural network;
s2, inputting an image into the imaging physical model layer to output to obtain first diffraction intensity data, inputting the first diffraction intensity data into the reconstruction layer to output to obtain a reconstructed image, and training a joint optimization network to optimize imaging parameters and neural network parameters to obtain optimal imaging parameters and a trained neural network;
s3, acquiring second diffraction intensity data of an image by using a lens-free imaging system constructed based on optimal imaging parameters;
and S4, inputting the second diffraction intensity data, weighted with the optimal weights, into the trained neural network to output a predicted reconstructed image.
It can be understood that the invention constructs a joint optimization network comprising a physical layer and a reconstruction layer. The physical layer is based on a propagation model with learnable imaging parameters; its input is a natural image and its output is a set of diffraction intensities at different propagation distances. The reconstruction layer is based on a neural network that takes the diffraction intensity set output by the physical layer as input and outputs the reconstructed natural image. The joint network is trained to optimize the imaging parameters and the reconstruction network parameters; a lens-free imaging system is constructed with the optimal imaging parameters to collect diffraction intensity data of a sample; and the diffraction intensity maps, assigned the optimal weights, are input into the converged neural network to obtain a predicted reconstructed image.
Illustratively, a joint optimization network is constructed comprising the imaging physical model layer and the reconstruction layer. The physical layer is based on a propagation model with learnable imaging parameters: its input is a natural image and its output is a set of diffraction intensities at different propagation distances. The reconstruction layer is based on a neural network that takes the diffraction intensity set output by the physical layer as input and outputs the reconstructed natural image. The imaging physical model layer, based on the multi-distance lens-free imaging technique, provides imaging parameters such as the propagation distances and physically models optical propagation and the sensor imaging process; the reconstruction layer provides the weight coefficients and learns the mapping from intensities at the sensor imaging plane to the original target.
The natural image dataset is input into the joint optimization network and the joint network is trained; the imaging parameters are optimized while the node parameters of the network are updated.
Illustratively, the diffraction intensity data of the sample are acquired by moving the sensor to the optimal distance and recording the object wave formed after the light wave passes through the sample, yielding the diffraction intensity map at that distance.
Illustratively, the images are input into the trained, converged neural network to obtain the predicted reconstruction result: the intensity diffraction patterns at different distances are assigned their optimal weights and linearly combined, then input into the reconstruction network, which outputs the predicted image.
As shown in fig. 2, which gives a schematic diagram of the lens-free imaging system, the imaging physical model layer of the joint optimization network is designed based on this system. The light wave propagates from the object plane to the imaging plane over a distance $d_i$, where $\lambda$ is the wavelength and $z$ is the propagation direction; the process can be modeled as

$$I_{d_i}(x, y) = \left| P_{d_i}\big( O(x, y) \big) \right|^2 + n$$

where $O(x, y) = A(x, y)\, e^{j\phi(x, y)}$ is the target to be reconstructed, i.e. the true value. For the spatial propagation process, both far-field and near-field diffraction are considered. In the case of far-field (Fraunhofer) diffraction, the measured intensity is proportional to the squared magnitude of the Fourier transform of the light wave in the object plane:

$$I(x, y) \propto \left| \mathcal{F}\{ O(x, y) \} \right|^2$$

where $\mathcal{F}$ represents the Fourier transform.

In the case of near-field (Fresnel) diffraction, the angular spectrum method can be used to simulate the transmission of light waves; for a wavefront propagating a distance $d_i$ from the initial object plane, the collected intensity and the angular spectrum transfer function are:

$$I_{d_i}(x, y) = \left| \mathcal{F}^{-1}\big\{ \mathcal{F}\{ O(x, y) \} \cdot H_{d_i}(f_x, f_y) \big\} \right|^2, \qquad H_{d_i}(f_x, f_y) = \exp\!\left( j \frac{2\pi d_i}{\lambda} \sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2} \right)$$

where $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $f_x$, $f_y$ are respectively the spatial frequencies along the two coordinate directions of the propagation plane.
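As a concrete illustration of this model, the following is a minimal sketch of the angular spectrum propagation step in PyTorch; the function name, the pixel pitch dx and the clamping of evanescent components are illustrative assumptions, not details taken from the patent.

```python
import torch

def angular_spectrum_propagate(obj_wave, d, wavelength, dx):
    # obj_wave: (H, W) complex tensor O(x, y) = A * exp(j * phi)
    # d: propagation distance d_i; wavelength: lambda; dx: pixel pitch (assumed)
    H, W = obj_wave.shape
    fx = torch.fft.fftfreq(W, d=dx)            # spatial frequencies f_x
    fy = torch.fft.fftfreq(H, d=dx)            # spatial frequencies f_y
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    # Transfer function H_d(fx, fy); evanescent components (arg < 0) are suppressed
    arg = torch.clamp(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, min=0.0)
    transfer = torch.exp(1j * (2 * torch.pi / wavelength) * d * torch.sqrt(arg))
    # U_d = F^{-1}{ F{O} . H_d }; the sensor records the intensity |U_d|^2
    u_d = torch.fft.ifft2(torch.fft.fft2(obj_wave) * transfer)
    return u_d.abs() ** 2
```

If d is passed in as a learnable tensor, gradients flow through the transfer function back to the propagation distance, which is what allows the physical layer to optimize it.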
First, the propagation distances and weight parameters are initialized: initial values are given for the N propagation distances and the corresponding weights and other hyper-parameters. During image acquisition by the camera, ambient noise causes the quality of the diffraction pattern to decrease as the propagation distance increases; a small propagation distance therefore contributes more information to the reconstructed image and is accordingly given a large initial weight, while a large propagation distance is given a small one, which accelerates the parameter updates.
FIG. 3 is a flow chart of the physical model layer of the present invention. As shown in fig. 3, the diffraction intensities $I_{d_i}$ at the $N$ propagation distances are obtained through the forward propagation model of the imaging physical model layer and then fed to the reconstruction layer shown in fig. 4: each intensity map is multiplied by its weight coefficient and the results are concatenated along the channel dimension as input to the reconstruction network. The neural network structure is exemplified here by U-Net, a standard encoder-decoder architecture; it extracts and reconstructs the diffraction intensity features, and the network outputs the reconstruction result $\hat{O}_{d_i}$ for each intensity map.
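Building on the propagation sketch above, one possible shape for the joint network is given below: the propagation distances and (log-space) weights are nn.Parameters so that gradient descent can update them together with the network weights. UNet here is a placeholder for any N-channel encoder-decoder backbone, and the softmax parameterization of the weights is an assumption, not part of the patent.

```python
import torch
import torch.nn as nn

class JointOptimizationNet(nn.Module):
    def __init__(self, init_distances, wavelength, dx, unet):
        super().__init__()
        # Imaging parameters, optimized jointly with the network weights
        self.distances = nn.Parameter(torch.tensor(init_distances))  # d_1 ... d_N
        # Zeros give uniform weights after softmax; a distance-dependent
        # initialization (large weight for small d_i) could be encoded here
        self.log_w = nn.Parameter(torch.zeros(len(init_distances)))
        self.wavelength, self.dx = wavelength, dx
        self.unet = unet  # reconstruction layer: N channels in, N sub-results out

    def forward(self, obj_wave):
        w = torch.softmax(self.log_w, dim=0)   # positive, normalized weights w_i
        # Physical layer: N diffraction intensities at the learned distances
        stack = torch.stack([angular_spectrum_propagate(obj_wave, d,
                             self.wavelength, self.dx) for d in self.distances])
        x = (w.view(-1, 1, 1) * stack).unsqueeze(0)     # weight, then channel-concat
        subs = self.unet(x)                             # per-distance reconstructions
        pred = (w.view(1, -1, 1, 1) * subs).sum(dim=1)  # linear combination
        return pred, subs
```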
Each reconstructed sub-result contributes differently to the final reconstruction; the target field is computed as a linear combination of the sub-results based on the weight coefficients $w_i$:

$$\hat{O} = \sum_{i=1}^{N} w_i\, \hat{O}_{d_i}$$
thus, even if the quality of some diffraction intensity patterns is low, the imaging resolution can be improved by adjusting the weight coefficient, so that the multiple measurement method is more reliable. Then calculate the reconstruction resultAnd input truth value->The loss functions used include, but are not limited to, common image reconstruction losses such as +.>(minimumAbsolute value deviation):
wherein ,、/>representing the maximum value of the target pixel on both axes, minimizing the sum of the absolute differences of the output values and the true values. />Norm loss function (least squares error):
the sum of squares of the difference between the output value and the true value is minimized.
After the loss is calculated, the node parameters of each network layer, the propagation distances and the weight parameters are updated through a gradient descent algorithm. Two updating schemes exist for the weight parameters. The first updates them in the same way as the propagation distances: $N$ dedicated hyper-parameters represent the weights and are changed directly when the neural network updates its parameters. The second dynamically updates the weights from the sub-target reconstruction results of the neural network, using the image evaluation function TOG:

$$\mathrm{TOG}_i = \frac{\sigma\big( \left| \nabla \hat{O}_{d_i} \right| \big)}{\mu\big( \left| \nabla \hat{O}_{d_i} \right| \big)}$$

where $\sigma$ represents the standard deviation, $\mu$ the mean, and $\nabla$ is the gradient operator; the weight coefficients are then:

$$w_i = \frac{\mathrm{TOG}_i}{\sum_{k=1}^{N} \mathrm{TOG}_k}$$
thus, the weight parameters can be obtained in each forward calculation of the network.
An appropriate strategy is selected to train the network, and the network parameters are optimized by gradient descent, yielding the optimal propagation distances and a set of weight parameters; under this set of propagation distances, the reconstruction performance of the neural network part simultaneously reaches its optimum.
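One way such a training step could look is sketched below; my_unet, dataloader and all hyper-parameter values are placeholders, and driving the node parameters, distances and weights with a single Adam optimizer is one assumed realization of the gradient descent update.

```python
model = JointOptimizationNet(init_distances=[1e-3, 2e-3, 3e-3],  # meters, assumed
                             wavelength=532e-9, dx=2e-6, unet=my_unet)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for obj_wave, truth in dataloader:     # natural-image training pairs
    pred, subs = model(obj_wave)
    loss = l1_loss(pred, truth)        # or l2_loss
    optimizer.zero_grad()
    loss.backward()                    # gradients also reach d_i and w_i
    optimizer.step()
```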
The lens-free imaging system described in fig. 2 is constructed based on the optimal imaging parameters obtained above, and diffraction intensity data of the desired sample are acquired. The wavefront carrying the sample information propagates to the imaging plane at each optimal distance; the sensor is moved to these distances, and the object wave formed after the light wave passes through the sample is recorded, yielding intensity diffraction patterns at multiple distances. The acquired intensity maps are assigned the optimal weights obtained from training, linearly combined accordingly, and input into the converged neural network to obtain the prediction of the joint network, improving the robustness of the algorithm.
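A corresponding inference sketch, where measured_intensities stands for the intensity maps captured by the sensor at the optimized distances:

```python
with torch.no_grad():
    w = torch.softmax(model.log_w, dim=0)              # trained optimal weights
    x = (w.view(-1, 1, 1) *
         torch.stack(measured_intensities)).unsqueeze(0)  # weight and concat
    subs = model.unet(x)
    prediction = (w.view(1, -1, 1, 1) * subs).sum(dim=1)  # predicted image
```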
According to the jointly optimized image reconstruction method of the embodiment of the invention, imaging parameters such as the propagation distances and weights are jointly optimized with the target reconstruction quality, realizing joint software-hardware design and optimization and improving the reconstruction accuracy of the image.
In order to implement the above embodiment, as shown in fig. 5, there is further provided a jointly optimized image reconstruction system 10 in the present embodiment, the system 10 including:
a network construction module 100 for constructing a joint optimization network; the combined optimization network comprises an imaging physical model layer based on imaging parameters and a reconstruction layer based on a neural network;
the network training module 200 is configured to input an image to an imaging physical model layer to output to obtain first diffraction intensity data, and input the first diffraction intensity data to a reconstruction layer to output to obtain a reconstructed image, so as to train a joint optimization network to optimize imaging parameters and neural network parameters to obtain optimal imaging parameters and a trained neural network;
a data acquisition module 300 for acquiring second diffraction intensity data of an image using a lens-less imaging system constructed based on optimal imaging parameters;
the image reconstruction module 400 is configured to input the second diffraction intensity data given the optimal weight to the trained neural network to output a predicted reconstructed image.
According to the jointly optimized image reconstruction system of the embodiment of the invention, imaging parameters such as the propagation distances and weights are jointly optimized with the target reconstruction quality, realizing joint software-hardware design and optimization and improving the reconstruction accuracy of the image.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.

Claims (10)

1. A method of jointly optimized image reconstruction, comprising the steps of:
constructing a joint optimization network; the combined optimization network comprises an imaging physical model layer based on imaging parameters and a reconstruction layer based on a neural network;
inputting an image into the imaging physical model layer to obtain first diffraction intensity data, inputting the first diffraction intensity data into the reconstruction layer to obtain a reconstructed image, and training the joint optimization network to optimize the imaging parameters and neural network parameters, obtaining optimal imaging parameters and a trained neural network;
acquiring second diffraction intensity data of an image by using a lens-free imaging system constructed based on the optimal imaging parameters;
and inputting the second diffraction intensity data, weighted with the optimal weights, into the trained neural network to output a predicted reconstructed image.
2. The method of claim 1, wherein the reconstruction layer is configured to provide weight coefficients and construct a mapping of intensities at a sensor imaging plane to an original target.
3. The method according to claim 2, wherein the original target is input to a joint optimization network, and the reconstruction layer performs linear combination according to the weight coefficients to obtain a reconstruction result as a final output to train the joint optimization network.
4. A method according to claim 3, wherein the object wave after the light wave has passed through the image is acquired by moving the sensor to an optimal preset distance to obtain the second diffraction intensity data.
5. The method of claim 4, wherein the propagation of light from the object plane to the imaging plane is simulated using an angular spectrum function, the propagation distance being $d_i$, $\lambda$ the wavelength and $z$ the propagation direction, the propagation process being modeled as:

$$I_{d_i}(x, y) = \left| P_{d_i}\big( O(x, y) \big) \right|^2 + n$$

wherein $O(x, y) = A(x, y)\, e^{j\phi(x, y)}$ is the target to be reconstructed, i.e. the true value; $d_i$ represents the $i$-th propagation distance; $(x, y)$ are the two-dimensional spatial coordinates of the imaging plane; $P_{d_i}(\cdot)$ is the spatial propagation function representing the propagation of the wavefront from the object plane to the imaging plane, covering both the Fraunhofer and Fresnel diffraction processes; $A(x, y)$ and $\phi(x, y)$ respectively represent the amplitude and phase distributions of the object light; $n$ is the noise distribution; and $I_{d_i}$ is the intensity map acquired at the imaging plane;

in the case of Fraunhofer diffraction, the measured intensity is proportional to the squared magnitude of the Fourier transform of the light wave in the object plane:

$$I(x, y) \propto \left| \mathcal{F}\{ O(x, y) \} \right|^2$$

wherein $\mathcal{F}$ represents the Fourier transform;

in the case of Fresnel diffraction, the angular spectrum method is used to simulate the transmission of light waves; for a wavefront propagating a distance $d_i$ from the initial object plane, the collected intensity and the angular spectrum transfer function are:

$$I_{d_i}(x, y) = \left| \mathcal{F}^{-1}\big\{ \mathcal{F}\{ O(x, y) \} \cdot H_{d_i}(f_x, f_y) \big\} \right|^2, \qquad H_{d_i}(f_x, f_y) = \exp\!\left( j \frac{2\pi d_i}{\lambda} \sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2} \right)$$

wherein $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $f_x$, $f_y$ are respectively the spatial frequencies along the two coordinate directions of the propagation plane.
6. The method of claim 5, wherein the first diffraction intensity data $I_{d_i}$ at the $N$ propagation distances are obtained through the forward propagation model of the imaging physical model layer, multiplied by the weight coefficients and then concatenated along the channel dimension as input to the neural network; features of the first diffraction intensity data are extracted and reconstructed by the neural network, whose output is the reconstruction result $\hat{O}_{d_i}$ for each intensity map.
7. The method of claim 6, wherein the output reconstruction results are linearly combined based on the weight coefficients $w_i$ to obtain the predicted reconstructed image:

$$\hat{O} = \sum_{i=1}^{N} w_i\, \hat{O}_{d_i}$$
8. The method of claim 7, wherein the loss between the predicted reconstructed image $\hat{O}$ and the input true value $O$ is computed, with the $\ell_1$ loss function:

$$\mathcal{L}_1 = \frac{1}{WH} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| \hat{O}(x, y) - O(x, y) \right|$$

wherein $W$, $H$ represent the maximum pixel indices of the target along the two axes, minimizing the sum of absolute differences between the output values and the true values; and with the $\ell_2$ norm loss function:

$$\mathcal{L}_2 = \frac{1}{WH} \sum_{x=1}^{W} \sum_{y=1}^{H} \left( \hat{O}(x, y) - O(x, y) \right)^2$$

which minimizes the sum of squared differences between the output values and the true values.
9. The method according to claim 8, wherein the node parameters of each layer of the neural network, the propagation distances and the weight parameters are updated through a gradient descent algorithm; the weight parameters are dynamically updated from the sub-target reconstruction results of the neural network, using the image evaluation function TOG:

$$\mathrm{TOG}_i = \frac{\sigma\big( \left| \nabla \hat{O}_{d_i} \right| \big)}{\mu\big( \left| \nabla \hat{O}_{d_i} \right| \big)}$$

wherein $\sigma$ represents the standard deviation, $\mu$ the mean, and $\nabla$ is the gradient operator; the weight coefficients are:

$$w_i = \frac{\mathrm{TOG}_i}{\sum_{k=1}^{N} \mathrm{TOG}_k}$$
10. a joint optimization image reconstruction system, comprising:
the network construction module is used for constructing a joint optimization network; the combined optimization network comprises an imaging physical model layer based on imaging parameters and a reconstruction layer based on a neural network;
the network training module is used for inputting an image into the imaging physical model layer to obtain first diffraction intensity data, inputting the first diffraction intensity data into the reconstruction layer to obtain a reconstructed image, and training the joint optimization network to optimize the imaging parameters and neural network parameters, obtaining optimal imaging parameters and a trained neural network;
a data acquisition module for acquiring second diffraction intensity data of an image using a lens-less imaging system constructed based on the optimal imaging parameters;
and the image reconstruction module is used for inputting the second diffraction intensity data, weighted with the optimal weights, into the trained neural network to output a predicted reconstructed image.
CN202310982653.3A · Priority date 2023-08-07 · Filing date 2023-08-07 · Method and system for jointly optimized image reconstruction · Active · Granted as CN116704070B

Priority Applications (1)

Application Number: CN202310982653.3A (granted as CN116704070B) · Priority/Filing date: 2023-08-07 · Title: Method and system for jointly optimized image reconstruction

Applications Claiming Priority (1)

Application Number: CN202310982653.3A (granted as CN116704070B) · Priority/Filing date: 2023-08-07 · Title: Method and system for jointly optimized image reconstruction

Publications (2)

Publication number / Publication date
CN116704070A (application publication): 2023-09-05
CN116704070B (granted publication): 2023-11-14

Family ID: 87837881

Family Applications (1)

Application Number: CN202310982653.3A · Title: Method and system for jointly optimized image reconstruction · Status: Active (CN116704070B)

Country Status (1)

CN: CN116704070B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210350590A1 (en) * 2019-01-29 2021-11-11 Korea Advanced Institute Of Science And Technology Method and device for imaging of lensless hyperspectral image
CN114353946A (en) * 2021-12-29 2022-04-15 北京理工大学 Diffraction snapshot spectral imaging method
CN114647081A (en) * 2022-03-31 2022-06-21 爱思菲尔光学科技(苏州)有限公司 Diffraction optical element based on neural network and design method thereof
CN114972284A (en) * 2022-06-08 2022-08-30 南京大学 Lens-free microscopic imaging system and method based on self-supervision deep learning
CN115099389A (en) * 2022-06-02 2022-09-23 北京理工大学 Non-training phase reconstruction method and device based on complex neural network
CN115113508A (en) * 2022-05-07 2022-09-27 四川大学 Holographic display speckle suppression method based on optical diffraction neural network
CN115200702A (en) * 2022-06-17 2022-10-18 北京理工大学 Computational imaging method and device based on complex neural network
US20230024787A1 (en) * 2021-07-16 2023-01-26 The Regents Of The University Of California Diffractive optical network for reconstruction of holograms
CN115690252A (en) * 2022-11-15 2023-02-03 中国人民解放军陆军装甲兵学院 Hologram reconstruction method and system based on convolutional neural network


Also Published As

Publication number / Publication date
CN116704070B: 2023-11-14

Similar Documents

Publication Publication Date Title
Yonel et al. Deep learning for passive synthetic aperture radar
CN111551129B (en) Medium-low-order surface shape detection device and system of large-caliber plane mirror and storage medium
CN111366557A (en) Phase imaging method based on thin scattering medium
CN113158487B (en) Wavefront phase difference detection method based on long-short term memory depth network
CN111579097B (en) High-precision optical scattering compensation method based on neural network
CN115471437B (en) Image fusion method based on convolutional neural network and remote sensing image fusion method
CN111650738A (en) Fourier laminated microscopic image reconstruction method and device based on deep learning
CN115200702A (en) Computational imaging method and device based on complex neural network
CN111561877B (en) Variable resolution phase unwrapping method based on point diffraction interferometer
CN112946789A (en) Interference flat-plate imaging system based on super lens array and photonic integrated chip
CN115099389A (en) Non-training phase reconstruction method and device based on complex neural network
CN113888444A (en) Image reconstruction method and system based on laminated self-focusing experiment
CN115393404A (en) Double-light image registration method, device and equipment and storage medium
CN115147709A (en) Underwater target three-dimensional reconstruction method based on deep learning
CN111189414A (en) Real-time single-frame phase extraction method
CN116704070B (en) Method and system for reconstructing jointly optimized image
CN113096039A (en) Depth information completion method based on infrared image and depth image
CN108428245A (en) Sliding method for registering images based on self-adapting regular item
Archinuk et al. Mitigating the nonlinearities in a pyramid wavefront sensor
CN110954133A (en) Method for calibrating position sensor of nuclear distance fuzzy clustering orthogonal spectral imaging
CN115760603A (en) Interference array broadband imaging method based on big data technology
CN114998760A (en) Radar image ship detection network model and detection method based on domain adaptation
Eslami et al. Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence
CN110260788A (en) Optical micro/nano measuring device, the method for extracting structure micro-nano dimension information to be measured
CN112525496B (en) Method, device, equipment and medium for sensing wavefront curvature of telescope

Legal Events

Code / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant