CN114353946B - Diffraction snapshot spectrum imaging method - Google Patents
Diffraction snapshot spectrum imaging method
- Publication number
- CN114353946B CN114353946B CN202111635615.8A CN202111635615A CN114353946B CN 114353946 B CN114353946 B CN 114353946B CN 202111635615 A CN202111635615 A CN 202111635615A CN 114353946 B CN114353946 B CN 114353946B
- Authority
- CN
- China
- Prior art keywords
- optical element
- diffraction
- spectral imaging
- diffractive optical
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000003384 imaging method Methods 0.000 title abstract description 10
- 238000001228 spectrum Methods 0.000 title 1
- 230000003287 optical effect Effects 0.000 claims abstract description 136
- 238000000701 chemical imaging Methods 0.000 claims abstract description 81
- 238000000034 method Methods 0.000 claims abstract description 55
- 238000005457 optimization Methods 0.000 claims abstract description 36
- 238000004519 manufacturing process Methods 0.000 claims abstract description 14
- 238000013528 artificial neural network Methods 0.000 claims abstract description 12
- 230000004069 differentiation Effects 0.000 claims abstract description 6
- 238000010801 machine learning Methods 0.000 claims abstract description 4
- 238000013139 quantization Methods 0.000 claims description 43
- 230000003595 spectral effect Effects 0.000 claims description 33
- 230000006870 function Effects 0.000 claims description 31
- 230000008447 perception Effects 0.000 claims description 22
- 238000012549 training Methods 0.000 claims description 18
- 230000003044 adaptive effect Effects 0.000 claims description 16
- 238000012545 processing Methods 0.000 claims description 14
- 238000013461 design Methods 0.000 claims description 11
- 230000010363 phase shift Effects 0.000 claims description 3
- 230000004044 response Effects 0.000 claims description 3
- 238000011423 initialization method Methods 0.000 claims description 2
- 230000008901 benefit Effects 0.000 abstract description 5
- 230000008569 process Effects 0.000 abstract description 4
- 238000004364 calculation method Methods 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 5
- 239000000463 material Substances 0.000 description 3
- 238000004088 simulation Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 238000012271 agricultural production Methods 0.000 description 2
- 230000000052 comparative effect Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 238000004611 spectroscopical analysis Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 238000010998 test method Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Spectrometry And Color Measurement (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
The diffraction snapshot spectral imaging method disclosed by the invention belongs to the field of computational photography. The method is applied to a diffraction snapshot spectral imaging system and jointly optimizes, in a data-driven manner, the coding structure of the diffractive optical element and the reconstruction (decoding) neural network, yielding a relatively optimal optical coding structure together with its corresponding computational decoding model, so that the encoding and decoding parts fit each other better. The optical coding structure takes the quantization constraints of real manufacturing into account during the optimization stage, so that the optimized diffractive-optical-element structure matches the manufactured one, which improves the accuracy of the reconstructed images. A diffractive optical element with a high coding degree of freedom is used as the encoding part, giving the system a small volume, a compact structure, and real-time imaging capability. The entire encoding and decoding process is modeled with a differentiable model, so the model can be implemented and optimized in any machine-learning automatic-differentiation framework, improving the generality of the invention.
Description
Technical Field
The invention relates to a diffraction snapshot spectral imaging method, and in particular to a method capable of acquiring high-quality hyperspectral images in a single snapshot; it belongs to the field of computational photography.
Background Art
Hyperspectral imaging combines spatial imaging with spectral imaging and records the spatial and spectral information of an object simultaneously. Compared with traditional imaging, it captures far richer object detail and has been widely applied in agriculture, material identification, geological exploration, biomedicine, and many other fields. Traditional snapshot spectral imaging methods usually use multiple refractive or reflective lenses to realize optical encoding and reconstruct the required spectral image by computational decoding. The physical systems corresponding to such methods contain a complex optical encoding part and are therefore usually bulky. Thanks to the high-degree-of-freedom encoding capability of diffractive optical elements, diffraction snapshot spectral imaging replaces the refractive or reflective lenses of earlier systems with a single diffractive optical element as the optical encoding structure, greatly reducing the volume of the whole system. However, existing diffractive spectral imaging methods usually design the structure of the diffractive optical element heuristically by hand. Such a design process relies on effective prior knowledge for manual design or optimization, and it is difficult to guarantee that the final structure is a global optimum. In addition, the diffractive-optical-element structures obtained by existing design methods are high-precision, nearly continuous structures, whereas physical fabrication processes can usually only produce diffractive optical elements with no more than 16 steps. The optimized structure therefore differs from the structure actually manufactured, which in turn prevents the reconstruction algorithm designed jointly during optimization from reconstructing the spectral image with high quality.
Summary of the Invention
The invention addresses the problems of existing diffraction snapshot spectral imaging methods, namely that the optimization of the optical encoding is complex and that the optimized diffractive-optical-element structure differs from the physically manufactured one. The technical problem to be solved by the diffraction snapshot spectral imaging method disclosed herein is: jointly optimize, in a data-driven manner, the coding structure of the diffractive optical element in the diffraction snapshot spectral imaging system and the reconstruction (decoding) neural network to obtain a relatively optimal optical coding structure, while having the optical coding structure account for the quantization requirements of real manufacturing during the optimization stage, so that the optimized diffractive-optical-element structure matches the manufactured one and the reconstructed-image accuracy of the whole spectral imaging system is improved. In addition, the invention adopts a diffractive optical element with a high coding degree of freedom as the encoding part, so that an imaging system using the invention is smaller and more compact than imaging systems that use several refractive or reflective lenses for encoding.
To achieve the above object, the invention adopts the following technical solution.
The diffraction snapshot spectral imaging method disclosed by the invention is applied to a diffraction snapshot spectral imaging system and jointly optimizes the optical coding structure and the reconstruction algorithm of that system, searching in a data-driven manner for the optical coding structure best suited to the spectral imaging task together with its corresponding computational decoding model. The optical physical encoding part of the system is modeled with a point-spread-function model, and the reconstruction decoder is attached to it to form a complete end-to-end model for joint optimization. Meanwhile, to guarantee that the diffractive-optical-element structure ends up as a quantized, step-shaped structure after optimization, an adaptive quantization-aware training method is applied to the height map of the diffractive optical element, so that the optimized structure matches the manufactured one and the reconstructed-image accuracy of the whole spectral imaging system is improved. After training is completed, the optimized diffractive-optical-element design is used directly for fabrication, and a physical diffraction snapshot spectral imaging system is built. In this physical system, the diffractive optical element phase-modulates the incident light, an RGB image sensor captures the modulated light field, and the trained convolutional neural network decodes the captured RGB image to reconstruct the required hyperspectral image. While keeping the small and compact form factor of a diffraction snapshot spectral imaging system, the invention jointly optimizes the coding structure of the diffractive optical element and the reconstruction (decoding) neural network in a data-driven manner, obtaining a relatively optimal optical coding structure and its corresponding computational decoding model so that the encoding and decoding parts fit each other better; at the same time, because the optical coding structure accounts for the quantization requirements of real manufacturing during optimization, the optimized diffractive-optical-element structure matches the manufactured one, further improving the accuracy of the reconstructed spectral images.
The invention can be used in many fields involving spectral imaging, such as manned spaceflight, geological survey, agricultural production, and biomedicine.
The diffraction snapshot spectral imaging method disclosed by the invention comprises the following steps.
Step 101: construct the optical physical encoding forward model according to diffraction optics theory.
The optical physical encoding forward model in step 101 is built on a point-spread-function model. Along the direction of the incident light, the wave field of a point source of wavelength λ in the natural scene, after propagating a distance d, reaches the plane position (x, y) just in front of the diffractive optical element of the diffractive optical encoder and is denoted U₀:
where i is the imaginary unit. The diffractive optical element modulates the phase of the wave field, and the modulated wave field U₁ is expressed as:
where A(x, y) is the aperture function, whose value is 0 where propagation is blocked and 1 where propagation is allowed, and φ(x, y, λ) is the phase shift imparted by the diffractive optical element at plane position (x, y) to a wave field of wavelength λ:
φ(x, y, λ) = (n_λ − 1) H(x, y)    (3)
where n_λ is the refractive index of the diffractive optical element at wavelength λ and H(x, y) is the height of the designed diffractive-optical-element structure at plane position (x, y). The wave field then propagates a further distance z to the RGB image sensor plane, where it is denoted U₂:
where f_x and f_y are the spatial-frequency variables corresponding to the spatial position (x, y). The point spread function P is obtained from the wave field on the sensor plane as the squared magnitude of the complex wave field:
P(x, y, λ) ∝ |U₂(x, y, λ)|²    (5)
After the point spread function is obtained, the target scene image I(x, y, λ, d) for each wavelength λ is convolved with the corresponding point spread function P(x, y, λ):
which yields the modulated image I′(x, y, λ) for that wavelength and scene depth. The resulting modulated scene I′ contains the modulated image I′_{λ,d} for every combination of the wavelength set Λ and the depth set D:
I′ = {I′_{λ,d} | λ ∈ Λ, d ∈ D}    (7)
After the modulated images are captured by the RGB image sensor according to its spectral response curves R_c over the range [λ₀, λ₁], the observed modulated RGB image I_{c∈{R,G,B}}(x, y) is obtained:
where η is the sensor noise. This establishes the forward model of the optical encoding, given by equations (1)–(8).
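For concreteness, the forward model of equations (1)–(8) can be sketched in TensorFlow 2, the framework named later in the embodiment. This is a minimal sketch, not the patent's implementation: the equation images are not reproduced in this text, so the spherical-wave form assumed for equation (1), the angular-spectrum transfer function used for the propagation of equation (4), the circular FFT convolution used for equation (6), and all function and parameter names are assumptions.

```python
import numpy as np
import tensorflow as tf

def psf_for_wavelength(height_map, wavelength, n_lambda, aperture, d, z, pitch):
    """Point spread function P(x, y, lambda) for one wavelength (eqs. (1)-(5), assumed forms)."""
    n = height_map.shape[0]
    coords = (np.arange(n) - n / 2) * pitch
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    k = 2.0 * np.pi / wavelength
    # Eq. (1): spherical wave from an on-axis point source at distance d (assumed form).
    u0 = np.exp(1j * k * np.sqrt(xx ** 2 + yy ** 2 + d ** 2)).astype(np.complex64)
    # Eqs. (2)-(3): aperture function A(x, y) and phase modulation by the DOE height map H(x, y).
    phase = tf.cast(k * (n_lambda - 1.0), tf.float32) * tf.cast(height_map, tf.float32)
    u1 = tf.constant(aperture * u0) * tf.exp(tf.complex(tf.zeros_like(phase), phase))
    # Eq. (4): propagation over z to the sensor plane (angular-spectrum transfer function assumed).
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = np.maximum(1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2, 0.0)
    transfer = np.exp(1j * 2.0 * np.pi * z * np.sqrt(arg)).astype(np.complex64)
    u2 = tf.signal.ifft2d(tf.signal.fft2d(u1) * tf.constant(transfer))
    # Eq. (5): squared magnitude of the complex field, normalised to unit energy.
    psf = tf.abs(u2) ** 2
    return psf / tf.reduce_sum(psf)

def encode_scene(spectral_scene, psfs, response, noise_std=1e-3):
    """Eqs. (6)-(8): per-band convolution (circular, via FFT), sensor integration, noise.
    spectral_scene: (H, W, L) cube, psfs: list of L (H, W) PSFs, response: (L, 3) curves."""
    bands = []
    for b in range(spectral_scene.shape[-1]):
        img = tf.cast(spectral_scene[..., b], tf.complex64)
        psf = tf.cast(psfs[b], tf.complex64)
        blurred = tf.signal.ifft2d(tf.signal.fft2d(img) * tf.signal.fft2d(psf))
        bands.append(tf.math.real(blurred))                     # I'(x, y, lambda), eq. (6)
    coded = tf.stack(bands, axis=-1)
    rgb = tf.tensordot(coded, tf.constant(response, tf.float32), axes=[[-1], [0]])
    return rgb + tf.random.normal(tf.shape(rgb), stddev=noise_std)   # eq. (8)
```

Because every operation above is differentiable with respect to the height map, this kind of implementation is what allows the encoder to be trained jointly with the decoder in step 104.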
Step 102: apply quantization-aware processing to the diffractive-optical-element height map in the optical physical encoding forward model constructed in step 101, so that it takes the form of quantized steps.
In step 102 the height map of the diffractive optical element is processed in a quantization-aware manner; the quantization-aware height map is expressed as:
H_aq = α × F(Q(H_f)) + (1 − α) × H_f    (9)
where H_f is the full-precision original height-map weight and α is the quantization-awareness control parameter, which varies with the training step; T₀ and T₁ are the training steps at which quantization awareness starts and ends, respectively:
Q is the quantization operator, which quantizes the input height map H into L steps within the maximum height h_max:
F is the adaptive fine-tuning function:
F(Q(H_f)) = Q(H_f) + W_l    (12)
which dynamically adjusts W_l according to the adaptive quantization loss, so that the physical height of each originally uniform step is adjusted to an optimal quantized value.
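A minimal sketch of the quantization-aware height map of equations (9)–(12) follows. Because equation (10) is not reproduced in this text, α is assumed here to ramp linearly from 0 to 1 between steps T₀ and T₁; the uniform step layout assumed for Q in equation (11) and the straight-through gradient mixing are implementation choices of this sketch, and W_l is modeled as one trainable offset per quantization level.

```python
import tensorflow as tf

class QuantAwareHeightMap(tf.keras.layers.Layer):
    """Quantization-aware DOE height map: H_aq = alpha * F(Q(H_f)) + (1 - alpha) * H_f."""

    def __init__(self, size, h_max, num_levels=4, t0=0, t1=1, **kwargs):
        super().__init__(**kwargs)
        self.h_max, self.num_levels, self.t0, self.t1 = h_max, num_levels, t0, t1
        self.h_full = self.add_weight(name="h_full", shape=(size, size),
                                      initializer="random_uniform")     # H_f
        self.w_l = self.add_weight(name="w_l", shape=(num_levels,),
                                   initializer="zeros")                 # per-level offsets W_l

    def quantize(self, h):
        """Q: snap the height map to L uniform steps within [0, h_max] (assumed form of eq. (11))."""
        step = self.h_max / self.num_levels
        level = tf.clip_by_value(tf.round(h / step), 0.0, self.num_levels - 1.0)
        return level * step, tf.cast(level, tf.int32)

    def call(self, train_step):
        h_f = tf.clip_by_value(self.h_full, 0.0, self.h_max)
        h_q, level = self.quantize(h_f)
        h_adj = h_q + tf.gather(self.w_l, level)                        # F(Q(H_f)), eq. (12)
        # Assumed linear ramp of alpha between T0 and T1 (eq. (10) image not reproduced).
        alpha = tf.clip_by_value(
            (tf.cast(train_step, tf.float32) - self.t0) / (self.t1 - self.t0), 0.0, 1.0)
        # Straight-through mixing (an implementation choice): the forward pass uses F(Q(H_f))
        # while gradients still reach H_f; W_l is trained through the adaptive quantization loss.
        h_aq = alpha * (h_f + tf.stop_gradient(h_adj - h_f)) + (1.0 - alpha) * h_f  # eq. (9)
        self.add_loss(tf.reduce_mean(tf.square(h_adj - h_f)))            # adaptive quant. loss
        return h_aq
```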
Step 103: construct a computational decoding model based on a differentiable neural network, and use it to predict the reconstructed spectral image from the coded observation.
The computational decoding model in step 103 is any differentiable neural network module N, which predicts the reconstructed spectral image from the image I_{c∈{R,G,B}}(x, y) modulated by the optical encoding module; that is, the computational decoding model is expressed as:
Step 104: jointly optimize, in a data-driven manner, the coding structure of the diffractive optical element in the diffraction snapshot spectral imaging system and the reconstruction (decoding) neural network, obtaining a relatively optimal optical coding structure and its corresponding computational decoding model. During optimization, the quantization-aware processing of step 102 makes the coding structure of the diffractive optical element take the form of quantized steps, so the quantization requirements of real manufacturing are accounted for in the optimization stage; the optimized diffractive-optical-element structure therefore matches the manufactured one, improving the reconstructed-image accuracy of the whole spectral imaging system.
Step 104 is implemented as follows: set the learning rate, weight initialization method, and weight decay coefficient for the optical physical encoding forward model of step 101 and the computational decoding model of step 103, and set the batch size, optimization method, and number of iterations of the joint optimization as well as the start and end parameters of the quantization-aware processing of step 102. The output of the forward model of step 101 is fed as the input of the decoding model of step 103 to form the joint encoding–decoding spectral imaging model, which is trained end to end to obtain a relatively optimal optical coding structure and its corresponding computational decoding model; because the optical coding structure accounts for the quantization requirements of real manufacturing during optimization, the optimized diffractive-optical-element structure matches the manufactured one, improving the reconstructed-image accuracy of the whole spectral imaging system.
The end-to-end joint optimization of step 104 can be implemented in any machine-learning automatic-differentiation framework. The end-to-end loss function consists of a reconstructed-image loss term, an adaptive quantization term, and an L2 regularization term on the decoder weights:
where the first term is the spectral reconstruction loss, the second term is the adaptive quantization loss, ω denotes the network weights of the reconstruction module, and β and γ are scaling parameters.
The spectral reconstruction loss is computed as the mean absolute error (MAE) between the reconstructed image and the target spectral image I:
where K is the number of pixels in the image.
The adaptive quantization loss is expressed as the mean squared error (MSE) between the quantized height map and the original height map:
where J is the number of pixels in the height map.
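A minimal sketch of the end-to-end loss of equations (14)–(16): an MAE reconstruction term, an MSE adaptive-quantization term, and an L2 penalty on the decoder weights, combined with the scaling parameters β and γ. The function and argument names are illustrative, not taken from the patent.

```python
import tensorflow as tf

def end_to_end_loss(spectral_gt, spectral_pred, h_quantized, h_full,
                    decoder_weights, beta, gamma):
    """End-to-end loss of eqs. (14)-(16); names and call signature are illustrative."""
    loss_rec = tf.reduce_mean(tf.abs(spectral_pred - spectral_gt))        # eq. (15), MAE
    loss_aq = tf.reduce_mean(tf.square(h_quantized - h_full))             # eq. (16), MSE
    loss_l2 = tf.add_n([tf.nn.l2_loss(w) for w in decoder_weights])       # L2 on decoder weights
    return loss_rec + beta * loss_aq + gamma * loss_l2                    # eq. (14)
```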
The method further comprises step 105 and step 106.
Step 105: manufacture the diffractive optical element according to the optical coding structure obtained in step 104, and build a diffraction snapshot spectral imaging system with this diffractive optical element and an RGB image sensor. Because the diffractive optical element has a high coding degree of freedom, the resulting diffraction snapshot spectral imaging system is smaller and more compact.
Step 106: photograph the target scene with the diffraction snapshot spectral imaging system built in step 105, and decode and reconstruct the observation images captured by the physical system with the computational decoding model obtained in step 104 to obtain the hyperspectral image of the target scene, thereby improving the reconstructed-image accuracy of the whole spectral imaging system.
Preferably, a GPU is used for the end-to-end joint optimization training of the joint encoding–decoding spectral imaging model constructed in step 104, yielding a relatively optimal optical coding structure and its corresponding computational decoding model.
Beneficial effects:
1. The diffraction snapshot spectral imaging method disclosed by the invention jointly optimizes, in a data-driven manner, the coding structure of the diffractive optical element in the diffraction snapshot spectral imaging system and the reconstruction (decoding) neural network, obtaining a relatively optimal optical coding structure and its corresponding computational decoding model, so that the encoding and decoding parts fit each other better.
2. In the disclosed method, the optical coding structure accounts for the quantization requirements of real manufacturing during the optimization stage, so that the optimized diffractive-optical-element structure matches the manufactured one, improving the reconstructed-image accuracy of the whole spectral imaging system.
3. The disclosed method uses a diffractive optical element with a high coding degree of freedom as the encoding part. Compared with imaging systems that use several refractive or reflective lenses for encoding, the system is smaller and more compact, performs hyperspectral imaging without scanning in the spatial or spectral dimension, and is capable of real-time imaging.
4. The disclosed method models the entire encoding and decoding process with a differentiable model, so the model can be implemented and optimized in any machine-learning automatic-differentiation framework, improving the generality of the invention.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the invention more clearly, the drawings used in the embodiments are briefly introduced below.
FIG. 1 is a flow chart of the diffraction snapshot spectral imaging method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of the physical system structure of the diffraction snapshot spectral imaging method provided by an embodiment of the invention;
FIG. 3 is the joint training framework diagram of the diffraction snapshot spectral imaging method provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of the quantization-aware processing of the diffraction snapshot spectral imaging method provided by an embodiment of the invention;
FIG. 5 is the reconstruction network structure diagram of the diffraction snapshot spectral imaging method provided by an embodiment of the invention;
In the figures: 101 – diffractive optical element, 102 – aperture, 103 – holder, 104 – RGB image sensor.
Detailed Description
To better illustrate the objects and advantages of the invention, its content is further described below with reference to the drawings and examples.
Embodiment 1:
As shown in FIG. 2, the imaging system corresponding to the diffraction snapshot spectral imaging method disclosed in this embodiment comprises a diffractive optical element 101, an aperture 102, a holder 103, and an RGB image sensor 104. The aperture 102 blocks unwanted light. The diffractive optical element 101 phase-modulates the incident light. The holder 103 fixes the diffractive optical element onto the sensor. The RGB image sensor 104 detects the modulated image.
FIG. 2 shows the physical structure of the imaging system corresponding to the diffraction snapshot spectral imaging method of this embodiment. Along the optical path, the incident light passes through the aperture 102 and then the diffractive optical element 101, and is finally captured by the RGB image sensor 104 and converted into a digital image signal. This signal is fed to the spectral reconstruction network, whose image processing realizes spectral imaging. The processing is as follows: the RAW image captured by the RGB image sensor is linearly demosaicked to obtain an RGB image, the demosaicked RGB image is cropped so that its width and height are a power of two, and the result is fed into the pre-trained reconstruction network. The image output by the reconstruction network is the desired spectral image.
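The capture-side processing just described can be sketched as follows; the `linear_demosaic` callable, the centred 512-pixel crop, and the function names are placeholders rather than the patent's code.

```python
import numpy as np

def reconstruct_spectrum(raw_frame, linear_demosaic, recon_net, crop=512):
    """RAW frame -> linear demosaic -> 2^n x 2^n crop -> pre-trained reconstruction network."""
    rgb = linear_demosaic(raw_frame)                    # RAW -> linear RGB image
    h, w = rgb.shape[:2]
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = rgb[top:top + crop, left:left + crop, :]    # width and height are a power of two
    patch = patch[np.newaxis].astype(np.float32)        # add a batch dimension
    return recon_net(patch)[0]                          # predicted hyperspectral cube
```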
As shown in FIG. 1, the diffraction snapshot spectral imaging method disclosed in this embodiment comprises the following steps.
Step 101: construct the optical physical encoding forward model according to diffraction optics theory.
The optical physical encoding forward model in step 101 is built on a point-spread-function model. First, along the direction of the incident light, the wave field of a point source of wavelength λ in the natural scene, after propagating a distance d, reaches the plane position (x, y) just in front of the diffractive optical element of the diffractive optical encoder of the system and is denoted U₀:
where i is the imaginary unit, d is set to 1 m, and the wavelength λ is sampled from 400 nm to 700 nm at 10 nm intervals. The diffractive optical element modulates the phase of the wave field, and the modulated wave field U₁ is expressed as:
where A(x, y) is the aperture function, whose value is 0 where propagation is blocked and 1 where propagation is allowed, and φ(x, y, λ) is the phase shift imparted by the diffractive optical element at plane position (x, y) to a wave field of wavelength λ:
φ(x,y,λ)=(nλ-1)H(x,y) (3)φ(x, y, λ)=(n λ -1)H(x, y) (3)
where n_λ is the refractive index of the diffractive optical element at wavelength λ; the material in this embodiment is OHARA SK1300 quartz glass; H(x, y) is the height of the designed diffractive-optical-element structure at plane position (x, y). The wave field then propagates a further distance z to the RGB image sensor plane, where it can be expressed as U₂:
where f_x and f_y are the spatial-frequency variables corresponding to the spatial position (x, y), and the distance z from the diffractive optical element to the RGB image sensor is set to 50 mm. The point spread function P is then obtained as the squared magnitude of the complex wave field on the sensor plane:
P(x, y, λ) ∝ |U₂(x, y, λ)|²    (5)
After the point spread function is obtained, the target scene image I(x, y, λ, d) for each wavelength λ is convolved with the corresponding point spread function P(x, y, λ):
which yields the modulated image I′(x, y, λ) for that wavelength and scene depth. The resulting modulated scene I′ contains the modulated image I′_{λ,d} for every combination of the wavelength set Λ and the depth set D:
I′ = {I′_{λ,d} | λ ∈ Λ, d ∈ D}    (7)
After the modulated images are captured by the RGB image sensor according to its spectral response curves R_c over the range [λ₀, λ₁], the observed modulated RGB image I_{c∈{R,G,B}}(x, y) is obtained:
where η is the sensor noise, set to Gaussian noise with mean 0 and standard deviation 0.001. The RGB sensor used in this embodiment is a FLIR GS3-U3-41S4C. The forward model of the optical encoding is thus established from the above relations and implemented in TensorFlow 2.
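For reference, the simulation settings stated in this embodiment can be collected into one configuration; the values come from the text above, while the dictionary keys themselves are illustrative.

```python
import numpy as np

# Embodiment parameters gathered for the forward-model sketch given earlier (illustrative keys).
SIMULATION_CONFIG = {
    "scene_distance_d": 1.0,                           # point source to DOE, metres
    "doe_to_sensor_z": 50e-3,                          # DOE to RGB sensor, metres
    "wavelengths": np.arange(400e-9, 701e-9, 10e-9),   # 400-700 nm sampled every 10 nm
    "doe_material": "OHARA SK1300 quartz glass",
    "sensor": "FLIR GS3-U3-41S4C",
    "noise": {"type": "gaussian", "mean": 0.0, "std": 1e-3},
}
```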
Step 102: apply quantization-aware processing to the diffractive-optical-element height map in the optical physical encoding forward model constructed in step 101, so that it takes the form of quantized steps.
In step 102 the height map of the diffractive optical element is processed in a quantization-aware manner; the quantization-aware height map can be expressed as:
H_aq = α × F(Q(H_f)) + (1 − α) × H_f    (9)
where H_f is the full-precision original height-map weight and α is the quantization-awareness control parameter, which varies with the training step; T₀ and T₁ are set to the training steps corresponding to the 5th and the 40th training epoch, respectively:
Q is the quantization operator, which quantizes the input height map H into L steps within the maximum height h_max; the number of steps L is 4:
F is the adaptive fine-tuning function:
F(Q(H_f)) = Q(H_f) + W_l    (12)
which dynamically adjusts W_l according to the adaptive quantization loss, so that the physical height of each originally uniform step is adjusted to an optimal quantized value.
Step 103: construct a computational decoding model based on a differentiable neural network, and use it to predict the reconstructed spectral image from the coded observation.
The computational decoding model in step 103 is a Res-UNet, denoted N, which predicts the reconstructed spectral image from the image I_{c∈{R,G,B}}(x, y) modulated by the optical encoding module; it can be expressed as:
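The embodiment names Res-UNet as the decoder but does not give its layer configuration, so the following is a minimal Res-UNet-style sketch: residual convolution blocks arranged in a U-Net with skip connections. The depth, channel widths, and the 31 output bands (400–700 nm at 10 nm) are illustrative choices, not the patent's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, channels):
    """Residual convolution block with a 1x1 projection on the skip path."""
    skip = layers.Conv2D(channels, 1, padding="same")(x)
    y = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(channels, 3, padding="same")(y)
    return layers.ReLU()(y + skip)

def build_res_unet(input_shape=(512, 512, 3), out_bands=31, widths=(32, 64, 128)):
    inputs = tf.keras.Input(shape=input_shape)
    x, skips = inputs, []
    for w in widths:                                       # encoder path
        x = res_block(x, w)
        skips.append(x)
        x = layers.MaxPool2D(2)(x)
    x = res_block(x, widths[-1] * 2)                       # bottleneck
    for w, s in zip(reversed(widths), reversed(skips)):    # decoder path with skip connections
        x = layers.Conv2DTranspose(w, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, s])
        x = res_block(x, w)
    outputs = layers.Conv2D(out_bands, 1, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs, name="res_unet_decoder")
```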
Step 104: jointly optimize, in a data-driven manner, the coding structure of the diffractive optical element in the diffraction snapshot spectral imaging system and the reconstruction (decoding) neural network, obtaining a relatively optimal optical coding structure and its corresponding computational decoding model. During optimization, the quantization-aware processing of step 102 makes the coding structure of the diffractive optical element take the form of quantized steps, so the quantization requirements of real manufacturing are accounted for in the optimization stage; the optimized diffractive-optical-element structure therefore matches the manufactured one, improving the reconstructed-image accuracy of the whole spectral imaging system.
In step 104, the learning rates of the optical physical encoding forward model and the computational decoding model are initialized to 10⁻² and 10⁻³, respectively, and both decay to 0.8 times their value after each training epoch on the training dataset. The batch size is set to 4, i.e. the number of images processed per optimization iteration; the optimization method is Adam; the number of iterations is 50 epochs; and quantization-aware training is set to start at the 5th epoch and end at the 40th epoch.
The end-to-end joint optimization in step 104 is implemented with the TensorFlow 2 automatic-differentiation framework, using the ICVL spectral scene dataset. The end-to-end loss function consists of a reconstructed-image loss term, an adaptive quantization term, and an L2 regularization term on the decoder weights:
where the first term is the spectral reconstruction loss, the second term is the adaptive quantization loss, ω denotes the network weights of the reconstruction module, and the scaling parameters β and γ are set to 0.01 and 0.0001, respectively.
The spectral reconstruction loss is computed as the mean absolute error (MAE) between the reconstructed image and the target spectral image I:
where K is the number of pixels in the image.
The adaptive quantization loss is expressed as the mean squared error (MSE) between the quantized height map and the original height map:
where J is the number of pixels in the height map.
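Putting the hyperparameters of this embodiment together, the joint optimization loop might look as follows (Adam; initial learning rates 10⁻² and 10⁻³ decayed by 0.8× per epoch; batch size 4; 50 epochs; quantization awareness between epochs 5 and 40; β = 0.01, γ = 10⁻⁴). The loop structure and the `optical_model`, `decoder`, and `end_to_end_loss` interfaces are illustrative; `end_to_end_loss` refers to the loss sketch given earlier.

```python
import tensorflow as tf

EPOCHS, BATCH, BETA, GAMMA = 50, 4, 0.01, 1e-4

def make_optimizer(initial_lr, steps_per_epoch):
    # Learning rate decays to 0.8x of its value after every epoch, as stated in the embodiment.
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_lr, decay_steps=steps_per_epoch, decay_rate=0.8, staircase=True)
    return tf.keras.optimizers.Adam(schedule)

def train(optical_model, decoder, dataset, steps_per_epoch):
    opt_optics = make_optimizer(1e-2, steps_per_epoch)    # optical encoding forward model
    opt_decoder = make_optimizer(1e-3, steps_per_epoch)   # computational decoding model
    t0, t1 = 5 * steps_per_epoch, 40 * steps_per_epoch    # quantization-aware training window
    step = 0
    for epoch in range(EPOCHS):
        for spectral_gt in dataset.batch(BATCH):
            with tf.GradientTape(persistent=True) as tape:
                # Illustrative interface: the optical model returns the coded RGB image and
                # the quantized / full-precision height maps used by the adaptive loss term.
                coded_rgb, h_q, h_f = optical_model(spectral_gt, step, t0, t1)
                spectral_pred = decoder(coded_rgb)
                loss = end_to_end_loss(spectral_gt, spectral_pred, h_q, h_f,
                                       decoder.trainable_weights, BETA, GAMMA)
            opt_optics.apply_gradients(
                zip(tape.gradient(loss, optical_model.trainable_weights),
                    optical_model.trainable_weights))
            opt_decoder.apply_gradients(
                zip(tape.gradient(loss, decoder.trainable_weights),
                    decoder.trainable_weights))
            step += 1
```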
Step 105: manufacture the diffractive optical element according to the optical coding structure obtained in step 104, and build a diffraction snapshot spectral imaging system with this diffractive optical element and an RGB image sensor. Because the diffractive optical element has a high coding degree of freedom, the resulting diffraction snapshot spectral imaging system is smaller and more compact.
The physical parameters of the real system in step 105 are kept identical to all physical parameters of the simulation-optimization stage, i.e. the distance from the photographed scene to the diffractive optical element is 1 m and the distance from the diffractive optical element to the sensor is 50 mm.
Step 106: photograph the target scene with the diffraction snapshot spectral imaging system built in step 105, and decode and reconstruct the observation images captured by the physical system with the computational decoding model Res-UNet obtained in step 104 to obtain the hyperspectral image of the target scene, thereby improving the reconstructed-image accuracy of the whole spectral imaging system. Diffraction snapshot spectral imaging is thus achieved.
To illustrate the effect of the invention, this embodiment compares, under identical experimental conditions, Fresnel lens encoding, CASSI encoding, a conventional jointly optimized encoder–decoder (Deep Optics), and the jointly optimized encoder–decoder of the diffraction snapshot spectral imaging method of this example.
1. Experimental conditions
The hardware test conditions of this experiment are: an Intel(R) Xeon(R) Gold 5218R CPU, 256 GB of RAM, an NVIDIA GeForce RTX 3090 GPU with 24 GB of memory, and CUDA computing library version 11.0. The hyperspectral images used for testing come from the ICVL dataset, and the input image size is 512 × 512; all test methods are implemented with TensorFlow 2.5. To keep the comparison fair, the physical parameters and reconstruction network structure of all models with different encoding schemes are identical to those described in this example.
2. Experimental results
To measure the quality of the reconstruction results quantitatively, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to assess the spatial quality and visual quality of the reconstructions, while root mean square error (RMSE) and the relative dimensionless global error in synthesis (ERGAS) are used to assess their spectral fidelity.
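A sketch of how these four metrics might be computed; the use of scikit-image for PSNR/SSIM and a resolution ratio of 1 inside ERGAS are assumptions, since the patent only names the metrics without specifying an implementation.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    """pred, gt: (H, W, L) hyperspectral cubes scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = np.mean([structural_similarity(gt[..., b], pred[..., b], data_range=1.0)
                    for b in range(gt.shape[-1])])
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    band_rmse = np.sqrt(np.mean((pred - gt) ** 2, axis=(0, 1)))
    band_mean = np.mean(gt, axis=(0, 1)) + 1e-12
    ergas = 100.0 * np.sqrt(np.mean((band_rmse / band_mean) ** 2))   # resolution ratio assumed 1
    return {"PSNR": psnr, "SSIM": ssim, "RMSE": rmse, "ERGAS": ergas}
```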
Table 1. Spectral reconstruction metrics of the different snapshot spectral imaging methods in the comparative experiment
Table 1 shows the spectral reconstruction metrics of the different snapshot spectral imaging methods on the test set after training on the ICVL dataset. In terms of both spatial image quality and spectral fidelity, the method disclosed in this embodiment of the invention performs better.
In summary, the diffraction snapshot spectral imaging method disclosed in this embodiment uses a diffractive optical element as the optical encoding part, which significantly reduces the volume and complexity of the physical system. At the same time, training the constructed simulation model in an automatic-differentiation framework to jointly optimize the encoder and decoder is data-driven: it needs no extra prior knowledge and no manual design to make the encoder and decoder fit each other, and it can usually find an encoder–decoder design that is globally optimal within its solution space. In addition, the quantization-aware method used in the joint optimization stage accounts for the quantization requirements of real physical fabrication, narrowing the gap between simulation and the physical device and thus achieving better spectral reconstruction. The spectral imaging method of this embodiment has important application value in low-light fields such as remote sensing and biomedicine, and in high-speed measurement fields such as the monitoring and tracking of dynamic objects.
The above is only a preferred embodiment of the invention and is not intended to limit it. For those skilled in the art, the invention may include various changes and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111635615.8A CN114353946B (en) | 2021-12-29 | 2021-12-29 | Diffraction snapshot spectrum imaging method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111635615.8A CN114353946B (en) | 2021-12-29 | 2021-12-29 | Diffraction snapshot spectrum imaging method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114353946A CN114353946A (en) | 2022-04-15 |
CN114353946B true CN114353946B (en) | 2023-05-05 |
Family
ID=81102545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111635615.8A Active CN114353946B (en) | 2021-12-29 | 2021-12-29 | Diffraction snapshot spectrum imaging method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114353946B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115358381B (en) * | 2022-09-01 | 2024-05-31 | 清华大学 | Optical full adder and neural network design method, device and medium |
CN116704070B (en) * | 2023-08-07 | 2023-11-14 | 北京理工大学 | Method and system for reconstructing jointly optimized image |
CN118135397A (en) * | 2024-03-01 | 2024-06-04 | 北京邮电大学 | Parameter training, acquisition method and system of visible light hyperspectral compression acquisition chip based on diffraction coding |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109883548B (en) * | 2019-03-05 | 2020-04-21 | 北京理工大学 | Coding optimization method for spectral imaging system based on optimization-inspired neural network |
CN109886898B (en) * | 2019-03-05 | 2020-10-02 | 北京理工大学 | Imaging method for spectral imaging system based on optimization-inspired neural network |
CN112985600B (en) * | 2021-02-04 | 2022-01-04 | 浙江大学 | A diffraction-based spectrally encoded imaging system and method |
CN113155284B (en) * | 2021-04-20 | 2022-07-26 | 浙江大学 | A Refractive-Diffraction Hybrid Spectral Encoding Imaging System and Method |
-
2021
- 2021-12-29 CN CN202111635615.8A patent/CN114353946B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN114353946A (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114353946B (en) | Diffraction snapshot spectrum imaging method | |
Sun et al. | Learning rank-1 diffractive optics for single-shot high dynamic range imaging | |
Yuan et al. | Snapshot compressive imaging: Theory, algorithms, and applications | |
Lohit et al. | Convolutional neural networks for noniterative reconstruction of compressively sensed images | |
Sitzmann et al. | End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging | |
CN114897752B (en) | A single lens large depth of field computational imaging system and method based on deep learning | |
CN110650340B (en) | Space-time multiplexing compressed video imaging method | |
CN115880225A (en) | Dynamic illumination human face image quality enhancement method based on multi-scale attention mechanism | |
Shamshad et al. | Compressed sensing-based robust phase retrieval via deep generative priors | |
CN109447891A (en) | A kind of high quality imaging method of the spectrum imaging system based on convolutional neural networks | |
Wen et al. | A sparse representation based joint demosaicing method for single-chip polarized color sensor | |
Dave et al. | Solving inverse computational imaging problems using deep pixel-level prior | |
KR20190036442A (en) | Hyperspectral Imaging Reconstruction Method Using Artificial Intelligence and Apparatus Therefor | |
CN107421640A (en) | Expand the multispectral light-field imaging system and method for principle based on aberration | |
CN115512192A (en) | Multispectral and hyperspectral image fusion method based on cross-scale octave convolution network | |
CN111598962A (en) | Single-pixel imaging method and device based on matrix sketch analysis | |
Zhou et al. | Lensless cameras using a mask based on almost perfect sequence through deep learning | |
Karim et al. | Spi-gan: Towards single-pixel imaging through generative adversarial network | |
Li et al. | MWDNs: reconstruction in multi-scale feature spaces for lensless imaging | |
CN111652815B (en) | A mask camera image restoration method based on deep learning | |
Zhou et al. | RDFNet: regional dynamic FISTA-Net for spectral snapshot compressive imaging | |
Alghamdi et al. | Reconfigurable snapshot HDR imaging using coded masks and inception network | |
Xu et al. | Hyperspectral image reconstruction based on the fusion of diffracted rotation blurred and clear images | |
CN116579959A (en) | Fusion imaging method and device for hyperspectral image | |
CN114742721A (en) | Calibration infrared non-uniformity correction method based on multi-scale STL-SRU residual error network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |