CN115222657A - Lens aberration prediction method, device, electronic device and storage medium - Google Patents
Lens aberration prediction method, device, electronic device and storage medium
- Publication number
- Publication number: CN115222657A (application number CN202210605185.3A)
- Authority
- CN
- China
- Prior art keywords
- pictures
- aberration
- neural network
- lens
- viewing angles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T5/80 — Image enhancement or restoration; geometric correction
- G06T2207/10052 — Image acquisition modality: images from a light-field camera
- G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30168 — Subject/context of image processing: image quality inspection
Abstract
Description
Technical Field
The present application relates to the technical field of image data processing, and in particular to a lens aberration prediction method and apparatus, an electronic device, and a storage medium.
Background Art
Two-dimensional imaging sensors have revolutionized almost every field, including industrial inspection, mobile devices, autonomous driving, surveillance, medical diagnostics, biology, and astronomy. Benefiting from the rapid development of the semiconductor industry, the pixel count of digital sensors has grown rapidly over the past decade. However, the practical performance of most imaging systems is now bottlenecked by the optics rather than the electronics: imperfect lenses or environmental disturbances introduce optical aberrations that degrade imaging accuracy.
AO (adaptive optics) achieves active aberration correction through deformable mirror arrays or spatial light modulators. However, current AO systems are usually complex, bulky, and expensive, and cannot be applied to lightweight systems or portable devices.
The introduction of scanning light-field imaging offers a solution for aberration estimation on lightweight systems and portable devices. The related art uses a DAO (digital adaptive optics) algorithm based on wave optics to achieve robust, versatile, and high-performance 3D imaging with a large SBP (space-bandwidth product) at low cost. Specifically, the related art physically simulates the forward and backward propagation of light: it sets an initial aberration value, computes the lens point spread function, reconstructs a high-resolution picture through deconvolution, uses that picture and the point spread function to compute the image that should have been captured, compares it with the actual sensor signal, corrects the aberration, and iterates until convergence.
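The iterative loop just described (set an initial aberration, simulate the PSF, re-render the image, compare against the sensor signal, correct, repeat) can be sketched in a toy 1-D form. This is a hedged illustration only: the PSF is modeled as a Gaussian parameterized by a single aberration value, and the high-resolution scene is assumed known, whereas the real wave-optics DAO algorithm also alternates a deconvolution step to estimate the scene.

```python
import numpy as np

def gaussian_psf(sigma, size=21):
    """Toy 1-D lens PSF parameterized by a single aberration value (its width)."""
    x = np.arange(size) - size // 2
    psf = np.exp(-x**2 / (2.0 * sigma**2))
    return psf / psf.sum()

def forward(scene, sigma):
    """Forward model: light propagation approximated as convolution with the PSF."""
    return np.convolve(scene, gaussian_psf(sigma), mode="same")

# A sparse "scene" imaged through a lens with unknown aberration (sigma = 2.5).
scene = np.zeros(128)
scene[[20, 60, 100]] = 1.0
sensor = forward(scene, 2.5)

def residual(sigma):
    """Mismatch between the re-simulated image and the actual sensor signal."""
    return np.sum((forward(scene, sigma) - sensor) ** 2)

# Iterate: correct the aberration estimate until the forward simulation converges.
sigma_est, step = 1.0, 0.2
for _ in range(60):
    if residual(sigma_est + step) < residual(sigma_est):
        sigma_est += step
    elif residual(sigma_est - step) < residual(sigma_est):
        sigma_est -= step
    else:
        step *= 0.5   # stuck: shrink the correction step and keep refining
```

Even in this toy form, the cost structure is visible: every iteration re-runs the forward simulation, which is exactly what the approach below avoids.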
However, because the related art is based on physical optics, the forward-backward propagation process is complex, computer simulation takes a long time, and convergence is not guaranteed; different processes require different iteration schemes, so robustness is poor. The deconvolution step is extremely time- and energy-consuming, and the iteration makes running it many times unavoidable and hard to parallelize. With a simple neural network, such as a plain 3D CNN, the whole pipeline overburdens the network, and its prediction of the point spread function or aberration is unstable or inaccurate, leading to poor reconstruction. These shortcomings urgently need improvement.
Summary of the Invention
The present application is made based on the inventors' recognition and discovery of the following problems:
For a gigapixel sensor, the effective pixel count of a conventional imaging system is usually limited to the megapixel level, mainly because of optical aberrations caused by imperfect lenses or environmental disturbances, which spread the light emitted from a single point over a large area of the 2D sensor. At the same time, projecting a 3D scene onto a 2D plane loses various degrees of freedom of the LF (light field), such as depth and local coherence. Obtaining high-density depth maps with integrated sensors has therefore long been a challenge.
To solve these problems, optical engineers use multiple precision-engineered lenses for sequential aberration correction to approach a perfect imaging system. However, the difficulty of optical design and fabrication grows exponentially with SBP, and the diffraction limit caps the number of effective pixels, so high-performance incoherent imaging systems with a large effective SBP, such as large-aperture telescopes and mesoscopes, are usually very expensive and bulky. To alleviate this problem, optimized lens surfaces can be fabricated with metalenses and freeform optics when sufficient machining accuracy is available at large scale.
In addition, image deblurring algorithms designed for 2D sensors can improve image contrast through accurate estimation of the PSF (point spread function), and PSF engineering with coded apertures preserves more information by reducing zeros in the frequency domain. However, these methods struggle to recover the high-frequency information lost to a low MTF (modulation transfer function), and, especially for spatially non-uniform aberrations, they usually require specific data priors and precise PSF estimation. They also remain very sensitive to dynamic environmental aberrations when the DOF (depth of field) is small.
AO achieves active aberration correction through deformable mirror arrays or spatial light modulators, redirecting rays emitted from a point at different angles to the same location on the 2D sensor. The aberrated wavefront can be measured with a guide star and a wavefront sensor, or through iterative updates driven by a specific evaluation metric. AO has achieved great success in both astronomy and microscopy and has enabled major scientific discoveries. However, the spatially non-uniform aberrations introduced by the 3D heterogeneity of the refractive index leave current AO methods with a very small effective FOV (field of view); for ground-based telescopes in particular, atmospheric turbulence limits the AO field of view to a diameter of about 40 arcseconds, making it unsuitable for large observation telescopes. Moreover, current AO systems are usually complex, bulky, and expensive, and cannot be applied to lightweight systems or portable devices.
The present application provides a method, an apparatus, an electronic device, and a storage medium for predicting the aberration introduced by an optical lens or the shooting environment, so as to solve the technical problems of the related art, in which the physical-optics-based approach makes the propagation process complex, time- and energy-consuming, and poorly robust, and the reconstruction quality falls short of expectations.
An embodiment of the first aspect of the present application provides a lens aberration prediction method, including the following steps: acquiring pictures of multiple viewing angles, wherein the pictures of the multiple viewing angles are obtained by the same imaging system and contain different angular-domain information; and inputting the pictures of the multiple viewing angles into a preset neural network, extracting at least one viewing-angle feature for each viewing angle, and predicting the aberration introduced by the optical lens or the shooting environment based on the at least one viewing-angle feature.
Optionally, in an embodiment of the present application, the method further includes: reconstructing a high-resolution picture free of aberration effects according to the pictures of the multiple viewing angles and the actual point spread function.
Optionally, in an embodiment of the present application, predicting the aberration introduced by the optical lens or the shooting environment based on the at least one viewing-angle feature includes: fitting Zernike polynomials using the aberration and preset Zernike polynomial coefficients to generate aberrations at different resolutions.
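The Zernike fitting step above can be sketched as follows, assuming the network outputs a small vector of Zernike coefficients; the mode subset, ordering, and normalization here are illustrative choices, not ones the patent specifies. Evaluating the same coefficient vector on grids of different sizes yields aberration maps at different resolutions.

```python
import numpy as np

# A few low-order Zernike modes in Cartesian form on the unit disk
# (an illustrative subset and ordering; not the full orthogonal basis).
ZERNIKE_MODES = [
    lambda x, y: np.ones_like(x),                              # piston
    lambda x, y: 2.0 * x,                                      # tilt x
    lambda x, y: 2.0 * y,                                      # tilt y
    lambda x, y: np.sqrt(3.0) * (2.0 * (x**2 + y**2) - 1.0),   # defocus
    lambda x, y: 2.0 * np.sqrt(6.0) * x * y,                   # oblique astigmatism
]

def aberration_map(coeffs, res):
    """Evaluate the Zernike expansion on a res x res grid over the unit disk."""
    ax = np.linspace(-1.0, 1.0, res)
    x, y = np.meshgrid(ax, ax)
    wavefront = sum(c * z(x, y) for c, z in zip(coeffs, ZERNIKE_MODES))
    return np.where(x**2 + y**2 <= 1.0, wavefront, 0.0)  # zero outside the pupil

coeffs = [0.0, 0.1, -0.05, 0.3, 0.02]   # hypothetical network-predicted coefficients
coarse = aberration_map(coeffs, 33)     # low-resolution map, e.g. for fast simulation
fine = aberration_map(coeffs, 257)      # high-resolution map, e.g. for reconstruction
```

Because both maps are sampled from the same smooth polynomial expansion, they agree wherever their grids coincide; the resolution is chosen per use, not re-fitted.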
Optionally, in an embodiment of the present application, acquiring the pictures of the multiple viewing angles includes: scanning data of a light-field multi-view camera to obtain multiple data pictures; and rearranging the multiple data pictures to obtain the pictures of the multiple viewing angles.
Optionally, in an embodiment of the present application, the preset neural network is a ResNet 3D neural network.
An embodiment of the second aspect of the present application provides a lens aberration prediction apparatus, including: an acquisition module configured to acquire pictures of multiple viewing angles, wherein the pictures of the multiple viewing angles are obtained by the same imaging system and contain different angular-domain information; and a prediction module configured to input the pictures of the multiple viewing angles into a preset neural network, extract at least one viewing-angle feature for each viewing angle, and predict the aberration introduced by the optical lens or the shooting environment based on the at least one viewing-angle feature.
Optionally, in an embodiment of the present application, the apparatus further includes: a reconstruction module configured to reconstruct a high-resolution picture free of aberration effects according to the pictures of the multiple viewing angles and the actual point spread function.
Optionally, in an embodiment of the present application, the prediction module includes: a fitting unit configured to fit Zernike polynomials using the aberration and preset Zernike polynomial coefficients to generate aberrations at different resolutions.
Optionally, in an embodiment of the present application, the acquisition module includes: a scanning unit configured to scan data of the light-field multi-view camera to obtain multiple data pictures; and a processing unit configured to rearrange the multiple data pictures to obtain the pictures of the multiple viewing angles.
Optionally, in an embodiment of the present application, the preset residual neural network is a ResNet 3D neural network.
An embodiment of the third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the lens aberration prediction method described in the above embodiments.
An embodiment of the fourth aspect of the present application provides a computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to execute the lens aberration prediction method described in the above embodiments.
In the embodiments of the present application, a residual neural network can be used to predict the lens aberration from multi-view pictures, and aberration fitting yields higher-resolution aberration maps. While preserving disparity-estimation accuracy, this enables high-speed, iteration-free picture reconstruction with a small memory footprint and good system robustness. This solves the technical problems of the related art, in which the physical-optics-based approach makes the simulated propagation process complex and computationally expensive, robustness poor, and the reconstruction quality below expectations.
Additional aspects and advantages of the present application will be set forth in part in the following description; in part they will become apparent from the description or be learned through practice of the present application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a lens aberration prediction method provided according to an embodiment of the present application;
FIG. 2 is a flowchart of a lens aberration prediction method according to one embodiment of the present application;
FIG. 3 is a schematic structural diagram of a lens aberration prediction apparatus provided according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present application.
Detailed Description of the Embodiments
Embodiments of the present application are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present application; they should not be construed as limiting it.
The lens aberration prediction method and apparatus, electronic device, and storage medium of the embodiments of the present application are described below with reference to the drawings. Addressing the technical problems mentioned in the Background Art, in which the physical-optics-based related art makes the simulated propagation process complex and computationally expensive, robustness poor, and the reconstruction quality below expectations, the present application provides a lens aberration prediction method in which a residual neural network predicts the lens aberration from multi-view pictures. While preserving disparity-estimation accuracy, the method achieves high-speed, iteration-free picture reconstruction with a small memory footprint and good system robustness, thereby solving the problems above.
Specifically, FIG. 1 is a schematic flowchart of a lens aberration prediction method provided by an embodiment of the present application.
As shown in FIG. 1, the lens aberration prediction method includes the following steps:
In step S101, pictures of multiple viewing angles are acquired, wherein the pictures of the multiple viewing angles are obtained by the same imaging system and contain different angular-domain information.
It can be understood that optical aberrations caused by the lens or by environmental disturbances are the main factor limiting the effective pixel count of an imaging device; for example, under the same light source, projecting a 3D scene onto a 2D plane loses degrees of freedom of the LF, such as depth and local coherence. For this reason, the present application acquires pictures from multiple viewing angles and performs aberration estimation, thereby achieving high-accuracy picture reconstruction.
Optionally, in an embodiment of the present application, acquiring pictures of multiple viewing angles includes: scanning data of a light-field multi-view camera to obtain multiple data pictures; and rearranging the multiple data pictures to obtain the pictures of the multiple viewing angles.
In practice, an embodiment of the present application can scan the data of a light-field multi-view camera to acquire pictures of multiple viewing angles. Further, the embodiment can rearrange the acquired data pictures into multi-view pictures and then stitch the multi-view images into a single two-dimensional image or stack them into a three-dimensional stack. A camera can thus collect multi-view, high-resolution frequency-domain pictures of the scanning light field, which facilitates the subsequent exploitation of correlations among the multi-view pictures to predict the camera's aberration and achieve high-quality picture reconstruction.
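The rearrangement step can be sketched as follows, under the simplifying assumption of an idealized raw frame in which each microlens covers an n_u x n_v block of pixels; real sensor data would additionally need calibration, cropping, and resampling, and the shapes here are illustrative.

```python
import numpy as np

def raw_to_views(raw, n_u, n_v):
    """Rearrange a raw light-field frame into (n_u, n_v, H, W) sub-aperture views.

    Pixel (i, j) under every microlens is gathered into one view, i.e. one
    angular sample of the scene as seen across the whole sensor.
    """
    h, w = raw.shape[0] // n_u, raw.shape[1] // n_v
    return raw.reshape(h, n_u, w, n_v).transpose(1, 3, 0, 2)

raw = np.arange(36.0).reshape(6, 6)   # toy 6x6 frame with 3x3 angular samples
views = raw_to_views(raw, 3, 3)       # 9 views of a 2x2 scene

# Either stack the views into a 3-D input volume for the network ...
stack = views.reshape(-1, 2, 2)
# ... or tile them into a single 2-D mosaic image.
mosaic = views.transpose(0, 2, 1, 3).reshape(6, 6)
```

The `stack` and `mosaic` arrays correspond to the two input layouts mentioned in the text: a 3-D stack of views, or one stitched 2-D image.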
In step S102, the pictures of the multiple viewing angles are input into a preset neural network, at least one viewing-angle feature is extracted for each viewing angle, and the aberration introduced by the optical lens or the shooting environment is predicted based on the at least one viewing-angle feature.
It can be understood that with conventional neural networks, problems such as exploding and vanishing gradients appear as the network depth increases, making the network increasingly difficult to train; the network is overburdened across the whole pipeline, and its prediction of the point spread function or aberration is unstable, which compromises the reconstruction quality. A residual network solves this effectively through skip connections: the activation of one layer can be fed forward directly to another, even much deeper, layer, providing a low-cost, efficient, and stable feature-extraction process and safeguarding the reconstruction quality.
Therefore, in the embodiments of the present application, the multi-view images obtained by rearranging the pictures of the multiple viewing angles can be stitched into a single two-dimensional image or stacked into a three-dimensional stack and then input into the preset residual neural network, so as to extract at least one viewing-angle feature for each viewing angle and predict the corresponding aberration of the camera.
Optionally, in an embodiment of the present application, predicting the aberration introduced by the optical lens or the shooting environment based on the at least one viewing-angle feature includes: fitting Zernike polynomials using the aberration and preset Zernike polynomial coefficients to generate aberrations at different resolutions.
Optionally, in an embodiment of the present application, the preset residual neural network is a ResNet 3D neural network.
In practice, the residual neural network preset in the embodiments of the present application can be a ResNet 3D neural network, such as ResNet-34, used to predict the planar gradient of the aberration. Generally speaking, ordinary residual networks are better suited to processing 2D images, and 3D models are usually represented by grid data that lacks a regular, hierarchical structure; a ResNet 3D neural network enables 3D image processing and can effectively exploit the correlations among the multi-view pictures of the scanning light field, facilitating the subsequent reconstruction of high-quality images.
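The skip connection at the heart of a residual network can be illustrated with a minimal numpy residual block operating on a (channels, views, H, W) stack. This is a toy stand-in using only 1x1x1 convolutions, not the actual ResNet 3D architecture, which the text does not detail.

```python
import numpy as np

def conv1x1x1(x, w):
    """A 1x1x1 3-D convolution is pure channel mixing: w has shape (C_out, C_in)."""
    return np.einsum("oc,cdhw->odhw", w, x)

def residual_block(x, w1, w2):
    """out = x + F(x): the identity shortcut lets signals (and, in training,
    gradients) bypass F, which is why very deep residual nets stay trainable."""
    h = np.maximum(conv1x1x1(x, w1), 0.0)   # conv + ReLU
    return x + conv1x1x1(h, w2)             # identity skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 9, 4, 4))       # (channels, 3x3 views, H, W) stack
w1 = 0.1 * rng.standard_normal((8, 8))
w2 = np.zeros((8, 8))                       # F(x) = 0: the block starts as identity
out = residual_block(x, w1, w2)
```

With the last convolution zeroed, the block reduces exactly to the identity map: stacking many such blocks cannot degrade the signal, which is the intuition behind the stability claimed above.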
Further, the embodiments of the present application can reconstruct the picture based on the predicted aberration.
Optionally, in an embodiment of the present application, the method further includes: reconstructing the camera's original picture according to the pictures of the multiple viewing angles and the actual point spread function.
In practice, the embodiments of the present application can compute the actual point spread function from the ideal point spread function and the predicted aberration picture, and obtain the picture through deconvolution with the actual point spread function.
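These two steps can be sketched with a standard Fourier-optics PSF model plus a one-shot Wiener deconvolution. This is a hedged illustration: the grid sizes, the defocus-like phase, and the regularization constant are assumptions, not the implementation described here.

```python
import numpy as np

def psf_from_aberration(phase):
    """Incoherent PSF of a circular pupil carrying the predicted phase aberration."""
    ax = np.linspace(-1.0, 1.0, phase.shape[0])
    x, y = np.meshgrid(ax, ax)
    pupil = (x**2 + y**2 <= 1.0) * np.exp(1j * phase)      # aberrated pupil function
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(64, 64)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def wiener_deconvolve(image, psf, eps=1e-3):
    """Iteration-free deconvolution in the frequency domain (Wiener-style)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    spec = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spec * np.conj(otf) / (np.abs(otf) ** 2 + eps)))

# Toy predicted defocus-like aberration -> actual PSF.
ax = np.linspace(-1.0, 1.0, 32)
X, Y = np.meshgrid(ax, ax)
psf = psf_from_aberration(2.0 * (X**2 + Y**2))

# Blur a test image with that PSF, then restore it without any iteration.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
otf = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_deconvolve(blurred, psf)
```

The single frequency-domain division replaces the repeated deconvolutions of the iterative related art, which is the speed advantage the text claims.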
The working principle of the lens aberration prediction method of the embodiments of the present application is described in detail below with a specific embodiment in conjunction with FIG. 2.
As shown in FIG. 2, the embodiment of the present application may include the following steps:
Step S201: the sensor of the scanning light-field camera collects data.
Step S202: the multi-view pictures are rearranged and stitched into a single two-dimensional image or stacked into a three-dimensional stack. It can be understood that optical aberrations caused by the lens or by environmental disturbances are the main factor limiting the effective pixel count of an imaging device; under the same light source, projecting a 3D scene onto a 2D plane loses degrees of freedom of the LF, such as depth and local coherence. For this reason, the present application acquires pictures from multiple viewing angles, rearranges them, stitches them into a single two-dimensional image or stacks them into a three-dimensional stack, and performs aberration estimation, thereby achieving high-accuracy picture reconstruction.
Step S203: the planar gradient of the aberration is predicted by ResNet 3D. In the embodiments of the present application, the pictures of the multiple viewing angles are input into a residual neural network such as ResNet 3D to extract the features of each viewing angle and predict the aberration image.
Step S204: the actual point spread function is computed. The embodiments of the present application can compute the actual point spread function from the ideal point spread function and the predicted aberration picture, and obtain the picture through deconvolution with the actual point spread function.
Step S205: the actual picture is computed. The embodiments of the present application can use the actual point spread function and the deconvolved multi-view pictures to compute and generate the actual picture, achieving high-accuracy picture reconstruction.
According to the lens aberration prediction method proposed in the embodiments of the present application, a residual neural network can be used to predict the lens aberration from multi-view pictures. While preserving disparity-estimation accuracy, the method achieves high-speed, iteration-free picture reconstruction with a small memory footprint and good system robustness, thereby solving the technical problems of the related art, in which the physical-optics-based approach makes the simulated propagation process complex and computationally expensive, robustness poor, and the reconstruction quality below expectations.
其次参照附图描述根据本申请实施例提出的镜头的像差预测装置。Next, the apparatus for predicting the aberration of a lens according to the embodiments of the present application will be described with reference to the accompanying drawings.
FIG. 3 is a schematic block diagram of a lens aberration prediction apparatus according to an embodiment of the present application.
As shown in FIG. 3, the lens aberration prediction apparatus 10 includes an obtaining module 100 and a prediction module 200.
Specifically, the obtaining module 100 is configured to obtain pictures of multiple viewing angles, where the pictures of the multiple viewing angles are obtained by the same imaging system and include different angular-domain information.
The prediction module 200 is configured to input the pictures of the multiple viewing angles into a preset neural network, extract at least one viewing-angle feature for each viewing angle, and predict, based on the at least one viewing-angle feature, the aberration introduced by the optical lens or the shooting environment.
Optionally, in an embodiment of the present application, the lens aberration prediction apparatus 10 further includes a reconstruction module.
The reconstruction module is configured to reconstruct a high-resolution picture free of aberration effects according to the pictures of the multiple viewing angles and the actual point spread function.
Optionally, in an embodiment of the present application, the prediction module 200 includes a fitting unit.
The fitting unit is configured to perform Zernike polynomial fitting using the aberration and preset Zernike polynomial coefficients to generate aberrations at different resolutions.
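The patent leaves the fitting procedure unspecified; a common realization is a least-squares fit of the aberration map against a Zernike basis, after which the fitted coefficients can be re-evaluated on a grid of any size — which is one way to "generate aberrations at different resolutions." A hypothetical sketch using only the first four modes:

```python
import numpy as np

def zernike_basis(n_pix):
    """First four Zernike-style modes (piston, tilt-x, tilt-y, defocus)
    sampled on the unit disk."""
    y, x = np.mgrid[-1:1:n_pix * 1j, -1:1:n_pix * 1j]
    r2 = x**2 + y**2
    mask = r2 <= 1.0
    basis = np.stack([np.ones_like(x), x, y, 2 * r2 - 1])
    return basis * mask, mask

# Fit coefficients to an aberration map sampled at one resolution ...
basis_lo, mask_lo = zernike_basis(32)
true_coeffs = np.array([0.1, -0.3, 0.2, 0.5])
aberration_lo = np.tensordot(true_coeffs, basis_lo, axes=1)
A = basis_lo.reshape(4, -1)[:, mask_lo.ravel()].T   # samples x modes
b = aberration_lo.ravel()[mask_lo.ravel()]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# ... then evaluate the same fit on a finer grid: the same aberration
# at a different resolution
basis_hi, _ = zernike_basis(128)
aberration_hi = np.tensordot(coeffs, basis_hi, axes=1)
print(np.allclose(coeffs, true_coeffs), aberration_hi.shape)
```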
Optionally, in an embodiment of the present application, the obtaining module 100 includes a scanning unit and a processing unit.
The scanning unit is configured to scan the data of a light-field multi-view camera to obtain multiple data pictures.
The processing unit is configured to rearrange the multiple data pictures to obtain the pictures of the multiple viewing angles.
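The patent does not detail the rearrangement, but for a lenslet-based light-field camera the standard operation is to regroup the raw sensor data so that pixels sharing the same offset under their microlens form one sub-aperture view. A minimal sketch under that assumption (the 3×3 lenslet size is illustrative only):

```python
import numpy as np

def lenslet_to_views(raw, n_u, n_v):
    """Rearrange a lenslet-array image into sub-aperture (multi-view)
    pictures: the pixel at offset (u, v) under every microlens belongs
    to view (u, v)."""
    H, W = raw.shape[0] // n_u, raw.shape[1] // n_v
    # (H, n_u, W, n_v) -> index views by the intra-lenslet offsets
    return raw.reshape(H, n_u, W, n_v).transpose(1, 3, 0, 2)

raw = np.arange(6 * 6).reshape(6, 6)   # toy 6x6 sensor, 3x3-pixel lenslets
views = lenslet_to_views(raw, 3, 3)
print(views.shape)  # (3, 3, 2, 2): a 3x3 grid of views, each 2x2 pixels
# each view samples the sensor with stride 3 at its own offset
print((views[0, 0] == raw[::3, ::3]).all())
```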
Optionally, in an embodiment of the present application, the preset residual neural network is a ResNet 3D neural network.
It should be noted that the foregoing explanations of the embodiments of the lens aberration prediction method also apply to the lens aberration prediction apparatus of this embodiment, and are not repeated here.
According to the lens aberration prediction apparatus proposed in the embodiments of the present application, a residual neural network can be used to obtain the predicted lens aberration of multi-view pictures, achieving high-speed, iteration-free picture reconstruction with a small memory burden and system robustness while ensuring the accuracy of disparity estimation. This solves the technical problems in the related art where approaches based on physical optics lead to a complex simulated propagation process, high computational complexity, poor robustness, and reconstruction results that fail to meet expectations.
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device may include:
a memory 401, a processor 402, and a computer program stored in the memory 401 and executable on the processor 402.
When executing the program, the processor 402 implements the lens aberration prediction method provided in the above embodiments.
Further, the electronic device also includes:
a communication interface 403 for communication between the memory 401 and the processor 402.
The memory 401 is configured to store a computer program executable on the processor 402.
The memory 401 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
If the memory 401, the processor 402, and the communication interface 403 are implemented independently, the communication interface 403, the memory 401, and the processor 402 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 4, but this does not mean that there is only one bus or one type of bus.
Optionally, in terms of specific implementation, if the memory 401, the processor 402, and the communication interface 403 are integrated on one chip, the memory 401, the processor 402, and the communication interface 403 may communicate with each other through internal interfaces.
The processor 402 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
This embodiment also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the above lens aberration prediction method is implemented.
In the description of this specification, reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or N embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples, provided they do not conflict with each other.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implying the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, such as two, three, etc., unless otherwise expressly and specifically defined.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or N executable instructions for implementing custom logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (an electronic device) with one or N wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically — for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary — and then stored in a computer memory.
It should be understood that the various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any of the following techniques known in the art, or a combination thereof: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210605185.3A CN115222657A (en) | 2022-05-30 | 2022-05-30 | Lens aberration prediction method, device, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115222657A true CN115222657A (en) | 2022-10-21 |
Family
ID=83608606
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||