WO2016184099A1 - Depth estimation method based on light field data distribution - Google Patents
Depth estimation method based on light field data distribution
- Publication number
- WO2016184099A1 (PCT/CN2015/098117, application CN2015098117W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- light field
- scene
- pixel
- macro
- Prior art date
Classifications
- G06T7/55 — Image analysis; depth or shape recovery from multiple images
- G06T7/557 — Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
- G06T1/0007 — General purpose image data processing; image acquisition
- G06T7/529 — Depth or shape recovery from texture
- G06T7/571 — Depth or shape recovery from multiple images from focus
- H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
- H04N5/2226 — Determination of depth image, e.g. for foreground/background separation
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/10052 — Images from lightfield camera
Definitions
- the invention relates to the field of computer vision and digital image processing, and in particular to a depth estimation method based on light field data distribution.
- Existing depth estimation methods based on light field cameras can be roughly divided into two categories: stereo matching algorithms and light field analysis algorithms.
- the traditional stereo matching algorithm directly calculates the depth using the correlation between the sub-aperture images acquired by the light field camera.
- Such algorithms are generally computationally complex, and because the low resolution of the sub-aperture images cannot meet the accuracy required for matching, the resulting depth maps are of poor quality.
- Other improved stereo matching algorithms, such as those exploiting the linearity of light propagation, still limit depth estimation performance by using only the correlation information between the viewpoint images in the light field data.
- the light field analysis method attempts to estimate the depth by using both the consistency of the viewpoint images and the focal length information contained in the light field data.
- This type of algorithm defines a different cost function for each cue and combines the depth estimates obtained from the two cues so that they complement each other, improving the accuracy of the final result.
- the depth estimated by this type of algorithm lacks detailed information, and there is still room for improvement in accuracy and consistency.
- the idea of the present invention is, with full reference to the characteristics of light field data, to estimate scene depth by extracting a focus-depth-related tensor from a series of refocused light field images obtained by changing the pixel distribution of the input light field image. Further, the variation trend of this tensor with depth and the gradient information of the central sub-aperture texture map of the scene are used to establish a multivariate credibility model that measures the quality of the initial depth at each point, so that accurate estimates compensate for inaccurate ones when optimizing the preliminary result. The goal is to compute high-quality depth images from light field camera data.
- Step S2 is repeated to obtain the scene depth of all macro pixels.
- the pixel distribution of the input light field image is adjusted using a point spread function.
- the method further includes the step S3 of globally optimizing the scene depth obtained in step S2 according to the reliability model.
- the step S3 of globally optimizing the scene depth obtained in step S2 according to the credibility model includes: using the scene depth obtained in step S2 as an initial input.
- the Markov random field is used for optimization.
- the specific optimization method includes: evaluating the depth of each point according to the credibility model, using highly accurate depth estimates to correct inaccurate depths, improving the consistency of depth estimates in homogeneous regions, and preserving depth boundaries.
- the credibility model is a multivariate credibility model including a first part for measuring the accuracy of the scene depth, and a second part for measuring the consistency of the scene depth in non-boundary regions and its abruptness at boundary regions.
- the first part of the multivariate credibility model is C1(x, y).
- R_z*(x, y) and R_z'(x, y) are the global minimum and the second (local) minimum of the intensity range R_z(x, y) as a function of scene depth, and z* and z' are the scene depths corresponding to those two minimum points.
- the second part of the multivariate credibility model is based on gradient information of the central sub-aperture texture map; the depth estimation method further includes the steps of respectively acquiring the central sub-aperture texture maps of the plurality of refocused light field images, and of calculating credibility through the second part of the multivariate credibility model using the acquired texture maps.
- the present invention estimates scene depth by extracting a focus-depth-related tensor from a series of refocused light field images obtained by transforming the pixel distribution of the input image.
- the variation trend of the tensor with depth and the gradient information of the central sub-aperture texture map of the scene are used to define the accuracy and consistency terms of the multivariate credibility model, which measures the initial depth in order to further optimize the depth estimate.
- the scene texture and spatial information collected by a light field camera such as Lytro can be fully utilized to obtain a scene depth estimation with rich details, clear features, high accuracy and consistency.
- FIG. 1 is a flow chart of some embodiments of a depth estimation method based on light field data distribution according to the present invention.
- some embodiments of a depth estimation method based on a light field data distribution include the following steps:
- S1 Adjust a pixel distribution of the input light field image to generate a plurality of refocused light field images of different focal lengths.
- the input light field image is first subjected to pre-correction processing to remove peripheral points of each macro pixel that fail to capture valid data information, thereby preventing meaningless pixel values from interfering with subsequent processing.
- the point spread function (PSF) is used to adjust the pixel position distribution of the corrected light field image L o , and the refocusing process on the input light field image is as follows:
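The refocusing formula itself is rendered as an image in the original and is not reproduced here. As a rough illustration only (not the patent's exact PSF-based formulation), digital refocusing of a 4D light field can be sketched as shift-and-add over the sub-aperture images; the `(U, V, X, Y)` array layout and the focal-plane parameter `alpha` are assumptions of this sketch:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field (illustrative sketch).

    lf: array of shape (U, V, X, Y) -- sub-aperture images indexed by
        angular coordinates (u, v) and spatial coordinates (x, y).
    alpha: relative focal-plane parameter; each sub-aperture image is
        shifted in proportion to its angular offset before averaging.
    """
    U, V, X, Y = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # integer shift proportional to the angular offset from centre
            dx = int(round(alpha * (u - cu)))
            dy = int(round(alpha * (v - cv)))
            out += np.roll(np.roll(lf[u, v], dx, axis=0), dy, axis=1)
    return out / (U * V)
```

With `alpha = 0` no sub-aperture image is shifted, so the result is simply the angular average, i.e. the image focused at the original focal plane.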
- the macro pixel corresponds to a point in the actual scene, and the intensity range of the macro pixel is a variation range of the intensity values of all points in the macro pixel.
- Each microlens of the microlens array of the light field camera represents a subaperture at a certain angle with respect to the main lens of the camera.
- the macro pixel in the light field data corresponds to a point in the actual scene, and the macro pixel records, for the corresponding scene point, the angular information projected through the entire microlens array, that is, the intensity value and the distribution position of each point within the macro pixel.
- L_z denotes one of a series of light field images, each refocused at a different focal plane z.
- as the focal plane changes, the intensity values of the points in the macro pixel change, and so does the intensity range of the entire macro pixel. Therefore, depth is determined by using the macro pixel intensity range as the depth-related tensor.
- the macro pixel intensity range is extracted as R_z(x, y) = max_{(u,v)∈M} I(x, y, u, v) − min_{(u,v)∈M} I(x, y, u, v), where
- I(x, y, u, v) is the intensity value of the point (u, v) within the microlens at coordinates (x, y) (corresponding to the macro pixel (x, y) in the image plane L_z), and
- M represents the set of all points within that microlens.
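The extraction above can be sketched directly, assuming the refocused image is stored as an `(X, Y, U, V)` array (this layout is an assumption of the sketch):

```python
import numpy as np

def macro_pixel_ranges(lz):
    """R_z(x, y): intensity range of every macro pixel in one refocused image.

    lz: array (X, Y, U, V) -- I(x, y, u, v) is the intensity of point
    (u, v) inside the microlens at (x, y); the set M is all (u, v)
    under that microlens.
    """
    # max minus min over the angular (microlens) coordinates
    return lz.max(axis=(2, 3)) - lz.min(axis=(2, 3))
```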
- when the light field image is focused exactly at the depth of a scene point, that point is accurately projected onto the image plane through the sub-aperture at every angle; that is, every angular projection reflects the texture value of the point, so the intensity range of the points in the corresponding macro pixel, and thus the macro pixel intensity range, is the smallest.
- the focal length of the light field image L_z that focuses on a scene point therefore reflects the depth of that point, giving the initial scene depth estimate D_initial(x, y) for the macro pixel (x, y) as the depth z* that minimizes R_z(x, y).
- by repeating step S2, the scene depths of all macro pixels can be obtained.
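Step S2 can be sketched end-to-end: given the intensity ranges of each macro pixel across the refocus stack, the initial depth is the focal depth that minimizes the range (the array names here are assumptions of the sketch):

```python
import numpy as np

def initial_depth(ranges, focal_depths):
    """Initial scene depth per macro pixel (step S2).

    ranges: array (Z, X, Y) -- macro-pixel intensity ranges R_z(x, y)
            for each refocused image L_z in the stack.
    focal_depths: length-Z sequence of the focal depths of the stack.
    For each macro pixel, pick the focal depth whose refocused image
    gives the minimum intensity range.
    """
    z_star = np.argmin(ranges, axis=0)                # index of minimum R_z
    return np.asarray(focal_depths, dtype=float)[z_star]
```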
- step S3. Perform global optimization on the depth of the scene obtained in step S2 according to the credibility model.
- the multivariate credibility model includes a first part for measuring the accuracy of the scene depth, and a second part for measuring the consistency of the scene depth in non-boundary regions and its abruptness at boundary regions.
- the multivariate credibility model is established as follows. First, a unary credibility (i.e., the first part of the model) is defined to measure the accuracy of the depth estimate. Analysis of the variation curve of each point's intensity range R_z(x, y) with depth D shows that the Euclidean distance between the global minimum point and the second (local) minimum point of the curve is positively correlated with the accuracy of that point's depth estimate D_initial(x, y). The credibility C1 assigned to the accuracy of each point's depth estimate is therefore as follows:
- R_z*(x, y) and R_z'(x, y) are the global minimum and the second (local) minimum of the intensity curve R_z(x, y) as a function of depth D, and z* and z' are their respective depths.
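The C1 formula itself is rendered as an image in the original. Based on the stated definition (the Euclidean distance between the global minimum point and the second local minimum point of the R_z curve), one plausible sketch is the following; the fallback value when no competing local minimum exists is an assumption:

```python
import numpy as np

def c1_confidence(range_curve, depths):
    """First (accuracy) term of the credibility model for one macro pixel.

    range_curve: R_z values for the macro pixel across the focal stack.
    depths: corresponding scene depths z.
    Confidence is taken as the Euclidean distance, in the (z, R) plane,
    between the global minimum of the curve and its second-lowest local
    minimum: a deep, isolated minimum suggests a reliable depth estimate.
    """
    r = np.asarray(range_curve, dtype=float)
    z = np.asarray(depths, dtype=float)
    i_star = int(np.argmin(r))                       # global minimum -> z*
    interior = np.arange(1, len(r) - 1)
    # local minima: strictly lower than both neighbours
    is_min = (r[interior] < r[interior - 1]) & (r[interior] < r[interior + 1])
    cands = [int(i) for i in interior[is_min] if i != i_star]
    if not cands:
        # no competing minimum: treat the estimate as maximally reliable
        return float("inf")
    i_prime = min(cands, key=lambda i: r[i])         # second minimum -> z'
    return float(np.hypot(z[i_star] - z[i_prime], r[i_star] - r[i_prime]))
```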
- another unary credibility (i.e., the second part of the multivariate credibility model) measures the consistency of the estimated depth D_initial in non-boundary regions and its abruptness at boundary regions, based on the gradient information of the central sub-aperture texture image.
- exploiting the fact that depth varies gently in non-boundary regions and changes abruptly at boundaries, the second unary credibility C2 is defined as follows:
- the second portion of the multivariate credibility model is based on gradient information of the central subaperture texture map.
- the depth estimation method further comprises the steps of respectively acquiring the central sub-aperture texture maps of the plurality of refocused light field images, and of calculating credibility through the second part of the multivariate credibility model using the acquired texture maps. Specifically, since angular information is recorded at each point within a macro pixel, the image formed by the center points of all macro pixels is the central sub-aperture texture map.
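As an illustration of the last point, the central sub-aperture texture map is simply the image of each macro pixel's centre point, and a gradient-magnitude map can serve as a sketch of the C2 term (the patent's exact C2 formula appears as an image in the original and is not reproduced here; the `(X, Y, U, V)` layout is an assumption):

```python
import numpy as np

def central_subaperture(lf):
    """Texture map formed by the centre point of every macro pixel.

    lf: array (X, Y, U, V); the angular centre (U//2, V//2) of each
    microlens is taken as the central sub-aperture sample.
    """
    X, Y, U, V = lf.shape
    return lf[:, :, U // 2, V // 2]

def c2_sketch(texture):
    """Gradient magnitude of the central sub-aperture texture map:
    small in homogeneous (non-boundary) regions, large at boundaries."""
    gx, gy = np.gradient(texture.astype(float))
    return np.hypot(gx, gy)
```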
- the step of global optimization comprises: using the scene depth D_initial obtained in step S2 as the initial input, optimized with a Markov random field (MRF).
- the optimization principle is to improve the accuracy and consistency of the depth estimation and to preserve clear boundary features.
- the specific optimization method includes: evaluating the depth of each point according to the credibility model, using high-accuracy depth estimates to correct inaccurate depths, improving the consistency of depth estimates in homogeneous regions, and preserving depth boundaries.
- the final depth estimate D final is as follows:
- λ_flat and λ_smooth are the parameters of the Laplacian constraint term and the second-order differential term, which respectively control the smoothness and continuity of the final depth estimate D_final.
- the error between D_final and the constraint terms can be calculated, and an error matrix constructed to minimize formula (8), thereby further optimizing the depth estimation result.
- global optimization using a Markov random field is only a preferred approach; the present invention can also adopt other global optimization methods, such as graph-cut-based multi-label optimization, joint discrete-continuous optimization, and the like.
- Some embodiments described above make full use of the characteristics of light field data: they estimate scene depth by extracting a focus-depth-related tensor from a series of refocused light field images obtained by transforming the pixel distribution of the input light field image, and
- use the variation trend of the tensor with depth together with the gradient information of the central sub-aperture texture map of the scene to define the multivariate credibility model that measures the accuracy and consistency of the initial depth, further optimizing the depth estimate.
- the scene texture and spatial information collected by a light field camera such as Lytro can be fully utilized to obtain indoor and outdoor scene depth estimation with rich details, clear features, high accuracy and consistency.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Claims (7)
- A depth estimation method based on light field data distribution, characterized by comprising the following steps: S1. adjusting the pixel distribution of an input light field image to generate a plurality of refocused light field images with different focal lengths; S2. for the plurality of refocused light field images, respectively extracting the intensity range of one and the same macro pixel, then selecting the refocused light field image corresponding to the minimum intensity range and taking the focal length of that refocused light field image as the scene depth of the macro pixel, where the macro pixel corresponds to a point in the actual scene and the intensity range of the macro pixel is the variation range of the intensity values of all points within the macro pixel; and repeating step S2 to obtain the scene depths of all macro pixels.
- The depth estimation method based on light field data distribution according to claim 1, characterized in that in step S1, a point spread function is used to adjust the pixel distribution of the input light field image.
- The depth estimation method based on light field data distribution according to claim 1, characterized by further comprising a step S3 of globally optimizing the scene depths obtained in step S2 according to a credibility model.
- The depth estimation method based on light field data distribution according to claim 3, characterized in that the step S3 of globally optimizing the scene depths obtained in step S2 according to the credibility model comprises: taking the scene depths obtained in step S2 as the initial input and optimizing with a Markov random field, the specific optimization method comprising: evaluating the depth of each point according to the credibility model, using highly accurate depth estimates to correct inaccurate depths, improving the consistency of depth estimates in homogeneous regions, and preserving depth boundaries.
- The depth estimation method based on light field data distribution according to claim 3, characterized in that the credibility model is a multivariate credibility model comprising a first part for measuring the accuracy of the scene depth, and a second part for measuring the consistency of the scene depth in non-boundary regions and its abruptness at boundary regions.
- The depth estimation method based on light field data distribution according to claim 5, characterized in that the second part of the multivariate credibility model is based on gradient information of the central sub-aperture texture map; the depth estimation method further comprises steps of respectively acquiring the central sub-aperture texture maps of the plurality of refocused light field images, and of calculating credibility through the second part of the multivariate credibility model using the acquired central sub-aperture texture maps.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/809,769 US10346997B2 (en) | 2015-05-15 | 2017-11-10 | Depth estimation method based on light-field data distribution |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510251234.8 | 2015-05-15 | ||
CN201510251234.8A CN104899870B (zh) | 2015-05-15 | Depth estimation method based on light field data distribution
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/809,769 Continuation US10346997B2 (en) | 2015-05-15 | 2017-11-10 | Depth estimation method based on light-field data distribution |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016184099A1 true WO2016184099A1 (zh) | 2016-11-24 |
Family
ID=54032515
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/098117 WO2016184099A1 (zh) | 2015-12-21 | Depth estimation method based on light field data distribution
Country Status (3)
Country | Link |
---|---|
US (1) | US10346997B2 (zh) |
CN (1) | CN104899870B (zh) |
WO (1) | WO2016184099A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615650A (zh) * | 2018-11-22 | 2019-04-12 | Zhejiang Gongshang University | A light field flow estimation method based on the variational method and occlusion complementarity |
CN110415209A (zh) * | 2019-06-12 | 2019-11-05 | Northeast Agricultural University | A dairy cow feed intake monitoring method based on light field visual depth estimation |
CN112288669A (zh) * | 2020-11-08 | 2021-01-29 | Northwestern Polytechnical University | A point cloud map acquisition method based on light field imaging |
CN117274067A (zh) * | 2023-11-22 | 2023-12-22 | Zhejiang Youzhong New Materials Technology Co., Ltd. | A blind super-resolution processing method and system for light field images based on reinforcement learning |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104899870B (zh) | 2015-05-15 | 2017-08-25 | Graduate School at Shenzhen, Tsinghua University | Depth estimation method based on light field data distribution |
CN106384338B (zh) * | 2016-09-13 | 2019-03-15 | Graduate School at Shenzhen, Tsinghua University | A morphology-based enhancement method for light field depth images |
CN107038719A (zh) * | 2017-03-22 | 2017-08-11 | Graduate School at Shenzhen, Tsinghua University | Depth estimation method and system based on angular-domain pixels of light field images |
CN111406412B (zh) | 2017-04-11 | 2021-09-03 | Dolby Laboratories Licensing Corporation | Layered augmented entertainment experiences |
CN107702660A (zh) * | 2017-04-22 | 2018-02-16 | Guangzhou Shougan Optoelectronic Technology Co., Ltd. | Micro-scale three-dimensional industrial inspection system based on light field microscopic imaging |
CN107610170B (zh) * | 2017-08-04 | 2020-05-19 | Institute of Automation, Chinese Academy of Sciences | Depth acquisition method and system for multi-view image refocusing |
CN107909578A (zh) * | 2017-10-30 | 2018-04-13 | University of Shanghai for Science and Technology | Light field image refocusing method based on a hexagonal stitching algorithm |
CN107726993B (zh) * | 2017-10-30 | 2019-07-19 | University of Shanghai for Science and Technology | Particle depth measurement method based on maximum-gradient regions of light field image macro pixels |
CN108230223A (zh) * | 2017-12-28 | 2018-06-29 | Tsinghua University | Light field angular super-resolution method and device based on convolutional neural networks |
CN111869205B (zh) | 2018-01-19 | 2022-06-10 | PCMS Holdings, Inc. | Multifocal planes with varying positions |
CN108596965B (zh) * | 2018-03-16 | 2021-06-04 | Tianjin University | A light field image depth estimation method |
CN110349196B (zh) * | 2018-04-03 | 2024-03-29 | MediaTek Inc. | Depth fusion method and apparatus |
EP4270944A3 (en) * | 2018-07-06 | 2024-01-03 | InterDigital VC Holdings, Inc. | Method and system for forming extended focal planes for large viewpoint changes |
CN109064505B (zh) * | 2018-07-26 | 2020-12-25 | Graduate School at Shenzhen, Tsinghua University | A depth estimation method based on sliding-window tensor extraction |
CN110120071B (zh) * | 2019-05-15 | 2023-03-24 | Nanjing Institute of Technology | A depth estimation method for light field images |
CN111679337B (zh) * | 2019-10-15 | 2022-06-10 | Shanghai University | Scattering background suppression method for an underwater active laser scanning imaging system |
CN111784620B (zh) * | 2020-07-06 | 2023-05-16 | Taiyuan University of Science and Technology | All-in-focus image fusion algorithm for light field cameras with spatial information guiding angular information |
CN112215879A (zh) * | 2020-09-25 | 2021-01-12 | Beijing Jiaotong University | A depth extraction method for light field epipolar plane images |
CN112288789B (zh) * | 2020-10-26 | 2024-03-29 | Hangzhou Dianzi University | Self-supervised light field depth learning method based on iterative optimization of occluded regions |
CN113298943A (zh) * | 2021-06-10 | 2021-08-24 | Northwestern Polytechnical University | An ESDF map construction method based on light field imaging |
CN113808019A (zh) * | 2021-09-14 | 2021-12-17 | Guangdong Sanshui Institute of Hefei University of Technology | A non-contact measurement system and method |
CN114511605B (zh) * | 2022-04-18 | 2022-09-02 | Tsinghua University | Light field depth estimation method and apparatus, electronic device, and storage medium |
CN115100269B (zh) * | 2022-06-28 | 2024-04-23 | University of Electronic Science and Technology of China | Light field image depth estimation method, system, electronic device, and storage medium |
CN116449049A (zh) * | 2023-03-29 | 2023-07-18 | Nanjing University of Aeronautics and Astronautics | Three-dimensional flow field measurement method and system based on multi-color depth coding and a light field camera |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102314683A (zh) * | 2011-07-15 | 2012-01-11 | Tsinghua University | Computational imaging method and imaging device based on a non-planar image sensor |
CN104079827A (zh) * | 2014-06-27 | 2014-10-01 | Institute of Automation, Chinese Academy of Sciences | An automatic refocusing method for light field imaging |
CN104463949A (zh) * | 2014-10-24 | 2015-03-25 | Zhengzhou University | A fast three-dimensional reconstruction method and system based on light field digital refocusing |
US9025895B2 (en) * | 2011-09-28 | 2015-05-05 | Pelican Imaging Corporation | Systems and methods for decoding refocusable light field image files |
CN104899870A (zh) * | 2015-05-15 | 2015-09-09 | Graduate School at Shenzhen, Tsinghua University | Depth estimation method based on light field data distribution |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102809918B (zh) * | 2012-08-08 | 2014-11-05 | Zhejiang University | High-resolution holographic three-dimensional display device and method based on multi-layer spatial light modulators |
US9658443B2 (en) * | 2013-03-15 | 2017-05-23 | The Board Of Trustees Of The Leland Stanford Junior University | Optics apparatus with detection of light rays received at different angles for output indicative of aliased views |
US9524556B2 (en) * | 2014-05-20 | 2016-12-20 | Nokia Technologies Oy | Method, apparatus and computer program product for depth estimation |
CN104050662B (zh) * | 2014-05-30 | 2017-04-12 | Graduate School at Shenzhen, Tsinghua University | Method for directly obtaining a depth map with a single exposure of a light field camera |
-
2015
- 2015-05-15 CN CN201510251234.8A patent/CN104899870B/zh active Active
- 2015-12-21 WO PCT/CN2015/098117 patent/WO2016184099A1/zh active Application Filing
-
2017
- 2017-11-10 US US15/809,769 patent/US10346997B2/en active Active
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615650A (zh) * | 2018-11-22 | 2019-04-12 | Zhejiang Gongshang University | A light field flow estimation method based on the variational method and occlusion complementarity |
CN109615650B (zh) * | 2018-11-22 | 2022-11-25 | Zhejiang Gongshang University | A light field flow estimation method based on the variational method and occlusion complementarity |
CN110415209A (zh) * | 2019-06-12 | 2019-11-05 | Northeast Agricultural University | A dairy cow feed intake monitoring method based on light field visual depth estimation |
CN112288669A (zh) * | 2020-11-08 | 2021-01-29 | Northwestern Polytechnical University | A point cloud map acquisition method based on light field imaging |
CN112288669B (zh) * | 2020-11-08 | 2024-01-19 | Northwestern Polytechnical University | A point cloud map acquisition method based on light field imaging |
CN117274067A (zh) * | 2023-11-22 | 2023-12-22 | Zhejiang Youzhong New Materials Technology Co., Ltd. | A blind super-resolution processing method and system for light field images based on reinforcement learning |
Also Published As
Publication number | Publication date |
---|---|
US10346997B2 (en) | 2019-07-09 |
CN104899870B (zh) | 2017-08-25 |
CN104899870A (zh) | 2015-09-09 |
US20180114328A1 (en) | 2018-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016184099A1 (zh) | Depth estimation method based on light field data distribution | |
EP3158532B1 (en) | Local adaptive histogram equalization | |
CN107995424B (zh) | Method for generating all-in-focus light field images based on depth maps | |
CN107316326B (zh) | Edge-based disparity map calculation method and device applied to binocular stereo vision | |
WO2018024006A1 (zh) | Rendering method and system for a focused light field camera | |
WO2014044126A1 (zh) | Coordinate acquisition device, real-time three-dimensional reconstruction system and method, and stereoscopic interactive device | |
WO2020007320A1 (zh) | Multi-view image fusion method and apparatus, computer device, and storage medium | |
US8144974B2 (en) | Image processing apparatus, method, and program | |
CN109345502B (zh) | Stereo image quality evaluation method based on extraction of disparity-map stereo structure information | |
CN107084680B (zh) | Target depth measurement method based on machine monocular vision | |
TWI529661B (zh) | Method of quickly building a depth map and image processing device | |
CN110120071B (zh) | A depth estimation method for light field images | |
WO2015184978A1 (zh) | Camera control method and device, and camera | |
TW201214335A (en) | Method and arrangement for multi-camera calibration | |
WO2018053952A1 (zh) | Film and television image depth extraction method based on a scene sample library | |
KR101714224B1 (ko) | Sensor-fusion-based three-dimensional image reconstruction device and method | |
US8340399B2 (en) | Method for determining a depth map from images, device for determining a depth map | |
US9635245B2 (en) | Focus position detection device and focus position detection method | |
CN115375745A (zh) | Absolute depth measurement method based on disparity angles of polarized-microlens light field images | |
TWI538476B (zh) | Stereoscopic photography system and method | |
JP7312026B2 (ja) | Image processing apparatus, image processing method, and program | |
CN112132925A (zh) | Method and apparatus for reconstructing underwater image color | |
JP2009293971A (ja) | Distance measuring device, method, and program | |
JP2017011397A (ja) | Image processing apparatus, image processing method, and program | |
CN114998522 (zh) | Method and system for accurate dense point cloud extraction of indoor scenes from multi-view continuous light field images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15892473 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15892473 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.05.2018) |