CN115115689A - Depth estimation method of multiband spectrum - Google Patents
Depth estimation method of multiband spectrum
- Publication number
- CN115115689A CN115115689A CN202210640198.4A CN202210640198A CN115115689A CN 115115689 A CN115115689 A CN 115115689A CN 202210640198 A CN202210640198 A CN 202210640198A CN 115115689 A CN115115689 A CN 115115689A
- Authority
- CN
- China
- Prior art keywords
- image
- edge
- original
- blurred
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
Abstract
The present invention provides a depth estimation method based on a multi-band spectrum, comprising the following steps. Step 1: use a camera to capture original images P0(x,y), P1(x,y), ..., Pi(x,y) in different wavelength bands, where i indexes the bands. Step 2: taking a single-band original image as input and h(x,y) as the convolution kernel, obtain the blurred image P'(x,y) according to the image convolution formula P'(x,y) = P(x,y) * h(x,y). Step 3: compute the edges of the original image P(x,y) and the blurred image P'(x,y) separately, and from the detected edge regions compute the original edge gradient PE_G(x,y) and the blurred edge gradient P'E_G(x,y). Step 4: divide the original edge gradient PE_G(x,y) by the blurred edge gradient P'E_G(x,y) to obtain the blur ratio σ(x,y) between the original and blurred images. Step 5: from the blur ratio σ(x,y), compute a sparse depth map according to the lens imaging formula σ(x,y) = k·D·s·(1/df − 1/d − 1/s). Step 6: repeat steps 2-5 for the original image of each remaining band and combine the focus position of each band to realize the true structure position constraint in the image.

Description
Technical Field

The present invention relates to a depth estimation method based on a multi-band spectrum.
Background

Image-based depth estimation refers to recovering the three-dimensional topography of a measured sample surface from one or more two-dimensional images. The estimated depth map can be applied in fields such as autonomous driving and the topography inspection of micro/nano structures; the problem carries significant research and application value and is an important topic in computer vision and graphics. Traditional imaging devices reconstruct from images captured under natural illumination. With a monocular camera, this approach is limited by the non-deterministic constraints within a single image, so it is difficult to recover depth information from one image alone. A multi-camera imaging system adds constraints between images, but it also increases the redundant information to be reconciled, and images captured from different angles leave blind spots in coverage, so complete depth information is still hard to recover. Deep-learning methods can, to some extent, estimate depth from a single monocular frame through training, but they require large datasets and have limited generalization ability. A method that overcomes these problems is therefore needed.
Summary of the Invention

The main technical problem to be solved by the present invention is to provide a depth estimation method based on a multi-band spectrum.
To solve the above technical problem, the present invention provides a depth estimation method based on a multi-band spectrum, comprising the following steps:

Step 1: use a camera to capture original images P0(x,y), P1(x,y), ..., Pi(x,y) in different wavelength bands, where i indexes the bands and i ≥ 2.

Step 2: taking a single-band original image as input and h(x,y) as the convolution kernel, obtain the blurred image P'(x,y) according to the image convolution formula P'(x,y) = P(x,y) * h(x,y).

Step 3: compute the edges of the original image P(x,y) and the blurred image P'(x,y) separately, and from the detected edge regions compute the original edge gradient PE_G(x,y) and the blurred edge gradient P'E_G(x,y).

Step 4: divide the original edge gradient PE_G(x,y) by the blurred edge gradient P'E_G(x,y) to obtain the blur ratio σ(x,y) between the original and blurred images.

Step 5: from the blur ratio σ(x,y), compute a sparse depth map according to the lens imaging formula σ(x,y) = k·D·s·(1/df − 1/d − 1/s), where k is a constant, D is the clear aperture of the optical system, s is the distance from the image plane to the optical system, df is the object-space focal length, and d is the object depth.

Step 6: repeat steps 2-5 for the original image of each remaining band, and finally combine the focus position of each band to realize the true structure position constraint in the image.
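To make steps 2-5 concrete, the following is a minimal NumPy sketch of the single-band pipeline. The kernel width sigma1, the edge threshold, and the optical constants k, D, s, df are all placeholder values chosen for illustration, not values specified by the patent, and the blur-to-depth inversion assumes a Gaussian re-blur kernel.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur, i.e. the re-blurring convolution of step 2."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode="same")

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def sparse_depth(img, sigma1=1.0, edge_frac=0.1,
                 k=1e4, D=5e-3, s=50e-3, df=45e-3):
    """Steps 2-5 on one band: re-blur, gradient ratio on edges, blur estimate, depth.
    k, D, s, df are hypothetical optical constants, not taken from the patent."""
    blurred = gaussian_blur(img, sigma1)                      # step 2
    g = gradient_magnitude(img)                               # step 3
    gb = gradient_magnitude(blurred)
    edges = g > edge_frac * g.max()                           # crude edge region
    ratio = np.where(edges, g / np.maximum(gb, 1e-12), 0.0)   # step 4
    depth = np.full(img.shape, np.nan)                        # sparse: NaN off edges
    valid = ratio > 1.0
    sigma_hat = sigma1 / np.sqrt(np.where(valid, ratio**2 - 1.0, 1.0))
    # thin-lens defocus model sigma = k*D*s*(1/df - 1/d - 1/s), solved for d (step 5)
    depth_valid = 1.0 / (1.0 / df - 1.0 / s - sigma_hat / (k * D * s))
    depth[valid] = depth_valid[valid]
    return depth
```

With a multi-band stack, the same function would be applied to each band's image and the per-band results fused as described in step 6.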
In a preferred embodiment, the original images of different bands in step 1 are acquired by mounting band filters of different wavelengths on a monocular camera, by using a multispectral camera capable of collecting images of multiple bands in a single exposure, or by configuring a multi-camera rig with filters to capture images in different spectral bands.
In a preferred embodiment, the convolution kernel in step 2 is any kernel that approximates the point spread function of the imaging system.

In a preferred embodiment, the point spread function is a Gaussian kernel, a Cauchy kernel, or a Gaussian-Cauchy kernel.
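The kernels named above can be constructed directly; the sketch below (the support size and normalization are implementation choices, not specified by the patent) builds a 2-D Gaussian and a 2-D Cauchy (Lorentzian) kernel on a square grid.

```python
import numpy as np

def psf_kernel(size, scale, kind="gaussian"):
    """2-D PSF approximation on a size x size grid.
    kind='gaussian' -> exp(-r^2 / (2*scale^2)); kind='cauchy' -> 1 / (1 + (r/scale)^2)."""
    assert size % 2 == 1, "use an odd support so the kernel has a centre pixel"
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    if kind == "gaussian":
        k = np.exp(-r2 / (2.0 * scale**2))
    elif kind == "cauchy":
        k = 1.0 / (1.0 + r2 / scale**2)
    else:
        raise ValueError(kind)
    return k / k.sum()  # normalise so blurring preserves mean intensity
```

A "Gaussian-Cauchy" kernel could then be formed as a weighted mixture of the two, with the mixing weight as a further free parameter.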
In a preferred embodiment, the edge gradient in step 3 is obtained as follows: the image edges may first be extracted with an edge detection operator such as Canny, and the edge gradient variation then computed with a gradient operator such as Sobel or Roberts, or with a threshold-based edge extraction method; alternatively, the edge gradient variation may be computed directly with a gradient operator or a threshold-based edge extraction method.
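The "directly use a gradient operator plus threshold" variant can be sketched in a few lines; in the example below (the threshold fraction is an arbitrary illustrative choice) the 3x3 Sobel masks are applied by sliding correlation, which for gradient magnitude is equivalent to true convolution up to a sign flip.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_same(img, k):
    """Naive 'same' sliding correlation with zero padding (small kernels only)."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros(img.shape, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def edge_gradient(img, thresh=0.2):
    """Sobel gradient magnitude, zeroed outside the thresholded edge region."""
    gx = conv2_same(img, SOBEL_X)
    gy = conv2_same(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    return np.where(mag > thresh * mag.max(), mag, 0.0)
```

The same function would be applied to both the original and the re-blurred image before forming the gradient ratio of step 4.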
In a preferred embodiment, step 4 comprises the following sub-steps:

(1) Let the perfectly sharp image be P0(x,y). The captured original image P(x,y) can be regarded as the sharp image P0(x,y) convolved once with a kernel h(x,y,σ) whose standard deviation σ is unknown but very small, while the blurred image is obtained by convolving the original image P(x,y) once with a kernel h(x,y,σ1) of known, much larger standard deviation. The blurred image P'(x,y) can therefore be regarded as the sharp image convolved twice: P'(x,y) = P0(x,y) * h(x,y,σ) * h(x,y,σ1).

(2) At edge locations, the blur ratio σ(x,y) between the original and blurred images can then be expressed as the ratio of gradient magnitudes: σ(x,y) = PE_G(x,y) / P'E_G(x,y) = √(σ² + σ1²) / σ.

Finally, the unknown blur is recovered as σ = σ1 / √(σ(x,y)² − 1).
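Under the Gaussian-kernel assumption, the relation above inverts in one line; the sketch below (a hypothetical helper, not code from the patent) recovers the unknown per-pixel blur σ from the measured gradient ratio and the known re-blur width σ1.

```python
import numpy as np

def blur_from_ratio(ratio, sigma1):
    """Invert ratio = sqrt(sigma^2 + sigma1^2) / sigma, giving
    sigma = sigma1 / sqrt(ratio^2 - 1).
    Pixels with ratio <= 1 (no measurable defocus difference) return NaN."""
    r = np.asarray(ratio, dtype=float)
    sigma = np.full_like(r, np.nan)
    ok = r > 1.0
    sigma[ok] = sigma1 / np.sqrt(r[ok]**2 - 1.0)
    return sigma
```

Note that a larger ratio (a much sharper original than re-blurred edge) maps to a smaller σ, i.e. a pixel closer to the focal plane, matching the intuition behind the lens imaging formula of step 5.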
In a preferred embodiment, step 6 comprises: exploiting the fact that each spectral band has its own focus position, obtaining the absolute distance between the image content and the focal plane of each individual band, and combining this with the known relative positions of the bands themselves to obtain the absolute depth relationship between the structures in the image.
Compared with the prior art, the technical solution of the present invention has the following beneficial effects:

According to the multi-band spectral depth estimation method of the present invention, first, because image information is collected over multiple spectral bands, several spectral images carrying different depth information can be acquired in a monocular setting, which resolves the problem of insufficient feature constraints in a single image. Second, the relative depth relationship on each spectral image is obtained through image re-blurring theory, removing the need for multi-view localization when solving for the relative depth between structures in the image. Third, taking the ratio of the original image gradient to the blurred image gradient cancels, to a certain extent, the multiplicative errors introduced by the imaging device during acquisition. Finally, combining the focal depth relationship of the different spectral bands with the relative depth solved from each single-band image achieves absolute depth recovery of the image.

Consequently, the multi-band spectral depth estimation method of the present invention is only weakly affected by the external environment and reduces interference from the imaging device and other multiplicative errors. Moreover, every step of the depth estimation is realized with basic operations between images, which greatly shortens the depth information recovery time and lowers the time complexity of the algorithm. In addition, the estimation model starts from the most basic lens imaging principle and therefore fits the baseline models of most current measurement fields, giving the proposed monocular depth estimation method a degree of universality.
Brief Description of the Drawings

Fig. 1 is a schematic diagram of out-of-focus blur imaging in a preferred embodiment of the present invention.

Fig. 2 is a flowchart of a preferred embodiment of the present invention.
Detailed Description

The technical solution of the present invention is further described below with reference to the accompanying drawings and specific embodiments.

Referring to Fig. 1 and Fig. 2, this embodiment provides a depth estimation method based on a multi-band spectrum, comprising the following steps:
Step 1: use a camera to capture original images P0(x,y), P1(x,y), ..., Pi(x,y) in different wavelength bands, where i indexes the bands and i ≥ 2.

Step 2: taking a single-band original image as input and h(x,y) as the convolution kernel, obtain the blurred image P'(x,y) according to the image convolution formula P'(x,y) = P(x,y) * h(x,y).

Step 3: compute the edges of the original image P(x,y) and the blurred image P'(x,y) separately, and from the detected edge regions compute the original edge gradient PE_G(x,y) and the blurred edge gradient P'E_G(x,y).

Step 4: divide the original edge gradient PE_G(x,y) by the blurred edge gradient P'E_G(x,y) to obtain the blur ratio σ(x,y) between the original and blurred images.

Step 5: from the blur ratio σ(x,y), compute a sparse depth map according to the lens imaging formula σ(x,y) = k·D·s·(1/df − 1/d − 1/s), where k is a constant, D is the clear aperture of the optical system, s is the distance from the image plane to the optical system, df is the object-space focal length, and d is the object depth.

Step 6: repeat steps 2-5 for the original image of each remaining band, and finally combine the focus position of each band to realize the true structure position constraint in the image.
The original images of different bands in step 1 are acquired by mounting band filters of different wavelengths on a monocular camera, by using a multispectral camera capable of collecting images of multiple bands in a single exposure, or by configuring a multi-camera rig with filters to capture images in different spectral bands.
The convolution kernel in step 2 is any kernel that approximates the point spread function of the imaging system. The point spread function is a Gaussian kernel, a Cauchy kernel, a Gaussian-Cauchy kernel, or another kernel function approximating the point spread function of an imaging system.
The edge gradient in step 3 is obtained as follows: the image edges may first be extracted with an edge detection operator such as Canny, and the edge gradient variation then computed with a gradient operator such as Sobel or Roberts, or with a threshold-based edge extraction method; alternatively, the edge gradient variation may be computed directly with a gradient operator or a threshold-based edge extraction method.
Step 4 comprises the following sub-steps:

(1) Let the perfectly sharp image be P0(x,y). The captured original image P(x,y) can be regarded as the sharp image P0(x,y) convolved once with a kernel h(x,y,σ) whose standard deviation σ is unknown but very small, while the blurred image is obtained by convolving the original image P(x,y) once with a kernel h(x,y,σ1) of known, much larger standard deviation. The blurred image P'(x,y) can therefore be regarded as the sharp image convolved twice: P'(x,y) = P0(x,y) * h(x,y,σ) * h(x,y,σ1).

(2) At edge locations, the blur ratio σ(x,y) between the original and blurred images can then be expressed as the ratio of gradient magnitudes: σ(x,y) = PE_G(x,y) / P'E_G(x,y) = √(σ² + σ1²) / σ.

Finally, the unknown blur is recovered as σ = σ1 / √(σ(x,y)² − 1).
Step 6 comprises: exploiting the fact that each spectral band has its own focus position, obtaining the absolute distance between the image content and the focal plane of each individual band, and combining this with the known relative positions of the bands themselves to obtain the absolute depth relationship between the structures in the image.
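Once each band's focal-plane distance is known, the combination described in step 6 reduces to simple bookkeeping. The sketch below is one possible interpretation, with made-up focus distances for illustration: each band's relative depth map is shifted by that band's absolute focus position, and the shifted maps are averaged, ignoring the non-edge (NaN) pixels of the sparse maps.

```python
import numpy as np

def absolute_depth(relative_depths, focus_positions):
    """relative_depths: per-band sparse depth maps measured from each band's focal
    plane (NaN where no edge was found). focus_positions: absolute distance of each
    band's focal plane from the camera. Returns the per-pixel mean absolute depth."""
    stacked = np.stack([rel + z for rel, z in zip(relative_depths, focus_positions)])
    return np.nanmean(stacked, axis=0)
```

Pixels that are NaN in every band remain NaN, so the fused result is still a sparse depth map to be densified downstream if needed.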
The above is only a preferred embodiment of the present invention, but the design concept of the present invention is not limited thereto. Any insubstantial modification of the present invention made by a person familiar with this technical field using this concept, within the technical scope disclosed by the present invention, constitutes an infringement of the protection scope of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210640198.4A CN115115689B (en) | 2022-06-08 | 2022-06-08 | A depth estimation method based on multi-band spectrum |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115115689A true CN115115689A (en) | 2022-09-27 |
CN115115689B CN115115689B (en) | 2024-07-26 |
Family
ID=83326782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210640198.4A Active CN115115689B (en) | 2022-06-08 | 2022-06-08 | A depth estimation method based on multi-band spectrum |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115115689B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049906A (en) * | 2012-12-07 | 2013-04-17 | 清华大学深圳研究生院 | Image depth extraction method |
CN103473743A (en) * | 2013-09-12 | 2013-12-25 | 清华大学深圳研究生院 | Method for obtaining image depth information |
CN107767332A (en) * | 2017-10-23 | 2018-03-06 | 合肥师范学院 | A kind of single image depth recovery method and system in real time |
US20180146847A1 (en) * | 2015-07-16 | 2018-05-31 | Olympus Corporation | Image processing device, imaging system, image processing method, and computer-readable recording medium |
CN109584210A (en) * | 2018-10-30 | 2019-04-05 | 南京理工大学 | Multispectral three-dimensional vein imaging system |
CN110942480A (en) * | 2019-11-19 | 2020-03-31 | 宁波五维检测科技有限公司 | Monocular single-frame multispectral three-dimensional imaging method |
CN111192238A (en) * | 2019-12-17 | 2020-05-22 | 南京理工大学 | Nondestructive blood vessel three-dimensional measurement method based on self-supervision depth network |
CN114463206A (en) * | 2022-01-24 | 2022-05-10 | 武汉理工大学 | Multispectral image quality improvement method based on global iterative fusion |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 