CN103955954A - Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene - Google Patents
- Publication number
- CN103955954A CN103955954A CN201410161575.1A CN201410161575A CN103955954A CN 103955954 A CN103955954 A CN 103955954A CN 201410161575 A CN201410161575 A CN 201410161575A CN 103955954 A CN103955954 A CN 103955954A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to a method for reconstructing a high-resolution depth image by combining a stereo image pair of the same scene. More and more applications rely on accurate and fast observation and analysis of depth images of real scenes. A time-of-flight depth camera can acquire the depth image of a scene in real time, but owing to hardware limitations the acquired depth image has a relatively low resolution and cannot meet the needs of practical applications. Stereo matching is a classic approach to obtaining depth images, but because of occlusions between the left and right images and the influence of textureless regions, stereo matching algorithms have serious limitations in practice. The method of the invention fully exploits the respective advantages of the time-of-flight depth camera and of stereo matching, and proposes a high-resolution depth image reconstruction method that combines a time-of-flight depth camera with a stereo image pair of the same scene, which overcomes the deficiencies of the prior art and reconstructs a high-resolution, high-quality depth image.
Description
Technical Field
The invention belongs to the field of computer vision, and in particular relates to a method for reconstructing a high-resolution depth image by combining a stereo image pair of the same scene.
Background
Obtaining the depth image of a scene is an important task in computer vision. Each pixel value in a depth image represents the distance between a point in the scene and the camera. More and more applications, such as 3D reconstruction, collision detection, gesture recognition, robot navigation, industrial automation, and the design and modeling of virtual scenes in movies and games, rely on accurate observation and analysis of depth images of real scenes. At present, depth images are mainly acquired in two ways: 1) computing the depth image by stereo matching; 2) observing the depth image directly with a measuring instrument.
A stereo matching algorithm computes the disparity between corresponding points in the left and right images to obtain the depth image of the scene; stereo matching is a classic family of methods for obtaining scene depth images. However, because of occlusions between the left and right images and the influence of textureless regions, stereo matching algorithms have serious limitations in practice. The time-of-flight depth camera is the main device for measuring depth images directly: it emits light pulses into the scene and uses a high-speed shutter to compute the round-trip time of the pulses, thereby determining the distance of objects in the scene, so a time-of-flight camera can quickly obtain depth information for the whole scene. However, owing to hardware limitations, the depth map produced by a time-of-flight camera has a relatively low resolution, which makes it difficult to meet the needs of practical applications.
Summary of the Invention
The purpose of the present invention is to overcome the deficiencies of the prior art by proposing a high-resolution depth image reconstruction method that combines a stereo image pair of the same scene. The method combines the low-resolution depth image acquired by a time-of-flight depth camera with a high-resolution color stereo image pair of the same scene to reconstruct a high-resolution, high-quality depth image. The specific steps are:
Step (1): construct non-local filtering weights and filter the input low-resolution depth image.
Let G denote the low-resolution depth image acquired by the time-of-flight depth camera, of size n×m; let L denote the high-resolution color left image of the same scene acquired by the left CCD camera, of size rn×rm; and let R denote the corresponding high-resolution color right image acquired by the right CCD camera, of size rn×rm.
The non-local filtering weights are constructed as

w_ij = exp(−‖M_i^L − M_j^L‖² / (K·h_L²)) · exp(−‖M_i^I − M_j^I‖² / (K·h_I²))

where I is the result of bilinearly interpolating the input low-resolution depth image to size rn×rm, M_i^L denotes the local patch centered at L_i, M_i^I denotes the local patch centered at I_i, K is the number of pixels in a patch, and h_L and h_I are filter parameters that control the decay rate of the exponential terms in the weighting.
With the non-local filtering weights w_ij obtained above, the bilinear interpolation result I is filtered according to

F′_i = Σ_j w_ij · I_j / Σ_j w_ij
The resulting initial high-resolution depth image is denoted F′, of size rn×rm.
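A minimal sketch of this color-guided non-local filtering step, under simplifying assumptions: a single-channel guidance image, a square patch, and a restricted search window around each pixel (the function name, the `window` parameter, and the default patch/parameter values are illustrative, not from the patent):

```python
import numpy as np

def nonlocal_upsample(I, L, patch=5, h_L=15.0, h_I=20.0, window=7):
    """Filter the bilinearly upsampled depth I (rn x rm) with non-local
    weights w_ij driven jointly by patch similarity in the high-resolution
    color image L and in I itself, then normalize."""
    rn, rm = I.shape
    pad = patch // 2
    K = patch * patch                     # pixels per patch
    Ip = np.pad(I, pad, mode='edge')
    Lp = np.pad(L, pad, mode='edge')
    F = np.zeros_like(I)
    half = window // 2
    for y in range(rn):
        for x in range(rm):
            Mi_I = Ip[y:y + patch, x:x + patch]   # patch centered at (y, x)
            Mi_L = Lp[y:y + patch, x:x + patch]
            wsum, acc = 0.0, 0.0
            for v in range(max(0, y - half), min(rn, y + half + 1)):
                for u in range(max(0, x - half), min(rm, x + half + 1)):
                    Mj_I = Ip[v:v + patch, u:u + patch]
                    Mj_L = Lp[v:v + patch, u:u + patch]
                    w = np.exp(-np.sum((Mi_L - Mj_L) ** 2) / (K * h_L ** 2)) \
                      * np.exp(-np.sum((Mi_I - Mj_I) ** 2) / (K * h_I ** 2))
                    wsum += w
                    acc += w * I[v, u]
            F[y, x] = acc / wsum          # normalized weighted average
    return F
```

Restricting the search to a local window keeps the cost tractable; a full non-local search would simply widen the inner loops.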
Step (2): taking the left image as the reference image and the right image as the target image, combine the initial high-resolution depth image F′ to construct local stereo matching weights and compute the disparity map D_L of the left image.
(a) Combining with F′, the local stereo matching weights are constructed as

w_ij = exp(−‖L_i − L_j‖/σ_c) · exp(−|F′_i − F′_j|/σ_d), j∈Ω(i)

where j∈Ω(i) means that j lies within the local window of i; the parameters σ_c and σ_d, which control the decay rate of the exponential terms, are selected as follows:
① compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current σ_c;
② compute the depth differences between the center point and the other points in the current local window of F′, and take the median of these differences as the current σ_d;
(b) The matching cost function between the left image L and the right image R is constructed as

C(i, i−d) = Σ_{j∈Ω(i)} w_ij · e(j, j−d) / Σ_{j∈Ω(i)} w_ij

where the matching cost C(i, i−d) expresses, when point i in the left image has disparity d, the degree of match between i and its corresponding point i−d in the right image; j∈Ω(i) means that j lies within the local window of i, and j−d lies within the local window of i−d. The color and gradient differences between corresponding points are used to define the matching error:

e(j, j−d) = α · min(‖L_j − R_{j−d}‖, τ_1) + (1−α) · min(‖∇L_j − ∇R_{j−d}‖, τ_2)
where the parameter α balances the color and gradient terms, and τ_1, τ_2 are truncation thresholds;
(c) From the matching cost function, the disparity of each point in the left image is computed as

D_L(i) = argmin_{d∈S_d} C(i, i−d)

where S_d = {d_min, ..., d_max} is the disparity search range; this yields the disparity map D_L of the left image.
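The cost aggregation and winner-take-all search of step (2) can be sketched as follows (a hypothetical implementation assuming grayscale images and a cyclic horizontal gradient for brevity; the patent uses color images, and the function names are not from the patent):

```python
import numpy as np

def grad_x(img):
    """Horizontal gradient (cyclic forward difference, for simplicity)."""
    return img - np.roll(img, 1, axis=1)

def disparity_left(L, R, Fp, d_range, win=2, alpha=0.2, tau1=7.0, tau2=2.0):
    """Winner-take-all disparity for the left image with adaptive weights
    w_ij = exp(-|L_i-L_j|/sigma_c) * exp(-|F'_i-F'_j|/sigma_d),
    where sigma_c, sigma_d are the local medians described in the text."""
    h, w = L.shape
    GL, GR = grad_x(L), grad_x(R)
    D = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - win), min(h, y + win + 1))
            xs = slice(max(0, x - win), min(w, x + win + 1))
            dc = np.abs(L[ys, xs] - L[y, x])          # color differences
            dd = np.abs(Fp[ys, xs] - Fp[y, x])        # depth differences
            sc = max(np.median(dc), 1e-6)             # sigma_c (median rule)
            sd = max(np.median(dd), 1e-6)             # sigma_d (median rule)
            wgt = np.exp(-dc / sc) * np.exp(-dd / sd)
            best_d, best_c = d_range[0], np.inf
            for d in d_range:
                # right image sampled at x - d (wrap-around roll for brevity)
                Rs = np.roll(R, d, axis=1)
                Gs = np.roll(GR, d, axis=1)
                e = alpha * np.minimum(np.abs(L[ys, xs] - Rs[ys, xs]), tau1) \
                  + (1 - alpha) * np.minimum(np.abs(GL[ys, xs] - Gs[ys, xs]), tau2)
                c = np.sum(wgt * e) / np.sum(wgt)     # aggregated cost C(i, i-d)
                if c < best_c:
                    best_c, best_d = c, d
            D[y, x] = best_d                          # winner-take-all
    return D
```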
Step (3): taking the right image as the reference image and the left image as the target image, compute the disparity of each point in the right image, and use the resulting right disparity map D_R to perform a left-right occlusion check on D_L.
When the right image is the reference and the left image is the target, F′ (which is registered to the left image) cannot be used to construct the matching weights; therefore, with the right image as reference, the local matching weights are constructed as follows:
w_ij = exp(−‖x_i − x_j‖/σ_s) · exp(−‖R_i − R_j‖/σ_c), j∈Ω(i)

where j∈Ω(i) means that j lies within the local window of i, x_i denotes the spatial coordinates of point i, and the parameters σ_s and σ_c are selected as follows:
① σ_s is set to the radius of the local window;
② compute the color differences between the center point and the other points in the current local window of R, and take the median of these differences as the current σ_c;
Using the local matching weights computed above, the matching cost function between the right image R and the left image L is constructed, and the disparity map with the right image as reference is computed and denoted D_R;
D_R is then used to perform the left-right occlusion check on D_L: a disparity point D_L(i) is marked as passing the check if it satisfies

|D_L(i) − D_R(i − D_L(i))| ≤ 1
The set S denotes all points in D_L that pass the left-right occlusion check.
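A sketch of the left-right occlusion check, assuming rectified images so that the corresponding point of left pixel (y, x) is right pixel (y, x − D_L(y, x)); the tolerance value of 1 is an assumption, since the patent only states the consistency test itself:

```python
import numpy as np

def lr_check(DL, DR, thresh=1):
    """Mark left-image pixels whose disparity is consistent with the right
    disparity map: |D_L(x) - D_R(x - D_L(x))| <= thresh. Pixels whose
    corresponding point falls outside the image fail the check."""
    h, w = DL.shape
    S = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - DL[y, x]             # corresponding column in the right map
            if 0 <= xr < w and abs(DL[y, x] - DR[y, xr]) <= thresh:
                S[y, x] = True
    return S
```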
Step (4): fuse the depth image F′ obtained by filtering with the disparity image D_L obtained by matching to produce the final high-resolution depth image.
(a) F′ and D_L are fused as

F_i = b·f / D_L(i) if i ∈ S; F_i = F′_i otherwise

where b is the baseline between the left and right cameras and f is the focal length of the CCD cameras;
(b) Local filtering weights are constructed to refine the fused depth image:
This is based on the following two observations:
① two regions with similar colors in the color image are very likely to have similar depth values;
② the surfaces of real scenes are essentially piecewise smooth;
The local filtering weight is decomposed into three parts, expressing color similarity, spatial similarity, and depth similarity, and is constructed as

w_ij = exp(−‖L_i − L_j‖/σ_c) · exp(−‖x_i − x_j‖/σ_s) · exp(−|F_i − F_j|/σ_d), j∈Ω(i)

where j∈Ω(i) means that j lies within the local window of i, and the parameters σ_c, σ_s, and σ_d are selected as follows:
① σ_s is set to the radius of the local window;
② compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current σ_c;
③ compute the depth differences between the center point and the other points in the current local window of F, and take the median of these differences as the current σ_d;
With the local filtering weights w_ij obtained above, the fusion result F is filtered as

F*_i = Σ_{j∈Ω(i)} w_ij · F_j / Σ_{j∈Ω(i)} w_ij
This yields the final high-resolution depth image, denoted F*, of size rn×rm.
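The fusion and refinement of step (4) can be sketched as follows (a hypothetical implementation: grayscale guidance, a hard replacement of F′ by b·f/d at pixels passing the check, and the median rule for σ_c, σ_d with σ_s equal to the window radius; the function name and defaults are illustrative):

```python
import numpy as np

def fuse_and_refine(Fp, DL, S, L, b, f, win=2):
    """Fuse the filtered ToF depth F' with stereo disparities
    (depth = b*f/d at pixels in S), then refine with trilateral weights
    w_ij = exp(-|color|/sc) * exp(-|dist|/ss) * exp(-|depth|/sd)."""
    F = Fp.copy()
    valid = S & (DL > 0)
    F[valid] = b * f / DL[valid]          # disparity-to-depth conversion
    h, w = F.shape
    ss = float(win)                        # sigma_s = local window radius
    out = np.zeros_like(F)
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - win), min(h, y + win + 1))
            xs = slice(max(0, x - win), min(w, x + win + 1))
            dc = np.abs(L[ys, xs] - L[y, x])          # color similarity term
            dd = np.abs(F[ys, xs] - F[y, x])          # depth similarity term
            yy, xx = np.mgrid[ys, xs]
            ds = np.sqrt((yy - y) ** 2 + (xx - x) ** 2)  # spatial term
            sc = max(np.median(dc), 1e-6)
            sd = max(np.median(dd), 1e-6)
            wgt = np.exp(-dc / sc) * np.exp(-ds / ss) * np.exp(-dd / sd)
            out[y, x] = np.sum(wgt * F[ys, xs]) / np.sum(wgt)
    return out
```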
By combining the low-resolution depth image acquired by a time-of-flight depth camera with high-resolution left and right color stereo images of the same scene, the present invention can reconstruct a high-quality, high-resolution depth image.
According to a first aspect of the present invention, a method for constructing a non-local weight filter is disclosed.
According to a second aspect of the present invention, a method for constructing local stereo matching weights in combination with an initial depth image is disclosed.
According to a third aspect of the present invention, a method for fusing the depth image acquired by a time-of-flight depth camera with the disparity image computed by a stereo matching algorithm is disclosed.
According to a fourth aspect of the present invention, the overall flow of a high-resolution depth image reconstruction method combining a stereo image pair of the same scene is disclosed. It mainly comprises: constructing non-local filtering weights to filter the depth image acquired by the time-of-flight depth camera; constructing local stereo matching weights to compute a disparity image based on a stereo matching algorithm; and finally fusing the two to obtain a high-resolution, high-quality depth image.
Beneficial effects of the invention: the method combines the respective advantages of stereo matching technology and the time-of-flight depth camera and, through a fusion scheme, overcomes the deficiencies of the prior art, reconstructing a high-quality, high-resolution depth image.
Specific Implementation Steps
Step (1): construct non-local filtering weights and filter the input low-resolution depth image.
Let G denote the low-resolution depth image acquired by the time-of-flight depth camera, of size n×m; let L denote the high-resolution color left image of the same scene acquired by the left CCD camera, of size rn×rm; and let R denote the corresponding high-resolution color right image acquired by the right CCD camera, of size rn×rm.
The non-local filtering weights are constructed as

w_ij = exp(−‖M_i^L − M_j^L‖² / (K·h_L²)) · exp(−‖M_i^I − M_j^I‖² / (K·h_I²))

where I is the result of bilinearly interpolating the input low-resolution depth image to size rn×rm, M_i^L denotes the local patch centered at L_i, M_i^I denotes the local patch centered at I_i, and K is the number of pixels in a patch; the patch size is chosen as 5×5, the parameter h_L as 15, and the parameter h_I as 20.
Using w_ij, the bilinear interpolation result I is filtered according to

F′_i = Σ_j w_ij · I_j / Σ_j w_ij
The resulting initial high-resolution depth image is denoted F′, of size rn×rm.
Step (2): taking the left image as the reference image and the right image as the target image, combine the initial high-resolution depth image F′ to construct local stereo matching weights and compute the disparity map D_L of the left image.
(a) Combining with F′, the local stereo matching weights are constructed as

w_ij = exp(−‖L_i − L_j‖/σ_c) · exp(−|F′_i − F′_j|/σ_d), j∈Ω(i)

where j∈Ω(i) means that j lies within the local window of i; the local window size is chosen as 9×9, and the parameters σ_c and σ_d are selected as follows:
① compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current σ_c;
② compute the depth differences between the center point and the other points in the current local window of F′, and take the median of these differences as the current σ_d;
(b) The matching cost function between the left image L and the right image R is constructed as

C(i, i−d) = Σ_{j∈Ω(i)} w_ij · e(j, j−d) / Σ_{j∈Ω(i)} w_ij

where the matching cost C(i, i−d) expresses, when point i in the left image has disparity d, the degree of match between i and its corresponding point i−d in the right image; j∈Ω(i) means that j lies within the local window of i, and j−d lies within the local window of i−d. The color and gradient differences between corresponding points are used to define the matching error:

e(j, j−d) = α · min(‖L_j − R_{j−d}‖, τ_1) + (1−α) · min(‖∇L_j − ∇R_{j−d}‖, τ_2)
where the parameter α, which balances the color and gradient terms, is chosen as 0.2, and the truncation thresholds τ_1 and τ_2 are chosen as 7 and 2, respectively;
(c) From the matching cost function, the disparity of each point in the left image is computed as

D_L(i) = argmin_{d∈S_d} C(i, i−d)

where S_d = {d_min, ..., d_max} is the disparity search range; this yields the disparity map D_L of the left image.
Step (3): taking the right image as the reference image and the left image as the target image, compute the disparity map D_R of the right image and perform a left-right occlusion check on D_L.
When the right image is the reference and the left image is the target, F′ (which is registered to the left image) cannot be used to construct the matching weights; therefore, with the right image as reference, the local matching weights are constructed as follows:
w_ij = exp(−‖x_i − x_j‖/σ_s) · exp(−‖R_i − R_j‖/σ_c), j∈Ω(i)

where j∈Ω(i) means that j lies within the local window of i, x_i denotes the spatial coordinates of point i; the local window size is chosen as 9×9, and the parameters σ_s and σ_c are selected as follows:
① σ_s is chosen as 4;
② compute the color differences between the center point and the other points in the current local window of R, and take the median of these differences as the current σ_c;
Using the local stereo matching weights computed above, the matching cost function between the right image R and the left image L is constructed, and the disparity map with the right image as reference is computed and denoted D_R;
D_R is then used to perform the left-right occlusion check on D_L: a disparity point D_L(i) is marked as passing the check if it satisfies

|D_L(i) − D_R(i − D_L(i))| ≤ 1
The set S denotes all points in D_L that pass the left-right occlusion check.
Step (4): fuse the depth image F′ obtained by filtering with the disparity image D_L obtained by matching to produce the final high-resolution depth image.
(a) F′ and D_L are fused as

F_i = b·f / D_L(i) if i ∈ S; F_i = F′_i otherwise

where b is the baseline between the left and right cameras and f is the focal length of the CCD cameras;
(b) Local filtering weights are constructed to refine the fused depth image:
The local filtering weights are constructed as

w_ij = exp(−‖L_i − L_j‖/σ_c) · exp(−‖x_i − x_j‖/σ_s) · exp(−|F_i − F_j|/σ_d), j∈Ω(i)

where j∈Ω(i) means that j lies within the local window of i; the local window size is chosen as 9×9, and the parameters σ_c, σ_s, and σ_d are selected as follows:
① σ_s is chosen as 4;
② compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current σ_c;
③ compute the depth differences between the center point and the other points in the current local window of F, and take the median of these differences as the current σ_d;
With the local filtering weights w_ij obtained above, the fusion result F is filtered as

F*_i = Σ_{j∈Ω(i)} w_ij · F_j / Σ_{j∈Ω(i)} w_ij
This yields the final high-resolution depth image F*, of size rn×rm.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410161575.1A CN103955954B (en) | 2014-04-21 | 2014-04-21 | Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103955954A true CN103955954A (en) | 2014-07-30 |
CN103955954B CN103955954B (en) | 2017-02-08 |
Family
ID=51333223
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410161575.1A Active CN103955954B (en) | 2014-04-21 | 2014-04-21 | Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103955954B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184780A (en) * | 2015-08-26 | 2015-12-23 | 京东方科技集团股份有限公司 | Prediction method and system for stereoscopic vision depth |
CN105869167A (en) * | 2016-03-30 | 2016-08-17 | 天津大学 | High-resolution depth map acquisition method based on active and passive fusion |
WO2016172960A1 (en) * | 2015-04-30 | 2016-11-03 | SZ DJI Technology Co., Ltd. | System and method for enhancing image resolution |
CN106408513A (en) * | 2016-08-25 | 2017-02-15 | 天津大学 | Super-resolution reconstruction method of depth map |
CN106774309A (en) * | 2016-12-01 | 2017-05-31 | 天津工业大学 | A kind of mobile robot is while visual servo and self adaptation depth discrimination method |
CN106911888A (en) * | 2015-12-23 | 2017-06-30 | 意法半导体(R&D)有限公司 | A kind of device |
CN107749060A (en) * | 2017-09-28 | 2018-03-02 | 深圳市纳研科技有限公司 | Machine vision equipment and based on flying time technology three-dimensional information gathering algorithm |
CN108876836A (en) * | 2018-03-29 | 2018-11-23 | 北京旷视科技有限公司 | A kind of depth estimation method, device, system and computer readable storage medium |
CN109061658A (en) * | 2018-06-06 | 2018-12-21 | 天津大学 | Laser radar data melts method |
CN109791697A (en) * | 2016-09-12 | 2019-05-21 | 奈安蒂克公司 | Using statistical model from image data predetermined depth |
WO2022105615A1 (en) * | 2020-11-19 | 2022-05-27 | 中兴通讯股份有限公司 | 3d depth map construction method and apparatus, and ar glasses |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101430796A (en) * | 2007-11-06 | 2009-05-13 | 三星电子株式会社 | Image generating method and apparatus |
CN102387374A (en) * | 2010-08-30 | 2012-03-21 | 三星电子株式会社 | Device and method for acquiring high-precision depth map |
CN102867288A (en) * | 2011-07-07 | 2013-01-09 | 三星电子株式会社 | Depth image conversion apparatus and method |
CN103167306A (en) * | 2013-03-22 | 2013-06-19 | 上海大学 | A device and method for real-time extraction of high-resolution depth maps based on image matching |
CN103337069A (en) * | 2013-06-05 | 2013-10-02 | 余洪山 | A high-quality three-dimensional color image acquisition method based on a composite video camera and an apparatus thereof |
CN103440664A (en) * | 2013-09-05 | 2013-12-11 | Tcl集团股份有限公司 | Method, system and computing device for generating high-resolution depth map |
Non-Patent Citations (2)
Title |
---|
EUN-KYUNG LEE 等: "Generation of high-quality depth maps using hybrid camera system for 3-D video", 《JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION》 * |
杨宇翔 等: "基于彩色图像局部结构特征的深度图超分辨率算法", 《模式识别与人工智能》 * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11249173B2 (en) | 2015-04-30 | 2022-02-15 | SZ DJI Technology Co., Ltd. | System and method for enhancing image resolution |
WO2016172960A1 (en) * | 2015-04-30 | 2016-11-03 | SZ DJI Technology Co., Ltd. | System and method for enhancing image resolution |
US10488500B2 (en) | 2015-04-30 | 2019-11-26 | SZ DJI Technology Co., Ltd. | System and method for enhancing image resolution |
CN107534764A (en) * | 2015-04-30 | 2018-01-02 | 深圳市大疆创新科技有限公司 | Strengthen the system and method for image resolution ratio |
US9936189B2 (en) | 2015-08-26 | 2018-04-03 | Boe Technology Group Co., Ltd. | Method for predicting stereoscopic depth and apparatus thereof |
CN105184780B (en) * | 2015-08-26 | 2018-06-05 | 京东方科技集团股份有限公司 | A kind of Forecasting Methodology and system of stereoscopic vision depth |
CN105184780A (en) * | 2015-08-26 | 2015-12-23 | 京东方科技集团股份有限公司 | Prediction method and system for stereoscopic vision depth |
CN106911888A (en) * | 2015-12-23 | 2017-06-30 | 意法半导体(R&D)有限公司 | A kind of device |
CN105869167A (en) * | 2016-03-30 | 2016-08-17 | 天津大学 | High-resolution depth map acquisition method based on active and passive fusion |
CN106408513B (en) * | 2016-08-25 | 2019-10-18 | 天津大学 | Depth Map Super-Resolution Reconstruction Method |
CN106408513A (en) * | 2016-08-25 | 2017-02-15 | 天津大学 | Super-resolution reconstruction method of depth map |
CN109791697A (en) * | 2016-09-12 | 2019-05-21 | 奈安蒂克公司 | Using statistical model from image data predetermined depth |
CN109791697B (en) * | 2016-09-12 | 2023-10-13 | 奈安蒂克公司 | Predicting depth from image data using statistical models |
CN106774309A (en) * | 2016-12-01 | 2017-05-31 | 天津工业大学 | A kind of mobile robot is while visual servo and self adaptation depth discrimination method |
CN106774309B (en) * | 2016-12-01 | 2019-09-17 | 天津工业大学 | A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously |
CN107749060A (en) * | 2017-09-28 | 2018-03-02 | 深圳市纳研科技有限公司 | Machine vision equipment and based on flying time technology three-dimensional information gathering algorithm |
CN108876836A (en) * | 2018-03-29 | 2018-11-23 | 北京旷视科技有限公司 | A kind of depth estimation method, device, system and computer readable storage medium |
CN109061658A (en) * | 2018-06-06 | 2018-12-21 | 天津大学 | Laser radar data melts method |
CN109061658B (en) * | 2018-06-06 | 2022-06-21 | 天津大学 | Laser radar data fusion method |
WO2022105615A1 (en) * | 2020-11-19 | 2022-05-27 | 中兴通讯股份有限公司 | 3d depth map construction method and apparatus, and ar glasses |
Also Published As
Publication number | Publication date |
---|---|
CN103955954B (en) | 2017-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103955954B (en) | Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene | |
CN104504671B (en) | Method for generating virtual-real fusion image for stereo display | |
CN101877143B (en) | Three-dimensional scene reconstruction method of two-dimensional image group | |
CN104539925B (en) | The method and system of three-dimensional scenic augmented reality based on depth information | |
TWI497980B (en) | System and method of processing 3d stereoscopic images | |
CN107945269A (en) | Complicated dynamic human body object three-dimensional rebuilding method and system based on multi-view point video | |
JP2019526878A5 (en) | ||
CN103426200B (en) | Tree three-dimensional reconstruction method based on unmanned aerial vehicle aerial photo sequence image | |
CN108932725B (en) | Scene flow estimation method based on convolutional neural network | |
CN110462685B (en) | Three-dimensional model reconstruction method and system | |
CN105761270B (en) | A kind of tree-shaped filtering solid matching method based on EP point range conversion | |
CN106447661A (en) | Rapid depth image generating method | |
CN113711276A (en) | Scale-aware monocular positioning and mapping | |
KR101714224B1 (en) | 3 dimension image reconstruction apparatus and method based on sensor fusion | |
CN104065947A (en) | A Depth Map Acquisition Method for Integrated Imaging System | |
CN111798505B (en) | Dense point cloud reconstruction method and system for triangularized measurement depth based on monocular vision | |
CN111325782A (en) | Unsupervised monocular view depth estimation method based on multi-scale unification | |
CN104794717A (en) | Depth information comparison method based on binocular vision system | |
CN108645375A (en) | One kind being used for vehicle-mounted biocular systems rapid vehicle distance measurement optimization method | |
CN111105451A (en) | A Binocular Depth Estimation Method for Driving Scenes Overcoming Occlusion Effect | |
CN107977938A (en) | A kind of Kinect depth image restorative procedure based on light field | |
CN102542541A (en) | Deep image post-processing method | |
CN107452037B (en) | GPS auxiliary information acceleration-based structure recovery method from movement | |
Zhou et al. | Single-view view synthesis with self-rectified pseudo-stereo | |
CN110148168B (en) | Three-eye camera depth image processing method based on size double baselines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |