CN108038887A - Depth contour estimation method based on a binocular RGB-D camera - Google Patents

Depth contour estimation method based on a binocular RGB-D camera Download PDF

Info

Publication number
CN108038887A
CN108038887A (application CN201711311829.3A)
Authority
CN
China
Prior art keywords
depth
image
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711311829.3A
Other languages
Chinese (zh)
Other versions
CN108038887B (en)
Inventor
杨敬钰
蔡常瑞
柏井慧
侯春萍
李坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201711311829.3A priority Critical patent/CN108038887B/en
Publication of CN108038887A publication Critical patent/CN108038887A/en
Application granted granted Critical
Publication of CN108038887B publication Critical patent/CN108038887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/564Depth or shape recovery from multiple images from contours

Abstract

The invention belongs to the field of computer vision and proposes a method for generating a high-quality depth contour estimate. The technical scheme adopted by the invention is a depth contour estimation method based on a binocular RGB-D camera, where RGB-D denotes a colour image plus a depth image. First, low-resolution depth edge information is obtained; then, through camera calibration and image registration, a high-resolution scatter map of the depth edges is obtained, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the colour image edges, the depth contour is corrected and optimised to generate the final depth contour image. The invention is mainly applied to computer vision scenarios.

Description

Depth contour estimation method based on a binocular RGB-D camera
Technical field
The invention belongs to the field of computer vision, and more particularly relates to a depth contour estimation algorithm based on a binocular RGB-D camera.
Background technology
Depth acquisition is a focus of attention in both industry and academia. At present, many methods exist for acquiring high-quality depth images, broadly divided into two classes. The first class is passive acquisition, for example stereo matching, 2D-to-3D conversion techniques, and colour camera arrays. However, these methods are all inference-based: they estimate depth from the structural information of colour images rather than measuring it directly, and therefore often produce erroneous depth estimates. The second class is active acquisition, i.e. capturing depth images directly. With the appearance of measurable depth cameras such as Kinect and time-of-flight (ToF) cameras, people increasingly prefer to acquire the depth information of a scene directly with a depth camera. Kinect is the XBOX360 motion-sensing peripheral that Microsoft officially announced at E3 on June 2nd, 2009. This approach not only improves the quality and completeness of scene information, but also greatly reduces the workload of acquiring 3D content. Various depth cameras now exist: Microsoft released the first-generation Kinect depth camera in 2010, and more recently updated it with the second-generation depth camera Kinect v2. Unlike the first-generation Kinect, which is based on the structured-light imaging principle, Kinect v2 uses time-of-flight (ToF) technology and can obtain depth images of higher accuracy than the first-generation Kinect; nevertheless, problems such as systematic error, low resolution, noise, and missing depth values still remain. To address these problems, many depth inpainting algorithms for depth image reconstruction now exist, including depth-image reconstruction models based on global optimisation and filtering-based depth enhancement algorithms, for example the Markov random field (MRF) model, the total variation (TV) model, guided filtering, and cross-based local multipoint filtering.
These methods can obtain depth maps of better quality; however, when a large area of depth is missing in the depth image, their performance falls short of optimal, and problems such as blurred edges and erroneous depth estimates easily arise, so depth inpainting algorithms still need further improvement. Moreover, these methods target only single-viewpoint depth images; for 3D display systems that require multi-view colour-plus-depth image pairs, they lack effectiveness and applicability.
For the multi-view imaging task, Ye Xinchen et al. proposed a method to realise multi-view imaging with a first-generation Kinect; Zhu et al. built a multi-view camera system with one ToF camera and two colour cameras to obtain high-quality depth images; and Choi et al. likewise established a multi-view system to upsample and inpaint low-resolution depth images. However, these works mostly focus on the accuracy of depth acquisition without considering the correlation between the viewpoints in the system, or use only a simple fusion scheme to merge the images of different viewpoints. It is therefore necessary to further analyse and refine the characterisation of the binocular acquisition system and to improve the fusion scheme in order to achieve high-quality depth recovery.
The content of the invention
The invention is intended to make up for the deficiencies of the prior art, namely to provide a method for generating a high-quality depth contour estimate. The technical scheme adopted by the invention is a depth contour estimation method based on a binocular RGB-D camera, where RGB-D denotes a colour image plus a depth image. First, low-resolution depth edge information is obtained; then, through camera calibration and image registration, a high-resolution scatter map of the depth edges is obtained, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the colour image edges, the depth contour is corrected and optimised to generate the final depth contour image.
Further, the method comprises the following steps:
1) Obtain the low-resolution depth edge information
Denoising and hole filling are first applied to the original depth image, with filtering and bicubic interpolation serving as the preprocessing operations; the depth edges are then extracted with the Canny detection operator.
2) Generate the high-resolution scatter map of the depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, image registration is performed on the original depth image so that it has the same resolution as the colour image, yielding the high-resolution depth-edge scatter map.
3) Generate the high-resolution continuous depth contour
For the pixel position x of an edge point in the colour edge image, the edge information within its neighbourhood N(x) is converted to a one-dimensional representation of coordinate pairs, giving the coordinate scatter set X = {x_i} and the corresponding values f(x_i) of the point set X; then, for each position x to be interpolated, the weighted least-squares error of its fitting function p(·) is minimised:

\sum_{x_i \in X} \left( p(x_i) - f(x_i) \right)^2 \theta\left( \| x - x_i \| \right) \qquad (1)

where θ(·) denotes a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After the MLS fitting, the inverse transform is applied, i.e. the one-dimensional representation is converted back to two dimensions, giving the continuous depth contour image.
4) Generate the high-resolution depth image D_L
A nonlinear transformation is used in combination with the Canny detection operator to extract the colour image edges, avoiding the excessive fine texture edges that appear when the Canny operator is used alone; the registration of the depth image is completed, yielding the high-resolution depth image D_L.
5) Combine the depth scatter map with the colour edges to correct and optimise the depth contour:

E_d(x) = R\left( N_d(x) \right) G\left( \left\| \nabla T\left( N_{E_d}(x) \right) - \nabla T\left( N_{E_c}(x) \right) \right\| \right) \qquad (2)

where x is the pixel position of an edge point in the colour edge image, and N_d(x), N_{E_d}(x), N_{E_c}(x) are the neighbourhood regions of that position in the main-viewpoint depth image D_L, the high-resolution depth contour image, and the colour edge image E_c, respectively; T(·) denotes the mapping function that converts two dimensions to one; G(·) is a Gaussian kernel representing the colour-edge constraint term; and ∇ is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the colour edge at the same pixel position have the same variation trend, i.e. curvature, then the pixel is considered likely to be a depth contour point. Here R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as:

where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighbourhood N_d(x), and N_th1 and D_th1 are respectively the thresholds on the total number of valid depth values and on the difference of mean depths. The constraint term expresses that, among the four neighbourhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar mean depth, the neighbourhood is regarded as a depth-smooth region, i.e. no depth contour exists within it, so the pixel position should carry no depth contour information, i.e. E_d(x) = 0.
Technical features and effects of the invention:
The method of the invention addresses the low quality of depth contour estimates. The depth edges extracted from the low-resolution depth map serve as the initial depth contour; after the viewpoint transformation, and with reference to the colour edges, the depth contour scatter points are connected and reconstructed with the moving least squares (MLS) method, finally yielding a high-resolution, connected, smooth depth contour. The invention has the following characteristics:
1. It makes full use of the advantages of a binocular system, providing more information for reference.
2. It is the first to propose using the depth edges of the low-resolution depth map as the initial depth contour.
3. With reference to the colour edges, the depth contour scatter points are connected using MLS.
Brief description of the drawings
Fig. 1 is the algorithm flow chart, showing the low-resolution depth edge information, the high-resolution scatter map of the depth edges, the high-resolution continuous depth contour, the colour edge image E_c, and the final depth contour estimate E_d.
Fig. 2 is a joint display, after calibration, of the colour edges (red), the depth edges (green), and the main-viewpoint depth image (blue);
Fig. 3 shows the high-resolution depth contour estimation result.
Embodiment
A main-viewpoint colour-depth image pair is used as the input information. First, low-resolution depth edge information is obtained; then, through camera calibration and image registration, a high-resolution scatter map of the depth edges is obtained, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the colour image edges, the depth contour is corrected and optimised to generate the final depth contour image (the experimental flow chart is shown in Fig. 1). The invention is elaborated below in conjunction with the accompanying drawings and an embodiment:
1) Obtain the low-resolution depth edge information
Because the original depth image contains noise and missing depth values, denoising and hole filling must first be applied to it, with filtering and bicubic interpolation serving as the preprocessing operations; the depth edges are then extracted with the Canny detection operator.
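As a rough sketch of this preprocessing step (not the patent's exact procedure), the fragment below fills missing (zero) depth pixels with the median of their valid neighbours and then extracts an edge map by thresholding the gradient magnitude; the window size and threshold are illustrative assumptions, and the gradient threshold stands in for the full Canny operator, just as the median fill stands in for the filtering and bicubic-interpolation preprocessing.

```python
import numpy as np

def fill_holes(depth, kernel=3):
    """Fill zero (missing) depth pixels with the median of the valid
    values in a small window; a stand-in for the denoising and
    hole-filling preprocessing described above."""
    h, w = depth.shape
    pad = kernel // 2
    padded = np.pad(depth, pad, mode="edge")
    out = depth.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] == 0:
                win = padded[y:y + kernel, x:x + kernel]
                valid = win[win > 0]
                if valid.size:
                    out[y, x] = np.median(valid)
    return out

def edge_map(depth, thresh=10.0):
    """Gradient-magnitude edges (a simple stand-in for Canny)."""
    gy, gx = np.gradient(depth)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# Toy depth map: a step edge plus one simulated depth hole.
d = np.full((8, 8), 100.0)
d[:, 4:] = 200.0
d[3, 2] = 0.0
filled = fill_holes(d)
edges = edge_map(filled)
```

After filling, the only surviving edge responses lie along the depth step, which is the behaviour the low-resolution edge extraction relies on.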
2) Generate the high-resolution scatter map of the depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, image registration is performed on the original depth image so that it has the same resolution as the colour image, yielding the high-resolution depth-edge scatter map.
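The registration described here can be sketched as a standard pinhole reprojection: each valid low-resolution depth pixel is back-projected with the depth camera's intrinsics and projected into the colour camera, producing a sparse scatter map at colour resolution. All camera parameters below are made-up illustrative values, not calibration results from the patent.

```python
import numpy as np

def reproject_depth(depth, K_d, K_c, R, t, out_shape):
    """Warp valid low-resolution depth pixels into the colour camera's
    image plane, yielding a sparse high-resolution depth scatter map.
    K_d, K_c: 3x3 intrinsics; (R, t): depth-to-colour extrinsics."""
    sparse = np.zeros(out_shape)
    K_d_inv = np.linalg.inv(K_d)
    h, w = depth.shape
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:                                  # skip missing depth
                continue
            p = z * (K_d_inv @ np.array([u, v, 1.0]))   # 3-D point, depth frame
            q = K_c @ (R @ p + t)                       # project into colour cam
            uc, vc = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= vc < out_shape[0] and 0 <= uc < out_shape[1]:
                sparse[vc, uc] = z
    return sparse

# Illustrative setup: colour camera has twice the focal length and resolution.
K_d = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
K_c = np.array([[200.0, 0, 64], [0, 200.0, 48], [0, 0, 1]])
depth = np.full((48, 64), 1000.0)
sparse = reproject_depth(depth, K_d, K_c, np.eye(3), np.zeros(3), (96, 128))
```

With identity extrinsics and doubled intrinsics, each depth pixel (u, v) lands at (2u, 2v), so only a quarter of the high-resolution grid is populated: exactly the "scatter" structure that the next step interpolates.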
3) Generate the high-resolution continuous depth contour (green in Fig. 2)
For the pixel position x of an edge point in the colour edge image, the edge information within its neighbourhood N(x) is converted to a one-dimensional representation of coordinate pairs, giving the coordinate scatter set X = {x_i} and the corresponding values f(x_i) of the point set X; then, for each position x to be interpolated, the weighted least-squares error of its fitting function p(·) is minimised:

\sum_{x_i \in X} \left( p(x_i) - f(x_i) \right)^2 \theta\left( \| x - x_i \| \right) \qquad (1)

where θ(·) denotes a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After the MLS fitting, the inverse transform is applied, i.e. the one-dimensional representation is converted back to two dimensions, giving the continuous depth contour image.
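The weighted least-squares fit of formula (1) can be sketched in one dimension: at each query position a low-degree polynomial p is fitted with a distance weight θ and then evaluated at that position. The Gaussian choice of θ, the bandwidth h, and the polynomial degree below are illustrative assumptions rather than the patent's stated settings.

```python
import numpy as np

def mls_fit(xs, fs, x_query, h=1.0, degree=2):
    """Moving least squares: minimise sum_i (p(x_i) - f(x_i))^2 *
    theta(|x - x_i|) over polynomials p of the given degree, then
    return p(x_query).  theta is a Gaussian weight of bandwidth h."""
    w = np.exp(-((xs - x_query) ** 2) / (2 * h ** 2))   # theta(||x - x_i||)
    A = np.vander(xs - x_query, degree + 1)             # polynomial basis, centred
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], fs * sw, rcond=None)
    return coef[-1]   # constant term = value of centred polynomial at x_query

# Noisy samples of a smooth curve, standing in for 1-D edge coordinates.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 4.0, 41)
fs = xs ** 2 + rng.normal(0.0, 0.01, xs.size)
y = mls_fit(xs, fs, 2.0)   # should recover f(2) = 4 up to the noise level
```

Evaluating `mls_fit` over a dense set of query positions yields the continuous curve; mapping that one-dimensional representation back to image coordinates corresponds to the inverse transform mentioned above.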
4) Generate the high-resolution depth image D_L (blue in Fig. 2)
A nonlinear transformation is used in combination with the Canny detection operator to extract the colour image edges, avoiding the excessive fine texture edges that appear when the Canny operator is used alone; the registration of the depth image is completed, yielding the high-resolution depth image D_L.
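A minimal sketch of pairing a nonlinear transform with edge detection follows. The patent does not specify the transform, so a gamma compression is assumed here purely for illustration, and a gradient-magnitude threshold again stands in for Canny; the point shown is that faint texture stays below the edge threshold while strong object boundaries survive.

```python
import numpy as np

def color_edges(gray, gamma=0.5, thresh=0.1):
    """Apply a nonlinear (gamma) transform to the normalised image,
    then take gradient-magnitude edges (a stand-in for Canny)."""
    g = (gray / 255.0) ** gamma
    gy, gx = np.gradient(g)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# Toy image: a strong object boundary plus one faint texture pixel.
img = np.full((10, 10), 100.0)
img[:, 5:] = 200.0          # strong boundary between columns 4 and 5
img[2, 2] = 105.0           # faint texture that should NOT become an edge
edges = color_edges(img)
```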
This is because: 1) the colour edge image accurately describes the depth contour information of the scene, while also containing some redundant edge information that does not belong to the depth contour; 2) for regions containing redundant edge information, the depth values in the corresponding main-viewpoint depth image D_L are typically smooth, i.e. such regions are depth-smooth regions of the depth image. Although the depth map is in scatter form and its valid depth values are dispersed, making it difficult to compute the 4-neighbourhood or 8-neighbourhood depth values of a pixel directly, the depth image can still provide a degree of constraint for removing the redundant edge information in the colour edge image; 3) compared with the colour edge image, the depth contour obtained through MLS interpolation is inaccurate, but it still has the same variation trend (i.e. curvature) as the real depth contour, which is vital for the depth contour correction and optimisation of the next step. Based on these properties, this work further proposes a depth contour correction and optimisation method that combines the depth scatter map with the colour edges.
5) Combine the depth scatter map with the colour edges to correct and optimise the depth contour, generating the high-resolution depth contour image (Fig. 3):

E_d(x) = R\left( N_d(x) \right) G\left( \left\| \nabla T\left( N_{E_d}(x) \right) - \nabla T\left( N_{E_c}(x) \right) \right\| \right) \qquad (2)

where x is the pixel position of an edge point in the colour edge image, and N_d(x), N_{E_d}(x), N_{E_c}(x) are the neighbourhood regions of that position in the main-viewpoint depth image D_L, the high-resolution depth contour image, and the colour edge image E_c (red in Fig. 2), respectively; T(·) denotes the mapping function that converts two dimensions to one; G(·) is a Gaussian kernel representing the colour-edge constraint term; and ∇ is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the colour edge at the same pixel position have the same variation trend (i.e. curvature), then the pixel is considered likely to be a depth contour point. Here R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as:

where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighbourhood N_d(x), and N_th1 and D_th1 are respectively the thresholds on the total number of valid depth values and on the difference of mean depths. The constraint term expresses that, among the four neighbourhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar mean depth, the neighbourhood is regarded as a depth-smooth region, i.e. no depth contour exists within it, so the pixel position should carry no depth contour information, i.e. E_d(x) = 0. This constraint effectively removes the redundant edge information in the colour edge image, avoids the erroneous guidance brought by redundant edges, reduces the generation of pseudo-colour, and produces continuous, smooth depth contour information.
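The joint constraint above can be sketched as follows, reading the text literally: R(·) inspects the four sub-regions (up, down, left, right) of the depth neighbourhood and returns 0 when any two of them have a similar count of valid depths and a similar mean depth (a depth-smooth region), while a Gaussian kernel G compares gradient descriptors of the depth contour and the colour edge. The thresholds, the use of scalar gradient descriptors, and the exact similarity test are illustrative assumptions.

```python
import numpy as np

def R(patch, n_th=3, d_th=5.0):
    """Main-viewpoint depth constraint: split the neighbourhood into
    up/down/left/right halves; if any two halves have a similar number
    of valid depth values and a similar mean depth, treat the
    neighbourhood as depth-smooth and return 0, otherwise 1."""
    h, w = patch.shape
    subs = [patch[:h // 2, :], patch[h // 2:, :],
            patch[:, :w // 2], patch[:, w // 2:]]
    stats = []
    for s in subs:
        valid = s[s > 0]
        stats.append((valid.size, valid.mean() if valid.size else 0.0))
    for i in range(4):
        for j in range(i + 1, 4):
            if (abs(stats[i][0] - stats[j][0]) <= n_th
                    and abs(stats[i][1] - stats[j][1]) <= d_th):
                return 0.0
    return 1.0

def contour_score(depth_patch, grad_d, grad_c, sigma=1.0):
    """E_d(x) = R(N_d(x)) * G(||grad T(N_Ed) - grad T(N_Ec)||):
    high only when the neighbourhood is not depth-smooth AND the depth
    contour and colour edge share the same local variation trend."""
    diff = np.linalg.norm(np.asarray(grad_d) - np.asarray(grad_c))
    return R(depth_patch) * np.exp(-diff ** 2 / (2 * sigma ** 2))

smooth = np.full((6, 6), 100.0)                            # all halves alike
ramp = np.fromfunction(lambda i, j: 10 * i + 40 * j + 1, (6, 6))
```

A uniform patch is rejected by R(·) regardless of how well the gradients agree, while a patch whose four halves all differ keeps a score that then decays with the mismatch between the two gradient descriptors.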

Claims (2)

1. A depth contour estimation method based on a binocular RGB-D camera, characterised in that RGB-D denotes a colour image plus a depth image; first, low-resolution depth edge information is obtained; then, through camera calibration and image registration, a high-resolution scatter map of the depth edges is obtained, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the colour image edges, the depth contour is corrected and optimised to generate the final depth contour image.
2. The depth contour estimation method based on a binocular RGB-D camera as claimed in claim 1, characterised in that the specific steps are as follows:
1) Obtain the low-resolution depth edge information
Denoising and hole filling are first applied to the original depth image, with filtering and bicubic interpolation serving as the preprocessing operations; the depth edges are then extracted with the Canny detection operator.
2) Generate the high-resolution scatter map of the depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, image registration is performed on the original depth image so that it has the same resolution as the colour image, yielding the high-resolution depth-edge scatter map.
3) Generate the high-resolution continuous depth contour
For the pixel position x of an edge point in the colour edge image, the edge information within its neighbourhood N(x) is converted to a one-dimensional representation of coordinate pairs, giving the coordinate scatter set X = {x_i} and the corresponding values f(x_i) of the point set X; then, for each position x to be interpolated, the weighted least-squares error of its fitting function p(·) is minimised:
\sum_{x_i \in X} \left( p(x_i) - f(x_i) \right)^2 \theta\left( \| x - x_i \| \right) \qquad (1)
where θ(·) denotes a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance; after the MLS fitting interpolation, the inverse transform is applied, i.e. the one-dimensional representation is converted back to two dimensions, giving the continuous depth contour image
4) Generate the high-resolution depth image D_L
A nonlinear transformation is used in combination with the Canny detection operator to extract the colour image edges, avoiding the excessive fine texture edges that appear when the Canny operator is used alone; the registration of the depth image is completed, yielding the high-resolution depth image D_L
5) Combine the depth scatter map with the colour edges to correct and optimise the depth contour:
E_d(x) = R\left( N_d(x) \right) G\left( \left\| \nabla T\left( N_{E_d}(x) \right) - \nabla T\left( N_{E_c}(x) \right) \right\| \right) \qquad (2)
where x is the pixel position of an edge point in the colour edge image, and N_d(x), N_{E_d}(x), N_{E_c}(x) are the neighbourhood regions of that position in the main-viewpoint depth image D_L, the high-resolution depth contour image, and the colour edge image E_c, respectively; T(·) denotes the mapping function that converts two dimensions to one; G(·) is a Gaussian kernel representing the colour-edge constraint term; and ∇ is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the colour edge at the same pixel position have the same variation trend, i.e. curvature, then the pixel is considered likely to be a depth contour point, where R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as:
where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighbourhood N_d(x), and N_th1 and D_th1 are respectively the thresholds on the total number of valid depth values and on the difference of mean depths. The constraint term expresses that, among the four neighbourhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar mean depth, the neighbourhood is regarded as a depth-smooth region, i.e. no depth contour exists within it, so the pixel position should carry no depth contour information, i.e. E_d(x)=0.
CN201711311829.3A 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method Active CN108038887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711311829.3A CN108038887B (en) 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711311829.3A CN108038887B (en) 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method

Publications (2)

Publication Number Publication Date
CN108038887A true CN108038887A (en) 2018-05-15
CN108038887B CN108038887B (en) 2021-11-02

Family

ID=62102463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711311829.3A Active CN108038887B (en) 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method

Country Status (1)

Country Link
CN (1) CN108038887B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440664A (en) * 2013-09-05 2013-12-11 Tcl集团股份有限公司 Method, system and computing device for generating high-resolution depth map
CN106162147A (en) * 2016-07-28 2016-11-23 天津大学 Depth recovery method based on binocular Kinect depth camera system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JINGYU YANG 等: "Color-Guided Depth Recovery From RGB-D Data Using an Adaptive Autoregressive Model", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
RYOTARO TAKAOKA 等: "Depth Map Super-Resolution for Cost-Effective RGB-D Camera", 《2015 INTERNATIONAL CONFERENCE ON CYBERWORLDS》 *
叶昕辰 (YE Xinchen): "面向3DTV的深度计算重建" [Depth computation and reconstruction for 3DTV], 《中国博士学位论文全文数据库 信息科技辑》 [China Doctoral Dissertations Full-text Database, Information Science and Technology Series] *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846857A (en) * 2018-06-28 2018-11-20 清华大学深圳研究生院 Measuring method of visual odometry, and visual odometry
TWI725522B (en) * 2018-08-28 2021-04-21 鈺立微電子股份有限公司 Image capture system with calibration function
CN110322411A (en) * 2019-06-27 2019-10-11 Oppo广东移动通信有限公司 Optimization method, terminal and the storage medium of depth image
CN112535870A (en) * 2020-06-08 2021-03-23 张玉奇 Soft cushion supply system and method applying ankle detection
CN112819878A (en) * 2021-01-28 2021-05-18 北京市商汤科技开发有限公司 Depth detection method and device, computer equipment and storage medium
WO2022160586A1 (en) * 2021-01-28 2022-08-04 北京市商汤科技开发有限公司 Depth measurement method and apparatus, computer device, and storage medium
CN113689400A (en) * 2021-08-24 2021-11-23 凌云光技术股份有限公司 Method and device for detecting section contour edge of depth image
CN113689400B (en) * 2021-08-24 2024-04-19 凌云光技术股份有限公司 Method and device for detecting profile edge of depth image section
CN116311079A (en) * 2023-05-12 2023-06-23 探长信息技术(苏州)有限公司 Civil security engineering monitoring method based on computer vision
CN116311079B (en) * 2023-05-12 2023-09-01 探长信息技术(苏州)有限公司 Civil security engineering monitoring method based on computer vision

Also Published As

Publication number Publication date
CN108038887B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN108038887A Depth contour estimation method based on a binocular RGB-D camera
Wang et al. Noise detection and image denoising based on fractional calculus
CN106651938A (en) Depth map enhancement method blending high-resolution color image
Park et al. High-quality depth map upsampling and completion for RGB-D cameras
CN103400366B (en) Based on the dynamic scene depth acquisition methods of fringe structure light
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN103745439B (en) Image magnification method and device
CN101840570A (en) Fast image splicing method
CN103761739B Image registration method based on semi-global energy optimization
CN103440653A (en) Binocular vision stereo matching method
CN107689050B (en) Depth image up-sampling method based on color image edge guide
EP2761591A1 (en) Localising transportable apparatus
CN102005033B (en) Method for suppressing noise by image smoothing
Lindner et al. Sub-pixel data fusion and edge-enhanced distance refinement for 2d/3d images
CN110211169B (en) Reconstruction method of narrow baseline parallax based on multi-scale super-pixel and phase correlation
CN105894521A (en) Sub-pixel edge detection method based on Gaussian fitting
TWI553590B (en) Method and device for retargeting a 3d content
CN110738731B (en) 3D reconstruction method and system for binocular vision
CN106780383B (en) The depth image enhancement method of TOF camera
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN103826032A (en) Depth map post-processing method
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN112862683A (en) Adjacent image splicing method based on elastic registration and grid optimization
CN104537627B (en) A kind of post-processing approach of depth image
Moorfield et al. Bilateral filtering of 3D point clouds for refined 3D roadside reconstructions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant