CN103955954A - Reconstruction method for high-resolution depth image in combination with stereo image pairs of same scene - Google Patents


Info

Publication number
CN103955954A
Authority
CN
China
Prior art keywords
image
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410161575.1A
Other languages
Chinese (zh)
Other versions
CN103955954B (en)
Inventor
杨宇翔
高明煜
何志伟
吴占雄
黄继业
曾毓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201410161575.1A priority Critical patent/CN103955954B/en
Publication of CN103955954A publication Critical patent/CN103955954A/en
Application granted granted Critical
Publication of CN103955954B publication Critical patent/CN103955954B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a reconstruction method for a high-resolution depth image in combination with stereo image pairs of the same scene. More and more applications depend on accurate and rapid observation and analysis of depth images of a real scene. A time-of-flight depth camera can obtain depth images of the scene in real time; however, due to hardware limitations, the collected depth images have low resolution and cannot meet practical application requirements. Stereo matching is a classic method for obtaining depth images; however, due to occlusion between the left and right images and the influence of texture-free regions, stereo matching algorithms have significant limitations in practical applications. The proposed reconstruction method makes full use of the respective advantages of the time-of-flight depth camera and the stereo matching algorithm, combining the time-of-flight depth camera with stereo image pairs of the same scene; it overcomes the defects of the prior art and reconstructs a high-resolution, high-quality depth image.

Description

A high-resolution depth image reconstruction method combining stereo image pairs of the same scene
Technical field
The invention belongs to the field of computer vision, and specifically relates to a high-resolution depth image reconstruction method combining stereo image pairs of the same scene.
Background technology
Obtaining depth images of a scene is a vital task in computer vision. Each pixel value in a depth image represents the distance between a point in the scene and the camera. More and more applications, such as three-dimensional reconstruction, collision detection, gesture recognition, robot navigation, industrial automation, and the modeling of virtual scenes in film and games, depend on accurate observation and analysis of real-scene depth images. At present, the main means of obtaining depth images are: 1) computing the depth image by stereo matching; 2) obtaining the depth image by direct measurement instruments.
A stereo matching algorithm obtains the depth image of a scene by computing the disparity relation of corresponding points between the left and right images, and is a classic approach to scene depth acquisition. However, because of occlusion between the left and right images and the influence of texture-free regions, stereo matching algorithms have significant limitations in practical applications. The time-of-flight depth camera is the main device for directly measuring depth images: it emits light pulses into the scene and uses a high-speed shutter to determine the distance of objects in the scene from the round-trip time of the light pulses, so it can quickly obtain depth information of the whole scene. However, because of hardware limitations, the scene depth map obtained by a time-of-flight camera has low resolution and can hardly meet the needs of practical applications.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by proposing a high-resolution depth image reconstruction method combining stereo image pairs of the same scene. The method combines the low-resolution depth image captured by a time-of-flight depth camera with a high-resolution color stereo image pair of the same scene to reconstruct a high-resolution, high-quality depth image. The concrete steps are:
Step (1): construct non-local filter weights and filter the input low-resolution depth image:
Let G denote the low-resolution depth image captured by the time-of-flight depth camera, of size n × m; let L denote the high-resolution color left image of the same scene captured by the left CCD camera, of size rn × rm; let R denote the corresponding high-resolution color right image captured by the right CCD camera, also of size rn × rm.
The non-local filter weights are constructed as

$$w_{ij}^{N} = \exp\!\left(-\frac{\lVert M_i^L - M_j^L\rVert^2}{2K \cdot h_L^2}\right) \cdot \exp\!\left(-\frac{\lVert M_i^I - M_j^I\rVert^2}{2K \cdot h_I^2}\right)$$

where I is the result of bilinear interpolation of the input low-resolution depth image, of size rn × rm; $M_i^L$ is the local patch centered at $L_i$ and $M_i^I$ is the local patch centered at $I_i$; K is the number of pixels in a patch; and $h_L$, $h_I$ are filter parameters controlling the decay rate of the exponential terms in the weight computation.
With the weights $w_{ij}^{N}$ computed above, the bilinear interpolation result I is filtered as

$$F_i' = \frac{\sum_j w_{ij}^{N} \cdot I_j}{\sum_j w_{ij}^{N}}$$

The resulting initial high-resolution depth image is denoted F', of size rn × rm;
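As a concrete illustration of step (1), the sketch below applies the non-local weights $w_{ij}^N$ to a bilinearly upsampled depth image, guided by the left color image. It is a minimal single-channel implementation; the search-window size (which the text does not fix) and the default patch size and decay parameters are illustrative assumptions.

```python
# Sketch of step (1): non-local filtering of the bilinearly upsampled depth
# image I, guided by the high-resolution left color image L (grayscale here).
# The search window and the patch/h_L/h_I defaults are illustrative choices.
import numpy as np

def nonlocal_depth_filter(L, I, patch=5, search=7, h_L=15.0, h_I=20.0):
    """Return F' = sum_j w_ij^N I_j / sum_j w_ij^N over a local search window."""
    H, W = I.shape
    r, s = patch // 2, search // 2
    K = patch * patch                      # pixels per patch
    Lp = np.pad(L.astype(float), r, mode="edge")
    Ip = np.pad(I.astype(float), r, mode="edge")
    F = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            Mi_L = Lp[y:y + patch, x:x + patch]   # patch centered at (y, x)
            Mi_I = Ip[y:y + patch, x:x + patch]
            num = den = 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    yy = min(max(y + dy, 0), H - 1)
                    xx = min(max(x + dx, 0), W - 1)
                    Mj_L = Lp[yy:yy + patch, xx:xx + patch]
                    Mj_I = Ip[yy:yy + patch, xx:xx + patch]
                    # w_ij^N = exp(-||M^L||^2/(2K h_L^2)) * exp(-||M^I||^2/(2K h_I^2))
                    w = np.exp(-np.sum((Mi_L - Mj_L) ** 2) / (2 * K * h_L ** 2)) \
                      * np.exp(-np.sum((Mi_I - Mj_I) ** 2) / (2 * K * h_I ** 2))
                    num += w * I[yy, xx]
                    den += w
            F[y, x] = num / den
    return F
```

Because the weights are normalized, a constant depth map passes through the filter unchanged, which is a quick sanity check on an implementation.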
Step (2): taking the left image as the reference image and the right image as the target image, construct local stereo matching weights in combination with the initial high-resolution depth image F', and compute the disparity map $D^L$ of the left image:
(a) In combination with F', construct the local stereo matching weights as

$$W_{ij}^{L} = \begin{cases} \exp\!\left(-\frac{\lVert L_i - L_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert F_i' - F_j'\rVert^2}{2\sigma_d^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, and the parameters $\sigma_c$ and $\sigma_d$, which control the decay rate of the exponential terms, are chosen as follows:
1. compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current $\sigma_c$;
2. compute the depth differences between the center point and the other points in the current local window of F', and take the median of these differences as the current $\sigma_d$;
(b) Construct the matching cost function between the left image L and the right image R as

$$E(i, \bar{i}_d) = \frac{\sum_{j \in \Omega(i),\, \bar{j}_d \in \Omega(\bar{i}_d)} W_{ij}^{L} \cdot e(j, \bar{j}_d)}{\sum_{j \in \Omega(i),\, \bar{j}_d \in \Omega(\bar{i}_d)} W_{ij}^{L}}$$

where $E(i, \bar{i}_d)$ denotes the matching cost when point i in the left image has disparity d, that is, the degree of match between i and its corresponding point $\bar{i}_d$ in the right image; $j \in \Omega(i)$ means that j lies in the local window of i, and $\bar{j}_d \in \Omega(\bar{i}_d)$ means that $\bar{j}_d$ lies in the local window of $\bar{i}_d$. The matching error is defined from the color and gradient differences between corresponding points:

$$e(j, \bar{j}_d) = \alpha \min\!\left(\lVert L_j - R_{\bar{j}_d}\rVert, \tau_1\right) + (1-\alpha) \min\!\left(\lVert \nabla L_j - \nabla R_{\bar{j}_d}\rVert, \tau_2\right)$$

where the parameter α balances the color and gradient errors, and $\tau_1$, $\tau_2$ are error thresholds;
(c) According to the matching cost function, compute the disparity of each point in the left image as

$$d^{*} = \arg\min_{d \in S_d} E(i, \bar{i}_d)$$

where $S_d = \{d_{\min}, \ldots, d_{\max}\}$ is the disparity range, thereby obtaining the disparity map $D^L$ of the left image;
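The matching weights, cost function, and winner-take-all disparity selection of step (2) can be sketched as follows. Grayscale images, a horizontal-difference gradient, and a global standard deviation in place of the per-window median rule for $\sigma_c$ and $\sigma_d$ are simplifying assumptions made only for this sketch.

```python
# Sketch of step (2): winner-take-all local stereo matching with the adaptive
# weights W_ij^L (color term from L, depth term from the initial depth F').
# Grayscale inputs, a horizontal-difference gradient, and global-std sigmas
# (instead of the per-window median rule) are simplifying assumptions.
import numpy as np

def disparity_left(L, R, Fp, d_max, win=3, alpha=0.2, tau1=7.0, tau2=2.0):
    """Disparity map of the left image L against the right image R."""
    H, W = L.shape
    r = win // 2
    gL = np.abs(np.diff(L, axis=1, prepend=L[:, :1]))   # |grad L| (horizontal)
    gR = np.abs(np.diff(R, axis=1, prepend=R[:, :1]))
    sigma_c = np.std(L) + 1e-6
    sigma_d = np.std(Fp) + 1e-6
    D = np.zeros((H, W), dtype=int)
    for y in range(r, H - r):
        for x in range(r, W - r):
            best_cost, best_d = np.inf, 0
            for d in range(0, min(d_max, x - r) + 1):
                num = den = 0.0
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        j = (y + dy, x + dx)
                        jd = (y + dy, x + dx - d)       # corresponding point in R
                        # W_ij^L: color similarity in L times depth similarity in F'
                        w = np.exp(-(L[j] - L[y, x]) ** 2 / (2 * sigma_c ** 2)) \
                          * np.exp(-(Fp[j] - Fp[y, x]) ** 2 / (2 * sigma_d ** 2))
                        # e(j, j_d): truncated color error plus truncated gradient error
                        e = alpha * min(abs(L[j] - R[jd]), tau1) \
                          + (1 - alpha) * min(abs(gL[j] - gR[jd]), tau2)
                        num += w * e
                        den += w
                cost = num / den                        # E(i, i_d)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            D[y, x] = best_d                            # d* = argmin_d E(i, i_d)
    return D
```

On a synthetic pair in which the right image is the left image shifted by a constant number of columns, the interior of the recovered map should equal that shift.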
Step (3): taking the right image as the reference image and the left image as the target image, compute the disparity value of each point in the right image, and use the resulting right disparity map $D^R$ to perform a left-right occlusion check on $D^L$:
When the right image is the reference and the left image is the target, F' cannot be used to construct the matching weights, because F' is registered to the left image. Therefore, with the right image as reference, the local matching weights are constructed as

$$W_{ij}^{R} = \begin{cases} \exp\!\left(-\frac{\lVert R_i - R_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert i - j\rVert^2}{2\sigma_s^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, and the parameters $\sigma_s$ and $\sigma_c$ are chosen as follows:
1. $\sigma_s$ is chosen as the radius of the local window;
2. compute the color differences between the center point and the other points in the current local window of R, and take the median of these differences as the current $\sigma_c$.
Using the computed local matching weights $W_{ij}^{R}$, construct the matching cost function between the right image R and the left image L, and compute the disparity map with the right image as reference, denoted $D^R$;
Use $D^R$ to perform a left-right occlusion check on $D^L$: if a disparity point satisfies

$$D_i^{L} = D_{i - D_i^{L}}^{R}$$

the point is marked as passing the left-right occlusion check;
Let the set S denote all points in $D^L$ that pass the left-right occlusion check;
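The left-right occlusion check $D_i^L = D^R_{i - D_i^L}$ admits a direct sketch. The tolerance argument is an addition for illustration; the default of 0 reproduces the strict equality in the text.

```python
# Sketch of step (3)'s left-right occlusion check: a pixel i of the left
# disparity map passes only if D_i^L equals D^R at column i - D_i^L.
# The tol argument is an addition; tol=0 reproduces the strict equality.
import numpy as np

def lr_consistency(DL, DR, tol=0):
    """Boolean mask S of pixels in DL that pass the left-right check."""
    H, W = DL.shape
    S = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            xr = x - int(DL[y, x])          # matched column in the right map
            if 0 <= xr < W and abs(int(DL[y, x]) - int(DR[y, xr])) <= tol:
                S[y, x] = True
    return S
```

Pixels whose match falls outside the right image (near the left border) fail the check automatically, which is the desired behavior for occluded regions.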
Step (4): fuse the depth image F' obtained by filtering with the disparity map $D^L$ obtained by matching to obtain the final high-resolution depth image:
(a) F' and $D^L$ are fused as

$$F_i = \begin{cases} \frac{b \cdot f}{D_i^{L}}, & i \in S \\ F_i', & \text{otherwise} \end{cases}$$

where b is the baseline distance between the left and right cameras and f is the focal length of the CCD cameras;
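The fusion rule of step (4a) converts verified disparities to depth via $b \cdot f / D_i^L$ and keeps the filtered initial depth elsewhere. In the sketch below, the baseline and focal-length defaults are placeholders, and the guard against zero disparity is an added safety check not stated in the text.

```python
# Sketch of step (4a): on pixels that passed the occlusion check (mask S),
# convert disparity to depth via F_i = b*f / D_i^L; elsewhere keep the
# filtered initial depth F'. The b, f defaults are placeholders and the
# zero-disparity guard is an added safety check.
import numpy as np

def fuse_depth(DL, Fp, S, b=0.1, f=500.0):
    D = DL.astype(float)
    F = Fp.astype(float).copy()
    valid = S & (D > 0)            # avoid division by zero at d = 0
    F[valid] = b * f / D[valid]    # triangulated depth on consistent pixels
    return F
```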
(b) Construct local filter weights to correct the fused depth image, based on the following two observations:
1. two regions with similar colors in the color image are likely to have similar depth values;
2. the surfaces of a real scene are essentially piecewise smooth.
The local filter weights are decomposed into three parts: color similarity, spatial similarity, and depth similarity. They are constructed as

$$w_{ij} = \begin{cases} \exp\!\left(-\frac{\lVert L_i - L_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert i - j\rVert^2}{2\sigma_s^2}\right) \cdot \exp\!\left(-\frac{\lVert F_i - F_j\rVert^2}{2\sigma_d^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, and the parameters $\sigma_c$, $\sigma_s$ and $\sigma_d$ are chosen as follows:
1. $\sigma_s$ is chosen as the radius of the local window;
2. compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current $\sigma_c$;
3. compute the depth differences between the center point and the other points in the current local window of F, and take the median of these differences as the current $\sigma_d$.
With the weights $w_{ij}$ computed above, the fusion result F is filtered as

$$F_i^{*} = \frac{\sum_j w_{ij} \cdot F_j}{\sum_j w_{ij}}$$

yielding the final high-resolution depth image, denoted $F^{*}$, of size rn × rm.
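The corrective filter of step (4b) is a joint filter over color, spatial, and depth similarity. The sketch below fixes the σ values as constants for brevity instead of deriving them per window as the text specifies, and leaves border pixels unfiltered.

```python
# Sketch of step (4b): a joint (trilateral) filter on the fused depth F
# combining color, spatial and depth similarity. Fixed sigma constants
# replace the per-window median rule; borders are left unfiltered.
import numpy as np

def trilateral_refine(L, F, win=9, sigma_c=10.0, sigma_s=4.0, sigma_d=5.0):
    H, W = F.shape
    r = win // 2
    out = F.astype(float).copy()
    for y in range(r, H - r):
        for x in range(r, W - r):
            num = den = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    j = (y + dy, x + dx)
                    # w_ij = color term * spatial term * depth term
                    w = np.exp(-(L[j] - L[y, x]) ** 2 / (2 * sigma_c ** 2)) \
                      * np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)) \
                      * np.exp(-(F[j] - F[y, x]) ** 2 / (2 * sigma_d ** 2))
                    num += w * F[j]
                    den += w
            out[y, x] = num / den   # F*_i = sum_j w_ij F_j / sum_j w_ij
    return out
```

As with step (1), the normalized weights leave a constant depth map unchanged, a useful correctness check.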
By combining the low-resolution depth image captured by the time-of-flight depth camera with a pair of high-resolution left and right images of the same scene, the present invention can reconstruct a high-quality, high-resolution depth image.
According to a first aspect of the invention, a method for constructing a non-local weight filter is disclosed.
According to a second aspect of the invention, a method for constructing local stereo matching weights in combination with an initial depth image is disclosed.
According to a third aspect of the invention, a method for fusing the depth image captured by the time-of-flight depth camera with the disparity map computed by the stereo matching algorithm is disclosed.
According to a fourth aspect of the invention, the complete workflow of the high-resolution depth image reconstruction method combining stereo image pairs of the same scene is disclosed. It mainly comprises: constructing non-local filter weights to filter the depth image captured by the time-of-flight depth camera; constructing local stereo matching weights to compute the disparity map by stereo matching; and finally fusing the two results to obtain a high-resolution, high-quality depth image.
Beneficial effects of the invention: through a fusion method, the proposed approach combines the respective advantages of stereo matching and the time-of-flight depth camera, overcomes the deficiencies of the prior art, and can reconstruct a high-quality, high-resolution depth image.
Detailed implementation steps
Step (1): construct non-local filter weights and filter the input low-resolution depth image:
Let G denote the low-resolution depth image captured by the time-of-flight depth camera, of size n × m; let L denote the high-resolution color left image of the same scene captured by the left CCD camera, of size rn × rm; let R denote the corresponding high-resolution color right image captured by the right CCD camera, also of size rn × rm.
The non-local filter weights are constructed as

$$w_{ij}^{N} = \exp\!\left(-\frac{\lVert M_i^L - M_j^L\rVert^2}{2K \cdot h_L^2}\right) \cdot \exp\!\left(-\frac{\lVert M_i^I - M_j^I\rVert^2}{2K \cdot h_I^2}\right)$$

where I is the result of bilinear interpolation of the input low-resolution depth image, of size rn × rm; $M_i^L$ is the local patch centered at $L_i$ and $M_i^I$ is the local patch centered at $I_i$; K is the number of pixels in a patch, with the patch size chosen as 5 × 5; the parameter $h_L$ is chosen as 15 and $h_I$ as 20.
With these weights, the bilinear interpolation result I is filtered as

$$F_i' = \frac{\sum_j w_{ij}^{N} \cdot I_j}{\sum_j w_{ij}^{N}}$$

The resulting initial high-resolution depth image is denoted F', of size rn × rm;
Step (2): taking the left image as the reference image and the right image as the target image, construct local stereo matching weights in combination with the initial high-resolution depth image F', and compute the disparity map $D^L$ of the left image:
(a) In combination with F', construct the local stereo matching weights as

$$W_{ij}^{L} = \begin{cases} \exp\!\left(-\frac{\lVert L_i - L_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert F_i' - F_j'\rVert^2}{2\sigma_d^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, the local window size is chosen as 9 × 9, and the parameters $\sigma_c$ and $\sigma_d$ are chosen as follows:
1. compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current $\sigma_c$;
2. compute the depth differences between the center point and the other points in the current local window of F', and take the median of these differences as the current $\sigma_d$;
(b) Construct the matching cost function between the left image L and the right image R as

$$E(i, \bar{i}_d) = \frac{\sum_{j \in \Omega(i),\, \bar{j}_d \in \Omega(\bar{i}_d)} W_{ij}^{L} \cdot e(j, \bar{j}_d)}{\sum_{j \in \Omega(i),\, \bar{j}_d \in \Omega(\bar{i}_d)} W_{ij}^{L}}$$

where $E(i, \bar{i}_d)$ denotes the matching cost when point i in the left image has disparity d, that is, the degree of match between i and its corresponding point $\bar{i}_d$ in the right image; $j \in \Omega(i)$ means that j lies in the local window of i, and $\bar{j}_d \in \Omega(\bar{i}_d)$ means that $\bar{j}_d$ lies in the local window of $\bar{i}_d$. The matching error is defined from the color and gradient differences between corresponding points:

$$e(j, \bar{j}_d) = \alpha \min\!\left(\lVert L_j - R_{\bar{j}_d}\rVert, \tau_1\right) + (1-\alpha) \min\!\left(\lVert \nabla L_j - \nabla R_{\bar{j}_d}\rVert, \tau_2\right)$$

where the parameter α, which balances the color and gradient errors, is chosen as 0.2; the error thresholds $\tau_1$ and $\tau_2$ are chosen as 7 and 2, respectively;
(c) According to the matching cost function, compute the disparity of each point in the left image as

$$d^{*} = \arg\min_{d \in S_d} E(i, \bar{i}_d)$$

where $S_d = \{d_{\min}, \ldots, d_{\max}\}$ is the disparity range, thereby obtaining the disparity map $D^L$ of the left image;
Step (3): taking the right image as the reference image and the left image as the target image, compute the disparity map $D^R$ of the right image and perform a left-right occlusion check on $D^L$:
When the right image is the reference and the left image is the target, F' cannot be used to construct the matching weights, because F' is registered to the left image. Therefore, with the right image as reference, the local matching weights are constructed as

$$W_{ij}^{R} = \begin{cases} \exp\!\left(-\frac{\lVert R_i - R_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert i - j\rVert^2}{2\sigma_s^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, the local window size is chosen as 9 × 9, and the parameters $\sigma_s$ and $\sigma_c$ are chosen as follows:
1. $\sigma_s$ is chosen as 4;
2. compute the color differences between the center point and the other points in the current local window of R, and take the median of these differences as the current $\sigma_c$.
Using the computed local matching weights $W_{ij}^{R}$, construct the matching cost function between the right image R and the left image L, and compute the disparity map with the right image as reference, denoted $D^R$;
Use $D^R$ to perform a left-right occlusion check on $D^L$: if a disparity point satisfies

$$D_i^{L} = D_{i - D_i^{L}}^{R}$$

the point is marked as passing the left-right occlusion check;
Let the set S denote all points in $D^L$ that pass the left-right occlusion check;
Step (4): fuse the depth image F' obtained by filtering with the disparity map $D^L$ obtained by matching to obtain the final high-resolution depth image:
(a) F' and $D^L$ are fused as

$$F_i = \begin{cases} \frac{b \cdot f}{D_i^{L}}, & i \in S \\ F_i', & \text{otherwise} \end{cases}$$

where b is the baseline distance between the left and right cameras and f is the focal length of the CCD cameras;
(b) Construct local filter weights to correct the fused depth image. The local filter weights are constructed as

$$w_{ij} = \begin{cases} \exp\!\left(-\frac{\lVert L_i - L_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert i - j\rVert^2}{2\sigma_s^2}\right) \cdot \exp\!\left(-\frac{\lVert F_i - F_j\rVert^2}{2\sigma_d^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, the local window size is chosen as 9 × 9, and the parameters $\sigma_c$, $\sigma_s$ and $\sigma_d$ are chosen as follows:
1. $\sigma_s$ is chosen as 4;
2. compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current $\sigma_c$;
3. compute the depth differences between the center point and the other points in the current local window of F, and take the median of these differences as the current $\sigma_d$.
With the weights $w_{ij}$ computed above, the fusion result F is filtered as

$$F_i^{*} = \frac{\sum_j w_{ij} \cdot F_j}{\sum_j w_{ij}}$$

yielding the final high-resolution depth image $F^{*}$, of size rn × rm.

Claims (1)

1. A high-resolution depth image reconstruction method combining stereo image pairs of the same scene, characterized in that the concrete steps of the method are:
Step (1): construct non-local filter weights and filter the input low-resolution depth image:
Let G denote the low-resolution depth image captured by the time-of-flight depth camera, of size n × m; let L denote the high-resolution color left image of the same scene captured by the left CCD camera, of size rn × rm; let R denote the corresponding high-resolution color right image captured by the right CCD camera, also of size rn × rm.
The non-local filter weights are constructed as

$$w_{ij}^{N} = \exp\!\left(-\frac{\lVert M_i^L - M_j^L\rVert^2}{2K \cdot h_L^2}\right) \cdot \exp\!\left(-\frac{\lVert M_i^I - M_j^I\rVert^2}{2K \cdot h_I^2}\right)$$

where I is the result of bilinear interpolation of the input low-resolution depth image, of size rn × rm; $M_i^L$ is the local patch centered at $L_i$ and $M_i^I$ is the local patch centered at $I_i$; K is the number of pixels in a patch; and $h_L$, $h_I$ are filter parameters controlling the decay rate of the exponential terms in the weight computation.
With the weights $w_{ij}^{N}$ computed above, the bilinear interpolation result I is filtered as

$$F_i' = \frac{\sum_j w_{ij}^{N} \cdot I_j}{\sum_j w_{ij}^{N}}$$

The resulting initial high-resolution depth image is denoted F', of size rn × rm;
Step (2): taking the left image as the reference image and the right image as the target image, construct local stereo matching weights in combination with the initial high-resolution depth image F', and compute the disparity map $D^L$ of the left image:
(a) In combination with F', construct the local stereo matching weights as

$$W_{ij}^{L} = \begin{cases} \exp\!\left(-\frac{\lVert L_i - L_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert F_i' - F_j'\rVert^2}{2\sigma_d^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, and the parameters $\sigma_c$ and $\sigma_d$, which control the decay rate of the exponential terms, are chosen as follows:
1. compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current $\sigma_c$;
2. compute the depth differences between the center point and the other points in the current local window of F', and take the median of these differences as the current $\sigma_d$;
(b) Construct the matching cost function between the left image L and the right image R as

$$E(i, \bar{i}_d) = \frac{\sum_{j \in \Omega(i),\, \bar{j}_d \in \Omega(\bar{i}_d)} W_{ij}^{L} \cdot e(j, \bar{j}_d)}{\sum_{j \in \Omega(i),\, \bar{j}_d \in \Omega(\bar{i}_d)} W_{ij}^{L}}$$

where $E(i, \bar{i}_d)$ denotes the matching cost when point i in the left image has disparity d, that is, the degree of match between i and its corresponding point $\bar{i}_d$ in the right image; $j \in \Omega(i)$ means that j lies in the local window of i, and $\bar{j}_d \in \Omega(\bar{i}_d)$ means that $\bar{j}_d$ lies in the local window of $\bar{i}_d$. The matching error is defined from the color and gradient differences between corresponding points:

$$e(j, \bar{j}_d) = \alpha \min\!\left(\lVert L_j - R_{\bar{j}_d}\rVert, \tau_1\right) + (1-\alpha) \min\!\left(\lVert \nabla L_j - \nabla R_{\bar{j}_d}\rVert, \tau_2\right)$$

where the parameter α balances the color and gradient errors, and $\tau_1$, $\tau_2$ are error thresholds;
(c) According to the matching cost function, compute the disparity of each point in the left image as

$$d^{*} = \arg\min_{d \in S_d} E(i, \bar{i}_d)$$

where $S_d = \{d_{\min}, \ldots, d_{\max}\}$ is the disparity range, thereby obtaining the disparity map $D^L$ of the left image;
Step (3): taking the right image as the reference image and the left image as the target image, compute the disparity value of each point in the right image, and use the resulting right disparity map $D^R$ to perform a left-right occlusion check on $D^L$:
When the right image is the reference and the left image is the target, F' cannot be used to construct the matching weights, because F' is registered to the left image. Therefore, with the right image as reference, the local matching weights are constructed as

$$W_{ij}^{R} = \begin{cases} \exp\!\left(-\frac{\lVert R_i - R_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert i - j\rVert^2}{2\sigma_s^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, and the parameters $\sigma_s$ and $\sigma_c$ are chosen as follows:
1. $\sigma_s$ is chosen as the radius of the local window;
2. compute the color differences between the center point and the other points in the current local window of R, and take the median of these differences as the current $\sigma_c$.
Using the computed local matching weights $W_{ij}^{R}$, construct the matching cost function between the right image R and the left image L, and compute the disparity map with the right image as reference, denoted $D^R$;
Use $D^R$ to perform a left-right occlusion check on $D^L$: if a disparity point satisfies

$$D_i^{L} = D_{i - D_i^{L}}^{R}$$

the point is marked as passing the left-right occlusion check;
Let the set S denote all points in $D^L$ that pass the left-right occlusion check;
Step (4): fuse the depth image F' obtained by filtering with the disparity map $D^L$ obtained by matching to obtain the final high-resolution depth image:
(a) F' and $D^L$ are fused as

$$F_i = \begin{cases} \frac{b \cdot f}{D_i^{L}}, & i \in S \\ F_i', & \text{otherwise} \end{cases}$$

where b is the baseline distance between the left and right cameras and f is the focal length of the CCD cameras;
(b) Construct local filter weights to correct the fused depth image, based on the following two observations:
1. two regions with similar colors in the color image are likely to have similar depth values;
2. the surfaces of a real scene are essentially piecewise smooth.
The local filter weights are decomposed into three parts: color similarity, spatial similarity, and depth similarity. They are constructed as

$$w_{ij} = \begin{cases} \exp\!\left(-\frac{\lVert L_i - L_j\rVert^2}{2\sigma_c^2}\right) \cdot \exp\!\left(-\frac{\lVert i - j\rVert^2}{2\sigma_s^2}\right) \cdot \exp\!\left(-\frac{\lVert F_i - F_j\rVert^2}{2\sigma_d^2}\right), & j \in \Omega(i) \\ 0, & \text{otherwise} \end{cases}$$

where $j \in \Omega(i)$ means that j lies in the local window of i, and the parameters $\sigma_c$, $\sigma_s$ and $\sigma_d$ are chosen as follows:
1. $\sigma_s$ is chosen as the radius of the local window;
2. compute the color differences between the center point and the other points in the current local window of L, and take the median of these differences as the current $\sigma_c$;
3. compute the depth differences between the center point and the other points in the current local window of F, and take the median of these differences as the current $\sigma_d$.
With the weights $w_{ij}$ computed above, the fusion result F is filtered as

$$F_i^{*} = \frac{\sum_j w_{ij} \cdot F_j}{\sum_j w_{ij}}$$

yielding the final high-resolution depth image $F^{*}$, of size rn × rm.
CN201410161575.1A 2014-04-21 2014-04-21 Reconstruction method for high-resolution depth image in combination with stereo image pairs of same scene Active CN103955954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410161575.1A CN103955954B (en) 2014-04-21 2014-04-21 Reconstruction method for high-resolution depth image in combination with stereo image pairs of same scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410161575.1A CN103955954B (en) 2014-04-21 2014-04-21 Reconstruction method for high-resolution depth image in combination with stereo image pairs of same scene

Publications (2)

Publication Number Publication Date
CN103955954A true CN103955954A (en) 2014-07-30
CN103955954B CN103955954B (en) 2017-02-08

Family

ID=51333223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410161575.1A Active CN103955954B (en) 2014-04-21 2014-04-21 Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene

Country Status (1)

Country Link
CN (1) CN103955954B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184780A (en) * 2015-08-26 2015-12-23 京东方科技集团股份有限公司 Prediction method and system for stereoscopic vision depth
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
WO2016172960A1 (en) * 2015-04-30 2016-11-03 SZ DJI Technology Co., Ltd. System and method for enhancing image resolution
CN106408513A (en) * 2016-08-25 2017-02-15 天津大学 Super-resolution reconstruction method of depth map
CN106774309A * 2016-12-01 2017-05-31 天津工业大学 Simultaneous visual servoing and adaptive depth discrimination method for a mobile robot
CN106911888A (en) * 2015-12-23 2017-06-30 意法半导体(R&D)有限公司 A kind of device
CN107749060A * 2017-09-28 2018-03-02 深圳市纳研科技有限公司 Machine vision device and three-dimensional information acquisition algorithm based on time-of-flight technology
CN108876836A (en) * 2018-03-29 2018-11-23 北京旷视科技有限公司 A kind of depth estimation method, device, system and computer readable storage medium
CN109061658A * 2018-06-06 2018-12-21 天津大学 Laser radar data fusion method
CN109791697A (en) * 2016-09-12 2019-05-21 奈安蒂克公司 Using statistical model from image data predetermined depth
WO2022105615A1 (en) * 2020-11-19 2022-05-27 中兴通讯股份有限公司 3d depth map construction method and apparatus, and ar glasses

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430796A (en) * 2007-11-06 2009-05-13 三星电子株式会社 Image generating method and apparatus
CN102387374A (en) * 2010-08-30 2012-03-21 三星电子株式会社 Device and method for acquiring high-precision depth map
CN102867288A (en) * 2011-07-07 2013-01-09 三星电子株式会社 Depth image conversion apparatus and method
CN103167306A (en) * 2013-03-22 2013-06-19 上海大学 Device and method for extracting high-resolution depth map in real time based on image matching
CN103337069A (en) * 2013-06-05 2013-10-02 余洪山 A high-quality three-dimensional color image acquisition method based on a composite video camera and an apparatus thereof
CN103440664A (en) * 2013-09-05 2013-12-11 Tcl集团股份有限公司 Method, system and computing device for generating high-resolution depth map

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430796A (en) * 2007-11-06 2009-05-13 三星电子株式会社 Image generating method and apparatus
CN102387374A (en) * 2010-08-30 2012-03-21 三星电子株式会社 Device and method for acquiring high-precision depth map
CN102867288A (en) * 2011-07-07 2013-01-09 三星电子株式会社 Depth image conversion apparatus and method
CN103167306A (en) * 2013-03-22 2013-06-19 上海大学 Device and method for extracting high-resolution depth map in real time based on image matching
CN103337069A (en) * 2013-06-05 2013-10-02 余洪山 A high-quality three-dimensional color image acquisition method based on a composite video camera and an apparatus thereof
CN103440664A (en) * 2013-09-05 2013-12-11 Tcl集团股份有限公司 Method, system and computing device for generating high-resolution depth map

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EUN-KYUNG LEE et al.: "Generation of high-quality depth maps using hybrid camera system for 3-D video", Journal of Visual Communication and Image Representation *
杨宇翔 et al.: "Depth map super-resolution algorithm based on local structural features of color images", Pattern Recognition and Artificial Intelligence *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11249173B2 (en) 2015-04-30 2022-02-15 SZ DJI Technology Co., Ltd. System and method for enhancing image resolution
WO2016172960A1 (en) * 2015-04-30 2016-11-03 SZ DJI Technology Co., Ltd. System and method for enhancing image resolution
US10488500B2 (en) 2015-04-30 2019-11-26 SZ DJI Technology Co., Ltd. System and method for enhancing image resolution
CN107534764A (en) * 2015-04-30 2018-01-02 深圳市大疆创新科技有限公司 Strengthen the system and method for image resolution ratio
US9936189B2 (en) 2015-08-26 2018-04-03 Boe Technology Group Co., Ltd. Method for predicting stereoscopic depth and apparatus thereof
CN105184780B (en) * 2015-08-26 2018-06-05 京东方科技集团股份有限公司 A kind of Forecasting Methodology and system of stereoscopic vision depth
CN105184780A (en) * 2015-08-26 2015-12-23 京东方科技集团股份有限公司 Prediction method and system for stereoscopic vision depth
CN106911888A (en) * 2015-12-23 2017-06-30 STMicroelectronics (R&D) Ltd. An apparatus
CN105869167A (en) * 2016-03-30 2016-08-17 Tianjin University High-resolution depth map acquisition method based on active and passive fusion
CN106408513B (en) * 2016-08-25 2019-10-18 Tianjin University Depth map super-resolution reconstruction method
CN106408513A (en) * 2016-08-25 2017-02-15 Tianjin University Depth map super-resolution reconstruction method
CN109791697A (en) * 2016-09-12 2019-05-21 Niantic, Inc. Predicting depth from image data using statistical models
CN109791697B (en) * 2016-09-12 2023-10-13 Niantic, Inc. Predicting depth from image data using statistical models
CN106774309A (en) * 2016-12-01 2017-05-31 Tianjin Polytechnic University Simultaneous visual servoing and adaptive depth discrimination method for a mobile robot
CN106774309B (en) * 2016-12-01 2019-09-17 Tianjin Polytechnic University Simultaneous visual servoing and adaptive depth discrimination method for a mobile robot
CN107749060A (en) * 2017-09-28 2018-03-02 Shenzhen Nayan Technology Co., Ltd. Machine vision device and three-dimensional information acquisition algorithm based on time-of-flight technology
CN108876836A (en) * 2018-03-29 2018-11-23 Beijing Megvii Technology Co., Ltd. Depth estimation method, apparatus, system and computer-readable storage medium
CN109061658A (en) * 2018-06-06 2018-12-21 Tianjin University Laser radar data fusion method
CN109061658B (en) * 2018-06-06 2022-06-21 Tianjin University Laser radar data fusion method
WO2022105615A1 (en) * 2020-11-19 2022-05-27 ZTE Corporation 3D depth map construction method and apparatus, and AR glasses

Also Published As

Publication number Publication date
CN103955954B (en) 2017-02-08

Similar Documents

Publication Publication Date Title
CN103955954A (en) Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
TWI497980B (en) System and method of processing 3d stereoscopic images
CN101887589B (en) Stereoscopic vision-based real low-texture image reconstruction method
CN103646396B (en) Matching cost algorithm for binocular stereo matching and non-local stereo matching
CN104504671A (en) Method for generating virtual-real fusion image for stereo display
CN105956597A (en) Binocular stereo matching method based on convolutional neural network
CN105869167A (en) High-resolution depth map acquisition method based on active and passive fusion
CN104661010A (en) Method and device for establishing three-dimensional model
CN103702103B (en) Lenticular stereoscopic printing image synthesis method based on binocular camera
CN106254854A (en) Three-dimensional image preparation method, apparatus and system
CN104809719A (en) Virtual view synthesis method based on homographic matrix partition
CN103606151A (en) A wide-range virtual geographical scene automatic construction method based on image point clouds
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
CN106600632A (en) Improved matching cost aggregation stereo matching algorithm
CN106408513A (en) Super-resolution reconstruction method of depth map
CN111402311A (en) Knowledge distillation-based lightweight stereo parallax estimation method
KR101714224B1 (en) 3 dimension image reconstruction apparatus and method based on sensor fusion
CN104301706B (en) Synthesis method for enhancing naked-eye stereoscopic display effect
CN103716615A (en) 2D-to-3D video conversion method based on sample learning and depth image propagation
CN103260008B (en) Projection conversion method from image position to physical position
CN109345444A (en) Depth-perception-enhanced super-resolution stereo image construction method
CN101833759B (en) Robot scene depth discrimination method based on continuous videos
WO2009099117A1 (en) Plane parameter estimating device, plane parameter estimating method, and plane parameter estimating program
CN105138979A (en) Method for detecting the head of a moving human body based on stereo vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant