CN108564536A - Global optimization method for depth maps - Google Patents

Global optimization method for depth maps

Info

Publication number
CN108564536A
CN108564536A (application CN201711406513.2A)
Authority
CN
China
Prior art keywords
data
parallax
depth
pixel
look
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711406513.2A
Other languages
Chinese (zh)
Other versions
CN108564536B (en)
Inventor
郭文松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luoyang Zhongke Information Industry Research Institute
Luoyang Zhongke Zhongchuang Space Technology Co., Ltd
Original Assignee
Luoyang Zhongke Public Interspace Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Zhongke Public Interspace Technology Ltd filed Critical Luoyang Zhongke Public Interspace Technology Ltd
Priority to CN201711406513.2A priority Critical patent/CN108564536B/en
Publication of CN108564536A publication Critical patent/CN108564536A/en
Application granted granted Critical
Publication of CN108564536B publication Critical patent/CN108564536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A global optimization method for depth maps. The method makes full use of the difference between the left-view and right-view disparity data and of the edge gradient information of the color data to optimize the depth map globally. First, region filtering based on region growing is applied separately to the initial left-view and right-view disparity data, removing small isolated blocks of erroneous disparity. Then, left-view confidence coefficient data are computed from the difference between the optimized left-view and right-view disparity data using an e^(-x) model, which experiments show to be concise and effective. Finally, the left-view disparity data and the confidence coefficient data are transformed by view projection into initial depth data and confidence data under the color camera view; the edge information of the color image is fully exploited to construct a system of linear equations in the depth data, which is solved by successive over-relaxation to obtain the optimized depth data. The method obtains high-accuracy depth data in real time; the optimized depth map is smooth, preserves edges, and fills large holes well.

Description

Global optimization method for depth maps
Technical field
The present invention relates to the technical fields of computer vision and image processing, and in particular to a global optimization method for depth maps.
Background technology
From two images of a scene captured from different viewpoints, the depth of the scene can be estimated from the positional offset of scene points between the two images. This offset corresponds to the disparity of the image pixels and can be converted directly into scene depth, generally represented as a depth map. However, when the scene lacks texture or contains repeated texture, the computed depth map tends to exhibit large holes in the corresponding regions. Among existing methods, one approach enriches the texture by artificially modifying the scene (for example, attaching marker points or projecting speckle patterns), but this can be inconvenient, inapplicable, or ineffective; another approach optimizes the depth map directly, but such methods can be complicated, can over-optimize the result, or can fail to match reality.
Invention content
To remedy these deficiencies of the prior art, the present invention provides a global optimization method for depth maps. The method achieves filtering and denoising of the depth map and filling of large holes; it transforms the left-view and right-view disparity data into the RGB camera view and makes full use of the RGB image edge information, and it is concise and efficient.
To achieve the above goals, the concrete scheme adopted by the present invention is a global optimization method for depth maps comprising the following steps:
Step 1: Apply region filtering, based on region growing, to the initial left-view disparity data and the initial right-view disparity data separately, removing isolated block-shaped regions of erroneous disparity to obtain optimized left-view disparity data and optimized right-view disparity data. The region-growing removal of block-shaped erroneous disparity proceeds as follows:
S1. Create two images, Buff and Dst, equal in size to the original disparity image and initialized to zero; Buff records the pixels that have already been grown, and Dst marks the block-shaped image regions that satisfy the removal condition.
S2. Set a first threshold and a second threshold; the first threshold is a disparity difference, and the second threshold is the area of a block-shaped region of erroneous disparity.
S3. Traverse every pixel that has not yet been grown, take the current point as the seed point, and invoke the region-growing function.
S4. Create a stack vectorGrowPoints and a stack resultPoints. Pop the last point from vectorGrowPoints; then, in the point's eight neighborhood directions {-1,-1}, {0,-1}, {1,-1}, {1,0}, {1,1}, {0,1}, {-1,1}, {-1,0}, compare the disparity value of each not-yet-grown pixel with that of the seed point. If the difference is below the first threshold, the pixel is considered eligible: push it onto vectorGrowPoints and resultPoints, and mark it as grown in Buff. Repeat until vectorGrowPoints contains no points. If the number of points in resultPoints is below the second threshold, mark them in Dst.
S5. Repeat steps S3 and S4, then remove the regions marked in Dst from the disparity data, obtaining the optimized left-view disparity data and the optimized right-view disparity data.
Step 2: Compute left-view confidence coefficient data from the optimized left-view disparity data and the optimized right-view disparity data of step 1. The specific method is α_p = e^(−|ld − rd|), where ld is the optimized left-view disparity of step 1, rd is the corresponding optimized right-view disparity of step 1, and α_p is the left-view confidence coefficient.
Step 3: Compute left-view depth data from the optimized left-view disparity data of step 1 and the camera parameters; transform the left-view depth data and the left-view confidence coefficient data of step 2, by view projection, into initial depth data and confidence coefficient data under the RGB camera view.
Step 4: Compute edge-constraint coefficient data from the RGB image edge information; then apply the global optimization objective function to the edge-constraint coefficient data and the initial depth data and confidence coefficient data under the RGB camera view from step 3 to generate the optimized depth data.
Preferably, an acquisition device is used in obtaining the depth image, the acquisition device comprising two near-infrared cameras and one RGB camera.
Preferably, in step 3, the specific computation of the initial depth data under the RGB camera view is as follows:
T1. Traverse the image pixels; convert each known disparity value to a depth value using the baseline and focal length of the left and right near-infrared cameras.
T2. From the depth value and the intrinsic parameters of the left (or right) near-infrared camera, compute the three-dimensional coordinates of the corresponding spatial point in that camera's coordinate system.
T3. From the relative pose between the left (or right) near-infrared camera coordinate system and the RGB camera coordinate system, together with the stereo rectification matrix between the two near-infrared cameras, compute the three-dimensional coordinates of the point in the RGB camera coordinate system.
T4. Using the intrinsic parameters of the RGB camera, compute the projection of the point onto the RGB image plane and its depth value, obtaining the initial depth data under the RGB camera view.
Preferably, the global optimization objective function used in step 4 is:
ε(D) = Σ_p α_p (D_p − D̂_p)² + Σ_{(p,q)∈E} ω_pq (D_p − D_q)²
where D̂_p is the initial depth of pixel p, D_p is the depth to be solved, α_p is the left-view confidence coefficient of pixel p, ω_qp is the edge-constraint coefficient, and q ranges over the four-neighborhood pixels of p. Optimization ends when ε(D) reaches its minimum. Suppose the image has n pixels; to minimize ε(D), set the derivative of the right-hand side of the objective with respect to each D_p equal to zero, obtaining n equations arranged as a linear system AX = B, where A is an n×n coefficient matrix depending only on α_p and ω_qp, B is an n×1 constant matrix depending only on α_p and D̂_p, and X is the column vector of unknown depths [D_1, D_2, …, D_n]^T. The optimized depth data is obtained by iterative solution.
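The objective function itself is printed only as an image in the original; the following is a reconstruction (not the verbatim formula), chosen so that term-by-term differentiation reproduces the pth row of AX = B stated in the text:

```latex
\varepsilon(D) \;=\; \sum_{p} \alpha_p \bigl(D_p - \hat{D}_p\bigr)^2
  \;+\; \sum_{(p,q)\in E} \omega_{pq} \bigl(D_p - D_q\bigr)^2 .
% Setting \partial \varepsilon / \partial D_p = 0 for every pixel p:
%   2\alpha_p (D_p - \hat{D}_p)
%     + 2 \sum_{(p,q)\in E} (\omega_{pq} + \omega_{qp}) (D_p - D_q) = 0,
% which rearranges to the pth row of AX = B:
\Bigl(\alpha_p + \sum_{(p,q)\in E} (\omega_{pq} + \omega_{qp})\Bigr) D_p
  \;-\; \sum_{(p,q)\in E} (\omega_{pq} + \omega_{qp})\, D_q
  \;=\; \alpha_p \hat{D}_p .
```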
Preferably, for an arbitrary pixel p, the pth row of AX = B is (α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp))·D_p − Σ_{(p,q)∈E}(ω_pq + ω_qp)·D_q = α_p·D̂_p, from which the coefficient matrix A and the constant matrix B are computed.
Preferably, the specific computation of the coefficient matrix A and the constant matrix B is as follows:
(1) First compute the gradient of the RGB image: ∇_qp is the gray-level difference between pixels q and p; then ω_qp = e^(−|∇_qp|/β), whose value lies in the range [0, 1], where β is a tuning parameter with β = 20.
(2) Compute the coefficient matrix A from α_p and ω_qp; the pth row of A is (α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp))·D_p − Σ_{(p,q)∈E}(ω_pq + ω_qp)·D_q, so the row has five nonzero values, corresponding to pixel p and its four neighborhood pixels: the element for pixel p is α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp), and the element for each four-neighborhood pixel q of p is −(ω_pq + ω_qp).
(3) Compute the constant matrix B from α_p and the initial depth value D̂_p; the pth row of B is α_p·D̂_p.
Preferably, the linear system is solved by the successive over-relaxation (SOR) iterative method, obtaining the optimized depth data.
Advantageous effects:
(1) The present invention provides a global optimization method for depth maps based on an acquisition device comprising two near-infrared (NIR) cameras and one visible-light (RGB) camera. The near-infrared cameras form a binocular stereo vision system that obtains the depth map in real time and registers it with the RGB image captured by the visible-light camera. The method makes full use of the global information of the left-view and right-view disparity data and of the edge constraints of the color data to optimize the depth map globally; the left-view and right-view disparity data are transformed into the RGB camera view, where the RGB image edge information is exploited. When computing the confidence coefficient data, an e^(−x) model is applied directly to the left-view and right-view disparity data, which experiments show to be concise and effective. Concise: in existing methods, the confidence coefficient is determined by fitting a quadratic curve through the matching costs of the three adjacent integer disparities of the matched pixel, which requires re-computing the disparity matching cost, fitting the three cost values, and determining the sign of α_p from the curve direction; the method of the present invention is simpler than the prior art. Effective: the optimized depth map is smooth, preserves edges, and fills large holes well.
(2) The present invention uses region growing to apply region filtering separately to the initial left-view disparity data and the initial right-view disparity data. Experiments show that the marking can be completed in a single traversal of the image, and that small isolated regions whose disparity values are similar to each other but differ markedly from the surrounding disparity are effectively removed.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the depth map before optimization, with a large hole at the head;
Fig. 3 is the depth map after optimization by the global optimization method of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to the flow chart of the present invention in Fig. 1: the intrinsic and extrinsic parameters of all cameras are known, and the initial left-view disparity data and initial right-view disparity data are computed by prior-art methods, which are not repeated here. The global optimization method for depth maps is based on an acquisition device used in obtaining the depth image, the acquisition device comprising two near-infrared cameras and one RGB camera. The method comprises the following steps:
Step 1: Apply region filtering, based on region growing, to the initial left-view disparity data and the initial right-view disparity data separately, removing isolated block-shaped regions of erroneous disparity to obtain optimized left-view disparity data and optimized right-view disparity data. Disparity data generated with left-right consistency checking has already had a large number of mismatched point disparities removed, but erroneous disparity still persists in small regions. The present invention therefore first applies region filtering to the left-view and right-view disparity data separately, removing small isolated regions of similar disparity values and further improving disparity quality. The region-growing removal of block-shaped erroneous disparity proceeds as follows:
S1. Create two images, Buff and Dst, equal in size to the original disparity image and initialized to zero; Buff records the pixels that have already been grown, and Dst marks the block-shaped image regions that satisfy the removal condition.
S2. Set a first threshold and a second threshold; the first threshold is a disparity difference, and the second threshold is the area of a block-shaped region of erroneous disparity. Preferably, the first threshold is 10 and the second threshold is 60.
S3. Traverse every pixel that has not yet been grown, take the current point as the seed point, and invoke the region-growing function.
S4. Create a stack vectorGrowPoints and a stack resultPoints. Pop the last point from vectorGrowPoints; then, in the point's eight neighborhood directions {-1,-1}, {0,-1}, {1,-1}, {1,0}, {1,1}, {0,1}, {-1,1}, {-1,0}, compare the disparity value of each not-yet-grown pixel with that of the seed point. If the difference is below the first threshold, the pixel is considered eligible: push it onto vectorGrowPoints and resultPoints, and mark it as grown in Buff. Repeat until vectorGrowPoints contains no points. If the number of points in resultPoints is below the second threshold, mark them in Dst.
S5. Repeat steps S3 and S4, then remove the regions marked in Dst from the disparity data, obtaining the optimized left-view disparity data and the optimized right-view disparity data.
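As a concrete illustration, the S1-S5 procedure above can be sketched as follows. This is a minimal Python/NumPy sketch, not the patented implementation: the function name, the treatment of zero-valued pixels as invalid, and the default thresholds (the preferable values 10 and 60 stated above) are assumptions.

```python
import numpy as np

def region_filter(disp, diff_thresh=10.0, area_thresh=60):
    """Remove small isolated blobs of similar disparity by region growing.

    diff_thresh -- first threshold: max disparity difference to the seed
    area_thresh -- second threshold: regions smaller than this are erased
    """
    h, w = disp.shape
    grown = np.zeros((h, w), dtype=bool)   # Buff: pixels already grown
    kill = np.zeros((h, w), dtype=bool)    # Dst: small regions to remove
    neigh = [(-1, -1), (0, -1), (1, -1), (1, 0),
             (1, 1), (0, 1), (-1, 1), (-1, 0)]
    for sy in range(h):                    # S3: traverse ungrown pixels
        for sx in range(w):
            if grown[sy, sx] or disp[sy, sx] == 0:
                continue
            seed = float(disp[sy, sx])
            grown[sy, sx] = True
            stack = [(sx, sy)]             # vectorGrowPoints
            region = [(sx, sy)]            # resultPoints
            while stack:                   # S4: grow in 8 directions
                x, y = stack.pop()
                for dx, dy in neigh:
                    nx, ny = x + dx, y + dy
                    if (0 <= nx < w and 0 <= ny < h
                            and not grown[ny, nx] and disp[ny, nx] != 0
                            and abs(float(disp[ny, nx]) - seed) < diff_thresh):
                        grown[ny, nx] = True
                        stack.append((nx, ny))
                        region.append((nx, ny))
            if len(region) < area_thresh:  # mark small regions in Dst
                for x, y in region:
                    kill[y, x] = True
    out = disp.copy()                      # S5: remove marked regions
    out[kill] = 0
    return out
```

Each pixel is grown at most once, so the marking completes in a single traversal of the image, as claimed in the advantageous effects.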
Step 2: Compute left-view confidence coefficient data from the optimized left-view disparity data and the optimized right-view disparity data of step 1. The specific method is α_p = e^(−|ld − rd|), where ld is the optimized left-view disparity of step 1, rd is the corresponding optimized right-view disparity of step 1, and α_p is the left-view confidence coefficient. Existing methods determine this disparity confidence coefficient by fitting a matching-cost curve, a cumbersome process; the method of computing the confidence coefficient data in the present invention is concise and efficient. The left-view confidence coefficient data is decisive for the optimization result, and the reliability of the α_p values is closely tied to the accuracy of the disparity data: small blocks of erroneous disparity cause block-shaped erroneous depth in the corresponding regions after optimization. For this reason the present invention proposes the region-growing removal of block-shaped erroneous disparity to improve disparity quality.
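The confidence formula α_p = e^(−|ld − rd|) is a direct per-pixel operation; a NumPy transcription (function name assumed) is:

```python
import numpy as np

def left_confidence(ld, rd):
    """Per-pixel left-view confidence alpha_p = exp(-|ld - rd|).

    ld, rd -- optimized left-view and corresponding right-view disparity maps.
    Identical disparities give alpha_p = 1; the confidence decays
    exponentially as the left-right disparity difference grows.
    """
    return np.exp(-np.abs(np.asarray(ld, dtype=np.float64)
                          - np.asarray(rd, dtype=np.float64)))
```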
Step 3: Compute left-view depth data from the optimized left-view disparity data of step 1 and the camera parameters; transform the left-view depth data and the left-view confidence coefficient data of step 2, by view projection, into initial depth data and confidence coefficient data under the RGB camera view. The specific computation of the initial depth data under the RGB camera view is as follows:
T1. Traverse the image pixels; convert each known disparity value to a depth value using the baseline and focal length of the left and right near-infrared cameras.
T2. From the depth value and the intrinsic parameters of the left (or right) near-infrared camera, compute the three-dimensional coordinates of the corresponding spatial point in that camera's coordinate system.
T3. From the relative pose between the left (or right) near-infrared camera coordinate system and the RGB camera coordinate system, together with the stereo rectification matrix between the two near-infrared cameras, compute the three-dimensional coordinates of the point in the RGB camera coordinate system.
T4. Using the intrinsic parameters of the RGB camera, compute the projection of the point onto the RGB image plane and its depth value, obtaining the initial depth data under the RGB camera view.
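A sketch of T1-T4 under stated assumptions: K_ir and K_rgb stand for the left NIR and RGB intrinsic matrices, R and t for the pose from the left NIR frame to the RGB frame, and baseline for the rectified NIR stereo baseline. All parameter names are placeholders, and the nearest-surface rule for colliding projections is an added detail not specified in the text.

```python
import numpy as np

def project_depth_to_rgb(disp, K_ir, K_rgb, R, t, baseline, rgb_shape):
    """Map a left-NIR disparity map to an initial depth map in the RGB view."""
    fx = K_ir[0, 0]
    K_ir_inv = np.linalg.inv(K_ir)
    depth_rgb = np.zeros(rgb_shape, dtype=np.float64)
    ys, xs = np.nonzero(disp > 0)
    for y, x in zip(ys, xs):
        z = fx * baseline / float(disp[y, x])        # T1: disparity -> depth
        p = K_ir_inv @ np.array([x * z, y * z, z])   # T2: back-project to 3D
        pr = R @ p + t                               # T3: into the RGB frame
        uvw = K_rgb @ pr                             # T4: project to RGB plane
        u = int(round(uvw[0] / uvw[2]))
        v = int(round(uvw[1] / uvw[2]))
        if 0 <= v < rgb_shape[0] and 0 <= u < rgb_shape[1]:
            # keep the nearest surface if several points hit one pixel
            if depth_rgb[v, u] == 0 or pr[2] < depth_rgb[v, u]:
                depth_rgb[v, u] = pr[2]
    return depth_rgb
```

With identical intrinsics and an identity pose, each valid disparity maps back to the same pixel with depth fx·baseline/disp, which makes the transform easy to sanity-check.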
Step 4: Compute edge-constraint coefficient data from the RGB image edge information; then apply the global optimization objective function to the edge-constraint coefficient data and the initial depth data and confidence coefficient data under the RGB camera view from step 3 to generate the optimized depth data. The global optimization objective function used is:
ε(D) = Σ_p α_p (D_p − D̂_p)² + Σ_{(p,q)∈E} ω_pq (D_p − D_q)²
where D̂_p is the initial depth of pixel p, D_p is the depth to be solved, α_p is the left-view confidence coefficient of pixel p, ω_qp is the edge-constraint coefficient, and q ranges over the four-neighborhood pixels of p. Optimization ends when ε(D) reaches its minimum. Suppose the image has n pixels; to minimize ε(D), set the derivative of the right-hand side of the objective with respect to each D_p equal to zero, obtaining n equations arranged as a linear system AX = B, where A is an n×n coefficient matrix depending only on α_p and ω_qp, B is an n×1 constant matrix depending only on α_p and D̂_p, and X is the column vector of unknown depths [D_1, D_2, …, D_n]^T. The optimized depth data is obtained by iterative solution.
For an arbitrary pixel p, the pth row of AX = B is (α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp))·D_p − Σ_{(p,q)∈E}(ω_pq + ω_qp)·D_q = α_p·D̂_p, from which the coefficient matrix A and the constant matrix B are computed.
Step 3 produced the initial depth data; the coefficient matrix and constant matrix are computed next. For an image of megapixel resolution the depth data volume reaches millions of values, and the coefficient matrix data volume is of squared order; to meet real-time GPU implementation, the present invention solves the linear system with the successive over-relaxation (SOR) iterative method, completing the depth data optimization, as shown in Figs. 2 and 3: Fig. 2 is the depth map before optimization, with a large hole at the head, and Fig. 3 is the depth map after optimization by the global optimization method of the present invention. The specific computation of the coefficient matrix A and the constant matrix B is as follows:
(1) First compute the gradient of the RGB image: ∇_qp is the gray-level difference between pixels q and p; then ω_qp = e^(−|∇_qp|/β), whose value lies in the range [0, 1], where β is a tuning parameter with β = 20. This step yields ω_qp, whose effect on the depth result is to preserve depth edges and keep them from being over-smoothed.
(2) Compute the coefficient matrix A from α_p and ω_qp; the pth row of A is (α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp))·D_p − Σ_{(p,q)∈E}(ω_pq + ω_qp)·D_q, so the row has five nonzero values, corresponding to pixel p and its four neighborhood pixels: the element for pixel p is α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp), and the element for each four-neighborhood pixel q of p is −(ω_pq + ω_qp).
(3) Compute the constant matrix B from α_p and the initial depth value D̂_p; the pth row of B is α_p·D̂_p.
(4) Solve the linear system by the SOR method, obtaining the optimized depth data.
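Steps (1)-(4) can be sketched matrix-free, updating each D_p in place from its four neighbors. This is a Python sketch under assumptions: the edge weight e^(−|∇_qp|/β) is a reconstruction from the stated range [0, 1] and β = 20; the relaxation factor and sweep count are arbitrary choices; and since this gray-difference weight is symmetric in p and q, the sum ω_pq + ω_qp is written as 2·w.

```python
import numpy as np

def solve_depth(d0, alpha, gray, beta=20.0, relax=1.5, sweeps=300):
    """Solve the depth system row by row with successive over-relaxation.

    d0    -- initial depth under the RGB view (0 where unknown)
    alpha -- per-pixel confidence coefficients
    gray  -- grayscale image supplying the edge weights
    """
    h, w = d0.shape
    D = d0.astype(np.float64).copy()
    g = gray.astype(np.float64)
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                diag = float(alpha[y, x])             # diagonal entry of row p
                rhs = float(alpha[y, x]) * d0[y, x]   # constant term B_p
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wq = 2.0 * np.exp(-abs(g[ny, nx] - g[y, x]) / beta)
                        diag += wq                    # w_pq + w_qp on the diagonal
                        rhs += wq * D[ny, nx]         # -(w_pq + w_qp) moved right
                # SOR update: over-relax toward the Gauss-Seidel value
                D[y, x] += relax * (rhs / diag - D[y, x])
    return D
```

Pixels with alpha = 0 (holes) carry no data term, so their depth converges to an edge-weighted average of the neighbors, which is how large holes get filled.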
The present invention provides a global optimization method for depth maps. The method optimizes the initial scene depth globally and achieves real-time, high-precision depth acquisition, mainly addressing the large holes in the computed disparity data that arise when scene texture is missing or repetitive: for example hair, whose texture is uniform and which readily absorbs projected light, remains hole-prone for some individuals even when an active structured-light source is used. The method can be used in cases such as three-dimensional reconstruction and somatosensory interaction. In three-dimensional reconstruction, it supplies high-quality depth data at each view for real-time high-precision reconstruction and can simplify subsequent optimization. In somatosensory interaction, it supports the modeling of different participants, presenting a realistic picture to the other party.
It should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes it.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A global optimization method for depth maps, characterized in that the method comprises the following steps:
Step 1: Apply region filtering, based on region growing, to the initial left-view disparity data and the initial right-view disparity data separately, removing isolated block-shaped regions of erroneous disparity to obtain optimized left-view disparity data and optimized right-view disparity data; the region-growing removal of block-shaped erroneous disparity proceeds as follows:
S1. Create two images, Buff and Dst, equal in size to the original disparity image and initialized to zero; Buff records the pixels that have already been grown, and Dst marks the block-shaped image regions that satisfy the removal condition.
S2. Set a first threshold and a second threshold; the first threshold is a disparity difference, and the second threshold is the area of a block-shaped region of erroneous disparity.
S3. Traverse every pixel that has not yet been grown, take the current point as the seed point, and invoke the region-growing function.
S4. Create a stack vectorGrowPoints and a stack resultPoints. Pop the last point from vectorGrowPoints; then, in the point's eight neighborhood directions {-1,-1}, {0,-1}, {1,-1}, {1,0}, {1,1}, {0,1}, {-1,1}, {-1,0}, compare the disparity value of each not-yet-grown pixel with that of the seed point; if the difference is below the first threshold, the pixel is considered eligible: push it onto vectorGrowPoints and resultPoints, and mark it as grown in Buff; repeat until vectorGrowPoints contains no points; if the number of points in resultPoints is below the second threshold, mark them in Dst.
S5. Repeat steps S3 and S4, then remove the regions marked in Dst from the disparity data, obtaining the optimized left-view disparity data and the optimized right-view disparity data.
Step 2: Compute left-view confidence coefficient data from the optimized left-view disparity data and the optimized right-view disparity data of step 1; the specific method is α_p = e^(−|ld − rd|), where ld is the optimized left-view disparity of step 1, rd is the corresponding optimized right-view disparity of step 1, and α_p is the left-view confidence coefficient.
Step 3: Compute left-view depth data from the optimized left-view disparity data of step 1 and the camera parameters; transform the left-view depth data and the left-view confidence coefficient data of step 2, by view projection, into initial depth data and confidence coefficient data under the RGB camera view.
Step 4: Compute edge-constraint coefficient data from the RGB image edge information; then apply the global optimization objective function to the edge-constraint coefficient data and the initial depth data and confidence coefficient data under the RGB camera view from step 3 to generate the optimized depth data.
2. The global optimization method for depth maps according to claim 1, characterized in that an acquisition device is used in obtaining the depth image, the acquisition device comprising two near-infrared cameras and one RGB camera.
3. The global optimization method for depth maps according to claim 1, characterized in that, in step 3, the specific computation of the initial depth data under the RGB camera view is as follows:
T1. Traverse the image pixels; convert each known disparity value to a depth value using the baseline and focal length of the left and right near-infrared cameras.
T2. From the depth value and the intrinsic parameters of the left (or right) near-infrared camera, compute the three-dimensional coordinates of the corresponding spatial point in that camera's coordinate system.
T3. From the relative pose between the left (or right) near-infrared camera coordinate system and the RGB camera coordinate system, together with the stereo rectification matrix between the two near-infrared cameras, compute the three-dimensional coordinates of the point in the RGB camera coordinate system.
T4. Using the intrinsic parameters of the RGB camera, compute the projection of the point onto the RGB image plane and its depth value, obtaining the initial depth data under the RGB camera view.
4. The global optimization method for depth maps according to claim 1, characterized in that the global optimization objective function used in step 4 is:
ε(D) = Σ_p α_p (D_p − D̂_p)² + Σ_{(p,q)∈E} ω_pq (D_p − D_q)²
where D̂_p is the initial depth of pixel p, D_p is the depth to be solved, α_p is the left-view confidence coefficient of pixel p, ω_qp is the edge-constraint coefficient, and q ranges over the four-neighborhood pixels of p; optimization ends when ε(D) reaches its minimum; supposing the image has n pixels, to minimize ε(D) the derivative of the right-hand side of the objective with respect to each D_p is set equal to zero, obtaining n equations arranged as a linear system AX = B, where A is an n×n coefficient matrix depending only on α_p and ω_qp, B is an n×1 constant matrix depending only on α_p and D̂_p, and X is the column vector of unknown depths [D_1, D_2, …, D_n]^T; the optimized depth data is obtained by iterative solution.
5. The global optimization method for depth maps according to claim 4, characterized in that, for an arbitrary pixel p, the pth row of AX = B is (α_p + Σ_{(p,q)∈E}(ω_pq + ω_qp))·D_p − Σ_{(p,q)∈E}(ω_pq + ω_qp)·D_q = α_p·D̂_p, from which the coefficient matrix A and the constant matrix B are computed.
6. The depth-map global optimization method according to claim 5, characterized in that the specific calculation process of the coefficient matrix A and the constant matrix B is as follows:
(1) First, the gradient of the RGB image is computed; with ∇Iqp denoting the gray-level difference between pixels q and p, the edge-constraint coefficient is ωqp = e^(−β·∇Iqp), whose value lies in the range [0, 1], where β is a tuning parameter with β = 20;
(2) The coefficient matrix A is computed from αp and ωqp; the pth row of A corresponds to (αp + Σ(p,q)∈E (ωpq + ωqp))·Dp − Σ(p,q)∈E (ωpq + ωqp)·Dq, so that the row has 5 non-zero values, corresponding to pixel p and the four-neighborhood pixels of p: the element corresponding to pixel p is αp + Σ(p,q)∈E (ωpq + ωqp), and the element corresponding to each four-neighborhood pixel q of p is −(ωpq + ωqp);
(3) The constant matrix B is computed from αp and the initial depth value D̄p; the pth row of B is αp·D̄p.
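Sub-steps (1)–(3) can be sketched as follows (illustrative assumptions: gray differences are normalised by 255, a single symmetric weight per edge plays the role of ωpq + ωqp, and a dense matrix is used for clarity; a sparse matrix would be preferred in practice):

```python
import numpy as np

def edge_weight(gray, p, q, beta=20.0):
    """Sub-step (1): edge-constraint coefficient for neighbours p and q.
    The gray difference is normalised to [0, 1] (division by 255 is an
    illustrative assumption) before applying exp(-beta*g), so the weight
    is ~1 on flat regions and falls towards 0 across strong edges."""
    g = abs(float(gray[p]) - float(gray[q])) / 255.0
    return float(np.exp(-beta * g))

def build_system(gray, alpha, d_bar, beta=20.0):
    """Sub-steps (2)-(3): assemble A (n x n) and B (n,) for an H x W
    image with four-neighbourhood edges; each row of A holds at most
    5 non-zeros (pixel p and its four neighbours), as in the claim."""
    H, W = alpha.shape
    n = H * W
    A = np.zeros((n, n))
    A[np.arange(n), np.arange(n)] = alpha.ravel()   # data-term diagonal
    B = (alpha * d_bar).ravel()                     # pth row of B: alpha_p * Dbar_p
    for i in range(H):
        for j in range(W):
            p = i * W + j
            # right and down neighbours cover every undirected edge once
            for (ni, nj) in ((i, j + 1), (i + 1, j)):
                if ni < H and nj < W:
                    q = ni * W + nj
                    w = edge_weight(gray, (i, j), (ni, nj), beta)
                    A[p, p] += w; A[q, q] += w      # smoothness term
                    A[p, q] -= w; A[q, p] -= w
    return A, B
```

For an interior pixel the resulting row carries the positive diagonal entry plus one negative entry per four-neighbour, i.e. the 5 non-zero values stated in the claim; corner and border pixels have correspondingly fewer.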
7. The depth-map global optimization method according to claim 6, characterized in that the linear system is solved by the successive over-relaxation (SOR) iterative method, yielding the optimized depth data.
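A minimal SOR solver sketch for the assembled system (the relaxation factor ω = 1.5, the iteration limit, and the tolerance are illustrative choices, not claimed values):

```python
import numpy as np

def sor_solve(A, B, omega=1.5, max_iters=500, tol=1e-10):
    """Successive over-relaxation (SOR) for AX = B, A with non-zero
    diagonal. omega in (0, 2); omega = 1 reduces to Gauss-Seidel.
    Convergence is guaranteed when A is symmetric positive definite,
    as for the diagonally dominant system of claims 5-6."""
    n = len(B)
    X = np.zeros(n)
    for _ in range(max_iters):
        X_prev = X.copy()
        for i in range(n):
            # X already holds updated values for indices < i
            sigma = A[i] @ X - A[i, i] * X[i]
            X[i] = (1 - omega) * X[i] + omega * (B[i] - sigma) / A[i, i]
        if np.max(np.abs(X - X_prev)) < tol:
            break
    return X
```

Over-relaxation (ω > 1) typically accelerates convergence over plain Gauss–Seidel on such smoothness-regularised systems.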
CN201711406513.2A 2017-12-22 2017-12-22 Global optimization method of depth map Active CN108564536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711406513.2A CN108564536B (en) 2017-12-22 2017-12-22 Global optimization method of depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711406513.2A CN108564536B (en) 2017-12-22 2017-12-22 Global optimization method of depth map

Publications (2)

Publication Number Publication Date
CN108564536A true CN108564536A (en) 2018-09-21
CN108564536B CN108564536B (en) 2020-11-24

Family

ID=63530387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711406513.2A Active CN108564536B (en) 2017-12-22 2017-12-22 Global optimization method of depth map

Country Status (1)

Country Link
CN (1) CN108564536B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109633661A * 2018-11-28 2019-04-16 杭州凌像科技有限公司 Glass detection system and method based on fusion of an RGB-D sensor and an ultrasonic sensor
CN110163898A * 2019-05-07 2019-08-23 腾讯科技(深圳)有限公司 Depth information registration method and device
CN110288558A * 2019-06-26 2019-09-27 纳米视觉(成都)科技有限公司 Super-depth-of-field image fusion method and terminal
CN111862077A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Disparity map processing method and device, storage medium and electronic device
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
WO2021195940A1 (en) * 2020-03-31 2021-10-07 深圳市大疆创新科技有限公司 Image processing method and movable platform
CN113570701A (en) * 2021-07-13 2021-10-29 聚好看科技股份有限公司 Hair reconstruction method and equipment
CN115937290A (en) * 2022-09-14 2023-04-07 北京字跳网络技术有限公司 Image depth estimation method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8774512B2 (en) * 2009-02-11 2014-07-08 Thomson Licensing Filling holes in depth maps
WO2014149403A1 (en) * 2013-03-15 2014-09-25 Pelican Imaging Corporation Extended color processing on pelican array cameras
CN104240217A (en) * 2013-06-09 2014-12-24 周宇 Binocular camera image depth information acquisition method and device
CN105023263A * 2014-04-22 2015-11-04 南京理工大学 Occlusion detection and parallax correction method based on region growing
CN106570903B * 2016-10-13 2019-06-18 华南理工大学 Visual recognition and localization method based on RGB-D camera

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109633661A * 2018-11-28 2019-04-16 杭州凌像科技有限公司 Glass detection system and method based on fusion of an RGB-D sensor and an ultrasonic sensor
CN110163898A * 2019-05-07 2019-08-23 腾讯科技(深圳)有限公司 Depth information registration method and device
CN110163898B * 2019-05-07 2023-08-11 腾讯科技(深圳)有限公司 Depth information registration method, device, system, equipment and storage medium
CN110288558A * 2019-06-26 2019-09-27 纳米视觉(成都)科技有限公司 Super-depth-of-field image fusion method and terminal
CN110288558B (en) * 2019-06-26 2021-08-31 福州鑫图光电有限公司 Super-depth-of-field image fusion method and terminal
WO2021195940A1 (en) * 2020-03-31 2021-10-07 深圳市大疆创新科技有限公司 Image processing method and movable platform
CN111862077A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Disparity map processing method and device, storage medium and electronic device
CN112597334B (en) * 2021-01-15 2021-09-28 天津帕克耐科技有限公司 Data processing method of communication data center
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
CN113570701A (en) * 2021-07-13 2021-10-29 聚好看科技股份有限公司 Hair reconstruction method and equipment
CN113570701B (en) * 2021-07-13 2023-10-24 聚好看科技股份有限公司 Hair reconstruction method and device
CN115937290A (en) * 2022-09-14 2023-04-07 北京字跳网络技术有限公司 Image depth estimation method and device, electronic equipment and storage medium
CN115937290B (en) * 2022-09-14 2024-03-22 北京字跳网络技术有限公司 Image depth estimation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108564536B (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN108564536A Global optimization method of depth map
CN104504671B (en) Method for generating virtual-real fusion image for stereo display
US7557824B2 (en) Method and apparatus for generating a stereoscopic image
CN103971408B (en) Three-dimensional facial model generating system and method
CN103236082B Accurate three-dimensional reconstruction method for two-dimensional video capturing a static scene
CN103974055B (en) 3D photo generation system and method
CN110288642A Fast three-dimensional object reconstruction method based on camera array
US20100085423A1 (en) Stereoscopic imaging
US20120182403A1 (en) Stereoscopic imaging
US10560683B2 (en) System, method and software for producing three-dimensional images that appear to project forward of or vertically above a display medium using a virtual 3D model made from the simultaneous localization and depth-mapping of the physical features of real objects
WO2011138472A1 (en) Method for generating depth maps for converting moving 2d images to 3d
CN111047709B (en) Binocular vision naked eye 3D image generation method
CN104599308B Dynamic texture mapping method based on projection
CN103247065B Naked-eye 3D video generation method
CA2540538C (en) Stereoscopic imaging
CN103634584B Multi-view 3D image synthesis method
CN104301706B Synthesis method for enhancing naked-eye stereoscopic display effect
CN109218706B (en) Method for generating stereoscopic vision image from single image
Gouiaa et al. 3D reconstruction by fusioning shadow and silhouette information
KR20170025214A (en) Method for Multi-view Depth Map Generation
Knorr et al. An image-based rendering (ibr) approach for realistic stereo view synthesis of tv broadcast based on structure from motion
Han et al. View synthesis using foreground object extraction for disparity control and image inpainting
CN110149508A Image array generation and completion method based on one-dimensional integrated imaging system
CN109003294A Virtual-real space position registration and precise matching method
CN103400339B Manufacturing method of 3D floor sticker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220329

Address after: 471000 floor 10-11, East Tower, Xiaowen Avenue science and technology building, Yibin District, Luoyang City, Henan Province

Patentee after: Luoyang Zhongke Information Industry Research Institute

Patentee after: Luoyang Zhongke Zhongchuang Space Technology Co., Ltd

Address before: 471000 room 216, building 11, phase I standardized plant, Yibin District Industrial Park, Luoyang City, Henan Province

Patentee before: LUOYANG ZHONGKE ZHONGCHUANG SPACE TECHNOLOGY CO.,LTD.
