CN108564536A - A kind of global optimization method of depth map - Google Patents

A kind of global optimization method of depth map

Info

Publication number
CN108564536A
Authority
CN
China
Prior art keywords
data
depth
parallax
visual angle
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711406513.2A
Other languages
Chinese (zh)
Other versions
CN108564536B (en)
Inventor
郭文松 (Guo Wensong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luoyang Zhongke Information Industry Research Institute
Luoyang Zhongke Zhongchuang Space Technology Co Ltd
Original Assignee
Luoyang Zhongke Public Interspace Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Zhongke Public Interspace Technology Ltd filed Critical Luoyang Zhongke Public Interspace Technology Ltd
Priority to CN201711406513.2A priority Critical patent/CN108564536B/en
Publication of CN108564536A publication Critical patent/CN108564536A/en
Application granted granted Critical
Publication of CN108564536B publication Critical patent/CN108564536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

A global optimization method for a depth map. The method makes full use of the difference information between left-view and right-view disparity data and of the edge gradient information of the color data to globally optimize the depth map. First, region filtering based on a region-growing method is applied to the initial left-view and right-view disparity data respectively, removing isolated small blocks of erroneous disparity. Then, disparity confidence coefficient data are computed from the difference between the optimized left-view and right-view disparity data using an $e^{-x}$ model; experiments show this approach to be simple and effective. Finally, the left-view disparity data and confidence coefficient data are transformed by view projection into initial depth data and confidence data at the color-camera view; the edge information of the color image is fully exploited to construct a system of linear equations in the depth data, and the optimized depth data are obtained by solving it with the successive over-relaxation iterative method. The method obtains high-accuracy depth data in real time; the optimized depth map is smooth, preserves edges, and fills large holes well.

Description

Global optimization method of depth map
Technical Field
The invention relates to the technical field of computer vision and image processing, in particular to a global optimization method of a depth map.
Background
Two images of a scene are acquired from different viewpoints, and the depth of the scene can be estimated from the positional offset of scene points between the two images. This positional offset corresponds to the disparity of image pixels, can be converted directly into scene depth, and is generally represented as a depth map. However, when a scene contains regions of missing or repetitive texture, large holes can appear in the corresponding regions of the computed depth map. Existing methods either artificially enrich the scene texture (for example by pasting marker points or projecting light spots), which can be inconvenient, infeasible, or ineffective; or they optimize the depth map directly, with approaches that tend to be complicated, to over-optimize, or to ignore practical conditions.
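For reference, for a rectified stereo pair this disparity-to-depth conversion is the standard triangulation relation (the symbols below are the conventional ones, not defined in the patent):

$$Z = \frac{f \cdot B}{d}$$

where $f$ is the focal length, $B$ the baseline between the two cameras, and $d$ the pixel disparity; depth is inversely proportional to disparity.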
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a global optimization method for a depth map which filters and denoises the depth map and fills large holes; it converts the left-view and right-view disparity data to the RGB camera view, makes full use of RGB image edge information, and is simple and efficient.
In order to achieve this purpose, the invention adopts the following specific scheme: a global optimization method of a depth map comprises the following steps:
Step one, respectively perform region filtering on the initial left-view disparity data and the initial right-view disparity data based on a region-growing method, removing erroneous disparities in isolated block regions, to obtain optimized left-view disparity data and optimized right-view disparity data; the specific process for removing block regions of erroneous disparity based on the region-growing method is as follows (a code sketch follows step S5):
S1, create two images Buff and Dst, each with the same size as the original disparity image and initialized to zero; Buff records the pixels that have already been grown, and Dst marks the image block regions that satisfy the condition;
S2, set a first threshold and a second threshold; the first threshold is a disparity difference value, and the second threshold is an area value for block regions of erroneous disparity;
S3, traverse each pixel that has not yet been grown, take the current point as a seed point, and pass it into the region-growing function;
S4, create two stacks, vectorGrowPoints and resultPoints, initialized with the seed point; pop the last point from vectorGrowPoints and, in the eight directions { -1, -1}, {0, -1}, {1, -1}, {1, 0}, {1, 1}, {0, 1}, { -1, 1}, { -1, 0} around it, compare the disparity value of each neighboring pixel that has not yet been grown with the disparity value of the seed point; if the difference is smaller than the first threshold, the condition is met: push that pixel onto both vectorGrowPoints and resultPoints and mark it as grown in Buff; repeat this process until vectorGrowPoints is empty; if the number of points in resultPoints is then smaller than the second threshold, mark them in Dst;
S5, repeat steps S3 and S4, then remove the regions marked in Dst from the disparity data to obtain the optimized left-view disparity data and optimized right-view disparity data;
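A minimal Python/NumPy sketch of steps S1 to S5 follows, under stated assumptions: disparity maps are 2-D arrays, a disparity of 0 is treated as invalid, and the names (region_filter, diff_thresh, area_thresh) are illustrative rather than taken from the patent.

```python
import numpy as np

# Eight growth directions, as listed in step S4.
DIRS = [(-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]

def region_filter(disp, diff_thresh=10, area_thresh=60, invalid=0):
    """Remove isolated small regions of similar disparity (steps S1-S5)."""
    h, w = disp.shape
    buff = np.zeros((h, w), dtype=bool)   # S1: records grown pixels (Buff)
    dst = np.zeros((h, w), dtype=bool)    # S1: marks qualifying regions (Dst)

    for y in range(h):
        for x in range(w):
            if buff[y, x] or disp[y, x] == invalid:
                continue
            # S3: use the current un-grown pixel as the seed point.
            seed = int(disp[y, x])
            grow_points = [(x, y)]        # stack vectorGrowPoints
            result_points = [(x, y)]      # stack resultPoints
            buff[y, x] = True
            while grow_points:            # S4: grow until the stack is empty
                cx, cy = grow_points.pop()
                for dx, dy in DIRS:
                    nx, ny = cx + dx, cy + dy
                    if (0 <= nx < w and 0 <= ny < h and not buff[ny, nx]
                            and disp[ny, nx] != invalid
                            and abs(int(disp[ny, nx]) - seed) < diff_thresh):
                        buff[ny, nx] = True
                        grow_points.append((nx, ny))
                        result_points.append((nx, ny))
            if len(result_points) < area_thresh:   # small isolated region
                for px, py in result_points:
                    dst[py, px] = True
    out = disp.copy()
    out[dst] = invalid                    # S5: remove marked regions
    return out
```

The two stacks mirror vectorGrowPoints and resultPoints: the first drives the growth, the second accumulates the whole region so that its area can be tested against the second threshold.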
Step two, compute left-view confidence coefficient data from the left-view disparity data optimized in step one and the right-view disparity data optimized in step one; the specific method for computing the left-view confidence coefficient data is $\alpha_p = e^{-|ld - rd|}$, where ld is the left-view disparity data optimized in step one, rd is the corresponding right-view disparity data optimized in step one, and $\alpha_p$ is the left-view confidence coefficient data;
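Step two reduces to one vectorized expression; the sketch below assumes ld and rd are already paired per pixel (the patent compares each left-view disparity with its corresponding right-view disparity; the correspondence lookup itself is left out here).

```python
import numpy as np

def confidence(ld, rd):
    """Per-pixel confidence alpha_p = exp(-|ld - rd|) from step two.

    ld : optimized left-view disparity map.
    rd : optimized right-view disparity values sampled at the pixels that
         correspond to the left-view pixels (an assumption of this sketch).
    """
    return np.exp(-np.abs(ld.astype(np.float64) - rd.astype(np.float64)))
```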
Step three, compute left-view depth data from the left-view disparity data optimized in step one and the camera parameters; at the same time, apply a view projection transform to the left-view depth data and the left-view confidence coefficient data obtained in step two, obtaining initial depth data and confidence coefficient data at the RGB camera view;
Step four, compute edge constraint coefficient data from the RGB image edge information, and then generate the optimized depth data from the edge constraint coefficient data and the initial depth data and confidence coefficient data at the RGB camera view from step three, through a global optimization objective function.
Preferably, an acquisition device is used in the process of acquiring the depth image, and the acquisition device comprises two near-infrared cameras and an RGB camera.
Preferably, in step three, the specific calculation process of the initial depth data at the RGB camera view is as follows (a code sketch follows T4):
T1, traverse the image pixels and, with the baseline and focal lengths of the left and right near-infrared cameras known, convert each disparity value into a depth value;
T2, compute the three-dimensional coordinates of the corresponding space point in the camera coordinate system from the depth value and the intrinsic parameters of the left or right near-infrared camera;
T3, compute the three-dimensional coordinates of the corresponding space point in the RGB camera coordinate system from the relative pose between the left or right near-infrared camera coordinate system and the RGB camera coordinate system, together with the stereo rectification matrix between the left and right near-infrared cameras; and T4, compute the projection and depth value of the corresponding space point on the RGB image plane from the intrinsic parameters of the RGB camera, obtaining the initial depth data at the RGB camera view.
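A sketch of T1 to T4 under assumed pinhole camera models. Every parameter name here (K_nir, R_rect, R, t, K_rgb) is an illustrative assumption, since the patent fixes only the four steps, not an API; applying the rectification matrix during the back-projection of T2 is likewise one reasonable reading.

```python
import numpy as np

def disparity_to_rgb_depth(disp, f, b, K_nir, R_rect, R, t, K_rgb, rgb_shape):
    """Project left-NIR disparity into an initial depth map at the RGB view (T1-T4)."""
    h, w = disp.shape
    depth_rgb = np.zeros(rgb_shape, dtype=np.float64)
    # Back-project a rectified pixel: undo intrinsics, then the rectification rotation.
    K_inv = np.linalg.inv(K_nir @ R_rect)
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            if d <= 0:
                continue
            z = f * b / d                                  # T1: disparity -> depth
            p_nir = z * (K_inv @ np.array([x, y, 1.0]))    # T2: 3-D point, NIR frame
            p_rgb = R @ p_nir + t                          # T3: 3-D point, RGB frame
            if p_rgb[2] <= 0:
                continue
            u, v, _ = K_rgb @ (p_rgb / p_rgb[2])           # T4: project to RGB plane
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < rgb_shape[1] and 0 <= vi < rgb_shape[0]:
                # Keep the nearest point when several map to the same pixel.
                if depth_rgb[vi, ui] == 0 or p_rgb[2] < depth_rgb[vi, ui]:
                    depth_rgb[vi, ui] = p_rgb[2]
    return depth_rgb
```

Pixels that receive no projection remain 0; these are the holes that the global optimization of step four later fills.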
Preferably, the global optimization objective function adopted in step four is:

$$\varepsilon(D) = \sum_p \alpha_p \left( D_p - \widetilde{D}_p \right)^2 + \sum_{(p,q)\in E} \omega_{qp} \left( D_p - D_q \right)^2$$

where $\widetilde{D}_p$ is the initial depth data of pixel point p on the image, $D_p$ is the depth data to be found, $\alpha_p$ is the left-view confidence coefficient data of pixel point p, $\omega_{qp}$ is the edge constraint coefficient data, q is a four-neighborhood pixel of p, and E is the set of neighboring pixel pairs; when $\varepsilon(D)$ is minimal, the optimization is finished. Assuming the image has n pixel points, to minimize $\varepsilon(D)$ the derivative of the right-hand side of the objective function with respect to each $D_p$ is set equal to zero, yielding n equations and thus a linear system $AX = B$, where A is an $n \times n$ coefficient matrix related only to $\alpha_p$ and $\omega_{qp}$, B is an $n \times 1$ constant matrix related only to $\alpha_p$ and $\widetilde{D}_p$, and X is the column vector $[D_1, D_2, \ldots, D_n]^T$ of the depth data to be determined; the optimized depth data are obtained by iterative calculation.
Preferably, for any pixel point p, the p-th row of $AX = B$ is: $\bigl(\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})\bigr) D_p - \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp}) D_q = \alpha_p \widetilde{D}_p$, and the coefficient matrix A and the constant matrix B are calculated from it.
Preferably, the specific calculation process of the coefficient matrix A and the constant matrix B is as follows (a code sketch assembling A and B follows):
(1) First take the gradient of the RGB image: $\nabla I_{qp} = I_q - I_p$ is the gray-scale difference between pixel points q and p; then $\omega_{qp} = e^{-|\nabla I_{qp}|/\beta}$, whose value range is [0, 1], where $\beta$ is a tuning parameter with $\beta = 20$;
(2) From $\alpha_p$ and $\omega_{qp}$, calculate the coefficient matrix A; the p-th row of A follows from $\bigl(\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})\bigr) D_p - \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp}) D_q$ and contains 5 nonzero values, namely the elements corresponding to pixel point p and to its four-neighborhood pixels: the element corresponding to pixel point p is $\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})$, and the element corresponding to each four-neighborhood pixel q of p is $-(\omega_{pq}+\omega_{qp})$;
(3) From $\alpha_p$ and the initial depth value $\widetilde{D}_p$, compute the constant matrix B; the p-th row of B is $\alpha_p \widetilde{D}_p$.
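The assembly of A and B can be sketched directly from rows (1) to (3). This sketch assumes SciPy for sparse storage and assumes $\omega_{pq}$ and $\omega_{qp}$ are both computed from the absolute gray difference $|I_q - I_p|$ (in which case $\omega_{pq} + \omega_{qp} = 2\omega_{qp}$); boundary pixels simply have fewer than 5 nonzero entries.

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_system(depth0, alpha, gray, beta=20.0):
    """Assemble A and B of AX = B per the rows given above (a sketch).

    depth0 : initial depth at the RGB view (D~_p), 2-D array.
    alpha  : confidence coefficients alpha_p, 2-D array.
    gray   : grayscale RGB image used for the edge weights omega_qp.
    beta   : tuning parameter (the patent uses beta = 20).
    """
    h, w = depth0.shape
    n = h * w
    idx = lambda y, x: y * w + x
    A = lil_matrix((n, n))
    B = np.zeros(n)
    g = gray.astype(np.float64)
    for y in range(h):
        for x in range(w):
            p = idx(y, x)
            diag = alpha[y, x]
            B[p] = alpha[y, x] * depth0[y, x]          # row (3): alpha_p * D~_p
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # four-neighborhood
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                # omega_qp = exp(-|I_q - I_p| / beta); with a symmetric gray
                # difference, omega_pq + omega_qp = 2 * omega_qp.
                wgt = np.exp(-abs(g[ny, nx] - g[y, x]) / beta)
                diag += 2.0 * wgt
                A[p, idx(ny, nx)] = -2.0 * wgt
            A[p, p] = diag
    return A.tocsr(), B
```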
Preferably, the linear equation system is solved by the successive over-relaxation (SOR) iterative method to obtain the optimized depth data.
Beneficial effects:
(1) The invention provides a global optimization method for a depth map based on an acquisition device comprising two near-infrared (NIR) cameras and one visible-light (RGB) camera; the near-infrared cameras form a binocular stereo vision system that acquires the depth map in real time and registers it with the RGB image acquired by the visible-light camera. The method makes full use of the global information of the left-view and right-view disparity data and of the edge constraints of the color data to globally optimize the depth map, converting the left-view and right-view disparity data to the RGB camera view and exploiting the RGB image edge information. When computing the confidence coefficient data, an $e^{-x}$ model is applied directly to the left-view and right-view disparity data, which experiments show to be simple and effective. In the prior art, by contrast, confidence is determined by fitting a quadratic curve to the matching costs of three adjacent integer disparity values of a pixel; that approach must recompute the disparity matching cost, fit the three matching-cost values, and determine $\alpha_p$ by judging the orientation of the curve, so the present method is simpler. The effect shows as follows: the optimized depth map is smooth, edges are preserved, and large holes are filled well;
(2) The invention performs region filtering on the initial left-view disparity data and the initial right-view disparity data with a region-growing method; experiments show that the marking is completed after a single traversal of the image, and that erroneous disparities in small isolated regions, whose disparity values are similar internally but clearly different from the surrounding disparities, are removed effectively.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a depth map with large holes before optimization;
FIG. 3 is a depth map optimized by the global optimization method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to the flowchart of FIG. 1: the intrinsic and extrinsic parameters of all cameras are known, and the initial left-view disparity data and initial right-view disparity data are calculated by the prior art, which is not described again here. The global optimization method of a depth map is used in the process of acquiring a depth image based on an acquisition device, the acquisition device comprising two near-infrared cameras and an RGB camera, and the method comprises the following steps:
Step one, respectively perform region filtering on the initial left-view disparity data and the initial right-view disparity data based on a region-growing method, removing erroneous disparities in isolated block regions, to obtain optimized left-view disparity data and optimized right-view disparity data. The disparity data as generated have usually undergone a left-right consistency check that removes a large number of mismatched point disparities, but mismatched disparities clustered in small regions remain; the method therefore first applies region filtering to the left-view and right-view disparity data separately, removing small isolated regions of similar disparity values and further improving the disparity quality. The specific process for removing block regions of mismatched disparity based on the region-growing method is as follows:
S1, create two images Buff and Dst, each with the same size as the original disparity image and initialized to zero; Buff records the pixels that have already been grown, and Dst marks the image block regions that satisfy the condition;
S2, set a first threshold and a second threshold; the first threshold is a disparity difference value, and the second threshold is an area value for block regions of erroneous disparity; preferably, the first threshold is 10 and the second threshold is 60;
S3, traverse each pixel that has not yet been grown, take the current point as a seed point, and pass it into the region-growing function;
S4, create two stacks, vectorGrowPoints and resultPoints, initialized with the seed point; pop the last point from vectorGrowPoints and, in the eight directions { -1, -1}, {0, -1}, {1, -1}, {1, 0}, {1, 1}, {0, 1}, { -1, 1}, { -1, 0} around it, compare the disparity value of each neighboring pixel that has not yet been grown with the disparity value of the seed point; if the difference is smaller than the first threshold, the condition is met: push that pixel onto both vectorGrowPoints and resultPoints and mark it as grown in Buff; repeat this process until vectorGrowPoints is empty; if the number of points in resultPoints is then smaller than the second threshold, mark them in Dst;
S5, repeat steps S3 and S4, then remove the regions marked in Dst from the disparity data to obtain the optimized left-view disparity data and optimized right-view disparity data;
Step two, compute left-view confidence coefficient data from the left-view disparity data optimized in step one and the right-view disparity data optimized in step one; the specific method for computing the left-view confidence coefficient data is $\alpha_p = e^{-|ld - rd|}$, where ld is the left-view disparity data optimized in step one, rd is the corresponding right-view disparity data optimized in step one, and $\alpha_p$ is the left-view confidence coefficient data. The confidence coefficient data play a decisive role in the optimization effect; the prior art determines point-disparity confidence by fitting a matching-cost curve, whose implementation is cumbersome, whereas the present method of computing confidence coefficient data is simple and efficient. The reliability of the $\alpha_p$ values is closely related to the accuracy of the disparity data: small blocks of erroneous disparity lead, after optimization, to large blocks of erroneous depth data in the corresponding region, which is why the invention removes block-shaped disparity errors with the region-growing method to improve the disparity quality;
Step three, compute left-view depth data from the left-view disparity data optimized in step one and the camera parameters; at the same time, apply a view projection transform to the left-view depth data and the left-view confidence coefficient data obtained in step two, obtaining initial depth data and confidence coefficient data at the RGB camera view. The specific calculation process of the initial depth data at the RGB camera view is as follows:
T1, traverse the image pixels and, with the baseline and focal lengths of the left and right near-infrared cameras known, convert each disparity value into a depth value;
T2, compute the three-dimensional coordinates of the corresponding space point in the camera coordinate system from the depth value and the intrinsic parameters of the left or right near-infrared camera;
T3, compute the three-dimensional coordinates of the corresponding space point in the RGB camera coordinate system from the relative pose between the left or right near-infrared camera coordinate system and the RGB camera coordinate system, together with the stereo rectification matrix between the left and right near-infrared cameras; T4, compute the projection and depth value of the corresponding space point on the RGB image plane from the intrinsic parameters of the RGB camera, obtaining the initial depth data at the RGB camera view;
Step four, compute edge constraint coefficient data from the RGB image edge information, and then generate the optimized depth data from the edge constraint coefficient data and the initial depth data and confidence coefficient data at the RGB camera view from step three, through a global optimization objective function; the adopted global optimization objective function is:

$$\varepsilon(D) = \sum_p \alpha_p \left( D_p - \widetilde{D}_p \right)^2 + \sum_{(p,q)\in E} \omega_{qp} \left( D_p - D_q \right)^2$$

where $\widetilde{D}_p$ is the initial depth data of pixel point p on the image, $D_p$ is the depth data to be found, $\alpha_p$ is the left-view confidence coefficient data of pixel point p, $\omega_{qp}$ is the edge constraint coefficient data, q is a four-neighborhood pixel of p, and E is the set of neighboring pixel pairs; when $\varepsilon(D)$ is minimal, the optimization is finished. Assuming the image has n pixel points, to minimize $\varepsilon(D)$ the derivative of the right-hand side of the objective function with respect to each $D_p$ is set equal to zero, yielding n equations and thus a linear system $AX = B$, where A is an $n \times n$ coefficient matrix related only to $\alpha_p$ and $\omega_{qp}$, B is an $n \times 1$ constant matrix related only to $\alpha_p$ and $\widetilde{D}_p$, and X is the column vector $[D_1, D_2, \ldots, D_n]^T$ of the depth data to be determined; the optimized depth data are obtained by iterative calculation.
For any pixel point p, the p-th row of $AX = B$ is: $\bigl(\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})\bigr) D_p - \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp}) D_q = \alpha_p \widetilde{D}_p$, and the coefficient matrix A and the constant matrix B are calculated from it.
After the initial depth data are obtained in step three, the coefficient matrix and the constant matrix are calculated as follows. For an image at megapixel resolution the amount of depth data reaches a million values and the coefficient matrix grows quadratically with it; to meet the requirement of real-time execution on a GPU, the invention therefore solves the linear equation system with the successive over-relaxation (SOR) iterative method to complete the depth-data optimization, as shown in FIG. 2 and FIG. 3: FIG. 2 is a depth map with large holes before optimization; FIG. 3 is the depth map optimized by the global optimization method of the invention. The specific calculation process of the coefficient matrix A and the constant matrix B is as follows:
(1) First take the gradient of the RGB image: $\nabla I_{qp} = I_q - I_p$ is the gray-scale difference between pixel points q and p; then $\omega_{qp} = e^{-|\nabla I_{qp}|/\beta}$, whose value range is [0, 1], where $\beta$ is a tuning parameter with $\beta = 20$; this step yields $\omega_{qp}$, whose effect on the depth result is to keep depth edges from being over-smoothed;
(2) From $\alpha_p$ and $\omega_{qp}$, calculate the coefficient matrix A; the p-th row of A follows from $\bigl(\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})\bigr) D_p - \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp}) D_q$ and contains 5 nonzero values, namely the elements corresponding to pixel point p and to its four-neighborhood pixels: the element corresponding to pixel point p is $\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})$, and the element corresponding to each four-neighborhood pixel q of p is $-(\omega_{pq}+\omega_{qp})$;
(3) From $\alpha_p$ and the initial depth value $\widetilde{D}_p$, compute the constant matrix B; the p-th row of B is $\alpha_p \widetilde{D}_p$;
(4) Solve the linear equation system by the SOR method to obtain the optimized depth data (a minimal solver sketch follows).
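The patent names the SOR method but not its parameters, so the following generic SOR sweep over the CSR system assembled above is only a sketch; the relaxation factor and iteration budget are illustrative, and a real-time version would run such sweeps in parallel on the GPU.

```python
import numpy as np

def sor_solve(A, B, x0, relax=1.5, iters=100, tol=1e-4):
    """Successive over-relaxation for AX = B (sketch; relax must lie in (0, 2)).

    A  : sparse CSR coefficient matrix (n x n); diagonally dominant here,
         since the diagonal collects alpha_p plus all edge weights.
    x0 : starting guess, e.g. the flattened initial depth map.
    """
    x = x0.astype(np.float64).copy()
    for _ in range(iters):
        x_old = x.copy()
        for p in range(A.shape[0]):
            start, end = A.indptr[p], A.indptr[p + 1]
            cols, vals = A.indices[start:end], A.data[start:end]
            diag, sigma = 0.0, 0.0
            for c, v in zip(cols, vals):
                if c == p:
                    diag = v
                else:
                    sigma += v * x[c]   # uses already-updated entries (Gauss-Seidel sweep)
            if diag != 0.0:
                x[p] = (1.0 - relax) * x[p] + relax * (B[p] - sigma) / diag
        if np.linalg.norm(x - x_old) < tol * (np.linalg.norm(x_old) + 1e-12):
            break
    return x
```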
The invention provides a global optimization method for a depth map that globally optimizes the initial depth of a scene and achieves real-time, high-precision depth acquisition. It mainly addresses the large numbers of holes that appear in the computed disparity data when scene texture is missing or repetitive; hair, for example, has uniform texture and, even when an active light source projects structured light, tends to absorb it and lack features. The method can be used in cases such as three-dimensional reconstruction and somatosensory interaction. In three-dimensional reconstruction it supplies high-quality depth data at each view for real-time high-precision reconstruction and can simplify subsequent optimization processing. In somatosensory interaction, a realistic picture is presented to the other party by building models of the different interacting persons.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A global optimization method for a depth map is characterized by comprising the following steps:
step one, respectively performing region filtering on initial left-view disparity data and initial right-view disparity data based on a region-growing method, removing erroneous disparities in isolated block regions, and obtaining optimized left-view disparity data and optimized right-view disparity data; the specific process for removing block regions of erroneous disparity based on the region-growing method being:
S1, creating two images Buff and Dst, each with the same size as the original disparity image and initialized to zero, Buff recording the pixels that have already been grown and Dst marking the image block regions that satisfy the condition;
S2, setting a first threshold and a second threshold, the first threshold being a disparity difference value and the second threshold being an area value for block regions of erroneous disparity;
S3, traversing each pixel that has not yet been grown, taking the current point as a seed point, and passing it into the region-growing function;
S4, creating two stacks, vectorGrowPoints and resultPoints, initialized with the seed point; popping the last point from vectorGrowPoints and, in the eight directions { -1, -1}, {0, -1}, {1, -1}, {1, 0}, {1, 1}, {0, 1}, { -1, 1}, { -1, 0} around it, comparing the disparity value of each neighboring pixel that has not yet been grown with the disparity value of the seed point; if the difference is smaller than the first threshold, the condition is met: pushing that pixel onto both vectorGrowPoints and resultPoints and marking it as grown in Buff; repeating this process until vectorGrowPoints is empty; if the number of points in resultPoints is then smaller than the second threshold, marking them in Dst;
S5, repeating steps S3 and S4, then removing the regions marked in Dst from the disparity data to obtain the optimized left-view disparity data and optimized right-view disparity data;
step two, computing left-view confidence coefficient data from the left-view disparity data optimized in step one and the right-view disparity data optimized in step one, the specific method for computing the left-view confidence coefficient data being $\alpha_p = e^{-|ld - rd|}$, where ld is the left-view disparity data optimized in step one, rd is the corresponding right-view disparity data optimized in step one, and $\alpha_p$ is the left-view confidence coefficient data;
step three, computing left-view depth data from the left-view disparity data optimized in step one and the camera parameters, and simultaneously applying a view projection transform to the left-view depth data and the left-view confidence coefficient data obtained in step two, obtaining initial depth data and confidence coefficient data at the RGB camera view;
and step four, computing edge constraint coefficient data from the RGB image edge information, and then generating the optimized depth data from the edge constraint coefficient data and the initial depth data and confidence coefficient data at the RGB camera view from step three, through a global optimization objective function.
2. The global optimization method of a depth map as claimed in claim 1, characterized in that: an acquisition device is used in the process of acquiring the depth image, and the acquisition device comprises two near-infrared cameras and an RGB camera.
3. The global optimization method of a depth map as claimed in claim 1, characterized in that: in step three, the specific calculation process of the initial depth data at the RGB camera view is as follows:
T1, traversing the image pixels and, with the baseline and focal lengths of the left and right near-infrared cameras known, converting each disparity value into a depth value;
T2, computing the three-dimensional coordinates of the corresponding space point in the camera coordinate system from the depth value and the intrinsic parameters of the left or right near-infrared camera;
T3, computing the three-dimensional coordinates of the corresponding space point in the RGB camera coordinate system from the relative pose between the left or right near-infrared camera coordinate system and the RGB camera coordinate system, together with the stereo rectification matrix between the left and right near-infrared cameras;
and T4, computing the projection and depth value of the corresponding space point on the RGB image plane from the intrinsic parameters of the RGB camera, obtaining the initial depth data at the RGB camera view.
4. The global optimization method of a depth map as claimed in claim 1, characterized in that: the global optimization objective function adopted in step four is:

$$\varepsilon(D) = \sum_p \alpha_p \left( D_p - \widetilde{D}_p \right)^2 + \sum_{(p,q)\in E} \omega_{qp} \left( D_p - D_q \right)^2$$

where $\widetilde{D}_p$ is the initial depth data of pixel point p on the image, $D_p$ is the depth data to be found, $\alpha_p$ is the left-view confidence coefficient data of pixel point p, $\omega_{qp}$ is the edge constraint coefficient data, and q is a four-neighborhood pixel of p; when $\varepsilon(D)$ is minimal, the optimization is finished; assuming the image has n pixel points, to minimize $\varepsilon(D)$ the derivative of the right-hand side of the objective function with respect to each $D_p$ is set equal to zero, yielding n equations and thus a linear system $AX = B$, where A is an $n \times n$ coefficient matrix related only to $\alpha_p$ and $\omega_{qp}$, B is an $n \times 1$ constant matrix related only to $\alpha_p$ and $\widetilde{D}_p$, and X is the column vector $[D_1, D_2, \ldots, D_n]^T$ of the depth data to be determined; the optimized depth data are obtained by iterative calculation.
5. The global optimization method of a depth map as claimed in claim 4, characterized in that: for any pixel point p, the p-th row of $AX = B$ is $\bigl(\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})\bigr) D_p - \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp}) D_q = \alpha_p \widetilde{D}_p$, and the coefficient matrix A and the constant matrix B are calculated from it.
6. The global optimization method of a depth map as claimed in claim 5, characterized in that: the specific calculation process of the coefficient matrix A and the constant matrix B is as follows:
(1) first taking the gradient of the RGB image: $\nabla I_{qp} = I_q - I_p$ is the gray-scale difference between pixel points q and p; then $\omega_{qp} = e^{-|\nabla I_{qp}|/\beta}$, whose value range is [0, 1], where $\beta$ is a tuning parameter with $\beta = 20$;
(2) from $\alpha_p$ and $\omega_{qp}$, calculating the coefficient matrix A; the p-th row of A follows from $\bigl(\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})\bigr) D_p - \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp}) D_q$ and contains 5 nonzero values, namely the elements corresponding to pixel point p and to its four-neighborhood pixels: the element corresponding to pixel point p is $\alpha_p + \sum_{(p,q)\in E}(\omega_{pq}+\omega_{qp})$, and the element corresponding to each four-neighborhood pixel q of p is $-(\omega_{pq}+\omega_{qp})$;
(3) from $\alpha_p$ and the initial depth value $\widetilde{D}_p$, computing the constant matrix B, the p-th row of B being $\alpha_p \widetilde{D}_p$.
7. The global optimization method of a depth map as claimed in claim 6, characterized in that: the linear equation system is solved by the successive over-relaxation (SOR) iterative method to obtain the optimized depth data.
CN201711406513.2A 2017-12-22 2017-12-22 Global optimization method of depth map Active CN108564536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711406513.2A CN108564536B (en) 2017-12-22 2017-12-22 Global optimization method of depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711406513.2A CN108564536B (en) 2017-12-22 2017-12-22 Global optimization method of depth map

Publications (2)

Publication Number Publication Date
CN108564536A true CN108564536A (en) 2018-09-21
CN108564536B CN108564536B (en) 2020-11-24

Family

ID=63530387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711406513.2A Active CN108564536B (en) 2017-12-22 2017-12-22 Global optimization method of depth map

Country Status (1)

Country Link
CN (1) CN108564536B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8774512B2 (en) * 2009-02-11 2014-07-08 Thomson Licensing Filling holes in depth maps
WO2014149403A1 (en) * 2013-03-15 2014-09-25 Pelican Imaging Corporation Extended color processing on pelican array cameras
CN104240217A (en) * 2013-06-09 2014-12-24 周宇 Binocular camera image depth information acquisition method and device
CN105023263A (en) * 2014-04-22 2015-11-04 南京理工大学 Shield detection and parallax correction method based on region growing
CN106570903B (en) * 2016-10-13 2019-06-18 华南理工大学 A kind of visual identity and localization method based on RGB-D camera

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109633661A (en) * 2018-11-28 2019-04-16 杭州凌像科技有限公司 A kind of glass inspection systems merged based on RGB-D sensor with ultrasonic sensor and method
CN110163898B (en) * 2019-05-07 2023-08-11 腾讯科技(深圳)有限公司 Depth information registration method, device, system, equipment and storage medium
CN110163898A (en) * 2019-05-07 2019-08-23 腾讯科技(深圳)有限公司 A kind of depth information method for registering and device
CN110288558A (en) * 2019-06-26 2019-09-27 纳米视觉(成都)科技有限公司 A kind of super depth image fusion method and terminal
CN110288558B (en) * 2019-06-26 2021-08-31 福州鑫图光电有限公司 Super-depth-of-field image fusion method and terminal
CN113450391A (en) * 2020-03-26 2021-09-28 华为技术有限公司 Method and equipment for generating depth map
WO2021195940A1 (en) * 2020-03-31 2021-10-07 深圳市大疆创新科技有限公司 Image processing method and movable platform
CN111862077A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Disparity map processing method and device, storage medium and electronic device
CN111862077B (en) * 2020-07-30 2024-08-30 浙江大华技术股份有限公司 Parallax map processing method and device, storage medium and electronic device
CN112597334B (en) * 2021-01-15 2021-09-28 天津帕克耐科技有限公司 Data processing method of communication data center
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
CN113570701A (en) * 2021-07-13 2021-10-29 聚好看科技股份有限公司 Hair reconstruction method and equipment
CN113570701B (en) * 2021-07-13 2023-10-24 聚好看科技股份有限公司 Hair reconstruction method and device
CN115937290A (en) * 2022-09-14 2023-04-07 北京字跳网络技术有限公司 Image depth estimation method and device, electronic equipment and storage medium
CN115937290B (en) * 2022-09-14 2024-03-22 北京字跳网络技术有限公司 Image depth estimation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108564536B (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN108564536B (en) Global optimization method of depth map
CN110363858B (en) Three-dimensional face reconstruction method and system
KR102504246B1 (en) Methods and Systems for Detecting and Combining Structural Features in 3D Reconstruction
CN103236082B (en) Towards the accurate three-dimensional rebuilding method of two-dimensional video of catching static scene
CN109308719B (en) Binocular parallax estimation method based on three-dimensional convolution
Tam et al. 3D-TV content generation: 2D-to-3D conversion
JP5561781B2 (en) Method and system for converting 2D image data into stereoscopic image data
RU2382406C1 (en) Method of improving disparity map and device for realising said method
CN102930530B (en) Stereo matching method of double-viewpoint image
CN111988593B (en) Three-dimensional image color correction method and system based on depth residual optimization
CN107622480B (en) Kinect depth image enhancement method
CN103971408A (en) Three-dimensional facial model generating system and method
CN110853151A (en) Three-dimensional point set recovery method based on video
WO2017156905A1 (en) Display method and system for converting two-dimensional image into multi-viewpoint image
KR101714224B1 (en) 3 dimension image reconstruction apparatus and method based on sensor fusion
CN109218706B (en) Method for generating stereoscopic vision image from single image
CN112637582B (en) Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN104680544B (en) Variation scene flows method of estimation based on the regularization of 3-dimensional flow field
CN104301706B (en) A kind of synthetic method for strengthening bore hole stereoscopic display effect
CN104331890B (en) A kind of global disparity method of estimation and system
CN114494589A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN103247065A (en) Three-dimensional naked eye video generating method
KR20170025214A (en) Method for Multi-view Depth Map Generation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220329

Address after: 471000 floor 10-11, East Tower, Xiaowen Avenue science and technology building, Yibin District, Luoyang City, Henan Province

Patentee after: Luoyang Zhongke Information Industry Research Institute

Patentee after: Luoyang Zhongke Zhongchuang Space Technology Co., Ltd

Address before: 471000 room 216, building 11, phase I standardized plant, Yibin District Industrial Park, Luoyang City, Henan Province

Patentee before: LUOYANG ZHONGKE ZHONGCHUANG SPACE TECHNOLOGY CO.,LTD.

TR01 Transfer of patent right