CN106257537A - A spatial depth extraction method based on light field information - Google Patents

A spatial depth extraction method based on light field information

Info

Publication number
CN106257537A
CN106257537A
Authority
CN
China
Prior art keywords
spatial depth
reference viewpoint
light field information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610578644.8A
Other languages
Chinese (zh)
Other versions
CN106257537B (en)
Inventor
李晓彤 (Li Xiaotong)
马壮 (Ma Zhuang)
岑兆丰 (Cen Zhaofeng)
兰顺 (Lan Shun)
陈灏 (Chen Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201610578644.8A
Publication of CN106257537A
Application granted
Publication of CN106257537B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a spatial depth extraction method based on light field information, comprising the following steps: Step 1, select a reference viewpoint in the four-dimensional light field data and calculate the spatial depth of the image edge components at the reference viewpoint; Step 2, perform region segmentation on the image at the reference viewpoint, dividing it into several regions according to color or brightness homogeneity; Step 3, revise the spatial depth along the edges of each segmented region, and interpolate the spatial depth of each region's interior from the revised values. By means of tabu search, region segmentation and depth interpolation, the invention avoids the difficulty of selecting a description function and the uncertainty of running time faced by methods that optimize a spatial depth description function, and achieves fast, accurate extraction of spatial depth.

Description

A spatial depth extraction method based on light field information
Technical field
The present invention relates to computer vision and computational photography, and in particular to a spatial depth extraction method based on light field information.
Background art
The acquisition of light field information belongs to computational photography, while spatial depth extraction belongs to computer vision. The idea of recording the light field was first proposed by Gabriel Lippmann in a paper published in 1908. In 1996, M. Levoy and P. Hanrahan proposed light field rendering theory, reducing the plenoptic function to four dimensions; the reduced function is called the light field function. There are two ways to acquire light field information: adding a microlens array to a conventional camera, or arranging several cameras into an array. The two approaches follow different routes to the same goal, and both yield a sampling of the light field. Compared with conventional imaging, light field imaging records not only the intensity at each sensor pixel but also the direction of the incident rays. Light field imaging can therefore obtain horizontal and vertical parallax images of the imaged object, and thus contains the spatial depth information of the object.
Most existing methods for extracting spatial depth from light field information rely on optimizing a spatial depth description function. Because they require iterative optimization, their running time is difficult to estimate accurately and tends to be long. Moreover, the accuracy of such methods depends heavily on how well the chosen description function matches the actual scene: a strong, effective description function must be selected, and it is difficult to find one that suits every scene.
The patent document with publication number CN104899870A discloses a depth estimation method based on the distribution of light field data. Drawing on the characteristics of light field data, the method extracts a focus-related tensor from a series of refocused light field images obtained by rearranging the pixels of the input light field image, and uses it to estimate scene depth. It further builds a multivariate confidence model from the variation of this tensor with depth and from the gradient information of the central sub-aperture texture map, weighs the quality of the initial depth estimate at each point, and optimizes the preliminary result. The method is computationally intensive and its running time is long.
Summary of the invention
To address the time complexity and accuracy problems faced by spatial depth extraction methods based on function optimization, the present invention combines light field information and proposes a spatial depth extraction method that extracts the spatial depth of the entire image in a single pass, without iterative computation.
A spatial depth extraction method based on light field information comprises the following steps:
Step 1: select a reference viewpoint in the four-dimensional light field data, and calculate the spatial depth of the image edge components at the reference viewpoint;
Step 2: perform region segmentation on the image at the reference viewpoint, dividing it into several regions according to color or brightness homogeneity;
Step 3: revise the spatial depth along the edges of each segmented region, and interpolate the spatial depth of each region's interior from the revised values.
In step 1, the reference viewpoint may be the central viewpoint or an edge viewpoint. To obtain the image edge components at the reference viewpoint, only the image information of two groups of other viewpoints collinear with the reference viewpoint is needed to calculate the spatial depth of the image edge components at the reference viewpoint.
Spatial depth refers to the distance from the point on the object corresponding to a given pixel in the field of view to the light field recording device.
Unlike traditional methods, the present invention does not require a K × K array of viewpoints, but only 2K−1 viewpoints: K viewpoints lie on one straight line, another K viewpoints lie on a second straight line, and the intersection of the two lines is the reference viewpoint, for a total of 2K−1 viewpoints. Moreover, there is no special requirement on the angle between the two lines of viewpoints.
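For illustration only (this sketch is not part of the original disclosure), the cross-shaped viewpoint selection can be expressed as follows in Python for the axis-aligned case of Fig. 3a; the array layout R[u, v, y, x] and the function name are assumptions of the example:

```python
import numpy as np

def cross_viewpoints(R, u0, v0):
    """Select the 2K-1 viewpoints used by the method: the K viewpoints
    on the line v = v0, the K viewpoints on the line u = u0, and their
    intersection (u0, v0), which is the reference viewpoint.

    R: 4-D light field array, assumed indexed as R[u, v, y, x].
    """
    ref = R[u0, v0]      # image at the reference viewpoint
    row = R[:, v0]       # K viewpoints collinear with (u0, v0) in u
    col = R[u0, :]       # K viewpoints collinear with (u0, v0) in v
    return ref, row, col
```

In the general case of Fig. 3c the two lines may intersect at any angle; only collinearity of each viewpoint group with the reference viewpoint is required.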
The specific steps of step 1 are as follows:
Step 1-1: acquire the light field information and decompose the four-dimensional light field into the image of each viewpoint; apply a gradient operation to the image at the reference viewpoint to extract its image edge components;
Step 1-2: combining the reference viewpoint with the imaging information of the other viewpoints in the same row and the same column, calculate the slopes of the spatial depth characteristic lines of the image edge components at the reference viewpoint;
Step 1-3: calculate the spatial depth of the image edge components at the reference viewpoint from the characteristic line slopes.
In step 1-1, the method places no requirement on how the light field information is acquired; any data in four-dimensional light field form is valid.
In step 1-2, for edge positions whose gradient value exceeds the noise threshold, the original pixel and its adjacent pixels are matched by searching across viewpoints. A group of matched positions can be found in both the X and Y viewpoint directions, forming characteristic lines that encode the spatial depth. The spatial depth characteristic lines are computed as follows: tabu search is first used to narrow the search range for points on the characteristic line; within that range, the point with the smallest difference from the neighboring points already on the line is found and appended to the line's point sequence, and the new point sequence then determines the next search range.
The noise threshold T can be set manually and is typically chosen as 0 ≤ T ≤ 0.25·Gmax, where Gmax is the maximum gray level.
Tabu search here means narrowing the current search range according to the result found at the previous viewpoint position, which greatly improves efficiency.
In step 2, the boundaries between the regions produced by region segmentation are the image edge components obtained at the reference viewpoint in step 1. The region segmentation uses quadtree decomposition and merging, which effectively improves the speed and robustness of the program.
The criterion for quadtree splitting and merging is whether the consistency of the pixels in a region exceeds a limit value: for grayscale images, consistency is the maximum difference of pixel gray levels; for color images, consistency is the maximum difference between a pixel's color and the average color.
In step 3, based on the spatial depth of the image edge components at the reference viewpoint obtained in step 1 and the region segmentation obtained in step 2, the spatial depth along each region boundary is revised, to correct boundary depth errors caused by spatial occlusion between objects. Then, using the revised boundary depths, spatial depth interpolation is applied to the non-boundary parts to obtain the spatial depth of the whole region.
The specific steps of step 3 are as follows:
Step 3-1: for each region, traverse its boundary, recording the positions of the boundary pixels in order, and record their corresponding spatial depth values in the same order according to the result of step 1.
Step 3-2: apply a difference operation to the spatial depth values of the region boundary arranged in order;
Step 3-3: apply an integration operation to the result of the difference operation.
In the integration of step 3-3, difference components exceeding a threshold are clamped to that threshold, so that the operations of step 3 eliminate abrupt changes of spatial depth along region boundaries.
The present invention does not depend on either of the two light field sampling schemes described in the background art. By means of tabu search, region segmentation and depth interpolation, it avoids the difficulty of selecting a description function and the uncertainty of running time faced by methods that optimize a spatial depth description function, and achieves fast, accurate extraction of spatial depth.
Brief description of the drawings
Fig. 1 is a flowchart of the spatial depth extraction method based on light field information;
Fig. 2 is a schematic diagram of the light field information; Figs. 2a and 2b respectively show the rays from the same scene received at different viewpoints;
Fig. 3 shows how the reference viewpoint is chosen: Fig. 3a is the case where the two lines are orthogonal and bisect each other, Fig. 3b the case where the two lines are orthogonal but do not bisect each other, and Fig. 3c the case where the two lines intersect at an arbitrary angle;
Fig. 4 is a schematic diagram of the tabu search.
Detailed description of the invention
The method of the present invention is described in detail below with reference to specific embodiments and the accompanying drawings.
As shown in Fig. 1, the steps of spatial depth extraction based on light field information are as follows:
(1) Calculating the spatial depth of the image edge components at the reference viewpoint
Select the reference viewpoint in the four-dimensional light field data. The information recorded by light field imaging is four-dimensional, and the recording principle is shown in Fig. 2: an image I(x, y) is recorded at different image positions (x, y) through different viewpoints (u, v), yielding the four-dimensional light field data R(u, v, x, y), where R(u, v, x, y) is the intensity of the ray at the point (u, v, x, y).
The invention is not restricted to any particular acquisition mode of the light field information: for all acquired light field data of the form represented in Fig. 2, the method can extract the spatial depth.
Assume u_min ≤ u ≤ u_max, v_min ≤ v ≤ v_max, x_min ≤ x ≤ x_max and y_min ≤ y ≤ y_max, and choose a viewpoint (u0, v0) within the defined viewpoint range (u, v) as the reference viewpoint. The present invention does not require a K × K array of viewpoints but only 2K−1 viewpoints: K viewpoints lie on one straight line, another K viewpoints lie on a second straight line, and the two lines intersect at the reference viewpoint; therefore only 2K−1 viewpoints need to be arranged. The three admissible configurations of the reference viewpoint are shown in Fig. 3.
Apply the gradient operation G(u0, v0, x, y) to the image at the reference viewpoint:

$$G(u_0,v_0,x,y)=\sum_{(x_n,y_n)\in N(x,y)}\left[R(u_0,v_0,x,y)-R(u_0,v_0,x_n,y_n)\right]/4,$$
where N(x, y) is the four-neighborhood of (x, y):

$$N(x,y)=\{(x_n,y_n)\;\big|\;|x_n-x|+|y_n-y|=1\},$$
The edge region E(u0, v0) is defined as

$$E(u_0,v_0)=\{(u_0,v_0,x,y)\;\big|\;|G(u_0,v_0,x,y)|>T\},$$
where T is the noise threshold; its value can be set manually and is typically chosen as 0 ≤ T ≤ 0.25·Gmax, with Gmax the maximum gray level.
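As an illustrative sketch (not part of the original disclosure), the gradient operation and edge set above can be computed as follows; NumPy and the 2-D grayscale layout are assumptions of the example:

```python
import numpy as np

def edge_components(img, T):
    """Four-neighbour gradient G and edge mask |G| > T, following the
    two formulas above. img: 2-D grayscale array; T: noise threshold,
    e.g. 0.25 * img.max()."""
    H, W = img.shape
    img = img.astype(float)
    padded = np.pad(img, 1, mode='edge')
    G = np.zeros((H, W))
    # sum the differences to the four neighbours, divided by 4
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        G += (img - padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]) / 4.0
    E = np.abs(G) > T    # edge set E(u0, v0)
    return G, E
```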
Within the edge region, for a fixed (v0, ye) the pairs (u, x) form a family of characteristic lines passing through the point (u0, xe); for a fixed (u0, xe), the pairs (v, y) form a family of characteristic lines passing through the point (v0, ye). Here (u0, v0, xe, ye) ∈ E(u0, v0) is a point of the edge region.
Denote the two families of characteristic lines through the edge point (u0, v0, xe, ye) by Rux(u, v0, x, ye) and Rvy(u0, v, xe, y). The characteristic lines x(u) and y(v) can be obtained iteratively:
where

$$x(u)\Big|_{\substack{v=v_0,\ y=y_e\\ u,\,u-1\in(u_{min},u_{max})}}=\arg\min_{x(u)}\Big[2R(u,v_0,x(u),y_e)-R(u-1,v_0,x(u-1),y_e)-R(u_0,v_0,x_e,y_e)\Big],$$

$$y(v)\Big|_{\substack{u=u_0,\ x=x_e\\ v,\,v-1\in(v_{min},v_{max})}}=\arg\min_{y(v)}\Big[2R(u_0,v,x_e,y(v))-R(u_0,v-1,x_e,y(v-1))-R(u_0,v_0,x_e,y_e)\Big].$$
As shown in Fig. 4, the iterative tabu search process does not need to traverse the whole image: each time a point on the characteristic line is determined, a search range of 3 to 5 pixels is locked around the next iteration position, which greatly shortens the time needed to compute the characteristic line.
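A minimal sketch of this tabu search (an illustration, not the original disclosure) is given below for the x(u) line at fixed (v0, ye); the array layout R[u, v, y, x], the 5-pixel window, and the use of the previously matched neighbor in both sweep directions are assumptions of the example:

```python
import numpy as np

def trace_characteristic_line(R, u0, v0, xe, ye, half_window=2):
    """Trace x(u) across viewpoints, restricting each step's search to a
    small window around the previous match (the tabu search above).
    R is assumed indexed as R[u, v, y, x]; the residual follows the
    iteration formula for x(u)."""
    U, X = R.shape[0], R.shape[3]
    xs = {u0: xe}
    ref_val = R[u0, v0, ye, xe]
    # sweep outward from the reference viewpoint in both directions
    for u in list(range(u0 + 1, U)) + list(range(u0 - 1, -1, -1)):
        u_prev = u - 1 if u > u0 else u + 1
        x_prev = xs[u_prev]
        lo, hi = max(0, x_prev - half_window), min(X - 1, x_prev + half_window)
        cand = np.arange(lo, hi + 1)             # 3-5 candidate pixels
        resid = np.abs(2 * R[u, v0, ye, cand]
                       - R[u_prev, v0, ye, x_prev] - ref_val)
        xs[u] = int(cand[np.argmin(resid)])      # point closest to its neighbours
    return xs
```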
Once the characteristic lines are determined, their slopes Sux(u0, v0, xe, ye) and Svy(u0, v0, xe, ye) can be computed:
$$S_{ux}(u_0,v_0,x_e,y_e)=\frac{\sum(u-\bar u)(x-\bar x)}{\sum(u-\bar u)^2},\qquad \forall\,(u,v,x,y)\in R_{ux}(u,v_0,x,y_e),$$

$$S_{vy}(u_0,v_0,x_e,y_e)=\frac{\sum(v-\bar v)(y-\bar y)}{\sum(v-\bar v)^2},\qquad \forall\,(u,v,x,y)\in R_{vy}(u_0,v,x_e,y).$$
Finally, the spatial depth D(u0, v0, xe, ye) of the edge pixel can be determined as

$$D(u_0,v_0,x_e,y_e)=\frac{D_0}{1-0.5\,\big[S_{ux}(u_0,v_0,x_e,y_e)+S_{vy}(u_0,v_0,x_e,y_e)\big]},$$
where D0 is the normalized spatial depth.
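For illustration (an editorial sketch, not the original disclosure), the least-squares slopes and the depth formula translate directly into code; the list-of-matches input format is an assumption of the example:

```python
import numpy as np

def depth_from_slopes(us, xs, vs, ys, D0):
    """Compute S_ux and S_vy by least squares over the two characteristic
    lines, then D = D0 / (1 - 0.5 * (S_ux + S_vy)) as above.
    us, xs: matched (u, x) pairs on one line; vs, ys: matched (v, y)
    pairs on the other; D0: normalized spatial depth."""
    us, xs = np.asarray(us, float), np.asarray(xs, float)
    vs, ys = np.asarray(vs, float), np.asarray(ys, float)
    S_ux = ((us - us.mean()) * (xs - xs.mean())).sum() / ((us - us.mean()) ** 2).sum()
    S_vy = ((vs - vs.mean()) * (ys - ys.mean())).sum() / ((vs - vs.mean()) ** 2).sum()
    return D0 / (1.0 - 0.5 * (S_ux + S_vy))
```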
(2) Region segmentation of the image at the reference viewpoint
The present invention performs the region segmentation of the image at the reference viewpoint by quadtree decomposition and merging. First the image at the reference viewpoint is quadtree-decomposed, requiring that the pixel variation within each decomposed region not exceed the threshold T, the same threshold as in the first step.
Let Ri be the current rectangular region; then

$$P(R_i)=\begin{cases}1, & \max(R_i)-\min(R_i)\le T\\ 0, & \max(R_i)-\min(R_i)>T\end{cases}$$

where P(Ri) is the logical value that decides whether region Ri is split: if it is 1 the region is not split, and if it is 0 the region is split; max(Ri) is the maximum value in the current rectangular region and min(Ri) the minimum value.
If P(Ri) = 0, Ri is split into four mutually disjoint regions whose union is Ri.
The above step is repeated until every subregion is indivisible. Merging is then attempted for every pair of adjacent regions after decomposition: if max(Ri ∪ Rj) − min(Ri ∪ Rj) ≤ T, the two regions are merged.
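The split predicate and recursive decomposition can be sketched as follows (illustration only, not the original disclosure; the merge pass is omitted and the region representation is an assumption of the example):

```python
import numpy as np

def consistent(block, T):
    """P(R_i): 1 when max - min <= T (do not split), else 0 (split)."""
    return 1 if block.max() - block.min() <= T else 0

def quadtree(img, x, y, w, h, T, leaves):
    """Recursively split img[y:y+h, x:x+w] until every leaf satisfies the
    consistency criterion; leaves collects (x, y, w, h) rectangles.
    Adjacent leaves R_i, R_j would afterwards be merged whenever
    max(R_i u R_j) - min(R_i u R_j) <= T."""
    block = img[y:y + h, x:x + w]
    if consistent(block, T) == 1 or w <= 1 or h <= 1:   # consistent or indivisible
        leaves.append((x, y, w, h))
        return
    w2, h2 = w // 2, h // 2
    quadtree(img, x,      y,      w2,     h2,     T, leaves)
    quadtree(img, x + w2, y,      w - w2, h2,     T, leaves)
    quadtree(img, x,      y + h2, w2,     h - h2, T, leaves)
    quadtree(img, x + w2, y + h2, w - w2, h - h2, T, leaves)
```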
(3) Depth filling of the interior pixels of the segmented regions
(3-1) First, the spatial depth of the edge components of each segmented region is revised.
For an arbitrary merged region Ri, retrieve its edge depths counterclockwise, obtaining the boundary position sequence Ci and the edge depth sequence Di, where i = 1, 2, ..., n. Take the cyclic difference di of the sequence Di:

$$d_i=\begin{cases}D_{i+1}-D_i, & i\ne n\\ D_1-D_n, & i=n\end{cases}$$
The revised difference di' is

$$d_i'=\begin{cases}d_i, & d_i<\Delta D_{max}\\ \Delta D_{max}, & d_i\ge\Delta D_{max}\end{cases}$$
where ΔDmax is the spatial depth change threshold. If the minimum of Di is attained at index i0, the revised spatial depth Di' is

$$D_i'=\begin{cases}D_{i-1}'+d_{i-1}', & i>i_0\\ D_{i_0}, & i=i_0\\ D_{i+1}'-d_i', & i<i_0\end{cases}$$
(3-2) According to the revised spatial depth at the edges of each segmented region, the spatial depth of the non-edge parts of the region is filled in.
Filling is first done along the x direction. For any point (x, y) in the segmented region, find the left edge (x_left, y) and the right edge (x_right, y) with the same y coordinate, and linearly interpolate the spatial depth of (x, y) from the revised spatial depth at the region edges. The interpolated spatial depth Dx(u0, v0, x, y) in the x direction is

$$D_x(u_0,v_0,x,y)=\frac{x-x_{left}}{x_{right}-x_{left}}\,D'(u_0,v_0,x_{right},y)+\frac{x_{right}-x}{x_{right}-x_{left}}\,D'(u_0,v_0,x_{left},y),$$

where D'(u0, v0, x_right, y) is the revised spatial depth at the right boundary of the region containing pixel (u0, v0, x, y), and D'(u0, v0, x_left, y) is the revised spatial depth at the left boundary of that region.
For the data in one row of a region, only one pair of left and right edges needs to be retrieved, which greatly reduces the time complexity of the computation.
Likewise, the spatial depth Dy(u0, v0, x, y) in the y direction obtained after interpolating in the y direction is

$$D_y(u_0,v_0,x,y)=\frac{y-y_{down}}{y_{up}-y_{down}}\,D'(u_0,v_0,x,y_{up})+\frac{y_{up}-y}{y_{up}-y_{down}}\,D'(u_0,v_0,x,y_{down}),$$

where D'(u0, v0, x, y_up) is the revised spatial depth at the upper boundary of the region containing pixel (u0, v0, x, y), and D'(u0, v0, x, y_down) is the revised spatial depth at the lower boundary of that region.
The final interpolation result is the average D(u0, v0, x, y) of Dx(u0, v0, x, y) and Dy(u0, v0, x, y), i.e.

$$D(u_0,v_0,x,y)=\tfrac{1}{2}\big[D_x(u_0,v_0,x,y)+D_y(u_0,v_0,x,y)\big].$$
The embodiments of the present invention are broadly adaptable: the invention does not depend on a particular type of light field acquisition device, and for all light field data of the form described in Fig. 2, the extraction of spatial depth can be realized by the present invention.
The advantages of the present invention are as follows:
(1) The present invention does not need to set a central viewpoint, whereas most prior art depends on a central viewpoint to extract spatial depth. Unlike the prior art, the method extracts spatial depth with respect to a reference viewpoint, which may be located at the center or at the side.
(2) The running time of the present invention is short. Assume the amount of image data at the reference viewpoint is N and the total number of viewpoints is constant; then the time complexities of steps 1, 2 and 3 described in this specification are N, N·logN and N respectively, so the total time complexity is T(N) = O(2N + N·logN) = O(N·logN). The running time of the invention is therefore very short, and the method is practical to implement.
The above embodiments describe the technical solution and beneficial effects of the present invention in detail. It should be understood that the above is only the preferred embodiment of the present invention and does not limit the present invention; any modification, supplement or equivalent substitution made within the scope of the principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A spatial depth extraction method based on light field information, characterized by comprising the following steps:
Step 1: select a reference viewpoint in the four-dimensional light field data, and calculate the spatial depth of the image edge components at the reference viewpoint;
Step 2: perform region segmentation on the image at the reference viewpoint, dividing it into several regions according to color or brightness homogeneity;
Step 3: revise the spatial depth along the edges of each segmented region, and interpolate the spatial depth of each region's interior from the revised values.
2. The spatial depth extraction method based on light field information as claimed in claim 1, characterized in that the viewpoints are 2K−1 in number: K viewpoints lie on one straight line, another K viewpoints lie on a second straight line, and the intersection of the two lines is the reference viewpoint, constituting 2K−1 viewpoints in total.
3. The spatial depth extraction method based on light field information as claimed in claim 1, characterized in that the specific steps of step 1 are as follows:
Step 1-1: acquire the light field information and decompose the four-dimensional light field into the image of each viewpoint; apply a gradient operation to the image at the reference viewpoint to extract its image edge components;
Step 1-2: combining the reference viewpoint with the imaging information of the other viewpoints in the same row and the same column, calculate the slopes of the spatial depth characteristic lines of the image edge components at the reference viewpoint;
Step 1-3: calculate the spatial depth of the image edge components at the reference viewpoint from the characteristic line slopes.
4. The spatial depth extraction method based on light field information as claimed in claim 3, characterized in that in step 1-2, for edge positions whose gradient value exceeds the noise threshold, the original pixel and its adjacent pixels are matched by searching across viewpoints; a group of matched positions can be found in both the X and Y viewpoint directions, forming the characteristic lines that encode the spatial depth;
the noise threshold is chosen as 0 ≤ T ≤ 0.25·Gmax, where Gmax is the maximum gray level.
5. The spatial depth extraction method based on light field information as claimed in claim 3, characterized in that in step 1-2, the spatial depth characteristic lines are computed as follows: tabu search is first used to narrow the search range for points on the characteristic line; within that range, the point with the smallest difference from the neighboring points already on the line is found and appended to the line's point sequence, and the new point sequence then determines the next search range.
6. The spatial depth extraction method based on light field information as claimed in claim 1, characterized in that in step 2, the boundaries between the regions produced by region segmentation are the image edge components obtained at the reference viewpoint in step 1.
7. The spatial depth extraction method based on light field information as claimed in claim 1, characterized in that in step 2, the region segmentation uses quadtree decomposition and merging; the criterion for quadtree splitting and merging is whether the consistency of the pixels in a region exceeds a limit value: for grayscale images, consistency is the maximum difference of pixel gray levels; for color images, consistency is the maximum difference between a pixel's color and the average color.
8. The spatial depth extraction method based on light field information as claimed in claim 1, characterized in that in step 3, based on the spatial depth of the image edge components at the reference viewpoint obtained in step 1 and the region segmentation obtained in step 2, the spatial depth of each region boundary is revised;
then, using the revised spatial depth of the region boundaries, spatial depth interpolation is applied to the non-boundary regions to obtain the spatial depth of the whole region.
9. The spatial depth extraction method based on light field information as claimed in claim 1, characterized in that the specific steps of step 3 are as follows:
Step 3-1: for each region, traverse its boundary, recording the positions of the boundary pixels in order, and record their corresponding spatial depth values in the same order according to the result of step 1;
Step 3-2: apply a difference operation to the spatial depth values of the region boundary arranged in order;
Step 3-3: apply an integration operation to the result of the difference operation.
10. The spatial depth extraction method based on light field information as claimed in claim 9, characterized in that in step 3-3, when the integration is performed, difference components exceeding a threshold are clamped to that threshold.
CN201610578644.8A 2016-07-18 2016-07-18 A spatial depth extraction method based on light field information Active CN106257537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610578644.8A CN106257537B (en) A spatial depth extraction method based on light field information


Publications (2)

Publication Number Publication Date
CN106257537A (en) 2016-12-28
CN106257537B CN106257537B (en) 2019-04-09

Family

ID=57713781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610578644.8A Active CN106257537B (en) A spatial depth extraction method based on light field information

Country Status (1)

Country Link
CN (1) CN106257537B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851089A (en) * 2015-04-28 2015-08-19 中国人民解放军国防科学技术大学 Static scene foreground segmentation method and device based on three-dimensional light field
CN104966289A (en) * 2015-06-12 2015-10-07 北京工业大学 Depth estimation method based on 4D light field
CN105023275A (en) * 2015-07-14 2015-11-04 清华大学 Super-resolution light field acquisition device and three-dimensional reconstruction method thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAE-GON JEON et al.: "Accurate Depth Map Estimation from a Lenslet Light Field Camera", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
SVEN WANNER: "Variational Light Field Analysis for Disparity Estimation and Super-Resolution", IEEE Transactions on Pattern Analysis and Machine Intelligence *
ZHAO XINGRONG: "Research on Depth Information Acquisition Technology Based on Light Field Cameras", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991637A (en) * 2017-02-28 2017-07-28 浙江大学 The method that multiresolution light field is decomposed is realized in a kind of utilization GPU parallel computations
CN106991637B (en) * 2017-02-28 2019-12-17 浙江大学 Method for realizing multi-resolution light field decomposition by utilizing GPU (graphics processing Unit) parallel computation
CN107135388A (en) * 2017-05-27 2017-09-05 东南大学 A kind of depth extraction method of light field image
CN107330930A (en) * 2017-06-27 2017-11-07 晋江市潮波光电科技有限公司 Depth of 3 D picture information extracting method
CN107330930B (en) * 2017-06-27 2020-11-03 晋江市潮波光电科技有限公司 Three-dimensional image depth information extraction method
CN108846473A (en) * 2018-04-10 2018-11-20 杭州电子科技大学 Light field depth estimation method based on direction and dimension self-adaption convolutional neural networks
CN108846473B (en) * 2018-04-10 2022-03-01 杭州电子科技大学 Light field depth estimation method based on direction and scale self-adaptive convolutional neural network
CN109360235A (en) * 2018-09-29 2019-02-19 中国航空工业集团公司上海航空测控技术研究所 A kind of interacting depth estimation method based on light field data
CN109360235B (en) * 2018-09-29 2022-07-19 中国航空工业集团公司上海航空测控技术研究所 Hybrid depth estimation method based on light field data
CN110662014A (en) * 2019-09-25 2020-01-07 江南大学 Light field camera four-dimensional data large depth-of-field three-dimensional display method

Also Published As

Publication number Publication date
CN106257537B (en) 2019-04-09

Similar Documents

Publication Publication Date Title
US10353271B2 (en) Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN106257537A (en) 2016-12-28 A spatial depth extraction method based on light field information
CN104574375B (en) Image significance detection method combining color and depth information
CN105005755B (en) Three-dimensional face identification method and system
CN104966270B (en) A kind of more image split-joint methods
CN108875595A (en) A kind of Driving Scene object detection method merged based on deep learning and multilayer feature
CN107862698A (en) Light field foreground segmentation method and device based on K mean cluster
CN104504734B (en) A kind of color of image transmission method based on semanteme
Liao et al. SynthText3D: synthesizing scene text images from 3D virtual worlds
CN102609950B (en) Two-dimensional video depth map generation process
CN104850850A (en) Binocular stereoscopic vision image feature extraction method combining shape and color
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN111738295B (en) Image segmentation method and storage medium
CN102074020A (en) Method for performing multi-body depth recovery and segmentation on video
Xue et al. Multi-frame stereo matching with edges, planes, and superpixels
CN101739683A (en) Image segmentation and multithread fusion-based method and system for evaluating depth of single image
CN116503836A (en) 3D target detection method based on depth completion and image segmentation
CN101765019A (en) Stereo matching algorithm for motion blur and illumination change image
CN111105451B (en) Driving scene binocular depth estimation method for overcoming occlusion effect
Guo et al. 2D to 3D convertion based on edge defocus and segmentation
Huang et al. ES-Net: An efficient stereo matching network
CN109218706B (en) Method for generating stereoscopic vision image from single image
CN107330930A (en) Depth of 3 D picture information extracting method
CN116385996B (en) Multitasking method and device based on three-dimensional matrix camera
CN111738061A (en) Binocular vision stereo matching method based on regional feature extraction and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant