CN107038719A - Depth estimation method and system based on light field image angle domain pixel - Google Patents


Info

Publication number
CN107038719A
CN107038719A
Authority
CN
China
Prior art keywords
depth
image
pixel
max
angle domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710174505.3A
Other languages
Chinese (zh)
Inventor
金欣
秦延文
戴琼海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201710174505.3A priority Critical patent/CN107038719A/en
Publication of CN107038719A publication Critical patent/CN107038719A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Abstract

The present invention provides a depth estimation method based on light-field-image angle-domain pixels, comprising: A1. inputting light-field image data and performing refocusing; extracting a depth tensor measuring the consistency of angle-domain pixel intensities across color channels, performing depth estimation according to the depth tensor, and obtaining an initial depth image D_raw; A2. performing confidence analysis on the initial depth image D_raw: analyzing the variance within the neighborhood of the minimum point of the variation curve of the normalized angle-domain pixel consistency and, according to a threshold, determining the confident region Ω and the non-confident region; A3. according to the confident region Ω, building an optimization model, optimizing the non-confident region and filling in non-confident points to obtain an enhanced depth image D_opt. The method makes the depth of texture-smooth image regions more accurate, strengthens the sense of depth transition, and makes the boundaries of objects in the actual scene clearer, thereby enhancing the depth image.

Description

Depth estimation method and system based on light field image angle domain pixel
Technical field
The present invention relates to the fields of computer vision and digital image processing, and more particularly to a depth estimation method and system based on light-field-image angle-domain pixels.
Background technology
A light field records all light information in an environment; the concept was proposed as early as the 1940s. Since then, researchers have studied the principles of light-field imaging and continually enriched the notion of the light field. Based on light-field imaging theory, researchers began developing cameras that can record the light field; a light-field camera records the direction and intensity of light rays in a real scene, and the light-field cameras introduced in recent years have achieved good results in industry and commerce.
The mainstream existing depth-estimation strategy is stereo matching. Its main idea is to capture images from two viewpoints with ordinary cameras, analyze the correlation between the two viewpoints to construct a loss function, and estimate depth by minimizing that loss function. This method has limitations for light-field depth estimation: because the baseline of a light-field camera is very short, the constructed loss function is not accurate enough and produces occlusion artifacts, so the error can be too large to meet the precision requirements of depth estimation.
Another method jointly analyzes the defocus and correspondence cues between light-field sub-aperture images to compensate for the errors of plain stereo matching. This improves the accuracy of the depth image to some extent, but cannot accurately estimate depth where texture is sparse.
There is also an extraction approach based on information entropy. This measure can effectively suppress the influence of noise and handle occlusion well, but its computational cost is excessive.
Summary of the invention
To solve the above problems, the present invention proposes a depth estimation method and system based on light-field-image angle-domain pixels. The method makes the depth of texture-smooth image regions more accurate, strengthens the sense of depth transition, and makes the boundaries of objects in the actual scene clearer, thereby enhancing the depth image.
The present invention provides a depth estimation method based on light-field-image angle-domain pixels, characterized by comprising the following steps:
A1. Inputting light-field image data and performing refocusing to obtain refocused light-field images of a plurality of different focal planes; according to the plurality of refocused light-field images, extracting a depth tensor measuring the consistency of angle-domain pixel intensities across color channels; according to the depth tensor, performing depth estimation along the direction of the depth levels to obtain an initial depth image D_raw.
A2. Performing confidence analysis on the initial depth image D_raw: analyzing the variance within the neighborhood of the minimum point of the variation curve of the normalized angle-domain pixel consistency; if the variance is greater than a threshold, the point is judged a confident point, yielding the corresponding confident region Ω; if the variance is less than the threshold, the point is judged a non-confident point, yielding the corresponding non-confident region.
A3. According to the confident region Ω, building an optimization model, optimizing the non-confident region and filling in non-confident points to obtain an enhanced depth image D_opt.
Preferably, the depth tensor in step A1 comprises a variable based on intensity difference, expressed as the following formula:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α),
where C(p, α) denotes the depth tensor of the angle-domain pixel p on refocusing plane α; β denotes a weight coefficient, 0 ≤ β ≤ 1; R_max denotes the maximum of the three color-channel intensities R, G, B: R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)); R_avg denotes the mean square of the three color-channel intensities R, G, B;
and R_i(p, α) = max(I_q) − min(I_q), where i denotes R, G, B, q ∈ A(p, α), and q denotes a pixel located in the angle domain A(p, α).
Preferably, the depth tensor in step A1 comprises a variable based on intensity information entropy, expressed as the following formula:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α),
where C(p, α) denotes the depth tensor of the angle-domain pixel p on refocusing plane α; β denotes a weight coefficient, 0 ≤ β ≤ 1; R_max denotes the maximum of the three color-channel intensities R, G, B: R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)); R_avg denotes the mean square of the three color-channel intensities R, G, B;
and R_i(p, α) = −∑_j h(j) log(h(j)), where i denotes R, G, B, and h(j) denotes the probability that intensity j occurs in the angle domain A(p, α).
Preferably, the depth tensor in step A1 comprises a variable based on intensity matching degree, expressed as the following formula:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α),
where C(p, α) denotes the depth tensor of the angle-domain pixel p on refocusing plane α; β denotes a weight coefficient, 0 ≤ β ≤ 1; R_max denotes the maximum of the three color-channel intensities R, G, B: R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)); R_avg denotes the mean square of the three color-channel intensities R, G, B;
and R_i is obtained by applying the Laplace operator Δ_p within the angle-domain neighborhood window, where i denotes R, G, B, |W_D| denotes the neighborhood window size of the current pixel, and Ī denotes the mean of the angle-domain pixel intensities.
Preferably, the initial depth image D_raw in step A1 is expressed as the following formula: D_raw(p) = argmin_α C(p, α).
Preferably, the confident region Ω in step A2 is expressed as the following formula: Ω = {p | var(C(r))|_{r∈M(p)} > τ_reject}, where var(·) denotes the variance operation, M(p) denotes the neighborhood of pixel p, M(p) = [D_raw(p) − Δ, D_raw(p) + Δ], Δ denotes the half neighborhood width, and τ_reject denotes the threshold.
Preferably, the optimization model of step A3 is an optimization model over the depth values, gradients, and smoothness of the pixels.
Further preferably, the optimization model of step A3 is: D_opt = argmin_D J_1(D) + λJ_2(D) + γJ_3(D),
where J_1(D) denotes the error function between the depth value of each pixel r in the depth image and the weighted average of the depth values of the pixels s in its neighborhood; D denotes the final depth map; s denotes a pixel located in the neighborhood N(r) of pixel r; w_rs denotes the weight coefficient between pixels s and r, in which I_c denotes the central-aperture image.
J_2(D) denotes the gradient-preservation error function between the pixels r of the confident depth region in the depth image and the image I_c; g_D and g_{I_c} denote the gradients of D and I_c, respectively.
J_3(D) denotes the smoothness of the non-confident depth region in the depth image; ΔD denotes the second derivative of D; λ and γ are proportionality coefficients.
Preferably, a step A4 is further included after step A3: applying weighted median filtering to D_opt to optimize it again, obtaining D_final.
The present invention also provides a depth estimation system based on light-field-image angle-domain pixels, comprising a computer-readable storage medium storing a computer program for electronic data interchange, wherein the computer program causes a computer to execute the method described above.
Beneficial effects of the present invention: by refocusing the light-field image, extracting the depth tensor measuring the consistency of angle-domain pixel intensities across color channels, and performing depth estimation along the direction of the depth levels according to this tensor, an initial depth-estimation image is obtained; confidence analysis is then performed on the initial depth image, an optimization model is built, and the non-confident region is optimized, so that the depth of texture-smooth image regions becomes accurate and the boundaries of objects in the actual scene become sharp, thereby enhancing the depth image.
Further preferred schemes bring additional advantages. Several effective basic variables measuring the consistency of angle-domain pixel intensities in light-field images are constructed, such as intensity difference, intensity information entropy, and intensity matching degree; these estimate depth well and make the depth estimation more accurate. By analyzing the variance within the neighborhood of the minimum point of the CMR curve, the confident region Ω containing the final confident depth points is obtained effectively. According to the confident region Ω, a confident-depth-point diffusion model is built to optimize the non-confident depth region and fill in non-confident depth points; the added second-derivative term makes the depth of texture-smooth image regions more accurate, and the first-order gradient term strengthens the sense of depth transition and makes the boundaries of objects in the actual scene clearer.
Brief description of the drawings
Fig. 1 is a flow diagram of the depth estimation method of Embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the refocused light-field images of Embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of the relationship between the light-field image and the angle-domain pixels in Embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of the relationship between the light-field image and the central-view image in Embodiment 1 of the present invention.
Fig. 5a is a schematic diagram of the loss-function curve and the confidence analysis based on the intensity-consistency depth tensor in Embodiment 1 of the present invention.
Fig. 5b is a schematic diagram of the loss-function curve and the confidence analysis based on the intensity-matching-degree depth tensor in Embodiment 2 of the present invention.
Fig. 5c is a schematic diagram of the loss-function curve and the confidence analysis based on the intensity-information-entropy depth tensor in Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its applications.
Embodiment 1
The present embodiment provides a depth estimation method based on light-field-image angle-domain pixels; its flow diagram is shown in Fig. 1, and the specific steps are as follows:
A1. The original input image is focused onto different depth levels; by analyzing the variation of pixel intensities in the angle domain of images at different depth levels, a new depth tensor is designed to measure this variation, and statistics over it yield the initial depth image D_raw.
Light-field shearing is performed according to existing light-field refocusing techniques; the formula is as follows:
L_α(x, y, u, v) = L_0(x + u(1 − 1/α), y + v(1 − 1/α), u, v), (1)
where L_0 denotes the original light field and L_α the light field focused on depth level α (as shown in Fig. 2). According to the above formula, the original input image can be focused onto different depth levels, yielding a series of image clusters belonging to different focal planes.
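As an illustrative sketch (not part of the patent text), the shearing step above can be implemented for a 4D light field; the array layout `L0[u, v, y, x]`, the bilinear interpolation order, and the use of `scipy.ndimage.shift` are assumptions:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(L0, alpha):
    """Shear a 4D light field L0[u, v, y, x] to focus on depth level alpha:
    L_alpha(x, y, u, v) = L0(x + u(1 - 1/alpha), y + v(1 - 1/alpha), u, v),
    with the angular coordinates (u, v) centered on the middle sub-aperture."""
    U, V, H, W = L0.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    s = 1.0 - 1.0 / alpha
    La = np.empty(L0.shape, dtype=float)
    for u in range(U):
        for v in range(V):
            # fractional (bilinear) spatial shift of each sub-aperture view
            La[u, v] = shift(L0[u, v].astype(float),
                             ((v - vc) * s, (u - uc) * s),
                             order=1, mode='nearest')
    return La
```

Repeating this for a sweep of α values produces the image clusters belonging to different focal planes.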
By analyzing the variation of pixel intensities in the angle domain of images at different depth levels, a basic variable is designed:
R_i(p, α) = max(I_q) − min(I_q) (2)
where i denotes R, G, B, q ∈ A(p, α), and q denotes a pixel located in the angle domain A(p, α); the difference between the maximum and minimum pixel intensities in the angle domain measures the variation of the angle-domain pixel intensities.
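A minimal sketch of the basic variable of formula (2), assuming the angle-domain samples of one spatial pixel are collected into an array (the data layout is an assumption):

```python
import numpy as np

def intensity_range(angle_domain_rgb):
    """Basic variable of formula (2): R_i = max(I_q) - min(I_q), computed
    per color channel over the pixels q in the angle domain A(p, alpha).
    angle_domain_rgb: array of shape (n_angular_samples, 3) holding the
    R, G, B intensities of one spatial pixel's angular samples."""
    return angle_domain_rgb.max(axis=0) - angle_domain_rgb.min(axis=0)
```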
The positional relationships among the light-field image, the central-view image I_c, and the angle-domain pixels A(p, α) are shown in Fig. 3 and Fig. 4.
The image data has three color channels, R, G, and B. From the basic variable of formula (2), the maximum and the average of the three-channel intensities are obtained as follows:
R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)) (3)
R_max takes the maximum of the R, G, B three-channel intensities; this variable is effective when the intensity of one channel is very prominent in the angle domain. R_avg denotes the mean square of the R, G, B three-channel intensities; this variable is effective when the differences among the three channel intensities in the angle domain are small. Combining the advantages of the two variables, the depth tensor is constructed as follows:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α) (5)
where β is a number between 0 and 1, i.e., C(p, α) is the weighted sum of R_max and R_avg. C(p, α) finally serves as the depth tensor measuring the consistency of the angle-domain pixel intensities, i.e., the consistency metric range (CMR).
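The weighted combination of formula (5) can be sketched as follows; the plain-mean form of R_avg is an assumption standing in for the patent's unreproduced "mean square" formula (4):

```python
def cmr(R_rgb, beta=0.5):
    """Depth tensor of formula (5): C = beta * R_max + (1 - beta) * R_avg.
    R_rgb: the three per-channel basic variables (R_R, R_G, R_B) at (p, alpha).
    R_avg is taken here as the plain channel mean -- an assumption, since the
    patent's exact 'mean square' formula (4) is not reproduced in this text."""
    R_max = max(R_rgb)
    R_avg = sum(R_rgb) / 3.0
    return beta * R_max + (1.0 - beta) * R_avg
```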
The initial depth at pixel p is obtained by minimizing C(p, α) along the depth levels:
D_raw(p) = argmin_α C(p, α) (6)
A2. Confidence analysis of the initial depth image is performed according to the extracted depth-tensor information. The confidence analysis determines the accuracy of the initial depth estimate. For each pixel, the trend of C(p, α) within the neighborhood of D_raw(p) is examined, giving a new strategy for judging whether the depth at pixel p is a confident point. Let var(·) denote the variance operator; the variance of the CMR within this neighborhood is expressed as var(C(r))|_{r∈M(p)}, where M(p) = [D_raw(p) − Δ, D_raw(p) + Δ] denotes the neighborhood of pixel p; Fig. 5a depicts the positional relationships of the variables involved. If this variance var(·) is less than a threshold τ_reject, the point is judged a non-confident point; otherwise, the point is a confident point. According to the above principle, the confident and non-confident depth regions can be determined. Denoting the confident depth region by Ω, correspondingly Ω = {p | var(C(r))|_{r∈M(p)} > τ_reject}.
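The variance test above can be sketched per pixel as follows, assuming the CMR values are sampled at discrete depth levels (the sampling and the clipping at the ends of the curve are assumptions):

```python
import numpy as np

def is_confident(C_curve, d_raw_idx, half_width, tau_reject):
    """Confidence test of step A2: the variance of the CMR values C(p, alpha)
    over the neighborhood M(p) = [D_raw(p) - half_width, D_raw(p) + half_width]
    around the minimum is compared against tau_reject. A sharp minimum
    (large variance) is confident; a flat one (small variance) is not."""
    lo = max(0, d_raw_idx - half_width)
    hi = min(len(C_curve), d_raw_idx + half_width + 1)
    return float(np.var(C_curve[lo:hi])) > tau_reject
```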
A3. According to the confident region Ω, an optimization model is built, the non-confident region is optimized, and non-confident points are filled in, obtaining the enhanced depth image D_opt. The optimization model is based on the depth values, gradients, and smoothness of the pixels:
D_opt = argmin_D J_1(D) + λJ_2(D) + γJ_3(D) (7)
subject to the constraint D(Ω) = D_raw(Ω).
In formula (7), J_1(D) denotes the error function between the depth value of each pixel r in the depth image and the weighted average of the depth values of the pixels s in its neighborhood; D denotes the final depth map; s denotes a pixel located in the neighborhood N(r) of pixel r; w_rs denotes the weight coefficient between pixels s and r, in which I_c denotes the central-aperture image.
J_2(D) denotes the gradient-preservation error function between the pixels r of the confident depth region in the depth image and the image I_c; g_D and g_{I_c} denote the gradients of D and I_c, respectively.
J_3(D) denotes the smoothness of the non-confident depth region in the depth image; ΔD denotes the second derivative of D; λ and γ are proportionality coefficients.
A4. Weighted median filtering is applied to D_opt for a further optimization, obtaining D_final.
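The weighted median used in step A4 can be sketched for a single window as follows; the weighting kernel (e.g., derived from the guide image I_c) is not specified by the patent and is left to the caller:

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight reaches
    half of the total weight. Sliding this over D_opt with weights taken
    from a guide image gives a step-A4 style refinement."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return float(v[np.searchsorted(cum, 0.5 * w.sum())])
```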
Embodiment 2
The present embodiment provides a depth estimation method based on light-field-image angle-domain pixels; the specific steps are as follows:
A1. The original input image is focused onto different depth levels; by analyzing the variation of pixel intensities in the angle domain of images at different depth levels, a new variable is designed to measure this variation, and statistics over it yield the initial depth image D_raw.
Light-field shearing is performed according to existing light-field refocusing techniques; the formula is as follows:
L_α(x, y, u, v) = L_0(x + u(1 − 1/α), y + v(1 − 1/α), u, v), (1)
where L_0 denotes the original light field and L_α the light field focused on depth level α. According to the above formula, the original input image can be focused onto different depth levels, yielding a series of image clusters belonging to different focal planes.
The difference from Embodiment 1 is the basic variable in step A1: the present embodiment derives the basic variable from the information entropy of the angle-domain pixel intensities, expressed as:
R_i(p, α) = −∑_j h(j) log(h(j)) (10)
where i denotes R, G, B, and h(j) denotes the probability that intensity j occurs in the angle domain A(p, α).
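Formula (10) can be sketched as follows; the histogram bin count and the [0, 1] intensity range are assumptions, since the patent does not fix how h(j) is discretized:

```python
import numpy as np

def entropy_variable(I_q, n_bins=16):
    """Basic variable of formula (10): R_i = -sum_j h(j) log h(j), the
    entropy of the intensity histogram over the angle-domain pixels q in
    A(p, alpha). The bin count and intensity range are assumptions."""
    h, _ = np.histogram(I_q, bins=n_bins, range=(0.0, 1.0))
    h = h / h.sum()
    h = h[h > 0]  # empty bins contribute 0 * log 0 -> 0
    return float(-(h * np.log(h)).sum())
```

A perfectly consistent angle domain (all samples equal) gives zero entropy, so, like formula (2), smaller values indicate better focus.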
The image data has three color channels, R, G, and B. From the basic variable of formula (10), the maximum and the average of the three-channel intensities are obtained:
R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)) (3)
As in formulas (3) and (4) of Embodiment 1, these are then weighted to obtain the depth tensor measuring the consistency of the angle-domain pixel intensities:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α) (5)
The initial depth at pixel p is obtained by minimizing C(p, α) along the depth levels:
D_raw(p) = argmin_α C(p, α) (6)
A2. Confidence analysis of the initial depth image is performed according to the extracted depth-tensor information. The confidence analysis determines the accuracy of the initial depth estimate. For each pixel, the trend of C(p, α) within the neighborhood of D_raw(p) is examined, giving a new strategy for judging whether the depth at pixel p is a confident point. Let var(·) denote the variance operator; the variance within this neighborhood is expressed as var(C(r))|_{r∈M(p)}, where M(p) = [D_raw(p) − Δ, D_raw(p) + Δ] denotes the neighborhood of pixel p; Fig. 5b depicts the positional relationships of the variables involved. If this variance var(·) is less than a threshold τ_reject, the point is judged a non-confident point; otherwise, the point is a confident point. According to the above principle, the confident and non-confident depth regions can be determined. Denoting the confident depth region by Ω, correspondingly Ω = {p | var(C(r))|_{r∈M(p)} > τ_reject}.
A3. According to the confident region Ω, an optimization model is built, the non-confident region is optimized, and non-confident points are filled in, obtaining the enhanced depth image D_opt. The optimization model is based on the depth values, gradients, and smoothness of the pixels:
D_opt = argmin_D J_1(D) + λJ_2(D) + γJ_3(D) (7)
subject to the constraint D(Ω) = D_raw(Ω).
In formula (7), J_1(D) denotes the error function between the depth value of each pixel r in the depth image and the weighted average of the depth values of the pixels s in its neighborhood; D denotes the final depth map; s denotes a pixel located in the neighborhood N(r) of pixel r; w_rs denotes the weight coefficient between pixels s and r, in which I_c denotes the central-aperture image.
J_2(D) denotes the gradient-preservation error function between the pixels r of the confident depth region in the depth image and the image I_c; g_D and g_{I_c} denote the gradients of D and I_c, respectively.
J_3(D) denotes the smoothness of the non-confident depth region in the depth image; ΔD denotes the second derivative of D; λ and γ are proportionality coefficients.
A4. Weighted median filtering is applied to D_opt for a further optimization, obtaining D_final.
Embodiment 3
The present embodiment provides a depth estimation method based on light-field-image angle-domain pixels; the specific steps are as follows:
A1. The original input image is focused onto different depth levels; by analyzing the variation of pixel intensities in the angle domain of images at different depth levels, a new variable is designed to measure this variation, and statistics over it yield the initial depth image D_raw.
Light-field shearing is performed according to existing light-field refocusing techniques; the formula is as follows:
L_α(x, y, u, v) = L_0(x + u(1 − 1/α), y + v(1 − 1/α), u, v), (1)
where L_0 denotes the original light field and L_α the light field focused on depth level α. According to the above formula, the original input image can be focused onto different depth levels, yielding a series of image clusters belonging to different focal planes.
The difference from Embodiment 1 is the basic variable in step A1: the present embodiment derives the basic variable from the matching degree of the angle-domain pixel intensities.
The matching measure analyzes the match between each sub-aperture and the central aperture; the smaller the value, the better the match. First, the mean Ī of the pixel intensities in the angle domain is obtained; Δ_p denotes the Laplace operator.
Then the Laplace operator Δ_p is applied within the neighborhood window of the angle-domain pixels to obtain the basic variable (formula (11)),
where i denotes R, G, B, and |W_D| denotes the neighborhood window size of the current pixel.
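Because equation (11) itself is not reproduced in this text, the following is only a loose sketch of the matching-degree idea described above (angle-domain mean, Laplace operator Δ_p, averaging over the window W_D); the exact combination of these terms in the patent may differ:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def matching_variable(angular_stack, window=5):
    """Loose sketch of the matching-degree cue: average the angular samples
    of one refocused channel (the angle-domain mean), apply the Laplace
    operator Delta_p, and average the absolute response over a spatial
    window W_D. angular_stack: array of shape (n_angular_samples, H, W)."""
    mean_view = angular_stack.mean(axis=0)        # angle-domain mean intensity
    response = np.abs(laplace(mean_view))         # Laplace-operator response
    return uniform_filter(response, size=window)  # average over window |W_D|
```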
The image data has three color channels, R, G, and B. From the basic variable of formula (11), the maximum and the average of the three-channel intensities are obtained:
R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)) (3)
As in formulas (3) and (4) of Embodiment 1, these are then weighted to obtain the depth tensor measuring the consistency of the angle-domain pixel intensities:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α) (5)
The initial depth at pixel p is obtained by minimizing C(p, α) along the depth levels:
D_raw(p) = argmin_α C(p, α) (6)
A2. Confidence analysis of the initial depth image is performed according to the extracted depth-tensor information. The confidence analysis determines the accuracy of the initial depth estimate. For each pixel, the trend of C(p, α) within the neighborhood of D_raw(p) is examined, giving a new strategy for judging whether the depth at pixel p is a confident point. Let var(·) denote the variance operator; the variance within this neighborhood is expressed as var(C(r))|_{r∈M(p)}, where M(p) = [D_raw(p) − Δ, D_raw(p) + Δ] denotes the neighborhood of pixel p; Fig. 5c depicts the positional relationships of the variables involved. If this variance var(·) is less than a threshold τ_reject, the point is judged a non-confident point; otherwise, the point is a confident point. According to the above principle, the confident and non-confident depth regions can be determined. Denoting the confident depth region by Ω, correspondingly Ω = {p | var(C(r))|_{r∈M(p)} > τ_reject}.
A3. According to the confident region Ω, an optimization model is built, the non-confident region is optimized, and non-confident points are filled in, obtaining the enhanced depth image D_opt. The optimization model is based on the depth values, gradients, and smoothness of the pixels:
D_opt = argmin_D J_1(D) + λJ_2(D) + γJ_3(D) (7)
subject to the constraint D(Ω) = D_raw(Ω).
In formula (7), J_1(D) denotes the error function between the depth value of each pixel r in the depth image and the weighted average of the depth values of the pixels s in its neighborhood; D denotes the final depth map; s denotes a pixel located in the neighborhood N(r) of pixel r; w_rs denotes the weight coefficient between pixels s and r, in which I_c denotes the central-aperture image.
J_2(D) denotes the gradient-preservation error function between the pixels r of the confident depth region in the depth image and the image I_c; g_D and g_{I_c} denote the gradients of D and I_c, respectively.
J_3(D) denotes the smoothness of the non-confident depth region in the depth image; ΔD denotes the second derivative of D; λ and γ are proportionality coefficients.
A4. Weighted median filtering is applied to D_opt for a further optimization, obtaining D_final.
The above content further describes the present invention with reference to specific/preferred embodiments, and the specific implementation of the present invention shall not be deemed limited to these descriptions. Those of ordinary skill in the technical field of the present invention may, without departing from the inventive concept, make several substitutions or modifications to the described embodiments, and such substitutions or variants shall all be deemed to fall within the protection scope of the present invention.

Claims (10)

1. A depth estimation method based on light-field-image angle-domain pixels, characterized by comprising the following steps:
A1. inputting light-field image data and performing refocusing to obtain refocused light-field images of a plurality of different focal planes; according to the plurality of refocused light-field images, extracting a depth tensor measuring the consistency of angle-domain pixel intensities across color channels; according to the depth tensor, performing depth estimation along the direction of the depth levels to obtain an initial depth image D_raw;
A2. performing confidence analysis on the initial depth image D_raw: analyzing the variance within the neighborhood of the minimum point of the variation curve of the normalized angle-domain pixel consistency; if the variance is greater than a threshold, judging the point a confident point, yielding the corresponding confident region Ω; if the variance is less than the threshold, judging the point a non-confident point, yielding the corresponding non-confident region;
A3. according to the confident region Ω, building an optimization model, optimizing the non-confident region and filling in non-confident points to obtain an enhanced depth image D_opt.
2. The depth estimation method according to claim 1, characterized in that the depth tensor in step A1 comprises a variable based on intensity difference, expressed as the following formula:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α),
where C(p, α) denotes the depth tensor of the angle-domain pixel p on refocusing plane α; β denotes a weight coefficient, 0 ≤ β ≤ 1; R_max denotes the maximum of the three color-channel intensities R, G, B: R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)); R_avg denotes the mean square of the three color-channel intensities R, G, B;
and R_i(p, α) = max(I_q) − min(I_q), where i denotes R, G, B, q ∈ A(p, α), and q denotes a pixel located in the angle domain A(p, α).
3. The depth estimation method according to claim 1, characterized in that the depth tensor in step A1 comprises a variable based on intensity information entropy, expressed as the following formula:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α),
where C(p, α) denotes the depth tensor of the angle-domain pixel p on refocusing plane α; β denotes a weight coefficient, 0 ≤ β ≤ 1; R_max denotes the maximum of the three color-channel intensities R, G, B: R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)); R_avg denotes the mean square of the three color-channel intensities R, G, B;
and R_i(p, α) = −∑_j h(j) log(h(j)), where i denotes R, G, B, and h(j) denotes the probability that intensity j occurs in the angle domain A(p, α).
4. The depth estimation method according to claim 1, characterized in that the depth tensor in step A1 comprises a variable based on intensity matching degree, expressed as the following formula:
C(p, α) = β·R_max(p, α) + (1 − β)·R_avg(p, α),
where C(p, α) denotes the depth tensor of the angle-domain pixel p on refocusing plane α; β denotes a weight coefficient, 0 ≤ β ≤ 1; R_max denotes the maximum of the three color-channel intensities R, G, B: R_max(p, α) = max(R_R(p, α), R_G(p, α), R_B(p, α)); R_avg denotes the mean square of the three color-channel intensities R, G, B;
and R_i is obtained by applying the Laplace operator Δ_p within the angle-domain neighborhood window, where i denotes R, G, B, |W_D| denotes the neighborhood window size of the current pixel, and Ī denotes the mean of the angle-domain pixel intensities.
5. The depth estimation method according to claim 1, characterized in that the initial depth image D_raw in step A1 is expressed as the following formula:
D_raw(p) = argmin_α C(p, α).
6. The depth estimation method according to claim 1, characterized in that the confident region Ω in step A2 is expressed as the following formula: Ω = {p | var(C(r))|_{r∈M(p)} > τ_reject}, where var(·) denotes the variance operation, M(p) denotes the neighborhood of pixel p, M(p) = [D_raw(p) − Δ, D_raw(p) + Δ], Δ denotes the half neighborhood width, and τ_reject denotes the threshold.
7. The depth estimation method according to claim 1, characterized in that the optimization model of step A3 is an optimization model over the depth values, gradients, and smoothness of the pixels.
8. The depth estimation method according to claim 7, characterized in that the optimization model of step A3 is:
D_opt = argmin_D J_1(D) + λJ_2(D) + γJ_3(D),
where J_1(D) denotes the error function between the depth value of each pixel r in the depth image and the weighted average of the depth values of the pixels s in its neighborhood; D denotes the final depth map; s denotes a pixel located in the neighborhood N(r) of pixel r; w_rs denotes the weight coefficient between pixels s and r, in which I_c denotes the central-aperture image;
J_2(D) denotes the gradient-preservation error function between the pixels r of the confident depth region in the depth image and the image I_c; g_D and g_{I_c} denote the gradients of D and I_c, respectively;
J_3(D) denotes the smoothness of the non-confident depth region in the depth image; ΔD denotes the second derivative of D; λ and γ are proportionality coefficients.
9. The depth estimation method according to claim 1, characterized in that a step A4 follows step A3: applying weighted median filtering to D_opt to optimize it again, obtaining D_final.
10. A depth estimation system based on light-field-image angle-domain pixels, characterized by comprising a computer-readable storage medium storing a computer program for electronic data interchange, wherein the computer program causes a computer to execute the method according to any one of claims 1-9.
CN201710174505.3A 2017-03-22 2017-03-22 Depth estimation method and system based on light field image angle domain pixel Withdrawn CN107038719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710174505.3A CN107038719A (en) 2017-03-22 2017-03-22 Depth estimation method and system based on light field image angle domain pixel


Publications (1)

Publication Number Publication Date
CN107038719A true CN107038719A (en) 2017-08-11

Family

ID=59533766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710174505.3A Withdrawn CN107038719A (en) 2017-03-22 2017-03-22 Depth estimation method and system based on light field image angle domain pixel

Country Status (1)

Country Link
CN (1) CN107038719A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899870A (en) * 2015-05-15 2015-09-09 清华大学深圳研究生院 Depth estimation method based on light-field data distribution
CN106384338A (en) * 2016-09-13 2017-02-08 清华大学深圳研究生院 Enhancement method for light field depth image based on morphology


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MICHAEL W. TAO et al.: "Depth from Combining Defocus and Correspondence Using Light-Field Cameras", 2013 IEEE International Conference on Computer Vision *
WILLIEM et al.: "Robust Light Field Depth Estimation for Noisy Scene with Occlusion", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
YANWEN QIN et al.: "Enhanced Depth Estimation for Handheld Light Field Cameras", 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108133469A (en) * 2017-12-05 2018-06-08 西北工业大学 Light field splicing apparatus and method based on EPI
CN108154528A (en) * 2017-12-05 2018-06-12 西北工业大学 Light field depth estimation method based on generalized epipolar plane images
CN108133469B (en) * 2017-12-05 2021-11-02 西北工业大学 Light field splicing device and method based on EPI
CN108564620A (en) * 2018-03-27 2018-09-21 中国人民解放军国防科技大学 Scene depth estimation method for light field array camera
CN112771574A (en) * 2018-07-19 2021-05-07 交互数字Ce专利控股公司 Method for estimating the depth of a pixel, corresponding device and computer program product
CN112771574B (en) * 2018-07-19 2024-03-26 交互数字Ce专利控股公司 Method for estimating the depth of a pixel and corresponding device
CN109993764A (en) * 2019-04-03 2019-07-09 清华大学深圳研究生院 A kind of light field depth estimation method based on frequency domain energy distribution
CN110276795B (en) * 2019-06-24 2022-11-18 大连理工大学 Light field depth estimation method based on splitting iterative algorithm
CN110276795A (en) * 2019-06-24 2019-09-24 大连理工大学 A kind of light field depth estimation method based on window splitting algorithm
CN110400343A (en) * 2019-07-11 2019-11-01 Oppo广东移动通信有限公司 Depth map processing method and apparatus
CN110390689B (en) * 2019-07-11 2021-07-30 Oppo广东移动通信有限公司 Depth map processing method and device and electronic equipment
CN110390689A (en) * 2019-07-11 2019-10-29 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment
CN110400343B (en) * 2019-07-11 2021-06-18 Oppo广东移动通信有限公司 Depth map processing method and device
CN111145134A (en) * 2019-12-24 2020-05-12 太原科技大学 Block effect-based microlens light field camera full-focus image generation algorithm
CN111145134B (en) * 2019-12-24 2022-04-19 太原科技大学 Block effect-based microlens light field camera full-focus image generation algorithm
CN111260712A (en) * 2020-02-07 2020-06-09 清华大学深圳国际研究生院 Depth estimation method and device based on refocusing focal polar line diagram neighborhood distribution
CN111325763A (en) * 2020-02-07 2020-06-23 清华大学深圳国际研究生院 Occlusion prediction method and device based on light field refocusing
CN111325763B (en) * 2020-02-07 2023-04-07 清华大学深圳国际研究生院 Occlusion prediction method and device based on light field refocusing
CN112884645B (en) * 2021-01-18 2024-05-03 北京工业大学 Tensor sparse constraint-based light field filling method and device
CN112884645A (en) * 2021-01-18 2021-06-01 北京工业大学 Light field filling method and device based on tensor sparse constraint
CN113705796B (en) * 2021-09-28 2024-01-02 太原科技大学 Optical field depth acquisition convolutional neural network based on EPI feature reinforcement
CN113705796A (en) * 2021-09-28 2021-11-26 太原科技大学 Light field depth acquisition convolutional neural network based on EPI feature enhancement
CN114897951B (en) * 2022-05-30 2023-02-28 中国测绘科学研究院 Single light field image depth estimation method and system for aggregating multi-view depth information
CN114897951A (en) * 2022-05-30 2022-08-12 中国测绘科学研究院 Single light field image depth estimation method and system for aggregating multi-view depth information
CN115100269A (en) * 2022-06-28 2022-09-23 电子科技大学 Light field image depth estimation method and system, electronic device and storage medium
CN115100269B (en) * 2022-06-28 2024-04-23 电子科技大学 Light field image depth estimation method, system, electronic equipment and storage medium
CN117474922A (en) * 2023-12-27 2024-01-30 中国科学院长春光学精密机械与物理研究所 Anti-noise light field depth measurement method and system based on inline shielding processing
CN117474922B (en) * 2023-12-27 2024-04-02 中国科学院长春光学精密机械与物理研究所 Anti-noise light field depth measurement method and system based on inline shielding processing

Similar Documents

Publication Publication Date Title
CN107038719A (en) Depth estimation method and system based on light field image angle domain pixel
EP3477931B1 (en) Image processing method and device, readable storage medium and electronic device
US8983152B2 (en) Image masks for face-related selection and processing in images
CN108038456B (en) Anti-deception method in face recognition system
CN104036278B (en) The extracting method of face algorithm standard rules face image
CN108537782B (en) Building image matching and fusing method based on contour extraction
CN103902958A (en) Method for face recognition
CN105139404A (en) Identification camera capable of detecting photographing quality and photographing quality detecting method
US9256950B1 (en) Detecting and modifying facial features of persons in images
US20140079319A1 (en) Methods for enhancing images and apparatuses using the same
US9881202B2 (en) Providing visual effects for images
CN103927520A (en) Method for detecting human face under backlighting environment
CN104732225B (en) image rotation processing method
CN102693426A (en) Method for detecting image salient regions
CN107886507B Salient region detection method based on image background and spatial position
CN103139574B (en) Image processing apparatus and control method thereof
CN106384338B Morphology-based enhancement method for light field depth images
CN107832730A Method for improving face recognition accuracy and face recognition system
CN107862658A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN105684046A (en) Generating image compositions
CN108805826B (en) Method for improving defogging effect
CN114582003B (en) Sleep health management system based on cloud computing service
US9330340B1 (en) Noise estimation for images using polynomial relationship for pixel values of image features
US9940543B2 (en) Control of computer vision pre-processing based on image matching using structural similarity
CN108765316B (en) Mist concentration self-adaptive judgment method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20170811