CN105701813A - Significance detection method of light field image - Google Patents

Significance detection method of light field image

Info

Publication number
CN105701813A
CN105701813A (application CN201610018667.3A)
Authority
CN
China
Prior art keywords
focused image
image
focus degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610018667.3A
Other languages
Chinese (zh)
Inventor
王兴政
闫冰
张永兵
王好谦
李莉华
戴琼海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weilai Media Technology Research Institute
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Weilai Media Technology Research Institute
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weilai Media Technology Research Institute, Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Weilai Media Technology Research Institute
Priority to CN201610018667.3A priority Critical patent/CN105701813A/en
Publication of CN105701813A publication Critical patent/CN105701813A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10148Varying focus

Abstract

The invention discloses a saliency detection method for a light field image. The method comprises the following steps: S1, refocusing the light field image at different depths to obtain N focused images (N a positive integer) and fusing the N focused images into an all-in-focus image; S2, computing the focus degree F(x, y) of each pixel of each focused image, where x and y are the pixel's abscissa and ordinate and $G_x$ and $G_y$ are its grayscale gradient values in the x and y directions; S3, computing the one-dimensional focus distributions of each focused image; S4, computing the background-layer approximation degree of each focused image; and S5, taking the focused image with the largest background-layer approximation degree $BLS(I_i)$ as the background layer of the all-in-focus image. The method detects and separates the background accurately, so that color-contrast and similar processing can be applied to the foreground information to obtain an accurate saliency map.

Description

Saliency detection method for a light field image
[Technical field]
The present invention relates to the field of computer vision, and in particular to a saliency detection method for light field images.
[Background technology]
When you take a picture with a mobile phone or camera, you may notice that faces are automatically framed in the viewfinder. This is face detection, a computer-assisted image processing procedure that targets faces and simulates the attention a person pays to the interesting parts of an image.
The result of saliency detection is a saliency map, which is widely used in computer vision (automatic image segmentation, object recognition, efficient image thumbnailing, context-based image retrieval, and so on) and is an important image preprocessing step.
Most existing saliency detection algorithms rest on certain premises, such as a foreground that differs strongly from the background, a relatively smooth and simple background, and an unoccluded foreground. Many scenes in real life do not satisfy these conditions.
[Summary of the invention]
To handle images that existing saliency algorithms cannot process effectively, such as images whose foreground and background are similar in color and texture, whose background is cluttered, or whose foreground is occluded, the present invention proposes a saliency detection method for a light field image.
A saliency detection method for a light field image comprises the following steps:
S1. Refocus the light field image at different depths to obtain N focused images (N a positive integer), and fuse the N focused images into an all-in-focus image;
S2. Compute the focus degree F(x, y) of each pixel of each focused image:
$F(x, y) = \sqrt{G_x^2 + G_y^2}$;
where x and y are the abscissa and ordinate of pixel (x, y), respectively, and $G_x$ and $G_y$ are its grayscale gradient values in the x and y directions, respectively;
S3. For each focused image, compute the one-dimensional focus distributions:
$D_x = \frac{1}{\alpha} \sum_{y=1}^{h} F(x, y)$, $D_y = \frac{1}{\alpha} \sum_{x=1}^{w} F(x, y)$, $\alpha = \sum_x \sum_y F(x, y)$;
where h and w are the height and width of the focused image, respectively;
S4. Compute the background-layer approximation degree of each focused image:
$BLS(I_i) = \rho \left[ \sum_{x=1}^{w} D_x^i(x) \cdot U(x, w) + \sum_{y=1}^{h} D_y^i(y) \cdot U(y, h) \right]$,
$U(x, w) = \frac{1}{1 + (x/\eta)^2} + \frac{1}{1 + ((w - x)/\eta)^2}$, $U(y, h) = \frac{1}{1 + (y/\eta)^2} + \frac{1}{1 + ((h - y)/\eta)^2}$,
$\rho = \exp\!\left(\lambda \cdot \frac{i}{N}\right)$;
where $BLS(I_i)$ is the background-layer approximation degree of the i-th focused image $I_i$, $D_x^i(x)$ and $D_y^i(y)$ are the $D_x$ and $D_y$ of the i-th focused image, and $\lambda$ is a coefficient;
S5. Take the focused image with the largest background-layer approximation degree $BLS(I_i)$ as the background layer of the all-in-focus image.
In one embodiment, the method further comprises, after step S5:
S6. Compute the object score of each focused image:
$OS(I_i) = \sum_{x=1}^{w} D_x^i(x) \cdot G(x) + \sum_{y=1}^{h} D_y^i(y) \cdot G(y)$,
$G(x) = \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$, $G(y) = \exp\!\left(-\frac{(y - \mu)^2}{2\sigma^2}\right)$;
where $\mu$ and $\sigma$ are Gaussian coefficients and $OS(I_i)$ is the object score of the i-th focused image;
S7. Compute the foreground approximation coefficient of each focused image:
$FLS(I_i) = OS(I_i) \cdot (1 - BLS(I_i))$;
where $FLS(I_i)$ is the foreground approximation coefficient of the i-th focused image $I_i$;
S8. Take the focused images whose foreground approximation coefficients exceed a threshold as candidate foreground layers of the all-in-focus image.
In one embodiment, the method further comprises, after step S8:
S9. Compute the background cue of each superpixel region in the background layer:
$BC(r) = \frac{1}{\gamma} \left[ F_B(r) \cdot \lVert p_r - c \rVert^2 \right]$
where r is a superpixel region of the all-in-focus image, $F_B(r)$ is the focus degree of superpixel region r in the background layer, $p_r$ is the center of region r, c is the center of the background-layer image, and $\gamma$ is a coefficient;
S10. Sort the background-cue values in ascending order; take the regions with the K smallest BC values as the salient region and the regions with the D largest BC values as the background region, where $K + D = N_{BC}$ and $N_{BC}$ is the number of superpixels.
In one embodiment, the method further comprises, after step S10:
S11. For the salient region, compute the location cue:
$LC(r) = \exp(-\beta \cdot BC(r))$; where LC(r) is the location cue of a superpixel region r in the salient region and $\beta$ is a coefficient;
S12. Compute the color-difference saliency map:
$S_C(r) = HV(r) \cdot LC(r)$
where $HV(r) = \left[ \frac{1}{K} \sum_{r'=1}^{K} \frac{1}{\delta(r, r')} \right]^{-1}$,
$\delta(r, r') = \max\{ |red(r) - red(r')|^2, |green(r) - green(r')|^2, |blue(r) - blue(r')|^2 \}$; here r' denotes a superpixel region of the background region, and red(·), green(·), and blue(·) denote the red, green, and blue components of a region, respectively;
S13. Compute the foreground cues:
$SF_j(r) = F_j^F(r) \cdot LC(r)$;
where $F_j^F(r)$ is the focus degree of superpixel region r (as foreground) in the j-th candidate foreground layer and $SF_j(r)$ is the corresponding foreground cue;
S14. Compute the saliency of the all-in-focus image:
$S(r) = \sum_{j=1}^{L} w_j \cdot SF_j(r) + O_x \cdot w_C \cdot S_C(r)$;
where $w_j$, $w_C$, and $O_x$ are parameters.
In one embodiment, $\mu$ is taken as the abscissa at which $D_x$ peaks in each focused image.
In one embodiment, $\mu$ is taken as the ordinate at which $D_y$ peaks in each focused image.
In one embodiment, $\gamma = \max(\lVert p_r - c \rVert^2)$.
Unlike previous approaches that take images captured by an ordinary camera as input, the present invention takes images captured by a Lytro light field camera. A light field records, for every point in space, the propagation of every light ray through it. Compared with conventional imaging, light field imaging depends less on physical devices and more on mathematical tools: an image focused at any depth can be obtained by integrating the light field, so a light field image effectively has a large depth of field. In addition, a light field image can be refocused after capture. These two properties let light field images provide rich depth and focus information for saliency detection, which greatly helps in obtaining an accurate saliency map.
Using the rich focus and depth information of the light field image, the background can be detected and separated accurately, and color-contrast and similar processing can then be applied to the foreground information to obtain an accurate saliency map.
The present invention makes full use of the focus and depth information contained in the light field image, which helps extract and present the edges and interior of the salient object accurately.
In addition, the present invention measures image focus with a gradient function, which greatly improves computational efficiency and also makes edge detection more accurate.
Furthermore, the invention assumes that there is only one salient object, and suppresses outliers on the basis of this assumption.
[Detailed description of the invention]
The preferred embodiments of the invention are described in further detail below.
A saliency detection method for a light field image comprises the following steps.
S1. Capture a light field image with a light field camera, refocus it at different depths to obtain N focused images that form a focal stack, and fuse the images in the stack into an all-in-focus image. The focal stack and the all-in-focus image together serve as input.
Denote the focal stack by $\{I_i\}$, $i = 1, \dots, N$, where $I_i$ is a focused image, and let $I^*$ be the synthesized all-in-focus image. The final goal is a saliency map of $I^*$. First, the mean-shift algorithm divides the all-in-focus image into a series of non-overlapping superpixels (regions of several pixels each); for the 360x360 images used here this typically yields about 300 blocks. The segmentation preserves edge coherence at a suitable spatial granularity. A pixel is denoted (x, y) and a superpixel region r. This step yields the position, center, RGB color (the first three dimensions obtained by color feature extraction), area, and gradient of each superpixel region.
S2. Compute the focus degree of each pixel of each focused image.
For every focused image $I_i$ in the focal stack, compute the focus degree of each pixel. This gives a two-dimensional matrix of the same size as the focused image that represents its focus; values are clearly larger in accurately focused regions, which appear bright in the focus-degree image.
Conventional image-sharpness metrics show that in a sharp image the gray values of neighboring pixels differ strongly in the spatial domain, so an evaluation function based on spatial-domain features can represent sharpness (that is, focus degree). Here a gradient method assesses the focus of each slice (each focused image) in the stack.
For an image $I_i$ in the stack, convert it to a grayscale image $Igray_i$, then compute the grayscale gradient at each pixel:
$G_x(j, k) = Igray_i(j, k+1) - Igray_i(j, k)$, if k = 1;
$G_x(j, k) = Igray_i(j, k) - Igray_i(j, k-1)$, if k is the maximum column index;
$G_x(j, k) = \frac{Igray_i(j, k+1) - Igray_i(j, k-1)}{2}$, otherwise.
$G_x(j, k)$ is the gray-value gradient of pixel (j, k) in the x direction; the gradient $G_y(j, k)$ in the y direction is obtained analogously. Then, for any pixel (x, y), the focus degree is
$F(x, y) = \sqrt{G_x^2 + G_y^2}$,
which yields the focus-degree matrix of the focused image.
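As an illustration, this per-pixel focus measure can be sketched in NumPy, using one-sided differences at the borders and central differences elsewhere, matching the formulas above (the function name is ours, not the patent's):

```python
import numpy as np

def focus_measure(gray):
    """Per-pixel focus degree F = sqrt(Gx^2 + Gy^2) of a grayscale image.

    Border pixels use one-sided differences, interior pixels central
    differences, as in the gradient scheme described in the text."""
    g = gray.astype(np.float64)
    gx = np.empty_like(g)
    gx[:, 0] = g[:, 1] - g[:, 0]            # left border
    gx[:, -1] = g[:, -1] - g[:, -2]         # right border
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0
    gy = np.empty_like(g)
    gy[0, :] = g[1, :] - g[0, :]            # top border
    gy[-1, :] = g[-1, :] - g[-2, :]         # bottom border
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    return np.sqrt(gx ** 2 + gy ** 2)
```

This is equivalent to `np.gradient` followed by a Euclidean norm; a smooth gradient ramp therefore gets a constant focus value.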
Compared with the frequency-domain filtering previously used to detect in-focus regions (a sharper image contains more high-frequency components, so in the DCT coefficient matrix the upper-left corner is bright and the lower-right corner dark), the gradient method is simple in principle, computationally efficient, and accurately detects the edges of salient objects.
With the focus-degree matrices available, two filters select the background layer and the candidate foreground layers from the focal stack. The foreground layers are called candidates because the foreground object spans a relatively large depth range and covers more than one slice, so several slices are selected.
Because synthesizing the multi-slice focal stack involves no depth information, the farthest-focused slice is not necessarily the background layer; it may contain no in-focus object at all. Likewise, the slice containing the farthest accurately focused object is not necessarily the background layer, for example when that object is an outlier unrelated to all other objects. The method used here instead analyzes the distribution of in-focus objects by their position in the image: if most pixels of an in-focus object lie near the image boundary, the object very probably belongs to the background. We therefore traverse the whole focal stack and, for each slice $I_i$, sum its focus values along the x and y axes to obtain two one-dimensional focus distributions:
$D_x = \frac{1}{\alpha} \sum_{y=1}^{h} F(x, y)$
$D_y = \frac{1}{\alpha} \sum_{x=1}^{w} F(x, y)$
where h and w are the height and width of the focused image, respectively, and $\alpha = \sum_x \sum_y F(x, y)$ is a normalization factor.
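These marginal distributions follow directly from the focus-degree matrix; a minimal sketch under the definitions above (the function name is ours):

```python
import numpy as np

def focus_distributions(F):
    """One-dimensional focus distributions of one focal-stack slice.

    F is the h-by-w focus-degree matrix. D_x sums F over y (rows) and
    D_y sums F over x (columns); alpha normalises so that each
    distribution sums to 1."""
    alpha = F.sum()
    Dx = F.sum(axis=0) / alpha  # length w
    Dy = F.sum(axis=1) / alpha  # length h
    return Dx, Dy
```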
As noted above, the salient object tends to lie near the center of the image, so the focus degree near the boundary of the background layer (one of the focused slices) is high. The background layer's $D_x$ and $D_y$ are therefore large at the borders and small near the center. To highlight the different focus distributions of the slices, two filters are applied to the one-dimensional focus distributions.
A U-shaped filter U first processes the distributions:
$U(x, w) = \frac{1}{1 + (x/\eta)^2} + \frac{1}{1 + ((w - x)/\eta)^2}$
$U(y, h) = \frac{1}{1 + (y/\eta)^2} + \frac{1}{1 + ((h - y)/\eta)^2}$
Here $\eta$ is a bandwidth-limiting constant that depends on the image resolution; for 360x360 input, $\eta$ is set to 10. Because the background layer's $D_x$ and $D_y$ are large at the borders, matching the U shape, the filter suppresses the focus values of candidate foreground layers and thereby picks out the background layer. Combining it with the distributions gives each slice's background-layer approximation degree BLS:
$BLS(I_i) = \rho \left[ \sum_{x=1}^{w} D_x^i(x) \cdot U(x, w) + \sum_{y=1}^{h} D_y^i(y) \cdot U(y, h) \right]$
where $\rho = \exp(\lambda \cdot i / N)$ is a weight coefficient, i indexes the i-th slice (the i-th focused image), N is the total number of slices, and $\lambda$ is taken as 0.2. After computing BLS for all N slices, the slice with the maximum value is taken as the background layer $I_B$ (the background slice), together with its corresponding focus-degree matrix. A Gaussian bell filter G then processes the one-dimensional focus distributions:
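A sketch of this score under the formulas above, with a 1-based slice index i and default $\eta = 10$, $\lambda = 0.2$ (names are ours; border-heavy distributions should score higher than center-heavy ones):

```python
import numpy as np

def background_layer_score(Dx, Dy, i, N, eta=10.0, lam=0.2):
    """Background-layer approximation degree BLS for slice i of N (1-based).

    The U-shaped filter emphasises focus energy near the image borders;
    rho = exp(lam * i / N) weights deeper slices slightly more."""
    w, h = len(Dx), len(Dy)
    x = np.arange(1, w + 1, dtype=np.float64)
    y = np.arange(1, h + 1, dtype=np.float64)
    U_x = 1.0 / (1.0 + (x / eta) ** 2) + 1.0 / (1.0 + ((w - x) / eta) ** 2)
    U_y = 1.0 / (1.0 + (y / eta) ** 2) + 1.0 / (1.0 + ((h - y) / eta) ** 2)
    rho = np.exp(lam * i / N)
    return rho * (np.dot(Dx, U_x) + np.dot(Dy, U_y))
```

A slice whose focus energy sits at the border scores much higher than one whose energy sits in the middle, which is exactly how the background slice is singled out.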
$G(x) = \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$
$G(y) = \exp\!\left(-\frac{(y - \mu)^2}{2\sigma^2}\right)$
Here $\mu$ is taken per slice as the abscissa $x_p$ (or ordinate $y_p$) at which $D_x$ (or $D_y$) peaks, and $\sigma$ reflects the size of the foreground object. If $\sigma$ is too large, the whole image is treated as one target object; if too small, small well-delineated regions are also treated as salient objects. After repeated trials, $\sigma = 60$ is used here. The object score (OS) of each slice is then computed:
$OS(I_i) = \sum_{x=1}^{w} D_x^i(x) \cdot G(x) + \sum_{y=1}^{h} D_y^i(y) \cdot G(y)$
Clearly, if a slice contains the salient object, its BLS is small and its OS large, so it belongs to the slices where candidate salient objects reside. The foreground approximation coefficient FLS can therefore be defined as
$FLS(I_i) = OS(I_i) \cdot (1 - BLS(I_i))$
The candidate foreground layers $I_j^F$, $j = 1, \dots, L$, and their corresponding focus-degree matrices can then be selected. Note that, unlike the single background layer, several foreground layers are selected here, hence the name candidate foreground layers. Selecting several layers ensures reliability and makes the later fusion with the color-difference cue more accurate. Moreover, the salient object often spans a larger depth range than the background, so it has a large focus degree on more than one slice. Repeated experiments show that the FLS values of these layers are close to each other, so the slices with FLS > 0.9·max(FLS) are selected.
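The object score and foreground coefficient can be sketched as follows, with $\mu$ taken at the argmax of each distribution as the text specifies (function names are ours; BLS is assumed normalized into [0, 1] for the FLS product):

```python
import numpy as np

def object_score(Dx, Dy, sigma=60.0):
    """Object score OS of one slice: Gaussian bells centred at the peaks
    of D_x and D_y (mu = argmax, per the text); sigma ~ target size."""
    x = np.arange(1, len(Dx) + 1, dtype=np.float64)
    y = np.arange(1, len(Dy) + 1, dtype=np.float64)
    mu_x = x[np.argmax(Dx)]
    mu_y = y[np.argmax(Dy)]
    Gx = np.exp(-(x - mu_x) ** 2 / (2.0 * sigma ** 2))
    Gy = np.exp(-(y - mu_y) ** 2 / (2.0 * sigma ** 2))
    return np.dot(Dx, Gx) + np.dot(Dy, Gy)

def foreground_layer_score(os_i, bls_i):
    """FLS = OS * (1 - BLS). Slices with FLS > 0.9 * max(FLS) become
    candidate foreground layers."""
    return os_i * (1.0 - bls_i)
```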
The result of this step that is decisive for everything that follows is the selection of the background layer $I_B$: its focus degree, combined with the location prior, determines the division of the all-in-focus image into salient and background regions.
S3. Compute the color contrast.
(1) Divide the salient and background regions.
Compared with top-down algorithms, bottom-up algorithms lack high-level information (such as a dedicated detector in the case of face detection), so some priors are necessary. When taking a photo, people naturally place the object of interest near the center of the frame. We therefore use a location prior that encodes the distance of each pixel from the image center: the farther from the center, the larger the distance value. The purpose of this step is to obtain the foreground and background regions of the all-in-focus image and to operate on the foreground. Because the focus degree $F_B(r)$ of background parts far from the central area is highest in the background layer, and the location prior raises their values further, the background layer is chosen for this operation. The background cue (BC) is
$BC(r) = \frac{1}{\gamma} \left[ F_B(r) \cdot \lVert p_r - c \rVert^2 \right]$
where $p_r$ is the center of superpixel region r (obtained when segmenting the all-in-focus image), c is the center of the whole image, and $\gamma = \max(\lVert p_r - c \rVert^2)$ is a normalization factor.
The BC values of the background layer are then thresholded: the $N_{BC}$ values ($N_{BC}$ being the number of superpixels) are sorted in ascending order, the regions with the K smallest values are taken as the salient region and those with the D largest values as the background region, with $K + D = N_{BC}$. This yields the background set R'(r) and the foreground (salient) set R(r).
Since the larger values of the BC matrix lie in the background rather than the salient region, the subsequent operations on the salient region cannot use BC directly; instead a decaying exponential of BC, the location cue (LC), is used:
$LC(r) = \exp(-\beta \cdot BC(r))$
The choice of $\beta$ is important for suppressing peripheral pixels: the larger $\beta$, the stronger the suppression of boundary pixels, but also the lower the brightness of the salient region.
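The two cues can be sketched per superpixel region (names are ours; `focus_B` is assumed to hold each region's mean focus degree in the background slice, and at least one region center must differ from the image center so that $\gamma > 0$):

```python
import numpy as np

def background_and_location_cues(focus_B, centers, image_center, beta=1.0):
    """Background cue BC(r) = F_B(r) * ||p_r - c||^2 / gamma and
    location cue LC(r) = exp(-beta * BC(r)) for each superpixel region.

    focus_B: (n,) mean focus degree of each region in the background slice.
    centers: (n, 2) region centroids; image_center: (2,) centre c.
    gamma normalises by the largest squared centre distance, as in the text."""
    d2 = np.sum((centers - image_center) ** 2, axis=1)
    gamma = d2.max()
    bc = focus_B * d2 / gamma
    lc = np.exp(-beta * bc)
    return bc, lc
```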
(2) Apply color contrast to the salient-region pixels.
The colors of salient-region pixels are contrasted with those of non-salient pixels to extract the features of the salient region. Because the all-in-focus image has already been divided into superpixel blocks of uniform color, the RGB color features extracted during segmentation are used. For each salient region r, compute its color difference from each background region r':
$\delta(r, r') = \max\{ |red(r) - red(r')|^2, |green(r) - green(r')|^2, |blue(r) - blue(r')|^2 \}$.
To ensure reliability, compute the harmonic mean for the salient region r:
$HV(r) = \left[ \frac{1}{K} \sum_{r'=1}^{K} \frac{1}{\delta(r, r')} \right]^{-1}$
where K is the number of salient regions.
Combining the color contrast with the location cue obtained in the previous subsection yields the color-difference saliency map:
$S_C(r) = HV(r) \cdot LC(r)$
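A sketch of this color-contrast cue (names are ours; it assumes every background region differs in color from r, so that $\delta > 0$ and the harmonic mean is defined):

```python
import numpy as np

def color_saliency(rgb_fg, rgb_bg, lc_fg):
    """Colour-contrast saliency SC(r) = HV(r) * LC(r) per salient region.

    delta(r, r') is the largest squared per-channel RGB difference between
    salient region r and background region r'; HV is the harmonic mean of
    delta over the background regions averaged here."""
    sc = np.empty(len(rgb_fg))
    for i, col in enumerate(rgb_fg):
        delta = np.max((rgb_bg - col) ** 2, axis=1)  # max over R, G, B
        hv = len(rgb_bg) / np.sum(1.0 / delta)       # harmonic mean
        sc[i] = hv * lc_fg[i]
    return sc
```

The harmonic mean keeps one very different background region from dominating: a salient region must differ from most background regions to score high.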
S4. Compute the foreground cues.
Because the aim of the invention is to handle images that general saliency algorithms cannot, such as a foreground highly similar to the background or a cluttered background, color contrast alone is not enough: when foreground and background are similar in color and texture, color contrast by itself can mistake background for foreground. Some focus knowledge is needed, and it is supplied by the candidate foreground layers obtained earlier.
Combining each foreground layer's focus-degree matrix with the location cue gives the foreground cues:
$SF_j(r) = F_j^F(r) \cdot LC(r)$
S5. Suppress outliers and obtain the saliency map.
The color-difference saliency map and the foreground cues are combined, using each layer's object score as a weight factor, which gives the expression for the salient region:
$S(r) = \sum_{j=1}^{L} w_j \cdot SF_j(r) + O_x \cdot w_C \cdot S_C(r)$
where $O_x$ is an adjustable parameter that balances the two parts. By experiment, it is provisionally set to 1000 in order to suppress excessive background information in the foreground cues; although this value is large, later refinement may find a more suitable one.
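The weighted combination can be sketched as follows (names are ours; the text fixes $O_x = 1000$ provisionally and suggests the object scores as the weights $w_j$):

```python
import numpy as np

def combine_saliency(sf_list, weights, sc, ox=1000.0, wc=1.0):
    """Final saliency S(r): weighted sum of the L foreground cues SF_j
    plus the colour-contrast map SC scaled by O_x * w_C."""
    s = ox * wc * np.asarray(sc, dtype=np.float64)
    for w_j, sf in zip(weights, sf_list):
        s += w_j * np.asarray(sf, dtype=np.float64)
    return s
```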
Finally, according to the positions of the salient regions found when dividing the salient and background regions, these saliency values are written into the corresponding pixels, with the background region filled with 0. This is the saliency map before outlier suppression.
Outlier suppression rests on a basic assumption: there is exactly one salient object, and it is the largest among the high-brightness regions in the map. Under this assumption, edge extraction is first applied to the saliency map, yielding a series of connected components that include the salient object and any falsely detected background objects. In theory the connected component containing the salient object is the largest, so the regions covered by the other components are suppressed. The method is helpless, however, when the salient object and a falsely detected background object are tightly connected; it can only handle the case where the two are not adjacent.
The above further describes the present invention with reference to specific preferred embodiments, but the specific implementation of the invention is not limited to these descriptions. Those of ordinary skill in the art may make simple deductions or substitutions without departing from the concept of the invention, and these shall all be deemed to fall within the scope of patent protection determined by the appended claims.

Claims (7)

1. A saliency detection method for a light field image, characterized by comprising the following steps:
S1. Refocus the light field image at different depths to obtain N focused images (N a positive integer), and fuse the N focused images into an all-in-focus image;
S2. Compute the focus degree F(x, y) of each pixel of each focused image:
$F(x, y) = \sqrt{G_x^2 + G_y^2}$;
where x and y are the abscissa and ordinate of pixel (x, y), respectively, and $G_x$ and $G_y$ are its grayscale gradient values in the x and y directions, respectively;
S3. For each focused image, compute the one-dimensional focus distributions:
$D_x = \frac{1}{\alpha} \sum_{y=1}^{h} F(x, y)$, $D_y = \frac{1}{\alpha} \sum_{x=1}^{w} F(x, y)$, $\alpha = \sum_x \sum_y F(x, y)$;
where h and w are the height and width of the focused image, respectively;
S4. Compute the background-layer approximation degree of each focused image:
$BLS(I_i) = \rho \left[ \sum_{x=1}^{w} D_x^i(x) \cdot U(x, w) + \sum_{y=1}^{h} D_y^i(y) \cdot U(y, h) \right]$,
$U(x, w) = \frac{1}{1 + (x/\eta)^2} + \frac{1}{1 + ((w - x)/\eta)^2}$, $U(y, h) = \frac{1}{1 + (y/\eta)^2} + \frac{1}{1 + ((h - y)/\eta)^2}$,
$\rho = \exp\!\left(\lambda \cdot \frac{i}{N}\right)$;
where $BLS(I_i)$ is the background-layer approximation degree of the i-th focused image $I_i$, $D_x^i(x)$ and $D_y^i(y)$ are the $D_x$ and $D_y$ of the i-th focused image, and $\lambda$ is a coefficient;
S5. Take the focused image with the largest background-layer approximation degree $BLS(I_i)$ as the background layer of the all-in-focus image.
2. The saliency detection method for a light field image of claim 1, characterized in that the method further comprises, after step S5:
S6. Compute the object score of each focused image:
$OS(I_i) = \sum_{x=1}^{w} D_x^i(x) \cdot G(x) + \sum_{y=1}^{h} D_y^i(y) \cdot G(y)$,
$G(x) = \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$, $G(y) = \exp\!\left(-\frac{(y - \mu)^2}{2\sigma^2}\right)$;
where $\mu$ and $\sigma$ are Gaussian coefficients and $OS(I_i)$ is the object score of the i-th focused image;
S7. Compute the foreground approximation coefficient of each focused image:
$FLS(I_i) = OS(I_i) \cdot (1 - BLS(I_i))$;
where $FLS(I_i)$ is the foreground approximation coefficient of the i-th focused image $I_i$;
S8. Take the focused images whose foreground approximation coefficients exceed a threshold as candidate foreground layers of the all-in-focus image.
3. The saliency detection method for a light field image of claim 2, characterized in that the method further comprises, after step S8:
S9. Compute the background cue of each superpixel region in the background layer:
$BC(r) = \frac{1}{\gamma} \left[ F_B(r) \cdot \lVert p_r - c \rVert^2 \right]$
where r is a superpixel region of the all-in-focus image, $F_B(r)$ is the focus degree of superpixel region r in the background layer, $p_r$ is the center of region r, c is the center of the background-layer image, and $\gamma$ is a coefficient;
S10. Sort the background-cue values in ascending order; take the regions with the K smallest BC values as the salient region and the regions with the D largest BC values as the background region, where $K + D = N_{BC}$ and $N_{BC}$ is the number of superpixels.
4. The saliency detection method for a light field image of claim 3, characterized in that the method further comprises, after step S10:
S11. For the salient region, compute the location cue:
$LC(r) = \exp(-\beta \cdot BC(r))$; where LC(r) is the location cue of a superpixel region r in the salient region and $\beta$ is a coefficient;
S12. Compute the color-difference saliency map:
$S_C(r) = HV(r) \cdot LC(r)$
where $HV(r) = \left[ \frac{1}{K} \sum_{r'=1}^{K} \frac{1}{\delta(r, r')} \right]^{-1}$,
$\delta(r, r') = \max\{ |red(r) - red(r')|^2, |green(r) - green(r')|^2, |blue(r) - blue(r')|^2 \}$; here r' denotes a superpixel region of the background region, and red(·), green(·), and blue(·) denote the red, green, and blue components of a region, respectively;
S13. Compute the foreground cues:
$SF_j(r) = F_j^F(r) \cdot LC(r)$;
where $F_j^F(r)$ is the focus degree of superpixel region r (as foreground) in the j-th candidate foreground layer and $SF_j(r)$ is the corresponding foreground cue;
S14. Compute the saliency of the all-in-focus image:
$S(r) = \sum_{j=1}^{L} w_j \cdot SF_j(r) + O_x \cdot w_C \cdot S_C(r)$;
where $w_j$, $w_C$, and $O_x$ are parameters.
5. The saliency detection method for a light field image of claim 4, characterized in that $\mu$ is taken as the abscissa at which $D_x$ peaks in each focused image.
6. The saliency detection method for a light field image of claim 4, characterized in that $\mu$ is taken as the ordinate at which $D_y$ peaks in each focused image.
7. The saliency detection method for a light field image of claim 4, characterized in that $\gamma = \max(\lVert p_r - c \rVert^2)$.
CN201610018667.3A 2016-01-11 2016-01-11 Significance detection method of light field image Pending CN105701813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610018667.3A CN105701813A (en) 2016-01-11 2016-01-11 Significance detection method of light field image


Publications (1)

Publication Number Publication Date
CN105701813A true CN105701813A (en) 2016-06-22

Family

ID=56226362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610018667.3A Pending CN105701813A (en) 2016-01-11 2016-01-11 Significance detection method of light field image

Country Status (1)

Country Link
CN (1) CN105701813A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146258A (en) * 2017-04-26 2017-09-08 Graduate School at Shenzhen, Tsinghua University A detection method for image salient regions
CN108120392A (en) * 2017-11-30 2018-06-05 Southeast University Three-dimensional bubble measurement system and method for gas-liquid two-phase flow
CN108537798A (en) * 2017-11-29 2018-09-14 Zhejiang University of Technology A fast superpixel segmentation method
CN109344818A (en) * 2018-09-28 2019-02-15 Hefei University of Technology A light field salient target detection method based on a deep convolutional network
CN110211115A (en) * 2019-06-03 2019-09-06 Dalian University of Technology A light field saliency detection method based on depth-guided cellular automata
CN111881925A (en) * 2020-08-07 2020-11-03 Jilin University Saliency detection method based on selective light field refocusing with a camera array
CN114549863A (en) * 2022-04-27 2022-05-27 Xidian University Light field salient target detection method based on pixel-level noise label supervision
CN117496187A (en) * 2023-11-15 2024-02-02 Anqing Normal University Light field image saliency detection method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693426A (en) * 2012-05-21 2012-09-26 Graduate School at Shenzhen, Tsinghua University Method for detecting image salient regions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUN ZHANG et al.: "Saliency Detection with a Deeper Investigation of Light Field", Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence *
NIANYI LI et al.: "Saliency Detection on Light Field" *


Similar Documents

Publication Publication Date Title
CN105701813A (en) Significance detection method of light field image
CN107862698B (en) Light field foreground segmentation method and device based on K-means clustering
CN108446617B (en) Rapid face detection method resistant to side-face interference
US9104914B1 (en) Object detection with false positive filtering
CN101630363B (en) Rapid detection method of face in color image under complex background
CN107491762B (en) A pedestrian detection method
CN103824070B (en) A rapid pedestrian detection method based on computer vision
US20120070041A1 (en) System And Method For Face Verification Using Video Sequence
US11699290B1 (en) Pedestrian re-identification method and apparatus based on local feature attention
CN103999124A (en) Multispectral imaging system
CN114973317B (en) Pedestrian re-identification method based on multi-scale neighboring interaction features
CN105787930A (en) Sharpness-based significance detection method and system for virtual images
CN104463134B (en) License plate detection method and system
WO2018076138A1 (en) Target detection method and apparatus based on large-scale high-resolution hyper-spectral image
US8520955B2 (en) Object detection apparatus and method
Xiao et al. Defocus blur detection based on multiscale SVD fusion in gradient domain
CN103530638A (en) Method for matching pedestrians under multiple cameras
US8170332B2 (en) Automatic red-eye object classification in digital images using a boosting-based framework
CN108875744A (en) Multi-oriented text line detection method based on rectangular box coordinate transformation
CN105184308B (en) Remote sensing image building detection classification method based on global optimization decision
CN109800755A (en) A small target detection method for remote sensing images based on multi-scale features
Niloy et al. Cfl-net: Image forgery localization using contrastive learning
CN110348366B (en) Automatic optimal face searching method and device
CN109800637A (en) A small target detection method for remote sensing images
CN112990066A (en) Remote sensing image solid waste identification method and system based on multi-strategy enhancement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160622
