CN104268886B - Image conspicuousness extraction method based on color context inhibition - Google Patents

Image conspicuousness extraction method based on color context inhibition

Info

Publication number
CN104268886B
CN104268886B (application CN201410523003.3A)
Authority
CN
China
Prior art keywords
color
opposition
significance
formula
suppression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410523003.3A
Other languages
Chinese (zh)
Other versions
CN104268886A (en)
Inventor
张骏
谢昭
高隽
汪萌
吴信东
杨勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN201410523003.3A
Publication of CN104268886A
Application granted
Publication of CN104268886B
Expired - Fee Related (current legal status)
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image saliency extraction method based on color context suppression. The method first constructs Gabor opponent-color filters that jointly encode the spatial information and color information of a color image, and extracts a color surface feature with opponent-color properties. It then estimates the color context suppression that arises when the color surface feature is affected by color-surface context or color-edge context, and extracts a color edge feature with both spatial and color opponency. Color saliency measures are defined for the color surface feature and the color edge feature, taking the product of the dissimilarity between color feature points and the spatial distance between pixels as the global color saliency. Finally, the color surface saliency and color edge saliency are integrated by color energy across the orientation channels and opponent-color channels to obtain the color saliency of the color image. The method effectively improves the accuracy of image saliency extraction.

Description

An image saliency extraction method based on color context suppression
Technical field
The invention belongs to the fields of computer vision and image processing and analysis, and relates generally to an image saliency extraction method based on color context suppression.
Background art
Saliency extraction is one of the perceptual functions of the human visual system. When observing an image, the visual system locates the regions and targets of interest within one to three seconds; this process is saliency extraction. With the rapid development of science and technology, visual saliency research has attracted growing attention from researchers in fields such as neuroscience and computer vision. Saliency extraction can greatly shorten the time needed to analyze and understand an image, and can be applied to image retrieval, image segmentation, visual recognition and tracking, among other tasks.
Mainstream image saliency extraction methods fall into two classes: local methods and global methods.
Local methods generally exploit many visual features of the image and process each feature as an independent channel, ignoring the interactions between visual features. By trying to make saliency extraction sensitive to every visual feature, they not only increase computational complexity but also impair the correctness of the saliency measure. In addition, local methods focus mainly on the edges of salient targets rather than on the whole image; because natural scene images contain local discontinuities in texture, local methods easily mistake richly textured background regions for salient regions.
Global methods extract saliency through a "holistic representation". They are generally effective when a salient region or edge resembles its neighborhood but still differs from the image as a whole. Although global methods distinguish salient targets or regions over the entire image, they depend on heuristic or relatively complex models and ignore the local contrast information in the image.
Although a large number of saliency extraction methods have appeared in computer vision for characterizing salient regions and boundaries, these methods still have shortcomings:
1. Current saliency research typically processes multiple visual features independently and merges them at the end, or relies on global or top-down priors for assistance, which increases both the redundancy between information channels and the computational complexity.
2. It is reported that most saliency research treats color as an auxiliary feature; no study has yet mined the role of color information in saliency extraction as the sole feature of interest.
3. The amount of color information in the visual world is growing rapidly, but visual tasks are often confined to simple color-space conversions of that information, ignoring the spatial attributes that color itself carries, such as color boundaries, and failing to consider the interaction between spatial and color information.
4. In human color perception, the presence of surrounding background colors changes how the color of an object's appearance is perceived, and the visual system suppresses the color information of the surrounding background. So far, however, saliency extraction methods lack research on this color context suppression and ignore the influence of surrounding background color on color perception.
5. Current saliency extraction methods are insufficient to explain the behavioral laws by which neurons in the brain's visual cortex analyze saliency when processing image signals, so the methods tend to be complicated and their accuracy is hard to improve.
Recently, a large body of neurophysiological and cognitive-psychology research in neuroscience has provided ample evidence for saliency extraction:
1. Area V1 of the brain's visual cortex is the main cortical region for bottom-up saliency extraction. Each V1 neuron responds to one or more feature dimensions, such as orientation, scale, and color; saliency is therefore formed from many different feature contrasts.
2. V1 contains a large number of color neurons that respond to color information and orientation information simultaneously, and saliency is enhanced through these neuron responses rather than by simply summing orientation-neuron and color-neuron responses.
3. Two classes of color neurons dominate the visual cortex: single-opponent color neurons and double-opponent color-and-orientation neurons. The former respond to uniform color surfaces, the latter to color edges.
4. V1 neurons have only positive firing rates and receive suppressive responses from neurons in the classical receptive-field neighborhood; with respect to the observed feature dimension, this suppression acts on identical or opposite feature responses. As a result, the color context that the surrounding background exerts on a salient region or target suppresses the color-information responses of background areas and enhances the saliency of the region or target of interest.
Summary of the invention
The present invention aims to address the inability of existing image saliency extraction methods to simulate well the role of the brain's visual information processing in saliency analysis, and to focus on the role of color information in saliency extraction, by proposing an image saliency extraction method based on color context suppression.
The present invention adopts the following technical scheme to solve this technical problem:
The image saliency extraction method based on color context suppression of the present invention proceeds as follows:
Step 1: jointly encode the spatial information and color information of the color image i(x, y) to obtain its color surface feature s(x, y, o, c):
1.1. Construct a Gabor filter bank g = {g(x, y, o)} composed of several Gabor filters g(x, y, o), where x and y are the horizontal and vertical coordinates of the filter and o is its spatial orientation; each orientation o consists of two mutually orthogonal opposite directions o+ and o−.
1.2. Jointly encode the spatial information and color information of each Gabor filter g(x, y, o) to obtain the Gabor opponent-color filters f(x, y, o, c), and build from them the Gabor opponent-color filter bank f = {f(x, y, o, c)}, where c is the opponent-color channel of the filter; each opponent channel c consists of two mutually complementary channels c+ and c−.
1.2.1. Set the negative values of each Gabor filter g(x, y, o) to 0 to obtain its activation subunit g+(x, y, o), which retains only the activating component; set the positive values to 0 to obtain its suppression subunit g−(x, y, o), which retains only the suppressive component.
1.2.2. Decompose the color image i(x, y) into three color channels, red r, green g, and blue b, and form four groups of opponent-color channels c, each group consisting of two mutually complementary channels c+ and c−:
a red–green opponent channel: activate-red/suppress-green r+/g− and activate-green/suppress-red g+/r−;
a blue–yellow opponent channel: activate-yellow/suppress-blue y+/b− and activate-blue/suppress-yellow b+/y−;
a red–cyan opponent channel: activate-red/suppress-cyan r+/c− and activate-cyan/suppress-red c+/r−;
a black–white opponent channel: activate-white/suppress-black wh+/bl+ and activate-black/suppress-white wh−/bl−.
1.2.3. Build the Gabor opponent-color filters f(x, y, o, c) with formula (1):

f(x, y, o, c) = Σk wk · gk(x, y, o)   (1)

In formula (1): k ∈ {r, g, b}; gk(x, y, o) is the activation subunit g+(x, y, o) or the suppression subunit g−(x, y, o) of the Gabor filter for color channel k; wk ∈ {wr, wg, wb} is the preset weight of color channel k.
1.3. Compute the color surface feature s(x, y, o, c) of the color image i(x, y) with formula (2):

s(x, y, o, c) = i(x, y) * f(x, y, o, c)   (2)

In formula (2): * is the convolution operator.
Step 2: estimate the color context suppression of the color image i(x, y). Color context suppression refers to the changed feature values s*(x, y, o, c) caused when the color surface feature s(x, y, o, c) of the color image is affected by color-surface context information or color-edge context information:
2.1. Given an orientation channel o, use formula (3) to estimate the feature change s*(x, y, o, c+) caused when the color surface feature s(x, y, o, c+) of opponent channel c+ is affected by color-surface context information, taking the color surface feature s(x, y, o, c−) of the complementary opponent channel c− as the normalization factor.
In formula (3): ⌊·⌋ is the half-wave rectification operator; k is a preset scale factor; σ1 is a preset semi-saturation constant.
2.2. Given an opponent channel c, use formula (4) to estimate the feature change s*(x, y, o+, c) caused when the color surface feature s(x, y, o+, c) of direction channel o+ is affected by color-edge context information, taking the color surface feature s(x, y, o−, c) of the opposite direction channel o− as the normalization factor.
2.3. Apply the same feature extraction of steps 2.1–2.2 to both complementary channels c+ and c− of every opponent channel c and to both opposite directions o+ and o− of every orientation channel o, yielding the context-affected color surface feature s*(x, y, o, c).
Step 3: compute the color edge feature d(x, y, o, c) of the color image i(x, y) with formula (5):

d(x, y, o, c) = s*(x, y, o, c) * g(x, y, o)   (5)
Step 4: for any pixel u(x, y) of the color image i(x, y) under the influence of color context information, define the color surface saliency dreg(u) of its color surface feature s*u(x, y, o, c) and the color edge saliency dbdry(u) of its color edge feature du(x, y, o, c):
4.1. Use formula (6) to compute the dissimilarity between the color surface feature s*u(x, y, o, c) of pixel u(x, y) and the color surface feature s*v(x, y, o, c) of every other pixel v(x, y), and take it as the color surface saliency dreg(u) of pixel u(x, y):

dreg(u) = Σ_{v∈n(x,y)} |s*u(x, y, o, c) − s*v(x, y, o, c)|   (6)

In formula (6): n(x, y) is the spatial set of all pixels.
4.2. Use formula (7) to compute the dissimilarity between the color edge feature du(x, y, o, c) of pixel u(x, y) and the color edge feature dv(x, y, o, c) of every other pixel v(x, y), and take it as the color edge saliency dbdry(u) of pixel u(x, y):

dbdry(u) = Σ_{v∈n(x,y)} |du(x, y, o, c) − dv(x, y, o, c)|   (7)
Step 5: use formula (8) to define the spatial distance between any pixel u(x, y) and every other pixel v(x, y) of the color image i(x, y), and take it as the position saliency dloc(u) of pixel u(x, y):

dloc(u) = Σ_{v∈n(x,y)} (1 / (σ2·√(2π))) · exp(−|u(x, y) − v(x, y)|² / (2σ2²))   (8)

In formula (8): σ2 is a preset weighting over the distances from pixel u(x, y) to the other pixels v(x, y) of the color image i(x, y).
Step 6: use formula (9) to compute the global color surface saliency ds(u) and global color edge saliency dd(u) of pixel u(x, y):

ds(u) = dreg(u) × dloc(u)   (9)
dd(u) = dbdry(u) × dloc(u)
Step 7: sum the global color surface saliency ds(u) over the different orientation channels o and opponent-color channels c to obtain the color surface saliency es(u) of pixel u(x, y); likewise, sum the global color edge saliency dd(u) over the different orientation channels o and opponent-color channels c to obtain the color edge saliency ed(u).
Step 8: extract the single saliency e(u) of pixel u(x, y) with formula (11):

e(u) = √(es²(u) + ed²(u))   (11)

Step 9: apply the same extraction of steps 4–8 to all pixels of the color image i(x, y), and smooth the per-pixel saliencies extracted by formula (11) with a Gaussian blur, thereby obtaining the final saliency of the color image i(x, y).
Compared with the prior art, the present invention has the following beneficial effects:
1. Inspired by neuroscience research, the present invention abandons the traditional concept of independent multiple features. Following the role that the spatial attributes of color neurons in the visual cortex play in visual saliency analysis, it jointly encodes the spatial information and color information of the color image and extracts salient color features.
2. Based on the non-negativity of visual cortex neurons and the suppression they receive from neighboring neurons, the present invention incorporates the opposite orientation channels and the complementary opponent-color channels as normalization factors in a divisive-normalization framework, suppressing the non-salient surrounding background regions of the color image and strengthening the responses of salient regions or targets.
3. The proposed saliency extraction method based on color context suppression simulates, to some extent, the color-information processing of visual saliency extraction studied in neuroscience; it effectively improves the role of color information in saliency extraction and raises the accuracy of the extracted saliency.
Brief description of the drawings
Fig. 1 is the workflow of the method of the invention;
Fig. 2a is a color image input to the method of the invention for saliency extraction;
Fig. 2b is the color surface saliency obtained by the method;
Fig. 2c is the color edge saliency obtained by the method;
Fig. 2d is the final color saliency obtained by the method after blur optimization;
Fig. 3a is a quantitative comparison of the method of the invention against several other saliency extraction methods on the MIT image set, using the false-alarm-rate/detection-rate curve as the metric;
Fig. 3b is the same quantitative comparison on the Toronto image set;
Fig. 3c is the same quantitative comparison on the Koostra image set.
Table 1a is a quantitative comparison of the method of the invention against several other saliency extraction methods on the MIT image set, using the area under the false-alarm-rate/detection-rate curve as the metric;
Table 1b is the same quantitative comparison on the Toronto image set;
Table 1c is the same quantitative comparison on the Koostra image set.
Specific embodiment
In this embodiment, the saliency extraction method based on color context suppression is mainly applied to image retrieval, image segmentation, visual recognition, tracking, and similar tasks. It links computer vision with neuroscience research: using the color and spatial characteristics of color neurons in the visual cortex and the suppression between neurons, it jointly encodes the color information and spatial information of the color image and estimates color context suppression through divisive normalization. Over the whole image space, the product of the dissimilarity between all color feature points and the spatial distance between all pixels yields, for each pixel, a global color surface saliency and a global color edge saliency; these are integrated into a single color saliency per pixel by an energy term, giving the final saliency of the color image.
As shown in Fig. 1, the saliency extraction method based on color context suppression of this embodiment proceeds as follows:
Step 1: jointly encode the spatial information and color information of the color image i(x, y) to obtain its color surface feature s(x, y, o, c):
1.1. Construct a Gabor filter bank g = {g(x, y, o)} composed of several Gabor filters g(x, y, o), where x and y are the horizontal and vertical coordinates of the filter and o is its spatial orientation; each orientation o consists of two mutually orthogonal opposite directions o+ and o−.
In the experimental tests of the invention, the input source color images come from three image sets commonly used for saliency extraction: Toronto, MIT, and Koostra. The Gabor filter bank g consists of Gabor filters g(x, y, o) in four orientations; each filter is built from a two-dimensional Gabor function of the rotated coordinates x' = x·cosθ + y·sinθ and y' = −x·sinθ + y·cosθ, where the four orientations θ are −45°, 0°, 45°, and 90°, the phase φ is 0°, the aspect ratio γ = 0.3, the window size σ0 = 7.25, the wavelength λ = σ0/0.8 = 9.06, and the filter size is 17 × 17, so the pixel range of the coordinates x and y is [−8, 8]. The input color image is downsampled into a four-level pyramid, each level shrinking by a ratio of 0.6. The number of filters g(x, y, o) in the bank is therefore n = 4 × 4 = 16.
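The filter bank above can be sketched directly from the quoted parameters. The patent's own Gabor equation was an image and did not survive conversion, so the standard two-dimensional Gabor form matching the listed parameters (σ0, λ, γ, φ, θ) is assumed here; only one pyramid scale of the four is shown.

```python
import numpy as np

def gabor(theta_deg, sigma0=7.25, lam=9.06, gamma=0.3, phi_deg=0.0, size=17):
    """One 2-D Gabor filter built from the parameters quoted in the text.

    Assumes the standard form exp(-(x'^2 + gamma^2 y'^2) / (2 sigma0^2))
    * cos(2*pi*x'/lam + phi); the patent's exact equation is not recoverable.
    """
    half = size // 2                      # size 17 -> coordinates in [-8, 8]
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(theta_deg)
    xp = x * np.cos(t) + y * np.sin(t)    # rotated coordinates x', y'
    yp = -x * np.sin(t) + y * np.cos(t)
    env = np.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2 * sigma0 ** 2))
    return env * np.cos(2 * np.pi * xp / lam + np.deg2rad(phi_deg))

# the four orientations of the text; combined with four pyramid levels
# this would give the n = 16 filters of the bank
bank = {theta: gabor(theta) for theta in (-45, 0, 45, 90)}
```

Applying each filter at the four pyramid levels reproduces the n = 4 × 4 = 16 filters of the bank.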
1.2. Jointly encode the spatial information and color information of each Gabor filter g(x, y, o) to obtain the Gabor opponent-color filters f(x, y, o, c), and build from them the Gabor opponent-color filter bank f = {f(x, y, o, c)}, where c is the opponent-color channel of the filter; each opponent channel c consists of two mutually complementary channels c+ and c−.
1.2.1. Set the negative values of each Gabor filter g(x, y, o) to 0 to obtain its activation subunit g+(x, y, o), which retains only the activating component; set the positive values to 0 to obtain its suppression subunit g−(x, y, o), which retains only the suppressive component.
In the experimental tests, the activation subunit g+(x, y, o) and the suppression subunit g−(x, y, o) are two independent subunits obtained by solving each Gabor filter g(x, y, o) separately.
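The subunit split of step 1.2.1 is a simple clamping operation, sketched below; note that the two subunits sum back to the original filter.

```python
import numpy as np

def split_subunits(g):
    """Split a Gabor filter into activation (g+) and suppression (g-) subunits.

    Per step 1.2.1: g+ keeps only the positive lobes (negatives set to 0),
    g- keeps only the negative lobes (positives set to 0).
    """
    g_plus = np.maximum(g, 0.0)   # activation subunit g+
    g_minus = np.minimum(g, 0.0)  # suppression subunit g-
    return g_plus, g_minus

g = np.array([[0.5, -0.2], [-0.7, 0.1]])
gp, gm = split_subunits(g)
```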
1.2.2. Decompose the color image i(x, y) into three color channels, red r, green g, and blue b, and form four groups of opponent-color channels c, each group consisting of two mutually complementary channels c+ and c−, so the number of opponent channels is m = 4 × 2 = 8. The four groups are:
a red–green opponent channel: activate-red/suppress-green r+/g− and activate-green/suppress-red g+/r−;
a blue–yellow opponent channel: activate-yellow/suppress-blue y+/b− and activate-blue/suppress-yellow b+/y−;
a red–cyan opponent channel: activate-red/suppress-cyan r+/c− and activate-cyan/suppress-red c+/r−;
a black–white opponent channel: activate-white/suppress-black wh+/bl+ and activate-black/suppress-white wh−/bl−.
1.2.3. Build the Gabor opponent-color filters f(x, y, o, c) with formula (1):

f(x, y, o, c) = Σk wk · gk(x, y, o)   (1)

In formula (1): k ∈ {r, g, b}; gk(x, y, o) is the activation subunit g+(x, y, o) or the suppression subunit g−(x, y, o) of the Gabor filter for color channel k; wk ∈ {wr, wg, wb} is the preset weight of color channel k. In the experimental tests, the weight vector [wr, wg, wb] of the red, green, and blue channels is set separately for each of the four opponent groups (red–green, blue–yellow, red–cyan, and black–white). The number of opponent-color filters obtained is r = m × n = 128.
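Formula (1) can be sketched as a weighted sum of the two subunits. The concrete weight values in the patent were images and did not survive conversion, so the r+/g− weighting below is purely illustrative; the sign vector encodes which channels activate (use g+) and which suppress (use g−).

```python
import numpy as np

def opponent_filter(g, weights, signs):
    """Gabor opponent-colour filter f = sum_k w_k * g_k  (formula (1)).

    signs[k] = +1 -> channel k uses the activation subunit g+,
    signs[k] = -1 -> channel k uses the suppression subunit g-,
    signs[k] =  0 -> channel k does not participate.
    """
    g_plus, g_minus = np.maximum(g, 0.0), np.minimum(g, 0.0)
    f = np.zeros_like(g, dtype=float)
    for w, s in zip(weights, signs):
        if s > 0:
            f += w * g_plus
        elif s < 0:
            f += w * g_minus
    return f

g = np.array([[1.0, -1.0], [0.5, -0.5]])
# hypothetical r+/g- weighting: activate red, suppress green, ignore blue
f_rg = opponent_filter(g, weights=(1 / np.sqrt(2), 1 / np.sqrt(2), 0.0),
                       signs=(+1, -1, 0))
```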
1.3. Compute the color surface feature s(x, y, o, c) of the color image i(x, y) with formula (2):

s(x, y, o, c) = i(x, y) * f(x, y, o, c)   (2)

In formula (2): * is the convolution operator.
Step 2: estimate the color context suppression of the color image i(x, y). Color context suppression refers to the changed feature values s*(x, y, o, c) caused when the color surface feature s(x, y, o, c) of the color image is affected by color-surface context information or color-edge context information:
2.1. Given an orientation channel o, use formula (3) to estimate the feature change s*(x, y, o, c+) caused when the color surface feature s(x, y, o, c+) of opponent channel c+ is affected by color-surface context information, taking the color surface feature s(x, y, o, c−) of the complementary opponent channel c− as the normalization factor.
In formula (3): ⌊·⌋ is the half-wave rectification operator; k is a preset scale factor; σ1 is a preset semi-saturation constant. In the experimental tests, k = 1 and σ1 = 0.225.
2.2. Given an opponent channel c, use formula (4) to estimate the feature change s*(x, y, o+, c) caused when the color surface feature s(x, y, o+, c) of direction channel o+ is affected by color-edge context information, taking the color surface feature s(x, y, o−, c) of the opposite direction channel o− as the normalization factor.
2.3. Apply the same feature extraction of steps 2.1–2.2 to both complementary channels c+ and c− of every opponent channel c and to both opposite directions o+ and o− of every orientation channel o, yielding the context-affected color surface feature s*(x, y, o, c).
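Formulas (3) and (4) themselves were images and did not survive conversion. The sketch below uses one standard divisive-normalization form assembled only from the ingredients the text names: half-wave rectification, a scale factor k = 1, a semi-saturation constant σ1 = 0.225, and the complementary channel as the normalization factor. The exact exponents and denominator of the patent's formulas may differ.

```python
import numpy as np

def context_suppress(s_pref, s_comp, k=1.0, sigma1=0.225):
    """Assumed divisive-normalisation form of the step-2 context suppression.

    s_pref: response of the preferred channel (c+ in step 2.1, o+ in 2.2)
    s_comp: response of the complementary/opposite channel, used as the
            normalisation factor.
    """
    r = np.maximum(s_pref, 0.0)        # half-wave rectification
    n = np.maximum(s_comp, 0.0)
    return k * r / (sigma1 + r + n)    # suppressed response s*

s_star = context_suppress(np.array([0.9, 0.0]), np.array([0.1, 0.5]))
```

A stronger complementary-channel response shrinks the output, which is the suppression behavior the step describes.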
Step 3: compute the color edge feature d(x, y, o, c) of the color image i(x, y) with formula (5):

d(x, y, o, c) = s*(x, y, o, c) * g(x, y, o)   (5)

In formula (5): the color edge feature d(x, y, o, c) is extracted in the same way for the color surface features s*(x, y, o, c) of every orientation channel o and opponent channel c, where o covers both opposite directions o+ and o− and c covers both complementary channels c+ and c−.
Step 4: for any pixel u(x, y) of the color image i(x, y) under the influence of color context information, define the color surface saliency dreg(u) of its color surface feature s*u(x, y, o, c) and the color edge saliency dbdry(u) of its color edge feature du(x, y, o, c):
4.1. Use formula (6) to compute the dissimilarity between the color surface feature s*u(x, y, o, c) of pixel u(x, y) and the color surface feature s*v(x, y, o, c) of every other pixel v(x, y), and take it as the color surface saliency dreg(u) of pixel u(x, y):

dreg(u) = Σ_{v∈n(x,y)} |s*u(x, y, o, c) − s*v(x, y, o, c)|   (6)

In formula (6): n(x, y) is the spatial set of all pixels.
4.2. Use formula (7) to compute the dissimilarity between the color edge feature du(x, y, o, c) of pixel u(x, y) and the color edge feature dv(x, y, o, c) of every other pixel v(x, y), and take it as the color edge saliency dbdry(u) of pixel u(x, y):

dbdry(u) = Σ_{v∈n(x,y)} |du(x, y, o, c) − dv(x, y, o, c)|   (7)
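Formulas (6) and (7) share one pattern: for each pixel, sum the absolute differences to every other pixel. For a single orientation/opponent channel this can be sketched as:

```python
import numpy as np

def dissimilarity_map(feat):
    """Per-pixel dissimilarity of formulas (6)/(7): for each pixel u, the sum
    over all pixels v of |feat(u) - feat(v)|, for one (o, c) channel.
    O(N^2) pairwise form, fine for illustration on small images."""
    f = feat.ravel().astype(float)
    d = np.abs(f[:, None] - f[None, :]).sum(axis=1)
    return d.reshape(feat.shape)

d = dissimilarity_map(np.array([[0.0, 1.0], [3.0, 0.0]]))
```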
Step 5: use formula (8) to define the spatial distance between any pixel u(x, y) and every other pixel v(x, y) of the color image i(x, y), and take it as the position saliency dloc(u) of pixel u(x, y):

dloc(u) = Σ_{v∈n(x,y)} (1 / (σ2·√(2π))) · exp(−|u(x, y) − v(x, y)|² / (2σ2²))   (8)

In formula (8): σ2 is a preset weighting over the distances from pixel u(x, y) to the other pixels v(x, y); in the experimental tests, σ2 = 2.5.
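Formula (8) can be computed once per image size, since it depends only on pixel coordinates. A direct (memory-hungry, illustration-only) sketch with the quoted σ2 = 2.5:

```python
import numpy as np

def loc_saliency(h, w, sigma2=2.5):
    """Position saliency d_loc of formula (8): for each pixel, a
    Gaussian-weighted sum over its distances to every other pixel.
    O(N^2) memory; suitable for small examples, not full-size images."""
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)
    g = np.exp(-d2 / (2 * sigma2 ** 2)) / (sigma2 * np.sqrt(2 * np.pi))
    return g.sum(axis=1).reshape(h, w)

dloc = loc_saliency(5, 5)
```

The map peaks at the image center, since central pixels are nearest to all others.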
Step 6: use formula (9) to compute the global color surface saliency ds(u) and global color edge saliency dd(u) of pixel u(x, y):

ds(u) = dreg(u) × dloc(u)   (9)
dd(u) = dbdry(u) × dloc(u)
Step 7: sum the global color surface saliency ds(u) over the different orientation channels o and opponent-color channels c to obtain the color surface saliency es(u) of pixel u(x, y); likewise, sum the global color edge saliency dd(u) over the different orientation channels o and opponent-color channels c to obtain the color edge saliency ed(u).
Step 8: extract the single saliency e(u) of pixel u(x, y) with formula (11):

e(u) = √(es²(u) + ed²(u))   (11)
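Steps 6–8 condense to a few array operations. The arrays below are random stand-ins for the per-channel saliency stacks, and the axis layout (orientation × opponent channel × height × width) is an assumption made for the sketch:

```python
import numpy as np

# stand-in saliency stacks over 4 orientations x 8 opponent channels
# on a tiny 2x2 image; real values would come from steps 4-5
rng = np.random.default_rng(0)
dreg = rng.random((4, 8, 2, 2))    # surface dissimilarity, formula (6)
dbdry = rng.random((4, 8, 2, 2))   # edge dissimilarity, formula (7)
dloc = rng.random((2, 2))          # position saliency, formula (8)

ds = dreg * dloc                   # formula (9): global surface saliency
dd = dbdry * dloc                  #              global edge saliency
es = ds.sum(axis=(0, 1))           # step 7: sum over o and c
ed = dd.sum(axis=(0, 1))
e = np.sqrt(es ** 2 + ed ** 2)     # formula (11): energy integration
```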
Step 9: apply the same extraction of steps 4–8 to all pixels of the color image i(x, y), and smooth the per-pixel saliencies extracted by formula (11) with a Gaussian blur, thereby obtaining the final saliency of the color image i(x, y).
In the experimental tests, the Gaussian blur is built from two one-dimensional normal probability density functions. The blur convolves the two functions with the color image i(x, y) along the x direction and then the y direction; the convolution vector lengths are 0.04 times the x and y side lengths of the image respectively, the standard deviation σ3 is 0.02 times the x and y side lengths respectively, and the mean μ = 0.
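The separable blur of step 9 can be sketched as two 1-D convolutions with the quoted size fractions (0.04 for kernel length, 0.02 for σ3); the minimum kernel length of 3 is a safeguard added for tiny images.

```python
import numpy as np

def gauss_kernel(n, sigma):
    """1-D Gaussian kernel of length n, normalised to sum to 1 (mean 0)."""
    x = np.arange(n) - (n - 1) / 2.0
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def separable_blur(img, frac_len=0.04, frac_sigma=0.02):
    """Final smoothing of step 9: a 1-D Gaussian convolved along x, then y.
    Kernel length is 0.04x and sigma3 0.02x the image side, per the text."""
    h, w = img.shape
    kx = gauss_kernel(max(3, int(round(frac_len * w))), max(1e-6, frac_sigma * w))
    ky = gauss_kernel(max(3, int(round(frac_len * h))), max(1e-6, frac_sigma * h))
    tmp = np.apply_along_axis(np.convolve, 1, img, kx, mode='same')
    return np.apply_along_axis(np.convolve, 0, tmp, ky, mode='same')

out = separable_blur(np.ones((50, 50)))
```

Because each kernel sums to 1, interior pixels of a constant image are unchanged; only the borders shrink under the 'same' padding.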
Fig. 2a, Fig. 2b, Fig. 2c, and Fig. 2d show, for a given color input image, the color surface saliency and color edge saliency obtained by the saliency extraction method of the invention, and the visual effect of the final color saliency after integrated optimization. Fig. 3a, Fig. 3b, and Fig. 3c quantitatively compare the method with several other image saliency extraction methods, using the false-alarm-rate/detection-rate curve as the metric. In these figures, "ours" denotes the saliency extraction method of the invention; "wctr*" the method based on boundary connectivity; "it" the method based on feature-integration theory; "ca" the context-aware method; "sr" the method based on the log spectrum; "aim" the method based on self-information; "sun" the top-down method based on natural scene statistics; and "covsal" the method based on region covariance. All eight methods were tested on the MIT, Toronto, and Koostra image sets. The quantitative curves of Fig. 3a, Fig. 3b, and Fig. 3c show that on all three image sets the false-alarm-rate/detection-rate curve of the method of the invention lies nearest the upper-left corner: its detection rate is higher and, at equal detection rates, its failure probability and hence false-alarm rate are lower. Table 1a, Table 1b, and Table 1c quantitatively compare the method with the other image saliency extraction methods using the area under the false-alarm-rate/detection-rate curve, a statistic of that curve whose value is better the closer it is to 1. The tables show that on all three image sets the area under the curve obtained by the method of the invention exceeds that of the other saliency extraction methods.
The above is merely a preferred embodiment of the present invention; any equivalent substitution or parameter change made by a person familiar with the art, within the technical scope disclosed by the present invention and according to the technical scheme of the present invention and its inventive concept, shall be covered by the protection scope of the present invention.
Table 1a
Saliency extraction method   ours     wctr*    it       ca       sr       aim      sun      covsal
Area under curve             0.8239   0.7852   0.7673   0.7471   0.6872   0.7166   0.6534   0.7069

Table 1b
Saliency extraction method   ours     wctr*    it       ca       sr       aim      sun      covsal
Area under curve             0.8391   0.6745   0.6712   0.7911   0.7132   0.7670   0.6637   0.5937

Table 1c
Saliency extraction method   ours     wctr*    it       ca       sr       aim      sun      covsal
Area under curve             0.8100   0.6490   0.6712   0.7123   0.6574   0.6390   0.5330   0.6139

Claims (1)

1. An image saliency extraction method based on color context suppression, characterized in that it is carried out according to the following steps:
Step 1: jointly encode the spatial information and color information of a color image i(x, y) to obtain the color surface feature s(x, y, o, c) of the color image i(x, y):
1.1. Construct a Gabor filter bank g composed of several Gabor filters g(x, y, o), g = {g(x, y, o)}, where x and y are the abscissa and ordinate of the Gabor filter g(x, y, o); o is the spatial orientation of the Gabor filter g(x, y, o), and the orientation o consists of pairwise opposite orientations o+ and o−;
1.2. Jointly encode the spatial information and color information of each Gabor filter g(x, y, o) to obtain a Gabor opponent color filter f(x, y, o, c), and construct the Gabor opponent color filter bank f composed of the opponent color filters f(x, y, o, c), f = {f(x, y, o, c)}, where c is the opponent color channel of the opponent color filter f(x, y, o, c), and the opponent color channel c consists of pairwise complementary channels c+ and c−;
1.2.1. Set the negative values of each Gabor filter g(x, y, o) to 0 to obtain the activation subunit g+(x, y, o), containing only the activating component of g(x, y, o); set the positive values of each Gabor filter g(x, y, o) to 0 to obtain the suppression subunit g−(x, y, o), containing only the suppressing component;
1.2.2. Decompose the color image i(x, y) into the three color channels red r, green g and blue b, and form four groups of opponent color channels c, each group composed of pairwise complementary channels c+ and c−. The four groups of opponent color channels are:
the red-green opponent channels, i.e. the red-activating/green-suppressing channel r+/g− and the green-activating/red-suppressing channel g+/r−;
the blue-yellow opponent channels, i.e. the yellow-activating/blue-suppressing channel y+/b− and the blue-activating/yellow-suppressing channel b+/y−;
the red-cyan opponent channels, i.e. the red-activating/cyan-suppressing channel r+/c− and the cyan-activating/red-suppressing channel c+/r−;
the black-white opponent channels, i.e. the white-activating/black-suppressing channel wh+/bl− and the black-activating/white-suppressing channel bl+/wh−;
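The eight single-opponent channels above can be represented as per-channel weight triples (w_r, w_g, w_b) for formula (1). The claim does not give the numeric weights, so the values below are assumptions following common opponent-colour conventions (yellow as (R+G)/2, cyan as (G+B)/2, white as (R+G+B)/3); only the pairing structure is from the text:

```python
# Hypothetical (w_r, w_g, w_b) weights for the eight single-opponent channels
# of step 1.2.2; the exact magnitudes are NOT specified in the patent.
OPPONENT_WEIGHTS = {
    'r+/g-':   ( 1.0,  -1.0,   0.0),
    'g+/r-':   (-1.0,   1.0,   0.0),
    'y+/b-':   ( 0.5,   0.5,  -1.0),
    'b+/y-':   (-0.5,  -0.5,   1.0),
    'r+/c-':   ( 1.0,  -0.5,  -0.5),
    'c+/r-':   (-1.0,   0.5,   0.5),
    'wh+/bl-': ( 1/3,   1/3,   1/3),
    'bl+/wh-': (-1/3,  -1/3,  -1/3),
}

def complementary(ch):
    """Name of the complementary channel: swap the activating and
    suppressing colors, e.g. 'r+/g-' -> 'g+/r-'."""
    a, b = ch.split('/')
    return b[:-1] + '+/' + a[:-1] + '-'
```

Under this encoding each channel's weights are the negation of its complementary channel's weights, matching the pairwise-complementary structure of the claim.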
1.2.3. Construct the Gabor opponent color filter f(x, y, o, c) using formula (1):
f(x, y, o, c) = Σ_k w_k g_k(x, y, o)   (1)
In formula (1), k ∈ {r, g, b}; g_k(x, y, o) is the activation subunit g+(x, y, o) or the suppression subunit g−(x, y, o) of the Gabor filter corresponding to color channel k; w_k ∈ {w_r, w_g, w_b} is the weight set for color channel k;
1.3. Compute the color surface feature s(x, y, o, c) of the color image i(x, y) using formula (2):
s(x, y, o, c) = i(x, y) * f(x, y, o, c)   (2)
In formula (2), * is the convolution operator;
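Formulas (1) and (2) together — half-wave splitting of the Gabor kernel, per-channel weighting, and convolution with the image — can be sketched as follows. The Gabor parameters (size 21, σ = 3, wavelength 8) and the r+/g− example weights are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, sigma, wavelength, theta):
    """Real (even-symmetric) Gabor patch; parameter values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

def color_surface_feature(img, theta, weights=(1.0, -1.0, 0.0)):
    """Formula (2): s = i * f, with f built per formula (1) as a weighted
    combination of the Gabor activation part g+ (positively weighted
    channels) and suppression part g- (negatively weighted channels).
    The default weights illustrate an r+/g- opponent channel."""
    g = gabor(size=21, sigma=3.0, wavelength=8.0, theta=theta)
    g_act = np.maximum(g, 0.0)   # step 1.2.1: negative values set to 0
    g_sup = np.minimum(g, 0.0)   # step 1.2.1: positive values set to 0
    s = np.zeros(img.shape[:2])
    for k, wk in enumerate(weights):
        gk = g_act if wk >= 0 else g_sup
        s += wk * fftconvolve(img[:, :, k], gk, mode='same')
    return s
```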
Step 2: estimate the color context suppression of the color image i(x, y). Color context suppression refers to the feature value change s*(x, y, o, c) caused when the color surface feature s(x, y, o, c) of the color image i(x, y) is affected by color surface context information or color boundary context information:
2.1. Given an orientation channel o, estimate with formula (3) the feature value change s*(x, y, o, c+) of the color surface feature s(x, y, o, c+) of opponent channel c+ in the color image i(x, y) under the influence of color surface context information, taking the color surface feature s(x, y, o, c−) of the complementary opponent channel c− as the normalization factor;
In formula (3), ⌊·⌋ is the half-wave rectification operator; k is the set scale factor; σ1 is the set semi-saturation constant;
2.2. Given an opponent color channel c, estimate with formula (4) the feature value change s*(x, y, o+, c) of the color surface feature s(x, y, o+, c) of orientation channel o+ in the color image i(x, y) under the influence of color boundary context information, taking the color surface feature s(x, y, o−, c) of the opposite orientation channel o− as the normalization factor;
2.3. For the two complementary opponent channels c+ and c− of any opponent color channel c, and for the two opposite orientation channels o+ and o− of any orientation channel o, apply the same feature extraction procedure as in steps 2.1-2.2 to obtain the context-affected color surface feature s*(x, y, o, c);
Step 3: compute the color boundary feature d(x, y, o, c) of the color image i(x, y) using formula (5):
d(x, y, o, c) = s*(x, y, o, c) * g(x, y, o)   (5)
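Formula (5) is a plain 2-D convolution of the context-suppressed surface feature with the same-orientation Gabor kernel; a minimal sketch (the function name is illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def color_boundary_feature(s_star, g):
    """Formula (5): d = s* * g, convolving the context-suppressed color
    surface feature map with the Gabor kernel of the same orientation."""
    return fftconvolve(s_star, g, mode='same')
```

Convolving an impulse reproduces the kernel, which makes the operation easy to sanity-check.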
Step 4: for any pixel u(x, y) of the color image i(x, y), define the color surface saliency d_reg(u) of the context-affected color surface feature s*_u(x, y, o, c) and the color boundary saliency d_bdry(u) of the color boundary feature d_u(x, y, o, c):
4.1. Compute with formula (6) the dissimilarity between the color surface feature s*_u(x, y, o, c) of pixel u(x, y) and the color surface feature s*_v(x, y, o, c) of every other pixel v(x, y), and take it as the color surface saliency d_reg(u) of pixel u(x, y):
d_reg(u) = Σ_{v ∈ n(x, y)} |s*_u(x, y, o, c) − s*_v(x, y, o, c)|   (6)
In formula (6), n(x, y) denotes the spatial set of all pixels;
4.2. Compute with formula (7) the dissimilarity between the color boundary feature d_u(x, y, o, c) of pixel u(x, y) and the color boundary feature d_v(x, y, o, c) of every other pixel v(x, y), and take it as the color boundary saliency d_bdry(u) of pixel u(x, y):
d_bdry(u) = Σ_{v ∈ n(x, y)} |d_u(x, y, o, c) − d_v(x, y, o, c)|   (7)
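Formulas (6) and (7) both sum absolute feature differences of one pixel against all others. The patent does not prescribe an algorithm; the sketch below computes the sum exactly, but replaces the naive O(N²) double loop with an O(N log N) sort-and-prefix-sum rewrite (my addition, not the patent's):

```python
import numpy as np

def dissimilarity_saliency(feat):
    """Formula (6)/(7): for each pixel u, sum over all pixels v of
    |feat_u - feat_v|, computed via sorting and prefix sums."""
    f = feat.ravel().astype(float)
    n = f.size
    order = np.argsort(f)
    fs = f[order]                 # feature values in ascending order
    cs = np.cumsum(fs)            # prefix sums of sorted values
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(n)
    r = ranks
    total = cs[-1]
    # For a value with sorted rank r:
    #   sum over smaller values: f*r - prefix_sum(before r)
    #   sum over larger values:  suffix_sum(after r) - f*(n - r - 1)
    left = f * r - (cs[r] - fs[r])
    right = (total - cs[r]) - f * (n - r - 1)
    return (left + right).reshape(feat.shape)
```

The result matches the brute-force double loop exactly (ties contribute zero either way).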
Step 5: define with formula (8) the spatial distance between any pixel u(x, y) of the color image i(x, y) and every other pixel v(x, y), and take it as the location saliency d_loc(u) of pixel u(x, y):
d_loc(u) = Σ_{v ∈ n(x, y)} (1 / (σ2 √(2π))) exp(−|u(x, y) − v(x, y)|² / (2 σ2²))   (8)
In formula (8), σ2 is the set distance weight between pixel u(x, y) and the other pixels v(x, y) of the color image i(x, y);
Step 6: compute the global color surface saliency d_s(u) and the global color boundary saliency d_d(u) of pixel u(x, y) using formula (9):
d_s(u) = d_reg(u) × d_loc(u),  d_d(u) = d_bdry(u) × d_loc(u)   (9)
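Formulas (8) and (9) can be sketched as follows. The pairwise-distance matrix makes this O(N²) in memory, which is fine for a demonstration on small maps but would need a tiled or convolutional implementation at full image resolution (the function names are illustrative):

```python
import numpy as np

def location_saliency(h, w, sigma2):
    """Formula (8): for each pixel u, the sum over all pixels v of a
    Gaussian (normal pdf with std sigma2) of the distance |u - v|."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    g = np.exp(-d2 / (2 * sigma2**2)) / (sigma2 * np.sqrt(2 * np.pi))
    return g.sum(axis=1).reshape(h, w)

def global_saliency(d_reg, d_bdry, d_loc):
    """Formula (9): element-wise products with the location saliency."""
    return d_reg * d_loc, d_bdry * d_loc
```

Note that d_loc is largest for pixels close to many others, i.e. near the image center, so formula (9) acts as a proximity weighting of the dissimilarity saliencies.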
Step 7: sum the global color surface saliency d_s(u) over the different orientation channels o and opponent color channels c to obtain the color surface saliency e_s(u) of pixel u(x, y); sum the global color boundary saliency d_d(u) over the different orientation channels o and opponent color channels c to obtain the color boundary saliency e_d(u) of pixel u(x, y);
Step 8: extract the unified saliency e(u) of pixel u(x, y) using formula (11):
e(u) = √(e_s²(u) + e_d²(u))   (11)
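The channel summation of step 7 and the energy combination of formula (11) reduce to a few array operations; a minimal sketch, assuming the per-(o, c)-channel maps are stacked along a leading axis:

```python
import numpy as np

def integrate_saliency(surface_maps, edge_maps):
    """Step 7 + formula (11): sum the global surface / boundary saliency
    maps over all orientation and opponent-color channels (leading axis),
    then combine the two cues per pixel as a Euclidean energy norm."""
    e_s = np.sum(surface_maps, axis=0)   # sum over (o, c) channels
    e_d = np.sum(edge_maps, axis=0)
    return np.sqrt(e_s**2 + e_d**2)
```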
Step 9: apply the same extraction procedure of steps 4-8 to all pixels of the color image i(x, y), and apply Gaussian blurring to the per-pixel saliency values extracted by formula (11) to obtain the final saliency of the color image i(x, y).
CN201410523003.3A 2014-09-30 2014-09-30 Image conspicuousness extraction method based on color context inhibition Expired - Fee Related CN104268886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410523003.3A CN104268886B (en) 2014-09-30 2014-09-30 Image conspicuousness extraction method based on color context inhibition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410523003.3A CN104268886B (en) 2014-09-30 2014-09-30 Image conspicuousness extraction method based on color context inhibition

Publications (2)

Publication Number Publication Date
CN104268886A CN104268886A (en) 2015-01-07
CN104268886B true CN104268886B (en) 2017-01-18

Family

ID=52160405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410523003.3A Expired - Fee Related CN104268886B (en) 2014-09-30 2014-09-30 Image conspicuousness extraction method based on color context inhibition

Country Status (1)

Country Link
CN (1) CN104268886B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778466B (en) * 2015-04-16 2018-02-02 北京航空航天大学 A kind of image attention method for detecting area for combining a variety of context cues

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136766A (en) * 2012-12-28 2013-06-05 上海交通大学 Object significance detecting method based on color contrast and color distribution
CN103793717A (en) * 2012-11-02 2014-05-14 阿里巴巴集团控股有限公司 Methods for determining image-subject significance and training image-subject significance determining classifier and systems for same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8243068B2 (en) * 2007-05-17 2012-08-14 University Of Maryland Method, system and apparatus for determining and modifying saliency of a visual medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793717A (en) * 2012-11-02 2014-05-14 阿里巴巴集团控股有限公司 Methods for determining image-subject significance and training image-subject significance determining classifier and systems for same
CN103136766A (en) * 2012-12-28 2013-06-05 上海交通大学 Object significance detecting method based on color contrast and color distribution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A New Biologically Inspired Color Image Descriptor; Jun Zhang et al.; Computer Vision - ECCV 2012; 20121231; see pages 312-324 *
Fast visual salient object segmentation method based on color contrast; Luo Ronghua et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 20111130; vol. 39; see pages 92-119 *
Color spaces and color image edge detection in them; Ji Runsheng et al.; Chinese Journal of Scientific Instrument; 20060630; vol. 27, no. 6; see pages 724-726 *

Also Published As

Publication number Publication date
CN104268886A (en) 2015-01-07

Similar Documents

Publication Publication Date Title
CN106228547B (en) A kind of profile and border detection algorithm of view-based access control model color theory and homogeneity inhibition
CN104392463B (en) Image salient region detection method based on joint sparse multi-scale fusion
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN104616664B (en) A kind of audio identification methods detected based on sonograph conspicuousness
CN107578418A (en) A kind of indoor scene profile testing method of confluent colours and depth information
CN103186904B (en) Picture contour extraction method and device
CN106296638A (en) Significance information acquisition device and significance information acquisition method
CN107944442A (en) Based on the object test equipment and method for improving convolutional neural networks
CN103177458B (en) A kind of visible remote sensing image region of interest area detecting method based on frequency-domain analysis
CN102096824B (en) Multi-spectral image ship detection method based on selective visual attention mechanism
CN106462771A (en) 3D image significance detection method
CN104346607A (en) Face recognition method based on convolutional neural network
CN104598908A (en) Method for recognizing diseases of crop leaves
CN106599854A (en) Method for automatically recognizing face expressions based on multi-characteristic fusion
CN106447646A (en) Quality blind evaluation method for unmanned aerial vehicle image
CN104680524A (en) Disease diagnosis method for leaf vegetables
CN103530657B (en) A kind of based on weighting L2 extraction degree of depth study face identification method
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN103034865A (en) Extraction method of visual salient regions based on multiscale relative entropy
CN104966285A (en) Method for detecting saliency regions
CN103426158A (en) Method for detecting two-time-phase remote sensing image change
CN105426895A (en) Prominence detection method based on Markov model
CN111080574A (en) Fabric defect detection method based on information entropy and visual attention mechanism
CN105678249A (en) Face identification method aiming at registered face and to-be-identified face image quality difference
CN106446833A (en) Multichannel bionic vision method for recognizing complex scene image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170118

Termination date: 20210930

CF01 Termination of patent right due to non-payment of annual fee