CN104077609A - Saliency detection method based on conditional random field - Google Patents
Saliency detection method based on conditional random field
- Publication number: CN104077609A (application CN201410302009.8A)
- Authority: CN (China)
- Prior art keywords: image, rectangular area, function, random field, salient feature
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a saliency detection method based on a conditional random field (CRF). Saliency detection is treated as an image labeling problem. Salient feature maps are obtained through three different saliency computations: multi-scale contrast, center-surround histograms, and color spatial distribution. The weight of each salient feature map is learned by the CRF, and the model parameters are estimated by the maximum likelihood method to obtain the optimal solution. Finally, the CRF is used to detect salient objects in test images. The method detects salient objects more precisely; the detection results have high resolution, object borders are delineated accurately, and the computational complexity is low.
Description
Technical field
The invention belongs to the field of image processing and image object detection, and in particular relates to a saliency detection method based on a conditional random field.
Background technology
Vision is the most important human sense: more than 90% of the external information received by the human brain comes from visual perception. The main function of vision is to interpret the environment in which people live and to exchange information with it. The rapid development of information technology has caused image data of all kinds to expand day by day, and computer systems have no choice but to process and analyze these massive data. Two points should be noted: on the one hand, image data grows far faster than computer processing power improves; on the other hand, the content people care about is usually only a very small part of the whole data set. Indiscriminately processing all image data in full is therefore impractical, and also unnecessary. How to find and extract the useful, noteworthy, task-relevant part from the whole data set as quickly as possible, that is, the visual saliency detection problem, has long been an important issue facing machine vision and information processing research. Saliency region detection, represented by visual attention techniques, has become one of the important technical approaches to improving the real-time performance and accuracy of screening massive data. Saliency detection is an important topic in image processing and has a wide range of applications, such as saliency-based image segmentation, image retrieval, automatic image cropping, and image compression for display.
In essence, saliency detection is a visual attention model. Such a model is built on the human visual attention mechanism: it allocates limited processing resources so that perception becomes selective. The visual attention mechanism is used to locate the parts of an image that most easily attract attention, and their saliency is represented as a grayscale map. Research in visual psychology has found that human visual attention falls into two types: a bottom-up, data-driven mode and a top-down, task-driven mode. The bottom-up mode operates in the early stage of visual processing and is not affected by experience or the current task; humans attend to particular regions of a scene, namely the salient regions. The top-down mode operates in the late stage of visual processing: humans select targets of attention according to their own experience and the task at hand, and recognize the targets.
A conditional random field (hereinafter CRF) is a discriminative model. Simply put, a random field can be regarded as a set of random variables; once each position is randomly assigned a value according to some distribution, the whole is called a random field. Compared with the generative hidden Markov model (hereinafter HMM), a CRF can use context-dependent features and does not suffer from the label bias problem. A CRF is an undirected graphical model that, given the observation sequence to be labeled, computes the joint probability of the entire label sequence. A CRF model involves three basic tasks: 1. selection of the feature functions, which directly determines model performance; 2. parameter estimation, i.e. learning the parameters of the CRF model, the weight vector of the feature functions, from labeled training data; 3. inference, i.e. predicting the most probable label sequence under the given CRF model parameters.
The main drawbacks of early saliency detection algorithms are low resolution, poorly defined object boundaries, and high computational complexity.
Summary of the invention
Object of the invention: to address the problems of the prior art, the invention provides a saliency detection method based on a conditional random field that achieves high resolution with low computational complexity.
Summary of the invention: the invention provides a saliency detection method based on a conditional random field, comprising the following steps:
Step 10: acquire image data;
Step 20: extract salient features from the image obtained in step 10 with three different methods, obtaining the salient feature maps corresponding to the three different salient feature functions;
Step 30: train the conditional random field model on the images collected in step 10 with its machine learning method, and obtain the optimal weight of each salient feature map obtained in step 20;
Step 40: normalize the three different salient feature functions obtained in step 20 with the partition function Z;
Step 50: establish the conditional random field model and combine the three normalized salient feature functions obtained in step 40 with it;
Step 60: find the optimal solution of the combination obtained in step 50 by the maximum likelihood criterion, obtaining the optimal linear combination;
Step 70: enclose the salient pixels computed in step 60 with a minimum rectangle, where the minimum rectangle encloses at least 95% of the salient pixels, obtaining the final result.
Further, in step 20, the three feature extraction methods are: multi-scale contrast, the center-surround histogram method, and the color spatial distribution;
wherein the multi-scale contrast method comprises the following steps:
Step 211: apply Gaussian blur followed by downsampling to the image collected in step 10, obtaining images of different resolutions;
Step 212: in the six-level pyramid obtained in step 211, linearly combine the contrast of every level to obtain the salient feature map corresponding to the multi-scale contrast feature function;
the center-surround histogram method comprises the following steps:
Step 221: mark the salient object in the image obtained in step 10 with rectangles R of several different aspect ratios, and around each rectangle R construct a surrounding rectangle R_s of equal area;
Step 222: on the image obtained in step 221, compute the χ² distance between the RGB color histograms of each salient rectangle R centered at pixel x and its surrounding rectangle R_s;
Step 223: compare the χ² distances between the RGB color histograms of the rectangles R of different aspect ratios and their surrounding rectangles R_s; the rectangle R with the largest χ² distance is the optimal rectangle R*(x);
Step 224: define the center-surround histogram feature function as the Gaussian-weighted sum of the χ² distances of all optimal rectangles R*(x') centered at the neighboring pixels x' that surround pixel x in step 221;
the color spatial distribution method comprises the following steps:
Step 231: represent all colors in the image obtained in step 10 with a Gaussian mixture model;
Step 232: use the model parameters of step 231 to compute the conditional probability that each pixel is assigned to a color component;
Step 233: for each color component in step 232, compute the corresponding horizontal variance and vertical variance, obtaining the spatial variance of the component;
Step 234: define the color spatial distribution feature function as the center-weighted sum of the spatial variances obtained in step 233.
Further, in step 50, the process of combining the salient features by the conditional random field model is as follows:
Step 501: from the three normalized salient features obtained in step 40, compute the unary potential functions F_k(a_x, I), where F_k(a_x, I) denotes the k-th salient feature;
Step 502: from the three normalized salient features obtained in step 40, compute the binary potential function S(a_x, a_x', I), the pairwise feature, where the binary potential function S(a_x, a_x', I) is the value of the penalty term for labeling neighboring pixels with different values, a_x denotes the saliency label of pixel x, and a_x' denotes the saliency label of the neighboring pixel x';
Step 503: combine the optimal weights of the salient feature maps obtained in step 30 with the salient features and pairwise features obtained in steps 501 and 502 according to the formula

P(A|I) = (1/Z) exp( -( Σ_x Σ_{k=1..K} λ_k F_k(a_x, I) + Σ_{x,x'} S(a_x, a_x', I) ) )

to carry out the linear combination, where A is the set of label states of the collected image I, Z is the partition function, λ_k denotes the weight of the k-th salient feature map, K is the total number of salient feature maps, F_k(a_x, I) is the unary potential function denoting the k-th salient feature map, and S(a_x, a_x', I) is the pairwise potential function describing the interaction between the neighboring pixels x and x'.
Working principle: the invention treats saliency detection as an image labeling problem. Multi-scale contrast, the center-surround histogram, and the color spatial distribution, three different saliency computations, are used to calculate the salient feature maps. The weight of each salient feature map is learned by the CRF, and maximum likelihood estimation is adopted to estimate the model parameters and obtain the optimal solution. Finally the CRF is used to detect the test images.
Beneficial effects: compared with the prior art, the method provided by the invention detects salient objects more accurately; the detection results have high resolution, object boundaries are delineated precisely, and the computational complexity of the method is low.
Brief description of the drawings
Fig. 1 is the flowchart of the invention;
Fig. 2 is the flowchart of salient feature extraction in the invention;
Fig. 3 is the experimental comparison between the invention and the prior art.
Embodiment
The invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the saliency detection method based on a conditional random field provided by the invention comprises the following steps:
Step 10: acquire image data; the collected image is denoted I.
Step 20: extract salient features from the image obtained in step 10 with three different methods, obtaining the salient feature maps corresponding to the three different salient feature functions.
In this embodiment, multi-scale contrast, the center-surround histogram method, and the color spatial distribution are adopted for feature extraction.
1. Multi-scale contrast
In saliency detection, contrast is the most frequently used local feature. Since the size of the salient object is unknown, a multi-scale method is adopted to perform saliency detection on local regions at each scale. The main steps are as follows:
Step 211: apply Gaussian blur followed by downsampling to the image collected in step 10, obtaining images of different resolutions; the width and height of each new image are 1/2 of those of the previous one, and the series of images obtained is called a Gaussian pyramid.
Step 212: in the six-level pyramid obtained in step 211, linearly combine the contrast of every level to obtain the multi-scale contrast feature function

f_c(x, I) = Σ_{l=1..L} Σ_{x'∈N(x)} || I_l(x) - I_l(x') ||²

and its corresponding salient feature map, where I_l is the l-th image in the pyramid, I_l(x) denotes the value of pixel x on the l-th level (in the resulting label map, 0 marks pixel x as salient and 1 as non-salient), I_l(x') denotes the value of the neighbor x' of pixel x on the l-th level, L = 6 is the total number of pyramid levels, and N(x) is a 9×9 window.
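Steps 211 and 212 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: a 2×2 box average stands in for the Gaussian blur, and the per-level contrasts are summed after nearest-neighbour upsampling back to the input resolution.

```python
import numpy as np

def gaussian_pyramid(img, levels=6):
    """Step 211: repeatedly smooth and downsample; each level halves the
    width and height. A 2x2 box average stands in for the Gaussian blur."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = p.shape[0] // 2 * 2, p.shape[1] // 2 * 2
        p = p[:h, :w]
        pyr.append(0.25 * (p[0::2, 0::2] + p[1::2, 0::2] + p[0::2, 1::2] + p[1::2, 1::2]))
    return pyr

def multiscale_contrast(img, levels=6, win=4):
    """Step 212: f_c(x) = sum over levels l and over the 9x9 window N(x) of
    (I_l(x) - I_l(x'))^2, accumulated at full resolution and scaled to [0, 1]."""
    H, W = img.shape
    total = np.zeros((H, W))
    for lev in gaussian_pyramid(img, levels):
        h, w = lev.shape
        pad = np.pad(lev, win, mode='edge')
        c = np.zeros((h, w))
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                c += (lev - pad[win + dy:win + dy + h, win + dx:win + dx + w]) ** 2
        # nearest-neighbour upsample of this level's contrast to the input size
        ys = np.arange(H) * h // H
        xs = np.arange(W) * w // W
        total += c[np.ix_(ys, xs)]
    return total / (total.max() + 1e-12)
```

As expected for a contrast feature, the response concentrates at object boundaries rather than in homogeneous regions.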
2. Center-surround histogram
On the RGB color space, counting the fraction of the image's pixels belonging to each color component yields the color histogram of the image. Suppose a salient object is enclosed by a rectangle R; around it we construct a surrounding rectangle R_s of equal area. The saliency is represented by the χ² distance between the RGB color histograms of the salient rectangle R centered at pixel x and its surrounding rectangle R_s. Since salient objects vary in size and shape, rectangles of different aspect ratios are tested. The main steps are as follows:
Step 221: mark the salient object in the image obtained in step 10 with rectangles R of five different aspect ratios, and around each rectangle R construct a surrounding rectangle R_s of equal area; the five aspect ratios are {0.5, 0.75, 1.0, 1.5, 2}.
Step 222: on the image obtained in step 221, compute the χ² distance between the RGB color histograms of each salient rectangle R centered at pixel x and its surrounding rectangle R_s.
Step 223: compare the χ² distances between the RGB color histograms of the rectangles R of different aspect ratios and their equal-area surrounding rectangles R_s; the rectangle R with the largest χ² distance is the optimal rectangle R*(x).
Step 224: define the center-surround histogram feature function as the Gaussian-weighted sum of the χ² distances of all optimal rectangles R*(x') centered at the neighboring pixels x' that surround pixel x in step 221. The center-surround histogram feature function is

f_h(x, I) ∝ Σ_{x': x∈R*(x')} w_{xx'} χ²( R*(x'), R*_s(x') )

where the weight w_{xx'} = exp(-0.5 σ_{x'}^{-2} ||x - x'||²) is a Gaussian falloff, σ²_{x'} denotes the variance associated with pixel x', and R*(x') denotes the optimal rectangle centered at the neighboring pixel x'.
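The χ² histogram comparison of steps 222 and 223 can be sketched as follows. This is a minimal illustration: the surround is approximated by a frame whose half-sizes are the inner rectangle's scaled by √2 (an assumption giving roughly equal area), and the histogram bin count is arbitrary.

```python
import numpy as np

def rgb_histogram(pixels, bins=8):
    """Joint RGB histogram of an (N, 3) array of pixels in [0, 1], sum = 1."""
    q = np.clip((pixels * bins).astype(int), 0, bins - 1)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / max(h.sum(), 1.0)

def chi2_distance(h1, h2):
    """Chi-squared distance between two normalized histograms (step 222)."""
    denom = h1 + h2
    m = denom > 0
    return 0.5 * np.sum((h1[m] - h2[m]) ** 2 / denom[m])

def center_surround_distance(img, cy, cx, rh, rw):
    """Chi-squared distance between the histograms of the rectangle R with
    half-sizes (rh, rw) centered at (cy, cx) and a roughly equal-area
    surrounding frame R_s (half-sizes scaled by sqrt(2))."""
    H, W, _ = img.shape
    inner = img[cy - rh:cy + rh, cx - rw:cx + rw].reshape(-1, 3)
    sh, sw = round(rh * 2 ** 0.5), round(rw * 2 ** 0.5)
    frame = np.zeros((H, W), dtype=bool)
    frame[max(cy - sh, 0):min(cy + sh, H), max(cx - sw, 0):min(cx + sw, W)] = True
    frame[cy - rh:cy + rh, cx - rw:cx + rw] = False  # keep only the surround R_s
    outer = img[frame]
    return chi2_distance(rgb_histogram(inner), rgb_histogram(outer))
```

A rectangle placed exactly on a distinctly colored object gives a large χ² distance, while one placed in a homogeneous background gives a distance near zero, which is what step 223 maximizes over positions and aspect ratios.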
3. Color spatial distribution
The center-surround histogram describes only a local characteristic region, but the global spatial distribution of colors can also describe the salient object in an image; therefore local and global features are combined here for saliency detection.
Step 231: represent all colors in the image obtained in step 10 with a Gaussian mixture model. The simplest way to describe the spatial distribution of a color is to compute its spatial variance; a Gaussian mixture model (GMM) is adopted here. All colors in the image are represented by the GMM, whose parameters are {w_c, u_c, Σ_c}, where w_c denotes the weight of the c-th color component, u_c its mean color, Σ_c its covariance matrix, and C the total number of color components.
Step 232: use the model parameters of step 231 to compute the conditional probability that each pixel is assigned to a color component. The conditional probability is expressed as

p(c | I_x) = w_c N(I_x | u_c, Σ_c) / Σ_{c'} w_{c'} N(I_x | u_{c'}, Σ_{c'})

where N(I_x | u_c, Σ_c) is a Gaussian distribution, used to judge whether pixel x belongs to the c-th color component.
Step 233: for each color component in step 232, compute the corresponding horizontal variance V_h(c) and vertical variance V_v(c), obtaining the spatial variance of the component V(c) = V_v(c) + V_h(c).
Step 234: define the color spatial distribution feature function as the center-weighted sum of the spatial variances obtained in step 233.
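Steps 232 and 233 can be sketched as follows. This is a minimal illustration under stated assumptions: isotropic Gaussians with fixed means and a shared scalar variance stand in for the fitted GMM of step 231, and the responsibility-weighted coordinate variance implements V(c) = V_h(c) + V_v(c).

```python
import numpy as np

def color_responsibilities(pixels, means, sigma=0.1):
    """Step 232, p(c | I_x): soft assignment of each pixel to color component c.
    Isotropic Gaussians with fixed means stand in for the fitted GMM."""
    d2 = ((pixels[:, None, :] - means[None, :, :]) ** 2).sum(-1)  # (N, C)
    lik = np.exp(-0.5 * d2 / sigma ** 2)
    return lik / lik.sum(axis=1, keepdims=True)

def spatial_variance(img, means):
    """Step 233, V(c) = V_h(c) + V_v(c): responsibility-weighted variance of
    the pixel coordinates of each color component. A component spread across
    the image gets a large V(c); a compact (salient) one gets a small V(c)."""
    H, W, _ = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    r = color_responsibilities(img.reshape(-1, 3), means)         # (H*W, C)
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    V = []
    for c in range(means.shape[0]):
        w = r[:, c] / r[:, c].sum()
        mx = (w * coords[:, 0]).sum()
        my = (w * coords[:, 1]).sum()
        Vh = (w * (coords[:, 0] - mx) ** 2).sum()                 # horizontal
        Vv = (w * (coords[:, 1] - my) ** 2).sum()                 # vertical
        V.append(Vh + Vv)
    return np.array(V)
```

A compact object color yields a small spatial variance and the widespread background color a large one, which is why step 234 turns low variance into high saliency.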
Step 30: establish the conditional random field model, train it on the images collected in step 10 with the machine learning method of the conditional random field model, and obtain the optimal weight of each salient feature map obtained in step 20. The optimal weights obtained in this embodiment are λ* = {0.25, 0.48, 0.27}.
CRF definition: let G = (V, E) be an undirected graph, where the vertices (nodes) V represent random variables and the edges (arcs) E represent the dependencies between the random variables. Y = (Y_v)_{v∈V} is the set of random variables Y_v indexed by the nodes of G. The observation sequence X here represents the pixels of the collected image I. If, conditioned on X, the conditional probability distribution of the random variables Y_v obeys the Markov property of the graph, p(Y_v | X, Y_w, w ≠ v) = p(Y_v | X, Y_w, w ~ v), where w ~ v means that (w, v) is an edge of the undirected graph G, then (X, Y) forms a conditional random field. The probability distribution of this model can then be expressed as

P(A|I) = (1/Z) exp( -( Σ_x Σ_{k=1..K} λ_k F_k(a_x, I) + Σ_{x,x'} S(a_x, a_x', I) ) )

where Z is the partition function that normalizes the distribution, F_k(a_x, I) is the unary potential function denoting the k-th salient feature, λ_k denotes the weight of the k-th salient feature, and S(a_x, a_x', I) is the pairwise potential function, the value of the penalty term for labeling neighboring pixels with different values.
Step 40: normalize the three different salient feature functions obtained in step 20 with the partition function Z.
Each of the three different salient features defines a salient feature function f_k(x, I), which is then normalized so that f_k(x, I) ∈ [0, 1]. The salient feature is defined as follows:

F_k(a_x, I) = f_k(x, I) if a_x = 0, and F_k(a_x, I) = 1 - f_k(x, I) if a_x = 1

where a_x = 0 indicates that pixel x is salient and a_x = 1 indicates that pixel x is non-salient.
The pairwise potential function is expressed as follows:

S(a_x, a_x', I) = |a_x - a_x'| · exp(-β d_{x,x'})

where d_{x,x'} = ||I_x - I_x'|| is the two-norm of the color difference between the neighboring pixels, and β = (2⟨||I_x - I_x'||²⟩)^{-1} is the color contrast weight parameter, with ⟨·⟩ denoting the average over the image.
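The pairwise term above can be sketched for horizontal neighbor pairs as follows (a minimal illustration; vertical pairs are handled identically):

```python
import numpy as np

def pairwise_penalty(img):
    """Penalty magnitude exp(-beta * d_xx') for each horizontal neighbour pair,
    with d_xx' = ||I_x - I_x'|| and beta = (2 <||I_x - I_x'||^2>)^-1, the
    average taken over the image. The full pairwise potential multiplies this
    by |a_x - a_x'|, so the penalty is paid only when the two labels differ."""
    diff = img[:, 1:] - img[:, :-1]        # horizontal neighbour differences
    d = np.linalg.norm(diff, axis=-1)      # colour-difference two-norm
    beta = 1.0 / (2.0 * np.mean(d ** 2) + 1e-12)
    return np.exp(-beta * d)
```

The penalty is largest across flat regions (discouraging label changes there) and smallest across strong color edges, so the CRF prefers to place the salient/non-salient boundary on image edges.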
Step 50: combine the three normalized salient feature functions obtained in step 40 with the conditional random field model.
Step 60: find the optimal solution of the combination obtained in step 50 by the maximum likelihood criterion, obtaining the optimal linear combination.
To obtain the optimal linear combination of features, maximum likelihood estimation is performed on the N training images {I^n, A^n}, where n denotes the n-th training image; since the log-likelihood is a convex function of the weights, the optimal solution

λ* = argmax_λ Σ_n log P(A^n | I^n; λ)

exists. Once the optimal parameters are obtained, the final optimal solution is found by CRF model inference according to the maximum a posteriori (MAP) criterion:

A* = argmax_A P(A | I)
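Ignoring the pairwise term, the per-pixel MAP decision under the learned weights reduces to thresholding the weighted feature combination. A minimal sketch using the embodiment's weights λ* = {0.25, 0.48, 0.27}; the threshold value 0.5 is an assumption for illustration:

```python
import numpy as np

def combine_and_label(feature_maps, weights=(0.25, 0.48, 0.27), thresh=0.5):
    """Linear combination sum_k lambda_k f_k(x, I), then per-pixel labels.
    With the pairwise term ignored, a_x = 0 (salient) wherever the combined
    saliency exceeds the threshold, matching the convention a_x = 0 = salient."""
    combined = sum(w * f for w, f in zip(weights, feature_maps))
    labels = np.where(combined > thresh, 0, 1)  # 0 = salient, 1 = non-salient
    return combined, labels
```

With the pairwise term included, exact MAP inference on the grid requires graph-cut or message-passing methods; this sketch shows only the unary part of the decision.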
Step 70: step 60 yields a series of state values, where 0 denotes a salient pixel and 1 a non-salient pixel; enclose the salient pixels with a minimum rectangle, where the minimum rectangle encloses at least 95% of the salient pixels, obtaining the final result.
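Step 70 can be sketched as follows. Trimming the outermost coordinate quantiles on each axis is one simple way to obtain a small rectangle covering roughly 95% of the salient pixels; this per-axis construction is an assumption for illustration, not the patent's prescribed procedure, and the joint coverage can fall slightly below the per-axis figure.

```python
import numpy as np

def salient_bounding_box(labels, coverage=0.95):
    """Small axis-aligned rectangle (y0, x0, y1, x1) enclosing roughly
    `coverage` of the salient pixels (labels == 0), obtained by trimming the
    outermost (1 - coverage) / 2 quantile of coordinates on each axis."""
    ys, xs = np.nonzero(labels == 0)
    if ys.size == 0:
        return None                       # no salient pixels detected
    lo, hi = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    y0, y1 = np.quantile(ys, [lo, hi]).astype(int)
    x0, x1 = np.quantile(xs, [lo, hi]).astype(int)
    return y0, x0, y1, x1
```

The quantile trim makes the box robust to a few isolated false-positive salient pixels far from the object.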
The features obtained above by multi-scale contrast, the center-surround histogram method, and the color spatial distribution are linearly combined with the optimal weights obtained by training, according to the formula given in step 503.
The above embodiment shows that the invention has the following advantages. To measure the effectiveness of the algorithm, the evaluation metrics recall, precision, and F-measure are used for comparison. As shown in Fig. 3, the metrics obtained by using local multi-scale contrast alone, the center-surround histogram alone, and the color spatial distribution feature alone are compared with the metrics obtained by the proposed method, which combines the three feature functions linearly with the optimal weights. In Fig. 3, result 1 is obtained with multi-scale contrast alone, result 2 with the center-surround histogram alone, result 3 with the color spatial distribution alone, and result 4 with the optimally weighted linear combination of the three feature functions. Fig. 3 shows that the multi-scale contrast feature achieves very high precision but very low recall: since the interior of a salient object is homogeneous, its internal contrast is low, which leads to the low recall. The center-surround histogram achieves the best F-measure among the single features; although it mislabels some background pixels, this local feature detects complete salient objects well. The color spatial distribution has lower precision and the highest recall. In the saliency detection studied here, recall is less important than precision. The proposed method, which linearly combines the three features, achieves higher precision, recall, and F-measure. The invention combines local and global features and learns the training parameters by CRF, obtaining the optimal result from the linear combination of the three different features. Experiments show that, compared with previous methods, the proposed method detects salient objects better and more accurately.
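The evaluation metrics above can be computed as follows. The weighting β² = 0.3, which emphasizes precision over recall, is the value commonly used in the saliency-detection literature and is an assumption here, since the patent does not state it:

```python
import numpy as np

def precision_recall_f(pred, gt, beta2=0.3):
    """Precision, recall and F-measure for binary saliency masks
    (True = salient). beta2 = beta^2 weights precision over recall."""
    tp = np.logical_and(pred, gt).sum()          # true positives
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f = (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-12)
    return precision, recall, f
```

For example, a prediction covering half of the ground-truth object with no false positives has precision 1.0, recall 0.5, and F-measure 0.8125 under β² = 0.3.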
Claims (3)
1. A saliency detection method based on a conditional random field, characterized by comprising the following steps:
Step 10: acquire image data;
Step 20: extract salient features from the image obtained in step 10 with three different methods, obtaining the salient feature maps corresponding to the three different salient feature functions;
Step 30: establish the random field model, train the conditional random field model on the images collected in step 10 with its machine learning method, and obtain the optimal weight of each salient feature map obtained in step 20;
Step 40: normalize the three different salient feature functions obtained in step 20 with the partition function Z;
Step 50: establish the conditional random field model and combine the three normalized salient feature functions obtained in step 40 with it;
Step 60: find the optimal solution of the combination obtained in step 50 by the maximum likelihood criterion, obtaining the optimal linear combination;
Step 70: enclose the salient pixels computed in step 60 with a minimum rectangle, where the minimum rectangle encloses at least 95% of the salient pixels, obtaining the final result.
2. The saliency detection method based on a conditional random field according to claim 1, characterized in that in step 20 the three feature extraction methods are: multi-scale contrast, the center-surround histogram method, and the color spatial distribution;
wherein the multi-scale contrast method comprises the following steps:
Step 211: apply Gaussian blur followed by downsampling to the image collected in step 10, obtaining images of different resolutions;
Step 212: in the six-level pyramid obtained in step 211, linearly combine the contrast of every level to obtain the salient feature map corresponding to the multi-scale contrast feature function;
the center-surround histogram method comprises the following steps:
Step 221: mark the salient object in the image obtained in step 10 with rectangles R of several different aspect ratios, and around each rectangle R construct a surrounding rectangle R_s of equal area;
Step 222: on the image obtained in step 221, compute the χ² distance between the RGB color histograms of each salient rectangle R centered at pixel x and its surrounding rectangle R_s;
Step 223: compare the χ² distances between the RGB color histograms of the rectangles R of different aspect ratios and their surrounding rectangles R_s; the rectangle R with the largest χ² distance is the optimal rectangle R*(x);
Step 224: define the center-surround histogram feature function as the Gaussian-weighted sum of the χ² distances of all optimal rectangles R*(x') centered at the neighboring pixels x' that surround pixel x in step 221;
the color spatial distribution method comprises the following steps:
Step 231: represent all colors in the image obtained in step 10 with a Gaussian mixture model;
Step 232: use the model parameters of step 231 to compute the conditional probability that each pixel is assigned to a color component;
Step 233: for each color component in step 232, compute the corresponding horizontal variance and vertical variance, obtaining the spatial variance of the component;
Step 234: define the color spatial distribution feature function as the center-weighted sum of the spatial variances obtained in step 233.
3. The saliency detection method based on a conditional random field according to claim 1, characterized in that in step 50 the process of combining the salient features by the conditional random field model is as follows:
Step 501: from the three normalized salient features obtained in step 40, compute the unary potential functions F_k(a_x, I), where F_k(a_x, I) denotes the k-th salient feature;
Step 502: from the three normalized salient features obtained in step 40, compute the binary potential function S(a_x, a_x', I), the pairwise feature, where the binary potential function S(a_x, a_x', I) is the value of the penalty term for labeling neighboring pixels with different values, a_x denotes the saliency label of pixel x, and a_x' denotes the saliency label of the neighboring pixel x';
Step 503: combine the optimal weights of the salient feature maps obtained in step 30 with the salient features and pairwise features obtained in steps 501 and 502 according to the formula

P(A|I) = (1/Z) exp( -( Σ_x Σ_{k=1..K} λ_k F_k(a_x, I) + Σ_{x,x'} S(a_x, a_x', I) ) )

to carry out the linear combination, where A is the set of label states of the collected image I, Z is the partition function, λ_k denotes the weight of the k-th salient feature map, K is the total number of salient feature maps, F_k(a_x, I) is the unary potential function denoting the k-th salient feature map, and S(a_x, a_x', I) is the pairwise potential function describing the interaction between the neighboring pixels x and x'.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410302009.8A CN104077609A (en) | 2014-06-27 | 2014-06-27 | Saliency detection method based on conditional random field |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104077609A true CN104077609A (en) | 2014-10-01 |
Family
ID=51598855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410302009.8A Pending CN104077609A (en) | 2014-06-27 | 2014-06-27 | Saliency detection method based on conditional random field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104077609A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104680546A (en) * | 2015-03-12 | 2015-06-03 | 安徽大学 | Salient image target detection method |
CN105426895A (en) * | 2015-11-10 | 2016-03-23 | 河海大学 | Prominence detection method based on Markov model |
CN105931241A (en) * | 2016-04-22 | 2016-09-07 | 南京师范大学 | Automatic marking method for natural scene image |
CN106127210A (en) * | 2016-06-17 | 2016-11-16 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | A kind of significance detection method based on multiple features |
CN107240107A (en) * | 2017-06-30 | 2017-10-10 | 福州大学 | A kind of first appraisal procedure of conspicuousness detection based on image retrieval |
CN107729901A (en) * | 2016-08-10 | 2018-02-23 | 阿里巴巴集团控股有限公司 | Method for building up, device and the image processing method and system of image processing model |
CN108460417A (en) * | 2018-03-05 | 2018-08-28 | 重庆邮电大学 | The MCRF abnormal behaviour real-time identification methods that feature based merges |
CN110135435A (en) * | 2019-04-17 | 2019-08-16 | 上海师范大学 | A kind of conspicuousness detection method and device based on range learning system |
CN112016548A (en) * | 2020-10-15 | 2020-12-01 | 腾讯科技(深圳)有限公司 | Cover picture display method and related device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110229025A1 (en) * | 2010-02-10 | 2011-09-22 | Qi Zhao | Methods and systems for generating saliency models through linear and/or nonlinear integration |
CN103679718A (en) * | 2013-12-06 | 2014-03-26 | 河海大学 | Fast scenario analysis method based on saliency |
Non-Patent Citations (1)
- Tie Liu et al., "Learning to Detect a Salient Object", IEEE Transactions on Pattern Analysis and Machine Intelligence.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104077609A (en) | Saliency detection method based on conditional random field | |
CN108830285B (en) | Target detection method for reinforcement learning based on fast-RCNN | |
CN111079602B (en) | Vehicle fine granularity identification method and device based on multi-scale regional feature constraint | |
CN105608456B (en) | A kind of multi-direction Method for text detection based on full convolutional network | |
CN103996018B (en) | Face identification method based on 4DLBP | |
CN105426895A (en) | Prominence detection method based on Markov model | |
CN106408030B (en) | SAR image classification method based on middle layer semantic attribute and convolutional neural networks | |
CN107862261A (en) | Image people counting method based on multiple dimensioned convolutional neural networks | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
EP2927871A1 (en) | Method and device for calculating number of pedestrians and crowd movement directions | |
CN106778687A (en) | Method for viewing points detecting based on local evaluation and global optimization | |
CN104598908A (en) | Method for recognizing diseases of crop leaves | |
CN104182985A (en) | Remote sensing image change detection method | |
CN103839065A (en) | Extraction method for dynamic crowd gathering characteristics | |
CN104820824A (en) | Local abnormal behavior detection method based on optical flow and space-time gradient | |
CN107945200A (en) | Image binaryzation dividing method | |
CN103413303A (en) | Infrared target segmentation method based on joint obviousness | |
CN106295124A (en) | Utilize the method that multiple image detecting technique comprehensively analyzes gene polyadenylation signal figure likelihood probability amount | |
CN102867195B (en) | Method for detecting and identifying a plurality of types of objects in remote sensing image | |
CN112069985B (en) | High-resolution field image rice spike detection and counting method based on deep learning | |
CN109977968B (en) | SAR change detection method based on deep learning classification comparison | |
CN105389799B (en) | SAR image object detection method based on sketch map and low-rank decomposition | |
CN102043958A (en) | High-definition remote sensing image multi-class target detection and identification method | |
CN102542295A (en) | Method for detecting landslip from remotely sensed image by adopting image classification technology | |
CN113822352B (en) | Infrared dim target detection method based on multi-feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20141001 |