CN103065302B - Image saliency detection method based on outlier data mining - Google Patents

Publication number: CN103065302B (application CN201210569877.3A; published as CN103065302A on 2013-04-24, granted as CN103065302B on 2015-06-10)
Priority date: 2012-12-25
Inventors: 胡卫明 (Hu Weiming), 高君 (Gao Jun)
Assignee: Institute of Automation, Chinese Academy of Sciences
Legal status: Active (granted)
Abstract

The invention discloses an image saliency detection method based on outlier data mining. In a multi-scale visual feature space of the image, the method estimates a local distribution density and a weighted neighbourhood distribution density for each pixel, and from these computes a saliency score that characterises the distributional differences between image regions. A saliency-score propagation step then fuses the visual-feature-space distribution information of the image with its two-dimensional planar distribution information to revise the scores, and the saliency map of the image is output.

Description

An image saliency detection method based on outlier data mining
Technical field
The present invention relates to the field of computer vision, and in particular to an image saliency detection (Image Saliency Detection) method based on outlier data mining.
Background art
As an important research area of computer vision, image saliency detection simulates the visual attention mechanism of humans to extract the salient regions of an image, so that the image can be further understood and analysed. In general, an image saliency detection method outputs a saliency map (Saliency Map) of the image, which reflects how strongly each region of the image attracts human attention; the regions that capture most of a viewer's attention are called salient regions. Based on the saliency map, salient region detection is widely applied in many fields of computer vision, such as image retrieval, salient region segmentation, and content-aware image rescaling.
Visual attention theory holds that the human visual system can efficiently capture the most important regions of an image and concentrate most of its attention on them, while ignoring the remaining parts. According to the type of theoretical framework, image saliency detection methods can be divided into two broad classes: bottom-up frameworks and top-down frameworks. A bottom-up framework assumes that humans locate salient regions mainly through outward differences and contrasts between image regions, such as colour and contrast, and that salient regions differ strongly from the other regions of the image. Methods built on this framework therefore generally first extract basic visual features, such as colour channels (Color Channel), grey intensity (Grey Intensity), and orientation (Orientation), and then compute the saliency map from the contrast between regions. These methods are thus mainly data-driven and only need to follow hypothesised rules about the attention mechanism. Methods based on a top-down framework, by contrast, are closely tied to the subsequent image understanding and analysis task: saliency detection is driven by a target. Such methods need prior knowledge of the target regions of the image, for example that the target region is a face, and use that prior knowledge to build a descriptive model that separates the target region from the image background. Compared with top-down methods, bottom-up methods can efficiently build a saliency map from the basic visual features of the image alone, without any prior knowledge; most research work in recent years has therefore been based on bottom-up detection.
The greatest problem of the currently popular bottom-up methods is that they cannot uniformly assign saliency scores (Saliency Score) of comparable magnitude to every pixel of a salient region; generally only the edge of the salient region can be clearly distinguished from the background. On the one hand, this makes the extracted salient region incomplete, posing a further challenge to subsequent image understanding and analysis; on the other hand, the saliency scores within the same region differ too much for the scores to be used directly as weights in combination with high-level semantic features of the image, which limits the range of applications of salient regions.
Summary of the invention
(1) Technical problem to be solved
The object of the present invention is to propose an unsupervised image salient region detection method suitable for complex scenes, solving the technical problem of automatically identifying the salient regions or focus regions of an image.
(2) Technical solution
To achieve the above object, the present invention proposes an image salient region detection method based on outlier data mining, which fuses visual-feature-space distribution information with two-dimensional planar distribution information and can highlight the whole salient region more uniformly. The method comprises the following steps: (1) perform a multi-scale saliency analysis in the visual feature space of the image to obtain a saliency score for each pixel of the image at each scale; (2) fuse the saliency scores obtained at the different scales into a saliency map based on the visual feature space; (3) propagate the saliency scores in the two-dimensional plane of the image, fusing the visual-feature-space distribution information of the image with the planar distribution information to obtain the final saliency map.
Preferably, the multi-scale saliency analysis of the image comprises the following steps: (1a) represent the visual feature information of each pixel of the image by the visual-feature distribution of the pixels in a predetermined planar neighbourhood around it; (1b) for each pixel, find its k nearest neighbour pixels in the visual feature space at the current scale together with the corresponding distance values, where the parameter k is a positive integer no greater than the total number of pixels in the image; (1c) compute the local distribution density of each pixel; (1d) compute the weighted neighbourhood distribution density of each pixel; (1e) compute the saliency score of each pixel from the local distribution density and the weighted neighbourhood distribution density.
Preferably, the propagation of saliency scores in the two-dimensional image plane comprises the following steps: (3a) find the k nearest pixels of each pixel of the image in the two-dimensional plane, and compute the distance between each pixel and its neighbour pixels in the visual feature space of the original image, where the parameter k is a positive integer no greater than the total number of pixels in the image; (3b) detect the effective salient pixels of the image; (3c) propagate the saliency scores to the non-effective salient pixels of the image.
(3) Beneficial effects
The multi-scale saliency map construction method based on local density estimation proposed by the invention distinguishes salient regions from background regions by fusing a global view with a local view. It can adaptively handle the problem of salient region extraction under complex backgrounds, in particular the case where the visual features of the background and of the salient region are quite similar.
Brief description of the drawings
Fig. 1 is the flow chart of the image saliency detection method of the present invention;
Fig. 2 is a schematic diagram of the k-distance neighbourhood used by the present invention.
Detailed description of the embodiments
To make the object, technical solution and advantages of the present invention clearer, the invention is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
The hardware and programming language with which the method of the invention is carried out are not restricted; the method can be implemented in any language. In one embodiment, the method was realised on a computer with a 2.8 GHz central processing unit and 1 GB of memory, with the salient region detection program written in Matlab.
Fig. 1 is the flow chart of the image saliency detection method of the present invention. As shown in Fig. 1, the method comprises:
Step 1: perform a multi-scale saliency analysis in the visual feature space of the image, from both a global view and a local view, to obtain a saliency score for each pixel of the image at each scale;
Step 2: fuse the saliency scores obtained at the different scales to obtain a saliency map based on the visual feature space;
Step 3: propagate the saliency scores in the two-dimensional plane of the image, fusing the visual-feature-space distribution information of the image with the planar distribution information to obtain the final saliency map.
Each step of the technical solution is now described in detail.
Step 1 comprises the following sub-steps:
Step 1a: each pixel of the image is represented by the visual-feature distribution of the 8 pixels surrounding it in the two-dimensional plane; for example, pixel i is described by the visual feature information inside the image patch p_i. For ease of exposition, the patch p_i is used directly below to denote pixel i. The visual feature used in this step is the value of each colour channel in the CIE L*a*b* colour space, and the visual feature of pixel i is expressed as:

$$\vec{C}_i = \frac{\sum_{j \in p_i} \omega_j \vec{C}_j}{\sum_{j \in p_i} \omega_j}, \qquad \omega_j = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{d_{\mathrm{spatial}}(i,j)^2}{2\sigma^2}\right)$$

where the vector $\vec{C}_i$ is the colour feature vector of pixel i in the CIE L*a*b* colour space, the vector $\vec{C}_j$ is the colour feature vector of neighbour pixel j, and $\omega_j$ is the weight of neighbour pixel j; the parameter $\sigma$ is the variance used to compute each neighbour's weight, with default value 1, and $d_{\mathrm{spatial}}(i,j)$ is the Euclidean distance between pixels i and j in the two-dimensional plane.
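As an illustration, the following is a minimal sketch of Step 1a in Python with NumPy (the patent itself prescribes no implementation). The function name patch_features, the wrap-around border handling via np.roll, and the radius parameter are assumptions made for the sketch.

```python
import numpy as np

def patch_features(lab, radius=1, sigma=1.0):
    """Step 1a sketch: each pixel becomes the Gaussian-weighted mean of the
    CIE Lab vectors in its (2*radius+1)^2 patch (the 8-neighbourhood plus the
    centre for radius=1). np.roll wraps at the image border, a simplification."""
    feats = np.zeros_like(lab)
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            d2 = dy * dy + dx * dx                       # squared planar distance
            wgt = np.exp(-d2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
            feats += wgt * np.roll(lab, (dy, dx), axis=(0, 1))
            total += wgt
    return feats / total
```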
Step 1b: once each pixel is represented by the weighted visual-feature distribution of its planar neighbourhood, determine for each pixel its k-distance neighbourhood $N_k(p_i)$ in the CIE L*a*b* colour space and the neighbourhood radius k-distance$(p_i)$, and thereby its neighbour pixels in the visual feature space. Specifically:

Let D be the set of all pixels of the image. For any positive integer k no greater than |D|, the distance $d_{\mathrm{color}}(p_i, p_j)$ between pixels $p_i$ and $p_j$ is defined to be k-distance$(p_i)$ if:
1. at least k pixels $p'_j$ satisfy $d_{\mathrm{color}}(p_i, p'_j) \le d_{\mathrm{color}}(p_i, p_j)$;
2. at most k-1 pixels $p'_j$ satisfy $d_{\mathrm{color}}(p_i, p'_j) < d_{\mathrm{color}}(p_i, p_j)$.

The k-distance neighbourhood $N_k(p_i)$ of pixel $p_i$ then contains every neighbour pixel whose distance to $p_i$ is no greater than k-distance$(p_i)$. Here $d_{\mathrm{color}}(p_i, p_j)$ is the Euclidean distance between pixels i and j in colour space, computed from the colour feature vectors $\vec{C}_i$ and $\vec{C}_j$ in CIE L*a*b*. Fig. 2 is a schematic diagram of the 3-distance neighbourhoods of pixels $p_1$ and $p_2$ in the visual feature space.
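A sketch of the neighbour search of Step 1b, assuming an exact k-nearest-neighbour query is acceptable; scikit-learn's NearestNeighbors is an assumed stand-in for whatever search structure an implementation would use.

```python
from sklearn.neighbors import NearestNeighbors

def knn_color(feats, k):
    """Step 1b sketch: exact k-NN in CIE Lab colour space. idx holds the
    neighbour indices of every pixel; the last column of dist is k-distance(p_i)."""
    X = feats.reshape(-1, feats.shape[-1])           # one Lab vector per pixel
    dist, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return dist[:, 1:], idx[:, 1:]                   # drop the self match
```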
Step 1c: estimate the local distribution density of each pixel over its neighbourhood $N_k(p_i)$:

$$\mathrm{lde}(p_i) = \frac{|N_k(p_i)|}{|D|} \cdot \frac{1}{\sum_{p_j \in N_k(p_i)} \text{r-distance}(p_i, p_j)}$$

$$\text{r-distance}(p_i, p_j) = \max\{\text{k-distance}(p_j),\; d_{\mathrm{color}}(p_i, p_j)\}$$

where $\mathrm{lde}(p_i)$ is the local distribution density of pixel $p_i$, and r-distance$(p_i, p_j)$ describes the contextual relation between pixel $p_i$ and its neighbour $p_j$; $|N_k(p_i)|$ is the number of neighbour pixels of $p_i$ in the visual feature space; k-distance$(p_i)$ is the neighbourhood radius of $p_i$ in the visual feature space; and $d_{\mathrm{color}}(p_i, p_j)$ is the Euclidean distance between $p_i$ and $p_j$ in the visual feature space. The neighbourhood-size parameter k controls the smoothness of the local density estimate and affects how much of a pixel's surrounding context is extracted; it is generally set to 40% to 60% of the number of pixels in the image.
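A sketch of Step 1c using the dist and idx arrays returned by knn_color above; for simplicity it takes |N_k(p_i)| = k, ignoring ties at the neighbourhood boundary.

```python
def local_density(dist, idx):
    """Step 1c sketch: lde(p_i) = (|N_k|/|D|) / sum of r-distance(p_i, p_j)."""
    D = dist.shape[0]                                # total number of pixels
    k_dist = dist[:, -1]                             # k-distance of every pixel
    # r-distance(p_i, p_j) = max{k-distance(p_j), d_color(p_i, p_j)}
    r_dist = np.maximum(k_dist[idx], dist)
    return (dist.shape[1] / D) / r_dist.sum(axis=1)
```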
Step 1d: compute the weighted neighbourhood distribution density of each pixel as follows:

$$\mathrm{wde}(p_i) = \frac{\sum_{p_j \in N_k(p_i)} \omega_{p_j} \cdot \mathrm{lde}(p_j)}{\sum_{p_j \in N_k(p_i)} \omega_{p_j}}$$

$$\omega_{p_j} = \exp\!\left\{-\frac{1}{2}\left(\frac{\text{k-distance}(p_j)}{\min_k} - 1\right)^{2}\right\}$$

where $\mathrm{wde}(p_i)$ is the weighted neighbourhood distribution density of pixel $p_i$; $\mathrm{lde}(p_j)$ is the local distribution density of neighbour pixel $p_j$; k-distance$(p_j)$ is the neighbourhood radius of $p_j$ in the visual feature space; the variable $\omega_{p_j}$ is the weight factor of neighbour pixel $p_j$ of pixel $p_i$; and the variable $\min_k$ is the smallest neighbourhood radius among all the neighbour pixels of $p_i$.
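A sketch of Step 1d. It reads the weighted density as an average over the neighbours' local densities lde(p_j), which is what makes the ratio in Step 1e informative; this reading, and the small guard against a zero radius, are assumptions of the sketch.

```python
def weighted_density(lde, dist, idx):
    """Step 1d sketch: neighbour-weighted mean of the neighbours' densities."""
    k_dist = dist[:, -1]                                       # k-distance of every pixel
    min_k = k_dist[idx].min(axis=1, keepdims=True) + 1e-12     # guard against zero radius
    w = np.exp(-0.5 * (k_dist[idx] / min_k - 1.0) ** 2)        # weight per neighbour
    return (w * lde[idx]).sum(axis=1) / w.sum(axis=1)
```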
Step 1e: compute the saliency score of each pixel as:

$$S(i) = \frac{\mathrm{wde}(p_i)}{\mathrm{lde}(p_i)}$$

where $S(i)$ is the saliency score of pixel $p_i$, $\mathrm{wde}(p_i)$ its weighted neighbourhood distribution density, and $\mathrm{lde}(p_i)$ its local distribution density. The steps above compute the visual-feature-based saliency score at a single scale. Derived images of the original image at different scales are obtained with a classical image rescaling method, and the corresponding saliency scores are computed on each derived image.

Step 2: fuse the saliency scores obtained at the different scales as follows:

$$S(i) = \frac{1}{|L|} \cdot \sum_{l \in L} S_l(i)$$

where $S(i)$ is the fused saliency score of pixel $p_i$, L is the set of derived images at the different scales (so |L| is their number), and $S_l(i)$ is the saliency score of pixel i computed on the l-th derived image.
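A sketch combining Steps 1b-1e with the fusion of Step 2. The scale set, the per-scale k, and skimage's rescale/resize as the "classical" rescaling method are all assumptions; in particular, a k of 40-60% of the pixel count is intractable for exact k-NN at full resolution, so the sketch works at reduced scales with a small fixed k.

```python
from skimage.color import rgb2lab
from skimage.transform import rescale, resize

def single_scale_saliency(rgb, k):
    """Steps 1b-1e sketch: S(i) = wde(p_i) / lde(p_i) at one scale."""
    feats = patch_features(rgb2lab(rgb))
    dist, idx = knn_color(feats, k)
    lde = local_density(dist, idx)
    S = weighted_density(lde, dist, idx) / lde
    return S.reshape(rgb.shape[:2])

def multiscale_saliency(rgb, scales=(0.25, 0.125), k=200):
    """Step 2 sketch: average the per-scale maps, S(i) = (1/|L|) sum_l S_l(i)."""
    h, w = rgb.shape[:2]
    maps = [resize(single_scale_saliency(
                rescale(rgb, s, channel_axis=-1, anti_aliasing=True), k), (h, w))
            for s in scales]
    return sum(maps) / len(maps)
```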
After the visual-feature-space saliency scores have been obtained, they must be propagated in the two-dimensional image plane in order to fuse the visual-feature distribution information of the image with the planar distribution information.

Step 3 comprises the following sub-steps:

Step 3a: find the k-distance neighbourhood $N^k_{\mathrm{spatial}}(i)$ of each pixel in the two-dimensional plane in the same way as in Step 1b, the only difference being that the Euclidean distance $d_{\mathrm{color}}(p_i, p_j)$ in the visual feature space is replaced by the Euclidean distance $d_{\mathrm{spatial}}(i, j)$ in the plane. Generally, the planar neighbourhood size is set to 15% of the number of pixels in the image. The details are as follows:

Let D be the set of all pixels of the image. For any positive integer k no greater than |D|, the distance $d_{\mathrm{spatial}}(i, j)$ between pixels $p_i$ and $p_j$ is defined to be k-distance$_{\mathrm{spatial}}(p_i)$ if:
(1) at least k pixels $p'_j$ satisfy $d_{\mathrm{spatial}}(p_i, p'_j) \le d_{\mathrm{spatial}}(p_i, p_j)$;
(2) at most k-1 pixels $p'_j$ satisfy $d_{\mathrm{spatial}}(p_i, p'_j) < d_{\mathrm{spatial}}(p_i, p_j)$.

The k-distance neighbourhood $N^k_{\mathrm{spatial}}(i)$ of pixel $p_i$ then contains its k nearest neighbour pixels, i.e. those whose planar distance to $p_i$ is no greater than k-distance$_{\mathrm{spatial}}(p_i)$.

Step 3b: determine the effective salient pixels of the image. A pixel with a high saliency score in the visual feature space already has a high probability of being an effective salient pixel, and the propagation of saliency scores should preserve the score such a pixel has obtained. Therefore, the saliency scores of all pixels in the visual feature space are sorted from high to low, and a pixel whose score exceeds the scores of 95% of the pixels of the image is judged to be an effective salient pixel.
Step 3c: propagate the saliency scores to the non-effective salient pixels in order to revise their scores. The corresponding formula is:

$$\bar{S}(i) = \frac{\displaystyle\sum_{j \in N^k_{\mathrm{spatial}}(i) \cup \{i\}} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right) S(j)}{\displaystyle\sum_{j \in N^k_{\mathrm{spatial}}(i) \cup \{i\}} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right)} = \alpha S(i) + \sum_{j \in N^k_{\mathrm{spatial}}(i)} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right) \cdot \alpha \cdot S(j)$$

$$\alpha = \frac{1}{\displaystyle\sum_{j \in N^k_{\mathrm{spatial}}(i) \cup \{i\}} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right)}$$

where $\bar{S}(i)$ is the revised saliency score of pixel i; $S(j)$ is the saliency score of pixel j in the visual feature space; $d_{\mathrm{color}}(p_i, p_j)$ is the distance between $p_i$ and $p_j$ in the visual feature space; k-distance$(p_j)$ is the neighbourhood radius of $p_j$ in the visual feature space; and $N^k_{\mathrm{spatial}}(i)$ is the neighbourhood of pixel i in the two-dimensional plane.

The kernel function $K(\cdot)$ is defined as:

$$K(x) = \begin{cases} 1, & \text{if } \|x\| \le 1 \\[4pt] \exp\!\left(-\dfrac{(\|x\| - 1)^2}{2}\right), & \text{otherwise} \end{cases}$$

where $\|x\|$ is the norm of the variable x.
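A sketch of Steps 3a-3c together. The spatial k-NN supplies each pixel's planar neighbourhood, colour distances rescaled by the neighbour's colour-space radius pass through the kernel K, and only the non-effective pixels (below the 95th-percentile score) are overwritten; k_spatial is an assumed small stand-in for the 15%-of-pixels neighbourhood size given in the text.

```python
from sklearn.neighbors import NearestNeighbors

def K(x):
    """Kernel: 1 inside the unit ball, Gaussian fall-off outside."""
    x = np.abs(x)
    return np.where(x <= 1.0, 1.0, np.exp(-(x - 1.0) ** 2 / 2.0))

def propagate(S_map, feats, k_dist_color, k_spatial=50, pct=95):
    """Steps 3a-3c sketch: kernel-weighted averaging over planar neighbours."""
    h, w = S_map.shape
    S = S_map.ravel()
    coords = np.indices((h, w)).reshape(2, -1).T.astype(float)
    _, sp_idx = (NearestNeighbors(n_neighbors=k_spatial + 1)
                 .fit(coords).kneighbors(coords))
    sp_idx = sp_idx[:, 1:]                           # planar neighbours, self excluded
    X = feats.reshape(-1, feats.shape[-1])
    d_color = np.linalg.norm(X[:, None, :] - X[sp_idx], axis=2)
    wgt = K(d_color / k_dist_color[sp_idx])
    S_bar = (S + (wgt * S[sp_idx]).sum(1)) / (1.0 + wgt.sum(1))  # self term: K(0) = 1
    keep = S >= np.percentile(S, pct)                # effective salient pixels
    return np.where(keep, S, S_bar).reshape(h, w)
```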
After the propagation of saliency scores is complete, the saliency map of the image is constructed as:

$$\mathrm{Map} = \begin{pmatrix} S_{11} & S_{12} & \cdots & S_{1m} \\ S_{21} & S_{22} & \cdots & S_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ S_{n1} & S_{n2} & \cdots & S_{nm} \end{pmatrix}$$

where n is the number of rows of the map, m is the number of columns, and $S_{ij}$ is the saliency score of the pixel at coordinate (i, j). The saliency map can be widely used in many fields of image processing and understanding, such as salient region segmentation and image classification.
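For illustration, a hypothetical end-to-end run of the sketches above at a single reduced scale (skimage's astronaut test image is an arbitrary stand-in); the final 2-D array plays the role of Map.

```python
from skimage import data
from skimage.color import rgb2lab
from skimage.transform import rescale

rgb = rescale(data.astronaut() / 255.0, 0.125, channel_axis=-1, anti_aliasing=True)
feats = patch_features(rgb2lab(rgb))
dist, idx = knn_color(feats, k=200)                  # neighbours in colour space
lde = local_density(dist, idx)
S = (weighted_density(lde, dist, idx) / lde).reshape(rgb.shape[:2])
Map = propagate(S, feats, k_dist_color=dist[:, -1])  # one score per pixel coordinate
```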
The specific embodiments described above further explain the object, technical solution and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (9)

1. An image saliency detection method based on outlier data mining, the method comprising the following steps:
Step 1: perform a multi-scale saliency analysis in the visual feature space of the image to obtain a saliency score for each pixel of the image at each scale;
Step 2: fuse the saliency scores obtained at the different scales to obtain a saliency map based on the visual feature space;
Step 3: propagate the saliency scores in the two-dimensional plane of the image, fusing the visual-feature-space distribution information of the image with the planar distribution information to obtain the final saliency map;
wherein said step 3 comprises:
Step 3a: find the k nearest neighbour pixels of each pixel of the image in the two-dimensional plane, where k is a positive integer no greater than the total number of pixels in the image;
Step 3b: determine the effective salient pixels of the image, wherein the saliency scores of all pixels in the visual feature space are sorted from high to low, the pixels whose saliency scores exceed a predetermined value are judged to be effective salient pixels, and the other pixels are non-effective salient pixels;
Step 3c: propagate the saliency scores to the non-effective salient pixels of the image according to the formula:

$$\bar{S}(i) = \frac{\displaystyle\sum_{j \in N^k_{\mathrm{spatial}}(i) \cup \{i\}} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right) S(j)}{\displaystyle\sum_{j \in N^k_{\mathrm{spatial}}(i) \cup \{i\}} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right)} = \alpha S(i) + \sum_{j \in N^k_{\mathrm{spatial}}(i)} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right) \cdot \alpha \cdot S(j)$$

$$\alpha = \frac{1}{\displaystyle\sum_{j \in N^k_{\mathrm{spatial}}(i) \cup \{i\}} K\!\left(\frac{d_{\mathrm{color}}(p_i, p_j)}{\text{k-distance}(p_j)}\right)}$$

wherein $S(i)$ is the saliency score of pixel i in the visual feature space; $\bar{S}(i)$ is the revised saliency score of pixel i; $S(j)$ is the saliency score of pixel j in the visual feature space; $d_{\mathrm{color}}(p_i, p_j)$ is the distance between patches $p_i$ and $p_j$ in the visual feature space; k-distance$(p_j)$ is the neighbourhood radius of patch $p_j$ in the visual feature space; and $N^k_{\mathrm{spatial}}(i)$ is the neighbourhood of pixel i in the two-dimensional plane;
the kernel function $K(\cdot)$ being defined as:

$$K(x) = \begin{cases} 1, & \text{if } \|x\| \le 1 \\[4pt] \exp\!\left(-\dfrac{(\|x\| - 1)^2}{2}\right), & \text{otherwise} \end{cases}$$

wherein $\|x\|$ is the norm of the variable x.
2. The method according to claim 1, characterised in that said step 1 comprises:
Step 1a: represent the visual feature information of each pixel of the image by the visual-feature distribution of the neighbour pixels in its planar neighbourhood;
Step 1b: for each pixel, find its k nearest neighbour pixels in the visual feature space at the current scale and the distance between the pixel and each of those neighbours, where k is a positive integer no greater than the total number of pixels in the image;
Step 1c: compute the local distribution density of each pixel;
Step 1d: compute the weighted neighbourhood distribution density of each pixel;
Step 1e: compute the saliency score of each pixel from the local distribution density and the weighted neighbourhood distribution density.
3. The method according to claim 1, characterised in that said step 2 fuses the saliency scores obtained at the different scales as follows:

$$S(i) = \frac{1}{|L|} \cdot \sum_{l \in L} S_l(i)$$

wherein $S(i)$ is the saliency score of pixel i, |L| is the number of scales at which saliency scores were obtained, and $S_l(i)$ is the saliency score of pixel i obtained at the l-th of the |L| different scales.
4. The method according to claim 2, characterised in that the visual feature used in said step 1a is the value of each colour channel in the CIE L*a*b* colour space, the colour visual feature of pixel i being expressed as:

$$\vec{C}_i = \frac{\sum_{j \in p_i} \omega_j \vec{C}_j}{\sum_{j \in p_i} \omega_j}, \qquad \omega_j = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{d_{\mathrm{spatial}}(i,j)^2}{2\sigma^2}\right)$$

wherein the vector $\vec{C}_i$ is the colour feature vector of pixel i in the CIE L*a*b* colour space, the vector $\vec{C}_j$ is the colour feature vector of neighbour pixel j, $\omega_j$ is the weight of neighbour pixel j, the parameter $\sigma$ is the variance of the neighbour-pixel weights, and $d_{\mathrm{spatial}}(i, j)$ is the Euclidean distance between pixels i and j in the two-dimensional plane.
5. The method according to claim 2, characterised in that in said step 1b the k nearest neighbour pixels in the visual feature space are determined as follows:
let D be the set of all pixels of the image; for any positive integer k no greater than |D|, the distance $d_{\mathrm{color}}(p_i, p_j)$ between patches $p_i$ and $p_j$ is k-distance$(p_i)$ if:
(1) at least k patches $p'_j$ satisfy $d_{\mathrm{color}}(p_i, p'_j) \le d_{\mathrm{color}}(p_i, p_j)$;
(2) at most k-1 patches $p'_j$ satisfy $d_{\mathrm{color}}(p_i, p'_j) < d_{\mathrm{color}}(p_i, p_j)$;
the k-distance neighbourhood $N_k(p_i)$ of patch $p_i$ then containing the k nearest neighbour patches, i.e. those whose distance to $p_i$ is no greater than k-distance$(p_i)$.
6. The method according to claim 5, characterised in that in said step 1c the local distribution density of each patch is estimated over its k-distance neighbourhood $N_k(p_i)$ as follows:

$$\mathrm{lde}(p_i) = \frac{|N_k(p_i)|}{|D|} \cdot \frac{1}{\sum_{p_j \in N_k(p_i)} \text{r-distance}(p_i, p_j)}$$

$$\text{r-distance}(p_i, p_j) = \max\{\text{k-distance}(p_j),\; d_{\mathrm{color}}(p_i, p_j)\}$$

wherein $\mathrm{lde}(p_i)$ is the local distribution density of patch $p_i$ and $d_{\mathrm{color}}(p_i, p_j)$ is the Euclidean distance between patches $p_i$ and $p_j$ in the visual feature space.
7. The method according to claim 6, characterised in that in said step 1d the weighted neighbourhood distribution density of each patch is computed as follows:

$$\mathrm{wde}(p_i) = \frac{\sum_{p_j \in N_k(p_i)} \omega_{p_j} \cdot \mathrm{lde}(p_j)}{\sum_{p_j \in N_k(p_i)} \omega_{p_j}}$$

$$\omega_{p_j} = \exp\!\left\{-\frac{1}{2}\left(\frac{\text{k-distance}(p_j)}{\min_k} - 1\right)^{2}\right\}$$

wherein $\mathrm{wde}(p_i)$ is the weighted neighbourhood distribution density of patch $p_i$, the variable $\omega_{p_j}$ is the weight factor of neighbour patch $p_j$ of patch $p_i$, and the variable $\min_k$ is the smallest neighbourhood radius among all the neighbour patches of $p_i$;
and in said step 1e the saliency score of each patch is computed as:

$$S(i) = \frac{\mathrm{wde}(p_i)}{\mathrm{lde}(p_i)}.$$
8. The method according to claim 4, characterised in that in said step 3a the k-distance neighbourhood $N^k_{\mathrm{spatial}}(i)$ of each pixel in the two-dimensional plane is determined as follows:
let D be the set of all pixels of the image; for any positive integer k no greater than |D|, the distance $d_{\mathrm{spatial}}(i, j)$ between patches $p_i$ and $p_j$ is k-distance$_{\mathrm{spatial}}(p_i)$, the neighbourhood radius of patch $p_i$ in the two-dimensional plane, if:
(1) at least k patches $p'_j$ satisfy $d_{\mathrm{spatial}}(p_i, p'_j) \le d_{\mathrm{spatial}}(p_i, p_j)$;
(2) at most k-1 patches $p'_j$ satisfy $d_{\mathrm{spatial}}(p_i, p'_j) < d_{\mathrm{spatial}}(p_i, p_j)$;
the k-distance neighbourhood $N^k_{\mathrm{spatial}}(i)$ of patch $p_i$ then containing the k nearest neighbour patches, i.e. those whose planar distance to $p_i$ is no greater than k-distance$_{\mathrm{spatial}}(p_i)$.
9. The method according to claim 4, characterised in that in step 3c, after the propagation of saliency scores is complete, the saliency map of the image is constructed as:

$$\mathrm{Map} = \begin{pmatrix} S_{11} & S_{12} & \cdots & S_{1m} \\ S_{21} & S_{22} & \cdots & S_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ S_{n1} & S_{n2} & \cdots & S_{nm} \end{pmatrix}$$

wherein n is the number of rows of the saliency map of the image, m is the number of columns, and $S_{ij}$ is the saliency score of the pixel at coordinate (i, j).