CN101866484B - Method for computing significance degree of pixels in image - Google Patents
- Publication number: CN101866484B
- Application number: CN2010101938533A (CN201010193853A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- pixels
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method for computing the significance degree of pixels in an image. For each pixel in the image, a group of neighborhood regions over a multi-scale range is constructed; using the L*a*b* color features, the pixels in each neighborhood region that differ in feature from the given pixel are computed; and the proportion of such differing pixels in each neighborhood region is taken as the significance degree of the pixel. The method accounts for multi-scale computation around each pixel, is simple to compute, and is easy to implement. The resulting image significance distribution map has the same resolution as the original image and is more reasonable visually, thus providing robust input for saliency-based computational vision applications.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method for calculating the pixel significance degree in an image.
Background
Research in psychology and cognitive science has shown that when a person views an image, the visual system quickly focuses on one or more salient regions, which are often the regions of interest, and only then examines the rest of the image content. Extraction of salient regions in an image has wide application in the field of computational vision. For example, in Content-Based Image Retrieval (CBIR) research, if image matching uses features extracted from a salient region, the matching effect and the final retrieval hit rate are better than when features are extracted from the whole image. Other applications include adaptive image compression (e.g., JPEG2000), adaptive video browsing, object recognition, adaptive image scaling, and the like. These techniques share a common assumption: the main information content of the image is concentrated in its salient regions, and features extracted from the salient regions not only provide enough information for the application but also avoid the interference introduced by non-salient regions. Extracting salient regions in an image therefore has substantial research value.
At present, to extract a salient region in an image, the field of computational vision generally first computes a significance degree for each pixel to obtain a significance distribution map (saliency map) of the image, then applies a threshold to the saliency map to obtain a binary segmentation, whose foreground region is the salient region to be extracted. In this process, the saliency map strongly influences the quality of the final salient-region extraction. Current methods for computing pixel saliency divide mainly into physiological-model-based and computational-model-based methods, and the saliency maps they produce are generally low in resolution. Recently, Achanta et al. (reference: R. Achanta, S. Hemami, F. Estrada, et al. Frequency-tuned salient region detection. IEEE Conf. on CVPR, 2009) proposed a simple computational method; although the saliency map it produces has the same resolution as the original image, in some cases it appears somewhat unreasonable from the standpoint of human visual perception.
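The thresholding pipeline described above (saliency map, then threshold, then binary foreground mask) can be sketched as follows. This is a minimal illustration, not the patent's prescribed procedure; in particular, using the mean saliency as the threshold is our own assumption.

```python
def binarize_saliency(saliency_map):
    """Binarize a saliency map into a 0/1 foreground mask.

    saliency_map: 2-D list of floats; pixels at or above the threshold
    become foreground (1), the rest background (0).
    """
    values = [v for row in saliency_map for v in row]
    threshold = sum(values) / len(values)  # assumed: mean saliency as threshold
    return [[1 if v >= threshold else 0 for v in row] for row in saliency_map]
```

The foreground region of the returned mask corresponds to the salient region to be extracted.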
Disclosure of Invention
The invention aims to provide a method for calculating the significance of pixels in an image, such that the computed image significance distribution map not only has the same resolution as the original image but is also more reasonable in visual perception, thereby providing robust input for saliency-based computational vision applications.
A method for calculating the significance of a pixel in an image comprises the following steps of calculating for each pixel p in the image:
(1) solving a group of neighborhood regions in a multi-scale range of the pixel p; wherein
the neighborhood nearest to the pixel p is a circular region R_0(p), defined as follows:

R_0(p) = {q | 0 ≤ ‖p − q‖_2 ≤ r_0, q ∈ Λ}

where ‖·‖_2 denotes the Euclidean distance, Λ denotes all pixels in the image, and r_0 is the radius of the circular region;
the neighborhood regions outside the circular region R_0(p) are annular regions bounded by k concentric circles centered at the pixel p, defined as follows:

R_i(p) = {t | r_{i−1} ≤ ‖p − t‖_2 ≤ r_i, t ∈ Λ}, i = 1, …, k

where r_i is the radius of the outer boundary of the annular region R_i(p); the largest radius r_k is determined by the image size, W being the width of the image and H the height of the image; and the radius step between adjacent annular regions is constant, i.e., r_i = r_{i−1} + Δr;
(2) using the L*a*b* color features of the pixels in the image, respectively compute, in each neighborhood region, the pixels that differ from the pixel p;
(2.1) in the circular region R_0(p), the pixels D_0(p) that differ from the pixel p are characterized by:

D_0(p) = {q | σ_I ≤ ‖I_p − I_q‖_2, q ∈ R_0(p)}

where I_p denotes the L*a*b* color feature value of pixel p and I_q that of pixel q; the threshold σ_I is derived from σ_{L*}^2, σ_{a*}^2 and σ_{b*}^2, the variances of all pixels in the image over the three channels of the L*a*b* color space;
(2.2) in each annular region R_i(p), the pixels D_i(p) that differ from the pixel p are characterized by:

D_i(p) = {t̃ | σ_I ≤ ‖I_t̃ − M_p^{i−1}‖_2, t̃ ∈ R_i(p)}, i = 1, …, k

where I_t̃ denotes the L*a*b* color feature value of pixel t̃, M_p^{i−1} is the mean L*a*b* color feature vector of the pixels in R_{i−1}(p) that are similar to p, |·|_card denotes the cardinality of a set, and I_q′ denotes the L*a*b* color feature value of pixel q′;
(3) calculating the significance degree of the pixel p:

S(p) = (1/k) · Σ_{i=1}^{k} ( |D_i(p)|_card / |R_i(p)|_card ).
The technical effects of the invention are as follows: as can be seen from the definition of S(p), the significance degree of the pixel p is the average, over the k annular regions around p, of the proportion of pixels dissimilar to p. The definition is simple and easy to implement; S(p) can be regarded as a filter applied to the image, and the computation can be accelerated by parallel processing. The resulting image significance distribution map has the same resolution as the original image and is more reasonable visually, thus providing robust input for saliency-based computational vision applications.
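The averaging that defines S(p) can be written down directly from per-ring counts. The sketch below (function name our own) computes S(p) given the cardinalities |D_i(p)| and |R_i(p)| for the k annular regions:

```python
def s_from_counts(d_counts, r_counts):
    """S(p) = (1/k) * sum_i |D_i(p)| / |R_i(p)|, from precomputed counts."""
    k = len(d_counts)
    return sum(d / r for d, r in zip(d_counts, r_counts)) / k
```

Because each pixel's value depends only on its own neighborhoods, the filter can be evaluated for all pixels independently, which is what makes the parallel acceleration mentioned above straightforward.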
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a neighborhood region within a set of multi-scale ranges of the present invention;
fig. 3 is a comparison of the results computed by different methods on six examples, in which fig. 3(a) shows six real natural images; fig. 3(b) shows the results obtained by the method of the document (L. Itti, C. Koch, E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. on PAMI, 1998, 20(11): 1254-1259); figs. 3(c) and 3(d) show the results of the two other comparison methods; and fig. 3(e) shows the results of the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the specific process is as follows:
(1) Given a natural image I, for each pixel p in the image, first obtain a group of neighborhood regions of the pixel p within a multi-scale range. The neighborhood region nearest to the pixel p is a circular region R_0(p), as shown in FIG. 2, defined as follows:
R_0(p) = {q | 0 ≤ ‖p − q‖_2 ≤ r_0, q ∈ Λ}

where ‖·‖_2 denotes the Euclidean distance, Λ denotes all pixels in the image, and r_0 is the radius of the circular region.
In addition to the above circular region R_0(p), the remaining neighborhood regions are k annular regions bounded by a group of concentric circles centered at the pixel p, as shown in FIG. 2, defined as follows:

R_i(p) = {t | r_{i−1} ≤ ‖p − t‖_2 ≤ r_i, t ∈ Λ}, i = 1, …, k

where r_i is the radius of the outer boundary of the annular region R_i(p). In the present invention, we let r_i = r_{i−1} + Δr and require the largest radius r_k to satisfy a bound determined by the image size, where W is the width of image I and H is the height of image I. Given these relations, once the number k of annular regions and the radius r_0 are set, the value of Δr follows directly.
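With r_i = r_{i−1} + Δr, the outermost radius is r_k = r_0 + k·Δr, so fixing r_k, k and r_0 determines Δr. The exact bound the patent places on r_k did not survive extraction; the sketch below assumes, for illustration only, that r_k is half the image diagonal:

```python
import math

def radius_step(w, h, k, r0):
    """Solve r_k = r_0 + k * delta_r for delta_r.

    ASSUMPTION: r_k = sqrt(w**2 + h**2) / 2 (half the image diagonal);
    the patent's actual bound on r_k in terms of W and H is not
    recoverable from this text.
    """
    r_k = math.sqrt(w * w + h * h) / 2.0  # assumed outermost radius
    return (r_k - r0) / k
```

For a 300 x 400 image with k = 3 and r_0 = 3 this gives r_k = 250 and Δr = 247/3 under the stated assumption.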
(2) Using the L*a*b* color features of the pixels in the image, compute in each neighborhood region all pixels that differ to a certain degree from the pixel p.
(2.1) In the circular region R_0(p), the pixels D_0(p) that differ to a certain degree from the pixel p are defined as:

D_0(p) = {q | σ_I ≤ ‖I_p − I_q‖_2, q ∈ R_0(p)}

where I_p denotes the L*a*b* color feature value of pixel p, ‖·‖_2 is the Euclidean distance, and the threshold σ_I is derived from σ_{L*}^2, σ_{a*}^2 and σ_{b*}^2, the variances of all pixels of image I over the three channels of the L*a*b* color space.
(2.2) In each annular region R_i(p), the pixels D_i(p) that differ to a certain degree from the pixel p are defined as:

D_i(p) = {t̃ | σ_I ≤ ‖I_t̃ − M_p^{i−1}‖_2, t̃ ∈ R_i(p)}, i = 1, …, k

where I_t̃ denotes the L*a*b* color feature value of pixel t̃ and ‖·‖_2 is the Euclidean distance. M_p^{i−1} denotes the mean L*a*b* color feature vector of all pixels in the annular region R_{i−1}(p) that are similar to the pixel p, defined as:

M_p^{i−1} = (1 / |R_{i−1}(p)\D_{i−1}(p)|_card) · Σ_{q′ ∈ R_{i−1}(p)\D_{i−1}(p)} I_q′

In the above formula, R_{i−1}(p)\D_{i−1}(p) denotes the set of pixels in the annular region R_{i−1}(p) that are similar to the pixel p, obtained as the difference of the sets R_{i−1}(p) and D_{i−1}(p), and |·|_card denotes the cardinality of a set. As a special case, when the set D_{i−1}(p) of pixels dissimilar to p occupies the entire region R_{i−1}(p), i.e., D_{i−1}(p) = R_{i−1}(p), we set M_p^{i−1} = I_p, the L*a*b* color feature vector of the pixel p itself.
(3) Based on the above computation, the significance degree S(p) of each pixel p in the image is given by:

S(p) = (1/k) · Σ_{i=1}^{k} ( |D_i(p)|_card / |R_i(p)|_card )

where |·|_card denotes the cardinality of a set.
When applying S(p), two parameters must be set: the value of k, i.e., how many annular regions in FIG. 2 are used, and the radius r_0 of the central circular region in FIG. 2; the specific sizes depend on the actual conditions. A value of r_0 between 3 and 5 is suggested; in the experiments of the invention, k = 3 and r_0 = 3.
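Putting the pieces together, a per-pixel sketch of S(p) follows. It is a simplified illustration, not the patent's exact algorithm: each ring pixel is compared against I_p directly rather than against the running mean M_p^{i−1}, and the threshold σ_I is taken as given.

```python
import math

def saliency(p, features, r0, delta_r, k, sigma):
    """Simplified S(p): mean over k annuli of the dissimilar-pixel fraction.

    features: dict (x, y) -> L*a*b* triple for every image pixel.
    SIMPLIFICATION: ring pixels are compared to I_p itself, not to the
    running mean M_p^{i-1} used by the patent.
    """
    def dist3(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    fp = features[p]
    total = 0.0
    for i in range(1, k + 1):
        lo, hi = r0 + (i - 1) * delta_r, r0 + i * delta_r
        ring = [q for q in features
                if lo <= math.hypot(p[0] - q[0], p[1] - q[1]) <= hi]
        if ring:
            dissimilar = [q for q in ring if dist3(features[q], fp) >= sigma]
            total += len(dissimilar) / len(ring)  # |D_i(p)| / |R_i(p)|
    return total / k
```

Evaluating this function for every pixel yields the saliency map at the original image resolution.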
Computing the significance degree with S(p) for each pixel of an image yields the saliency map of the image. Fig. 3 shows a group of real natural images, the saliency maps obtained by three current classical algorithms, and the saliency maps obtained by the calculation method of the present invention. As can be seen from fig. 3, the image saliency maps obtained by the present invention (fig. 3(e)) not only have the same resolution as the original images but are also more visually reasonable.
According to an exemplary embodiment of the present invention, a computer system for implementing the present invention may include, inter alia, a Central Processing Unit (CPU), a memory, and an input/output (I/O) interface. The computer system is typically connected through an I/O interface to a display and various input devices such as a mouse and keyboard, and supporting circuitry may include such circuits as cache, power supplies, clock circuits, and a communications bus. The memory may include Random Access Memory (RAM), Read Only Memory (ROM), disk drives, tape drives, etc., or a combination thereof. The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof) that is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device.
It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures may be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Based on the inventive principles set forth herein, one of ordinary skill in the related art may contemplate these and similar implementations or configurations of the present invention.
Claims (1)
1. A method for calculating the significance of a pixel in an image comprises the following steps of calculating for each pixel p in the image:
(1) solving a group of neighborhood regions in a multi-scale range of the pixel p; wherein the neighborhood region nearest to the pixel p is a circular region R_0(p), defined as follows:

R_0(p) = {q | 0 ≤ ||p − q||_2 ≤ r_0, q ∈ Λ}

where ||·||_2 denotes the Euclidean distance, Λ denotes all pixels in the image, and r_0 is the radius of the circular region;
the neighborhood regions outside the circular region R_0(p) are annular regions bounded by k concentric circles centered at the pixel p, defined as follows:

R_i(p) = {t | r_{i−1} ≤ ||p − t||_2 ≤ r_i, t ∈ Λ}, i = 1, …, k

where r_i is the radius of the outer boundary of the annular region R_i(p); the largest radius r_k is determined by the image size, W being the width of the image and H the height of the image; and the radius step between adjacent annular regions is constant;
(2) respectively computing, in each neighborhood region, the pixels that differ from the pixel p;
(2.1) in the circular region R_0(p), the pixels D_0(p) that differ from the pixel p are characterized by:

D_0(p) = {q | σ_I ≤ ||I_p − I_q||_2, q ∈ R_0(p)}

where I_p denotes the L*a*b* color feature value of pixel p and I_q that of pixel q; the threshold σ_I is derived from σ_{L*}^2, σ_{a*}^2 and σ_{b*}^2, the variances of all pixels in the image over the three channels of the L*a*b* color space;
(2.2) in each annular region R_i(p), the pixels D_i(p) that differ from the pixel p are characterized by:

D_i(p) = {t̃ | σ_I ≤ ||I_t̃ − M_p^{i−1}||_2, t̃ ∈ R_i(p)}, i = 1, …, k
where I_t̃ denotes the L*a*b* color feature value of pixel t̃; M_p^{i−1} denotes the mean L*a*b* color feature vector of all pixels in the annular region R_{i−1}(p) similar to the pixel p; |·|_card denotes the cardinality of a set; I_q′ denotes the L*a*b* color feature value of pixel q′; and R_{i−1}(p)\D_{i−1}(p) denotes the set of pixels in the annular region R_{i−1}(p) similar to the pixel p;
(3) calculating the significance degree of the pixel p:

S(p) = (1/k) · Σ_{i=1}^{k} ( |D_i(p)|_card / |R_i(p)|_card ).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101938533A CN101866484B (en) | 2010-06-08 | 2010-06-08 | Method for computing significance degree of pixels in image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101866484A CN101866484A (en) | 2010-10-20 |
CN101866484B true CN101866484B (en) | 2012-07-04 |
Family
ID=42958198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010101938533A Expired - Fee Related CN101866484B (en) | 2010-06-08 | 2010-06-08 | Method for computing significance degree of pixels in image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101866484B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2761533A4 (en) * | 2011-09-30 | 2016-05-11 | Intel Corp | Human head detection in depth images |
CN104123720B (en) * | 2014-06-24 | 2017-07-04 | 小米科技有限责任公司 | Image method for relocating, device and terminal |
US9665925B2 (en) | 2014-06-24 | 2017-05-30 | Xiaomi Inc. | Method and terminal device for retargeting images |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1760634A2 (en) * | 2002-11-18 | 2007-03-07 | Qinetiq Limited | Measurement of mitotic activity |
CN101493890A (en) * | 2009-02-26 | 2009-07-29 | 上海交通大学 | Dynamic vision caution region extracting method based on characteristic |
CN101520894A (en) * | 2009-02-18 | 2009-09-02 | 上海大学 | Method for extracting significant object based on region significance |
Non-Patent Citations (2)
Title |
---|
Qiling Tang et al. Extraction of salient contours from cluttered scenes. Pattern Recognition, 2007. *
Zhang Fang et al. Research on a fast segmentation method for infrared ship targets guided by salient features. Infrared and Laser Engineering, 2004, 33(6). *
Also Published As
Publication number | Publication date |
---|---|
CN101866484A (en) | 2010-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220092882A1 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
US20220044006A1 (en) | Method and appratus for face recognition and computer readable storage medium | |
EP3454250B1 (en) | Facial image processing method and apparatus and storage medium | |
CN107358242B (en) | Target area color identification method and device and monitoring terminal | |
Saavedra | Sketch based image retrieval using a soft computation of the histogram of edge local orientations (s-helo) | |
KR101141643B1 (en) | Apparatus and Method for caricature function in mobile terminal using basis of detection feature-point | |
US8675966B2 (en) | System and method for saliency map generation | |
CN101061502B (en) | Magnification and pinching of two-dimensional images | |
US10134114B2 (en) | Apparatus and methods for video image post-processing for segmentation-based interpolation | |
KR101912748B1 (en) | Scalable Feature Descriptor Extraction and Matching method and system | |
WO2022127111A1 (en) | Cross-modal face recognition method, apparatus and device, and storage medium | |
WO2018082308A1 (en) | Image processing method and terminal | |
EP4404148A1 (en) | Image processing method and apparatus, and computer-readable storage medium | |
CN113096140A (en) | Instance partitioning method and device, electronic device and storage medium | |
CN112380978A (en) | Multi-face detection method, system and storage medium based on key point positioning | |
CN101866484B (en) | Method for computing significance degree of pixels in image | |
KR101791604B1 (en) | Method and apparatus for estimating position of head, computer readable storage medium thereof | |
WO2021179751A1 (en) | Image processing method and system | |
Chen et al. | Saliency modeling via outlier detection | |
Tan et al. | Local context attention for salient object segmentation | |
Saire et al. | Global and Local Features Through Gaussian Mixture Models on Image Semantic Segmentation | |
JP6717049B2 (en) | Image analysis apparatus, image analysis method and program | |
Han et al. | Automatic salient object segmentation using saliency map and color segmentation | |
KR20150094108A (en) | Method for generating saliency map based background location and medium for recording the same | |
CN110443244B (en) | Graphics processing method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20120704 Termination date: 20180608 |