CN104463870A - Image salient region detection method - Google Patents

Image salient region detection method

Info

Publication number
CN104463870A
CN104463870A
Authority
CN
China
Prior art keywords
background
pixel
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410742968.1A
Other languages
Chinese (zh)
Inventor
卿来云
苗军
帅佳玫
黄庆明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Chinese Academy of Sciences
Original Assignee
University of Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Chinese Academy of Sciences filed Critical University of Chinese Academy of Sciences
Priority to CN201410742968.1A priority Critical patent/CN104463870A/en
Publication of CN104463870A publication Critical patent/CN104463870A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image salient region detection method. The method includes the following steps: background estimation is performed on an image that has undergone superpixel segmentation; the contrast of each superpixel with the background is obtained from the color difference between each superpixel in the image and the superpixels in the background estimate; and a superpixel saliency map is obtained from the contrast of each superpixel with the background. The method is robust to background noise and similar disturbances in the image, and its computation is simple and fast.

Description

Image salient region detection method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for detecting an image salient region.
Background
Of all the human senses, at least 70% of external information is acquired through the visual system. Biological vision systems, including the human visual system, can automatically select and attend to a few "relevant" locations in a scene. For a given input image, as shown in FIG. 1(a), FIG. 1(b) shows an annotation of its salient region. When viewing this image, the human eye pays more attention to the red flower in the foreground while merely sweeping over the green leaves and other background areas. This ability of the biological vision system to focus quickly on a few prominent visual objects when facing a natural scene is called visual attention selection. It allows organisms to concentrate their limited cognitive resources on the most relevant parts of the data, and thus to process large volumes of signals quickly and efficiently and to survive in complex, changing environments.
If this mechanism could be introduced into the field of image analysis, with computing resources preferentially allocated to the salient regions that readily attract an observer's attention, the efficiency of existing image analysis methods would be greatly improved. Image salient region detection was proposed and has developed on the basis of this idea.
A salient region in an image is typically defined as a region that differs significantly from its neighborhood. One of the most common realizations of this definition is the center-surround mechanism: regions whose center differs greatly from its surroundings are salient regions. Such differences may be differences in color, orientation, texture, and so on. The best-known salient region detection model, proposed by Itti, Koch et al., performs multiscale, multi-orientation Gabor convolutions on the image to extract features such as color, brightness, and orientation, and then approximates the center-surround difference with a difference of Gaussians. In a more recent study, Yichen Wei et al. proposed a background-prior-based approach that estimates the saliency of an image block from its geodesic distance to the background surrounding the image. Background-prior-based methods work well on some natural images, as shown in FIG. 2(a) (original image) and FIG. 2(b) (saliency map). However, although the geodesic distance measure used in this method is reasonable, when processing an image whose background varies greatly or whose texture is very rich, the contrast of textured regions accumulates along the path, so the method may fail to estimate the saliency of the image accurately, as shown in FIG. 2(c) (original image) and FIG. 2(d) (saliency map).
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for detecting an image salient region, which comprises the following steps:
step 1), performing background estimation on an image that has undergone superpixel segmentation;
step 2), obtaining the contrast of each superpixel with the background from the color difference between each superpixel in the image and the superpixels in the background estimate;
step 3), obtaining a superpixel saliency map from the contrast of each superpixel with the background.
In step 1) of the method, the set of superpixels in the image that contain pixels within n pixels of the image boundary is used as the background estimate, where n is a positive integer.
In step 2) of the method, the contrast of each superpixel in the image with the background is calculated by the following steps:
step 21), obtaining the set of Lab color space distances between the superpixel and all superpixels in the background estimate according to the following formula:

$$D_i = \{\, \|c_i - c_j\|^2 \mid \forall S_j \in \hat{B} \,\}$$

where $D_i$ denotes the set of Lab color space distances between superpixel $S_i$ and each superpixel in the background estimate, $c_i$ denotes the Lab color of superpixel $S_i$, and $\hat{B}$ denotes the background estimate;
step 22), sorting the Lab color space distances in the set from smallest to largest;
step 23), taking the sum of the first k Lab color space distances in the set as the contrast of the superpixel with the background, where k is a positive integer.
In the above method, step 3) further includes:
obtaining the background connectivity of each superpixel from the geodesic distance between each superpixel in the image and the superpixels in the background estimate;
for each superpixel in the image, linearly superposing its contrast with the background and its background connectivity;
and obtaining a superpixel saliency map from the result of the linear superposition.
In the above method, for each superpixel in the image, the minimum of the geodesic distances between the superpixel and its k color-nearest neighbors in the background estimate is used as the background connectivity of that superpixel, where k is a positive integer. The background connectivity of each superpixel in the image is calculated by the following steps:
establishing an undirected weighted graph for the image, where the nodes of the graph comprise the superpixels in the image and a virtual background node B, and the edges E of the graph comprise inner edges connecting neighboring superpixels and outer edges connecting the k color-nearest neighbors of a superpixel in the background estimate with the virtual background node B;
calculating the background connectivity of each superpixel according to the following formula:

$$\mathrm{Connectivity}(S_i) = \min_{S_1 = S_i,\, S_2,\, \ldots,\, S_n = B} \sum_{j=1}^{n-1} \mathrm{weight}(S_j, S_{j+1}), \quad (S_j, S_{j+1}) \in E$$

where the weight $\mathrm{weight}(S_j, S_{j+1})$ of two adjacent superpixels $S_j$ and $S_{j+1}$ is their Lab color space distance, expressed as follows:

$$\mathrm{weight}(S_j, S_{j+1}) = \|c_j - c_{j+1}\|^2$$

where $c_j$ denotes the Lab color of superpixel $S_j$; the weight between a superpixel connected to the virtual background node B and the virtual background node B is 0.
In the above method, for each superpixel in the image, its contrast with the background and its background connectivity are linearly superposed according to the following formula:

$$\mathrm{Saliency}(S_i) = \mathrm{Contrast}(S_i) + \alpha \cdot \mathrm{Connectivity}(S_i)$$

where $\mathrm{Saliency}(S_i)$ denotes the saliency of superpixel $S_i$, $\mathrm{Contrast}(S_i)$ denotes the contrast of superpixel $S_i$ with the background, $\mathrm{Connectivity}(S_i)$ denotes the background connectivity of superpixel $S_i$, and $\alpha$ is the weight of the linear superposition, a number greater than 0 and less than 1.
The above method may further comprise:
step 4), processing the superpixel saliency map to obtain a pixel saliency map. The saliency of each pixel in the image is calculated according to the following formula, yielding the pixel saliency map:

$$\mathrm{Saliency}(I_p) = \sum_{j=1}^{N} w_{pj}\, \mathrm{Saliency}(S_j)$$

where $\mathrm{Saliency}(I_p)$ denotes the saliency of pixel $I_p$, $\mathrm{Saliency}(S_j)$ denotes the saliency of superpixel $S_j$, and the weight $w_{pj}$ is expressed as follows:

$$w_{pj} = \frac{1}{Z_i} \exp\!\left( -\frac{1}{2}\left( \alpha \|\tilde{c}_p - c_j\|^2 + \beta \|\tilde{L}_p - L_j\|^2 \right) \right)$$

where $\tilde{c}_p$ and $c_j$ denote the Lab colors of pixel $I_p$ and superpixel $S_j$ respectively, $\tilde{L}_p$ and $L_j$ denote the coordinates of pixel $I_p$ and superpixel $S_j$ in the image respectively, and $\alpha$ and $\beta$ denote weights.
The method further comprises the following steps before the step 1):
and step 0), carrying out texture blurring processing on the image.
The method further comprises the following steps:
and 5) carrying out binarization processing on the obtained saliency map.
The invention can achieve the following beneficial effects:
1. Detection accuracy. Compared with existing methods that locate salient objects using global or local contrast, the method provided by the invention makes full use of the prior on background distribution and detects the foreground using both contrast with the background and background connectivity, which greatly improves detection accuracy.
2. Robustness of the measure. Appropriately increasing k when computing the background contrast makes the measure more robust to noise in the background estimate, and background connectivity addresses the case in which some colors of the foreground object also appear in large numbers in the background.
Drawings
FIGS. 1(a) and 1(b) respectively show an example image and its saliency map;
FIGS. 2(a) and 2(b) show another example image and its saliency map, respectively;
FIGS. 2(c) and 2(d) show yet another example image and its saliency map, respectively;
FIG. 3 is a flow diagram of a method for image salient region detection in accordance with one embodiment of the present invention;
FIG. 4(a) shows an exemplary original image;
FIG. 4(b) is a texture-blurred image obtained by performing texture blurring on the original image shown in FIG. 4 (a);
FIG. 4(c) is a schematic diagram of the result of superpixel segmentation of the texture-blurred image of FIG. 4 (b);
FIG. 4(d) is a diagram illustrating the result of background estimation performed on the image of FIG. 4 (c);
FIG. 4(e) schematically shows the background contrast of each superpixel in FIG. 4(c);
FIG. 4(f) schematically shows the background connectivity of each superpixel in FIG. 4(c);
FIG. 4(g) is a superpixel saliency map resulting from linear superposition of the background contrast shown in FIG. 4(e) and the background connectivity shown in FIG. 4 (f);
FIG. 4(h) is the final saliency map resulting from smooth upsampling of the super-pixel saliency map of FIG. 4 (g);
FIG. 5(a) is a PR curve using the method of the present invention and the prior art method on an ASD database;
FIG. 5(b) is an evaluation result of adaptive segmentation on an ASD database using the method provided by the present invention and the existing method;
FIG. 6(a) is a PR plot of the method provided by the present invention on the SED2 database with the prior method;
fig. 6(b) is an evaluation result under adaptive segmentation on SED2 database using the method provided by the present invention and the existing method.
Detailed Description
The invention is described below with reference to the accompanying drawings and the detailed description. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
According to one embodiment of the invention, an image salient region detection method is provided.
In summary, the method comprises: carrying out background estimation on the image subjected to the superpixel segmentation; obtaining the background contrast of each super pixel according to the color difference between each super pixel in the image and the super pixel in the background estimation; and obtaining a super-pixel saliency map according to the background contrast of each super-pixel.
The individual steps of the method will be described in detail below in connection with fig. 3.
The first step is as follows: performing texture blurring operation on an input image
High-frequency texture variations, such as small variations in the background of an image, are not perceptually salient to the human eye but may accumulate and thereby distort the computation result. To remove them, the input image I (of height H and width W) may first be texture-blurred. For example, a structure extraction method is applied to the input image to suppress texture regions, yielding a texture-suppressed (texture-blurred) image. Smoothing the textured portions in this way not only better matches human perception but also makes the result of the subsequent superpixel segmentation more uniform and natural.
Fig. 4(a) shows an original input image, and the effect after applying the texture blurring processing by the structure extraction method is shown in fig. 4 (b).
It should be noted that this step is not necessarily required, and may be skipped for an image with no or less high-frequency texture change, or an input image that has been subjected to texture blurring processing.
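For illustration only, the sketch below uses total-variation denoising from scikit-image as a stand-in edge-preserving smoother; this is an assumption for demonstration and is not the structure extraction method referenced above (the function name texture_blur, the weight value, and the use of a recent scikit-image with the channel_axis argument are likewise assumptions).

```python
# Sketch of the optional texture-blurring step (step 0). The patent references a structure
# extraction method; total-variation denoising is used here only as an illustrative
# edge-preserving stand-in, assuming scikit-image >= 0.19 (channel_axis argument).
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_tv_chambolle

def texture_blur(rgb_image, weight=0.1):
    """Suppress fine texture while keeping large structures (hypothetical stand-in)."""
    img = img_as_float(rgb_image)
    # weight controls the smoothing strength; larger values remove more texture
    return denoise_tv_chambolle(img, weight=weight, channel_axis=-1)

# Example usage: blurred = texture_blur(io.imread("input.jpg"))
```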
The second step: superpixel segmentation of the texture-blurred image
After the texture-suppressed image is obtained in the first step, it is divided into visually homogeneous superpixels (i.e., superpixels of approximately uniform color) using an existing superpixel segmentation algorithm, in order to reduce computational complexity.
For example, for the texture-blurred image shown in fig. 4(b), the result of the superpixel segmentation thereof is shown in fig. 4 (c).
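As an illustrative sketch only, the following uses SLIC from scikit-image as one possible existing superpixel segmentation algorithm (the patent does not name a specific one) and also precomputes the per-superpixel mean Lab color and centroid that the later steps rely on; the function name and parameter values are assumptions.

```python
# Sketch of the superpixel segmentation step, plus the per-superpixel mean Lab color c_i
# and centroid L_i used by the later contrast, connectivity, and smoothing steps.
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def segment_superpixels(rgb_image, n_segments=300):
    labels = slic(rgb_image, n_segments=n_segments, compactness=10, start_label=0)
    lab = rgb2lab(rgb_image)
    n = labels.max() + 1
    colors = np.zeros((n, 3))
    centroids = np.zeros((n, 2))
    ys, xs = np.mgrid[0:labels.shape[0], 0:labels.shape[1]]
    for i in range(n):
        mask = labels == i
        colors[i] = lab[mask].mean(axis=0)                   # mean Lab color of superpixel i
        centroids[i] = [xs[mask].mean(), ys[mask].mean()]    # mean (x, y) position of superpixel i
    return labels, lab, colors, centroids
```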
The third step: background estimation by using superpixel segmentation result according to background prior
In summary, the background of an image is estimated a priori from the background distribution, resulting in a background estimate represented by a set of superpixels.
To obtain the set of superpixels of the estimated background (i.e., the background estimate), note that according to the background prior for salient images, regions near the image border are very likely to be background. Therefore, the superpixels in the image that contain pixels within n pixels of the image boundary are defined herein as the background estimate of the image, as shown in the following formula:
$$\hat{B} = \{\, S_i \mid \min\{x,\, y,\, |W - x|,\, |H - y|\} \le n,\ \exists\, I_p \in S_i \,\} \quad (1)$$

where $\hat{B}$ denotes the background estimate, $S_i$ denotes the i-th superpixel, $(x, y)$ denotes the coordinates of pixel $I_p$ in the image, and W and H denote the width and height of the input image, respectively. n is a positive integer; preferably, n = 10.
For the embodiment of fig. 4(c), the result of the background estimation is shown in fig. 4(d), and in fig. 4(d), the darkest part represents the background estimation.
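A minimal sketch of this background estimation step, assuming the label map produced above and the border width n = 10 from formula (1) (the function name estimate_background is illustrative):

```python
# Sketch of formula (1): a superpixel belongs to the background estimate if it contains
# any pixel within n pixels of the image border.
import numpy as np

def estimate_background(labels, n=10):
    H, W = labels.shape
    border = np.zeros((H, W), dtype=bool)
    border[:n, :] = border[-n:, :] = True
    border[:, :n] = border[:, -n:] = True
    bg_idx = np.unique(labels[border])   # indices of superpixels touching the border band
    return bg_idx
```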
The fourth step: calculating the background contrast of each superpixel
As is well known to those skilled in the art, the foreground of an image usually differs considerably from the background, whereas background regions differ relatively little from the background estimate obtained in the previous step. A measure of the difference between a superpixel and the image background, i.e., the background contrast of the superpixel, is designed accordingly.
In one embodiment, to calculate the background contrast of a superpixel (i.e., the contrast of the superpixel with the background), the Lab color space distance from the superpixel to each superpixel in the background estimate is first calculated according to the following formula:
$$D_i = \{\, \|c_i - c_j\|^2 \mid \forall S_j \in \hat{B} \,\} \quad (2)$$

where $D_i$ denotes the set of Lab color space distances between superpixel $S_i$ and each superpixel in the background estimate, and $c_i$ denotes the Lab color of superpixel $S_i$.
Then, the distance terms in $D_i$ are sorted from smallest to largest:

$$\tilde{D}_i = \langle d_{i1}, d_{i2}, \ldots, d_{iM} \rangle, \quad d_{i1} \le d_{i2} \le \cdots \le d_{iM} \quad (3)$$

where $\tilde{D}_i$ denotes the sorted set of Lab color space distances between superpixel $S_i$ and each superpixel in the background estimate; each $d_{im}$, $m = 1, \ldots, M$, denotes the distance term of rank m, and M is the number of superpixels contained in the background estimate.
For a superpixel belonging to the background area (background for short), a superpixel of very similar color is easily found in the background estimate, so the first k terms of $\tilde{D}_i$ are close to zero; for a superpixel belonging to a salient object (i.e., the foreground), its color differs from the background area, so the first k terms of $\tilde{D}_i$ are relatively large. Based on the color difference between each superpixel and the superpixels in the background estimate, in one embodiment the background contrast of a superpixel is calculated according to the following formula:

$$\mathrm{Contrast}(S_i) = \sum_{j=1}^{k} d_{ij} \quad (4)$$

where $\mathrm{Contrast}(S_i)$ denotes the background contrast of superpixel $S_i$, i.e., the sum of the Lab color space distances from superpixel $S_i$ to the k superpixels in the background estimate whose colors are closest to it (its k color-nearest neighbors). k is a positive integer; preferably, k = 5.
According to the above method of calculating background contrast, the contrast of each superpixel in the image with the background estimate is calculated. After the background contrast of every superpixel has been obtained, linear stretching is applied, so that a corresponding superpixel saliency map can be obtained. For example, FIG. 4(e) shows the background contrast of each superpixel of FIG. 4(d) in the form of a superpixel saliency map; the greater the background contrast value of a superpixel, the lower the luminance of that superpixel in FIG. 4(e).
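A minimal sketch of the background contrast computation (formulas (2) to (4)) and of the linear stretching mentioned above, assuming the per-superpixel Lab colors and background indices computed earlier (the helper names are illustrative):

```python
# Sketch of formulas (2)-(4): for every superpixel, take the sum of its k smallest squared
# Lab distances to the background-estimate superpixels, then linearly stretch to [0, 1]
# for display as a superpixel saliency map.
import numpy as np

def background_contrast(colors, bg_idx, k=5):
    bg_colors = colors[bg_idx]
    # squared Lab distances from every superpixel to every background superpixel
    d = ((colors[:, None, :] - bg_colors[None, :, :]) ** 2).sum(axis=2)
    d.sort(axis=1)                      # formula (3): sort distances per superpixel
    contrast = d[:, :k].sum(axis=1)     # formula (4): sum of the k smallest distances
    return contrast

def stretch(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-12)
```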
This image salient region detection method is quite effective for detecting the salient regions of most images, and appropriately increasing the value of k makes it more robust to a small amount of noise in the background estimate. However, when colors of the foreground also appear in large numbers in the background estimate, that part of the foreground may be mistakenly detected as background, which affects detection accuracy.
To solve this problem, according to an embodiment of the present invention, the image salient region detecting method further includes the steps of:
the fifth step: computing background connectivity for each superpixel
Observation of salient foreground objects in images shows that, in natural images, transitions within the background are generally smooth, whereas the transition from foreground to background involves a relatively large change; this can be understood as the closedness (enclosing property) of the object. The change can therefore be measured by the geodesic distance from a superpixel to the background estimate.
In one embodiment, the minimum of the geodesic distances between a superpixel and the k superpixels of closest color in the background estimate can be taken as the background connectivity of that superpixel. A method for calculating the superpixel background connectivity is given below.
for an input image, an undirected weighted graph G is created { V, E }. Wherein the node in G is the super-pixel set { S ] in the input imageiAdd a virtual background node B, i.e., V ═ Si} { [ B ]; there are two types of edges in G: inner edges connecting neighboring superpixels, and outer edges connecting the color k neighbor of a superpixel in the background estimation with the virtual background node B, i.e. E { (P)i,Pj)|PiAnd PjAdjacent { (P) } { (U { (P)i,B)|PiIs the color k nearest neighbor of the current superpixel in the background estimate }. Preferably, k is 5.
One super pixel SiIs defined as the geodesic distance from SiStarting from the accumulated edge weights on graph G to reach background node B along the shortest path, the superpixel SiBackground Connectivity (S) ofi) Is represented as follows:
<math> <mrow> <mi>Connectivity</mi> <mrow> <mo>(</mo> <msub> <mi>S</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> <mo>=</mo> <munder> <mi>min</mi> <mrow> <msub> <mi>S</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>S</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>S</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <msub> <mi>S</mi> <mi>n</mi> </msub> <mo>=</mo> <mi>B</mi> </mrow> </munder> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mrow> <mi>n</mi> <mo>-</mo> <mn>1</mn> </mrow> </munderover> <mi>weight</mi> <mrow> <mo>(</mo> <msub> <mi>S</mi> <mi>j</mi> </msub> <mo>,</mo> <msub> <mi>S</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>)</mo> </mrow> <mo>,</mo> <mo>&Exists;</mo> <mrow> <mo>(</mo> <msub> <mi>S</mi> <mi>j</mi> </msub> <mo>,</mo> <msub> <mi>S</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>)</mo> </mrow> <mo>&Element;</mo> <mi>E</mi> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow> </math>
wherein two adjacent super-pixels Sj、Sj+1Weight (S) ofj,Sj+1) As their La b color space distance, i.e. weight (S)j,Sj+1)=||cj-cj+1||2,cjRepresenting a super-pixel SjLa b color of (a); the weight between the superpixel connected to the virtual background node B and the background node is 0.
After the background connectivity of each superpixel is obtained, the value of the background connectivity can be linearly stretched and thus represented graphically.
For example, for the embodiment given in FIG. 4(d), FIG. 4(f) shows the background connectivity of each superpixel, where lighter colors represent greater values of connectivity.
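A sketch of this step under stated assumptions: the superpixel adjacency graph is built from the label map with squared Lab color distances as edge weights, all-pairs geodesic distances are computed with scipy's Dijkstra routine, and each superpixel's connectivity is taken as the minimum geodesic distance to its k color-nearest background superpixels, which is equivalent to routing through the zero-weight edges to the virtual background node B described above (the helper names are illustrative).

```python
# Sketch of formula (5): geodesic distances on the superpixel adjacency graph.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def superpixel_graph(labels, colors):
    pairs = np.vstack([
        np.c_[labels[:, :-1].ravel(), labels[:, 1:].ravel()],   # horizontally adjacent labels
        np.c_[labels[:-1, :].ravel(), labels[1:, :].ravel()],   # vertically adjacent labels
    ])
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    pairs = np.unique(np.sort(pairs, axis=1), axis=0)           # unique adjacent label pairs
    w = ((colors[pairs[:, 0]] - colors[pairs[:, 1]]) ** 2).sum(axis=1)  # squared Lab distance
    n = colors.shape[0]
    rows = np.r_[pairs[:, 0], pairs[:, 1]]
    cols = np.r_[pairs[:, 1], pairs[:, 0]]
    return coo_matrix((np.r_[w, w], (rows, cols)), shape=(n, n)).tocsr()

def background_connectivity(colors, graph, bg_idx, k=5):
    geo = dijkstra(graph, directed=False)                       # all-pairs geodesic distances
    conn = np.zeros(colors.shape[0])
    for i in range(colors.shape[0]):
        cd = ((colors[bg_idx] - colors[i]) ** 2).sum(axis=1)
        nearest = bg_idx[np.argsort(cd)[:k]]                    # k color-nearest background superpixels
        conn[i] = geo[i, nearest].min()
    return conn
```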
And a sixth step: obtaining a superpixel saliency map
After the background contrast and background connectivity of each superpixel have been computed, the saliency of each superpixel is obtained by linearly superposing the two measures, and the superpixel saliency map is obtained from these saliency values, as shown in the following formula:

$$\mathrm{Saliency}(S_i) = \mathrm{Contrast}(S_i) + \alpha \cdot \mathrm{Connectivity}(S_i) \quad (6)$$

where $\mathrm{Saliency}(S_i)$ denotes the saliency of superpixel $S_i$, $\mathrm{Contrast}(S_i)$ is the background contrast of superpixel $S_i$ obtained from formula (4), and $\mathrm{Connectivity}(S_i)$ is the background connectivity of superpixel $S_i$ obtained from formula (5). $\alpha$ is the weight of the linear superposition and is a number greater than 0 and less than 1; preferably, $\alpha \in [0.3, 0.6]$.
For the background contrast and background connectivity shown in fig. 4(e) and 4(f), the result of linear superposition of the two metrics is shown in fig. 4 (g).
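A minimal sketch of the fusion in formula (6); stretching both measures to a common range before combining them is an assumption made here so that the weight alpha is comparable across the two terms, and alpha = 0.5 is just one value within the preferred range:

```python
# Sketch of formula (6): combine the two (stretched) measures with weight alpha in (0, 1).
def superpixel_saliency(contrast, connectivity, alpha=0.5):
    return stretch(contrast) + alpha * stretch(connectivity)   # stretch() as defined earlier
```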
The seventh step: processing the super-pixel saliency map to obtain a final pixel saliency map
Since the saliency map obtained after the superposition still uses the superpixel as its basic unit, a more accurate saliency map defined on pixels can be obtained by post-processing the superpixel saliency map using the color and position information of the pixels in the original image, for example by a smoothing operation similar to upsampling that yields the saliency of each pixel, as shown in the following formula:
$$\mathrm{Saliency}(I_p) = \sum_{j=1}^{N} w_{pj}\, \mathrm{Saliency}(S_j) \quad (7)$$

where $\mathrm{Saliency}(I_p)$ denotes the saliency of pixel $I_p$, $\mathrm{Saliency}(S_j)$ denotes the saliency of superpixel $S_j$, and the smoothing weight $w_{pj}$ is calculated as follows:

$$w_{pj} = \frac{1}{Z_i} \exp\!\left( -\frac{1}{2}\left( \alpha \|\tilde{c}_p - c_j\|^2 + \beta \|\tilde{L}_p - L_j\|^2 \right) \right) \quad (8)$$

where $\tilde{c}_p$ and $c_j$ are the Lab color vectors of pixel $I_p$ and superpixel $S_j$ respectively, $\tilde{L}_p$ and $L_j$ are the coordinate vectors of pixel $I_p$ and superpixel $S_j$ in the image respectively, and $\alpha$ and $\beta$ represent the weights for color and position respectively; preferably, $\alpha = 1/30$ and $\beta = 1/30$.
For the super-pixel saliency map of fig. 4(g), fig. 4(h) gives the final pixel saliency map obtained after post-processing it.
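A sketch of this pixel-level smoothing (formulas (7) and (8)), assuming the Lab image, superpixel centroids, mean colors, and superpixel saliencies computed earlier; pixels are processed in chunks only to bound memory, and the function name is illustrative.

```python
# Sketch of formulas (7)-(8): each pixel's saliency is a Gaussian-weighted average of the
# superpixel saliencies, weighted by Lab color distance and spatial distance.
import numpy as np

def pixel_saliency(lab_img, centroids, sp_colors, sp_sal, alpha=1/30., beta=1/30., chunk=4096):
    H, W, _ = lab_img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    pix = lab_img.reshape(-1, 3)
    out = np.zeros(H * W)
    for s in range(0, H * W, chunk):
        sl = slice(s, s + chunk)
        dc = ((pix[sl, None, :] - sp_colors[None, :, :]) ** 2).sum(axis=2)   # color term
        dl = ((pos[sl, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)   # position term
        w = np.exp(-0.5 * (alpha * dc + beta * dl))
        out[sl] = (w * sp_sal[None, :]).sum(axis=1) / w.sum(axis=1)          # 1/Z normalization
    return out.reshape(H, W)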
In a further embodiment, the obtained saliency map may be further processed according to a specific application, for example, binarized, so as to highlight the salient region in the image.
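Where binarization is wanted, a sketch is given below; the adaptive threshold of twice the mean saliency is a common convention in this literature and is an assumption here, since the text does not specify one.

```python
# Sketch of the optional binarization step with an assumed adaptive threshold.
import numpy as np

def binarize(saliency_map):
    t = min(2.0 * saliency_map.mean(), saliency_map.max() * 0.999)  # keep threshold below the max
    return (saliency_map >= t).astype(np.uint8)
```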
Based on the fact that the human visual system is insensitive to high-frequency information in an image (e.g., small variations of the background texture), the above method uses the prior on background distribution in the image and combines background contrast with background connectivity, matching the visual characteristics of salient objects, to detect the salient regions of the image; it can effectively detect salient objects and reduce contrast accumulation.
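Putting the illustrative sketches above together, a hypothetical end-to-end run of the pipeline might look as follows; every function name comes from the sketches above, not from the patent itself.

```python
# End-to-end usage of the illustrative helpers sketched above (all names are assumptions).
from skimage import io

img = io.imread("input.jpg")
blurred = texture_blur(img)                                         # step 0: texture blurring
labels, lab, colors, centroids = segment_superpixels(blurred)       # superpixel segmentation
bg_idx = estimate_background(labels, n=10)                          # background estimate, formula (1)
contrast = background_contrast(colors, bg_idx, k=5)                 # formulas (2)-(4)
graph = superpixel_graph(labels, colors)
connectivity = background_connectivity(colors, graph, bg_idx, k=5)  # formula (5)
sp_sal = superpixel_saliency(contrast, connectivity, alpha=0.5)     # formula (6)
sal = pixel_saliency(lab, centroids, colors, sp_sal)                # formulas (7)-(8)
binary = binarize(sal)                                              # optional binarization
```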
To verify the effectiveness of the image salient region detection method provided by the invention, the inventors conducted experiments on the ASD and SED2 databases using conventional evaluation methods for the salient region detection problem. Two evaluation indexes were adopted: the precision-recall curve, and the precision, recall, and F-measure of the adaptive-threshold segmentation results of the saliency maps.
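For reference, a sketch of how precision, recall, and F-measure can be computed for a binarized saliency map against a ground-truth mask; the weighting beta2 = 0.3 is the value commonly used in this literature and is an assumption here, as the text does not state it.

```python
# Sketch of the evaluation metrics used in the experiments below.
import numpy as np

def precision_recall_f(binary_map, ground_truth, beta2=0.3):
    pred = binary_map.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f = (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-12)
    return precision, recall, f
```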
The results of the experiments and evaluations on the ASD database are as follows:
the ASD database is a subset of the MSRA database and contains 1000 test images, each corresponding to an artificial pixel-level saliency tag. On an ASD database, the salient region detection method which is provided by the invention and has the background contrast and the background connectivity fused is compared with the 11 best current salient region detection methods. The 6 methods are respectively as follows: FT method, RC method, GS method, SF method, GC method, MC method, the method provided by the invention is abbreviated as TB method. The evaluation results of the significance maps generated by the various methods on the ASD database are shown in fig. 5(a) and 5 (b). As can be seen, the detection method provided by the present invention achieves results comparable to current methods.
The results of the experiments and evaluations on the SED2 database are as follows:
the SED2 database is a database that includes two salient objects in each image. The database contains 100 images and provides corresponding, accurately manually labeled foreground pixels. The detection method provided by the present invention was compared with 6 currently better salient region detection methods on the SED2 database. The 10 methods are respectively as follows: FT method, RC method, SF method, GC method, MC method, the method provided by the invention is abbreviated as TB method. The results of the evaluation of the significance map generated by each method on the SED2 database are shown in figures 6(a) and 6 (b). As can be seen from the figure, the method provided by the invention obtains the best experimental result on the database, wherein the F-measure index after the adaptive threshold segmentation is improved by 3.6 percentage points compared with the best result in the existing method. It is to be noted and understood that various modifications and improvements can be made to the invention described in detail above without departing from the spirit and scope of the invention as claimed in the appended claims. Accordingly, the scope of the claimed subject matter is not limited by any of the specific exemplary teachings provided.

Claims (11)

1. An image salient region detection method comprises the following steps:
step 1), carrying out background estimation on the image subjected to superpixel segmentation;
step 2), obtaining the contrast ratio of each super pixel and the background according to the color difference between each super pixel in the image and the super pixel in the background estimation;
and 3) obtaining a superpixel saliency map based on the contrast of each superpixel and the background.
2. The method according to claim 1, wherein in step 1), the set of superpixels in the image that contain pixels within n pixels of the image boundary is used as the background estimate, where n is a positive integer.
3. The method according to claim 1 or 2, wherein in step 2), the contrast of each superpixel in the image with the background is calculated by the following steps:
step 21), obtaining the set of Lab color space distances between the superpixel and all superpixels in the background estimate according to the following formula:

$$D_i = \{\, \|c_i - c_j\|^2 \mid \forall S_j \in \hat{B} \,\}$$

where $D_i$ denotes the set of Lab color space distances between superpixel $S_i$ and each superpixel in the background estimate, $c_i$ denotes the Lab color of superpixel $S_i$, and $\hat{B}$ denotes the background estimate;
step 22), sorting the Lab color space distances in the set from smallest to largest;
step 23), taking the sum of the first k Lab color space distances in the set as the contrast of the superpixel with the background, where k is a positive integer.
4. The method according to claim 1 or 2, wherein step 3) further comprises:
obtaining the background connectivity of each superpixel from the geodesic distance between each superpixel in the image and the superpixels in the background estimate;
for each superpixel in the image, linearly superposing its contrast with the background and its background connectivity;
and obtaining a superpixel saliency map from the result of the linear superposition.
5. The method of claim 4, wherein for each superpixel in the image, the minimum of the geodesic distances between that superpixel and its k color-nearest neighbors in the background estimate is taken as the background connectivity of that superpixel, where k is a positive integer.
6. The method of claim 5, wherein the background connectivity of each superpixel in the image is calculated using the steps of:
establishing an undirected weighted graph for the image, where the nodes of the graph comprise the superpixels in the image and a virtual background node B, and the edges E of the graph comprise inner edges connecting neighboring superpixels and outer edges connecting the k color-nearest neighbors of a superpixel in the background estimate with the virtual background node B;
calculating the background connectivity of each superpixel according to the following formula:

$$\mathrm{Connectivity}(S_i) = \min_{S_1 = S_i,\, S_2,\, \ldots,\, S_n = B} \sum_{j=1}^{n-1} \mathrm{weight}(S_j, S_{j+1}), \quad (S_j, S_{j+1}) \in E$$

where the weight $\mathrm{weight}(S_j, S_{j+1})$ of two adjacent superpixels $S_j$ and $S_{j+1}$ is their Lab color space distance, expressed as follows:

$$\mathrm{weight}(S_j, S_{j+1}) = \|c_j - c_{j+1}\|^2$$

where $c_j$ denotes the Lab color of superpixel $S_j$; the weight between a superpixel connected to the virtual background node B and the virtual background node B is 0.
7. The method of claim 4, wherein for each superpixel in the image, its contrast with the background and its background connectivity are linearly superposed according to the following formula:

$$\mathrm{Saliency}(S_i) = \mathrm{Contrast}(S_i) + \alpha \cdot \mathrm{Connectivity}(S_i)$$

where $\mathrm{Saliency}(S_i)$ denotes the saliency of superpixel $S_i$, $\mathrm{Contrast}(S_i)$ denotes the contrast of superpixel $S_i$ with the background, $\mathrm{Connectivity}(S_i)$ denotes the background connectivity of superpixel $S_i$, and $\alpha$ is the weight of the linear superposition, a number greater than 0 and less than 1.
8. The method of claim 1 or 2, further comprising:
step 4), processing the superpixel saliency map to obtain a pixel saliency map.
9. The method of claim 8, wherein the saliency of each pixel in the image is calculated according to the following formula, yielding the pixel saliency map:

$$\mathrm{Saliency}(I_p) = \sum_{j=1}^{N} w_{pj}\, \mathrm{Saliency}(S_j)$$

where $\mathrm{Saliency}(I_p)$ denotes the saliency of pixel $I_p$, $\mathrm{Saliency}(S_j)$ denotes the saliency of superpixel $S_j$, and the weight $w_{pj}$ is expressed as follows:

$$w_{pj} = \frac{1}{Z_i} \exp\!\left( -\frac{1}{2}\left( \alpha \|\tilde{c}_p - c_j\|^2 + \beta \|\tilde{L}_p - L_j\|^2 \right) \right)$$

where $\tilde{c}_p$ and $c_j$ denote the Lab colors of pixel $I_p$ and superpixel $S_j$ respectively, $\tilde{L}_p$ and $L_j$ denote the coordinates of pixel $I_p$ and superpixel $S_j$ in the image respectively, and $\alpha$ and $\beta$ denote weights.
10. The method according to claim 1 or 2, wherein step 1) is preceded by:
and step 0), carrying out texture blurring processing on the image.
11. The method of claim 8, further comprising:
and 5) carrying out binarization processing on the obtained saliency map.
CN201410742968.1A 2014-12-05 2014-12-05 Image salient region detection method Pending CN104463870A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410742968.1A CN104463870A (en) 2014-12-05 2014-12-05 Image salient region detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410742968.1A CN104463870A (en) 2014-12-05 2014-12-05 Image salient region detection method

Publications (1)

Publication Number Publication Date
CN104463870A true CN104463870A (en) 2015-03-25

Family

ID=52909852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410742968.1A Pending CN104463870A (en) 2014-12-05 2014-12-05 Image salient region detection method

Country Status (1)

Country Link
CN (1) CN104463870A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809729A (en) * 2015-04-29 2015-07-29 山东大学 Robust automatic image salient region segmenting method
CN105654475A (en) * 2015-12-25 2016-06-08 中国人民解放军理工大学 Image saliency detection method and image saliency detection device based on distinguishable boundaries and weight contrast
CN105913427A (en) * 2016-04-12 2016-08-31 福州大学 Machine learning-based noise image saliency detecting method
CN105931241A (en) * 2016-04-22 2016-09-07 南京师范大学 Automatic marking method for natural scene image
CN106127197A (en) * 2016-04-09 2016-11-16 北京交通大学 A kind of saliency object detection method based on notable tag sorting
CN106780505A (en) * 2016-06-20 2017-05-31 大连民族大学 Super-pixel well-marked target detection algorithm based on region energy
CN106919950A (en) * 2017-01-22 2017-07-04 山东大学 Probability density weights the brain MR image segmentation of geodesic distance
CN107146258A (en) * 2017-04-26 2017-09-08 清华大学深圳研究生院 A kind of detection method for image salient region
CN107194886A (en) * 2017-05-03 2017-09-22 深圳大学 A kind of dust detection method and device for camera sensor
CN107527350A (en) * 2017-07-11 2017-12-29 浙江工业大学 A kind of solid waste object segmentation methods towards visual signature degraded image
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN108573506A (en) * 2017-03-13 2018-09-25 北京贝塔科技股份有限公司 Image processing method and system
CN109448015A (en) * 2018-10-30 2019-03-08 河北工业大学 Image based on notable figure fusion cooperates with dividing method
CN110084782A (en) * 2019-03-27 2019-08-02 西安电子科技大学 Full reference image quality appraisement method based on saliency detection
CN111091129A (en) * 2019-12-24 2020-05-01 沈阳建筑大学 Image salient region extraction method based on multi-color characteristic manifold sorting
CN112037109A (en) * 2020-07-15 2020-12-04 北京神鹰城讯科技股份有限公司 Improved image watermarking method and system based on saliency target detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136766A (en) * 2012-12-28 2013-06-05 上海交通大学 Object significance detecting method based on color contrast and color distribution
CN103208115A (en) * 2013-03-01 2013-07-17 上海交通大学 Detection method for salient regions of images based on geodesic line distance
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘明媚: "Region-based superpixel saliency detection", China Excellent Master's Theses Full-text Database (Information Science and Technology) *
王飞: "Visual saliency detection based on context and background", China Excellent Master's Theses Full-text Database (Information Science and Technology) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809729B (en) * 2015-04-29 2018-08-28 山东大学 A kind of saliency region automatic division method of robust
CN104809729A (en) * 2015-04-29 2015-07-29 山东大学 Robust automatic image salient region segmenting method
CN105654475B (en) * 2015-12-25 2018-07-06 中国人民解放军理工大学 Based on the image significance detection method and its device that can distinguish boundary and weighting contrast
CN105654475A (en) * 2015-12-25 2016-06-08 中国人民解放军理工大学 Image saliency detection method and image saliency detection device based on distinguishable boundaries and weight contrast
CN106127197A (en) * 2016-04-09 2016-11-16 北京交通大学 A kind of saliency object detection method based on notable tag sorting
CN106127197B (en) * 2016-04-09 2020-07-07 北京交通大学 Image saliency target detection method and device based on saliency label sorting
CN105913427A (en) * 2016-04-12 2016-08-31 福州大学 Machine learning-based noise image saliency detecting method
CN105931241A (en) * 2016-04-22 2016-09-07 南京师范大学 Automatic marking method for natural scene image
CN105931241B (en) * 2016-04-22 2018-08-21 南京师范大学 A kind of automatic marking method of natural scene image
CN106780505A (en) * 2016-06-20 2017-05-31 大连民族大学 Super-pixel well-marked target detection algorithm based on region energy
CN106780505B (en) * 2016-06-20 2019-08-27 大连民族大学 Super-pixel well-marked target detection method based on region energy
CN106919950A (en) * 2017-01-22 2017-07-04 山东大学 Probability density weights the brain MR image segmentation of geodesic distance
CN106919950B (en) * 2017-01-22 2019-10-25 山东大学 The brain MR image segmentation of probability density weighting geodesic distance
CN108573506A (en) * 2017-03-13 2018-09-25 北京贝塔科技股份有限公司 Image processing method and system
CN107146258A (en) * 2017-04-26 2017-09-08 清华大学深圳研究生院 A kind of detection method for image salient region
CN107194886B (en) * 2017-05-03 2020-11-10 深圳大学 Dust detection method and device for camera sensor
CN107194886A (en) * 2017-05-03 2017-09-22 深圳大学 A kind of dust detection method and device for camera sensor
CN107527350A (en) * 2017-07-11 2017-12-29 浙江工业大学 A kind of solid waste object segmentation methods towards visual signature degraded image
CN107527350B (en) * 2017-07-11 2019-11-05 浙江工业大学 A kind of solid waste object segmentation methods towards visual signature degraded image
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN107730515B (en) * 2017-10-12 2019-11-22 北京大学深圳研究生院 Increase the panoramic picture conspicuousness detection method with eye movement model based on region
WO2019071976A1 (en) * 2017-10-12 2019-04-18 北京大学深圳研究生院 Panoramic image saliency detection method based on regional growth and eye movement model
CN109448015A (en) * 2018-10-30 2019-03-08 河北工业大学 Image based on notable figure fusion cooperates with dividing method
CN110084782A (en) * 2019-03-27 2019-08-02 西安电子科技大学 Full reference image quality appraisement method based on saliency detection
CN110084782B (en) * 2019-03-27 2022-02-01 西安电子科技大学 Full-reference image quality evaluation method based on image significance detection
CN111091129A (en) * 2019-12-24 2020-05-01 沈阳建筑大学 Image salient region extraction method based on multi-color characteristic manifold sorting
CN111091129B (en) * 2019-12-24 2023-05-09 沈阳建筑大学 Image salient region extraction method based on manifold ordering of multiple color features
CN112037109A (en) * 2020-07-15 2020-12-04 北京神鹰城讯科技股份有限公司 Improved image watermarking method and system based on saliency target detection

Similar Documents

Publication Publication Date Title
CN104463870A (en) Image salient region detection method
US11443436B2 (en) Interactive image matting method, computer readable memory medium, and computer device
CN105404888B (en) The conspicuousness object detection method of color combining and depth information
CN105869173A (en) Stereoscopic vision saliency detection method
CN104966286A (en) 3D video saliency detection method
Lo et al. Joint trilateral filtering for depth map super-resolution
CN104463865A (en) Human image segmenting method
Hua et al. Extended guided filtering for depth map upsampling
CN104574328A (en) Color image enhancement method based on histogram segmentation
CN104134200A (en) Mobile scene image splicing method based on improved weighted fusion
CN104680546A (en) Image salient object detection method
CN104537355A (en) Remarkable object detecting method utilizing image boundary information and area connectivity
US20140050411A1 (en) Apparatus and method for generating image feature data
CN104680483A (en) Image noise estimating method, video image de-noising method, image noise estimating device, and video image de-noising device
CN107403414A (en) A kind of image area selecting method and system for being beneficial to fuzzy kernel estimates
CN103268482A (en) Low-complexity gesture extracting and gesture depth acquiring method
Abubakar A study of region-based and contour-based image segmentation
CN103871089B (en) Image superpixel meshing method based on fusion
Hussain et al. Dark image enhancement by locally transformed histogram
Srikakulapu et al. Depth estimation from single image using defocus and texture cues
Hamzah et al. Improvement of stereo corresponding algorithm based on sum of absolute differences and edge preserving filter
CN105469369B (en) Digital picture filtering method and system based on segmentation figure
Nasonov et al. Edge width estimation for defocus map from a single image
Yang et al. RIFO: Restoring images with fence occlusions
Hoshi et al. Accurate and robust image correspondence for structure-from-motion and its application to multi-view stereo

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325
