CN104050674A - Salient region detection method and device

Publication number
CN104050674A
Authority
CN
China
Legal status
Granted
Application number
CN201410301797.9A
Other languages
Chinese (zh)
Other versions
CN104050674B (en)
Inventor
王鹏
罗永康
黎万义
乔红
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201410301797.9A priority Critical patent/CN104050674B/en
Publication of CN104050674A publication Critical patent/CN104050674A/en
Application granted granted Critical
Publication of CN104050674B publication Critical patent/CN104050674B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a salient region detection method and device. The method comprises: pre-segmenting an input image into a plurality of regions using a segmentation method based on local homogeneity analysis; computing a local saliency value and a global saliency value for each segmented region, where the local saliency value is obtained by computing the contrast of different features over multi-scale neighborhoods, and the global saliency value is obtained by measuring the spatial distribution of the different features and the isolation between regions; adaptively adjusting the weights of the local saliency value and the global saliency value in the final saliency value according to the amount of information contained in the local and global saliency maps of the segmented regions (the maps represented by the local and global saliency values, respectively), and computing the final saliency value of each region as the weighted sum of its local and global saliency values, thereby obtaining the final saliency map; and extracting salient regions from the input image according to the final saliency map.

Description

Salient region detection method and device
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for detecting salient regions based on global and local information.
Background
Visual attention has attracted extensive research from a large number of researchers across different disciplines. Detecting and segmenting salient regions is an important task in many vision applications. Extracting salient regions provides effective information for applications such as image compression, object detection and segmentation, and object recognition and tracking.
Salient region detection methods fall into three categories: bottom-up saliency detection, top-down saliency detection, and methods combining the two. Bottom-up saliency detection is purely data-driven: it does not depend on knowledge or prior information, and obtains salient regions by extracting distinctive information from the image. Top-down saliency detection is task-driven and extracts salient regions using prior knowledge.
Bottom-up saliency detection methods mainly comprise: methods based on local information, methods based on global information, and methods combining local and global information. Salient region detection methods based on local information (L. Itti, C. Koch and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254-1259, 1998; S. J. Park, J. K. Shin and M. Lee, "Biologically inspired saliency map model for bottom-up visual attention," Proc. of the Second International Workshop on Biologically Motivated Computer Vision, pp. 418-426, 2002; R. Achanta, F. Estrada, P. Wils and S. Süsstrunk, "Salient region detection and segmentation," 2008, pp. 66-75) obtain saliency by computing local contrast values. Such methods are simple, have good real-time performance, and can be widely applied; however, they usually do not exploit global information, so they perform poorly on scenes that are globally but not locally salient. Salient region detection methods based on global information (T. Liu, J. Sun, N. Zheng, X. Tang and H. Y. Shum, "Learning to detect a salient object," Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2007; X. Hou and L. Zhang, "Saliency detection: a spectral residual approach," Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2007; C. M. Kuo, Y. H. Kuan and N. C. Yang, "Color-based image salient region detection") obtain saliency from the global distribution of image features. These methods work well on simple scenes with a single salient region. However, because local information is not fully exploited, the boundaries of the detected salient regions are not accurate.
Disclosure of Invention
Technical problem to be solved
In view of the above, the present invention aims to overcome the shortcomings of the prior art, and provides a method and an apparatus for detecting salient regions based on global and local information, which can extract the salient regions of an image quickly and efficiently.
(II) technical scheme
In order to achieve the above object, the present invention provides a method for detecting a salient region, including:
step S1: pre-segmenting an input image into a plurality of segmented regions using a segmentation method based on local homogeneity analysis;
step S2: computing a local saliency value and a global saliency value for each segmented region, wherein the local saliency value is obtained by computing the contrast of different features over multi-scale neighborhoods, and the global saliency value is obtained by measuring the spatial distribution of the different features and the isolation between regions;
step S3: adaptively adjusting the weights of the local saliency value and the global saliency value in the final saliency value according to the amount of information contained in the local saliency map and the global saliency map of the segmented regions, and computing the weighted sum of the local and global saliency values to obtain the final saliency value of each segmented region, thereby obtaining the final saliency map; the local saliency map and the global saliency map are the maps represented by the local saliency values and the global saliency values, respectively;
step S4: extracting salient regions from the input image according to the final saliency map.
The invention also provides a salient region detection device, which comprises:
a segmentation module: pre-segmenting an input image into a plurality of segmented regions using a segmentation method based on local homogeneity analysis;
a first calculation module: computing a local saliency value and a global saliency value for each segmented region, wherein the local saliency value is obtained by computing the contrast of different features over multi-scale neighborhoods, and the global saliency value is obtained by measuring the spatial distribution of the different features and the isolation between regions;
a second calculation module: adaptively adjusting the weights of the local saliency value and the global saliency value in the final saliency value according to the amount of information contained in the local saliency map and the global saliency map of the segmented regions, and computing the weighted sum of the local and global saliency values to obtain the final saliency value of each segmented region, thereby obtaining the final saliency map; the local saliency map and the global saliency map are the maps represented by the local saliency values and the global saliency values, respectively;
a detection module: extracting salient regions from the input image according to the final saliency map.
(III) Beneficial effects
Conventional saliency detection methods mainly obtain salient regions from local image information while ignoring global information, or from global information while discarding local information. Salient region detection based on local information performs poorly when saliency arises over the whole image; detection based on global information has low boundary precision for the salient regions. Exploiting the fact that local and global saliency information complement each other in the detection process, the present invention combines the local and global information of the image for salient region detection: local saliency information is obtained by computing local multi-scale neighborhood contrast, global saliency information by computing the global spatial distribution of image features and the isolation between regions, and the two are then combined by an adaptive method to obtain the salient regions of the image. The method (1) combines local and global saliency information for detection, overcoming the shortcoming of conventional methods that considering only local or only global information rarely yields good detection results; and (2) has good real-time performance, obtaining the salient regions of an image quickly and accurately.
Drawings
Fig. 1 is a flowchart of a salient region detection method according to the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention, together with detailed implementation manners and specific operation processes, but the protection scope of the present invention is not limited to the following embodiments.
Fig. 1 shows a flowchart of the salient region detection method based on global and local information according to the present invention. In the detection process, the input image is first pre-segmented into a plurality of regions; local and global saliency maps are then calculated respectively; finally, the two are combined to obtain the final salient regions of the image. The invention comprises the following steps:
the first step is as follows: for an input image, pre-dividing the input image into a plurality of regions by utilizing a segmentation method based on local homogeneity analysis;
the second step is that: respectively calculating local saliency and global saliency for each region according to the segmentation result obtained in the last step, wherein the local saliency is obtained by calculating the contrast of multi-scale neighbor, and the global saliency is obtained by measuring the global spatial distribution of different features and the isolation among the regions;
the third step: utilizing the information quantity contained in the local saliency map corresponding to the local saliency value and the global saliency map corresponding to the global saliency value to automatically adjust the weight of the local saliency value and the global saliency value in the final saliency map, and carrying out weighted summation on the local saliency value and the global saliency value to obtain a final saliency value, thereby obtaining a final saliency map of the image;
the fourth step: and extracting a salient region according to the final salient image of the image.
The first step is as follows:
for an input image, obtaining an image local consistency description map H-map by using a local homogeneity analysis method, and then dividing the input image into a plurality of regions by using a region growing method on the basis of the H-map. The local homogeneity-based analysis method generally refers to dividing regions with similar image features into the same region according to the features of an image, such as color, texture, and the like, and dividing regions with larger differences in image features into different regions. These regions are represented in the form of histograms, and over-segmentation is prevented by a region fusion method based on the similarity of the region histograms.
The second step is as follows:
and respectively calculating the local significance and the global significance according to the segmentation result obtained in the last step.
The calculation of local saliency comprises the following steps:
step 1: calculating the local contrast of each pixel of the whole input image which is not segmented:
$$\mathrm{LocContr}^{Fea}_{(x,y)} = \sum_{s}\left\| F^{Fea}(x,y) - M^{Fea}(x,y)_{s} \right\|,$$
where $\mathrm{LocContr}^{Fea}_{(x,y)}$ is the local contrast at position $(x,y)$ for the employed feature $Fea \in \{\text{color}, \text{orientation}\}$, the orientation feature being obtained by Gabor filtering of the image; $F^{Fea}(x,y)$ is the feature vector at position $(x,y)$; the scale $s$ ranges over three levels at which the pixel neighborhood size is 1/4, 1/8 and 1/16 of the size of the entire input image; and $M^{Fea}(x,y)_{s}$ is the mean of the feature vectors of the pixels contained in the neighborhood of the pixel at scale $s$, computed as:
$$M^{Fea}(x,y)_{s} = \frac{\sum_{(p,q)\in NR_{s}} F^{Fea}(p,q)}{N},$$
where $NR_{s}$ is the pixel neighborhood at scale $s$, and $N$ is the number of pixels in $NR_{s}$.
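A minimal Python/NumPy sketch of the multi-scale local contrast above; the square neighborhood shape, the boundary clipping, and the exact mapping of the 1/4, 1/8, 1/16 scales to window sizes are assumptions:

```python
import numpy as np

def local_contrast(feature_map, fractions=(0.25, 0.125, 0.0625)):
    """Multi-scale local contrast: for each pixel, sum over scales of the
    distance between its feature vector and the mean feature vector of a
    square neighborhood whose side is a fraction of the image size.
    feature_map: H x W x C array of per-pixel feature vectors."""
    H, W, C = feature_map.shape
    contrast = np.zeros((H, W))
    for frac in fractions:
        r = max(1, int(min(H, W) * frac) // 2)  # neighborhood half-size at this scale
        for y in range(H):
            for x in range(W):
                nb = feature_map[max(0, y - r):y + r + 1,
                                 max(0, x - r):x + r + 1]
                mean = nb.reshape(-1, C).mean(axis=0)  # M^Fea(x, y)_s
                contrast[y, x] += np.linalg.norm(feature_map[y, x] - mean)
    return contrast
```

The nested loops keep the sketch readable; a practical implementation would use box filters or integral images for the neighborhood means.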
Step 2: calculate a local saliency value for each segmented region:
$$\mathrm{LocSal}^{Fea}_{i} = \frac{\sum_{(x,y)\in I} \mathrm{LocContr}^{Fea}_{(x,y)}}{N_{i}},$$
where $N_{i}$ is the number of pixels in the $i$-th segmented region, and $I$ is the set of pixels belonging to segmented region $i$.
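The per-region averaging in step 2 amounts to a grouped mean of the contrast map; a sketch under the assumption that the segmentation is given as an integer label map:

```python
import numpy as np

def region_local_saliency(contrast, labels):
    """LocSal_i^Fea: mean of the per-pixel local contrast over each region.
    contrast: H x W contrast map; labels: H x W integer region ids."""
    return {int(r): float(contrast[labels == r].mean()) for r in np.unique(labels)}
```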
Step 3: combine the local saliency values of the color and orientation features:
$$\mathrm{LocSal}_{i} = \mathrm{LocSal}^{color}_{i} + \beta\, \mathrm{LocSal}^{orientation}_{i},$$
where $\mathrm{LocSal}^{color}_{i}$ is the local color saliency value of region $i$, obtained from the formula in step 2 with $Fea$ set to the color feature; $\mathrm{LocSal}^{orientation}_{i}$ is the local orientation saliency value of region $i$, obtained from the same formula with $Fea$ set to the orientation feature; and $\beta$ is the ratio of the entropies of the local saliency maps for the color and orientation features obtained in step 2, where the entropy of an image is given by
$$E = -\sum_{i=1}^{n} P_{i} \log P_{i},$$
with $P_{i}$ the probability that a pixel gray value belongs to the $i$-th gray level when the saliency map is represented by an $n$-level histogram. Entropy is one method of quantifying the amount of information contained in an image; the invention may also compute the amount of information contained in a saliency map by other known quantization methods.
The calculation of global saliency includes the following steps:
step 1: the spatial distribution of each segmented region i is calculated by:
$$\mathrm{DISTRI}_{i} = \sum_{j} \frac{\sum_{(x,y)\in I} \left\| X - \mathrm{Cen}^{sp}_{j} \right\|^{2}}{N_{i}},$$
where $\mathrm{DISTRI}_{i}$ is the spatial distribution of segmented region $i$; $X = [x, y]^{T}$ are the pixel coordinates; $j$ ranges over all segmented regions of the input image; $\mathrm{Cen}^{sp}_{j}$ are the centroid coordinates of segmented region $j$; $N_{i}$ is the number of pixels in segmented region $i$; and $I$ is the set of pixels belonging to region $i$.
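The spatial-distribution measure above can be sketched directly from the formula (a sketch, assuming an integer label map; `spatial_distribution` is an illustrative name):

```python
import numpy as np

def spatial_distribution(labels):
    """DISTRI_i: sum over all regions j of the mean squared distance from
    region i's pixels to region j's centroid (wide scatter -> large value)."""
    regions = np.unique(labels)
    coords = {int(r): np.argwhere(labels == r).astype(float) for r in regions}
    centroids = {r: c.mean(axis=0) for r, c in coords.items()}
    distri = {}
    for i in coords:
        total = 0.0
        for j in coords:
            d = coords[i] - centroids[j]
            total += float((d ** 2).sum(axis=1).mean())
        distri[i] = total
    return distri
```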
Step 2: calculate the feature isolation of segmented region $i$ according to the following formula:
$$\mathrm{ISO}^{Fea}_{i} = \sum_{j} \frac{\sum_{(x,y)\in I} \left\| F^{Fea}(x,y) - \mathrm{Mean}^{Fea}_{j} \right\|^{2}}{N_{i}},$$
where $\mathrm{ISO}^{Fea}_{i}$ is the feature isolation of segmented region $i$; $F^{Fea}(x,y)$ is the feature vector at position $(x,y)$; $\mathrm{Mean}^{Fea}_{j}$ is the mean of the feature vectors of the pixels in an arbitrary segmented region $j$; and $N_{i}$ is the number of pixels in segmented region $i$.
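Analogously, the feature isolation of step 2 can be sketched as follows (assumptions: label-map input, region means computed on the fly):

```python
import numpy as np

def feature_isolation(feature_map, labels):
    """ISO_i^Fea: sum over regions j of the mean squared feature distance
    between region i's pixels and region j's mean feature vector.
    feature_map: H x W x C; labels: H x W integer region ids."""
    regions = np.unique(labels)
    feats = {int(r): feature_map[labels == r].reshape(-1, feature_map.shape[-1])
             for r in regions}
    means = {r: f.mean(axis=0) for r, f in feats.items()}
    iso = {}
    for i in feats:
        total = 0.0
        for j in feats:
            d = feats[i] - means[j]
            total += float((d ** 2).sum(axis=1).mean())
        iso[i] = total
    return iso
```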
Step 3: calculate the global saliency value of segmented region $i$:
$$\mathrm{GloSal}^{Fea}_{i} = \frac{\mathrm{ISO}^{Fea}_{i}}{\mathrm{DISTRI}_{i}},$$
where $\mathrm{GloSal}^{Fea}_{i}$ is the global saliency value of segmented region $i$ with respect to the feature $Fea$.
Step 4: calculate the final global saliency value of segmented region $i$:
$$\mathrm{GloSal}_{i} = \mathrm{GloSal}^{color}_{i} + \gamma\, \mathrm{GloSal}^{orientation}_{i},$$
where $\mathrm{GloSal}_{i}$ is the final global saliency value of segmented region $i$; $\mathrm{GloSal}^{color}_{i}$ is the global saliency value of region $i$ with respect to the color feature, obtained from the formula in step 3 with $Fea$ set to the color feature; $\mathrm{GloSal}^{orientation}_{i}$ is the global saliency value of region $i$ with respect to the orientation feature, obtained with $Fea$ set to the orientation feature; and $\gamma$ is the ratio of the entropies of the global saliency maps for the color and orientation features obtained in step 3, where the entropy of an image is given by $E = -\sum_{i=1}^{n} P_{i} \log P_{i}$, with $P_{i}$ the probability that a pixel gray value belongs to the $i$-th gray level when the saliency map is represented by an $n$-level histogram.
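The ratio in step 3 — high feature isolation divided by spatial distribution — can be sketched as follows (the epsilon guard against division by zero is an added assumption):

```python
def global_saliency(iso, distri):
    """GloSal_i^Fea = ISO_i^Fea / DISTRI_i: regions whose features stand out
    (high isolation) and that are spatially compact (low spatial distribution)
    receive a high global saliency value."""
    return {i: iso[i] / max(distri[i], 1e-12) for i in iso}
```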
The third step is as follows:
according to the local significant value and the global significant value obtained in the second step, the weight of the local significant value and the global significant value in the final significant value is automatically adjusted by using the information contained in the image, the local significant value and the global significant value are weighted and summed to obtain the final significant value, so that a final significant graph is obtained, and the calculation formula is as follows:
$$\mathrm{Sal}_{i} = W_{Loc} \times \mathrm{LocSal}_{i} + W_{Glo} \times \mathrm{GloSal}_{i},$$
where $\mathrm{Sal}_{i}$ is the final saliency value of segmented region $i$, $W_{Loc}$ is the weight of the local saliency value, and $W_{Glo}$ is the weight of the global saliency value. Each weight $W$ is computed as $W = E_{Max} - E$, where $E$ is the information entropy of the local or global saliency map, given by $E = -\sum_{i=1}^{n} P_{i} \log P_{i}$, with $P_{i}$ the probability that a pixel gray value belongs to the $i$-th gray level when the saliency map is represented by an $n$-level histogram, and $E_{Max}$ is the maximum entropy under the chosen number of histogram levels. Entropy is one method of quantifying the amount of information contained in an image; the invention may also compute the amount of information contained in a saliency map by other known quantization methods.
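A sketch of the adaptive fusion, assuming $n = 16$ histogram levels and base-2 logarithms (the patent fixes neither):

```python
import numpy as np

def entropy_weight(sal_map, n_bins=16):
    """W = E_Max - E: a saliency map whose histogram entropy is low
    (information concentrated in few gray levels) gets a larger weight.
    E_Max = log2(n_bins) is the entropy of a uniform n-level histogram."""
    hist, _ = np.histogram(sal_map, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return float(np.log2(n_bins)) - entropy

def fuse(loc_sal, glo_sal, loc_map, glo_map):
    """Sal_i = W_Loc * LocSal_i + W_Glo * GloSal_i for every region i."""
    w_loc, w_glo = entropy_weight(loc_map), entropy_weight(glo_map)
    return {i: w_loc * loc_sal[i] + w_glo * glo_sal[i] for i in loc_sal}
```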
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A salient region detection method comprising:
step S1: pre-segmenting an input image into a plurality of segmented regions using a segmentation method based on local homogeneity analysis;
step S2: computing a local saliency value and a global saliency value for each segmented region, wherein the local saliency value is obtained by computing the contrast of different features over multi-scale neighborhoods, and the global saliency value is obtained by measuring the spatial distribution of the different features and the isolation between regions;
step S3: adaptively adjusting the weights of the local saliency value and the global saliency value in the final saliency value according to the amount of information contained in the local saliency map and the global saliency map of the segmented regions, and computing the weighted sum of the local and global saliency values to obtain the final saliency value of each segmented region, thereby obtaining the final saliency map; the local saliency map and the global saliency map are the maps represented by the local saliency values and the global saliency values, respectively;
step S4: extracting salient regions from the input image according to the final saliency map.
2. The detection method according to claim 1, wherein the calculation of the local saliency value of each segmented region in step S2 comprises:
step 1, calculating the local contrast of each pixel of the input image for different image features;
step 2, calculating a local saliency value of each segmented region for the different image features according to the local contrast, wherein the local saliency value is the average of the local contrasts of all pixels in the segmented region;
step 3, computing the weighted sum of the local saliency values of all the image features in each segmented region to obtain the local saliency value of the segmented region, wherein the weights are determined according to the amount of information contained in the local saliency maps corresponding to the image features.
3. The detection method according to claim 2, wherein the local contrast of each pixel of the input image for different image features in step 1 is calculated by:
$$\mathrm{LocContr}^{Fea}_{(x,y)} = \sum_{s}\left\| F^{Fea}(x,y) - M^{Fea}(x,y)_{s} \right\|,$$
where $\mathrm{LocContr}^{Fea}_{(x,y)}$ is the local contrast of one of the image features at position $(x,y)$; $F^{Fea}(x,y)$ is the feature vector of that image feature at position $(x,y)$; $s$ denotes the different scale levels of the pixel neighborhood size; and $M^{Fea}(x,y)_{s}$ is the mean of the feature vectors of all pixels contained in the neighborhood of the pixel at scale $s$.
4. The detection method according to claim 3, wherein in step 3, the weight is a ratio of entropies of local saliency maps corresponding to different image features.
5. The detection method according to claim 1, wherein the global saliency value of each segmented region in step S2 is calculated as follows:
step 1, calculating the spatial distribution of each segmented region for different image features;
step 2, calculating the feature isolation of each segmented region for different image features;
step 3, calculating a global saliency value of each segmented region for different image features, wherein the global saliency value is the ratio of the feature isolation of the segmented region to its spatial distribution;
step 4, calculating a final global saliency value of each segmented region, wherein the final global saliency value is the weighted sum of the global saliency values corresponding to all image features, and the weights are determined according to the amount of information contained in the global saliency maps corresponding to the image features.
6. The detection method according to claim 5, wherein the spatial distribution of each divided region is calculated by:
$$\mathrm{DISTRI}_{i} = \sum_{j} \frac{\sum_{(x,y)\in I} \left\| X - \mathrm{Cen}^{sp}_{j} \right\|^{2}}{N_{i}},$$
where $\mathrm{DISTRI}_{i}$ is the spatial distribution of the $i$-th segmented region; $X = [x, y]^{T}$ are the pixel coordinates; $j$ indexes the $j$-th segmented region of the input image; $\mathrm{Cen}^{sp}_{j}$ are the centroid coordinates of the $j$-th segmented region; $N_{i}$ is the number of pixels in the $i$-th segmented region; and $I$ is the set of pixels belonging to the $i$-th segmented region.
7. The detection method according to claim 5, wherein the feature isolation of each divided region is calculated by:
$$\mathrm{ISO}^{Fea}_{i} = \sum_{j} \frac{\sum_{(x,y)\in I} \left\| F^{Fea}(x,y) - \mathrm{Mean}^{Fea}_{j} \right\|^{2}}{N_{i}},$$
where $\mathrm{ISO}^{Fea}_{i}$ is the feature isolation of the $i$-th segmented region; $F^{Fea}(x,y)$ is the corresponding image feature vector at position $(x,y)$; $j$ indexes the $j$-th segmented region of the input image; $\mathrm{Mean}^{Fea}_{j}$ is the mean of the feature vectors of the pixels in the $j$-th segmented region; and $N_{i}$ is the number of pixels in the $i$-th segmented region.
8. The detection method according to claim 1, wherein the weights of the local saliency value and the global saliency value of each segmented region in the final saliency value in step S3 are calculated as follows:
$$W = E_{Max} - E,$$
where $W$ is the weight of the local saliency value or the global saliency value; $E$ is the information entropy of the local saliency map or the global saliency map, given by $E = -\sum_{k=1}^{n} P_{k} \log P_{k}$, with $P_{k}$ the probability that a pixel gray value belongs to the $k$-th gray level when the local or global saliency map is represented by an $n$-level histogram; and $E_{Max}$ is the maximum entropy under different numbers of histogram levels.
9. The detection method according to claim 2 or 5, wherein the amount of information contained in the local saliency maps or the global saliency maps corresponding to all the image features is quantified by the ratio of the entropies of the local saliency maps or the global saliency maps corresponding to the image features.
10. A salient region detection apparatus comprising:
a segmentation module: pre-segmenting an input image into a plurality of segmented regions using a segmentation method based on local homogeneity analysis;
a first calculation module: calculating a local saliency value and a global saliency value for each segmented region, wherein the local saliency value is obtained by calculating the contrast of different features over multi-scale neighborhoods, and the global saliency value is obtained by measuring the spatial distribution of the different features and the isolation between regions;
a second calculation module: automatically adjusting the weights of the local saliency value and the global saliency value in the final saliency value according to the amounts of information contained in the local saliency map and the global saliency map of each segmented region, and performing a weighted summation of the local saliency value and the global saliency value to obtain the final saliency value of each segmented region and thereby the final saliency map; wherein the local saliency map and the global saliency map are maps represented by the local saliency value and the global saliency value of each segmented region; and
a detection module: extracting a salient region from the input image according to the final saliency map.
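The fusion performed by the second calculation module can be sketched as a weighted sum of the two maps; the function name and the min-max renormalization below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def fuse_saliency(local_map, global_map, w_local, w_global):
    """Weighted fusion of a local and a global saliency map.

    Sketch of the second calculation module: each map is scaled by its
    (e.g. entropy-derived) weight, the two are summed, and the result is
    rescaled to [0, 1] to give the final saliency map.
    """
    fused = w_local * local_map + w_global * global_map
    lo, hi = fused.min(), fused.max()
    if hi == lo:
        # Flat map: no region stands out, return an all-zero saliency map.
        return np.zeros_like(fused)
    return (fused - lo) / (hi - lo)
```

A salient region can then be extracted by thresholding the returned map, e.g. keeping segmented regions whose final saliency value exceeds a fixed or adaptive threshold.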
CN201410301797.9A 2014-06-27 2014-06-27 Salient region detection method and device Active CN104050674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410301797.9A CN104050674B (en) 2014-06-27 2014-06-27 Salient region detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410301797.9A CN104050674B (en) 2014-06-27 2014-06-27 Salient region detection method and device

Publications (2)

Publication Number Publication Date
CN104050674A true CN104050674A (en) 2014-09-17
CN104050674B CN104050674B (en) 2017-01-25

Family

ID=51503457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410301797.9A Active CN104050674B (en) 2014-06-27 2014-06-27 Salient region detection method and device

Country Status (1)

Country Link
CN (1) CN104050674B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106404793A (en) * 2016-09-06 2017-02-15 Institute of Automation, Chinese Academy of Sciences Method for detecting defects of bearing sealing element based on vision
CN108805826A (en) * 2018-05-07 2018-11-13 Zhuhai Allwinner Technology Co., Ltd. Improve the method for defog effect
CN109242877A (en) * 2018-09-21 2019-01-18 Xinjiang University Image partition method and device
WO2019015344A1 (en) * 2017-07-21 2019-01-24 Peking University Shenzhen Graduate School Image saliency object detection method based on center-dark channel priori information
CN112122175A (en) * 2020-08-12 2020-12-25 Zhejiang University Material enhanced feature recognition and selection method of color sorter

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102496157B (en) * 2011-11-22 2014-04-09 Shanghai University of Electric Power Image detection method based on Gaussian multi-scale transform and color complexity
CN102800086B (en) * 2012-06-21 2015-02-04 Shanghai Maritime University Offshore scene significance detection method
CN103337075B (en) * 2013-06-20 2016-04-27 Zhejiang University A kind of image significance computing method based on isophote
CN103632153B (en) * 2013-12-05 2017-01-11 Ningbo University Region-based image saliency map extracting method
CN103729848B (en) * 2013-12-28 2016-08-31 Beijing University of Technology High-spectrum remote sensing small target detecting method based on spectrum saliency

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN106404793A (en) * 2016-09-06 2017-02-15 Institute of Automation, Chinese Academy of Sciences Method for detecting defects of bearing sealing element based on vision
CN106404793B (en) * 2016-09-06 2020-02-28 Institute of Automation, Chinese Academy of Sciences Bearing sealing element defect detection method based on vision
WO2019015344A1 (en) * 2017-07-21 2019-01-24 Peking University Shenzhen Graduate School Image saliency object detection method based on center-dark channel priori information
CN108805826A (en) * 2018-05-07 2018-11-13 Zhuhai Allwinner Technology Co., Ltd. Improve the method for defog effect
CN109242877A (en) * 2018-09-21 2019-01-18 Xinjiang University Image partition method and device
CN109242877B (en) * 2018-09-21 2021-09-21 Xinjiang University Image segmentation method and device
CN112122175A (en) * 2020-08-12 2020-12-25 Zhejiang University Material enhanced feature recognition and selection method of color sorter

Also Published As

Publication number Publication date
CN104050674B (en) 2017-01-25

Similar Documents

Publication Publication Date Title
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
Wang et al. SSRNet: In-field counting wheat ears using multi-stage convolutional neural network
CN105740945B (en) A kind of people counting method based on video analysis
Morris A pyramid CNN for dense-leaves segmentation
CN104050674B (en) Salient region detection method and device
Gui et al. A new method for soybean leaf disease detection based on modified salient regions
CN104715251B (en) A kind of well-marked target detection method based on histogram linear fit
WO2019071976A1 (en) Panoramic image saliency detection method based on regional growth and eye movement model
CN103632153B (en) Region-based image saliency map extracting method
CN112991238B (en) Food image segmentation method, system and medium based on texture and color mixing
CN107369158A (en) The estimation of indoor scene layout and target area extracting method based on RGB D images
CN107506792B (en) Semi-supervised salient object detection method
CN105335965B (en) Multi-scale self-adaptive decision fusion segmentation method for high-resolution remote sensing image
CN105405138A (en) Water surface target tracking method based on saliency detection
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN112084988B (en) Lane line instance clustering method and device, electronic equipment and storage medium
CN109800713A (en) The remote sensing images cloud detection method of optic increased based on region
CN103678552A (en) Remote-sensing image retrieving method and system based on salient regional features
CN112329559A (en) Method for detecting homestead target based on deep convolutional neural network
CN103324753B (en) Based on the image search method of symbiotic sparse histogram
CN105913425A (en) Self-adaptive oval blocking and wavelet transformation-based multi-pig contour extraction method
CN105354547A (en) Pedestrian detection method in combination of texture and color features
Tang et al. Salient object detection of dairy goats in farm image based on background and foreground priors
CN109242854A (en) A kind of image significance detection method based on FLIC super-pixel segmentation
CN110135435B (en) Saliency detection method and device based on breadth learning system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant