CN102509072A - Method for detecting salient object in image based on inter-area difference - Google Patents
Method for detecting salient object in image based on inter-area difference Download PDFInfo
- Publication number
- CN102509072A CN102509072A CN2011103120919A CN201110312091A CN102509072A CN 102509072 A CN102509072 A CN 102509072A CN 2011103120919 A CN2011103120919 A CN 2011103120919A CN 201110312091 A CN201110312091 A CN 201110312091A CN 102509072 A CN102509072 A CN 102509072A
- Authority
- CN
- China
- Prior art keywords
- indicate
- saliency maps
- rectangular area
- pixel
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method for detecting a salient object in an image based on inter-area difference. The method specifically comprises the following steps of: (1) inputting an original image and calculating a saliency map of the original image; (2) calculating and modifying the saliency map; and (3) updating the saliency map through iteration and finding a target rectangle in the greatest difference from the external area, wherein the image content of the internal area of the target rectangle is the detected salient object. The method can accurately detect the salient object in the image without setting any parameter.
Description
Technical field
The present invention relates to computer vision, technical field of image processing, a kind of detection method of significant object in the image based on region difference is concretely related to.
Background technique
The research achievement of psychology and perception science is it has been shown that when people observes piece image, to the concern in each region of image and unequal, to generate Saliency maps corresponding with attention rate.In most cases, it is focused on a certain region of image when people observes piece image, which is referred to as significant object.In other words, significant object obtains higher attention rate compared to other regions of image.If can come out significant object detection, applications many to significant Object identifying, image adaptive, compression of images, image retrieval etc. can provide biggish help.Significant method for checking object is exactly applied in this background and is given birth to, it is intended to quickly and accurately detected the significant object in image using Saliency maps picture corresponding with image attention degree.The result of detection shows as marking one piece of rectangular area in the picture, the rectangular area it is as much as possible it is as few as possible comprising significant object include background.Significant method for checking object has had preliminary research at present, as Liu et al. people in June, 2007 American Institute of Electrical and Electronics Engineers computer vision and pattern identification meeting on " the significant object detection based on study " one text for delivering, significant object detection described in this article is to search for target rectangle on Saliency maps picture using exhaust algorithm, which has framed at least 95% high pixel of conspicuousness.This detection method needs given threshold, and the detection speed of significant object is slow, and detection effect depends on the effect of Saliency maps picture." the conspicuousness model based on concentric curve and color " text that Valenti et al. is delivered in American Institute of Electrical and Electronics Engineers computer vision meeting in 2009, significant object detection described in this article is to search for target rectangle on Saliency maps picture using efficient child window searching algorithm, this algorithm accelerates the speed of search target rectangle, but cannot accurately detect significant object, specific step is as follows for efficient child window searching algorithm:
(1) p is set as an empty ordered queue, a point set is formed with four apex coordinates of image, using the point set as the point set of the head of the queue of ordered queue p;
(2) point set of the head of the queue of ordered queue p is divided into two subsets from that maximum side of its interval;
(3) upper bound of each subset is calculated by borderline quality function;
(4) two subsets for obtaining second step are inserted into ordered queue p according to the calculated upper bound of third step;
(5) step (2)-(4) are repeated, when the subset taken out from the head of the queue of ordered queue p only includes a rectangle, which is global maximum, the target rectangle exactly searched.
Luo et al. has delivered " object detection based on maximum conspicuousness density " text in Asia computer vision meeting in 2010, " object detection " described in this article is to search for target rectangle on Saliency maps picture using maximum significant density algorithm, this algorithm improves the accuracy of significant object detection, but this method needs to design different parameters for different conspicuousness models, can not achieve adaptivity.Liu et al. people has delivered " detecting without ginseng conspicuousness based on Density Estimator " in the American Institute of Electrical and Electronics Engineers image procossing international conference of in September, 2010, this article establishes the conspicuousness model of printenv with nonparametric probability algorithm, for obtaining the Saliency maps of image, specific step is as follows for the algorithm:
It (1) is several regions image pre-segmentation with mean shift algorithm;
(2) each pixel in image and the color similarity of each region are calculated with nonparametric probability algorithm;
(3) color distance between each region is calculated using the color similarity of each pixel and each region in image, forms the color Saliency maps of image;
(4) space length between each region is calculated using the color similarity of each pixel and each region in image, forms the spatial saliency figure of image;
(5) by the color Saliency maps of image and spatial saliency figure, the final Saliency maps of image are formed.
Described in summary, existing significant method for checking object needs that corresponding parameter is arranged for various conspicuousness models, is just able to achieve the accurate detection of significant object, this affects the extensive use of significant object detection.
Summary of the invention
It is an object of the invention to a kind of detection method of significant object in the image based on region difference is proposed for defect present in prior art, this method can accurately detect significant object, without corresponding parameter is arranged for various conspicuousness models.
In order to achieve the above object, the technical scheme adopted by the invention is that it is as follows:
The detection method of significant object in a kind of image based on region difference, the specific steps of which are as follows:
(1), original image is inputted, the Saliency maps of original image are calculated;
(2), modification Saliency maps are calculated;
(3), by iteration update Saliency maps, find with the maximum target rectangle of perimeter difference, the picture material of the target rectangle interior zone is the significant object detected.
Input original image described in above-mentioned steps (1), calculates the Saliency maps of original image, the specific steps of which are as follows:
(1-1), original image is divided into several regions with mean shift algorithm;
(1-2), each pixel in image and the color similarity of each region, calculating formula are calculated are as follows:
(1)
Wherein,Indicate theA region,Indicate the region being divided intoIn pixel number,Indicate regionIn pixel,Indicate pixelThe color characteristic at place,Indicate pixelThe color characteristic at place,Indicate theThe kernel function in a region,It indicatesThe pixel at placeWithThe color similarity in a region;
(1-3), the color Saliency maps for calculating original image, calculating formula are as follows:
(2)
Wherein,Indicate theA region, n indicate region sum,It indicatesThe pixel at placeWithThe color similarity in a region,Indicate theThe color conspicuousness in a region,Indicate the color Saliency maps of original image;
(1-4), the spatial saliency figure for calculating original image, calculating formula are as follows:(3)
Wherein,Indicate theA region, n indicate region sum,It indicatesThe pixel at placeWithThe color similarity in a region,Indicate theThe spatial saliency in a region,Indicate the spatial saliency figure of original image;
(1-5), the Saliency maps for calculating original image, calculating formula are as follows:
Wherein,Indicate the color Saliency maps of original image,Indicate the spatial saliency figure of original image,Indicate the Saliency maps of original image.
The numerical value of each pixel position is the significance value of the pixel in Saliency maps, and the value range of the significance value is 0~255, and significance value is bigger, and the expression pixel is more significant, and the smaller expression pixel of significance value is not significant.
Saliency maps are modified in calculating described in above-mentioned steps (2), the specific steps of which are as follows:
(2-2), each pixel on Saliency maps is calculatedTo the center of gravityEuclidean distance, calculating formula are as follows:
Wherein,、Indicate pixel coordinate,、Indicate barycentric coodinates,Indicate pixelTo the Euclidean distance of the center of gravity of Saliency maps;
(2-3), modified Saliency maps, calculating formula are calculated are as follows:
Wherein, W and H respectively indicates the width and height of image,Indicate pixelTo the Euclidean distance of the center of gravity of Saliency maps,Indicate original Saliency maps,Indicate modified Saliency maps.
Described in above-mentioned steps (3) by iteration update Saliency maps, find with the maximum target rectangle of Saliency maps perimeter difference, the picture material of the target rectangle interior zone is the significant object detected, the specific steps of which are as follows:
(3-1), the initial value that iteration is set, the specific steps of which are as follows:
(3-1-2), it setsIt indicates theThe Saliency maps updated in secondary iteration, Saliency maps when original state, whereinIndicate modified Saliency maps obtained in step (2);
(3-1-3), it setsIt indicates theThe rectangular area obtained in secondary iteration,It indicates rectangular area when original state, is entire Saliency maps;
(3-1-4), it setsIndicate theThe difference value of the rectangular area and perimeter that obtain in secondary iteration, perimeter refer to the region removed behind rectangular area in Saliency maps, the difference value of rectangular area and perimeter when original state;
(3-1-5), it setsIndicate theSaliency maps in secondary iterationThe mean value of the significance value of upper all pixels point, Saliency maps when original stateThe mean value of the significance value of upper all pixels point is;
(3-2), Saliency maps are updated by iteration, obtains rectangular area, the specific steps of which are as follows:
(3-2-1), it is located atIn secondary iteration, withThe Saliency maps updated in -1 iterationThe significance value of upper each pixel subtracts Saliency mapsThe mean value of the significance value of upper all pixels point, the Saliency maps that are updated;
(3-2-2), using efficient child window searching algorithm, in the Saliency maps of updateOne rectangular area of middle acquisition, the sum of all pixels point is greater than the Saliency maps updated in the rectangular areaThe sum of pixel in upper any other rectangle;
(3-2-3), the rectangular area obtained in step (3-2-2) is calculatedWith the difference value of perimeter, calculating formula are as follows:
Wherein,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area,Indicate theThe difference value of rectangular area and perimeter in secondary iteration,Indicate an intermediate variable;
Wherein,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area,Indicate rectangular areaInternal pixel number,Indicate rectangular areaThe mean value of the significance value of internal all pixels point,Indicate rectangular areaExternal pixel number,Indicate rectangular areaThe mean value of the significance value of external all pixels point;
(3-2-4), Saliency maps are updatedThe mean value of the significance value of upper all pixels point, calculating formula are as follows:
Wherein,Indicate Saliency maps before updatingThe mean value of the significance value of upper all pixels point,Indicate rectangular areaThe mean value of the significance value of internal all pixels point,For withIt is correspondingOn rectangular area,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),Indicate Saliency maps after updatingThe mean value of the significance value of upper all pixels point;
(3-2-5), in Saliency mapsOn byThe rectangular area obtained in secondary iterationThe significance value of all pixels point in addition is set to;
(3-3) if,The difference value of the rectangular area and perimeter that are obtained in secondary iteration, thenFor the target rectangle of acquisition;Otherwise continue step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, the picture material of the target rectangle interior zone is the significant object detected.
Compared with prior art, the detection of significant object in image can relatively accurately be realized to the detection method of significant object by having the advantages that this method does not need any parameter is arranged in image based on region difference of the invention.
Detailed description of the invention
Fig. 1 is the flow chart of the detection method of significant object in the image of the invention based on region difference;
Fig. 2 is the original image of input;
Fig. 3 is the Saliency maps picture of original image;
Fig. 4 is the target rectangle figure obtained in the notable figure of modification;
Fig. 5 is to obtain the significant object diagram detected on the original image.
Specific embodiment
The embodiment of the present invention is described in further detail with reference to the accompanying drawings of the specification.
The emulation experiment that the present invention carries out is 2.53GHz, programs and realize on the interior PC test platform for saving as 1.96GB in CPU.
As shown in Figure 1, in the image of the invention based on region difference significant object detection method, described using following steps:
(1), input original image calculates the Saliency maps of original image such as Fig. 2 (a), the specific steps of which are as follows:
(1-1), original image is divided into several regions with mean shift algorithm;
(1-2), each pixel in image and the color similarity of each region, calculating formula are calculated are as follows:
Wherein,Indicate theA region,Indicate the region being divided intoIn pixel number,Indicate regionIn pixel,Indicate pixelThe color characteristic at place,Indicate pixelThe color characteristic at place,Indicate theThe kernel function in a region,It indicatesThe pixel at placeWithThe color similarity in a region;
(1-3), the color Saliency maps for calculating original image, calculating formula are as follows:
Wherein,Indicate theA region, n indicate region sum,It indicatesThe pixel at placeWithThe color similarity in a region,Indicate theThe color conspicuousness in a region,Indicate the color Saliency maps of original image;
(1-4), the spatial saliency figure for calculating original image, calculating formula are as follows:(3)
Wherein,Indicate theA region, n indicate region sum,It indicatesThe pixel at placeWithThe color similarity in a region,Indicate theThe spatial saliency in a region,Indicate the spatial saliency figure of original image;
(1-5), the Saliency maps for calculating original image, calculating formula are as follows:
Wherein,Indicate the color Saliency maps of original image,Indicate the spatial saliency figure of original image,The Saliency maps for indicating original image, such as Fig. 2 (b).
The numerical value of each pixel position is the significance value of the pixel in Saliency maps, and the value range of the significance value is 0~255, significance value is bigger, and the expression pixel is more significant, and the smaller expression pixel of significance value is not significant;
(2), modification Saliency maps are calculated, the specific steps of which are as follows:
(2-1), the center of gravity for setting Saliency maps;
(2-2), each pixel on Saliency maps is calculatedTo the center of gravityEuclidean distance, calculating formula are as follows:
Wherein,、Indicate pixel coordinate,、Indicate barycentric coodinates,Indicate pixelTo the Euclidean distance of the center of gravity of Saliency maps;
(2-3), modified Saliency maps, calculating formula are calculated are as follows:
Wherein, W and H respectively indicates the width and height of image,Indicate pixelTo the Euclidean distance of the center of gravity of Saliency maps,Indicate original Saliency maps, such as Fig. 2 (b),Modified Saliency maps are indicated, such as Fig. 3 (a);
(3), by iteration update Saliency maps, find with the maximum target rectangle of Saliency maps perimeter difference, the picture material of the target rectangle interior zone is the significant object detected, the specific steps of which are as follows:
(3-1), the initial value that iteration is set, the specific steps of which are as follows:
(3-1-2), it setsIt indicates theThe Saliency maps updated in secondary iteration, Saliency maps when original state, whereinIndicate modified Saliency maps obtained in step (2);
(3-1-3), it setsIt indicates theThe rectangular area obtained in secondary iteration,It indicates rectangular area when original state, is entire Saliency maps;The resolution ratio of original image is 378 × 400 in this experiment, the rectangular area of original stateIt is 378 × 400;
(3-1-4), it setsIndicate theThe difference value of the rectangular area and perimeter that obtain in secondary iteration, perimeter refer to the region removed behind rectangular area in Saliency maps, the difference value of rectangular area and perimeter when original state;
(3-1-5), it setsIndicate theSaliency maps in secondary iterationThe mean value of the significance value of upper all pixels point, Saliency maps when original stateThe mean value of the significance value of upper all pixels point is, whereinIndicate Saliency mapsMiddle pixel coordinateThe significance value at place,Indicate Saliency mapsIn pixel number,Indicate Saliency maps when original state;
(3-2), Saliency maps are updated by iteration, obtains rectangular area, the specific steps of which are as follows:
(3-2-1), by taking the 1st iteration as an example, with the Saliency maps of original stateThe significance value of upper each pixel subtractsThe mean value of the significance value of upper all pixels point, the Saliency maps that are updated;
(3-2-2), using efficient child window searching algorithm, in the Saliency maps of updateOne rectangular area of middle acquisition, the sum of all pixels point is greater than the Saliency maps updated in the rectangular areaThe sum of pixel in upper any other rectangle;
(3-2-3), the rectangular area obtained in step (3-2-2) is calculatedWith the difference value of perimeter, calculating formula are as follows:
Wherein,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area, such as the purple rectangle in Fig. 3 (a),Indicate theThe difference value of rectangular area and perimeter in secondary iteration,Indicate an intermediate variable;
Wherein,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area,Indicate rectangular areaInternal pixel number,Indicate rectangular areaThe mean value of the significance value of internal all pixels point,Indicate rectangular areaExternal pixel number,Indicate rectangular areaThe mean value of the significance value of external all pixels point;
For example, calculating the rectangular area obtained in the 1st iterationWith the difference value of perimeter, then obtained according to formula (7), whereinIndicate the rectangular area obtained in the 1st iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area,Indicate the intermediate variable obtained by formula (8);
(3-2-4), Saliency maps are updatedThe mean value of the significance value of upper all pixels point, calculating formula are as follows:
Wherein,Indicate Saliency maps before updatingThe mean value of the significance value of upper all pixels point,Indicate rectangular areaThe mean value of the significance value of internal all pixels point,For withIt is correspondingOn rectangular area,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),Indicate Saliency maps after updatingThe mean value of the significance value of upper all pixels point;
For example, updating Saliency maps according to formula (9) in the 1st iterationThe significance value of upper all pixels point is worth;
(3-2-5), in Saliency mapsOn byThe rectangular area obtained in secondary iterationThe significance value of all pixels point in addition is set to;
(3-3) if,The difference value of the rectangular area and perimeter that are obtained in secondary iteration, thenFor the target rectangle of acquisition;Otherwise continuing step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, the picture material of the target rectangle interior zone is the significant object detected, for example, in the 1st iteration,,,Then continue step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, if the yellow rectangle in Fig. 3 (b) is objective correct target rectangle, purple rectangle is the target rectangle detected on the original image, and the picture material of the target rectangle interior zone is the significant object detected.
It can be seen that method of the invention under conditions of not needing that any parameter is arranged from above-mentioned the simulation experiment result, can accurately come out significant object detection.
Claims (4)
1. the detection method of significant object in a kind of image based on region difference, the specific steps of which are as follows:
(1), original image is inputted, the Saliency maps of original image are calculated;
(2), modification Saliency maps are calculated;
(3), by iteration update Saliency maps, find with the maximum target rectangle of perimeter difference, the picture material of the target rectangle interior zone is the significant object detected.
2. the detection method of significant object in the image according to claim 1 based on region difference, which is characterized in that input original image described in above-mentioned steps (1) calculates the Saliency maps of original image, the specific steps of which are as follows:
(1-1), original image is divided into several regions with mean shift algorithm;
(1-2), each pixel in image and the color similarity of each region, calculating formula are calculated are as follows:
Wherein,Indicate theA region,Indicate the region being divided intoIn pixel number,Indicate regionIn pixel,Indicate pixelThe color characteristic at place,Indicate pixelThe color characteristic at place,Indicate theThe kernel function in a region,It indicatesThe pixel at placeWithThe color similarity in a region;
(1-3), the color Saliency maps for calculating original image, calculating formula are as follows:
Wherein,Indicate theA region, n indicate region sum,It indicatesThe pixel at placeWithThe color similarity in a region,Indicate theThe color conspicuousness in a region,Indicate the color Saliency maps of original image;
(1-4), the spatial saliency figure for calculating original image, calculating formula are as follows:
Wherein,Indicate theA region, n indicate region sum,It indicatesThe pixel at placeWithThe color similarity in a region,Indicate theThe spatial saliency in a region,Indicate the spatial saliency figure of original image;
(1-5), the Saliency maps for calculating original image, calculating formula are as follows:
Wherein,Indicate the color Saliency maps of original image,Indicate the spatial saliency figure of original image,Indicate the Saliency maps of original image, the numerical value of each pixel position is the significance value of the pixel in Saliency maps, and the value range of the significance value is 0~255, significance value is bigger, and the expression pixel is more significant, and the smaller expression pixel of significance value is not significant.
3. the detection method of significant object in the image according to claim 2 based on region difference, which is characterized in that Saliency maps are modified in calculating described in above-mentioned steps (2), the specific steps of which are as follows:
(2-2), each pixel on Saliency maps is calculatedTo the center of gravityEuclidean distance, calculating formula are as follows:
(5)
Wherein,、Indicate pixel coordinate,、Indicate barycentric coodinates,Indicate pixelTo the Euclidean distance of the center of gravity of Saliency maps;
(2-3), modified Saliency maps, calculating formula are calculated are as follows:
4. the detection method of significant object in the image according to claim 3 based on region difference, it is characterized in that, Saliency maps are updated by iteration described in above-mentioned steps (3), it finds and the maximum target rectangle of Saliency maps perimeter difference, the picture material of the target rectangle interior zone is the significant object detected, the specific steps of which are as follows:
(3-1), the initial value that iteration is set, the specific steps of which are as follows:
(3-1-2), it setsIt indicates theThe Saliency maps updated in secondary iteration, Saliency maps when original state, whereinIndicate modified Saliency maps obtained in step (2);
(3-1-3), it setsIt indicates theThe rectangular area obtained in secondary iteration,It indicates rectangular area when original state, is entire Saliency maps;
(3-1-4), it setsIndicate theThe difference value of the rectangular area and perimeter that obtain in secondary iteration, perimeter refer to the region removed behind rectangular area in Saliency maps, the difference value of rectangular area and perimeter when original state;
(3-1-5), it setsIndicate theSaliency maps in secondary iterationThe mean value of the significance value of upper all pixels point, Saliency maps when original stateThe mean value of the significance value of upper all pixels point is;
(3-2), Saliency maps are updated by iteration, obtains rectangular area, the specific steps of which are as follows:
(3-2-1), it is located atIn secondary iteration, withThe Saliency maps updated in -1 iterationThe significance value of upper each pixel subtracts Saliency mapsThe mean value of the significance value of upper all pixels point, the Saliency maps that are updated;
(3-2-2), using efficient child window searching algorithm, in the Saliency maps of updateOne rectangular area of middle acquisition, the sum of all pixels point is greater than the Saliency maps updated in the rectangular areaThe sum of pixel in upper any other rectangle;
(3-2-3), the rectangular area obtained in step (3-2-2) is calculatedWith the difference value of perimeter, calculating formula are as follows:
Wherein,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area,Indicate theThe difference value of rectangular area and perimeter in secondary iteration,Indicate an intermediate variable;
Wherein,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),For withIt is correspondingOn rectangular area,Indicate rectangular areaInternal pixel number,Indicate rectangular areaThe mean value of the significance value of internal all pixels point,Indicate rectangular areaExternal pixel number,Indicate rectangular areaThe mean value of the significance value of external all pixels point;
(3-2-4), Saliency maps are updatedThe mean value of the significance value of upper all pixels point, calculating formula are as follows:
Wherein,Indicate Saliency maps before updatingThe mean value of the significance value of upper all pixels point,Indicate rectangular areaThe mean value of the significance value of internal all pixels point,For withIt is correspondingOn rectangular area,It indicates theThe rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),Indicate Saliency maps after updatingThe mean value of the significance value of upper all pixels point;
(3-2-5), in Saliency mapsOn byThe rectangular area obtained in secondary iterationThe significance value of all pixels point in addition is set to;
(3-3) if,The difference value of the rectangular area and perimeter that are obtained in secondary iteration, thenFor the target rectangle of acquisition;Otherwise continue step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, the picture material of the target rectangle interior zone is the significant object detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110312091 CN102509072B (en) | 2011-10-17 | 2011-10-17 | Method for detecting salient object in image based on inter-area difference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110312091 CN102509072B (en) | 2011-10-17 | 2011-10-17 | Method for detecting salient object in image based on inter-area difference |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102509072A true CN102509072A (en) | 2012-06-20 |
CN102509072B CN102509072B (en) | 2013-08-28 |
Family
ID=46221153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110312091 Expired - Fee Related CN102509072B (en) | 2011-10-17 | 2011-10-17 | Method for detecting salient object in image based on inter-area difference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102509072B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102938139A (en) * | 2012-11-09 | 2013-02-20 | 清华大学 | Automatic synthesis method for fault finding game images |
CN103218832A (en) * | 2012-10-15 | 2013-07-24 | 上海大学 | Visual saliency algorithm based on overall color contrast ratio and space distribution in image |
CN106407978A (en) * | 2016-09-24 | 2017-02-15 | 上海大学 | Unconstrained in-video salient object detection method combined with objectness degree |
CN110689007A (en) * | 2019-09-16 | 2020-01-14 | Oppo广东移动通信有限公司 | Subject recognition method and device, electronic equipment and computer-readable storage medium |
CN111461139A (en) * | 2020-03-27 | 2020-07-28 | 武汉工程大学 | Multi-target visual saliency layered detection method in complex scene |
CN113114943A (en) * | 2016-12-22 | 2021-07-13 | 三星电子株式会社 | Apparatus and method for processing image |
US11670068B2 (en) | 2016-12-22 | 2023-06-06 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510299A (en) * | 2009-03-04 | 2009-08-19 | 上海大学 | Image self-adapting method based on vision significance |
-
2011
- 2011-10-17 CN CN 201110312091 patent/CN102509072B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510299A (en) * | 2009-03-04 | 2009-08-19 | 上海大学 | Image self-adapting method based on vision significance |
Non-Patent Citations (1)
Title |
---|
郎丛妍等: "一种基于模糊信息粒化的视频时空显著单元提取方法", 《电子学报》 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218832A (en) * | 2012-10-15 | 2013-07-24 | 上海大学 | Visual saliency algorithm based on overall color contrast ratio and space distribution in image |
CN103218832B (en) * | 2012-10-15 | 2016-01-13 | 上海大学 | Based on the vision significance algorithm of global color contrast and spatial distribution in image |
CN102938139A (en) * | 2012-11-09 | 2013-02-20 | 清华大学 | Automatic synthesis method for fault finding game images |
CN102938139B (en) * | 2012-11-09 | 2015-03-04 | 清华大学 | Automatic synthesis method for fault finding game images |
CN106407978A (en) * | 2016-09-24 | 2017-02-15 | 上海大学 | Unconstrained in-video salient object detection method combined with objectness degree |
CN106407978B (en) * | 2016-09-24 | 2020-10-30 | 上海大学 | Method for detecting salient object in unconstrained video by combining similarity degree |
CN113114943A (en) * | 2016-12-22 | 2021-07-13 | 三星电子株式会社 | Apparatus and method for processing image |
US11670068B2 (en) | 2016-12-22 | 2023-06-06 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image |
CN113114943B (en) * | 2016-12-22 | 2023-08-04 | 三星电子株式会社 | Apparatus and method for processing image |
CN110689007A (en) * | 2019-09-16 | 2020-01-14 | Oppo广东移动通信有限公司 | Subject recognition method and device, electronic equipment and computer-readable storage medium |
CN110689007B (en) * | 2019-09-16 | 2022-04-15 | Oppo广东移动通信有限公司 | Subject recognition method and device, electronic equipment and computer-readable storage medium |
CN111461139A (en) * | 2020-03-27 | 2020-07-28 | 武汉工程大学 | Multi-target visual saliency layered detection method in complex scene |
Also Published As
Publication number | Publication date |
---|---|
CN102509072B (en) | 2013-08-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102509072A (en) | Method for detecting salient object in image based on inter-area difference | |
CN110232311B (en) | Method and device for segmenting hand image and computer equipment | |
US11062124B2 (en) | Face pose detection method, device and storage medium | |
CN108052624A (en) | Processing Method of Point-clouds, device and computer readable storage medium | |
Ye et al. | A new method based on hough transform for quick line and circle detection | |
CN107063261B (en) | Multi-feature information landmark detection method for precise landing of unmanned aerial vehicle | |
CN104615986B (en) | The method that pedestrian detection is carried out to the video image of scene changes using multi-detector | |
CN103605978A (en) | Urban illegal building identification system and method based on three-dimensional live-action data | |
CN106384355B (en) | A kind of automatic calibration method in projection interactive system | |
CN101490711A (en) | Image processing device and image processing method | |
CN102663401B (en) | Image characteristic extracting and describing method | |
CN105574533B (en) | A kind of image characteristic extracting method and device | |
CN105457908B (en) | The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD | |
US10235576B2 (en) | Analysis method of lane stripe images, image analysis device, and non-transitory computer readable medium thereof | |
CN108052869B (en) | Lane line recognition method, lane line recognition device and computer-readable storage medium | |
CN110490839B (en) | Method and device for detecting damaged area in expressway and computer equipment | |
CN107507226A (en) | A kind of method and device of images match | |
CN109509222B (en) | Method and device for detecting linear object | |
CN108615014B (en) | Eye state detection method, device, equipment and medium | |
CN102411705A (en) | Method and interface of recognizing user's dynamic organ gesture and elec tric-using apparatus using the interface | |
CN103778436A (en) | Pedestrian gesture inspecting method based on image processing | |
CN104123554A (en) | SIFT image characteristic extraction method based on MMTD | |
EP2884459A1 (en) | Image processing device, image processing method, and image processing program | |
CN115359295A (en) | Decoupling knowledge distillation hardware target detection method and system | |
US20150103080A1 (en) | Computing device and method for simulating point clouds |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130828 Termination date: 20201017 |
|
CF01 | Termination of patent right due to non-payment of annual fee |