CN101520894B - Method for extracting significant object based on region significance - Google Patents


Info

Publication number
CN101520894B
CN101520894B (application CN2009100462762A)
Authority
CN
China
Prior art keywords
image
region
formula
salient
saliency
Prior art date
Legal status
Active
Application number
CN2009100462762A
Other languages
Chinese (zh)
Other versions
CN101520894A (en)
Inventor
韩忠民
刘志
颜红波
李伟伟
张兆杨
Current Assignee
Shanghai Shine Energy Info-Tech Co., Ltd.
State Grid Shanghai Electric Power Co Ltd
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN2009100462762A
Publication of CN101520894A
Application granted
Publication of CN101520894B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting a salient object based on region saliency. First, a scale-invariant saliency image is established by computing the multiresolution contrast features of an input image, and the input image is divided into different regions by a non-parametric kernel density estimation method. Next, the ratio between the region saliency of each region combination and that of its complement is calculated. Finally, the salient object is extracted by taking the maximum of these ratios. The method comprises the following steps: (1) inputting an image and establishing the scale-invariant saliency image; (2) segmenting the input image into regions; and (3) extracting the salient object. By exploiting region saliency, the method can accurately extract not only a single salient object but also multiple salient objects, so that the extracted objects meet the requirements of human vision and the accuracy of segmentation is improved.

Description

Salient object extraction method based on region saliency
Technical field
The present invention relates to a computer image processing method, and more specifically to an image segmentation method.
Background art
Image segmentation is a major problem in image analysis, pattern recognition, and computer vision, and at the same time a difficult one. Its ultimate purpose is to segment out objects with real-world meaning, i.e., semantic objects. Some methods use recognizable high-level information (for example, faces and text) together with image saliency to locate salient objects in an image. However, because some images contain no recognizable high-level information, or such information is difficult to extract automatically, the use of these methods is limited. Image saliency is always available, but by itself it cannot provide enough information for locating the salient object: low-level spatial features do not necessarily match salient objects well. For example, some high-contrast edges between image regions are usually eye-catching and will be identified as salient objects even though they are not. In addition, existing image saliency detection methods cannot identify image features that appear at different image scales. If the problem of scale-invariant saliency is not solved, salient features at some scales are lost and the image segmentation results are unsatisfactory.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the prior art by providing a salient object extraction method based on region saliency. The method can accurately extract multiple salient objects, improve the accuracy of image segmentation, and make the segmentation results satisfy the requirements of human vision.
To solve the above technical problem, the present invention adopts the following technical solution. In the salient object extraction method based on region saliency, first, a scale-invariant saliency image is established by computing the multiresolution contrast features of the input image, and the input image is divided into different regions with a non-parametric kernel density estimation method; then, the ratio between the region saliency of each region combination and that of its complement is calculated; finally, the salient object is extracted by taking the maximum of these ratios. The concrete steps are as follows:
1. Input the image and establish the scale-invariant saliency image:
① Convert the input image to the L*a*b color space;
② Use formula (1) to build the Gaussian image pyramid:

$$I_l(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, I_{l-1}(2i+m,\,2j+n) \qquad (1)$$

where $0<l<N$, $0\le i<C_l$, $0\le j<R_l$; the number of pyramid levels is $N=\log_2(\min(W,H)/10)$, $W$ and $H$ are the width and height of the original image, $C_l$ and $R_l$ are the image dimensions at level $l$, and $w(m,n)$ is a weight;
③ Use formulas (2) and (3) to compute a contrast image at each image scale, thereby building the contrast image pyramid. The contrast value $C_{i,j,l}$ at image scale $l$ is defined as the weighted sum of the differences between pixel $(i,j)$ and its neighborhood pixels, i.e.

$$C_{i,j,l}=\sum_{q\in\Theta} w_{i,j,l}\, D(p_{i,j,l},\,p_q) \qquad (2)$$

$$w_{i,j,l}=1-r_{i,j,l}/r_{l,\max} \qquad (3)$$

where $\Theta$ is the set of neighborhood pixels of pixel $(i,j)$ at scale $l$, $p_{i,j,l}$ is the color of pixel $(i,j)$ at scale $l$, $p_q$ is the color of a pixel in the neighborhood set of $p_{i,j,l}$, and $D$ is the color difference measured by Euclidean distance. The weight factor $w_{i,j,l}$ expresses the fact that the center of an image is usually more salient visually; $r_{i,j,l}$ is the distance from $(i,j)$ to the image center and $r_{l,\max}$ is the maximum distance to the image center;
④ Use formula (4) to enlarge the contrast images at all scales to the same size as the original image, and then sum them to build the saliency image:

$$I_{l,k}(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, I_{l,k-1}\!\left(\frac{i-m}{2},\,\frac{j-n}{2}\right) \qquad (4)$$

where $0<l<N$, $0\le k$, $0\le i<C_{l-k}$, $0\le j<R_{l-k}$, and $I_{l,k}(i,j)$ is the result of expanding $I_l$ $k$ times (each expansion doubling the size).
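As an illustration of step 1, the following is a minimal Python sketch of formulas (1)-(4), assuming an OpenCV/NumPy environment. The function name `saliency_image` is illustrative, and `cv2.pyrDown` is used as a stand-in for the 5×5 Gaussian weights $w(m,n)$ of formula (1); this is a sketch under those assumptions, not the patented implementation.

```python
import numpy as np
import cv2

def saliency_image(bgr):
    """Scale-invariant saliency image, following formulas (1)-(4)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float32)  # L*a*b space
    H, W = lab.shape[:2]
    N = max(1, int(np.log2(min(W, H) / 10)))  # pyramid levels N = log2(min(W,H)/10)

    # Formula (1): Gaussian pyramid; pyrDown applies a 5x5 Gaussian kernel.
    pyramid = [lab]
    for _ in range(1, N):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    saliency = np.zeros((H, W), np.float32)
    for level in pyramid:
        h, w = level.shape[:2]
        # Formula (3): center weight w = 1 - r/r_max.
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot(yy - h / 2.0, xx - w / 2.0)
        weight = 1.0 - r / r.max()

        # Formula (2): weighted sum of Euclidean color differences to the
        # 8-neighbourhood.
        contrast = np.zeros((h, w), np.float32)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(level, (dy, dx), axis=(0, 1))
                contrast += np.linalg.norm(level - shifted, axis=2)
        contrast *= weight

        # Formula (4): expand each contrast map back to the original size,
        # then sum the per-scale maps into one saliency image.
        saliency += cv2.resize(contrast, (W, H), interpolation=cv2.INTER_LINEAR)
    return saliency / saliency.max()
```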
2. Input the image and perform image segmentation
First, using formulas (5) and (6), the self-information of the input image is used to obtain representative seed points, and the image is quantized with these seed points. Then the noise in the quantized map is removed, and formulas (7) and (8) are used to merge low-saliency regions according to region area, realizing the region segmentation of the image. The non-parametric density estimate is

$$f(x)=\frac{1}{U}\sum_{i=1}^{U} K_\sigma(x-x_i) \qquad (5)$$

$$K_\sigma(x)=\frac{1}{2\pi\sigma^2}\, e^{-\|x\|^2/2\sigma^2} \qquad (6)$$

The density value $f(x)$ is the convolution of the feature point set with the kernel function, $U$ is the number of pixels in the image, $K_\sigma(x)$ is the kernel function, $x$ is the feature point to be evaluated, and $\sigma$ is the bandwidth.
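As a small numerical illustration of formulas (5) and (6), the sketch below evaluates the kernel density estimate at a single feature point; the function name and the one-channel usage shown in the comment are assumptions, not part of the patent.

```python
import numpy as np

def kde(x, samples, sigma):
    """f(x) = (1/U) * sum_i K_sigma(x - x_i), Gaussian kernel of formula (6)."""
    U = len(samples)
    d = x - samples                                     # x - x_i for every sample
    k = np.exp(-(d ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return k.sum() / U

# Example: density of gray level 128 over one color channel of an image:
#   channel = lab[..., 0].ravel().astype(np.float32)
#   print(kde(128.0, channel, sigma=4.0))
```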
The saliency index of a region expresses the probability that the region attracts human attention in the image. It is obtained from the area ratio of the region among all regions with the same seed-point color and the ratio of the region's area to the entire image. If a region occupies a small proportion of all same-color regions, its saliency index is low. The saliency index of a region is defined as

$$S(R_i^j)=\left(\frac{N_{R_i^j}}{\sum_{j=1}^{m_i} N_{R_i^j}}\right)\times\left(\frac{N_{R_i^j}}{\sum_{i=1}^{n}\sum_{j=1}^{m_i} N_{R_i^j}}\right) \qquad (7)$$

where $R_i^j$ denotes the $j$-th region of the $i$-th seed-point color, $S(R_i^j)$ is its saliency index, $N_{R_i^j}$ is its number of pixels, $\sum_{j=1}^{m_i} N_{R_i^j}$ is the total number of pixels of all regions of the $i$-th seed-point color, and $\sum_{i=1}^{n}\sum_{j=1}^{m_i} N_{R_i^j}$ is the total number of pixels in the image.
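A direct transcription of formula (7) might look like the sketch below; the `regions` layout, a dict mapping each seed-color index to the pixel counts of its regions, is an assumed data structure.

```python
def region_saliency_index(regions, i, j):
    """S(R_i^j): within-color area ratio times whole-image area ratio."""
    n_ij = regions[i][j]                               # N_{R_i^j}
    same_color = sum(regions[i])                       # all regions of color i
    total = sum(sum(counts) for counts in regions.values())  # all image pixels
    return (n_ij / same_color) * (n_ij / total)

# regions = {0: [5000, 120], 1: [8000]}
# region_saliency_index(regions, 0, 1) is small: a tiny same-color fragment.
```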
After the region saliency indexes have been obtained, some non-salient regions need to be removed. As shown in Fig. 5, if a non-salient region is adjacent to only one region, it is simply merged into that region. If the removed region lies at the junction of several regions, the force between regions must be computed, as shown in formula (8):

$$F_{ij}=k_{ij}\, A(R_i)\, A(R_j) \qquad (8)$$

where $k_{ij}$ is the force factor, $A(R_i)$ and $A(R_j)$ are the areas of the two regions, and $F_{ij}$ is the attraction between regions $i$ and $j$. It can be further divided into two parts, $F_{i\leftarrow j}$ and $F_{j\leftarrow i}$, where $F_{i\leftarrow j}$ is the contribution of the adjacent region $j$ to the force on region $i$, and conversely $F_{j\leftarrow i}$ is the contribution of the adjacent region $i$ to the force on region $j$. The forces in the two directions differ because $k_{ij}$ differs.
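A sketch of the attraction of formula (8) follows. The patent does not spell out the force factor $k_{ij}$; the color-similarity weight used here is an assumption, consistent with the later remark that a large but differently colored region cannot produce enough attraction.

```python
import numpy as np

def attraction(area_i, area_j, color_i, color_j, tau=50.0):
    """F_ij = k_ij * A(R_i) * A(R_j); k_ij is an assumed color-similarity weight."""
    diff = np.linalg.norm(np.asarray(color_i, float) - np.asarray(color_j, float))
    k_ij = np.exp(-diff / tau)                 # similar colors -> larger k_ij
    return k_ij * area_i * area_j

# A removed region is merged into the neighbour exerting the largest attraction.
```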
A salient object may be a single region in the region segmentation map or a set of several regions. Let E be the set of all regions in the image; if the image contains V regions, then the number of candidate potential objects in E is $2^V$. However, not every subset can be a salient object: a salient object is normally a whole, an entire image is not a salient object, and a salient object cannot be empty. Suppose $C \subseteq E$ is a salient object; it must satisfy the following three conditions:
① $C \neq \{\}$;
② $C \neq \{1,2,3,\dots,V\}$;
③ If C contains more than one region, these regions must be connected to each other, since the extracted object must be a whole.
Given C, its complement $\bar{C}$ is also obtained. The regions in the complement must be adjacent to corresponding regions in C.
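The sketch below enumerates the candidate region combinations satisfying conditions ①-③; the region adjacency dict `adj` is an assumed input that would come from the segmentation map.

```python
from itertools import combinations

def connected(subset, adj):
    """True if the regions in `subset` form one connected component."""
    subset = set(subset)
    seen, stack = set(), [next(iter(subset))]
    while stack:
        r = stack.pop()
        if r in seen:
            continue
        seen.add(r)
        stack.extend(q for q in adj[r] if q in subset)
    return seen == subset

def candidates(regions, adj):
    """Non-empty, proper, connected subsets C of the region set E."""
    regions = list(regions)
    for size in range(1, len(regions)):        # excludes {} and the full set
        for c in combinations(regions, size):
            if connected(c, adj):
                yield set(c)

# adj = {1: {2}, 2: {1, 3}, 3: {2}}  ->  {1, 3} is rejected as disconnected.
```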
3. Salient object extraction
After the region combinations and their complements have been obtained, formulas (9), (10), and (11) are used to calculate their saliency values and the ratio between them. When the difference between an object region and its surroundings is large, the object can be considered salient, so the problem of extracting a salient object reduces to finding an effective region combination $C_i$ that maximizes $T_{div}(C)$.
The saliency values of a region combination and of its complement are

$$T(C)=S(C)/A(C) \qquad (9)$$

$$T(\bar{C})=S(\bar{C})/A(\bar{C}) \qquad (10)$$

where $S(C)$ and $S(\bar{C})$ are the sums of the saliency values of all pixels in $C$ and $\bar{C}$, and $A(C)$ and $A(\bar{C})$ are the areas of $C$ and $\bar{C}$. The ratio of $T(C)$ to $T(\bar{C})$ is

$$T_{div}(C)=T(C)/T(\bar{C}) \qquad (11)$$

The problem of extracting a salient object therefore reduces to finding an effective region combination $C_i$ that maximizes $T_{div}(C)$, i.e.

$$C_i=\arg\max_C \big[\,T_{div}(C)\,\big] \qquad (12)$$
The salient object extraction steps are as follows (a sketch of this loop is given after the stop conditions):
(1) Calculate the saliency values $T(C)$ and $T(\bar{C})$ of each region combination and its complement, and their ratio $T_{div}(C)$. Find the region combination $C_i$ that maximizes $T_{div}(C)$;
(2) If no stop condition is satisfied, extract $C_i$ as a salient object, then delete $C_i$ and go to step (1); otherwise go to step (3);
(3) If any stop condition is satisfied, terminate.
The stop conditions are:
① $A(C)<\lambda A$, where $A$ is the area of the original image;
② $cpt(C)<\mu$, where

$$cpt(C)=A(C)/(H(C)\times W(C)) \qquad (13)$$

is the compactness of region combination C, and $H(C)$ and $W(C)$ are the height and width of C;
③ only one region remains.
Otherwise, go to step (2).
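Putting formulas (9)-(13) together, the loop below is a sketch of the extraction procedure, reusing `candidates` from the earlier sketch. The per-region saliency sums `S`, areas `A`, the bounding-box helper `bbox`, and the image area `A_img` are assumed inputs from steps 1 and 2; λ and μ follow the values 0.01 and 0.2 given in the embodiment.

```python
def extract_salient_objects(regions, adj, S, A, bbox, A_img, lam=0.01, mu=0.2):
    """Iteratively extract region combinations maximizing T_div(C)."""
    objects = []
    remaining = set(regions)
    while len(remaining) > 1:                  # stop condition ③: one region left
        best, best_ratio = None, -1.0
        for C in candidates(remaining, adj):
            comp = remaining - C
            T_C = sum(S[r] for r in C) / sum(A[r] for r in C)            # (9)
            T_comp = sum(S[r] for r in comp) / sum(A[r] for r in comp)   # (10)
            ratio = T_C / T_comp                                         # (11)
            if ratio > best_ratio:
                best, best_ratio = C, ratio                              # (12)
        h, w = bbox(best)                      # bounding-box height and width of C
        area_C = sum(A[r] for r in best)
        if area_C < lam * A_img or area_C / (h * w) < mu:                # ①, (13)
            break
        objects.append(best)                   # extract C_i as a salient object
        remaining -= best                      # delete C_i and repeat
    return objects
```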
Compared with the prior art, the salient object extraction method based on region saliency of the present invention has the following advantages: by combining region saliency, the method can accurately extract not only a single salient object but also multiple salient objects, so that the extracted objects satisfy the requirements of human vision, and the accuracy of segmentation is improved.
Description of drawings
Fig. 1 is the flowchart of the salient object extraction based on region saliency of the present invention;
Fig. 2 is the flowchart of image segmentation;
Fig. 3 is the region map after quantization;
Fig. 4 is the map after small regions are eliminated;
Fig. 5 is the region map after merging;
Fig. 6 is the flowchart of region merging;
Fig. 7 illustrates the extraction on an image and shows the results of each step;
Fig. 8 shows extraction results for other types of images.
Embodiment
The embodiment of the salient object extraction method based on region saliency of the present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is the flowchart of the present invention. First, the scale-invariant saliency image is obtained by building the Gaussian image pyramid and the contrast image pyramid of the input image, and the input image is divided into different regions with the non-parametric kernel density estimation method; then the ratio between the region saliency of each region combination and that of its complement is calculated; finally the salient object is extracted by taking the maximum of this ratio. The concrete implementation process is as follows:
1. Input the image and establish the scale-invariant saliency image:
① Convert the input image to the L*a*b color space;
② Use formula (1) to build the Gaussian image pyramid;
③ Use formulas (2) and (3) to compute a contrast image at each image scale, thereby building the contrast image pyramid;
④ Use formula (4) to enlarge the contrast images at all scales to the same size as the original image, then sum them to build the saliency image.
2. Input the image and perform image segmentation
As shown in Fig. 2, first, formulas (5) and (6) are applied to the input original image to obtain representative seed points, and the image is quantized with these seed points; then the noise in the quantized map is removed, and formulas (7) and (8) are used to merge low-saliency regions according to region area, realizing the region segmentation of the image.
(1) Image quantization
As shown in Fig. 2, first, the input original image is converted to the L*a*b color space and the pixel counts of the three color channels are accumulated. Then formulas (5) and (6) are used to compute the kernel density estimate of these statistics, giving the gray-level probability density of each of the three channels. The gradient ascent method is then used to find the local maxima of the three channels, and the seed points are obtained by taking all combinations of these maxima. Finally, the color distance between every pixel value in the image and each seed point is computed, and each pixel is replaced by its closest seed point, completing the quantization of the image.
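The quantization just described can be sketched as follows, approximating the per-channel kernel density estimate by Gaussian smoothing of the gray-level histogram; the use of SciPy's `gaussian_filter1d` and all names here are assumptions for illustration.

```python
import numpy as np
from itertools import product
from scipy.ndimage import gaussian_filter1d

def channel_modes(channel, sigma=4.0):
    """Local maxima of the smoothed gray-level density of one channel."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    density = gaussian_filter1d(hist, sigma)   # smoothed density, cf. (5)-(6)
    return [g for g in range(1, 255)
            if density[g] > density[g - 1] and density[g] >= density[g + 1]]

def quantize(lab):
    """Replace every pixel by the nearest seed-point color."""
    # Seed points: all combinations of the per-channel local maxima.
    seeds = np.array(list(product(*[channel_modes(lab[..., c])
                                    for c in range(3)])), float)
    flat = lab.reshape(-1, 3).astype(float)
    dist = ((flat[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    return seeds[np.argmin(dist, axis=1)].reshape(lab.shape).astype(np.uint8)
```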
Many scattered noise regions appear in the quantized map, as shown in Fig. 3. According to region saliency, these scattered small regions do not satisfy the saliency and completeness that a salient object should have, so they must be eliminated. Region area is used for the decision: if the area of a region is smaller than a critical value, it is a region to be eliminated. Empirically, the critical value is set to 0.5% of the total area. A region whose area is below this value is eliminated, i.e., merged into a neighboring region, as shown in Fig. 4. Through the above steps, the small noise regions in the image are eliminated. However, the segmented regions still do not coincide exactly with the salient regions, so region merging is needed next to make the salient regions more complete.
(2) Region merging
The region merging flow is shown in Fig. 6. After the small noise regions in the quantized image have been removed, regions with the same seed-point color value often appear in several places of the image. Visually, the human eye rarely pays the same attention to all similarly colored regions scattered over an image, so this must be handled. As shown in Fig. 4, the image contains regions 1, 2, 3, 4, and 5, among which regions 3 and 5 have the same color and both appear in the image; since region 3 is clearly less attractive to the human eye, it is eliminated. The region saliency index is used below to decide whether each region is eliminated.
The saliency index of a region expresses the probability that the region attracts human attention in the image. It is obtained from the area ratio of the region among all regions with the same seed-point color and the ratio of the region's area to the entire image. If a region occupies a small proportion of all same-color regions, its saliency index is low; the saliency index is defined in formula (7).
An adjustable critical value is used to remove regions with a small saliency index. The critical value is set to the maximum saliency index multiplied by a certain ratio, chosen here as 0.6%.
After the region saliency indexes have been obtained, some non-salient regions need to be removed. As shown in Fig. 5, if a non-salient region is adjacent to only one region, it is merged into that region. If the removed region lies at the junction of several regions, the force between regions must be computed, as shown in formula (8).
When computing the attraction between regions, the larger the area of an adjacent region, the larger the attraction, and vice versa. If a region's area is large but its color differs strongly, it cannot produce enough attraction. A region is merged into whichever neighbor exerts the largest attraction. The computation is repeated until the saliency indexes of all regions are greater than the critical value, yielding the final region segmentation map of the image.
A salient object may be a single region in the region segmentation map or a set of several regions. Let E be the set of all regions in the image; if the image contains V regions, the number of candidate potential objects in E is $2^V$. For V = 3, E has 8 subsets:
{},{1},{1,2},{1,2,3},{1,3},{2},{2,3},{3}
However, not every subset can be a salient object: a salient object is normally a whole, an entire image is not a salient object, and a salient object cannot be empty. Suppose $C \subseteq E$ is a salient object; it must satisfy the following three conditions:
① $C \neq \{\}$;
② $C \neq \{1,2,3,\dots,V\}$;
③ If C contains more than one region, these regions must be connected to each other, since the extracted object must be a whole.
Given C, its complement $\bar{C}$ is also obtained. The regions in the complement must be adjacent to corresponding regions in C.
3. Salient object extraction
After the region combinations and their complements have been obtained, formulas (9), (10), and (11) are used to calculate their region saliency values and ratios. When the difference between an object region and its surroundings is large, the object can be considered salient, so the problem of extracting a salient object reduces to finding an effective region combination $C_i$ that maximizes $T_{div}(C)$.
The salient object extraction steps are as follows:
(1) Calculate the saliency values $T(C)$ and $T(\bar{C})$ of each region combination and its complement, and their ratio $T_{div}(C)$. Find the region combination $C_i$ that maximizes $T_{div}(C)$;
(2) If no stop condition is satisfied, extract $C_i$ as a salient object, then delete $C_i$ and go to step (1); otherwise go to step (3);
(3) If any stop condition is satisfied, terminate.
The stop conditions are: ① $A(C)<\lambda A$; ② $cpt(C)<\mu$; ③ only one region remains.
Otherwise, go to step (2).
λ and μ are set to 0.01 and 0.2, respectively.
The present invention was tested in simulation experiments, shown in Fig. 7 and Fig. 8, implemented on a PC test platform with a 2.0 GHz CPU and 512 MB of memory. The experimental results in Fig. 7 cover the key steps of the algorithm: (a) is the original image, in which a red flower is the salient object; (b)-(f) are the results of step 1, where (b)-(e) are the saliency images at four different image scales from fine to coarse. On the finer image scale (b), the outlines of the red flower and of some green leaves are visible, while on the coarser scale (e) essentially only the red flower remains; (f) is the scale-invariant saliency image. (g) is the region segmentation map after merging, the result of step 2, in which the red flower, the green leaves, and so on are divided into different regions. (h) is the final salient object extraction map, the result of step 3, in which the red flower is accurately extracted as the salient object.
Fig. 8 shows experimental results on other image types to illustrate the practicality and accuracy of the algorithm. The figure contains three animal images, one car image, one landscape image, and one flower image, arranged in six rows and four columns. The first column shows the original images; the second, third, and fourth columns show the first, second, and third salient objects extracted from the original images in the first column, respectively.
The above results show that the present invention can accurately extract not only a single salient object but also multiple salient objects, while satisfying the vision requirements of the human eye.

Claims (1)

1. A salient object extraction method based on region saliency, characterized in that: first, a scale-invariant saliency image is established by computing the multiresolution contrast features of the input image, and the input image is divided into different regions with a non-parametric kernel density estimation method; then, the ratio between the region saliency of each region combination and that of its complement is calculated; finally, the salient object is extracted by taking the maximum of these ratios; the concrete steps are as follows:
(1) input the image and establish the scale-invariant saliency image;
(2) input the image and perform image segmentation;
(3) extract the salient object;
the above step (1) of inputting the image and establishing the scale-invariant saliency image comprises:
① converting the input image to the L*a*b color space;
② using formula (1) to build the Gaussian image pyramid:

$$I_l(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, I_{l-1}(2i+m,\,2j+n) \qquad (1)$$

where $0<l<N$, $0\le i<C_l$, $0\le j<R_l$, the number of pyramid levels is $N=\log_2(\min(W,H)/10)$, $W$ and $H$ are the width and height of the original image, $C_l$ and $R_l$ are the image dimensions at level $l$, and $w(m,n)$ is a weight;
③ using formulas (2) and (3) to compute a contrast image at each image scale, thereby building the contrast image pyramid, where the contrast value $C_{i,j,l}$ at image scale $l$ is defined as the weighted sum of the differences between pixel $(i,j)$ and its neighborhood pixels, i.e.

$$C_{i,j,l}=\sum_{q\in\Theta} w_{i,j,l}\, D(p_{i,j,l},\,p_q) \qquad (2)$$

$$w_{i,j,l}=1-r_{i,j,l}/r_{l,\max} \qquad (3)$$

where $\Theta$ is the set of neighborhood pixels of pixel $(i,j)$ at scale $l$, $p_{i,j,l}$ is the color of pixel $(i,j)$ at scale $l$, $p_q$ is the color of a pixel in the neighborhood set of $p_{i,j,l}$, $D$ is the color difference measured by Euclidean distance, the weight factor $w_{i,j,l}$ expresses the fact that the center of an image is usually more salient visually, $r_{i,j,l}$ is the distance from $(i,j)$ to the image center, and $r_{l,\max}$ is the maximum distance to the image center;
④ using formula (4) to enlarge the contrast images at all scales to the same size as the original image, and then summing them to build the saliency image:

$$I_{l,k}(i,j)=\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, I_{l,k-1}\!\left(\frac{i-m}{2},\,\frac{j-n}{2}\right) \qquad (4)$$

where $0<l<N$, $0\le k$, $0\le i<C_{l-k}$, $0\le j<R_{l-k}$, and $I_{l,k}(i,j)$ is the result of expanding $I_l$ $k$ times.
The above step (2) of inputting the image and performing image segmentation comprises: first, applying formulas (5) and (6) to the input image, using the image self-information to obtain representative seed points, and quantizing the image with the seed points; then removing the noise in the quantized map and using formulas (7) and (8) to merge low-saliency regions according to region area, realizing the region segmentation of the image; the non-parametric density estimate is

$$f(x)=\frac{1}{U}\sum_{i=1}^{U} K_\sigma(x-x_i) \qquad (5)$$

$$K_\sigma(x)=\frac{1}{2\pi\sigma^2}\, e^{-\|x\|^2/2\sigma^2} \qquad (6)$$

the density value $f(x)$ is the convolution of the feature point set with the kernel function, $U$ is the number of pixels in the image, $K_\sigma(x)$ is the kernel function, $x$ is the feature point to be evaluated, and $\sigma$ is the bandwidth; the saliency index of a region is

$$S(R_i^j)=\left(\frac{N_{R_i^j}}{\sum_{j=1}^{m_i} N_{R_i^j}}\right)\times\left(\frac{N_{R_i^j}}{\sum_{i=1}^{n}\sum_{j=1}^{m_i} N_{R_i^j}}\right) \qquad (7)$$

where $R_i^j$ denotes the $j$-th region of the $i$-th seed-point color, $S(R_i^j)$ is its saliency index, $N_{R_i^j}$ is its number of pixels, $\sum_{j=1}^{m_i} N_{R_i^j}$ is the total number of pixels of all regions of the $i$-th seed-point color, and $\sum_{i=1}^{n}\sum_{j=1}^{m_i} N_{R_i^j}$ is the total number of pixels in the image; when a removed region lies at the junction of several regions, the force between regions is computed as shown in formula (8):

$$F_{ij}=k_{ij}\, A(R_i)\, A(R_j) \qquad (8)$$

where $k_{ij}$ is the force factor, $A(R_i)$ and $A(R_j)$ are the areas of the two regions, and $F_{ij}$ is the attraction between regions $i$ and $j$; it is further divided into two parts, $F_{i\leftarrow j}$ and $F_{j\leftarrow i}$, where $F_{i\leftarrow j}$ is the contribution of the adjacent region $j$ to the force on region $i$ and conversely $F_{j\leftarrow i}$ is the contribution of the adjacent region $i$ to the force on region $j$; the forces in the two directions differ because $k_{ij}$ differs; the computation is repeated until the saliency indexes of all regions are greater than the critical value, yielding the final region segmentation map of the image.
The above step (3) of salient object extraction comprises: after the region combinations and their complements have been obtained, using formulas (9), (10), and (11) to calculate their saliency values and ratio; the problem of extracting a salient object reduces to finding an effective region combination $C_i$ that maximizes $T_{div}(C)$; the saliency values of a region combination and of its complement are

$$T(C)=S(C)/A(C) \qquad (9)$$

$$T(\bar{C})=S(\bar{C})/A(\bar{C}) \qquad (10)$$

where $S(C)$ and $S(\bar{C})$ are the sums of the saliency values of all pixels in $C$ and $\bar{C}$, and $A(C)$ and $A(\bar{C})$ are the areas of $C$ and $\bar{C}$; the ratio of $T(C)$ to $T(\bar{C})$ is

$$T_{div}(C)=T(C)/T(\bar{C}) \qquad (11)$$

so the problem of extracting a salient object reduces to finding an effective region combination $C_i$ that maximizes $T_{div}(C)$, i.e.

$$C_i=\arg\max_C \big[\,T_{div}(C)\,\big] \qquad (12)$$

the salient object extraction steps are as follows:
(1) calculate the saliency values $T(C)$ and $T(\bar{C})$ of each region combination and its complement, and their ratio $T_{div}(C)$; find the region combination $C_i$ that maximizes $T_{div}(C)$;
(2) if no stop condition is satisfied, extract $C_i$ as a salient object, then delete $C_i$ and go to step (1); otherwise go to step (3);
(3) if any stop condition is satisfied, stop extracting; the stop conditions are:
① $A(C)<\lambda A$, where $A$ is the area of the original image;
② $cpt(C)<\mu$, where

$$cpt(C)=A(C)/(H(C)\times W(C)) \qquad (13)$$

is the compactness of region combination C, and $H(C)$ and $W(C)$ are the height and width of C;
③ only one region remains; otherwise, go to step (2).
CN2009100462762A 2009-02-18 2009-02-18 Method for extracting significant object based on region significance Active CN101520894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100462762A CN101520894B (en) 2009-02-18 2009-02-18 Method for extracting significant object based on region significance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100462762A CN101520894B (en) 2009-02-18 2009-02-18 Method for extracting significant object based on region significance

Publications (2)

Publication Number Publication Date
CN101520894A CN101520894A (en) 2009-09-02
CN101520894B (en) 2011-03-30

Family

ID=41081467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100462762A Active CN101520894B (en) 2009-02-18 2009-02-18 Method for extracting significant object based on region significance

Country Status (1)

Country Link
CN (1) CN101520894B (en)


Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102087741B (en) * 2009-12-03 2013-01-02 财团法人工业技术研究院 Method and system for processing image by using regional architecture
CN101866484B (en) * 2010-06-08 2012-07-04 华中科技大学 Method for computing significance degree of pixels in image
CN102129693B (en) * 2011-03-15 2012-07-25 清华大学 Image vision significance calculation method based on color histogram and global contrast
CN102779338B (en) 2011-05-13 2017-05-17 欧姆龙株式会社 Image processing method and image processing device
CN103093415A (en) * 2013-01-31 2013-05-08 哈尔滨工业大学 Image prominence computing method based on coordination representation
CN104658004B (en) * 2013-11-20 2018-05-15 南京中观软件技术有限公司 A kind of air refuelling auxiliary marching method based on video image
CN104867094B (en) * 2014-02-20 2018-11-13 联想(北京)有限公司 A kind of method and electronic equipment of image procossing
CN103984944B (en) * 2014-03-06 2017-08-22 北京播点文化传媒有限公司 The method and apparatus that target object in one group of image is extracted and continuously played
CN103927526B (en) * 2014-04-30 2017-02-15 长安大学 Vehicle detecting method based on Gauss difference multi-scale edge fusion
CN104484418B (en) * 2014-12-17 2017-10-31 中国科学技术大学 A kind of characteristic quantification method and system based on dual resolution design
CN104537681A (en) * 2015-01-21 2015-04-22 北京联合大学 Method and system for extracting spectrum-separated visual salient region
CN105608673B (en) * 2015-12-16 2020-09-25 清华大学 Image color quantization and dithering method and system
CN106056579A (en) * 2016-05-20 2016-10-26 南京邮电大学 Saliency detection method based on background contrast
CN106204551A (en) * 2016-06-30 2016-12-07 北京奇艺世纪科技有限公司 A kind of image significance detection method and device
CN107016682B (en) * 2017-04-11 2020-03-31 四川大学 Self-adaptive segmentation method for salient objects of natural images
CN109118459B (en) 2017-06-23 2022-07-19 南开大学 Image salient object detection method and device
US10679351B2 (en) * 2017-08-18 2020-06-09 Samsung Electronics Co., Ltd. System and method for semantic segmentation of images
CN109190473A (en) * 2018-07-29 2019-01-11 国网上海市电力公司 The application of a kind of " machine vision understanding " in remote monitoriong of electric power
CN110251076B (en) * 2019-06-21 2021-10-22 安徽大学 Method and device for detecting significance based on contrast and fusing visual attention
CN110853120B (en) * 2019-10-09 2023-05-19 上海交通大学 Network layout method, system and medium based on segmentation drawing method
CN111783878B (en) 2020-06-29 2023-08-04 北京百度网讯科技有限公司 Target detection method, target detection device, electronic equipment and readable storage medium
CN111797226B (en) 2020-06-30 2024-04-05 北京百度网讯科技有限公司 Conference summary generation method and device, electronic equipment and readable storage medium


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1700238A (en) * 2005-06-23 2005-11-23 复旦大学 Method for dividing human body skin area from color digital images and video graphs
CN1916906A (en) * 2006-09-08 2007-02-21 北京工业大学 Image retrieval algorithm based on abrupt change of information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123720A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image repositioning method, device and terminal
US9665925B2 (en) 2014-06-24 2017-05-30 Xiaomi Inc. Method and terminal device for retargeting images

Also Published As

Publication number Publication date
CN101520894A (en) 2009-09-02


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI SHINE ENERGY INFORMATION TECHNOLOGY DEVEL

Effective date: 20141011

Owner name: STATE GRID SHANGHAI ELECTRIC POWER COMPANY

Free format text: FORMER OWNER: SHANGHAI UNIVERSITY

Effective date: 20141011

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200444 BAOSHAN, SHANGHAI TO: 200122 PUDONG NEW AREA, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20141011

Address after: 200122 No. 1671 South Pudong Road, Shanghai, Pudong New Area

Patentee after: State Grid Shanghai Municipal Electric Power Company

Patentee after: Shanghai Shine Energy Info-Tech Co., Ltd.

Address before: 200444 Baoshan District Road, Shanghai, No. 99

Patentee before: Shanghai University