CN106056592B - A visual saliency detection method based on sparse representation - Google Patents

A visual saliency detection method based on sparse representation

Info

Publication number
CN106056592B
CN106056592B
Authority
CN
China
Prior art keywords
atom
dictionary
image block
saliency
Dnew
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610356541.7A
Other languages
Chinese (zh)
Other versions
CN106056592A (en)
Inventor
王鑫
沈思秋
张春燕
周韵
吕国芳
徐玲玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youjing (Suzhou) Digital Technology Co.,Ltd.
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University (HHU)
Priority to CN201610356541.7A
Publication of CN106056592A
Application granted
Publication of CN106056592B
Legal status: Active

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The invention discloses a visual saliency detection method based on sparse representation. First, through discriminative dictionary learning and analysis, the atoms in the dictionary that contain salient features and those that do not are classified, and multiple discriminative dictionaries are constructed. Second, by analyzing the sparse coding coefficients of each image block under the discriminative dictionaries, the image blocks are classified into foreground image blocks containing visually salient features and background image blocks without salient features. Then, by analyzing the sparse reconstruction error of each image block, the atoms that reconstruct foreground image blocks strongly but background image blocks weakly are removed from the discriminative dictionaries. Finally, the saliency map is computed. The invention realizes visual saliency detection by deeply mining the sparse representation model itself, and therefore obtains more accurate results than traditional saliency detection methods based on sparse representation.

Description

A visual saliency detection method based on sparse representation
Technical field
The present invention relates to a method for accurately detecting salient targets in a scene based on an in-depth analysis of the characteristics of the sparse representation model, and belongs to the technical field of computer vision.
Background art
Visual saliency detection is widely used in fields such as target detection, target recognition, and image inpainting. It is an important research topic in pattern recognition and computer vision, and its study has significant theoretical and practical value. With the continuous development of sparse representation theory, many saliency detection methods based on sparse representation have been proposed. However, most of these methods do not fully exploit features such as the over-complete dictionary, the sparse coding coefficients, and the sparse reconstruction error, and therefore cannot achieve ideal saliency detection results.
Most current saliency detection methods based on sparse representation use the image blocks surrounding a center block, within a local or global range, as the dictionary for sparsely reconstructing the center block, and then judge the saliency of the center block relative to its surroundings from the sparse representation error. However, these methods judge the saliency of the center block simply from its reconstruction error with respect to the surrounding blocks, without deeply mining the features contained in the sparse representation model itself.
Publication No. CN104392463A discloses an image salient region detection method based on joint sparse multi-scale fusion. First, dictionaries are trained on a training image set at different scales. Then, for each pixel of the test image an image block is taken, and the sparse representation coefficients of the block under each scale are solved jointly. Next, saliency is computed using the sparse representation coefficients as features. Finally, the saliency results at multiple scales are fused to obtain the final saliency map.
Zhu et al. published the paper "Ensemble dictionary learning for saliency detection" in Image and Vision Computing in 2014. This method detects salient regions of an image by analyzing and mining the features of the over-complete dictionary.
Xia et al. published the paper "Nonlocal center-surround reconstruction-based bottom-up saliency estimation" in Pattern Recognition in 2015. This method uses the multiple image blocks surrounding a test block as an over-complete dictionary to sparsely reconstruct the center block, and then judges the saliency of the test block from the sparse reconstruction error.
In short, the limitations of the existing saliency detection methods based on sparse representation are mainly as follows: most of them use the image blocks surrounding a center block, within a local or global range, as the dictionary for sparsely reconstructing the center block, and then judge the saliency of the center block relative to its surroundings from the sparse representation error. They do not deeply mine the features contained in the over-complete dictionary and the sparse coding coefficients themselves, and therefore cannot achieve ideal saliency detection results.
Summary of the invention
Object of the invention: In view of the problems in the prior art, the present invention provides a visual saliency detection method based on sparse representation. The method achieves accurate saliency detection by deeply mining the features of the over-complete dictionary, the sparse coding coefficients, and the sparse reconstruction error in the sparse representation model.
Technical solution: A visual saliency detection method based on sparse representation includes the following steps:
(1) Step 1: Through discriminative dictionary learning and analysis, classify the atoms in the dictionary into those containing salient features and those without salient features, thereby constructing multiple discriminative dictionaries.
First, multi-dictionary learning based on random sampling is performed. In each round of random sampling, a window of size m × n, centered at an arbitrary pixel position inside the image region, samples the test image at random, yielding a group of small image blocks S = [s_1, …, s_K], where s_i ∈ R^{m×n} denotes the i-th image block and K is the total number of blocks. Then the feature of each block is extracted. In visual attention models, the ICOPP (intensity color opponent) feature expresses an image well, so the feature of each block is extracted with the ICOPP model here, yielding the feature matrix F = [f_1, …, f_K] ∈ R^{p×K}, where f_i is the feature column vector of block s_i. Next, each column of the feature matrix is normalized to obtain an over-complete dictionary D ∈ R^{p×K} (K > p). Each atom d_i of D is a unit vector of modulus 1; the normalization is

d_i = f_i / ||f_i||_2, i = 1, …, K.
Finally, the above steps are repeated N_s times to obtain N_s over-complete dictionaries, denoted D_i, i = 1, …, N_s, where each dictionary is trained from a group of image blocks obtained by independent random sampling. The larger N_s is, the better the detection result of the algorithm, but the larger the computational cost. Balancing performance and efficiency, N_s = 4 is taken.
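As a concrete illustration, the following Python sketch builds one such over-complete dictionary from randomly sampled blocks. It is an assumption for illustration only (the patent specifies no implementation language), and the grayscale patch intensities stand in for the ICOPP feature; the names learn_dictionary, patch_size, and rng are hypothetical.

    import numpy as np

    def learn_dictionary(image, patch_size=(8, 8), K=256, rng=None):
        # Randomly sample K blocks, take one feature column per block,
        # and L2-normalize every column so each atom has modulus 1.
        rng = np.random.default_rng() if rng is None else rng
        m, n = patch_size
        H, W = image.shape[:2]
        feats = []
        for _ in range(K):
            r = rng.integers(0, H - m + 1)
            c = rng.integers(0, W - n + 1)
            block = image[r:r + m, c:c + n]
            feats.append(block.reshape(-1))        # stand-in for the ICOPP feature
        F = np.stack(feats, axis=1).astype(float)  # feature matrix F, p x K
        return F / (np.linalg.norm(F, axis=0, keepdims=True) + 1e-12)

    # N_s = 4 independently sampled dictionaries, as chosen in the patent:
    # dictionaries = [learn_dictionary(img, rng=np.random.default_rng(s)) for s in range(4)]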
Next, dictionary atom classification based on atom inner products is carried out. First, for each atom of dictionary D, the sum of its inner products with all other atoms is computed:

ρ_i = Σ_{j=1, j≠i}^{K} <d_i, d_j>,

where ρ_i denotes the inner-product sum of the i-th atom d_i relative to all the other atoms and <·,·> denotes the inner product operation. For dictionary D, the above operation yields an inner-product sum set P = [ρ_1, …, ρ_K]. Second, since the number of saliency atoms in the dictionary is far smaller than the number of non-saliency atoms, the inner-product sums of saliency atoms will be significantly smaller than those of non-saliency atoms. Based on this assumption, the saliency atoms and non-saliency atoms in the dictionary can be distinguished by the following method:
the K_S atoms with the smallest inner-product sums in P are selected as saliency atoms, where the number of saliency atoms selected is set to K_S = K × salAtomRatio, and salAtomRatio ∈ [0,1] denotes the percentage of the dictionary atoms chosen as saliency atoms. Since the quantity of saliency atoms is significantly lower than that of non-saliency atoms, and to ensure that K_S is an integer, salAtomRatio = 0.15 is set. Third, the saliency atoms and the non-saliency atoms are stored in the saliency atom set salAtomSet and the non-saliency atom set nonsalAtomSet, respectively. Finally, a new dictionary Dnew can be obtained:
Dnew = [salAtomSet, nonsalAtomSet] ∈ R^{p×K}
Since each atom of Dnew carries a label, Dnew has discriminative ability. Therefore, by performing the above inner-product analysis on the dictionaries D_1, …, D_{N_S}, N_S discriminative dictionaries Dnew_1, …, Dnew_{N_S} are finally obtained.
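A minimal sketch of this inner-product classification is given below, under the assumption that the inner-product sums are compared as plain signed sums (the patent does not state whether absolute values are used); salAtomRatio = 0.15 follows the patent, while the function name split_atoms is hypothetical.

    import numpy as np

    def split_atoms(D, sal_atom_ratio=0.15):
        K = D.shape[1]
        gram = D.T @ D                          # all pairwise inner products <d_i, d_j>
        rho = gram.sum(axis=1) - np.diag(gram)  # exclude each atom's product with itself
        K_s = int(K * sal_atom_ratio)
        order = np.argsort(rho)                 # ascending: saliency atoms come first
        sal_idx, nonsal_idx = order[:K_s], order[K_s:]
        # Dnew stores the saliency atoms first, then the non-saliency atoms
        D_new = np.concatenate([D[:, sal_idx], D[:, nonsal_idx]], axis=1)
        return D_new, K_s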
(2) Step 2: By analyzing the sparse coding coefficients of each image block under the discriminative dictionary, classify each image block into its category, distinguishing the foreground image blocks containing visually salient features from the background image blocks without salient features.
First, all image blocks of the test image are sparsely coded. The input image is divided into N non-overlapping image blocks Y = [y_1, …, y_N], where y_i ∈ R^{m×n} is the i-th block. Second, the ICOPP feature of each block is extracted, yielding the feature matrix G = [g_1, …, g_N] ∈ R^{p×N}, where g_i is the feature vector of block y_i. Third, the sparse coding coefficients X = [x_1, …, x_N] ∈ R^{K×N} of G under the discriminative dictionary Dnew are solved with the OMP (orthogonal matching pursuit) algorithm, where x_i ∈ R^{K×1} is the sparse coding coefficient of the i-th block.
Then saliency/non-saliency discrimination based on sparse coding coefficient analysis is carried out. First, the sparse coding coefficient x_i of each image block is divided into two parts, salCoef_i and nonsalCoef_i, where salCoef_i denotes the sum of the coding coefficients belonging to the saliency atom set and nonsalCoef_i denotes the sum of the coding coefficients belonging to the non-saliency atom set:

salCoef_i = Σ_{j=1}^{K_S} x_i(j), nonsalCoef_i = Σ_{j=K_S+1}^{K} x_i(j),

where x_i(j) is the j-th element of x_i. Then, for each image block y_i, the saliency coefficient sum salCoef_i is compared with the non-saliency coefficient sum nonsalCoef_i. If salCoef_i > nonsalCoef_i, y_i is judged to be a saliency image block; otherwise, y_i is judged to be a non-saliency image block. Finally, all saliency image blocks together with their feature vectors and sparse coding coefficients are stored in the sets Ysal, Gsal, and Xsal, respectively; meanwhile, all non-saliency image blocks together with their feature vectors and sparse coding coefficients are stored in the sets Ynonsal, Gnonsal, and Xnonsal, respectively.
(3) Step 3: By analyzing the sparse reconstruction error of each image block, remove from the discriminative dictionary the atoms that have strong reconstruction ability for foreground image blocks but weak reconstruction ability for background image blocks.
First, the sparse reconstruction errors of the saliency and non-saliency image blocks are computed. For each discriminative dictionary Dnew, the atoms in the saliency atom set salAtomSet are removed one at a time, and the sparse reconstruction errors of the saliency image block set Ysal and the non-saliency image block set Ynonsal under the dictionary with the current atom removed are calculated. Let Dnew_{:j} denote the submatrix of Dnew after removing the j-th column, where 1 ≤ j ≤ K_S. The sparse reconstruction errors of Ysal and Ynonsal under Dnew_{:j} are computed as

Errsal_j = ||Gsal − Dnew_{:j} · Xsal_{:j}||_F^2, Errnonsal_j = ||Gnonsal − Dnew_{:j} · Xnonsal_{:j}||_F^2,
where Xsal_{:j} and Xnonsal_{:j} denote the submatrices of Xsal and Xnonsal after removing the j-th row. Then the weighted difference of Errsal_j and Errnonsal_j is calculated:
Diff_j = α × Errsal_j − (1 − α) × Errnonsal_j,
where the parameter 0 ≤ α ≤ 1 controls the weights of Errsal_j and Errnonsal_j.
Next, dictionary atom removal based on the sparse reconstruction error analysis is carried out. First, each weighted difference Diff_j (1 ≤ j ≤ K_S) is stored in the weighted difference set Diff. Second, the K_S × inAtomRatio elements with the largest values are selected from Diff, and their indices are stored in the set Δ. The atoms of the discriminative dictionary Dnew whose indices appear in Δ are regarded as atoms with very strong salient features, where the parameter inAtomRatio ∈ [0,1] denotes the fraction of atoms with very strong salient features selected from all the saliency atoms. Third, the identified atoms containing strong salient features are removed from Dnew, yielding the pruned dictionary D̃new, where D̃new denotes the submatrix of Dnew after removing the atoms whose indices appear in Δ.
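A sketch of this error-driven atom removal follows; alpha = 0.5 and in_atom_ratio = 0.5 are placeholder values inside the patent's stated [0,1] ranges, and the Frobenius-norm form of the errors is an assumption consistent with the definitions above.

    import numpy as np

    def prune_dictionary(D_new, K_s, G_sal, X_sal, G_nonsal, X_nonsal,
                         alpha=0.5, in_atom_ratio=0.5):
        K = D_new.shape[1]
        diffs = []
        for j in range(K_s):                     # try removing each saliency atom
            keep = np.delete(np.arange(K), j)
            D_j = D_new[:, keep]
            err_sal = np.linalg.norm(G_sal - D_j @ X_sal[keep, :]) ** 2
            err_nonsal = np.linalg.norm(G_nonsal - D_j @ X_nonsal[keep, :]) ** 2
            diffs.append(alpha * err_sal - (1 - alpha) * err_nonsal)
        n_remove = int(K_s * in_atom_ratio)
        removed = np.argsort(diffs)[::-1][:n_remove]   # largest weighted differences
        kept = np.setdiff1d(np.arange(K), removed)
        return D_new[:, kept], kept                    # pruned dictionary, kept indices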
(4) Step 4: Compute the saliency map.
First, multiple saliency maps are generated from the dictionaries with the strong saliency atoms removed. For each image block y_i ∈ R^{m×n}, its sparse reconstruction error under the pruned dictionary D̃new is computed as

Err_i = ||g_i − D̃new · x̃_i||_2^2,
where g_i ∈ R^{p×1} and x_i ∈ R^{K×1} are the feature column vector and sparse coding coefficient of block y_i, and x̃_i denotes the subvector of x_i after removing the rows whose indices appear in Δ. Second, the reconstruction error of each image block is taken directly as the pixel value of that block, giving the raw saliency map Rawmap. Third, since the image blocks do not overlap, the raw map is filtered with a Gaussian filter to overcome blocking artifacts: map = g * Rawmap, where map denotes the initial saliency map and g is a two-dimensional Gaussian filter with standard deviation σ. Finally, since there are N_S pruned dictionaries D̃new_1, …, D̃new_{N_S}, N_S initial saliency maps map_1, …, map_{N_S} are obtained.
The final saliency map is then computed by weighted fusion of the initial saliency maps. The initial saliency maps map_i (i = 1, …, N_S) are added with weights to obtain the final saliency map MAP:

MAP = Σ_{i=1}^{N_S} w_i × map_i,

where w_i denotes the weight coefficient of each initial saliency map.
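The step-4 computation can be sketched as below; uniform fusion weights w_i = 1/N_s and the value of sigma are assumptions, as the patent leaves both as free parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_map(G, X, kept, D_pruned, grid, sigma=3.0):
        # Reconstruction error of every block under the pruned dictionary,
        # reshaped onto the (rows, cols) block grid and smoothed against blockiness.
        err = np.linalg.norm(G - D_pruned @ X[kept, :], axis=0) ** 2
        return gaussian_filter(err.reshape(grid), sigma=sigma)

    def fuse_maps(maps, weights=None):
        maps = np.stack(maps)                        # the N_s initial maps
        if weights is None:
            weights = np.full(len(maps), 1.0 / len(maps))
        return np.tensordot(weights, maps, axes=1)   # MAP = sum_i w_i * map_i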
The present invention adopts the above technical solution and has the following beneficial effects:
(1) By analyzing the over-complete dictionary, the method effectively classifies the atoms containing salient features and those without salient features, greatly enhancing the dictionary's ability to distinguish salient regions from non-salient regions;
(2) By analyzing the sparse coding coefficients and the sparse reconstruction errors, the method effectively distinguishes the foreground image blocks containing visually salient features from the background image blocks without salient features, and removes from the dictionary the atoms that have strong reconstruction ability for foreground blocks but weak reconstruction ability for background blocks, thereby improving the precision of saliency detection.
Description of the drawings
Fig. 1 is the framework diagram of an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further elucidated below with reference to specific embodiments. It should be understood that these embodiments are intended only to illustrate the present invention and not to limit its scope. After reading the present invention, modifications of various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims of this application.
As shown in Fig. 1, the method is described in further detail as follows:
First, discriminative dictionary learning and analysis are carried out, comprising the following two steps:
(1) Multi-dictionary learning based on random sampling.
First, in each round of random sampling, a window of size m × n, centered at an arbitrary pixel position inside the image region, samples the test image at random, yielding a group of small image blocks S = [s_1, …, s_K], where s_i ∈ R^{m×n} denotes the i-th image block and K is the total number of blocks. Then the feature of each block is extracted with the ICOPP model, yielding the feature matrix F = [f_1, …, f_K] ∈ R^{p×K}, where f_i is the feature column vector of block s_i. Next, each column of the feature matrix is normalized to obtain an over-complete dictionary D ∈ R^{p×K} (K > p). Each atom d_i of D is a unit vector of modulus 1; the normalization is

d_i = f_i / ||f_i||_2, i = 1, …, K.
Finally, the above steps are repeated N_s times to obtain N_s over-complete dictionaries D_i, i = 1, …, N_s, where each dictionary is trained from a group of image blocks obtained by independent random sampling. The larger N_s is, the better the detection result, but the larger the computational cost; balancing performance and efficiency, N_s = 4 is taken.
(2) Dictionary atom classification based on atom inner products.
First, for each atom of dictionary D, the sum of its inner products with all other atoms is computed:

ρ_i = Σ_{j=1, j≠i}^{K} <d_i, d_j>,
where ρ_i denotes the inner-product sum of the i-th atom d_i relative to all the other atoms and <·,·> denotes the inner product operation. For dictionary D, the above operation yields an inner-product sum set P = [ρ_1, …, ρ_K]. Second, since the number of saliency atoms in the dictionary is far smaller than the number of non-saliency atoms, the inner-product sums of saliency atoms will be significantly smaller than those of non-saliency atoms. Based on this assumption, the saliency atoms and non-saliency atoms in the dictionary can be distinguished by the following method:
the K_S atoms with the smallest inner-product sums in P are selected as saliency atoms, where the number of saliency atoms selected is set to K_S = K × salAtomRatio, and salAtomRatio ∈ [0,1] denotes the percentage of the dictionary atoms chosen as saliency atoms. Since the quantity of saliency atoms is significantly lower than that of non-saliency atoms, and to ensure that K_S is an integer, salAtomRatio = 0.15 is set. Third, the saliency atoms and the non-saliency atoms are stored in the saliency atom set salAtomSet and the non-saliency atom set nonsalAtomSet, respectively. Finally, a new dictionary Dnew can be obtained:
Dnew = [salAtomSet, nonsalAtomSet] ∈ R^{p×K}
Since each atom of Dnew carries a label, Dnew has discriminative ability. Therefore, by performing the above inner-product analysis on the dictionaries D_1, …, D_{N_S}, N_S discriminative dictionaries Dnew_1, …, Dnew_{N_S} are finally obtained.
Then, the sparse coding coefficients are analyzed, mainly comprising the following two steps (3)-(4):
(3) Sparse coding of all image blocks of the test image.
First, the input image is divided into N non-overlapping image blocks Y = [y_1, …, y_N], where y_i ∈ R^{m×n} is the i-th block. Second, the ICOPP feature of each block is extracted, yielding the feature matrix G = [g_1, …, g_N] ∈ R^{p×N}, where g_i is the feature vector of block y_i. Third, the sparse coding coefficients X = [x_1, …, x_N] ∈ R^{K×N} of G under the discriminative dictionary Dnew are solved with the OMP algorithm, where x_i ∈ R^{K×1} is the sparse coding coefficient of the i-th block.
(4) Saliency/non-saliency discrimination based on sparse coding coefficient analysis.
First, the sparse coding coefficient x_i of each image block is divided into two parts, salCoef_i and nonsalCoef_i, where salCoef_i denotes the sum of the coding coefficients belonging to the saliency atom set and nonsalCoef_i denotes the sum of the coding coefficients belonging to the non-saliency atom set:

salCoef_i = Σ_{j=1}^{K_S} x_i(j), nonsalCoef_i = Σ_{j=K_S+1}^{K} x_i(j),

where x_i(j) is the j-th element of x_i. Then, for each image block y_i, salCoef_i is compared with nonsalCoef_i. If salCoef_i > nonsalCoef_i, y_i is judged to be a saliency image block; otherwise, y_i is judged to be a non-saliency image block. Finally, all saliency image blocks together with their feature vectors and sparse coding coefficients are stored in the sets Ysal, Gsal, and Xsal, respectively; meanwhile, all non-saliency image blocks together with their feature vectors and sparse coding coefficients are stored in the sets Ynonsal, Gnonsal, and Xnonsal, respectively.
Next, the sparse reconstruction errors are analyzed, comprising the following steps:
(5) Computing the sparse reconstruction errors of the saliency and non-saliency image blocks.
For each discriminative dictionary Dnew, the atoms in the saliency atom set salAtomSet are removed one at a time, and the sparse reconstruction errors of the saliency image block set Ysal and the non-saliency image block set Ynonsal under the dictionary with the current atom removed are calculated. Let Dnew_{:j} denote the submatrix of Dnew after removing the j-th column, where 1 ≤ j ≤ K_S. The sparse reconstruction errors of Ysal and Ynonsal under Dnew_{:j} are computed as

Errsal_j = ||Gsal − Dnew_{:j} · Xsal_{:j}||_F^2, Errnonsal_j = ||Gnonsal − Dnew_{:j} · Xnonsal_{:j}||_F^2,

where Xsal_{:j} and Xnonsal_{:j} denote the submatrices of Xsal and Xnonsal after removing the j-th row. Then the weighted difference of Errsal_j and Errnonsal_j is calculated:
Diff_j = α × Errsal_j − (1 − α) × Errnonsal_j,
where the parameter 0 ≤ α ≤ 1 controls the weights of Errsal_j and Errnonsal_j.
(6) Dictionary atom removal based on the sparse reconstruction error analysis.
First, each weighted difference Diff_j (1 ≤ j ≤ K_S) is stored in the weighted difference set Diff. Second, the K_S × inAtomRatio elements with the largest values are selected from Diff, and their indices are stored in the set Δ. The atoms of the discriminative dictionary Dnew whose indices appear in Δ are regarded as atoms with very strong salient features, where the parameter inAtomRatio ∈ [0,1] denotes the fraction of atoms with very strong salient features selected from all the saliency atoms. Third, the identified atoms containing strong salient features are removed from Dnew, yielding the pruned dictionary D̃new, where D̃new denotes the submatrix of Dnew after removing the atoms whose indices appear in Δ.
Finally, the saliency map is computed, comprising the following two steps (7)-(8):
(7) Generating multiple saliency maps from the dictionaries with the strong saliency atoms removed.
First, for each image block y_i ∈ R^{m×n}, its sparse reconstruction error under the pruned dictionary D̃new is computed as

Err_i = ||g_i − D̃new · x̃_i||_2^2,

where g_i ∈ R^{p×1} and x_i ∈ R^{K×1} are the feature column vector and sparse coding coefficient of block y_i, and x̃_i denotes the subvector of x_i after removing the rows whose indices appear in Δ. Second, the reconstruction error of each image block is taken directly as the pixel value of that block, giving the raw saliency map Rawmap. Third, since the image blocks do not overlap, the raw map is filtered with a Gaussian filter to overcome blocking artifacts: map = g * Rawmap, where map denotes the initial saliency map and g is a two-dimensional Gaussian filter with standard deviation σ. Finally, since there are N_S pruned dictionaries D̃new_1, …, D̃new_{N_S}, N_S initial saliency maps map_1, …, map_{N_S} are obtained.
(8) Computing the final saliency map by weighted fusion of the initial saliency maps.
The initial saliency maps map_i (i = 1, …, N_S) are added with weights to obtain the final saliency map MAP:

MAP = Σ_{i=1}^{N_S} w_i × map_i,

where w_i denotes the weight coefficient of each initial saliency map.
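Tying the sketches above together, a hypothetical end-to-end driver (same assumptions as before: grayscale input, block intensities standing in for ICOPP features, uniform fusion weights) could look like:

    import numpy as np

    def extract_block_features(image, patch=(8, 8)):
        # Non-overlapping block grid; one feature column per block (ICOPP stand-in).
        m, n = patch
        rows, cols = image.shape[0] // m, image.shape[1] // n
        feats = [image[r*m:(r+1)*m, c*n:(c+1)*n].reshape(-1)
                 for r in range(rows) for c in range(cols)]
        return np.stack(feats, axis=1).astype(float), (rows, cols)

    def detect_saliency(image, n_dicts=4, patch=(8, 8)):
        G, grid = extract_block_features(image, patch)
        maps = []
        for s in range(n_dicts):                  # one initial map per sampled dictionary
            D = learn_dictionary(image, patch, rng=np.random.default_rng(s))
            D_new, K_s = split_atoms(D)
            X, is_sal = classify_blocks(G, D_new, K_s)
            D_pruned, kept = prune_dictionary(D_new, K_s,
                                              G[:, is_sal], X[:, is_sal],
                                              G[:, ~is_sal], X[:, ~is_sal])
            maps.append(saliency_map(G, X, kept, D_pruned, grid))
        return fuse_maps(maps)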

Claims (4)

1. A visual saliency detection method based on sparse representation, characterized by comprising the following steps:
Step 1: Through discriminative dictionary learning and analysis, classify the atoms in the dictionary into those containing salient features and those without salient features, thereby constructing multiple discriminative dictionaries;
Step 2: By analyzing the sparse coding coefficients of each image block under the discriminative dictionary, classify each image block into its category, distinguishing the foreground image blocks containing visually salient features from the background image blocks without salient features;
Step 3: By analyzing the sparse reconstruction error of each image block, remove from the discriminative dictionary the atoms that have strong reconstruction ability for foreground image blocks but weak reconstruction ability for background image blocks;
Step 4: Compute the saliency map;
wherein classifying the atoms in the dictionary into those containing salient features and those without salient features through discriminative dictionary learning and analysis, thereby constructing multiple discriminative dictionaries, proceeds as follows:
(1) multi-dictionary learning based on random sampling;
first, in each round of random sampling, a window of size m × n, centered at an arbitrary pixel position inside the image region, samples the test image at random, yielding a group of small image blocks S = [s_1, …, s_K], where s_i ∈ R^{m×n} denotes the i-th image block and K is the total number of blocks;
then, the feature of each image block is extracted with the ICOPP model, yielding the feature matrix F = [f_1, …, f_K] ∈ R^{p×K}, where f_i is the feature column vector of block s_i;
next, each column of the feature matrix is normalized to obtain an over-complete dictionary D ∈ R^{p×K}, K > p; each atom d_i of D is a unit vector of modulus 1, with the normalization d_i = f_i / ||f_i||_2, i = 1, …, K;
finally, the above steps are repeated N_s times to obtain N_s over-complete dictionaries, denoted D_i, i = 1, …, N_s, where each dictionary is trained from a group of image blocks obtained by independent random sampling;
(2) dictionary atom classification based on atom inner products;
first, for each atom of dictionary D, the sum of its inner products with all other atoms is computed: ρ_i = Σ_{j=1, j≠i}^{K} <d_i, d_j>,
where ρ_i denotes the inner-product sum of the i-th atom d_i relative to all the other atoms and <·,·> denotes the inner product operation; for dictionary D, the above operation yields an inner-product sum set P = [ρ_1, …, ρ_K];
second, since the number of saliency atoms in the dictionary is far smaller than the number of non-saliency atoms, the inner-product sums of saliency atoms will be significantly smaller than those of non-saliency atoms; based on this assumption, the saliency atoms and non-saliency atoms in the dictionary are distinguished by the following method:
the K_S atoms with the smallest inner-product sums in P are selected as saliency atoms, where the number of saliency atoms selected is set to K_S = K × salAtomRatio, and salAtomRatio ∈ [0,1] denotes the percentage of the dictionary atoms chosen as saliency atoms; since the quantity of saliency atoms is significantly lower than that of non-saliency atoms, and to ensure that K_S is an integer, salAtomRatio = 0.15 is set;
third, the saliency atoms and the non-saliency atoms are stored in the saliency atom set salAtomSet and the non-saliency atom set nonsalAtomSet, respectively;
finally, a new dictionary Dnew can be obtained:
Dnew = [salAtomSet, nonsalAtomSet] ∈ R^{p×K};
since each atom of Dnew carries a label, Dnew has discriminative ability; therefore, by performing the above inner-product analysis on the dictionaries D_1, …, D_{N_S}, N_S discriminative dictionaries Dnew_1, …, Dnew_{N_S} are finally obtained.
2. The visual saliency detection method based on sparse representation according to claim 1, characterized in that classifying each image block into its category by analyzing the sparse coding coefficients of each image block under the discriminative dictionary proceeds as follows:
(1) sparse coding of all image blocks of the test image;
first, the input image is divided into N non-overlapping image blocks Y = [y_1, …, y_N], where y_i ∈ R^{m×n} is the i-th block;
second, the ICOPP feature of each block is extracted, yielding the feature matrix G = [g_1, …, g_N] ∈ R^{p×N}, where g_i is the feature vector of block y_i;
third, the sparse coding coefficients X = [x_1, …, x_N] ∈ R^{K×N} of G under the discriminative dictionary Dnew are solved with the OMP algorithm, where x_i ∈ R^{K×1} is the sparse coding coefficient of the i-th block;
(2) saliency/non-saliency discrimination based on sparse coding coefficient analysis;
first, the sparse coding coefficient x_i of each image block is divided into two parts, salCoef_i and nonsalCoef_i, where salCoef_i denotes the sum of the coding coefficients belonging to the saliency atom set and nonsalCoef_i denotes the sum of the coding coefficients belonging to the non-saliency atom set: salCoef_i = Σ_{j=1}^{K_S} x_i(j), nonsalCoef_i = Σ_{j=K_S+1}^{K} x_i(j), where x_i(j) is the j-th element of x_i;
then, for each image block y_i, the saliency coefficient sum salCoef_i is compared with the non-saliency coefficient sum nonsalCoef_i; if salCoef_i > nonsalCoef_i, y_i is judged to be a saliency image block; otherwise, y_i is judged to be a non-saliency image block;
finally, all saliency image blocks together with their feature vectors and sparse coding coefficients are stored in the sets Ysal, Gsal, and Xsal, respectively; meanwhile, all non-saliency image blocks together with their feature vectors and sparse coding coefficients are stored in the sets Ynonsal, Gnonsal, and Xnonsal, respectively.
3. The visual saliency detection method based on sparse representation according to claim 1, characterized in that removing from the discriminative dictionary the atoms that have strong reconstruction ability for foreground image blocks but weak reconstruction ability for background image blocks, by analyzing the sparse reconstruction error of each image block, proceeds as follows:
(1) computing the sparse reconstruction errors of the saliency and non-saliency image blocks;
for each discriminative dictionary Dnew, the atoms in the saliency atom set salAtomSet are removed one at a time, and the sparse reconstruction errors of the saliency image block set Ysal and the non-saliency image block set Ynonsal under the dictionary with the current atom removed are calculated; let Dnew_{:j} denote the submatrix of Dnew after removing the j-th column, where 1 ≤ j ≤ K_S; the sparse reconstruction errors of Ysal and Ynonsal under Dnew_{:j} are computed as Errsal_j = ||Gsal − Dnew_{:j} · Xsal_{:j}||_F^2 and Errnonsal_j = ||Gnonsal − Dnew_{:j} · Xnonsal_{:j}||_F^2, where Xsal_{:j} and Xnonsal_{:j} denote the submatrices of Xsal and Xnonsal after removing the j-th row;
then, the weighted difference of Errsal_j and Errnonsal_j is calculated:
Diff_j = α × Errsal_j − (1 − α) × Errnonsal_j,
where the parameter 0 ≤ α ≤ 1 controls the weights of Errsal_j and Errnonsal_j;
(2) dictionary atom removal based on the sparse reconstruction error analysis;
first, each weighted difference Diff_j, 1 ≤ j ≤ K_S, is stored in the weighted difference set Diff;
second, the K_S × inAtomRatio elements with the largest values are selected from Diff, and their indices are stored in the set Δ; the atoms of the discriminative dictionary Dnew whose indices appear in Δ are regarded as atoms with very strong salient features, where the parameter inAtomRatio ∈ [0,1] denotes the fraction of atoms with very strong salient features selected from all the saliency atoms;
third, the identified atoms containing strong salient features are removed from Dnew, yielding the pruned dictionary D̃new, where D̃new denotes the submatrix of Dnew after removing the atoms whose indices appear in Δ.
4. The visual saliency detection method based on sparse representation according to claim 1, characterized in that the saliency map is computed as follows:
(1) generating multiple saliency maps from the dictionaries with the strong saliency atoms removed;
first, for each image block y_i ∈ R^{m×n}, its sparse reconstruction error under the pruned dictionary D̃new is computed as Err_i = ||g_i − D̃new · x̃_i||_2^2, where g_i ∈ R^{p×1} and x_i ∈ R^{K×1} are the feature column vector and sparse coding coefficient of block y_i, and x̃_i denotes the subvector of x_i after removing the rows whose indices appear in Δ;
second, the reconstruction error of each image block is taken directly as the pixel value of that block, giving the raw saliency map Rawmap;
third, since the image blocks do not overlap, the raw map is filtered with a Gaussian filter to overcome blocking artifacts: map = g * Rawmap, where map denotes the initial saliency map and g is a two-dimensional Gaussian filter with standard deviation σ;
finally, since there are N_S pruned dictionaries D̃new_1, …, D̃new_{N_S}, N_S initial saliency maps map_1, …, map_{N_S} are obtained;
(2) computing the final saliency map by weighted fusion of the initial saliency maps;
the initial saliency maps map_i, i = 1, …, N_S, are added with weights to obtain the final saliency map MAP: MAP = Σ_{i=1}^{N_S} w_i × map_i, where w_i denotes the weight coefficient of each initial saliency map.
CN201610356541.7A 2016-05-26 2016-05-26 A visual saliency detection method based on sparse representation Active CN106056592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610356541.7A CN106056592B (en) 2016-05-26 2016-05-26 A visual saliency detection method based on sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610356541.7A CN106056592B (en) 2016-05-26 2016-05-26 A visual saliency detection method based on sparse representation

Publications (2)

Publication Number Publication Date
CN106056592A CN106056592A (en) 2016-10-26
CN106056592B (en) 2016-10-26 2018-10-23

Family

ID=57174608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610356541.7A Active CN106056592B (en) A visual saliency detection method based on sparse representation

Country Status (1)

Country Link
CN (1) CN106056592B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392211B (en) * 2017-07-19 2021-01-15 苏州闻捷传感技术有限公司 Salient target detection method based on visual sparse cognition
CN107423765A (en) * 2017-07-28 2017-12-01 福州大学 Bottom-up salient target detection method based on sparse coding feedback network
CN113870240B (en) * 2021-10-12 2024-04-16 大连理工大学 Safety valve cavitation phenomenon judging method based on image significance detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831402A (en) * 2012-08-09 2012-12-19 西北工业大学 Sparse coding and visual saliency-based method for detecting airport through infrared remote sensing image
CN105389550A (en) * 2015-10-29 2016-03-09 北京航空航天大学 Remote sensing target detection method based on sparse guidance and significant drive

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831402A (en) * 2012-08-09 2012-12-19 西北工业大学 Sparse coding and visual saliency-based method for detecting airport through infrared remote sensing image
CN105389550A (en) * 2015-10-29 2016-03-09 北京航空航天大学 Remote sensing target detection method based on sparse guidance and significant drive

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image Saliency Detection with Sparse Representation of Learnt Texture Atoms; Lai Jiang et al.; 2015 IEEE International Conference on Computer Vision Workshops; 2015-12-31; p. 895, left column, paragraph 2 to p. 898, right column, paragraph 1 *
Spatiotemporal saliency model for small moving object detection in infrared videos; Xin Wang et al.; Infrared Physics & Technology; 2015-01-29; full text *
Image Sparse Representation Based on Dictionary Atom Optimization and Its Applications; Li Hongjun et al.; Journal of Southeast University (Natural Science Edition); 2014-01-31; Vol. 44, No. 1; full text *

Also Published As

Publication number Publication date
CN106056592A (en) 2016-10-26

Similar Documents

Publication Publication Date Title
CN111259930B (en) General target detection method of self-adaptive attention guidance mechanism
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN110287960 Detection and recognition method for curved text in natural scene images
CN108830188 Vehicle detection method based on deep learning
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN107133627A (en) Infrared light spot center point extracting method and device
CN107945153 Road surface crack detection method based on deep learning
Ding et al. Adversarial shape learning for building extraction in VHR remote sensing images
CN108122008A (en) SAR image recognition methods based on rarefaction representation and multiple features decision level fusion
CN107145836A (en) Hyperspectral image classification method based on stack boundary discrimination self-encoding encoder
CN103617413B (en) Method for identifying object in image
CN113989662A (en) Remote sensing image fine-grained target identification method based on self-supervision mechanism
CN109635811A (en) The image analysis method of spatial plant
Ngugi et al. A new approach to learning and recognizing leaf diseases from individual lesions using convolutional neural networks
CN109461163 An edge detection and extraction algorithm for a magnetic resonance standard water phantom
CN105989336A (en) Scene identification method based on deconvolution deep network learning with weight
CN106056592B (en) A visual saliency detection method based on sparse representation
CN109284781A (en) Image classification algorithms and system based on manifold learning
CN103984954B (en) Image combining method based on multi-feature fusion
Feng et al. Mutual-complementing framework for nuclei detection and segmentation in pathology image
Feng et al. Cacnet: Salient object detection via context aggregation and contrast embedding
CN108985161A (en) A kind of low-rank sparse characterization image feature learning method based on Laplace regularization
CN106548195 An object detection method based on improved HOG-ULBP feature operators
CN114299383A (en) Remote sensing image target detection method based on integration of density map and attention mechanism
CN109241981 A feature detection method based on sparse coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210903

Address after: 215332 room 1210, 12 / F, block B, building 1, Zhongke innovation Plaza, Huaqiao Town, Kunshan City, Suzhou City, Jiangsu Province

Patentee after: Youjing (Suzhou) Digital Technology Co.,Ltd.

Address before: 211100 No. 8 West Buddha Road, Jiangning District, Jiangsu, Nanjing

Patentee before: HOHAI University