CN106296681B - Collaborative learning saliency detection method based on dual-channel low-rank decomposition - Google Patents


Info

Publication number
CN106296681B
CN106296681B CN201610648091.9A CN201610648091A
Authority
CN
China
Prior art keywords
pixel
region
super
indicate
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610648091.9A
Other languages
Chinese (zh)
Other versions
CN106296681A (en)
Inventor
杨淑媛
焦李成
王梦娜
王士刚
刘红英
马晶晶
马文萍
刘芳
侯彪
杜娟妮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201610648091.9A priority Critical patent/CN106296681B/en
Publication of CN106296681A publication Critical patent/CN106296681A/en
Application granted granted Critical
Publication of CN106296681B publication Critical patent/CN106296681B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses a collaborative learning saliency detection method based on dual-channel low-rank decomposition. Following the primary visual characteristics of the human visual perception system and simulating its theory of parallel visual processing, the invention treats saliency detection as a dual-channel collaborative learning process: a "where" feature saliency map and a "what" feature saliency map are obtained separately, the two feature saliency maps are combined effectively by a fusion mechanism, and the fused feature saliency map is then diffused. Salient regions are effectively highlighted and background noise is suppressed, so that the invention yields good detection results and improves detection precision and recall.

Description

Collaborative learning saliency detection method based on dual-channel low-rank decomposition
Technical field
The invention belongs to the field of computer technology, and more specifically to a collaborative learning saliency detection method based on dual-channel low-rank decomposition within the field of computer vision. The invention can be used for many image-processing tasks on natural images, such as compression and segmentation.
Background technique
Visual saliency detection allows an organism to concentrate its limited perceptual and cognitive resources on the most attention-worthy sensory data in a scene, and is considered a key attention mechanism supporting learning and survival. In recent years, saliency detection has been a research hotspot in computer vision; work in the area is largely devoted to techniques that let a computer intelligently detect the most attention-grabbing regions of a scene, with applications in special domains such as scene analysis, object detection and segmentation, image restoration, and image compression and coding.
The patent application of Anhui University, "An image salient-object detection method" (application number CN201510118787.6, publication number CN104680546A), discloses a saliency detection method based on a colour-contrast prior and a boundary prior. The method extracts colour-contrast prior features and boundary prior features of the image to obtain two saliency maps, adds the two maps to obtain a fused saliency map, and then multiplies the fused map by an exponential function of the objectness feature of the whole image to obtain the final saliency map. The method is simple and effective and can produce a reasonable saliency map. Its remaining shortcoming is that the saliency map obtained from the colour-contrast prior and the one obtained from the boundary prior contribute different proportions to the overall saliency map of the image, and these proportions are difficult to determine; merging the two maps by simple addition is therefore clearly unreasonable, which inevitably hurts the accuracy of the saliency detection result.
The paper by Xiaohui Li et al., "Saliency Detection via Dense and Sparse Reconstruction" (Computer Vision (ICCV), 2013 IEEE International Conference on, 2013), discloses a saliency detection method based on dense and sparse reconstruction from the image boundary. The method first segments the original image into many small regions and takes the border regions around the image as the background of the whole image. Using principal-component extraction and sparse coding, it performs dense and sparse reconstruction of the whole image, obtaining one saliency map from dense reconstruction and one from sparse reconstruction; it then refines the maps with a multi-scale operation, performs pixel-level diffusion on both maps, and finally fuses them to obtain the final result. Although this method can accurately detect the salient parts of an image, its remaining shortcoming is that not all image background regions lie on the image border; in reality, when the salient region of an image is concentrated on one of its borders, the result of this method is clearly unsatisfactory. In addition, the multi-scale operation segments the original image repeatedly, which increases the time complexity of the algorithm.
Summary of the invention
Aimed at the deficiencies of the above methods, the present invention proposes a new collaborative learning saliency detection method based on dual-channel low-rank decomposition. By simulating the parallel information processing of the human visual perception system, saliency detection is treated as a dual-channel collaborative learning process: a "where" feature saliency map and a "what" feature saliency map are obtained separately, the two maps are combined effectively by a fusion mechanism, and the fused map is diffused. Salient regions are effectively highlighted and background noise is suppressed, so the invention yields good detection results and improves detection precision and recall.
The concrete idea of the invention is as follows: superpixel segmentation is applied to the original image, dividing it into many regions. Low-rank matrix decomposition is applied to the border regions to obtain a pure background free of salient regions. Inspired by the theory of parallel visual processing, the image is processed through a "where" channel and a "what" channel, which cooperatively yield the "where" and "what" feature saliency maps. Finally, exploiting the strengths of both results, a combination mechanism is proposed to fuse the two saliency maps effectively and obtain the final result.
To achieve the goals above, the method for the present invention includes the following steps:
(1) Input a natural image:
(1a) Input a natural image;
(1b) Using a superpixel segmentation method, divide the input natural image into different regions;
(1c) For each region, compute the average of its colour features in RGB colour space and in LAB colour space;
(1d) Take these RGB and LAB colour-feature averages as the colour features of the region's superpixel;
(1e) For each region, compute the average pixel coordinate position within the input natural image;
(2) Decompose the boundary matrix:
(2a) Extract the superpixels of the peripheral regions of the input natural image as the boundary matrix;
(2b) Perform low-rank decomposition of the boundary matrix according to the following formula:
min ||L||_* + λ||S||_1  s.t.  B = S + L
where min denotes minimisation, ||·||_* is the nuclear norm, L is the low-rank matrix obtained from decomposing the boundary matrix, λ is the parameter balancing the low-rank matrix L against the sparse matrix S with value range [0, 1], ||·||_1 is the l1 norm, S is the sparse matrix obtained from the decomposition, s.t. denotes the constraint, and B is the boundary matrix;
(2c) Compute the l1 norm of each column of the low-rank matrix obtained from the boundary decomposition, average all the column norms, and use the average as the background threshold;
(2d) From the low-rank matrix obtained from the decomposition, select all columns whose l1 norm exceeds the background threshold; the superpixels corresponding to these columns form the pure background;
(3) Obtain the "where" feature saliency map:
Compute the "where" feature value of the pure background according to the following formula, and assemble the computed values into the "where" feature saliency map:
α_i = argmin ||x_i - Dα_i||_2^2 + μ||α_i||_1
where α_i is the sparse coefficient of the i-th region superpixel of the segmented natural image, argmin denotes minimisation, ||·||_2^2 is the squared l2 norm, x_i is the feature value of the i-th region superpixel, D is the low-rank matrix obtained from decomposing the boundary matrix, μ is the parameter balancing the sparsity term ||α_i||_1 against the reconstruction error term ||x_i - Dα_i||_2^2 with value range [0, 1], and ||·||_1 is the l1 norm;
(4) Obtain the "where" region superpixels:
(4a) Average all "where" feature values of the pure background and use the average as the target threshold;
(4b) From all "where" feature values of the pure background, select those exceeding the target threshold; the region superpixels with such values form the "where" region superpixels;
(5) Obtain the "what" feature saliency map:
(5a) Compute the hybrid similarity values between the non-"where" region superpixels and the "where" region superpixels, obtaining a hybrid similarity matrix;
(5b) Compute the self-similarity values of the non-"where" regions, obtaining a self-similarity matrix;
(5c) Sum each row of the self-similarity matrix and assemble the sums into a diagonal matrix;
(5d) Compute the feature values of the non-"where" region superpixels;
(5e) Assemble all feature values of the non-"where" and "where" regions into the "what" feature saliency map;
(6) Fuse the feature saliency maps:
According to the following formula, fuse the "where" feature saliency map and the "what" feature saliency map to obtain the fused feature saliency map:
S_f(y) = S_bg(y) .* S_goal(y)
where S_f is the fused feature saliency map, y is the y-th of the regions into which the natural image is divided with value range [1, N], N is the total number of regions of the natural image, S_bg is the "where" feature saliency map, .* denotes element-wise multiplication, and S_goal is the "what" feature saliency map;
(7) Diffuse:
(7a) Using the K-means clustering algorithm, divide all region superpixels of the input natural image into 8 classes;
(7b) According to the following formula, diffuse the fused feature saliency map to obtain the diffused saliency map:
where S̄_i^t is the saliency value of the i-th region superpixel of class t after diffusion, i has value range [1, K] with K the total number of region superpixels of class t, t has value range [1, 8], S_i^t is the saliency value of the i-th region superpixel of class t, S_j^t is the saliency value of the j-th region superpixel of class t with j in [1, K] and j ≠ i, and w_ij is the similarity between region superpixels i and j;
(8) Output the diffused saliency map.
Compared with the prior art, the present invention has the following advantages:
First, the invention uses low-rank matrix decomposition to split the image boundary matrix into a sparse part and a low-rank part, computes the l1 norm of each column of the resulting low-rank matrix, averages all the column norms into a background threshold, and selects from the low-rank matrix all columns whose l1 norm exceeds that threshold, taking the corresponding superpixels as the pure background. This operation effectively suppresses target noise and overcomes the failure of prior-art methods when the salient region of an image is concentrated on one of its borders, so the invention can extract effective background information and improves detection accuracy.
Second, the invention obtains the "where" and "what" feature saliency maps separately, combines the two maps effectively through a fusion mechanism, and diffuses the fused feature saliency map. This avoids the prior-art problem of apportioning weight between the colour-contrast-prior and boundary-prior saliency maps, effectively highlights salient regions while suppressing background noise, and thus yields good detection results and improves detection precision.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the original image used in the simulation experiments of the invention and the corresponding ground-truth map;
Fig. 3 shows the saliency maps produced in the simulation experiments of the invention;
Fig. 4 shows the comparison on the ASD and ECSSD data sets in the simulation experiments of the invention.
Detailed description of embodiments
The invention is further described below with reference to the accompanying drawings.
With reference to Fig. 1, the specific steps of the invention are described as follows.
Step 1: Input a natural image.
Input a natural image.
Using a superpixel segmentation method, divide the input natural image into different regions.
The superpixel segmentation method proceeds as follows:
Step 1: Initialise X cluster centres at a fixed interval S = sqrt(M/X) and assign each cluster centre an individual label, where X is 200 and M is the total number of pixels of the input natural image;
Step 2: For each pixel, compute its similarity to the nearest cluster centre according to the following formulas:
d_LAB = sqrt((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)
d_xy = sqrt((x_k - x_i)^2 + (y_k - y_i)^2)
D_i = d_LAB + (m / S) · d_xy
where d_LAB is the colour difference of two distinct pixels k and i in LAB colour space, sqrt denotes the square-root operation, l_k and l_i are the lightness (L) values of pixels k and i in LAB space, a_k and a_i are their red-green (A) values, b_k and b_i are their yellow-blue (B) values, d_xy is the spatial distance between pixels k and i, x_k and y_k are the horizontal and vertical coordinates of pixel k, x_i and y_i are those of pixel i, D_i is the similarity of pixel i to cluster centre k, m is the parameter balancing colour difference against spatial distance with value range [1, 20], and S is the fixed interval used to initialise the cluster centres;
Step 3: Assign the label of the cluster centre to the pixel with the greatest similarity;
Step 4: Iterate the above process until every pixel has been assigned a label, then merge all pixels with the same label to form the segmented regions.
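The per-pixel distance in step 2 above can be sketched as follows. This is an illustrative sketch, not the patent's code: the additive combination D_i = d_LAB + (m/S)·d_xy follows the reconstructed formula, and the function name is ours.

```python
import numpy as np

def slic_distance(lab_k, xy_k, lab_i, xy_i, m=10.0, S=20.0):
    """SLIC-style similarity of pixel i to cluster centre k: LAB colour
    distance plus spatial distance weighted by m/S (assumed combination)."""
    d_lab = np.linalg.norm(np.asarray(lab_k, float) - np.asarray(lab_i, float))
    d_xy = np.linalg.norm(np.asarray(xy_k, float) - np.asarray(xy_i, float))
    return d_lab + (m / S) * d_xy
```

With m in [1, 20] as stated above, a larger m weights spatial proximity more heavily relative to colour similarity.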
For each region, compute the average of its colour features in RGB colour space and in LAB colour space.
Take these RGB and LAB colour-feature averages as the colour features of the region's superpixel.
For each region, compute the average pixel coordinate position within the input natural image.
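The per-region averages of steps 1c-1e can be sketched as below, assuming the segmentation is given as an integer label map; the function and field names are ours, not the patent's.

```python
import numpy as np

def region_features(rgb, lab, labels):
    """Per-superpixel mean RGB colour, mean LAB colour and mean pixel
    coordinates (steps 1c-1e).  rgb, lab: HxWx3 arrays; labels: HxW ints."""
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = {}
    for r in np.unique(labels):
        mask = labels == r
        feats[r] = {
            "rgb": rgb[mask].mean(axis=0),   # mean colour in RGB space
            "lab": lab[mask].mean(axis=0),   # mean colour in LAB space
            "pos": np.array([ys[mask].mean(), xs[mask].mean()]),  # centroid
        }
    return feats
```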
Step 2: Decompose the boundary matrix.
Extract the superpixels of the peripheral regions of the input natural image as the boundary matrix.
Perform low-rank decomposition of the boundary matrix according to the following formula:
min ||L||_* + λ||S||_1  s.t.  B = S + L
where min denotes minimisation, ||·||_* is the nuclear norm, L is the low-rank matrix obtained from decomposing the boundary matrix, λ is the parameter balancing the low-rank matrix L against the sparse matrix S with value range [0, 1], ||·||_1 is the l1 norm, S is the sparse matrix obtained from the decomposition, s.t. denotes the constraint, and B is the boundary matrix.
Compute the l1 norm of each column of the low-rank matrix obtained from the boundary decomposition, average all the column norms, and use the average as the background threshold.
From the low-rank matrix obtained from the decomposition, select all columns whose l1 norm exceeds the background threshold; the superpixels corresponding to these columns form the pure background.
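The decomposition and thresholding of step 2 can be sketched with a generic robust-PCA solver (inexact augmented Lagrange multipliers). This is a standard textbook solver under our own parameter choices (ρ = 1.5, default λ = 1/sqrt(max(m, n))), not the patent's implementation.

```python
import numpy as np

def rpca(B, lam=None, tol=1e-7, max_iter=500):
    """Solve min ||L||_* + lam*||S||_1  s.t.  B = L + S by inexact ALM."""
    m, n = B.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    norm_B = np.linalg.norm(B)
    Y = B / max(np.linalg.norm(B, 2), np.abs(B).max() / lam)
    mu, rho = 1.25 / np.linalg.norm(B, 2), 1.5
    L, S = np.zeros_like(B), np.zeros_like(B)
    for _ in range(max_iter):
        # singular-value thresholding gives the low-rank part
        U, sig, Vt = np.linalg.svd(B - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # soft thresholding gives the sparse part
        T = B - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Z = B - L - S
        Y, mu = Y + mu * Z, min(mu * rho, 1e7)
        if np.linalg.norm(Z) / norm_B < tol:
            break
    return L, S

def pure_background_columns(L):
    """Steps 2c-2d: columns of L whose l1 norm exceeds the mean column l1
    norm; the corresponding superpixels form the pure background."""
    col_l1 = np.abs(L).sum(axis=0)
    return np.where(col_l1 > col_l1.mean())[0]
```

Applied to the boundary matrix B (one column per border superpixel), the surviving columns would be taken as the pure background.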
Step 3: Obtain the "where" feature saliency map.
Compute the "where" feature value of the pure background according to the following formula, and assemble the computed values into the "where" feature saliency map:
α_i = argmin ||x_i - Dα_i||_2^2 + μ||α_i||_1
where α_i is the sparse coefficient of the i-th region superpixel of the segmented natural image, argmin denotes minimisation, ||·||_2^2 is the squared l2 norm, x_i is the feature value of the i-th region superpixel, D is the low-rank matrix obtained from decomposing the boundary matrix, μ is the parameter balancing the sparsity term ||α_i||_1 against the reconstruction error term ||x_i - Dα_i||_2^2 with value range [0, 1], and ||·||_1 is the l1 norm.
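Step 3 can be sketched as below: each superpixel's feature vector is sparsely coded over the pure-background dictionary D, and the reconstruction residual is used as its "where" value. Using the residual as the score is our reading of this step (as in reconstruction-error saliency methods); the ISTA solver and the names are illustrative.

```python
import numpy as np

def ista_sparse_code(x, D, mu=0.01, n_iter=500):
    """Solve min_a ||x - D a||_2^2 + mu*||a||_1 by proximal gradient (ISTA)."""
    a = np.zeros(D.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - x)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0)  # soft threshold
    return a

def where_value(x, D, mu=0.01):
    """Residual of the sparse reconstruction of one superpixel over the
    pure-background dictionary (large residual = likely salient)."""
    return np.linalg.norm(x - D @ ista_sparse_code(x, D, mu))
```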
Step 4: Obtain the "where" region superpixels.
Average all "where" feature values of the pure background and use the average as the target threshold.
From all "where" feature values of the pure background, select those exceeding the target threshold; the region superpixels with such values form the "where" region superpixels.
Step 5: Obtain the "what" feature saliency map.
Compute the hybrid similarity values between the non-"where" region superpixels and the "where" region superpixels, obtaining the hybrid similarity matrix.
The hybrid similarity matrix is computed as follows:
Step 1: Arbitrarily choose one superpixel region from the non-"where" superpixel regions;
Step 2: Compute the similarity value between the chosen non-"where" superpixel region and each superpixel region of the "where" superpixel regions according to the following formula:
w_{i,j} = e^(-||c_i - c_j||^2 / σ^2)
where w_{i,j} is the similarity value between region superpixels i and j, i has value range [1, N1] with N1 the total number of non-"where" region superpixels, j has value range [1, N2] with N2 the total number of "where" region superpixels, e denotes the exponential operation, ||·|| denotes the norm operation, c_i is the colour feature vector of region superpixel i in LAB colour space, c_j is the colour feature vector of region superpixel j in LAB colour space, and σ^2 is the parameter controlling the magnitude of the weight w_{i,j}, set to 0.02;
Step 3: Assemble the computed similarity values into a hybrid similarity vector;
Step 4: Check whether all non-"where" superpixel regions have been chosen; if so, go to step 5, otherwise go to step 1;
Step 5: Assemble all hybrid similarity vectors into the hybrid similarity matrix.
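Steps 1-5 above reduce to one vectorised computation. The Gaussian form w_ij = e^(-||c_i - c_j||^2 / σ^2) follows the symbol definitions; the exact placement of σ^2 in the exponent is our assumption.

```python
import numpy as np

def similarity_matrix(C_a, C_b, sigma2=0.02):
    """w_ij = exp(-||c_i - c_j||^2 / sigma2) for every pair of rows of
    C_a (e.g. non-"where" LAB colour features) and C_b ("where" features)."""
    d2 = ((C_a[:, None, :] - C_b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma2)
```

The self-similarity matrix of step 5b would be the same computation with the non-"where" features passed for both arguments.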
Compute the self-similarity values of the non-"where" regions, obtaining the self-similarity matrix.
The self-similarity matrix is computed as follows:
Step 1: Arbitrarily choose one superpixel region from the non-"where" superpixel regions;
Step 2: Compute the similarity value between the chosen non-"where" superpixel region and each superpixel region of the non-"where" superpixel regions according to the following formula:
w_{i,j} = e^(-||c_i - c_j||^2 / σ^2)
where w_{i,j} is the similarity value between region superpixels i and j, i and j have value range [1, N1] with N1 the total number of non-"where" region superpixels, e denotes the exponential operation, ||·|| denotes the norm operation, c_i is the colour feature vector of region superpixel i in LAB colour space, c_j is the colour feature vector of region superpixel j in LAB colour space, and σ^2 is the parameter controlling the magnitude of the weight w_{i,j}, set to 0.02;
Step 3: Assemble the computed similarity values into a self-similarity vector;
Step 4: Check whether all non-"where" superpixel regions have been chosen; if so, go to step 5, otherwise go to step 1;
Step 5: Assemble all self-similarity vectors into the self-similarity matrix.
Sum each row of the self-similarity matrix and assemble the sums into a diagonal matrix.
Compute the feature values of the non-"where" region superpixels.
The feature values of the non-"where" region superpixels are computed as follows:
Step 1: Compute the feature values of the non-"where" region superpixels according to the following formula:
f_u = (D_uu - W_uu)^(-1) W_ul f_l
where f_u is the vector of feature values of the non-"where" region superpixels of the input natural image, D_uu is the diagonal matrix formed from the row sums of the similarity matrix of the non-"where" regions, W_uu is the self-similarity matrix of the non-"where" region superpixels, W_ul is the hybrid similarity matrix formed between the non-"where" regions and the "where" regions, and f_l is the vector of feature values of the "where" regions of the input natural image;
Step 2: Normalise the feature values of the non-"where" region superpixels to obtain the normalised feature values of the non-"where" regions.
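Steps 1-2 above can be sketched as standard harmonic label propagation. One caveat: if D_uu were built only from the row sums of W_uu, then D_uu - W_uu would be an exactly singular graph Laplacian, so the sketch below assumes the degree of each non-"where" region also counts its hybrid similarities to the "where" regions; that assumption, the min-max normalisation, and the names are ours.

```python
import numpy as np

def propagate_what(W_uu, W_ul, f_l):
    """f_u = (D_uu - W_uu)^(-1) W_ul f_l followed by min-max normalisation.
    W_uu: self-similarity of the non-"where" regions; W_ul: hybrid similarity
    to the "where" regions; f_l: "where" feature values."""
    deg = W_uu.sum(axis=1) + W_ul.sum(axis=1)  # assumed degree definition
    f_u = np.linalg.solve(np.diag(deg) - W_uu, W_ul @ np.asarray(f_l, float))
    rng = f_u.max() - f_u.min()
    return (f_u - f_u.min()) / rng if rng > 0 else np.zeros_like(f_u)
```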
Assemble all feature values of the non-"where" regions and the "where" regions into the "what" feature saliency map.
Step 6: Fuse the feature saliency maps.
According to the following formula, fuse the "where" feature saliency map and the "what" feature saliency map to obtain the fused feature saliency map:
S(i) = S_bg(i) .* S_goal(i)
where S is the fused feature saliency map, i is the i-th of the regions into which the natural image is divided with value range [1, N], N is the total number of regions of the natural image, S_bg is the "where" feature saliency map, .* denotes element-wise multiplication, and S_goal is the "what" feature saliency map.
Step 7: Diffuse.
Using the K-means clustering algorithm, divide all region superpixels of the input natural image into 8 classes.
According to the following formula, diffuse the fused feature saliency map to obtain the diffused saliency map:
where S̄_i^t is the saliency value of the i-th region superpixel of class t after diffusion, t denotes the class with value range [1, 8], i denotes the i-th region superpixel with value range [1, N] where N is the total number of region superpixels of class t, S_i^t is the saliency value of the i-th region superpixel of class t, S_j^t is the saliency value of the j-th region superpixel of class t, j denotes the j-th region superpixel with value range [1, N] and j ≠ i, and w_ij is the similarity between region superpixels i and j.
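Steps 6-7 can be sketched together as below. The element-wise fusion matches the formula in step 6; the diffusion formula itself is not reproduced in the text, so the within-cluster smoothing here (an equal blend of a region's own fused value and the similarity-weighted mean of its cluster-mates) is only a plausible reading, and the 0.5/0.5 blend is our assumption.

```python
import numpy as np

def fuse(s_where, s_what):
    """Step 6: element-wise product of the two per-region saliency vectors."""
    return np.asarray(s_where, float) * np.asarray(s_what, float)

def diffuse(s, labels, W):
    """Step 7b (assumed form): within each k-means cluster, blend each region's
    fused saliency with the similarity-weighted mean of the other members."""
    s = np.asarray(s, float)
    out = s.copy()
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        for i in idx:
            others = idx[idx != i]
            if others.size and W[i, others].sum() > 0:
                w = W[i, others]
                out[i] = 0.5 * s[i] + 0.5 * (w @ s[others]) / w.sum()
    return out
```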
Step 8: Output the diffused saliency map.
The effect of the invention is further described through the following simulation experiments.
1. Simulation conditions:
The simulation of the invention is carried out on two representative public data sets, ASD and ECSSD. The ASD data set contains 1000 natural images with simple content, and the ECSSD data set contains 1000 natural images with more complex content; the ground-truth maps of both data sets are manually annotated at the pixel level.
In the simulation, X in step 1 is set to 200, λ in step 2 to 0.08, μ in step 3 to 0.01, and σ^2 in step 5 to 0.02.
The simulation environment is MATLAB 2014b on an Intel(R) Core(TM) i5-6200U CPU @ 2.40 GHz with 4.00 GB of memory, running 64-bit Windows 7 Ultimate. The original image and corresponding ground-truth map used in the simulation are chosen from the ASD data set and shown in Fig. 2(a) and Fig. 2(b). Fig. 2(a) is the natural image used in the simulation (image size 400 × 300 pixels), and Fig. 2(b) is the reference ground-truth map (image size 400 × 300 pixels).
2. Simulation content and analysis
Fig. 3(a) is the "where" feature saliency map obtained by applying "where" feature extraction to Fig. 2(a); Fig. 3(b) is the "what" feature saliency map obtained by applying non-"where" feature extraction to Fig. 2(a); Fig. 3(c) is the result of fusing Fig. 3(a) and Fig. 3(b); Fig. 3(d) is the final result formed after diffusing Fig. 3(c).
Objective analysis of the simulation results:
To demonstrate the effect of the invention, the method of the invention is compared on natural images with 19 existing methods: CSP, CW, DSR, FT, GBVS, HFT, LRMR, MSSS, SDSR, SR, SRDS, SRIV, SUN, BFSS, SDBM, MSS, IT, GBRM, GRSD. Precision-recall (PR) curves and precision-recall-F-value histograms are drawn for the results of the 19 methods on Fig. 2.
The precision-recall PR curve used in the simulation is drawn as follows:
For any saliency detection method, the generated saliency map is segmented with every threshold τ ∈ [0, 255], and the resulting binary map is compared with the ground-truth map, giving the precision and recall of each saliency map at each of the 256 thresholds. Precision is the ratio of the overlap area between the binary-map target and the ground-truth target to the area of the binary-map target; recall is the ratio of the overlap area to the area of the ground-truth target. The 256 average precisions and recalls over all saliency maps in the whole image library are computed and plotted as 256 points on a plane whose horizontal axis is recall and vertical axis is precision; smoothly connecting these points forms the precision-recall curve.
Accuracy rate-recall rate-F value histogram is used in emulation experiment, method for drafting is as follows:
To every kind of conspicuousness detection algorithm, to all notable figures in whole image library, the average accurate of them is calculated separately Rate and average recall rate calculate F value according to the following formula:
Wherein β2Indicate the parameter for being used for precise control rate and recall rate significance level, β2Value is that 0.3, P indicates average standard True rate, R are averaged recall rate.
The computed average precision, average recall and F-value are drawn as a histogram.
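Assuming the standard F_β form (1 + β²)PR/(β²P + R) with β² = 0.3, the histogram values can be computed as:

```python
def f_measure(precision, recall, beta_sq=0.3):
    # F = (1 + beta^2) * P * R / (beta^2 * P + R), with beta^2 = 0.3
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```

With β² < 1 the F-value weights precision more heavily than recall, which is the usual choice in saliency evaluation.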
Figs. 4(a), 4(b) and 4(c) show the precision-recall PR curve and the precision-recall-F-value histogram comparison results on the ASD data set; Figs. 4(d), 4(e) and 4(f) show the precision-recall PR curve and the precision-recall-F-value histogram comparison results on the ECSSD data set.
A good saliency detection algorithm requires the precision P, the recall R and the F-value to be sufficiently large; in Fig. 4 this appears as a PR curve lying closer to the upper-right corner, indicating more accurate detection. As can be seen from Fig. 4, the method of the invention performs best in precision, recall and F-value, demonstrating the validity of the invention in detecting salient objects. By simulating the property of the human visual perception system of processing information in parallel, the method of the invention treats the saliency detection process as a dual-channel collaborative learning process, which shows a clear advantage over the existing methods.
In summary, by simulating the parallel information processing of the human visual perception system, the invention treats the saliency detection process as a dual-channel collaborative learning process, obtaining a "where"-feature saliency map and a "what"-feature saliency map respectively, effectively combining the two saliency maps through a fusion mechanism, and diffusing the fused saliency map, which effectively highlights the salient region and suppresses background noise, so that the invention achieves better detection results and improves the precision and recall of detection.

Claims (5)

1. A collaborative learning saliency detection method based on dual-channel low-rank decomposition, comprising the following steps:
(1) Input a natural image:
(1a) input a natural image;
(1b) divide the input natural image into different regions using a superpixel segmentation method;
(1c) compute, for each region, the average of its colour features in the RGB colour space and in the LAB colour space;
(1d) take the averages of the colour features of each region in the RGB colour space and the LAB colour space as the colour features of the region's superpixel;
(1e) compute, for each region, the average of the coordinate positions of its pixels in the input natural image;
(2) Decompose the boundary matrix:
(2a) extract the superpixels of the peripheral regions of the input natural image as the boundary matrix;
(2b) perform a low-rank decomposition of the boundary matrix according to the following formula:

min ||L||* + λ||S||_1  s.t.  B = S + L

where min denotes the minimization operation, ||·||* denotes the nuclear norm operation, L denotes the low-rank matrix after decomposition of the boundary matrix, λ denotes the parameter balancing the low-rank matrix L and the sparse matrix S, with value range [0, 1], ||·||_1 denotes the one-norm operation, S denotes the sparse matrix after decomposition of the boundary matrix, s.t. denotes the constraint condition, and B denotes the boundary matrix;
(2c) compute the one-norm of each column of the low-rank matrix obtained by decomposing the boundary matrix, and take the average of all these norms as the background threshold;
(2d) from the low-rank matrix obtained by decomposing the boundary matrix, select all columns whose one-norm is greater than the background threshold, and take the superpixels corresponding to these columns as the pure background;
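The decomposition in step (2b) is the standard robust-PCA problem min ||L||* + λ||S||_1 s.t. B = S + L. The claim does not name a solver, so the inexact augmented-Lagrange-multiplier sketch below, with conventional parameter choices, is an assumption rather than the patented implementation:

```python
import numpy as np

def rpca(B, lam=None, tol=1e-7, max_iter=500):
    """Decompose B into low-rank L and sparse S via inexact ALM (a sketch)."""
    m, n = B.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))   # conventional default for lambda
    norm_B = np.linalg.norm(B)
    # dual variable and penalty initialized as in the standard inexact-ALM recipe
    Y = B / max(np.linalg.norm(B, 2), np.abs(B).max() / lam)
    mu = 1.25 / np.linalg.norm(B, 2)
    rho = 1.5
    S = np.zeros_like(B)
    for _ in range(max_iter):
        # L-step: singular value thresholding of B - S + Y/mu
        U, sig, Vt = np.linalg.svd(B - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # S-step: elementwise soft-thresholding (shrinkage) of B - L + Y/mu
        T = B - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Z = B - L - S                       # constraint residual
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z) / norm_B < tol:
            break
    return L, S
```

Steps (2c)-(2d) then reduce to thresholding the column-wise one-norms of L against their mean.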
(3) Obtain the "where"-feature saliency map:
compute the "where"-feature value of each region superpixel with respect to the pure background according to the following formula, and combine the computed feature values into the "where"-feature saliency map:

α_i = argmin (1/2)||x_i − Dα_i||_2² + μ||α_i||_1

where α_i denotes the sparse coefficient of the i-th region superpixel among the region superpixels into which the natural image is divided, argmin denotes the minimization operation, ||·||_2 denotes the two-norm operation, x_i denotes the feature value of the i-th region superpixel among the region superpixels into which the natural image is divided, D denotes the low-rank matrix after decomposition of the boundary matrix, μ denotes the parameter balancing the sparse constraint term ||α_i||_1 and the reconstruction error term (1/2)||x_i − Dα_i||_2², with value range [0, 1], and ||·||_1 denotes the one-norm operation;
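Step (3) is an ℓ1-regularized reconstruction over the background dictionary D. Below is a minimal ISTA sketch; the solver choice, the 1/2 factor in the objective, and the use of the reconstruction error as the "where"-feature value are our reading of the claim rather than its verbatim content:

```python
import numpy as np

def sparse_code(x, D, mu=0.01, n_iter=200):
    """Solve alpha = argmin 0.5*||x - D@alpha||_2^2 + mu*||alpha||_1 by ISTA."""
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)
        z = alpha - step * grad
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0)  # soft threshold
    return alpha

def where_saliency(x, D, mu=0.01):
    """Reconstruction error of x over the background dictionary D.

    A superpixel that the pure background reconstructs poorly (large error)
    is taken as salient -- an assumed interpretation of the claim.
    """
    alpha = sparse_code(x, D, mu)
    return np.linalg.norm(x - D @ alpha) ** 2
```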
(4) Obtain the "where"-region superpixels:
(4a) average all "where"-feature values of the pure background, and take the average as the target threshold;
(4b) from all the "where"-feature values, select the values greater than the target threshold, and take all region superpixels whose values are greater than the target threshold as the "where"-region superpixels;
(5) Obtain the "what"-feature saliency map:
(5a) compute the hybrid similarity values between the non-"where" region superpixels and the "where"-region superpixels to obtain the hybrid similarity matrix;
(5b) compute the self-similarity values of the non-"where" regions to obtain the self-similarity matrix;
(5c) sum each row of the self-similarity matrix, and arrange the resulting sums in turn into a diagonal matrix;
(5d) compute the feature values of the non-"where" region superpixels;
(5e) combine all feature values of the non-"where" regions and the "where" regions into the "what"-feature saliency map;
(6) Fuse the feature saliency maps:
fuse the "where"-feature saliency map and the "what"-feature saliency map according to the following formula to obtain the fused feature saliency map:

S_f(y) = S_bg(y) .* S_goal(y)

where S_f(y) denotes the fused feature saliency map, y denotes the y-th of the regions into which the natural image is divided, with value range [1, N], N denotes the total number of regions into which the natural image is divided, S_bg denotes the "where"-feature saliency map, .* denotes the element-wise product operation, and S_goal denotes the "what"-feature saliency map;
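Step (6) in numpy; the final rescaling to [0, 1] is an added convenience for display, not part of the claim:

```python
import numpy as np

def fuse(S_bg, S_goal):
    """Element-wise (per-region) product of the two channel saliency maps."""
    S_f = S_bg * S_goal
    rng = S_f.max() - S_f.min()
    # rescale to [0, 1] so fused maps are comparable across images (assumed step)
    return (S_f - S_f.min()) / rng if rng > 0 else S_f
```

The product keeps a region salient only when both channels agree, which is what lets the fusion suppress background responses of either single channel.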
(7) Diffuse:
(7a) divide all region superpixels of the input natural image into 8 classes using the K-means clustering algorithm;
(7b) diffuse the fused feature saliency map according to the following formula to obtain the diffused saliency map:
where S̄_i^t denotes the saliency value of the i-th region superpixel of class t after diffusion, the value range of i is [1, K], K denotes the total number of region superpixels of class t, the value range of t is [1, 8], S_i^t denotes the saliency value of the i-th region superpixel of class t, S_j^t denotes the saliency value of the j-th region superpixel of class t, the value range of j is [1, K] with j ≠ i, and w_ij denotes the degree of similarity between region superpixel i and region superpixel j;
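The exact diffusion formula referenced in step (7b) does not survive in the text. The sketch below shows one common form of such a within-cluster diffusion consistent with the variable definitions; the similarity-weighted averaging and the 0.5 mixing factor are assumptions:

```python
import numpy as np

def diffuse(S, labels, W):
    """Smooth each superpixel's saliency toward the similarity-weighted average
    of the other superpixels in its K-means class (one plausible reading of
    the claim, not a reproduction of its formula)."""
    S_new = S.copy()
    for t in np.unique(labels):
        idx = np.flatnonzero(labels == t)
        for a, i in enumerate(idx):
            others = np.delete(idx, a)          # the j != i superpixels of class t
            if len(others) == 0:
                continue
            w = W[i, others]
            if w.sum() > 0:
                S_new[i] = 0.5 * S[i] + 0.5 * (w @ S[others]) / w.sum()
    return S_new
```

Whatever its exact form, the diffusion spreads saliency among superpixels the clustering judged similar, which is what highlights whole objects and suppresses isolated background responses.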
(8) Output the diffused saliency map.
2. The collaborative learning saliency detection method based on dual-channel low-rank decomposition according to claim 1, characterized in that the superpixel segmentation method described in step (1b) comprises the following specific steps:
Step 1: initialize X cluster centres at a fixed interval S = √(M/X), and assign each initialized cluster centre an individual label in turn, where X = 200 and M denotes the total number of pixels of the input natural image;
Step 2: for each pixel, compute its similarity to the nearest cluster centre according to the following formulas:

d_LAB = √((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)
d_xy = √((x_k − x_i)² + (y_k − y_i)²)
D_i = d_LAB + (m/S)·d_xy

where d_LAB denotes the colour difference of two different pixels k and i in the LAB colour space, √ denotes the square-root operation, l_k denotes the lightness L value of pixel k in the LAB colour space, l_i denotes the lightness L value of pixel i in the LAB colour space, a_k denotes the red-to-green A value of pixel k in the LAB colour space, a_i denotes the red-to-green A value of pixel i in the LAB colour space, b_k denotes the yellow-to-blue B value of pixel k in the LAB colour space, b_i denotes the yellow-to-blue B value of pixel i in the LAB colour space, d_xy denotes the spatial distance between the two different pixels k and i, x_k and y_k denote the abscissa and ordinate of pixel k, x_i and y_i denote the abscissa and ordinate of pixel i, D_i denotes the similarity of pixel i and pixel k, m denotes the parameter balancing the colour difference and the spatial distance between the two pixels, with value range [1, 20], and S denotes the fixed interval used to initialize the cluster centres;
Step 3: assign the label of the cluster centre to the pixel with the maximum similarity value;
Step 4: iterate the above process continuously until every pixel has been assigned a corresponding label, and combine all pixels with the same label together to form the segmented regions.
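The distance measure of claim 2 in code, reconstructed from the variable definitions above; the m/S weighting follows the standard SLIC formulation, so treat the exact form as an assumption:

```python
import numpy as np

def slic_distance(p_k, p_i, m=10.0, S=20.0):
    """SLIC-style distance between pixel k and cluster centre i.

    Each pixel is a tuple (l, a, b, x, y); a smaller distance means a
    higher similarity in the sense of claim 2.
    """
    lk, ak, bk, xk, yk = p_k
    li, ai, bi, xi, yi = p_i
    d_lab = np.sqrt((lk - li) ** 2 + (ak - ai) ** 2 + (bk - bi) ** 2)  # colour term
    d_xy = np.sqrt((xk - xi) ** 2 + (yk - yi) ** 2)                    # spatial term
    return d_lab + (m / S) * d_xy
```

Larger m weights spatial compactness over colour homogeneity, which is why the claim bounds m to [1, 20].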
3. The collaborative learning saliency detection method based on dual-channel low-rank decomposition according to claim 1, characterized in that computing the hybrid similarity values between the non-"where" region superpixels and the "where"-region superpixels described in step (5a) to obtain the hybrid similarity matrix comprises the following specific steps:
Step 1: arbitrarily select a superpixel region from the non-"where" superpixel regions;
Step 2: compute the similarity values between the superpixel region selected from the non-"where" superpixel regions and each superpixel region of the "where" superpixel regions according to the following formula:

w_ij = e^(−||c_i − c_j||² / σ²)

where w_ij denotes the similarity value between region superpixel i and region superpixel j, the value range of i is [1, N_1], N_1 denotes the total number of non-"where" region superpixels, the value range of j is [1, N_2], N_2 denotes the total number of "where"-region superpixels, e denotes the exponential operation, ||·|| denotes the norm operation, c_i denotes the colour feature vector of region superpixel i in the LAB colour space, c_j denotes the colour feature vector of region superpixel j in the LAB colour space, and σ² denotes the parameter controlling the magnitude of the weight w_ij, with σ² = 0.02;
Step 3: combine the computed similarity values into a hybrid similarity vector;
Step 4: judge whether all superpixel regions of the non-"where" superpixel regions have been selected; if so, execute Step 5; otherwise, execute Step 1;
Step 5: combine all hybrid similarity vectors into the hybrid similarity matrix.
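Steps 1-5 of claim 3 collapse into one vectorized computation. The Gaussian form with a squared colour distance is the usual convention and is assumed here, since the claim's formula image does not survive in the text:

```python
import numpy as np

def hybrid_similarity(C_non, C_where, sigma_sq=0.02):
    """Gaussian colour similarity between superpixel sets.

    C_non:   (N1, 3) LAB colour features of the non-'where' superpixels;
    C_where: (N2, 3) LAB colour features of the 'where' superpixels.
    Returns the (N1, N2) hybrid similarity matrix.
    """
    diff = C_non[:, None, :] - C_where[None, :, :]   # pairwise colour differences
    dist_sq = (diff ** 2).sum(-1)
    return np.exp(-dist_sq / sigma_sq)
```

Passing the same array for both arguments yields the self-similarity matrix of claim 4.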
4. The collaborative learning saliency detection method based on dual-channel low-rank decomposition according to claim 1, characterized in that computing the self-similarity values of the non-"where" regions described in step (5b) to obtain the self-similarity matrix comprises the following specific steps:
Step 1: arbitrarily select a superpixel region from the non-"where" superpixel regions;
Step 2: compute the similarity values between the superpixel region selected from the non-"where" superpixel regions and each superpixel region of the non-"where" superpixel regions according to the following formula:

w_ij = e^(−||c_i − c_j||² / σ²)

where w_ij denotes the similarity value between region superpixel i and region superpixel j, the value ranges of i and j are [1, N_1], N_1 denotes the total number of non-"where" region superpixels, e denotes the exponential operation, ||·|| denotes the norm operation, c_i denotes the colour feature vector of region superpixel i in the LAB colour space, c_j denotes the colour feature vector of region superpixel j in the LAB colour space, and σ² denotes the parameter controlling the magnitude of the weight w_ij, with σ² = 0.02;
Step 3: combine the computed similarity values into a self-similarity vector;
Step 4: judge whether all superpixel regions of the non-"where" superpixel regions have been selected; if so, execute Step 5; otherwise, execute Step 1;
Step 5: combine all self-similarity vectors into the self-similarity matrix.
5. The collaborative learning saliency detection method based on dual-channel low-rank decomposition according to claim 1, characterized in that computing the feature values of the non-"where" region superpixels described in step (5d) comprises the following specific steps:
Step 1: compute the feature values of the non-"where" region superpixels according to the following formula:

f_u = (D_uu − W_uu)^(−1) W_ul f_l

where f_u denotes the feature values of the non-"where" region superpixels in the input natural image, D_uu denotes the diagonal matrix formed by the row sums of the similarity matrix of the non-"where" regions in the input natural image, W_uu denotes the similarity matrix formed by the non-"where" region superpixels in the input natural image, W_ul denotes the hybrid similarity matrix formed between the non-"where" regions and the "where" regions in the input natural image, and f_l denotes the feature values of the "where" regions in the input natural image;
Step 2: normalize the feature values of the non-"where" region superpixels to obtain the normalized feature values of the non-"where" region superpixels.
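The propagation formula of claim 5 in numpy. Building D_uu from the row sums of [W_uu | W_ul] (which keeps D_uu − W_uu invertible, as in the standard harmonic-function formulation) and the small ridge term eps are our assumptions:

```python
import numpy as np

def propagate(W_uu, W_ul, f_l, eps=1e-10):
    """f_u = (D_uu - W_uu)^{-1} W_ul f_l: propagate the known 'where' feature
    values f_l to the non-'where' superpixels, then normalize to [0, 1]."""
    # row sums over both the self-similarity and the hybrid-similarity blocks
    D_uu = np.diag(W_uu.sum(1) + W_ul.sum(1))
    f_u = np.linalg.solve(D_uu - W_uu + eps * np.eye(len(W_uu)), W_ul @ f_l)
    # Step 2 of the claim: normalization
    rng = f_u.max() - f_u.min()
    return (f_u - f_u.min()) / rng if rng > 0 else f_u
```

Each unlabeled superpixel thus receives a value that is a similarity-weighted average of its neighbours, with the "where" regions acting as boundary conditions.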
CN201610648091.9A 2016-08-09 2016-08-09 Cooperative Study conspicuousness detection method based on binary channels low-rank decomposition Active CN106296681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610648091.9A CN106296681B (en) 2016-08-09 2016-08-09 Cooperative Study conspicuousness detection method based on binary channels low-rank decomposition


Publications (2)

Publication Number Publication Date
CN106296681A CN106296681A (en) 2017-01-04
CN106296681B true CN106296681B (en) 2019-02-15

Family

ID=57667120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610648091.9A Active CN106296681B (en) 2016-08-09 2016-08-09 Cooperative Study conspicuousness detection method based on binary channels low-rank decomposition

Country Status (1)

Country Link
CN (1) CN106296681B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103326B (en) * 2017-04-26 2020-06-02 苏州大学 Collaborative significance detection method based on super-pixel clustering
CN107169959B (en) * 2017-05-08 2022-01-21 中国计量大学 White blood cell detection method based on background suppression and visual perception positive feedback
CN107392211B (en) * 2017-07-19 2021-01-15 苏州闻捷传感技术有限公司 Salient target detection method based on visual sparse cognition
CN107909079B (en) * 2017-10-11 2021-06-04 天津大学 Cooperative significance detection method
CN108009549B (en) * 2017-11-02 2021-06-04 天津大学 Iterative collaborative significance detection method
CN108304849A (en) * 2018-01-15 2018-07-20 浙江理工大学 A kind of bird plumage color character extracting method
CN108229477B (en) * 2018-01-25 2020-10-09 深圳市商汤科技有限公司 Visual relevance identification method, device, equipment and storage medium for image
CN109509149A (en) * 2018-10-15 2019-03-22 天津大学 A kind of super resolution ratio reconstruction method based on binary channels convolutional network Fusion Features
CN109635809B (en) * 2018-11-02 2021-08-17 浙江工业大学 Super-pixel segmentation method for visual degradation image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101286237A (en) * 2008-05-22 2008-10-15 重庆大学 Movement target detection method based on visual sense bionics
CN101726498A (en) * 2009-12-04 2010-06-09 河海大学常州校区 Intelligent detector and method of copper strip surface quality on basis of vision bionics
CN102622761A (en) * 2012-04-13 2012-08-01 西安电子科技大学 Image segmentation method based on similarity interaction mechanism
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
US20140254922A1 (en) * 2013-03-11 2014-09-11 Microsoft Corporation Salient Object Detection in Images via Saliency
CN104680546A (en) * 2015-03-12 2015-06-03 安徽大学 Salient image target detection method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Saliency Detection via Dense and Sparse Reconstruction"; Xiaohui Li et al.; 2013 IEEE International Conference on Computer Vision; Dec. 2013; pp. 2976-2983
"Object detection method based on 'what' and 'where' information" (in Chinese); Tian Mei; Acta Electronica Sinica; Nov. 2007; vol. 35, no. 11; pp. 2055-2061
"Salient region detection in images based on the 'what' and 'where' pathways of the visual system" (in Chinese); Tian Mei et al.; Pattern Recognition and Artificial Intelligence; Apr. 2006; vol. 19, no. 2; pp. 155-159


Similar Documents

Publication Publication Date Title
CN106296681B (en) Cooperative Study conspicuousness detection method based on binary channels low-rank decomposition
Zoran et al. Learning ordinal relationships for mid-level vision
CN102708370B (en) Method and device for extracting multi-view angle image foreground target
CN105931180B (en) Utilize the irregular mosaic joining method of the image of significant information guidance
CN107944428B (en) Indoor scene semantic annotation method based on super-pixel set
CN105976378A (en) Graph model based saliency target detection method
CN112862792B (en) Wheat powdery mildew spore segmentation method for small sample image dataset
CN107844795A (en) Convolutional neural networks feature extracting method based on principal component analysis
Kekre et al. Improved CBIR using multileveled block truncation coding
CN107481236A (en) A kind of quality evaluating method of screen picture
CN113255915B (en) Knowledge distillation method, device, equipment and medium based on structured instance graph
Fang et al. Saliency-based stereoscopic image retargeting
CN103020993A (en) Visual saliency detection method by fusing dual-channel color contrasts
CN108388901B (en) Collaborative significant target detection method based on space-semantic channel
CN104715266B (en) The image characteristic extracting method being combined based on SRC DP with LDA
CN111739037B (en) Semantic segmentation method for indoor scene RGB-D image
CN103778430B (en) Rapid face detection method based on combination between skin color segmentation and AdaBoost
CN111046868A (en) Target significance detection method based on matrix low-rank sparse decomposition
Huang et al. A fully-automatic image colorization scheme using improved CycleGAN with skip connections
CN110689020A (en) Segmentation method of mineral flotation froth image and electronic equipment
Feng et al. Finding intrinsic color themes in images with human visual perception
CN107392211B (en) Salient target detection method based on visual sparse cognition
CN111179272B (en) Rapid semantic segmentation method for road scene
Nogales et al. ARQGAN: An evaluation of generative adversarial network approaches for automatic virtual inpainting restoration of Greek temples
CN107507263A (en) A kind of Texture Generating Approach and system based on image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant