CN107977948A - A saliency map fusion method for social images - Google Patents
A saliency map fusion method for social images
- Publication number
- CN107977948A (application number CN201710613716.2A)
- Authority
- CN
- China
- Prior art keywords
- saliency map
- image
- social image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The present invention provides a saliency map fusion method for social images. It takes training images as input and comprises the following steps: for each image I in D, extract saliency maps of the training image with m extraction methods, where D is the training set; compute AUC values; from the computations of steps 1 and 2, obtain the ranking table of extraction methods for each image, the set of ranking tables being T; perform a neighbor search in the training set; merge the results of step 4; fuse the saliency maps of the test image. The saliency map fusion method for social images proposed by the present invention is designed specifically around the characteristics of social images, and the performance after fusion is greatly improved over that of any single method before fusion.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to a saliency map fusion method for social images.
Background art
Image saliency detection aims to find the most important part of an image. It is an important preprocessing step that computer vision uses to reduce computational complexity, and it has wide applications in fields such as image compression, object recognition, and image segmentation. At the same time it remains a challenging problem in computer vision: existing methods each have their own strengths and weaknesses, and even a single saliency detection method can yield vastly different results on different images. A method that fuses the results of multiple saliency detection methods to obtain a better saliency map is therefore especially important. Some traditional saliency map fusion methods exist, but most of them simply sum and average several saliency maps, or simply multiply them and average. Such fusion treats all saliency maps equally, assigning the same weight to every saliency detection method, which is actually unreasonable: for a given image, and even for each individual pixel, different saliency detection methods produce different detection results, so each saliency detection method ought to be given a different weight. There is also some existing research on fusing several saliency maps; for example, Mai et al. fuse several saliency maps with a conditional random field (CRF) and obtain good results, but the recall of their method is unsatisfactory.
The study [L. Mai, Y. Niu, and F. Liu. Saliency Aggregation: A Data-Driven Approach. IEEE Computer Society, CVPR 2013, pages 1131-1138] shows that different extraction methods differ in extraction performance, and that even the same extraction method performs differently on different images. However, without a ground-truth binary annotation, judging the extraction quality of a saliency map — that is, selecting the well-extracted saliency maps among many for fusion — is extremely difficult, and research on this question is very scarce.
In the absence of ground-truth binary annotations, the document [Long M, Liu F. Comparing Salient Object Detection Results without Ground Truth [C] // European Conference on Computer Vision. Springer International Publishing, 2014: 76-91] fuses multiple saliency maps. That work defines six criteria for evaluating a saliency map: the coverage of the salient region, the compactness of the saliency map, the saliency-map histogram, the color separability of the salient region, the segmentation quality of the saliency map, and the boundary quality. Multiple saliency maps are ranked according to these six criteria, and the fused saliency map is finally obtained.
The invention patent application CN106570851A discloses a saliency map fusion method based on weighted-assignment D-S evidence theory, which addresses the effective fusion of the saliency maps obtained by multiple saliency detection methods. First, the saliency detection methods to be fused each generate their own saliency map. Second, each obtained saliency map is treated as a piece of evidence, and the frame of discernment and the mass function corresponding to each saliency detection method are defined from the obtained saliency maps. Then the similarity coefficients and the similarity matrix between the pieces of evidence are computed, from which the support and belief degree of each piece of evidence are obtained. The mass function values are then weighted-averaged using the belief degrees as weights to obtain one saliency map, and the weighted-averaged evidence is combined with the D-S combination rule to obtain another saliency map. Finally, the two saliency maps are weighted and summed again to obtain the final saliency map. This method weight-averages the mass functions, but because the mass functions are applied in the D-S combination rule, changes in the degree of conflict between the mass functions may affect the combined result, leaving the final saliency map unclear.
The patent application CN106780422A discloses a saliency map fusion method based on the Choquet integral, which addresses the effective fusion of the saliency maps obtained by multiple saliency detection methods. First, the saliency detection methods to be fused each generate their own saliency map. Second, the similarity coefficients and the similarity matrix between the saliency maps are computed, from which the support degree and confidence of each saliency map are obtained. The confidence of each saliency map is then used as the fuzzy measure in the Choquet integral. Meanwhile, the saliency maps to be fused are sorted at the pixel level, and the sorted discrete saliency values serve as the non-negative real-valued measurable function in the Choquet integral. Finally, the Choquet integral is evaluated to obtain the final saliency map. This method fuses saliency maps by means of the Choquet integral; the workload is larger, more computation is needed, and it is not very convenient to use.
Summary of the invention
To solve the above technical problems, the present invention proposes a saliency map fusion method for social images. The problem it solves is saliency map fusion for social images: the extraction methods are dynamically ranked per image based on tag semantics and image appearance, and the saliency maps are fused according to the ranking result.
The present invention provides a saliency map fusion method for social images, taking training images as input and comprising the following steps:
Step 1: for each image I in D, extract saliency maps of the training image with m extraction methods, where D is the training set;
Step 2: compute AUC values against the ground-truth binary annotation corresponding to image I;
Step 3: from the computations of steps 1 and 2, obtain the ranking table of extraction methods for each image, each ranking-table entry consisting of a method's serial number and the AUC value of that method's detected saliency map; the set of ranking tables is T;
Step 4: perform a neighbor search in the training set;
Step 5: merge the results of step 4;
Step 6: fuse the saliency maps of the test image.
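Under stated assumptions, the six steps can be sketched end to end. This is a minimal illustration, not the patent's implementation: the function names (`train`, `fuse_test_image`), the `auc` callback, the `extractors` list, and the flat-list representation of saliency maps are all placeholders introduced here.

```python
# Minimal end-to-end sketch of the six steps. All names are illustrative
# placeholders: `extractors` stands for the m/M saliency-extraction methods,
# `auc` for the AUC scoring function, `neighbors` for the merged result of
# the step-4 neighbor search. Saliency maps are flat lists of pixel values.

def train(images, ground_truth, extractors, auc):
    """Steps 1-3: build the ranking-table set T (per-image sorted AUC lists)."""
    T = {}
    for name, img in images.items():
        scores = [(auc(extract(img), ground_truth[name]), j)  # steps 1-2
                  for j, extract in enumerate(extractors)]
        T[name] = sorted(scores, reverse=True)                # step 3
    return T

def fuse_test_image(img, neighbors, T, extractors):
    """Steps 5-6: sum neighbor votes into weights, then fuse the maps."""
    M = len(extractors)
    w = [0.0] * M
    for n in neighbors:
        for score, j in T[n]:
            w[j] += score                 # each neighbor votes with its AUCs
    total = sum(w) or 1.0
    w = [v / total for v in w]            # normalized fusion weights
    maps = [extract(img) for extract in extractors]
    return [sum(w[j] * maps[j][p] for j in range(M))  # weighted sum per pixel
            for p in range(len(maps[0]))]
```

The `neighbors` argument stands in for the merged neighbor set of step 4; the two searches that produce it are detailed in the schemes that follow.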
Preferably, the extraction results of the various extraction methods are S = {S1, S2, S3, …, Si, …, SM}, where Si denotes the saliency map extracted by the i-th method.
In any of the above schemes, preferably, step 2 sets the ground-truth binary annotation corresponding to image I to G, and compares S with G to obtain the AUC values of the m extraction methods.
In any of the above schemes, preferably, the AUC values of the extraction methods are sorted to obtain the ranking table Ti.
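A minimal sketch of how such a ranking table Ti could be built. The Mann-Whitney rank form of AUC is assumed here (it equals the area under the ROC curve); the function names are illustrative, not from the patent.

```python
# Score each extractor's saliency map against the binary ground-truth mask
# with AUC, then sort to obtain the ranking table T_i. AUC is computed as
# the Mann-Whitney rank statistic, equivalent to the area under the ROC curve.
import numpy as np

def auc_score(saliency, mask):
    """AUC of continuous saliency values against a binary mask."""
    s = np.asarray(saliency, dtype=float).ravel()
    m = np.asarray(mask, dtype=bool).ravel()
    pos, neg = s[m], s[~m]
    if len(pos) == 0 or len(neg) == 0:
        return 0.5
    # fraction of (positive, negative) pairs ranked correctly; ties count 1/2
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def ranking_table(saliency_maps, mask):
    """Return [(auc, method_index)] sorted best-first, as in the table T_i."""
    scored = [(auc_score(s, mask), j) for j, s in enumerate(saliency_maps)]
    return sorted(scored, reverse=True)
```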
In any of the above schemes, preferably, the neighbor search comprises a neighbor search based on tag semantics and a neighbor search based on image appearance.
In any of the above schemes, preferably, step 4 sets a test-set image I with corresponding tag set T = {t1, t2, …, ti, …, tn}, and the number of neighbor images is set to k.
In any of the above schemes, preferably, the neighbor search based on tag semantics performs exact matching of the tags, the number of matches being y.
In any of the above schemes, preferably, x is set as the final number of neighbors, x ≤ k.
In any of the above schemes, preferably, when y ≥ k, the y images are ranked by the similarity of their appearance features, and the k nearest neighbors are chosen as the final tag-semantics nearest-neighbor set; then x = k.
In any of the above schemes, preferably, when y < k, then x = y, and the tag neighbor set is obtained, where T denotes that this set contains the neighbors found by the tag search.
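The exact matching rule and the y ≥ k / y < k branching can be sketched as follows. The tag-overlap criterion and the `appearance_rank` helper are assumptions introduced for illustration; the patent does not specify either.

```python
# Sketch of the tag-semantics neighbor search: exact tag matching, then the
# y >= k / y < k rule. `appearance_rank` stands for the appearance-similarity
# ordering used to trim y matches down to k; it is an assumed helper, and
# "exact matching" is read here as sharing at least one tag verbatim.

def label_neighbors(test_tags, train_tags, k, appearance_rank):
    """Return the tag-semantics neighbor set (at most k image names)."""
    test_set = set(test_tags)
    matches = [name for name, tags in train_tags.items()
               if test_set & set(tags)]          # exact tag matching
    y = len(matches)
    if y >= k:
        # rank the y matches by appearance similarity, keep the k nearest
        return appearance_rank(matches)[:k]
    return matches                               # y < k: keep all y (x = y)
```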
In any of the above schemes, preferably, the neighbor search based on image appearance uses, as the appearance feature, a 256-dimensional statistical histogram in the RGB color feature space, and computes distances with the χ² distance.
In any of the above schemes, preferably, the k nearest neighbors are chosen as the nearest-neighbor set of the appearance feature, where A denotes that this set contains the neighbors retrieved by appearance.
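A minimal sketch of the appearance feature and the χ² distance. The particular 8×8×4 quantization of R, G and B into 256 bins is an assumption made here for illustration; the patent only states that the feature is a 256-dimensional RGB histogram.

```python
# Sketch of the appearance neighbor search feature: a 256-bin RGB histogram
# per image and the chi-square distance between histograms. The 8x8x4
# channel quantization is one plausible reading of "256-dimensional
# statistical histogram", not the patent's specified binning.
import numpy as np

def rgb_histogram_256(image):
    """Normalized 256-bin color histogram (8x8x4 R/G/B levels, assumed)."""
    img = np.asarray(image, dtype=np.uint8).reshape(-1, 3)
    r = img[:, 0] >> 5            # 8 levels for R
    g = img[:, 1] >> 5            # 8 levels for G
    b = img[:, 2] >> 6            # 4 levels for B
    idx = (r << 5) | (g << 2) | b  # combined bin index in 0..255
    return np.bincount(idx, minlength=256) / len(idx)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```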
In any of the above schemes, preferably, step 5 merges the tag neighbor set and the appearance nearest-neighbor set to obtain Img = {Img1, Img2, …, Imgx, …, Imgx+k}.
In any of the above schemes, preferably, step 6 comprises the following sub-steps:
Step 61: compute the weight vector;
Step 62: normalize the weights;
Step 63: perform salient-region extraction on the test image I with the M extraction methods;
Step 64: fuse the saliency maps of the test image.
In any of the above schemes, preferably, step 61 sets the AUC value of each extraction method as the voting weight for that extraction method.
In any of the above schemes, preferably, step 61 further sums the voting weights to obtain the weight vector, where i denotes the i-th neighbor image, j denotes the j-th method, and M denotes the number of extraction methods.
In any of the above schemes, preferably, step 62 expresses the normalized weights as W = {w1, w2, …, wj, …, wM}, where wj is the weight of the j-th method.
In any of the above schemes, preferably, the salient-region extraction results are S(I) = {S1(I), S2(I), …, Sj(I), …, SM(I)}, where Sj(I) denotes the extraction result of the j-th extraction method.
In any of the above schemes, preferably, the fused saliency map is obtained by weighting the extraction results S(I) with the normalized weights W.
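Assuming the fused map is the weighted sum of the extraction results under the normalized weights W — consistent with the voting and normalization steps above, though the original formula image is not reproduced here — a minimal sketch:

```python
# Sketch of step 64 under the weighted-sum assumption:
# fused(I) = sum_j w_j * S_j(I), with the normalized weights W of step 62.
import numpy as np

def fuse(saliency_maps, weights):
    """Weighted-sum fusion of M saliency maps with normalized weights."""
    maps = np.asarray(saliency_maps, dtype=float)  # shape (M, H, W)
    w = np.asarray(weights, dtype=float)           # shape (M,)
    assert len(maps) == len(w)
    return np.tensordot(w, maps, axes=1)           # sum_j w_j * S_j
```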
The present invention proposes a saliency map fusion method for social images. Different extraction methods differ in extraction performance, and even the same extraction method performs differently on different images. The present invention therefore proposes a saliency map fusion method designed specifically around the characteristics of social images; the performance after fusion is greatly improved over that of any single method before fusion.
Brief description of the drawings
Fig. 1 is a flow chart of a preferred embodiment of the saliency map fusion method for social images according to the present invention.
Fig. 2 shows an image and its saliency map ranking diagram for the embodiment of Fig. 1.
Fig. 3 is the training process diagram of the embodiment of Fig. 1.
Fig. 4 is the testing process diagram of the embodiment of Fig. 1.
Fig. 5 is the PR curve plot of a preferred embodiment comparing FBS, the method of the present invention, with 29 popular methods.
Fig. 6 is the ROC curve plot of the embodiment of Fig. 5.
Fig. 7 is the visual effect diagram of a preferred embodiment comparing the present method with MDCL.
Fig. 8 is the visual effect diagram of a preferred embodiment comparing the present method with DRFI.
Embodiments
The present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
Embodiment one
In this embodiment, the method is divided into a training stage and a testing stage.
As shown in Fig. 1, the training stage has a training set D, corresponding ground-truth binary annotations G, and M extraction methods.
Step 100 is performed: for each image I in D, the saliency maps of the training image are extracted with the m extraction methods. The extraction results of the various methods are S = {S1, S2, S3, …, Si, …, SM}, where Si denotes the saliency map extracted by the i-th method. Step 110 is performed: the ground-truth binary annotation corresponding to image I is G. Si is compared with G and the AUC value (the area under the ROC curve) is computed; a large AUC value indicates that the extraction method performs well. The results of the extraction methods are sorted to obtain the ranking table Ti. Step 120 is performed: from the computations of steps 100 and 110, the ranking table of extraction methods is obtained for each image, the set of ranking tables being T.
In the testing stage, for a test-set image I with corresponding tag set T = {t1, t2, …, ti, …, tn}, the number of neighbor images is set to k.
Step 130 is performed: a neighbor search based on tag semantics is carried out in the training set. Although parent classes and subclasses are strongly correlated in their categorical definitions, many subclasses differ greatly in environment and shape, so the tags are matched exactly; the number of matches is y. x is the final number of neighbors, x ≤ k. If y ≥ k, the y images are ranked by the similarity of their appearance features, and the k nearest neighbors are chosen as the final tag-semantics nearest-neighbor set, i.e. x = k. If y < k, then x = y. The tag neighbor set is thereby obtained.
Step 140 is performed: a neighbor search based on image appearance is carried out in the training set. In the appearance neighbor search, the appearance feature is a 256-dimensional statistical histogram in the RGB color feature space, and distances are computed with the χ² distance. The k nearest neighbors are chosen as the nearest-neighbor set of the appearance feature.
Step 150 is performed: the two neighbor sets obtained from steps 130 and 140 (the tag neighbor set and the appearance nearest-neighbor set) are merged, yielding
Img = {Img1, Img2, …, Imgx, …, Imgx+k} (3).
Step 160 is performed: every neighbor image has a corresponding saliency map ranking table. Using the set of ranking tables T obtained in step 120, the AUC value of each extraction method serves as the voting weight for that extraction method; summing the voting weights yields the weight vector, where i denotes the i-th neighbor image, j denotes the j-th method, and M denotes the number of extraction methods.
Step 170 is performed: the weights are normalized into the fusion weight of each extraction method; the normalized weights are expressed as W = {w1, w2, …, wj, …, wM} (5).
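Steps 160 and 170 — voting with AUC values and normalizing the summed votes — can be sketched as follows. The ranking tables are represented here as lists of (AUC, method-index) pairs; the function name is illustrative.

```python
# Sketch of steps 160-170: each neighbor image votes for each extraction
# method with that method's AUC value from its ranking table; the summed
# votes are then normalized into the fusion weights W.

def vote_weights(neighbor_tables, M):
    """neighbor_tables: per-neighbor lists of (auc, method_index) pairs."""
    w = [0.0] * M
    for table in neighbor_tables:      # one ranking table per neighbor image
        for auc, j in table:
            w[j] += auc                # method j gains that neighbor's AUC
    total = sum(w)
    return [v / total for v in w] if total else w
```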
Step 180 is performed: salient-region extraction is performed on the test image I with the M extraction methods; the extraction results are S(I) = {S1(I), S2(I), …, Sj(I), …, SM(I)}, where Sj(I) denotes the extraction result of the j-th extraction method.
Step 190 is performed: using the weights W computed in step 170, the saliency maps of the test image obtained in step 180 are fused to produce the fused saliency map.
Embodiment two
As shown in Fig. 2, four extraction methods are used.
Serial number 1 is the FT method (Achanta, R., Hemami, S., Estrada, F. and Susstrunk, S. (2009) 'Frequency-tuned salient region detection', Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 20-26 June, Miami, FL, USA, pp. 1597-1604);
Serial number 2 is the SEG method (Rahtu, E., Kannala, J., Salo, M. and Heikkila, J. (2010) 'Segmenting salient objects from images and videos', The European Conference on Computer Vision (ECCV), 5-11 September, Crete, Greece, pp. 366-379);
Serial number 3 is the CB method (Jiang, H., Wang, J., Yuan, Z., Liu, T., Zheng, N. and Li, S. (2011) 'Automatic salient object segmentation based on context and shape prior', The British Machine Vision Conference (BMVC), 29 August-2 September, Dundee, Scotland, pp. 1-12);
Serial number 4 is the RC method (Ming-Ming Cheng, Guo-Xin Zhang, Niloy J. Mitra, Xiaolei Huang, Shi-Min Hu. Global Contrast based Salient Region Detection. IEEE International Conference on Computer Vision and Pattern Recognition, pages 409-416, 2011).
The data field of each head node holds an image, and its pointer field points to the ranking table of extraction methods. Each non-head node contains three fields: the first data field stores the AUC value of the corresponding method's extraction result, the second data field stores the serial number of the method (the example includes salient-region extraction methods 1, 2, 3 and 4), and the pointer field points to the next node.
In each chained list, the numeric field of the head node is the image, and its pointer field points to the ranking table of extraction methods, i.e. to the second node. Each subsequent node contains three fields: the first data field stores the AUC value of the corresponding method's extraction result, the second data field stores the serial number of that extraction method, and the pointer field points to the next node; the pointer field of the last node terminates the list. The five chained lists are:
Chained list 1 (head node 200, nodes 201-204): (0.88, method 2), (0.79, method 3), (0.67, method 1), (0.55, method 4).
Chained list 2 (head node 210, nodes 211-214): (0.79, method 1), (0.71, method 3), (0.59, method 2), (0.47, method 4).
Chained list 3 (head node 220, nodes 221-224): (0.93, method 1), (0.85, method 2), (0.76, method 4), (0.63, method 3).
Chained list 4 (head node 230, nodes 231-234): (0.67, method 2), (0.57, method 1), (0.49, method 4), (0.37, method 3).
Chained list 5 (head node 240, nodes 241-244): (0.73, method 3), (0.68, method 2), (0.54, method 4), (0.42, method 1).
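The chained lists of this embodiment are plain singly linked lists sorted by descending AUC. A minimal sketch, reproducing chained list 1 of the example (the `Node` class and `build_chain` builder are illustrative names, not from the patent):

```python
# Ranking tables as singly linked lists: each node carries the AUC value
# (first data field), the method serial number (second data field), and a
# pointer to the next node; None terminates the list, as the last node's
# pointer field does in the text.

class Node:
    def __init__(self, auc, method_id, nxt=None):
        self.auc = auc              # AUC value of the method's result
        self.method_id = method_id  # serial number of the extraction method
        self.nxt = nxt              # pointer field: next node or None

def build_chain(pairs):
    """Build the chain from (auc, method_id) pairs, best AUC first."""
    head = None
    for auc, mid in sorted(pairs):  # ascending; prepending yields descending
        head = Node(auc, mid, head)
    return head

# chained list 1: (0.88, method 2), (0.79, method 3), (0.67, method 1),
# (0.55, method 4)
chain1 = build_chain([(0.67, 1), (0.88, 2), (0.79, 3), (0.55, 4)])
```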
Embodiment three
A social image carries two kinds of information: the image itself and its tag semantics. In the training stage, a prior ranking of how well each extraction method performs on a given image can be obtained. The process flow is shown in Fig. 3. The tag semantic information 301 of social picture 300 is 'person' and 'grass', and the image content of social picture 300 is 302. The saliency maps of social picture 300 are extracted with the four extraction methods; 311, 312, 313 and 314 in the extraction results are the saliency maps obtained by the four extraction methods. Comparing the extracted saliency maps with the ground-truth binary map 320 yields the ranking diagram 330; 331, 332, 333 and 334 in ranking diagram 330 are the results of the FT, SEG, CB and RC methods respectively, sorted by AUC value. It can be seen that a larger AUC value indicates a better extracted saliency map.
Example IV
In the test stage, images in the training set similar to the test image are found; the extraction-performance rankings of the different methods on these similar images were obtained in the training stage. A voting scheme is applied to derive a performance ranking of the different methods for the test image, and the salient regions are fused according to the ranking result. The processing flow is shown in Fig. 4. The tag semantic information 401 of social image 400 is "person", and the image content information of social image 400 is 402. Four extraction methods are used to extract saliency maps of social image 400; the saliency maps 411, 412, 413 and 414 are the results of the four methods. Nearest-neighbor search finds the images 430, 440 and 450 in training set 420 that are similar to the test image. The extraction-performance ranking of the different methods on image 430, obtained in the training stage, is given by images 431, 432, 433 and 434; the ranking for image 440 is given by images 441, 442, 443 and 444; and the ranking for image 450 is given by images 451, 452, 453 and 454. Voting over these rankings yields the performance ranking of the different methods for the test image, giving the ranking result 460; within it, the image content information 402 and images 461, 462, 463 and 464 are shown in order of decreasing AUC value. The salient regions are fused according to the ranking result, producing the fused map 470.
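The voting step over the neighbors' rankings can be sketched as a Borda count: each neighbor's ranking awards more points to better-ranked methods, and the totals define the consensus ranking for the test image. This is one concrete reading of the patent's "voting idea"; the method names and per-neighbor orderings below are illustrative only.

```python
from collections import defaultdict

def vote_ranking(neighbor_rankings):
    """Aggregate per-neighbor method rankings (best first) by Borda count:
    the method ranked r-th among M methods earns M - r points per neighbor."""
    scores = defaultdict(int)
    for ranking in neighbor_rankings:
        m = len(ranking)
        for pos, method in enumerate(ranking):
            scores[method] += m - pos
    # Highest total score first; ties broken by first appearance (stable sort).
    return sorted(scores, key=lambda meth: -scores[meth])

# Three neighbor images, four methods; each list is best-to-worst for that neighbor.
neighbors = [
    ["RC", "CB", "FT", "SEG"],
    ["RC", "FT", "CB", "SEG"],
    ["CB", "RC", "SEG", "FT"],
]
consensus = vote_ranking(neighbors)
print(consensus)  # ['RC', 'CB', 'FT', 'SEG']
```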
Embodiment five
For the quantitative performance evaluation, the following widely used evaluation metrics are adopted:
(1) precision-recall curve (PR curve);
(2) F-measure value;
(3) receiver operating characteristic curve (ROC curve);
(4) AUC value (the area under the ROC curve);
(5) mean absolute error (MAE).
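Of the listed metrics, MAE has the simplest form: the mean absolute per-pixel difference between the normalized saliency map and the binary ground truth. A minimal sketch (the toy pixel values are our own, not from the evaluation):

```python
def mae(saliency, gt):
    """Mean absolute error between a saliency map and a binary ground-truth
    mask, both flattened to equal-length sequences of values in [0, 1]."""
    assert len(saliency) == len(gt)
    return sum(abs(s - g) for s, g in zip(saliency, gt)) / len(gt)

# Four-pixel example: errors 0.1, 0.2, 0.2, 0.1 average to 0.15.
print(round(mae([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]), 4))  # 0.15
```

Lower MAE is better, complementing AUC, which rewards correct ranking of pixels rather than calibrated values.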
The method of the present invention is referred to as FBS. The PR curves and ROC curves are shown in Fig. 5 and Fig. 6. It can be seen that the PR curve and ROC curve of FBS lie above those of all the other methods.
Embodiment six
Several typical images were selected for a visual comparison between the FBS method and the MDCL method, as shown in Fig. 7: the first column is the original image, the second column the ground-truth binary annotation, the third column the saliency map obtained by the FBS method, and the fourth column the saliency map obtained by the MDCL method. For the first and fourth images, the result extracted by FBS is more complete and better highlighted, while the result of MDCL is comparatively blurry; for the second and third images, the result extracted by FBS is likewise more complete and better highlighted, while the result of MDCL is incomplete. It can be seen that the regions extracted after fusion are more complete than those extracted from a single deep feature, and details are handled better.
Embodiment seven
Several typical images were selected for a visual comparison between the FBS method and the DRFI method, as shown in Fig. 8: the first column is the original image, the second column the ground-truth binary annotation, the third column the saliency map obtained by the FBS method, and the fourth column the saliency map obtained by the DRFI method. It can be seen that for the 2nd and 3rd images the result obtained by FBS is more complete than that of DRFI, whose result shows missing regions; for the 1st and 4th images, the result obtained by FBS is complete with sharp boundaries, while the result of DRFI contains non-salient regions, i.e. false detections.
The present invention has been described in detail above with reference to specific embodiments for better understanding, but these embodiments do not limit the invention. Any simple modification made to any of the above embodiments in accordance with the technical spirit of the present invention still falls within the scope of the technical solution of the present invention. Each embodiment in this specification emphasizes its differences from the other embodiments; for the same or similar parts, the embodiments may be cross-referenced. Since the system embodiments substantially correspond to the method embodiments, their description is relatively brief, and the relevant parts may refer to the description of the method embodiments.
The methods, devices and systems of the present invention may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware and firmware. The order of the method steps described above is merely illustrative; the steps of the method of the present invention are not limited to that order unless otherwise specified. Furthermore, in some embodiments the present invention may also be embodied as a program recorded on a recording medium, the program comprising machine-readable instructions for implementing the method according to the invention; the invention thus also covers a recording medium storing a program for executing the method according to the invention.
The description of the invention is provided by way of example and explanation, and is not intended to be exhaustive or to limit the invention to the disclosed form. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were selected and described to best explain the principles of the invention and its practical application, enabling those of ordinary skill in the art to understand the invention and to design various embodiments with various modifications suited to particular uses.
Claims (10)
1. A saliency map fusion method for social images, comprising inputting training images, characterized by comprising the following steps:
Step 1: for each image I in D, extract saliency maps of the training image using m extraction methods, where D is the training set;
Step 2: compute AUC values according to the ground-truth binary annotation corresponding to the image I;
Step 3: following the computations of Step 1 and Step 2, obtain the ranking table of the extraction methods for each image, each ranking table consisting of the sequence numbers of the methods and the AUC values of the saliency maps detected by those methods; the set of ranking tables is denoted T;
Step 4: perform nearest-neighbor search in the training set;
Step 5: fuse the results of Step 4;
Step 6: fuse the saliency maps of the test image.
2. The saliency map fusion method for social images according to claim 1, characterized in that: the extraction results of the several extraction methods are S = {S1, S2, S3, …, Si, …, SM}, where Si denotes the saliency map extracted by the i-th method.
3. The saliency map fusion method for social images according to claim 2, characterized in that: in Step 2, the ground-truth binary annotation corresponding to the image I is denoted G, and S is compared with G to obtain the AUC values of the m extraction methods.
4. The saliency map fusion method for social images according to claim 3, characterized in that: the AUC values of the extraction methods are sorted to obtain the ranking table Ti.
5. The saliency map fusion method for social images according to claim 1, characterized in that: the nearest-neighbor search comprises nearest-neighbor search based on tag semantics and nearest-neighbor search based on image appearance.
6. The saliency map fusion method for social images according to claim 5, characterized in that: in Step 4, a test-set image I is given with corresponding tag set T = {t1, t2, …, ti, …, tn}, and the number of neighbor images is set to k.
7. The saliency map fusion method for social images according to claim 6, characterized in that: the nearest-neighbor search based on tag semantics performs exact matching of tags, the number of matched images being y.
8. The saliency map fusion method for social images according to claim 7, characterized in that: x is the final number of neighbors, with x ≤ k.
9. The saliency map fusion method for social images according to claim 7, characterized in that: when y ≥ k, the y images are sorted by the similarity of their appearance features, and the k nearest neighbors are selected as the final tag-semantic nearest-neighbor set, so that x = k.
10. The saliency map fusion method for social images according to claim 9, characterized in that: when y < k, then x = y, and the tag-neighbor set is formed accordingly, where T denotes that this set consists of the neighbors obtained by tag retrieval.
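The case split of claims 8-10 can be sketched as follows. `appearance_sim` is a hypothetical scoring function standing in for the appearance features the claims assume; the image identifiers and similarity values are toy data.

```python
def select_neighbors(tag_matches, k, appearance_sim):
    """Claims 8-10: y = number of exact tag matches.
    If y >= k, keep the k most appearance-similar matches (x = k);
    otherwise keep all tag matches (x = y < k)."""
    y = len(tag_matches)
    if y >= k:
        ranked = sorted(tag_matches, key=appearance_sim, reverse=True)
        return ranked[:k]          # x = k
    return list(tag_matches)       # x = y

# Toy usage: images identified by id, similarity from a made-up lookup table.
sims = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 0.2}
chosen = select_neighbors(["a", "b", "c", "d"], k=2,
                          appearance_sim=lambda img: sims[img])
print(chosen)  # ['a', 'c']
```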
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710613716.2A CN107977948B (en) | 2017-07-25 | 2017-07-25 | Salient map fusion method facing community image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710613716.2A CN107977948B (en) | 2017-07-25 | 2017-07-25 | Salient map fusion method facing community image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107977948A true CN107977948A (en) | 2018-05-01 |
CN107977948B CN107977948B (en) | 2019-12-24 |
Family
ID=62012334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710613716.2A Active CN107977948B (en) | 2017-07-25 | 2017-07-25 | Salient map fusion method facing community image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107977948B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108711147A (en) * | 2018-05-11 | 2018-10-26 | 天津大学 | A kind of conspicuousness fusion detection algorithm based on convolutional neural networks |
CN110826573A (en) * | 2019-09-16 | 2020-02-21 | 北京联合大学 | Saliency map fusion method and system |
CN110866523A (en) * | 2019-10-25 | 2020-03-06 | 北京联合大学 | Saliency map fusion method and system |
CN111626306A (en) * | 2019-03-25 | 2020-09-04 | 北京联合大学 | Saliency map fusion method and system |
CN111666952A (en) * | 2020-05-22 | 2020-09-15 | 北京联合大学 | Salient region extraction method and system based on label context |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101388022A (en) * | 2008-08-12 | 2009-03-18 | 北京交通大学 | Web portrait search method for fusing text semantic and vision content |
CN106570851A (en) * | 2016-10-27 | 2017-04-19 | 大连理工大学 | Weighted assignment D-S (Dempster-Shafer) evidence theory-based salient map fusion method |
CN106780422A (en) * | 2016-12-28 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | A kind of notable figure fusion method based on Choquet integrations |
2017-07-25: application CN201710613716.2A filed (CN), granted as CN107977948B, status Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101388022A (en) * | 2008-08-12 | 2009-03-18 | 北京交通大学 | Web portrait search method for fusing text semantic and vision content |
CN106570851A (en) * | 2016-10-27 | 2017-04-19 | 大连理工大学 | Weighted assignment D-S (Dempster-Shafer) evidence theory-based salient map fusion method |
CN106780422A (en) * | 2016-12-28 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | A kind of notable figure fusion method based on Choquet integrations |
Non-Patent Citations (2)
Title |
---|
James W. Davis et al.: "Fusion-Based Background-Subtraction using Contour Saliency", IEEE Workshop on Object Tracking and Classification Beyond the Visible Spectrum *
Long Mai et al.: "Saliency Aggregation: A Data-driven Approach", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108711147A (en) * | 2018-05-11 | 2018-10-26 | 天津大学 | A kind of conspicuousness fusion detection algorithm based on convolutional neural networks |
CN111626306A (en) * | 2019-03-25 | 2020-09-04 | 北京联合大学 | Saliency map fusion method and system |
CN111626306B (en) * | 2019-03-25 | 2023-10-13 | 北京联合大学 | Saliency map fusion method and system |
CN110826573A (en) * | 2019-09-16 | 2020-02-21 | 北京联合大学 | Saliency map fusion method and system |
CN110826573B (en) * | 2019-09-16 | 2023-10-27 | 北京联合大学 | Saliency map fusion method and system |
CN110866523A (en) * | 2019-10-25 | 2020-03-06 | 北京联合大学 | Saliency map fusion method and system |
CN111666952A (en) * | 2020-05-22 | 2020-09-15 | 北京联合大学 | Salient region extraction method and system based on label context |
CN111666952B (en) * | 2020-05-22 | 2023-10-24 | 北京腾信软创科技股份有限公司 | Label context-based salient region extraction method and system |
Also Published As
Publication number | Publication date |
---|---|
CN107977948B (en) | 2019-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jiang et al. | Saliency detection via absorbing markov chain | |
CN107977948A (en) | A kind of notable figure fusion method towards sociogram's picture | |
CN105844283B (en) | Method, image search method and the device of image classification ownership for identification | |
CN109670528B (en) | Data expansion method facing pedestrian re-identification task and based on paired sample random occlusion strategy | |
Shahrian et al. | Improving image matting using comprehensive sampling sets | |
CN110363134B (en) | Human face shielding area positioning method based on semantic segmentation | |
CN108388905B (en) | A kind of Illuminant estimation method based on convolutional neural networks and neighbourhood context | |
CN104281572B (en) | A kind of target matching method and its system based on mutual information | |
CN102385592B (en) | Image concept detection method and device | |
CN109614508A (en) | A kind of image of clothing searching method based on deep learning | |
CN108805900A (en) | A kind of determination method and device of tracking target | |
CN106548169A (en) | Fuzzy literal Enhancement Method and device based on deep neural network | |
CN108154159A (en) | A kind of method for tracking target with automatic recovery ability based on Multistage Detector | |
CN110210567A (en) | A kind of image of clothing classification and search method and system based on convolutional neural networks | |
CN106056122A (en) | KAZE feature point-based image region copying and pasting tampering detection method | |
Zhang et al. | Multi-features integration based hyperspectral videos tracker | |
CN108734200A (en) | Human body target visible detection method and device based on BING features | |
Zhang et al. | Study of visual saliency detection via nonlocal anisotropic diffusion equation | |
Huo et al. | Semisupervised learning based on a novel iterative optimization model for saliency detection | |
CN111291818B (en) | Non-uniform class sample equalization method for cloud mask | |
CN112926429A (en) | Machine audit model training method, video machine audit method, device, equipment and storage medium | |
Mercovich et al. | Techniques for the graph representation of spectral imagery | |
CN108647703A (en) | A kind of type judgement method of the classification image library based on conspicuousness | |
CN107704509A (en) | A kind of method for reordering for combining stability region and deep learning | |
CN105102607A (en) | Image processing device, program, storage medium, and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||