CN111626306A - Saliency map fusion method and system - Google Patents
- Publication number
- CN111626306A CN111626306A CN201910229519.XA CN201910229519A CN111626306A CN 111626306 A CN111626306 A CN 111626306A CN 201910229519 A CN201910229519 A CN 201910229519A CN 111626306 A CN111626306 A CN 111626306A
- Authority
- CN
- China
- Prior art keywords
- saliency map
- image
- fusion
- saliency
- fusion method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides a saliency map fusion method and a saliency map fusion system, wherein the method comprises the following steps: preparing a training set; and retrieving neighbors of a test image X in the training set and fitting the saliency map of X from the saliency maps of the neighbor images to obtain the final saliency map. The method and system take into account the differing extraction effects of different extraction methods on different images, and the fused performance is greatly improved over that of any single method before fusion.
Description
Technical Field
The invention relates to the fields of computer vision and image processing, and in particular to a saliency map fusion method and a saliency map fusion system.
Background
Image saliency detection aims to find the most important part of an image. It is an important preprocessing step for reducing computational complexity in computer vision and has wide application in image compression, target recognition, image segmentation, and other fields. At the same time it remains a challenging problem: existing methods each have their own advantages and disadvantages, and even the same saliency detection method produces very different results on different pictures. It is therefore particularly important to fuse the results of multiple saliency detection methods to obtain a better saliency map. Traditional saliency map fusion methods mostly take a simple additive or multiplicative average of the individual maps. Such fusion treats all saliency maps equally and assigns every saliency detection method the same weight. This is unreasonable in practice: for a given image, and even for a given pixel, the detection methods differ in effectiveness, so their weights should be set differently. Some methods for fusing multiple saliency maps do exist; for example, Mai et al. fuse multiple saliency maps with Conditional Random Fields (CRF) to good effect, but the recall rate remains unsatisfactory.
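The equal-weight fusion criticized above can be sketched in a few lines of NumPy. This is an illustration only, not part of the patent, and the toy map values are hypothetical:

```python
import numpy as np

def average_fusion(saliency_maps):
    """Pixel-wise arithmetic mean: every detector gets the same weight."""
    return np.mean(np.stack(saliency_maps, axis=0), axis=0)

def product_fusion(saliency_maps):
    """Pixel-wise multiplicative fusion, renormalized to [0, 1]."""
    fused = np.prod(np.stack(saliency_maps, axis=0), axis=0)
    peak = fused.max()
    return fused / peak if peak > 0 else fused

# Two toy 2x2 saliency maps from two hypothetical detectors
m1 = np.array([[0.2, 0.8], [0.4, 0.6]])
m2 = np.array([[0.6, 0.4], [0.4, 1.0]])
avg = average_fusion([m1, m2])  # every pixel weighted equally
```

As the passage notes, such fusion ignores that one detector may be reliable on a given image while another is not; the invention replaces the fixed equal weights with image-dependent ones.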
Research [L. Mai, Y. Niu, and F. Liu, "Saliency Aggregation: A Data-driven Approach," IEEE Computer Society, CVPR 2013, pages 1131-1138] shows that different extraction methods differ in extraction performance, and that even the same extraction method performs differently on different images. However, without reference binary labels it is very difficult to judge the extraction quality of a saliency map, that is, to select the well-extracted maps from a set of candidates for fusion, and research on this problem is very rare.
The document [Mai L, Liu F. Comparing Salient Object Detection Results without Ground Truth [C]. European Conference on Computer Vision. Springer International Publishing, 2014: 76-91] fuses multiple saliency maps without reference binary labels. That work defines six criteria for evaluating a good saliency map: coverage of the salient region, compactness of the saliency map, the saliency map histogram, color separability of the salient region, segmentation quality, and boundary quality of the saliency map. The candidate maps are ranked according to these six criteria, and the ranked maps are fused to obtain the final saliency map. The method requires a large amount of computation and its processing pipeline is complicated.
The invention patent application CN106570851A discloses a saliency map fusion method based on the weighted-distribution D-S evidence theory, addressing the effective fusion of saliency maps obtained from multiple saliency detection methods. First, each saliency detection method to be fused generates its own saliency map. Second, the obtained saliency maps are taken as evidence, and the recognition framework and mass function of each saliency detection method are defined from them. Then, the similarity coefficient and similarity matrix of each piece of evidence are calculated, yielding the degree of support and credibility of each piece of evidence. Next, the mass function values are weighted-averaged with the credibility as weights to obtain one saliency map, and the weighted-average evidence is combined using the D-S synthesis rule to obtain another. Finally, the two saliency maps are weighted and summed again to obtain the final saliency map. This method weight-averages the mass functions, but applying them in the D-S synthesis rule means that changes in the degree of conflict among the mass functions can affect the synthesis result, so the final saliency map may be unclear.
The invention application CN106780422A discloses a saliency map fusion method based on the Choquet integral, addressing the effective fusion of saliency maps obtained from multiple saliency detection methods. First, each saliency detection method to be fused generates its own saliency map. Second, the similarity coefficients and similarity matrix among the saliency maps are calculated to obtain the degree of support and credibility of each saliency map. The credibility of each saliency map is then taken as the fuzzy measure in the Choquet integral. Meanwhile, the saliency maps to be fused are sorted at the pixel level, and the sorted discrete saliency values serve as the non-negative measurable function in the Choquet integral. Finally, the Choquet integral is evaluated to obtain the final saliency map. Using the Choquet integral for saliency map fusion involves a heavy workload and considerable computation, making the method inconvenient to use.
Disclosure of Invention
In order to solve this technical problem, the invention provides a saliency map fusion method that takes into account the differing extraction effects of different extraction methods on different images; the fused performance is greatly improved over that of any single method before fusion.
The first purpose of the invention is to provide a saliency map fusion method, which comprises the following steps:
step 1: preparing a training set;
step 2: and searching neighbors of the test image X in the training set, and fitting the saliency map of the test image X through the saliency map of the neighbor image to obtain a final saliency map.
Preferably, the training set includes a training image set D, the corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
In any of the above schemes, preferably, the step 2 includes the following sub-steps:
step 21: calculating the chi-square distance of the 256-dimensional color histograms of the test image X and the training set image;
step 22: retrieving the K nearest neighbor images {X_k}, where each neighbor image X_k has a corresponding reference binary label α_k and A_k = [a_k^1, ..., a_k^M] represents the detection results of the M methods on the neighbor image, with 1 ≤ k ≤ K;
step 23: calculating a vector beta;
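Steps 21 and 22 above (histogram-based neighbor retrieval) can be sketched as follows. This is a minimal illustration with assumed details: the patent does not specify its 256-bin color quantization, so plain intensity bins stand in here:

```python
import numpy as np

def color_histogram(img, bins=256):
    """256-dimensional histogram of pixel values, normalized to sum to 1.
    (Stand-in for the patent's 256-dimensional color histogram.)"""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    s = h.sum()
    return h / s if s > 0 else h.astype(float)

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def k_nearest_neighbors(test_hist, train_hists, k):
    """Indices of the k training images whose histograms are closest."""
    dists = [chi_square_distance(test_hist, h) for h in train_hists]
    return np.argsort(dists)[:k]
```

The retrieved indices then select the neighbor images X_k, their labels α_k, and their detection results A_k for the fitting step.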
In any of the above schemes, preferably, the step 23 is to calculate the vector β according to an objective function.
In any of the above schemes, preferably, the objective function is formulated as

min_β Σ_{k=1}^{K} ||A_k β − α_k||² + λ||β||²

where the first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term.
In any of the above solutions, it is preferable that the vector β varies with a variation of the scale parameter λ.
In any of the above schemes, preferably, the closed-form solution of the vector β is

β = (Σ_{k=1}^{K} P_k + λI)⁻¹ Σ_{k=1}^{K} B_k

where P_k and B_k are matrices related only to the k-th nearest-neighbor image, and I represents the identity matrix.
In any of the above aspects, it is preferable that the matrices P_k and B_k are obtained during training.
In any of the above solutions, preferably, the step 24 includes using the test image X and its corresponding M saliency maps a_X^1, ..., a_X^M; the saliency map obtained by fusion is calculated as

s = A_X β

where A_X = [a_X^1, ..., a_X^M] is the matrix of the M predicted saliency maps, β contains the fusion coefficients, and s represents the saliency map result obtained from the fusion.
In any of the above schemes, preferably, the step 24 reshapes the vector s into a matrix to obtain the final saliency map.
The second purpose of the invention is to provide a saliency map fusion system, which comprises the following modules:
a training set and an image fitting module;
the image fitting module is used for searching neighbors of the test image X in the training set and fitting the saliency map of the test image X through the saliency map of the neighbor image to obtain a final saliency map.
Preferably, the training set includes a training image set D, the corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
In any of the above schemes, preferably, the image fitting module works as follows:
step 21: calculating the chi-square distance of the 256-dimensional color histograms of the test image X and the training set image;
step 22: retrieving the K nearest neighbor images {X_k}, where each neighbor image X_k has a corresponding reference binary label α_k and A_k = [a_k^1, ..., a_k^M] represents the detection results of the M methods on the neighbor image, with 1 ≤ k ≤ K;
step 23: calculating a vector beta;
In any of the above schemes, preferably, the step 23 is to calculate the vector β according to an objective function.
In any of the above schemes, preferably, the objective function is formulated as

min_β Σ_{k=1}^{K} ||A_k β − α_k||² + λ||β||²

where the first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term.
In any of the above solutions, it is preferable that the vector β varies with a variation of the scale parameter λ.
In any of the above schemes, preferably, the closed-form solution of the vector β is

β = (Σ_{k=1}^{K} P_k + λI)⁻¹ Σ_{k=1}^{K} B_k

where P_k and B_k are matrices related only to the k-th nearest-neighbor image, and I represents the identity matrix.
In any of the above aspects, it is preferable that the matrices P_k and B_k are obtained during training.
In any of the above solutions, preferably, the step 24 includes using the test image X and its corresponding M saliency maps a_X^1, ..., a_X^M; the saliency map obtained by fusion is calculated as

s = A_X β

where A_X = [a_X^1, ..., a_X^M] is the matrix of the M predicted saliency maps, β contains the fusion coefficients, and s represents the saliency map result obtained from the fusion.
In any of the above schemes, preferably, the step 24 reshapes the vector s into a matrix to obtain the final saliency map.
The saliency map fusion method and the saliency map fusion system are simple in concept, are beneficial to developing salient-region extraction methods with high robustness, and improve the universality of the detection methods.
Drawings
Fig. 1 is a flow chart of a preferred embodiment of a saliency map fusion method according to the present invention.
FIG. 1A is a test image artwork of the embodiment shown in FIG. 1 according to the saliency map fusion method of the present invention.
FIG. 1B is a diagram of the nearest-neighbor images retrieved for the test image in the embodiment shown in FIG. 1 of the saliency map fusion method according to the present invention.
FIG. 1C is a saliency map of various methods of the embodiment shown in FIG. 1 of a saliency map fusion method according to the present invention.
Fig. 1D is a graph of the fusion results of the embodiment shown in fig. 1 of the saliency map fusion method according to the present invention.
FIG. 2 is a PR graph of one embodiment of performance comparison results of the saliency map fusion method according to the present invention.
Fig. 2A is a ROC graph of the embodiment shown in fig. 2 for a saliency map fusion method according to the present invention.
Fig. 3 is a comparison diagram of an embodiment of the visual effects of the saliency map fusion method according to the present invention.
Fig. 4 is a comparison diagram of another embodiment of the visual effects of the saliency map fusion method according to the present invention.
Fig. 5 is a block diagram of a preferred embodiment of a saliency map fusion system according to the present invention.
Detailed Description
The invention is further illustrated with reference to the figures and the specific examples.
Example one
Step 100 is executed: the training process, in which a training set is prepared comprising a training image set D, the corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods. Step 110 is executed: given a test image X as shown in FIG. 1A, the chi-square distance between the 256-dimensional color histograms of X and of each training set image is calculated. Step 120 is executed: the K nearest neighbors {X_k} are retrieved; each neighbor image X_k has a corresponding reference binary label α_k, and A_k = [a_k^1, ..., a_k^M] denotes the detection results of the M methods on that neighbor, where 1 ≤ k ≤ K, as shown in FIG. 1B. Step 130 is executed: under the above assumptions, the fusion problem is formulated as a ridge regression problem with the objective function

min_β Σ_{k=1}^{K} ||A_k β − α_k||² + λ||β||²

The first term is the reconstruction error between the fusion result and the reference binary labels, the second term is a regularization term, and the vector β varies with the scale parameter λ.
The closed-form solution for the vector β is

β = (Σ_{k=1}^{K} P_k + λI)⁻¹ Σ_{k=1}^{K} B_k

where P_k and B_k are matrices related only to the k-th nearest-neighbor image, these matrices can be obtained during training, and I represents the identity matrix. Step 140 is executed: using the test image X and its corresponding M saliency maps a_X^1, ..., a_X^M (as shown in FIG. 1C), the fused saliency map is calculated as

s = A_X β

where A_X = [a_X^1, ..., a_X^M] is the matrix of the M predicted saliency maps and s is the fused saliency result, which is reshaped into a matrix to give the final saliency map.
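The training and fusion steps above can be sketched as follows. This is an illustrative reconstruction, not the patent's verbatim implementation: it assumes the standard ridge-regression identities P_k = A_kᵀA_k and B_k = A_kᵀα_k, which the patent only describes as per-neighbor quantities precomputed during training:

```python
import numpy as np

def fit_fusion_weights(A_list, alpha_list, lam=1.0):
    """Solve min_beta sum_k ||A_k beta - alpha_k||^2 + lam * ||beta||^2.

    A_list:     K arrays of shape (n_pixels, M), the M vectorized saliency
                maps of each neighbor image.
    alpha_list: K arrays of shape (n_pixels,), the reference binary labels.
    """
    M = A_list[0].shape[1]
    P = sum(A.T @ A for A in A_list)                      # sum of P_k
    B = sum(A.T @ a for A, a in zip(A_list, alpha_list))  # sum of B_k
    return np.linalg.solve(P + lam * np.eye(M), B)

def fuse(A_test, beta):
    """Linear combination of the test image's M saliency maps."""
    return A_test @ beta
```

With a small λ, and neighbors whose labels really are a linear combination of their saliency maps, the recovered β approaches the true mixing weights; reshaping the output of `fuse` back to image dimensions gives the final saliency map.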
Example two
The application belongs to the technical field of computer vision and the field of image processing, and discloses a saliency map fusion method. The invention observes that the extraction performance of different extraction methods is different, and the extraction effect of different images is also different even if the same extraction method is used. The saliency map fusion method provided by the invention considers the difference of different extraction methods on the extraction effects of different images, and the fusion performance is greatly improved compared with that of a single method before fusion.
Due to individual differences of images, each method cannot guarantee that the extraction performance on each image is better than that of all other methods. In order to overcome the problem, the application provides an image-dependent saliency map fusion model, so that the methods complement each other, and the performance of the extraction result is further improved.
Since the detection performance varies from image to image, the fusion method should be image-dependent, i.e. the parameters of the fusion are adaptive and vary from image to image.
Assume there are M saliency extraction methods. For an input image X, they predict M saliency maps a_X^1, ..., a_X^M. The basic assumption of the fusion method is that the fusion result can be obtained as a linear combination of these saliency maps.
EXAMPLE III
In quantitative performance evaluation, the currently popular performance evaluation indexes are adopted:
(1) precision-recall curves (PR curves);
(2) receiver operating characteristic curves (ROC curves).
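For reference, a PR curve is traced by binarizing the saliency map at a sweep of thresholds and scoring each binarization against the ground truth, as sketched below. This helper is illustrative, not from the patent, and assumes maps normalized to [0, 1]:

```python
import numpy as np

def pr_curve(saliency, gt, thresholds=None):
    """Precision and recall of a saliency map against binary ground truth,
    binarizing the map at each threshold (this sweep traces the PR curve)."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 11)
    s = saliency.ravel()
    g = gt.ravel().astype(bool)
    precision, recall = [], []
    for t in thresholds:
        pred = s >= t
        tp = np.sum(pred & g)
        precision.append(tp / max(pred.sum(), 1))  # guard against 0-division
        recall.append(tp / max(g.sum(), 1))
    return np.array(precision), np.array(recall)
```

An ROC curve is computed analogously, with the false-positive rate replacing precision on one axis.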
the inventive method is abbreviated as FBS and PR graphs are shown in FIG. 2. by comparing it with the other 14 popular methods (HS, MR, DRFI, PCA, HM, GC, MC, DSR, SBF, BD, SMD, MCDL, LEGS and RFCN), it can be seen that the PR curve of FBS is higher than that of all other methods.
ROC curves are shown in FIG. 2A; by the same comparison with the 14 popular methods (HS, MR, DRFI, PCA, HM, GC, MC, DSR, SBF, BD, SMD, MCDL, LEGS, and RFCN), it can be seen that the ROC curve of FBS is higher than that of all other methods.
Example four
Typical images were selected for a visual comparison of the FBS and MCDL methods. As shown in FIG. 3, the images appear in the order: original image, reference binary annotation, FBS result, MCDL result. The MCDL method depends solely on deep-learning features; it can be seen that the arms, legs, and heads extracted by the FBS method are more complete, have clearer boundaries, and show better detail handling than those extracted by MCDL.
EXAMPLE five
Some typical images were selected for a visual comparison of the FBS and DRFI methods. As shown in FIG. 4, the images in each column appear in the order: original image, reference binary annotation, FBS extraction result, DRFI extraction result. The DRFI method depends solely on hand-crafted features; it can be seen that the fish, butterflies, and flowers extracted by the FBS method are more complete, have clearer boundaries, and show better detail handling than those extracted by DRFI.
EXAMPLE six
As shown in fig. 5, the saliency map fusion system includes a training set 500 and an image fitting module 510.
The training set 500 includes a training image set D, the corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
The image fitting module 510 works as follows:
step 21: the chi-squared distance of the 256-dimensional color histograms of the test image X and the training set image is calculated.
Step 22: the K nearest neighbor images {X_k} are retrieved; each neighbor image X_k has a corresponding reference binary label α_k, and A_k = [a_k^1, ..., a_k^M] represents the detection results of the M methods on the neighbor image, with 1 ≤ k ≤ K.
Step 23: the vector β is calculated according to the objective function

min_β Σ_{k=1}^{K} ||A_k β − α_k||² + λ||β||²

The first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term. The vector β varies with the scale parameter λ, and its closed-form solution is

β = (Σ_{k=1}^{K} P_k + λI)⁻¹ Σ_{k=1}^{K} B_k
where P_k and B_k are matrices related only to the k-th nearest-neighbor image, obtained during training, and I represents the identity matrix.
Step 24: using the test image X and its corresponding M saliency maps a_X^1, ..., a_X^M, the fused saliency map is calculated as

s = A_X β

where A_X = [a_X^1, ..., a_X^M] is the matrix of the M predicted saliency maps, β contains the fusion coefficients, and s represents the saliency map result obtained from the fusion. The vector s is reshaped into a matrix to obtain the final saliency map.
For a better understanding of the present invention, the foregoing detailed description has been given in conjunction with specific embodiments thereof, but not with the intention of limiting the invention thereto. Any simple modifications of the above embodiments according to the technical essence of the present invention still fall within the scope of the technical solution of the present invention. In the present specification, each embodiment is described with emphasis on differences from other embodiments, and the same or similar parts between the respective embodiments may be referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Claims (10)
1. A saliency map fusion method comprising the steps of:
step 1: preparing a training set;
step 2: and searching neighbors of the test image X in the training set, and fitting the saliency map of the test image X through the saliency map of the neighbor image to obtain a final saliency map.
2. The saliency map fusion method of claim 1, characterized by: the training set comprises a training image set D, the corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
3. The saliency map fusion method of claim 2, characterized by: the step 2 comprises the following substeps:
step 21: calculating the chi-square distance of the 256-dimensional color histograms of the test image X and the training set image;
step 22: retrieving the K nearest neighbor images {X_k}, where each neighbor image X_k has a corresponding reference binary label α_k and A_k = [a_k^1, ..., a_k^M] represents the detection results of the M methods on the neighbor image, with 1 ≤ k ≤ K;
step 23: calculating a vector beta;
4. The saliency map fusion method of claim 3, characterized by: the step 23 is to calculate the vector β according to an objective function.
6. The saliency map fusion method of claim 5, characterized in that: the vector β varies with the variation of the scale parameter λ.
8. The saliency map fusion method of claim 7, characterized by: the matrix Pk and BkObtained in the training.
9. The saliency map fusion method of claim 8, characterized by: the step 24 comprises using the test image X and its corresponding M saliency maps a_X^1, ..., a_X^M, the saliency map obtained by fusion being calculated as s = A_X β, where A_X is the matrix of the M predicted saliency maps and β is the vector of fusion coefficients.
10. A saliency map fusion system comprising the following modules:
a training set and an image fitting module;
the image fitting module is used for searching neighbors of the test image X in the training set and fitting the saliency map of the test image X through the saliency map of the neighbor image to obtain a final saliency map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910229519.XA CN111626306B (en) | 2019-03-25 | 2019-03-25 | Saliency map fusion method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910229519.XA CN111626306B (en) | 2019-03-25 | 2019-03-25 | Saliency map fusion method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111626306A true CN111626306A (en) | 2020-09-04 |
CN111626306B CN111626306B (en) | 2023-10-13 |
Family
ID=72260519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910229519.XA Active CN111626306B (en) | 2019-03-25 | 2019-03-25 | Saliency map fusion method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111626306B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5499010A (en) * | 1994-04-25 | 1996-03-12 | The Regents Of The University Of California | Braking light system for a vehicle |
US20020154833A1 (en) * | 2001-03-08 | 2002-10-24 | Christof Koch | Computation of intrinsic perceptual saliency in visual environments, and applications |
CN102054178A (en) * | 2011-01-20 | 2011-05-11 | 北京联合大学 | Chinese painting image identifying method based on local semantic concept |
CN103065326A (en) * | 2012-12-26 | 2013-04-24 | 西安理工大学 | Target detection method based on time-space multiscale motion attention analysis |
CN103810274A (en) * | 2014-02-12 | 2014-05-21 | 北京联合大学 | Multi-feature image tag sorting method based on WordNet semantic similarity |
CN104616316A (en) * | 2014-05-23 | 2015-05-13 | 苏州大学 | Method for recognizing human behavior based on threshold matrix and characteristics-fused visual word |
CN105631898A (en) * | 2015-12-28 | 2016-06-01 | 西北工业大学 | Infrared motion object detection method based on spatio-temporal saliency fusion |
CN106780422A (en) * | 2016-12-28 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | A kind of notable figure fusion method based on Choquet integrations |
CN107977948A (en) * | 2017-07-25 | 2018-05-01 | 北京联合大学 | A kind of notable figure fusion method towards sociogram's picture |
CN108694710A (en) * | 2018-04-18 | 2018-10-23 | 大连理工大学 | One kind being based on the notable figure fusion method of (N) fuzzy integral |
Non-Patent Citations (2)
Title |
---|
RICHARD JIANG et al., "Face Recognition in the Scrambled Domain via Salience-Aware Ensembles of Many Kernels", IEEE Transactions on Information Forensics and Security, pages 1807-1817 *
XU Mingwen et al., "Traffic light detection and recognition based on saliency features", Computer & Digital Engineering, pages 1397-1401 *
Also Published As
Publication number | Publication date |
---|---|
CN111626306B (en) | 2023-10-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||