CN111626306B - Saliency map fusion method and system - Google Patents

Saliency map fusion method and system

Info

Publication number
CN111626306B
CN111626306B (application CN201910229519.XA)
Authority
CN
China
Prior art keywords
saliency map
image
saliency
fusion
representing
Prior art date
Legal status
Active
Application number
CN201910229519.XA
Other languages
Chinese (zh)
Other versions
CN111626306A (en)
Inventor
梁晔
马楠
李大伟
孙晨昊
徐俊
张磊
周航
王楠
Current Assignee
Beijing Union University
Original Assignee
Beijing Union University
Priority date
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201910229519.XA
Publication of CN111626306A
Application granted
Publication of CN111626306B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a saliency map fusion method and system. The method comprises the following steps: preparing a training set; retrieving the neighbors of a test image X in the training set, and fitting the saliency map of the test image X from the saliency maps of the neighbor images to obtain the final saliency map. The method and system account for the differing extraction quality of different extraction methods on different images, and the fusion performance is greatly improved over that of any single method before fusion.

Description

Saliency map fusion method and system
Technical Field
The application relates to the fields of computer vision and image processing, and in particular to a saliency map fusion method and a saliency map fusion system.
Background
Image saliency detection aims to find the most important part of an image. It is an important preprocessing step for reducing computational complexity in computer vision and has wide application in image compression, object recognition, image segmentation, and related fields. At the same time, it remains a challenging problem: each detection method has its own strengths and weaknesses, and even the same saliency detection method can produce very different results on different pictures. A method that can integrate the results of multiple saliency detection methods into a better saliency map is therefore particularly important. Traditional saliency map fusion methods simply add and average, or multiply and average, the individual saliency maps. Such fusion treats all saliency maps equally and assigns every detection method the same weight, which is unreasonable in practice, because the detection quality of the various methods differs for each picture and even for each pixel, so the weights should be set differently. Other studies have also fused multiple saliency maps; for example, Mai et al. used a Conditional Random Field (CRF) to fuse multiple saliency maps, with good results but unsatisfactory recall.
The study [L. Mai, Y. Niu, and F. Liu. Saliency Aggregation: A Data-driven Approach. CVPR 2013, pages 1131-1138] shows that different extraction methods perform differently, and that even the same extraction method performs differently on different images. However, without reference binary labels, judging the quality of a saliency map, that is, selecting the well-extracted saliency maps from a set of candidates for fusion, is very difficult, and little research has addressed it.
Without reference binary labels, the literature [Mai L, Liu F. Comparing Salient Object Detection Results without Ground Truth [C]. European Conference on Computer Vision. Springer International Publishing, 2014: 76-91] fused multiple saliency maps. That work defines six criteria for evaluating a saliency map: coverage of the salient region, compactness of the saliency map, the saliency map histogram, color separability of the salient region, saliency map segmentation quality, and boundary quality. The saliency maps are ranked according to these six criteria, and the fused saliency map is then obtained. The method is computationally expensive and its processing pipeline is complex.
The patent application CN106570851A discloses a saliency map fusion method based on weighted-distribution D-S evidence theory, which addresses the effective fusion of saliency maps obtained by multiple saliency detection methods. First, the saliency detection methods to be fused each generate a saliency map. Second, each saliency map is treated as evidence, and the recognition framework and mass function corresponding to each saliency detection method are defined from it. Then the similarity coefficients and similarity matrix of the pieces of evidence are computed, giving the support degree and credibility of each piece of evidence. The mass function values are then weighted and averaged with the credibility as weights to obtain one saliency map. The weighted-average evidence is also synthesized with the D-S combination rule to yield another saliency map. Finally, the two saliency maps are again weighted and summed to obtain the final saliency map. The method uses a mass function for weighted averaging, but when the mass function is applied in the D-S combination rule, changes in the degree of conflict between mass functions may affect the combination, so the final saliency map can be unclear.
The patent application CN106780422A discloses a saliency map fusion method based on the Choquet integral, which addresses the effective fusion of saliency maps obtained by multiple saliency detection methods. First, the saliency detection methods to be fused each generate a saliency map. Second, the similarity coefficients and similarity matrix between the saliency maps are computed to obtain the support degree and credibility of each saliency map. The credibility of each saliency map is then used as the fuzzy measure in the Choquet integral. At the same time, the saliency maps to be fused are sorted at the pixel level, and the sorted discrete saliency values are used as the non-negative real-valued measurable function in the Choquet integral. Finally, the Choquet integral is evaluated to obtain the final saliency map. This Choquet-integral fusion involves a heavy workload, requires considerable computation, and is not very convenient to use.
Disclosure of Invention
To solve the above technical problems, the application provides a saliency map fusion method that accounts for the differing extraction quality of different extraction methods on different images; the fusion performance is greatly improved over that of any single method before fusion.
The first object of the present application is to provide a saliency map fusion method comprising the following steps:
step 1: prepare a training set;
step 2: retrieve the neighbors of the test image X in the training set, and fit the saliency map of the test image X from the saliency maps of the neighbor images to obtain the final saliency map.
Preferably, the training set includes a training image set D, a corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
In any of the above schemes, preferably, the step 2 includes the following substeps:
step 21: calculate the chi-square distance between the 256-dimensional color histograms of the test image X and each training set image;
step 22: after retrieval, obtain the K nearest neighbor images {X_k}, 1 ≤ k ≤ K; the reference binary label corresponding to each neighbor image X_k is denoted α_k, and A_k denotes the detection results of the M methods on that neighbor image;
step 23: calculate the vector β;
step 24: calculate the final saliency map Ŝ.
In any of the above schemes, it is preferable that the step 23 is to calculate the vector β according to an objective function.
In any of the above schemes, preferably, the objective function is formulated as:

$$\beta^{*}=\arg\min_{\beta}\sum_{k=1}^{K}\left\|A_{k}\beta-\alpha_{k}\right\|_{2}^{2}+\lambda\left\|\beta\right\|_{2}^{2}$$

where the first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term.
In any of the above schemes, it is preferable that the vector β varies with the variation of the scale parameter λ.
In any of the above schemes, preferably, the closed-form solution for the vector β is:

$$\beta=\Big(\sum_{k=1}^{K}P_{k}+\lambda I\Big)^{-1}\sum_{k=1}^{K}B_{k}$$

where P_k and B_k are matrices that depend only on the K nearest neighbor images, and I represents the identity matrix.
In any of the above schemes, preferably, the matrices P_k and B_k are obtained during training.
In any of the above embodiments, preferably, step 24 uses the test image X and its M corresponding saliency maps {S_1, S_2, ..., S_M}, and the fused saliency map is computed as:

$$\hat{s}=A\beta=\sum_{m=1}^{M}\beta_{m}a_{m}$$

where A = [a_1, a_2, ..., a_M] is the matrix whose columns a_m are the M predicted saliency maps in vectorized form, β = {β_1, β_2, ..., β_M} are the fusion coefficients, and ŝ is the fused saliency map result.
In any of the above aspects, preferably, step 24 reshapes the vector ŝ into a matrix to obtain the final saliency map Ŝ.
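Steps 21 and 22 above retrieve the training neighbors by comparing 256-dimensional color histograms with the chi-square distance. A minimal Python sketch of that retrieval is given below; it is an illustration only, not the patented implementation. The 8x8x4 quantization of the RGB cube (8 x 8 x 4 = 256 bins) and all function names are assumptions, since the application does not specify how the 256-dimensional histogram is built.

```python
import numpy as np

def color_histogram_256(image_rgb):
    """256-bin color histogram of an RGB image with values in 0..255.

    Hypothetical quantization: 8 levels for R, 8 for G, 4 for B (8*8*4 = 256 bins);
    the application only states that a 256-dimensional color histogram is used.
    """
    img = np.asarray(image_rgb, dtype=np.uint8)
    r = img[..., 0] // 32                      # 8 levels
    g = img[..., 1] // 32                      # 8 levels
    b = img[..., 2] // 64                      # 4 levels
    bins = (r.astype(int) * 8 + g) * 4 + b     # joint bin index in [0, 256)
    hist = np.bincount(bins.ravel(), minlength=256).astype(np.float64)
    return hist / max(hist.sum(), 1.0)         # L1-normalized histogram

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def retrieve_k_nearest(test_hist, train_hists, k):
    """Indices of the K training images whose histograms are closest to the test image."""
    dists = np.array([chi_square_distance(test_hist, h) for h in train_hists])
    return np.argsort(dists)[:k]
```

The returned indices would then be used to look up each neighbor's reference binary label α_k and its M saliency detection results A_k for step 23.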
A second object of the present application is to provide a saliency map fusion system, comprising the following modules:
a training set and an image fitting module,
wherein the image fitting module is configured to retrieve the neighbors of a test image X in the training set and to fit the saliency map of the test image X from the saliency maps of the neighbor images to obtain the final saliency map.
Preferably, the training set includes a training image set D, a corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
In any of the above schemes, preferably, the working method of the image fitting module includes the following steps:
step 21: calculate the chi-square distance between the 256-dimensional color histograms of the test image X and each training set image;
step 22: after retrieval, obtain the K nearest neighbor images {X_k}, 1 ≤ k ≤ K; the reference binary label corresponding to each neighbor image X_k is denoted α_k, and A_k denotes the detection results of the M methods on that neighbor image;
step 23: calculate the vector β;
step 24: calculate the final saliency map Ŝ.
In any of the above schemes, it is preferable that the step 23 is to calculate the vector β according to an objective function.
In any of the above schemes, preferably, the objective function is formulated as:

$$\beta^{*}=\arg\min_{\beta}\sum_{k=1}^{K}\left\|A_{k}\beta-\alpha_{k}\right\|_{2}^{2}+\lambda\left\|\beta\right\|_{2}^{2}$$

where the first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term.
In any of the above schemes, it is preferable that the vector β varies with the variation of the scale parameter λ.
In any of the above schemes, preferably, the closed-form solution for the vector β is:

$$\beta=\Big(\sum_{k=1}^{K}P_{k}+\lambda I\Big)^{-1}\sum_{k=1}^{K}B_{k}$$

where P_k and B_k are matrices that depend only on the K nearest neighbor images, and I represents the identity matrix.
In any of the above schemes, preferably, the matrices P_k and B_k are obtained during training.
In any of the above embodiments, preferably, step 24 uses the test image X and its M corresponding saliency maps {S_1, S_2, ..., S_M}, and the fused saliency map is computed as:

$$\hat{s}=A\beta=\sum_{m=1}^{M}\beta_{m}a_{m}$$

where A = [a_1, a_2, ..., a_M] is the matrix whose columns a_m are the M predicted saliency maps in vectorized form, β = {β_1, β_2, ..., β_M} are the fusion coefficients, and ŝ is the fused saliency map result.
In any of the above aspects, preferably, step 24 reshapes the vector ŝ into a matrix to obtain the final saliency map Ŝ.
The application provides a saliency map fusion method and system that are conceptually simple, support the development of highly robust salient-region extraction, and improve the generality of detection methods.
Drawings
Fig. 1 is a flow chart of a preferred embodiment of a saliency map fusion method according to the present application.
Fig. 1A is a diagram of a test image of the embodiment of fig. 1 in accordance with the saliency map fusion method of the present application.
Fig. 1B is a diagram of the neighbor images of the test image in the embodiment of Fig. 1 according to the saliency map fusion method of the present application.
Fig. 1C shows the saliency maps of multiple methods for the embodiment of Fig. 1 according to the saliency map fusion method of the present application.
Fig. 1D is a graph of fusion results for the embodiment of fig. 1 in accordance with the saliency map fusion method of this application.
Fig. 2 is a PR graph of an embodiment of performance comparison results of a saliency map fusion method according to the present application.
Fig. 2A is a ROC graph of the embodiment of fig. 2 in accordance with the saliency map fusion method of the present application.
Fig. 3 is a comparative diagram of one embodiment of the visual effect of the saliency map fusion method according to the present application.
Fig. 4 is a comparative diagram of another embodiment of the visual effect of the saliency map fusion method according to the present application.
Fig. 5 is a block diagram of a preferred embodiment of a saliency map fusion system according to the present application.
Detailed Description
The application is further illustrated by the following figures and specific examples.
Example 1
Step 100 is executed: in the training process, a training set is prepared, comprising a training image set D, the corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods. Step 110 is executed: given a test image X as shown in Fig. 1A, the chi-square distance between the 256-dimensional color histograms of the test image X and each training set image is calculated. Step 120 is executed: the K nearest neighbor images {X_k}, 1 ≤ k ≤ K, are retrieved; the reference binary label corresponding to each neighbor image X_k is denoted α_k, and A_k denotes the detection results of the M methods on that neighbor image, as shown in Fig. 1B. Step 130 is executed: based on the above assumption, the fusion problem is formulated as a ridge regression problem with the following objective function:
the first term is a reconstruction error of the fusion result and the reference binary label, the second term is a regular term, and the vector beta changes along with the change of the scale parameter lambda.
The closed-form solution for the vector β is:

$$\beta=\Big(\sum_{k=1}^{K}P_{k}+\lambda I\Big)^{-1}\sum_{k=1}^{K}B_{k}$$

where P_k and B_k are matrices that depend only on the K nearest neighbor images and can be obtained during training, and I represents the identity matrix. Step 140 is executed: using the test image X and its M corresponding saliency maps {S_1, S_2, ..., S_M} (as shown in Fig. 1C), the fused saliency map is computed as:
wherein ,matrix of saliency maps representing M predictions, < >>Representing saliency map->β={β 1 ,β 2 ,…,β M The } represents the fused coefficients,/->Representing the fused saliency map results to vector the vectorTransforming into a matrix to obtain the final saliency map +.>As in fig. 1D.
Example two
The application belongs to the technical fields of computer vision and image processing and discloses a saliency map fusion method. The application observes that different extraction methods perform differently, and that even the same extraction method performs differently on different images. The saliency map fusion method provided by the application accounts for these differences, and the fusion performance is greatly improved over that of any single method before fusion.
Because of individual differences between images, no method can guarantee extraction performance superior to all other methods on every image. To overcome this problem, the application provides an image-dependent saliency map fusion model, so that the methods complement each other and the extraction results are further improved.
Since the detection performance varies from image to image, the fusion method should be image dependent, i.e. the parameters of the fusion are adaptive, and vary from image to image.
Assume that there are M saliency extraction methods. For an input image X, M saliency maps {S_1, S_2, ..., S_M} are predicted. The basic assumption of the fusion method is that the fusion result can be obtained by a linear combination of the saliency maps:
$$\hat{s}=A\beta=\sum_{m=1}^{M}\beta_{m}a_{m}$$

where A = [a_1, a_2, ..., a_M] is the matrix whose columns a_m are the M predicted saliency maps in vectorized form, β = {β_1, β_2, ..., β_M} are the fusion coefficients, and ŝ is the fused saliency map result.
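As a toy numeric illustration of this linear-combination assumption (the values are chosen arbitrarily and do not come from the application), take M = 2 vectorized maps of two pixels each and coefficients β = (0.7, 0.3):

$$\hat{s}=0.7\begin{pmatrix}0.2\\0.8\end{pmatrix}+0.3\begin{pmatrix}0.6\\0.4\end{pmatrix}=\begin{pmatrix}0.32\\0.68\end{pmatrix}$$

Each pixel of the fused map is thus a weighted vote of the M detection methods, with image-dependent weights β.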
Example III
For quantitative performance evaluation, the currently popular performance evaluation indices are adopted:
(1) precision-recall curves (PR curves);
(2) receiver operating characteristic curves (ROC curves).
the method of the present application is abbreviated as FBS, and PR curve is shown in FIG. 2, and by comparing with other 14 popular methods (HS, MR, DRFI, PCA, HM, GC, MC, DSR, SBF, BD, SMD, MCDL, LEGS and RFCN), the PR curve of FBS can be seen to be higher than that of all other methods.
The ROC curves are shown in Fig. 2A; by comparison with the same 14 popular methods, the ROC curve of FBS is seen to be higher than that of all other methods.
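For reference, PR and ROC points of this kind are typically obtained by sweeping a threshold over the fused saliency map and comparing the resulting binary masks with the reference binary label. The sketch below is a generic illustration of that procedure, not the evaluation code behind Figs. 2 and 2A, and the function name is hypothetical.

```python
import numpy as np

def pr_roc_points(saliency, gt_mask, num_thresholds=256):
    """Precision, recall and false-positive-rate points from thresholding a saliency map.

    saliency : float array with values in [0, 1] (e.g. the fused saliency map).
    gt_mask  : boolean array of the same shape (reference binary label).
    """
    s = np.asarray(saliency, dtype=np.float64).ravel()
    g = np.asarray(gt_mask, dtype=bool).ravel()
    pos, neg = g.sum(), (~g).sum()
    precision, recall, fpr = [], [], []
    for t in np.linspace(0.0, 1.0, num_thresholds):
        pred = s >= t                              # binary mask at this threshold
        tp = np.logical_and(pred, g).sum()
        fp = np.logical_and(pred, ~g).sum()
        precision.append(tp / max(pred.sum(), 1))  # guard against empty predictions
        recall.append(tp / max(pos, 1))
        fpr.append(fp / max(neg, 1))
    return np.array(precision), np.array(recall), np.array(fpr)
```

Plotting recall against precision gives a PR curve, and the false positive rate against the true positive rate (recall) gives an ROC curve.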
Example IV
Typical images were selected for a visual comparison of the FBS method and the MCDL method, as shown in Fig. 3. The images are ordered as: original image, standard binary label, FBS result, MCDL result. The MCDL method relies purely on deep-learning features; it can be seen that the arms, legs, and heads of people extracted by the FBS method are more complete than those extracted by MCDL, with clearer boundaries and better-handled details.
Example five
Some typical images were selected for a visual comparison of the FBS method and the DRFI method, as shown in Fig. 4. The images in each column are ordered as: original image, standard binary label, FBS extraction result, and DRFI extraction result. The DRFI method relies purely on hand-crafted features; it can be seen that the fish, butterfly, and flower extracted by the FBS method are more complete, with clearer boundaries and better-handled details, than those extracted by DRFI.
Example six
As shown in fig. 5, the saliency map fusion system includes a training set 500 and an image fitting module 510.
The training set 500 includes a training image set D, a corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
The working method of the image fitting module 510 is as follows:
step 21: and calculating the chi-square distance of the 256-dimensional color histograms of the test image X and the training set image.
Step 22: after retrieval, obtain the K nearest neighbor images {X_k}, 1 ≤ k ≤ K; the reference binary label corresponding to each neighbor image X_k is denoted α_k, and A_k denotes the detection results of the M methods on that neighbor image.
Step 23: calculate the vector β from an objective function, formulated as:

$$\beta^{*}=\arg\min_{\beta}\sum_{k=1}^{K}\left\|A_{k}\beta-\alpha_{k}\right\|_{2}^{2}+\lambda\left\|\beta\right\|_{2}^{2}$$

where the first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term. The vector β varies with the scale parameter λ, and its closed-form solution is:

$$\beta=\Big(\sum_{k=1}^{K}P_{k}+\lambda I\Big)^{-1}\sum_{k=1}^{K}B_{k}$$

where P_k and B_k are matrices that depend only on the K nearest neighbor images and are obtained during training, and I represents the identity matrix.
Step 24: using the test image X and its M corresponding saliency maps {S_1, S_2, ..., S_M}, the fused saliency map is computed as:

$$\hat{s}=A\beta=\sum_{m=1}^{M}\beta_{m}a_{m}$$

where A = [a_1, a_2, ..., a_M] is the matrix whose columns a_m are the M predicted saliency maps in vectorized form, β = {β_1, β_2, ..., β_M} are the fusion coefficients, and ŝ is the fused saliency map result. The vector ŝ is reshaped into a matrix to obtain the final saliency map Ŝ.
The foregoing description of the application has been presented for purposes of illustration and description and is not intended to be limiting. Any simple modification of the above embodiments according to the technical substance of the present application still falls within the scope of the technical solution of the present application. In this specification, each embodiment is described mainly by its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the system embodiments essentially correspond to the method embodiments, their description is relatively brief, and the relevant points can be found in the description of the method embodiments.

Claims (8)

1. A saliency map fusion method comprising the steps of:
step 1: preparing a training set;
step 2: retrieving the neighbors of the test image X in the training set, and fitting the saliency map of the test image X from the saliency maps of the neighbor images to obtain a final saliency map, comprising the following sub-steps:
step 21: calculating chi-square distances of 256-dimensional color histograms of the test image X and the training set image;
step 22: k nearest neighbor obtained after retrievalThe corresponding standard binary label for each neighboring image Xk is denoted as ak,representing the detection results of M methods of the neighboring images, wherein K is more than or equal to 1 and less than or equal to K;
step 23: calculating a vector beta;
step 24: calculating the final saliency map Ŝ, wherein, using the test image X and its M corresponding saliency maps {S_1, S_2, ..., S_M}, the fused saliency map is calculated as:

$$\hat{s}=A\beta=\sum_{m=1}^{M}\beta_{m}a_{m}$$

where A = [a_1, a_2, ..., a_M] is the matrix whose columns a_m are the M predicted saliency maps in vectorized form, β = {β_1, β_2, ..., β_M} are the fusion coefficients, and ŝ is the fused saliency map result.
2. The saliency map fusion method of claim 1, wherein: the training set comprises a training image set D, a corresponding reference binary label set G, M extraction methods, and the saliency map extraction results A of the M extraction methods.
3. The saliency map fusion method of claim 2, wherein: the step 23 is to calculate the vector β according to an objective function.
4. A saliency map fusion method as claimed in claim 3, wherein: the objective function is formulated as:

$$\beta^{*}=\arg\min_{\beta}\sum_{k=1}^{K}\left\|A_{k}\beta-\alpha_{k}\right\|_{2}^{2}+\lambda\left\|\beta\right\|_{2}^{2}$$

wherein the first term is the reconstruction error between the fusion result and the reference binary labels, and the second term is a regularization term.
5. The saliency map fusion method of claim 4, wherein: the vector β varies with the variation of the scale parameter λ.
6. The saliency map fusion method of claim 5, wherein: the closed-form solution of the vector β is:

$$\beta=\Big(\sum_{k=1}^{K}P_{k}+\lambda I\Big)^{-1}\sum_{k=1}^{K}B_{k}$$

wherein P_k and B_k are matrices that depend only on the K nearest neighbor images, and I represents an identity matrix.
7. The saliency map fusion method of claim 6, wherein: the matrices P_k and B_k are obtained during training.
8. A saliency map fusion system comprising the following modules:
a training set and an image fitting module,
wherein the image fitting module is used for retrieving the neighbors of a test image X in the training set and fitting the saliency map of the test image X from the saliency maps of the neighbor images to obtain a final saliency map, and the working method of the image fitting module comprises the following sub-steps:
step 21: calculating chi-square distances of 256-dimensional color histograms of the test image X and the training set image;
step 22: k nearest neighbor obtained after retrievalEach neighboring image X k The corresponding standard binary value is denoted as alpha k ,/>Representing the detection results of M methods of the neighboring images, wherein K is more than or equal to 1 and less than or equal to K;
step 23: calculating a vector beta;
step 24: calculating the final saliency map Ŝ, wherein, using the test image X and its M corresponding saliency maps {S_1, S_2, ..., S_M}, the fused saliency map is calculated as:

$$\hat{s}=A\beta=\sum_{m=1}^{M}\beta_{m}a_{m}$$

where A = [a_1, a_2, ..., a_M] is the matrix whose columns a_m are the M predicted saliency maps in vectorized form, β = {β_1, β_2, ..., β_M} are the fusion coefficients, and ŝ is the fused saliency map result.
CN201910229519.XA 2019-03-25 2019-03-25 Saliency map fusion method and system Active CN111626306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910229519.XA CN111626306B (en) 2019-03-25 2019-03-25 Saliency map fusion method and system


Publications (2)

Publication Number Publication Date
CN111626306A CN111626306A (en) 2020-09-04
CN111626306B (en) 2023-10-13

Family

ID=72260519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910229519.XA Active CN111626306B (en) 2019-03-25 2019-03-25 Saliency map fusion method and system

Country Status (1)

Country Link
CN (1) CN111626306B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5499010A (en) * 1994-04-25 1996-03-12 The Regents Of The University Of California Braking light system for a vehicle
CN102054178A (en) * 2011-01-20 2011-05-11 北京联合大学 Chinese painting image identifying method based on local semantic concept
CN103065326A (en) * 2012-12-26 2013-04-24 西安理工大学 Target detection method based on time-space multiscale motion attention analysis
CN103810274A (en) * 2014-02-12 2014-05-21 北京联合大学 Multi-feature image tag sorting method based on WordNet semantic similarity
CN104616316A (en) * 2014-05-23 2015-05-13 苏州大学 Method for recognizing human behavior based on threshold matrix and characteristics-fused visual word
CN105631898A (en) * 2015-12-28 2016-06-01 西北工业大学 Infrared motion object detection method based on spatio-temporal saliency fusion
CN106780422A (en) * 2016-12-28 2017-05-31 深圳市美好幸福生活安全系统有限公司 A kind of notable figure fusion method based on Choquet integrations
CN107977948A (en) * 2017-07-25 2018-05-01 北京联合大学 A kind of notable figure fusion method towards sociogram's picture
CN108694710A (en) * 2018-04-18 2018-10-23 大连理工大学 One kind being based on the notable figure fusion method of (N) fuzzy integral

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020154833A1 (en) * 2001-03-08 2002-10-24 Christof Koch Computation of intrinsic perceptual saliency in visual environments, and applications


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Face Recognition in the Scrambled Domain via Salience-Aware Ensembles of Many Kernels; Richard Jiang et al.; IEEE Transactions on Information Forensics and Security; pp. 1807-1817 *
Traffic signal light detection and recognition based on saliency features; Xu Mingwen et al.; Computer & Digital Engineering; pp. 1397-1401 *

Also Published As

Publication number Publication date
CN111626306A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111177446B (en) Method for searching footprint image
CN111723675B (en) Remote sensing image scene classification method based on multiple similarity measurement deep learning
CN113449594B (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN108629783B (en) Image segmentation method, system and medium based on image feature density peak search
CN110322445B (en) Semantic segmentation method based on maximum prediction and inter-label correlation loss function
CN106157330B (en) Visual tracking method based on target joint appearance model
WO2021129145A1 (en) Image feature point filtering method and terminal
CN111340123A (en) Image score label prediction method based on deep convolutional neural network
CN112115963A (en) Method for generating unbiased deep learning model based on transfer learning
CN109471982B (en) Web service recommendation method based on QoS (quality of service) perception of user and service clustering
CN106874862B (en) Crowd counting method based on sub-model technology and semi-supervised learning
CN113420794B (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
CN109146925A (en) Conspicuousness object detection method under a kind of dynamic scene
CN109215003B (en) Image fusion method and device
CN111222546B (en) Multi-scale fusion food image classification model training and image classification method
CN113111716A (en) Remote sensing image semi-automatic labeling method and device based on deep learning
Zhou et al. Attention transfer network for nature image matting
CN115376159A (en) Cross-appearance pedestrian re-recognition method based on multi-mode information
CN112860936B (en) Visual pedestrian re-identification method based on sparse graph similarity migration
CN113420173A (en) Minority dress image retrieval method based on quadruple deep learning
CN110334226B (en) Depth image retrieval method fusing feature distribution entropy
CN111626306B (en) Saliency map fusion method and system
CN108765384B (en) Significance detection method for joint manifold sequencing and improved convex hull
CN116523877A (en) Brain MRI image tumor block segmentation method based on convolutional neural network
CN116704378A (en) Homeland mapping data classification method based on self-growing convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant