CN107977970A - An evaluation method for image saliency datasets - Google Patents

An evaluation method for image saliency datasets

Info

Publication number
CN107977970A
Authority
CN
China
Prior art keywords
image
dataset
salient region
saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611259133.6A
Other languages
Chinese (zh)
Other versions
CN107977970B (en)
Inventor
梁晔
李华丽
陈强
宋恒达
胡路明
蒋元
昝艺璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University
Priority to CN201611259133.6A
Publication of CN107977970A
Application granted
Publication of CN107977970B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a method for evaluating the performance of image saliency datasets, comprising the following steps: Step 1, compute the proportion of the whole image occupied by each salient region; Step 2, compute the fraction of the saliency dataset's salient regions that are connected to the image border; Step 3, compute the RGB color-feature difference between each salient region and the whole image; Step 4, from steps 1 to 3, compute a performance score for each saliency dataset. The invention gathers statistics on a dataset from several angles, so that the dataset's overall performance can be evaluated. This helps researchers develop objective, scientific saliency-detection algorithms and avoid designing low-robustness algorithms that merely cater to dataset bias.

Description

An evaluation method for image saliency datasets
Technical field
The present invention relates to the technical field of image processing, in particular to a performance-evaluation method for image saliency datasets.
Background art
Judging from the literature, image saliency datasets come from two sources: datasets built specifically for saliency research, and datasets extended from the image-segmentation field. At present, the image structure of many datasets is simple, and foreground and background differ obviously, for example in color, so the salient objects in the images are easier to detect. In addition, many saliency datasets carry an obvious center bias. Evaluation methods for the datasets themselves, however, are rare.
The paper "Visual Saliency Based on Scale-Space Analysis in the Frequency Domain" (TPAMI, 2012) constructed a dataset of 235 images and divided its salient objects by size: images containing large salient objects, images containing medium salient objects, and images containing small salient objects. However, this principle for judging salient-object size is quite subjective: no quantitative rule for object size is given, and the judgment is unconvincing.
Background priors are used repeatedly in saliency computation. Geodesic saliency detection [Y. Wei, F. Wen, W. Zhu, and J. Sun (2012), 'Geodesic saliency using background priors', ECCV, pp. 29-42] is a representative work; its main assumption is that the border region of an image is more likely to be background. The paper [W. Zhu, S. Liang, Y. Wei, and J. Sun (2014), 'Saliency optimization from robust background detection', CVPR, pp. 2814-2821] uses boundary connectivity as prior information to aid salient-object detection, making the algorithm more robust. In addition, the paper [H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li (2013), 'Salient object detection: A discriminative regional feature integration approach', CVPR, pp. 1-8] proposed the concept of backgroundness, which can be regarded as the opposite of objectness, measuring saliency from the reverse angle. The common problem of these works is that the algorithms fail when a salient object is connected to the image boundary. We therefore believe that salient objects connected to the boundary increase the difficulty of detection.
In numerous saliency-detection papers, contrast is regarded as the key to computing saliency. When the foreground object differs obviously in color from the background, the salient object is easier to detect; when the difference between the foreground object and the whole image is small, detection naturally becomes harder.
Summary of the invention
To solve the above technical problem, the present invention formulates the following three statistics to measure dataset performance: 1. the proportion of the whole image occupied by each salient region; 2. the number of salient regions connected to the image border and their fraction of all salient regions in the dataset; 3. the RGB color-feature difference between each salient region and the whole image. A performance score is then computed for each dataset, and the score indicates the quality of the dataset.
The present invention provides an evaluation method for image saliency dataset performance, comprising the following steps:
Step 1: compute the proportion of the whole image occupied by each salient region;
Step 2: compute the fraction of the saliency dataset's salient regions that are connected to the image border;
Step 3: compute the RGB color-feature difference between each salient region and the whole image;
Step 4: from steps 1 to 3, compute the performance score of each saliency dataset.
Preferably, the input of step 1 is the dataset D and its corresponding binary ground-truth maps S.
In any of the above schemes, preferably, the output of step 1 is, for each of 10 proportion grades, the percentage of all salient regions whose size falls in that grade.
In any of the above schemes, preferably, step 1 further comprises the following steps:
Step 101: for an image I in the dataset and its corresponding binary ground-truth map G, extract the set C of connected salient regions in G; the number of regions in C is m;
Step 102: let x_i denote the i-th salient region in image I, and compute the percentage d_i of the whole image it occupies: d_i = area(x_i) / area(I);
Step 103: determine the proportion grade j of d_i and set num_j = num_j + 1, 1 ≤ j ≤ 10; the initial value of num_j is 0;
Step 104: apply steps 101 to 103 to every image in dataset D and its corresponding binary ground-truth map;
Step 105: compute the fraction of all salient regions falling in each of the 10 proportion grades: p_j = num_j / Σ_{k=1}^{10} num_k.
In any of the above schemes, preferably, x_i ∈ C, 1 ≤ i ≤ m.
In any of the above schemes, preferably, the input of step 2 is the dataset D and its corresponding binary ground-truth maps S.
In any of the above schemes, preferably, the output of step 2 is the number of salient regions connected to the image border and their fraction of all salient regions in the dataset.
In any of the above schemes, preferably, step 2 further comprises the following steps:
Step 201: read I_j and its corresponding binary ground-truth map G, where image I_j ∈ D;
Step 202: extract the set C of connected salient regions in G; the number of regions in C is m; set sum = sum + m;
Step 203: when x_i is connected to the edge of the image, set num = num + 1;
Step 204: compute the fraction of all salient regions in the dataset that are connected to the image border: c = num / sum.
In any of the above schemes, preferably, num denotes the number of salient regions connected to the image border; the initial value of num is 0.
In any of the above schemes, preferably, sum denotes the number of salient regions; the initial value of sum is 0.
In any of the above schemes, preferably, the input of step 3 is the dataset D and its corresponding binary ground-truth maps S.
In any of the above schemes, preferably, the output of step 3 is the mean RGB color-feature difference between the dataset's salient regions and whole images.
In any of the above schemes, preferably, step 3 further comprises the following steps:
Step 301: read image I_j, where I_j ∈ D;
Step 302: compute the RGB color feature F_j of image I_j;
Step 303: read the binary ground-truth map G corresponding to I_j;
Step 304: extract the set C of connected salient regions in G; the number of regions in C is m;
Step 305: count the salient regions: sum = sum + m;
Step 306: compute the RGB color feature f_i of x_i, where x_i ∈ C, 1 ≤ i ≤ m;
Step 307: compute the difference d_ij between f_i and F_j;
Step 308: accumulate the distance differences: d = d + d_ij;
Step 309: compute the mean RGB color-feature difference between the salient regions and the whole images: mean = d / sum.
In any of the above schemes, preferably, sum denotes the number of salient regions; the initial value of sum is 0.
In any of the above schemes, preferably, d denotes the color difference between the salient regions and the images; the initial value of d is 0.
In any of the above schemes, preferably, the input of step 4 is the set D of saliency datasets.
In any of the above schemes, preferably, the output of step 4 is the performance score of each saliency dataset.
In any of the above schemes, preferably, the smaller the performance score obtained, the better the dataset's performance.
In any of the above schemes, preferably, step 4 further comprises the following steps:
Step 401: for dataset dataset_j, compute the ratio value of each grade of the percentage of the whole image occupied by the salient-region sizes, and compute the variance f_j of the 10 ratio values, where dataset_j ∈ D, 1 ≤ j ≤ ||D||;
Step 402: for dataset_j, compute the fraction c_j of all salient regions in the dataset that are connected to the image border;
Step 403: for dataset_j, compute the mean RGB color-feature difference d_j between the salient regions and the whole images;
Step 404: score_j = f_j + (1 - c_j) + d_j, where 1 ≤ j ≤ ||D||.
The present invention proposes a performance-evaluation method for image saliency datasets. The method gathers statistics on a dataset from different angles, so the dataset's overall performance can be evaluated. As is well known, a good dataset helps researchers develop objective, scientific saliency-detection algorithms and avoid designing low-robustness algorithms that merely cater to dataset bias. The method can be extended with new statistics; moreover, each statistic can be given a different weight, and a new weighted evaluation ranking can be computed.
Brief description of the drawings
Fig. 1 is a flow chart of a preferred embodiment of the saliency-dataset evaluation method according to the invention.
Fig. 2 is a table of the numbers and fractions of salient regions connected to the image boundary for a preferred embodiment of the saliency-dataset evaluation method according to the invention.
Fig. 3 is a table of the fractions of salient-region sizes falling in each ratio range for the embodiment of Fig. 2.
Fig. 4 is a table of the mean distances between the datasets' salient regions and whole images for the embodiment of Fig. 2.
Fig. 5 is a table of the datasets' performance scores and ranking for the embodiment of Fig. 2.
Embodiments
The present invention is further elaborated below with specific embodiments, in conjunction with the accompanying drawings.
Embodiment one
As shown in Fig. 1, step 100 is performed: open the dataset to be measured.
Step 110 is performed: compute the proportion of the whole image occupied by each salient region.
Given an image I and its corresponding binary ground-truth map G, assume the number of mutually disconnected salient regions in G is m. Let x_i, 1 ≤ i ≤ m, denote the i-th salient region in I, and compute the percentage of the whole image occupied by x_i. The percentages are divided into 10 proportion grades: [0, 0.1), [0.1, 0.2), [0.2, 0.3), [0.3, 0.4), [0.4, 0.5), [0.5, 0.6), [0.6, 0.7), [0.7, 0.8), [0.8, 0.9), [0.9, 1]. Whichever grade x_i falls in, the count of salient regions in that grade is incremented: num_j = num_j + 1, 1 ≤ j ≤ 10. The above operation is carried out for every image in the dataset, and finally the percentage of all salient regions falling into each of the 10 grades is computed.
Step 120 is performed: compute the fraction of the saliency dataset's salient regions that are connected to the image border.
Given an image I_j and its corresponding binary ground-truth map G, assume the number of mutually disconnected salient regions in G is m_j. Let x_i, 1 ≤ i ≤ m_j, denote the i-th salient region in I_j, and judge whether x_i is connected to the edge of the image; if so, the count of border-connected salient regions is incremented: num = num + 1. The above operation is carried out for every image in the dataset, and finally the fraction of all salient regions in the dataset that are connected to the image border is computed.
Step 130 is performed: compute the RGB color-feature difference between each salient region and the whole image.
Given an image I_j and its corresponding binary ground-truth map G, assume the number of mutually disconnected salient regions in G is m_j, and let x_i, 1 ≤ i ≤ m_j, denote the i-th salient region in I_j. Extract the RGB color feature of I_j, extract the RGB color feature of x_i, and compute the difference between the two features, denoted d_ij. The above operation is carried out for every image in the dataset, and finally the mean RGB color-feature difference between salient regions and whole images is computed.
Step 140 is performed: compute the saliency-dataset performance score.
The three statistics of steps 110 to 130 carry equal weight. A performance score is computed for each dataset; the smaller the score, the better the dataset's performance.
Embodiment two
The calculation of the percentage of the whole image occupied by the salient regions of a dataset proceeds as follows:
Input: dataset D and its corresponding binary ground-truth maps S;
Output: the fraction of all salient regions falling in each of the 10 proportion grades.
Calculating process:
1. for k ∈ [1,10] do
2.   num_k = 0;  // initialize the count of each grade to 0
3. end for
4. for each image I_j ∈ D do
5.   read I_j and its corresponding binary ground-truth map G;
6.   extract the set C of connected salient regions in G; the number of regions in C is m;
7.   for x_i ∈ C, 1 ≤ i ≤ m do
8.     compute d_i, the fraction of image I_j's area occupied by x_i;
9.     determine the grade k of d_i and set num_k = num_k + 1, 1 ≤ k ≤ 10;
10.  end for
11. end for
12. for k ∈ [1,10] do  // the fraction of all salient regions in each of the 10 grades
13.   p_k = num_k / Σ_{k'=1}^{10} num_k';
14. end for
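The listing above can be sketched in Python. This is a minimal illustration rather than the patented implementation: it assumes binary masks given as nested lists of 0/1 values, uses 4-connectivity to extract the connected salient regions (the patent does not fix the connectivity), and the function name `size_ratio_histogram` is chosen here.

```python
from collections import deque

def size_ratio_histogram(masks):
    """masks: list of binary ground-truth maps (lists of 0/1 rows).
    Counts each 4-connected salient region into ten size grades
    [0,0.1), ..., [0.9,1.0] by the fraction of the image it covers,
    then normalizes the counts into fractions p_k."""
    counts = [0] * 10
    total_regions = 0
    for mask in masks:
        h, w = len(mask), len(mask[0])
        seen = [[False] * w for _ in range(h)]
        for r0 in range(h):
            for c0 in range(w):
                if mask[r0][c0] and not seen[r0][c0]:
                    # BFS over one connected salient region x_i
                    area, queue = 0, deque([(r0, c0)])
                    seen[r0][c0] = True
                    while queue:
                        r, c = queue.popleft()
                        area += 1
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            nr, nc = r + dr, c + dc
                            if 0 <= nr < h and 0 <= nc < w and \
                               mask[nr][nc] and not seen[nr][nc]:
                                seen[nr][nc] = True
                                queue.append((nr, nc))
                    d = area / (h * w)                # region area / image area
                    counts[min(int(d * 10), 9)] += 1  # d = 1.0 goes to the last grade
                    total_regions += 1
    return [c / total_regions for c in counts]
```

For example, a 2x2 all-ones mask contributes one region to the last grade (d = 1.0), while a 2x2 block inside a 10x10 mask contributes one region to the first grade (d = 0.04).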
The table shown in Fig. 2 gives the result of applying the border-connected-region statistic to seven benchmark datasets (ECSSD, ASD, MSRA5k, MIT, ImgSal, MSRA10k, DUT_OMRON). Dataset ECSSD has 1171 salient objects, of which 225 are connected to the image boundary, a fraction of 19.2%; ASD has 1209 salient objects, 18 connected to the boundary, 1.5%; MSRA5k has 5594 salient objects, 191 connected, 3.4%; MIT has 1015 salient objects, 107 connected, 10.5%; ImgSal has 480 salient objects, 1 connected, 0.2%; MSRA10k has 6841 salient objects, 556 connected, 8.1%; DUT_OMRON has 10915 salient objects, 590 connected, 5.4%.
Embodiment three
The calculation of the fraction of all salient regions in a dataset that are connected to the image border proceeds as follows:
Input: dataset D and its corresponding binary ground-truth maps S;
Output: the number of salient regions connected to the image border and their fraction of all salient regions in the dataset.
Calculating process:
1. num denotes the number of salient regions connected to the image border; num = 0;
2. sum denotes the number of salient regions; sum = 0;
3. for each image I_j ∈ D do
4.   read I_j and its corresponding binary ground-truth map G;
5.   extract the set C of connected salient regions in G; the number of regions in C is m;
6.   sum = sum + m;
7.   for x_i ∈ C, 1 ≤ i ≤ m do
8.     if x_i is connected to the edge of the image
9.       num = num + 1;
10.    end if
11.  end for
12. end for
13. compute the fraction of all salient regions connected to the image border: c = num / sum.
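The border-connection statistic can be sketched in Python as follows. As an assumption made here (not stated in the patent), each image's connected salient regions are taken as already extracted and represented as sets of (row, col) pixel coordinates, and "connected to the edge" means the region contains at least one pixel on the first or last row or column.

```python
def boundary_ratio(images):
    """images: list of (height, width, regions) triples, where regions
    is a list of connected salient regions, each a set of (row, col)
    pixels.  Returns num / sum: the fraction of all salient regions
    that touch the image border."""
    num, total = 0, 0
    for h, w, regions in images:
        for region in regions:
            total += 1                                 # sum = sum + m
            if any(r in (0, h - 1) or c in (0, w - 1)  # pixel on the border?
                   for r, c in region):
                num += 1                               # num = num + 1
    return num / total
```

For a 4x4 image with one region touching the top row and one interior region, the ratio is 0.5.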
The table shown in Fig. 3 gives the fractions of salient-region sizes falling in each ratio range: for datasets ECSSD, ASD, MSRA5k, MIT, ImgSal, MSRA10k and DUT_OMRON, the fractions in the ranges (0, 0.1], (0.1, 0.2], (0.2, 0.3], (0.3, 0.4], (0.4, 0.5], (0.5, 0.6], (0.6, 0.7], (0.7, 0.8], (0.8, 0.9] and (0.9, 1.0].
Example IV
The calculation of the RGB color-feature difference between a dataset's salient regions and whole images proceeds as follows:
Input: dataset D and its corresponding binary ground-truth maps S;
Output: the mean RGB color-feature difference between the dataset's salient regions and whole images.
Calculating process:
1. sum denotes the number of salient regions; sum = 0;
2. d denotes the color difference between the salient regions and the images; d = 0;
3. for each image I_j ∈ D do
4.   read I_j;
5.   compute the RGB color feature F_j of image I_j;
6.   read the binary ground-truth map G corresponding to I_j;
7.   extract the set C of connected salient regions in G; the number of regions in C is m;
8.   count the salient regions: sum = sum + m;
9.   for x_i ∈ C, 1 ≤ i ≤ m do
10.    compute the RGB color feature f_i of x_i;
11.    compute the difference d_ij between f_i and F_j;
12.    accumulate the distance differences: d = d + d_ij;
13.  end for
14. end for
15. compute the mean RGB color-feature difference between salient regions and whole images: mean = d / sum.
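The color-difference statistic can be sketched in Python as follows. The patent does not specify the RGB feature or the distance metric, so as assumptions made here the feature is the mean RGB color vector and the difference d_ij is the Euclidean distance between the two vectors.

```python
import math

def rgb_difference_mean(samples):
    """samples: list of (image, regions) pairs; image is a list of rows
    of (r, g, b) tuples, regions a list of pixel-coordinate sets, one
    per connected salient region.  Returns mean = d / sum, the average
    distance between each region's RGB feature and its image's feature.
    Feature = mean color, distance = Euclidean: both are assumptions."""
    def mean_color(pixels):
        n = len(pixels)
        return tuple(sum(p[k] for p in pixels) / n for k in range(3))

    d, total = 0.0, 0                  # d and sum initialized to 0 (lines 1-2)
    for image, regions in samples:
        F = mean_color([p for row in image for p in row])    # feature F_j
        for region in regions:
            f = mean_color([image[r][c] for r, c in region])  # feature f_i
            d += math.dist(f, F)       # difference d_ij, accumulated into d
            total += 1                 # sum = sum + m
    return d / total                   # mean = d / sum
```

On a 2x2 image whose top row is black and bottom row white, with the top row as the only salient region, the mean difference is 127.5·√3.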
The table shown in Fig. 4 gives the mean RGB color distance between the salient regions and whole images of each dataset. After normalizing the mean color distances, the color difference of dataset ECSSD is 0; of ASD, 1; of MSRA5k, 0.5386; of MIT, 0.1441; of ImgSal, 0.2681; of MSRA10k, 0.6752; of DUT_OMRON, 0.4316.
Embodiment five
The calculation of the saliency-dataset performance score proceeds as follows:
Input: the set D of saliency datasets;
Output: the performance score of each saliency dataset.
Calculating process:
1. for dataset_j ∈ D, 1 ≤ j ≤ ||D||, do
2.   compute the ratio value of each grade of the percentage of the whole image occupied by dataset_j's salient-region sizes, and compute the variance f_j of the 10 ratio values;
3.   compute the fraction c_j of all salient regions in dataset_j that are connected to the image border;
4.   compute the mean RGB color-feature difference d_j between dataset_j's salient regions and whole images;
5. end for
6. for j, 1 ≤ j ≤ ||D||, do
7.   score_j = f_j + (1 - c_j) + d_j;
8. end for
According to the computation of the dataset performance score (score = variance of the salient-region size-grade ratios + (1 - fraction of salient regions connected to the image boundary) + mean color difference between salient regions and whole images), the performance score of each dataset is obtained; the lower the performance score, the better the dataset's performance.
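The scoring formula score_j = f_j + (1 - c_j) + d_j can be sketched directly; the function name `dataset_score` is chosen here.

```python
def dataset_score(grade_ratios, border_ratio, color_diff_mean):
    """score_j = f_j + (1 - c_j) + d_j: the variance of the ten
    size-grade ratios, plus one minus the border-connected fraction,
    plus the mean region/image color difference.  Lower is better."""
    n = len(grade_ratios)
    mean = sum(grade_ratios) / n
    f = sum((r - mean) ** 2 for r in grade_ratios) / n  # variance f_j
    return f + (1 - border_ratio) + color_diff_mean
```

Note the design of the formula: a dataset whose region sizes spread evenly across the grades (low variance f_j), whose objects often touch the border (high c_j), and whose regions differ little in color from the whole image (low d_j) receives a low score, i.e. it is judged harder and therefore better.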
The performance scores of the datasets are listed in the table shown in Fig. 5:
The performance score of dataset ECSSD is 0.8775; of ASD, 1.9991; of MSRA5k, 1.5259; of MIT, 1.0495; of ImgSal, 1.2912; of MSRA10k, 1.6259; of DUT_OMRON, 1.3965.
The above performance scores yield the following dataset ranking:
1. ECSSD; 2. MIT; 3. ImgSal; 4. DUT_OMRON; 5. MSRA5k; 6. MSRA10k; 7. ASD.
For a better understanding of the present invention, it has been described in detail above in conjunction with specific embodiments, but this is not a limitation of the invention. Any simple modification of the above embodiments according to the technical essence of the invention still belongs to the scope of the technical solution of the invention. Each embodiment in this specification stresses what differs from the other embodiments; the same or similar parts can be cross-referenced between embodiments. System embodiments substantially correspond to the method embodiments, so their description is relatively simple; for relevant parts, refer to the explanation of the method embodiments.
The methods, devices and systems of the present invention may be realized in many ways, for example through software, hardware, firmware, or any combination of software, hardware and firmware. The above order of the method's steps is merely illustrative; the steps of the method of the invention are not limited to the order described above unless otherwise stated. In addition, in certain embodiments, the invention may be embodied as programs recorded on a recording medium, these programs comprising machine-readable instructions for realizing the method according to the invention; thus the invention also covers recording media storing programs for performing the method according to the invention.
The description of the invention is provided for the sake of example and description, and is not intended to be exhaustive or to limit the invention to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were selected and described to better illustrate the principles and practical application of the invention, and to enable those of ordinary skill in the art to understand the invention and to design various embodiments, with various modifications, suited to particular uses.

Claims (10)

1. An evaluation method for image saliency dataset performance, comprising the following steps:
Step 1: compute the proportion of the whole image occupied by each salient region;
Step 2: compute the fraction of the saliency dataset's salient regions that are connected to the image border;
Step 3: compute the RGB color-feature difference between each salient region and the whole image;
Step 4: from steps 1 to 3, compute the performance score of each saliency dataset.
2. The evaluation method of image saliency dataset performance as claimed in claim 1, characterized in that the input of step 1 is the dataset D and its corresponding binary ground-truth maps S.
3. The evaluation method of image saliency dataset performance as claimed in claim 2, characterized in that the output of step 1 is, for each of 10 proportion grades, the percentage of all salient regions whose size falls in that grade.
4. The evaluation method of image saliency dataset performance as claimed in claim 3, characterized in that step 1 further comprises the following steps:
Step 101: for an image I in the dataset and its corresponding binary ground-truth map G, extract the set C of connected salient regions in G; the number of regions in C is m;
Step 102: let x_i denote the i-th salient region in image I, and compute the percentage d_i of the whole image it occupies: d_i = area(x_i) / area(I);
Step 103: determine the proportion grade j of d_i and set num_j = num_j + 1, 1 ≤ j ≤ 10; the initial value of num_j is 0;
Step 104: apply steps 101 to 103 to every image in dataset D and its corresponding binary ground-truth map;
Step 105: compute the fraction of all salient regions falling in each of the 10 proportion grades: p_j = num_j / Σ_{k=1}^{10} num_k.
5. The evaluation method of image saliency dataset performance as claimed in claim 4, characterized in that x_i ∈ C, 1 ≤ i ≤ m.
6. The evaluation method of image saliency dataset performance as claimed in claim 1, characterized in that the input of step 2 is the dataset D and its corresponding binary ground-truth maps S.
7. The evaluation method of image saliency dataset performance as claimed in claim 6, characterized in that the output of step 2 is the number of salient regions connected to the image border and their fraction of all salient regions in the dataset.
8. The evaluation method of image saliency dataset performance as claimed in claim 7, characterized in that step 2 further comprises the following steps:
Step 201: read I_j and its corresponding binary ground-truth map G, where image I_j ∈ D;
Step 202: extract the set C of connected salient regions in G; the number of regions in C is m; set sum = sum + m;
Step 203: when x_i is connected to the edge of the image, set num = num + 1;
Step 204: compute the fraction of all salient regions in the dataset that are connected to the image border: c = num / sum.
9. The evaluation method of image saliency dataset performance as claimed in claim 8, characterized in that num denotes the number of salient regions connected to the image border; the initial value of num is 0.
10. The evaluation method of image saliency dataset performance as claimed in claim 8, characterized in that sum denotes the number of salient regions; the initial value of sum is 0.
CN201611259133.6A 2016-12-30 2016-12-30 An evaluation method for image saliency datasets Active CN107977970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611259133.6A CN107977970B (en) 2016-12-30 2016-12-30 An evaluation method for image saliency datasets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611259133.6A CN107977970B (en) 2016-12-30 2016-12-30 An evaluation method for image saliency datasets

Publications (2)

Publication Number Publication Date
CN107977970A true CN107977970A (en) 2018-05-01
CN107977970B CN107977970B (en) 2019-10-29

Family

ID=62005259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611259133.6A Active CN107977970B (en) 2016-12-30 2016-12-30 A kind of evaluating method of saliency data collection

Country Status (1)

Country Link
CN (1) CN107977970B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704292A (en) * 2019-10-15 2020-01-17 中国人民解放军海军大连舰艇学院 Evaluation method for display control interface design
CN111914850A (en) * 2019-05-07 2020-11-10 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012173017A (en) * 2011-02-18 2012-09-10 Hitachi High-Technologies Corp Defect classification device
CN106055553A (en) * 2016-04-27 2016-10-26 中国人民解放军陆军军官学院 Foggy-day image database used for significance detection and quality evaluation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012173017A (en) * 2011-02-18 2012-09-10 Hitachi High-Technologies Corp Defect classification device
CN106055553A (en) * 2016-04-27 2016-10-26 中国人民解放军陆军军官学院 Foggy-day image database used for significance detection and quality evaluation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUANBIN LI et al.: "Visual Saliency Based on Multiscale Deep Features", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
QIONG YAN et al.: "Hierarchical Saliency Detection", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
梁晔 et al.: "显著区域检测技术研究" (Research on Salient Region Detection Techniques), 《计算机科学》 (Computer Science) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914850A (en) * 2019-05-07 2020-11-10 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN111914850B (en) * 2019-05-07 2023-09-19 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN110704292A (en) * 2019-10-15 2020-01-17 中国人民解放军海军大连舰艇学院 Evaluation method for display control interface design
CN110704292B (en) * 2019-10-15 2020-11-03 中国人民解放军海军大连舰艇学院 Evaluation method for display control interface design

Also Published As

Publication number Publication date
CN107977970B (en) 2019-10-29

Similar Documents

Publication Publication Date Title
CN101976258B (en) Video semantic extraction method by combining object segmentation and feature weighing
WO2018072233A1 (en) Method and system for vehicle tag detection and recognition based on selective search algorithm
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
JP2021524630A (en) Multi-sample whole slide image processing via multi-resolution registration
CN101872424B (en) Facial expression recognizing method based on Gabor transform optimal channel blur fusion
CN101847208B (en) Secondary classification fusion identification method for fingerprint and finger vein bimodal identification
CN102163281B (en) Real-time human body detection method based on AdaBoost frame and colour of head
CN114723704B (en) Textile quality evaluation method based on image processing
CN104008375B (en) The integrated face identification method of feature based fusion
CN106529448A (en) Method for performing multi-visual-angle face detection by means of integral channel features
CN103400136B (en) Target identification method based on Elastic Matching
CN103473551A (en) Station logo recognition method and system based on SIFT operators
CN106384126A (en) Clothes pattern identification method based on contour curvature feature points and support vector machine
CN107886507B (en) A kind of salient region detecting method based on image background and spatial position
CN105718552A (en) Clothing freehand sketch based clothing image retrieval method
CN106845542A (en) Paper money number intelligent identification Method based on DSP
CN104268529A (en) Judgment method and device for quality of fingerprint images
CN106846354B (en) A kind of Book Inventory method on the frame converted based on image segmentation and random hough
CN106485253A (en) A kind of pedestrian of maximum particle size structured descriptor discrimination method again
CN104156690A (en) Gesture recognition method based on image space pyramid bag of features
CN105488475A (en) Method for detecting human face in mobile phone
CN107977970B (en) An evaluation method for image saliency datasets
CN113392856A (en) Image forgery detection device and method
CN112115835A (en) Face key point-based certificate photo local anomaly detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant