CN109886267A - Saliency detection method for low-contrast images based on optimal feature selection - Google Patents

Saliency detection method for low-contrast images based on optimal feature selection

Info

Publication number
CN109886267A
CN109886267A
Authority
CN
China
Prior art keywords
image
pixel
superpixel
saliency
saliency map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910086101.8A
Other languages
Chinese (zh)
Inventor
王强
杨安宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910086101.8A priority Critical patent/CN109886267A/en
Publication of CN109886267A publication Critical patent/CN109886267A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a saliency detection method for low-contrast images based on optimal feature selection. The method refines the saliency map hierarchically by recursively and adaptively selecting the optimal features in a low-contrast image. First, multi-scale superpixel segmentation is used to suppress background interference. Then, an initial saliency map is generated from global contrast and spatial relationships. Finally, the saliency map is optimized using local and global adaptability. The method was tested on four datasets: the MSRA dataset, the SOD dataset, the DUT-OMRON dataset, and the NI dataset. Experimental evaluation shows that the proposed model outperforms 15 state-of-the-art methods in both efficiency and accuracy.

Description

Saliency detection method for low-contrast images based on optimal feature selection
Technical field
The present invention relates to a saliency detection method for low-contrast images based on optimal feature selection. It belongs to the field of image processing and can provide a theoretical and technical basis for topical problems such as nighttime security surveillance and target localization in complex environments.
Background technique
Saliency detection can greatly advance autonomous detection and recognition by computers. Although image saliency detection has improved markedly in recent years, images with low signal-to-noise ratio and low contrast still pose a severe challenge to current methods. This patent proposes a salient-seed propagation method based on optimal feature selection to detect salient objects in low-contrast images. The key idea of the proposed method is to refine the saliency map hierarchically by recursively and adaptively selecting the optimal features in a low-contrast image. It is intended to imitate the human visual system, which effortlessly picks out the most relevant objects in a scene. Salient object detection can greatly advance applications such as image segmentation, image retrieval, image compression, and wireless network node deployment.
Existing saliency models, which compute the uniqueness of pixels or regions from low-level or high-level cues, can be divided into two classes. 1) The first class comprises bottom-up unsupervised methods, which are mainly based on local or global contrast. Among these algorithms, some compute center-surround differences over multiple feature maps; some perform saliency computation based on histogram and region contrast; some estimate image saliency using contrast and spatial distribution; some estimate global and local saliency through high-dimensional color transforms and regression; and some construct saliency maps using the compactness assumption on color and texture features. In general, these bottom-up methods run into difficulty when handling images with cluttered backgrounds, and struggle to find truly salient objects when image contrast is low.
2) The second class comprises top-down methods that guide target acquisition through supervised learning. Some use support vector machine (SVM) training to generate superpixel-level saliency maps; some design a deep-learning-based saliency model by combining superpixel-based Laplacian propagation with a trained convolutional neural network (CNN); some propose a simplified CNN that combines local and global features; some propose a covariance-based CNN model to learn image saliency; and some use an edge-preserving multi-scale context neural network to generate saliency maps. These methods have high computational complexity: current deep neural network models are very time-consuming, and their performance in accurate localization is relatively weak. With the deepening of research on deep learning in recent years, network models typified by convolutional neural networks have attracted wide attention for their powerful learning ability and have been successfully applied to different visual tasks. As a model that simulates the neural mechanism of the human brain, a convolutional neural network can accomplish object recognition with near-human perceptual performance, and can also be regarded as an advanced saliency cue applicable to salient object detection in low-contrast images.
Although many bottom-up and top-down saliency models have been proposed, most of them were designed for salient object detection in daytime scenes. Because features that usefully represent saliency information are scarce in low-contrast images, these models may face severe challenges in low-light scenes. The reasons are mainly twofold: 1) currently hand-crafted features have difficulty assessing objectness in such images; 2) current high-level features face a severe challenge in detecting accurate salient object boundaries, since the multi-stage convolution and pooling layers in CNN models blur salient object boundaries considerably.
Summary of the invention
The present invention relates to a saliency detection method for low-contrast images based on optimal feature selection. Saliency detection can greatly advance autonomous detection and recognition by computers. Although image saliency detection has improved markedly in recent years, images with low signal-to-noise ratio and low contrast still pose a severe challenge to current methods. This patent proposes a salient-seed propagation method based on optimal feature selection to detect salient objects in low-contrast images. The key idea of the proposed method is to refine the saliency map hierarchically by recursively and adaptively selecting the optimal features in a low-contrast image. First, multi-scale superpixel segmentation is used to suppress background interference. Then, an initial saliency map is generated from global contrast and spatial relationships. Finally, the saliency map is optimized using local and global adaptability. The method was tested on four datasets, and experimental evaluation shows that the proposed model outperforms 15 state-of-the-art methods in both efficiency and accuracy. The optimal-feature-selection method for low-contrast-image saliency comprises the following steps:
Step (1): to retain and exploit the information in the image, segment the image into superpixels;
Step (2): extract several visual features and select the optimal features among them;
Step (3): construct a saliency model and generate an initial saliency map;
Step (4): recursively refine the initial saliency seeds and optimize the saliency map.
In step (1), to retain object structure information and exploit the intermediate information of the original image, the input image is segmented into superpixels using the simple linear iterative clustering (SLIC) algorithm, denoted {si}, i = 1, ..., N. Treating superpixels as the processing units improves the efficiency of the model. Since the accuracy of the detection result depends heavily on the number of superpixels, the proposed model captures superpixels at three different scales, with N set to 100, 200, and 300 respectively.
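As an illustration of step (1), the sketch below clusters a grayscale image into superpixels with a simplified SLIC-style k-means in (intensity, x, y) space. It is only a sketch under stated assumptions: the patent uses the full SLIC algorithm on color images (for which an off-the-shelf implementation such as skimage.segmentation.slic would normally be used), and the function name and parameters here are hypothetical.

```python
import numpy as np

def slic_like_superpixels(gray, n_segments=100, n_iters=3, compactness=10.0):
    """Simplified SLIC-style superpixel clustering on a grayscale image.

    Pixels are assigned to the nearest of ~n_segments grid-initialized
    centers in (intensity, x, y) space, and centers are re-estimated.
    Sketch only: the patent applies full SLIC to color images.
    """
    h, w = gray.shape
    step = max(int(np.sqrt(h * w / n_segments)), 1)
    ys, xs = np.meshgrid(np.arange(step // 2, h, step),
                         np.arange(step // 2, w, step), indexing="ij")
    cy = ys.ravel().astype(float)
    cx = xs.ravel().astype(float)
    cv = gray[ys.ravel(), xs.ravel()].astype(float)

    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    labels = np.zeros((h, w), dtype=int)
    for _ in range(n_iters):
        # distance of every pixel to every center in (value, x, y) space;
        # spatial terms are normalized by the grid step, as in SLIC
        d = (compactness * (gray[None] - cv[:, None, None]) ** 2
             + ((yy[None] - cy[:, None, None]) / step) ** 2
             + ((xx[None] - cx[:, None, None]) / step) ** 2)
        labels = np.argmin(d, axis=0)
        for k in range(cv.size):  # re-estimate each cluster center
            m = labels == k
            if m.any():
                cv[k] = gray[m].mean()
                cy[k] = yy[m].mean()
                cx[k] = xx[m].mean()
    return labels
```

For the three scales described above, the segmentation would be run with n_segments = 100, 200, and 300, and the resulting superpixel-level saliency maps fused.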
In step (2), several low-level visual features are extracted from the input image, including color features, texture features, orientation features, and gradient features, of which there are nine color features. Because the effectiveness of a low-level feature varies with the contrast of the input image, nine optimal features are adaptively selected from them based on the information entropy of the low-level features. The low-level feature extraction process is described as follows:
2-1. The RGB color values of the input image are first normalized to eliminate the influence of illumination. The normalized image is then converted to the LAB, HSV, and YCbCr color spaces to extract nine color features.
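Step 2-1 can be illustrated as follows. The exact normalization rule is not spelled out in the text, so the chromaticity normalization below is an assumption; the YCbCr conversion uses the standard BT.601 transform. LAB and HSV conversions (e.g. skimage.color.rgb2lab and rgb2hsv) would complete the nine color channels L, A, B, H, S, V, Y, Cb, Cr.

```python
import numpy as np

def normalize_rgb(img):
    """Per-pixel chromaticity normalization, r = R/(R+G+B) etc., one
    common way to reduce the influence of illumination (an assumption;
    the patent does not give the exact normalization)."""
    s = img.sum(axis=2, keepdims=True) + 1e-8
    return img / s

def rgb_to_ycbcr(img):
    """BT.601 RGB -> YCbCr for float images in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```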
2-2. The texture feature is represented by the two-dimensional entropy of the image. Variation in texture is determined by variation in entropy, and this texture feature is highly robust to noise and geometric deformation.
2-3. Orientation features are obtained by applying Gabor filters with orientations θ ∈ {0°, 45°, 90°, 135°} to the grayscale input image. The global nature and rotation invariance of the orientation features make them less affected by low contrast.
2-4. The gradient feature is computed as the average of the horizontal and vertical gradients. It describes the magnitude of local gray-level change.
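Steps 2-3 and 2-4 can be sketched in a few lines: a Gabor filter bank at the four orientations, and a gradient map averaging the absolute horizontal and vertical components. The kernel parameters (sigma, wavelength, size) are illustrative choices not given in the patent.

```python
import numpy as np

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=9):
    """Real Gabor kernel at orientation theta (radians); parameters
    are illustrative. Made zero-mean so flat regions give no response."""
    half = size // 2
    y, x = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()

def conv2_same(img, k):
    """Same-size 2-D filtering with edge padding (plain loops over the
    kernel taps; fine for small kernels)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def orientation_feature(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute Gabor response over the four orientations."""
    responses = [np.abs(conv2_same(gray, gabor_kernel(t))) for t in thetas]
    return np.mean(responses, axis=0)

def gradient_feature(gray):
    """Average of absolute horizontal and vertical gradients."""
    gy, gx = np.gradient(gray)
    return 0.5 * (np.abs(gx) + np.abs(gy))
```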
After computing the one-dimensional entropy of each feature map, nine optimal features, denoted {Fk}, k = 1, 2, ..., 9, are selected from the twelve extracted features {L, A, B, H, S, V, Y, Cb, Cr, E, O, G} as follows. The one-dimensional entropy of a feature map is E = -Σ_I p_I log p_I, where p_I denotes the proportion of pixels whose gray value is I. As a statistical form of the feature, the image entropy reflects the average information contained in the aggregation characteristic of the image's gray-level distribution. The larger the entropy of a feature map, the more effective the feature. In this patent, nine features are selected, which ensures a good description of the image information.
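The entropy-based selection of step (2) can be sketched as follows, assuming (as the text implies) that the nine feature maps with the highest one-dimensional entropy are kept; the function names are hypothetical.

```python
import numpy as np

def image_entropy(feat, bins=256):
    """One-dimensional entropy of a feature map scaled to [0, 1]:
    E = -sum_I p_I * log(p_I), where p_I is the fraction of pixels
    falling in gray-level bin I."""
    hist, _ = np.histogram(feat, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_optimal_features(feature_maps, k=9):
    """Pick the k feature maps with the highest entropy (the patent's
    stated criterion: larger entropy means a more effective feature)."""
    names = list(feature_maps)
    ent = {n: image_entropy(feature_maps[n]) for n in names}
    return sorted(names, key=lambda n: ent[n], reverse=True)[:k]
```

With the twelve maps named L, A, B, H, S, V, Y, Cb, Cr, E, O, G as keys, this returns the nine retained feature names {Fk}.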
In step (3), the specific steps for constructing the saliency model and generating the initial saliency map are as follows:
The saliency value of each superpixel is obtained from the global regional contrast and the spatial relationship, computed as follows:
Here pos(si, sj) denotes the spatial distance between superpixels si and sj; c(si) computes the spatial distance between the pixel coordinates (xi, yi) and the image-center coordinates (x′, y′); vx and vy are variables determined by the horizontal and vertical information of the image.
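The exact formulas of step (3) are not reproduced in this record, so the sketch below is only one plausible reading of "global regional contrast plus spatial relationship": feature contrast of each superpixel to all others, weighted by spatial closeness, multiplied by a Gaussian center prior. The function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def superpixel_saliency(features, centers, sigma_pos=0.25, sigma_c=0.5):
    """Hedged sketch of a global-contrast saliency score per superpixel.

    features : (N, K) mean optimal-feature vector of each superpixel
    centers  : (N, 2) centroid of each superpixel, normalized to [0, 1]
    """
    # pairwise feature contrast and pairwise spatial distance
    fdist = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    pdist = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    w = np.exp(-pdist**2 / (2 * sigma_pos**2))  # pos(si, sj) weight
    contrast = (w * fdist).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-8)
    # center prior c(si): distance of each centroid to the image center
    c = np.exp(-((centers - 0.5) ** 2).sum(axis=1) / (2 * sigma_c**2))
    s = contrast * c
    return (s - s.min()) / (s.max() - s.min() + 1e-8)  # scale to [0, 1]
```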
In step (4), two cost functions are used to recursively refine the initial saliency seeds and finally optimize the saliency map; the specific steps are as follows:
First, the initial saliency map, denoted Smapk with k = 0, is obtained by computing the saliency si of each superpixel; then Otsu's threshold is used to divide the initial saliency map into non-salient and salient regions, which can be regarded as the background seeds (denoted BS) and foreground seeds (denoted FS) of the original image. The larger the difference between a superpixel and the background, the higher its saliency value; conversely, the larger its difference from the foreground, the lower its saliency value. The saliency value si of each superpixel is therefore recomputed based on BS and FS:
This completes one iteration of the optimization and forms a new saliency map Smapk, k = 1. Next, Otsu's method is reapplied to obtain new BS and FS, and the next-generation saliency map Smapk+1 is obtained through formulas (4-6). Two cost functions are defined to determine whether the iteration has reached the termination condition.
The function f1(k) measures global fitness: the smaller the difference between the new-generation and previous-generation saliency maps, the more accurate the target. The function f2(k) measures local fitness: the smaller the variation between a superpixel and its adjacent superpixels, the larger the saliency value of each decision variable. By minimizing f1(k) and f2(k), the optimal superpixel-level saliency map is generated.
The beneficial effects of the present invention are as follows:
The model proposed by the invention achieves state-of-the-art performance on low-contrast images.
The present invention recursively extracts foreground and background seeds and performs saliency computation. Under the guidance of the optimal features and the saliency seeds, the final saliency map is generated by integrating multiple superpixel-level saliency maps over three scales. Experimental results show that the proposed model surpasses 15 state-of-the-art models on three public datasets and a nighttime image dataset, achieving the best performance.
Description of the drawings
Fig. 1 is the basic flowchart of the method of the present invention.
Fig. 2 is a visual comparison of the saliency maps produced by 16 saliency models (the method of the present invention and 15 existing image saliency detection methods) tested on the MSRA, SOD, DUT-OMRON, and NI datasets.
Specific embodiment
The embodiments of the technical solution of the present invention are described in further detail below with reference to the accompanying drawings.
1. As shown in Fig. 1, the image is segmented into superpixels to retain and exploit the information in the image;
2. As shown in Fig. 1, several visual features are extracted and the optimal features are selected among them;
3. As shown in Fig. 1, a saliency model is constructed to generate the initial saliency map;
4. As shown in Fig. 1, the initial saliency seeds are recursively refined and the saliency map is optimized.
The detection results of the method of the present invention and existing image saliency detection models were compared on the MSRA, SOD, DUT-OMRON, and NI datasets, as shown in Fig. 2, where 1) the MSRA dataset contains 10,000 natural images with simple backgrounds and high contrast; 2) the SOD dataset contains images with multiple objects and complex backgrounds; 3) the DUT-OMRON dataset contains relatively complex and challenging images; 4) our NI dataset contains 200 low-contrast images captured at night. Each picture has a resolution of 640 × 480, and a hand-labeled ground-truth saliency map is also provided. The 15 saliency models used for comparison are the IT, SR, FT, NP, CA, IS, LR, PD, MR, SO, BL, GP, SC, SMD, and MIL models, plus the model of the method of the present invention. All experiments were executed in MATLAB on a PC with an Intel i5-5250 CPU (1.6 GHz) and 8 GB of RAM.
To assess performance, we used seven criteria: the PR (precision-recall) curve, the TPR-FPR (true positive rate versus false positive rate) curve, the AUC (area under the curve) score, the MAE (mean absolute error) score, the WF (weighted F-measure) score, the OR (overlapping ratio) score, and the average running time per image (in seconds).
By binarizing the saliency map at different thresholds and comparing it with the ground-truth saliency map, different values of precision P, recall R, true positive rate TPR, and false positive rate FPR are obtained; from these ratios the PR and TPR-FPR curves can be drawn. The AUC score is the percentage of the area under the TPR-FPR curve; it intuitively shows how well a saliency map predicts the true salient object. The MAE score is the mean absolute difference between the obtained saliency map and the ground-truth saliency map; the smaller its value, the higher the similarity. The F-measure score is defined as the weighted harmonic mean of precision and recall, and the WF score is computed by introducing a weighting function on the detection errors. The OR score is the ratio of overlapping salient pixels between the binary saliency map and the ground-truth saliency map.
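Several of the scores above are straightforward to compute; a sketch follows. The OR score is interpreted here as intersection over union of the salient pixels, which the text does not state exactly, so that reading is an assumption.

```python
import numpy as np

def mae(smap, gt):
    """Mean absolute error between a saliency map and ground truth
    (both float arrays in [0, 1])."""
    return float(np.abs(smap - gt).mean())

def overlapping_ratio(binary_map, gt_binary):
    """OR score: intersection over union of salient pixels (one common
    reading of 'ratio of overlapping salient pixels')."""
    inter = np.logical_and(binary_map, gt_binary).sum()
    union = np.logical_or(binary_map, gt_binary).sum()
    return float(inter / union) if union else 0.0

def precision_recall(smap, gt_binary, threshold):
    """Precision and recall after binarizing smap at one threshold;
    sweeping the threshold traces out the PR curve."""
    pred = smap >= threshold
    tp = np.logical_and(pred, gt_binary).sum()
    prec = tp / pred.sum() if pred.sum() else 0.0
    rec = tp / gt_binary.sum() if gt_binary.sum() else 0.0
    return float(prec), float(rec)
```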
The quantitative detection performance of the proposed saliency model and the other 15 models on the four datasets is shown in Table 1. The three best results in Table 1 are highlighted in red, blue, and green font respectively. An upward arrow indicates that a larger value is better; a downward arrow indicates the opposite. The results show that in most cases the proposed model ranks first or second on the three public image datasets, and achieves the best performance on the low-contrast image dataset with relatively low time consumption.
Table 1. Quantitative results of each saliency model on the four datasets under five criteria
On the MSRA and DUT-OMRON datasets (Fig. 2), the proposed model obtained the best performance on the TPR-FPR curve, the PR curve, and the AUC score, while the SO model performed best on the MAE and WF scores and the MIL model performed best on the OR score. This is because the SO model improves robustness using boundary connectivity and global optimization, and the MIL model proposes a multiple-instance learning strategy to improve accuracy. These two models use background measures to effectively detect salient objects against complex backgrounds. Although the MAE, WF, and OR values of our model are slightly below those of the SO and MIL models, our other scores are more competitive. The average time consumption of the MIL model exceeds 100 seconds per image, which is very inefficient.
On the SOD dataset (Fig. 2), the proposed model achieves the best performance on the TPR-FPR, PR, AUC, WF, and OR criteria. On the MAE score, our model ranks second, with only a small gap (0.002) to the best result, obtained by the SO model.
On the NI dataset (Fig. 2), the proposed model is superior on these criteria, achieving the best results with relatively low time consumption.
A qualitative comparison of the saliency maps of the 16 saliency models on the four datasets is shown in Fig. 2. The proposed model can accurately extract the true salient objects in complex and low-contrast images. The IT, NP, IS, and SC models are easily affected by background noise. The SR and FT models cannot accurately locate salient objects. The CA and PD models cannot reflect the internal structure of salient objects. The subjective performance of the LR, MR, BL, and GP models is strongly affected by low-contrast backgrounds. The SO, SMD, and MIL models cannot stably detect salient objects in low-contrast environments. The proposed model achieves state-of-the-art performance on low-contrast images.
In summary, the present invention constructs a saliency detection method for low-contrast images based on optimal feature selection, which recursively extracts foreground and background seeds and performs saliency computation. Under the guidance of the optimal features and the saliency seeds, the final saliency map is generated by integrating multiple superpixel-level saliency maps over three scales. Experimental results show that the proposed model surpasses 15 state-of-the-art models on three public datasets and a nighttime image dataset, achieving the best performance.
Finally, it should be noted that the foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention. Although the invention has been explained in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
The foregoing describes the basic principles, main features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the above embodiments, which, together with the description, merely illustrate the principles of the invention. Various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the scope of the claimed invention.

Claims (4)

1. A saliency detection method for low-contrast images based on optimal feature selection, characterized by comprising the following steps:
step (1): to retain and exploit the information in the input image, segment the input image into superpixels;
step (2): extract several visual features and select the optimal features among them;
step (3): construct a saliency model and generate an initial saliency map;
step (4): recursively refine the initial saliency seeds and optimize the saliency map.
2. The saliency detection method for low-contrast images based on optimal feature selection according to claim 1, characterized in that:
in step (1), to retain object structure information and exploit the intermediate information of the input image, the input image is segmented into superpixels using the simple linear iterative clustering algorithm, denoted {si}, i = 1, ..., N; since the accuracy of the detection result depends heavily on the number of superpixels, N is set to 100, 200, and 300 respectively;
in step (2), several low-level visual features are extracted from the input image, including color features, texture features, orientation features, and gradient features, of which there are nine color features; because the effectiveness of a low-level feature varies with the contrast of the input image, nine optimal features are adaptively selected from them based on the information entropy of the low-level features; the low-level feature extraction process is described as follows:
2-1. the RGB color values of the input image are first normalized, and the normalized image is then converted to the LAB, HSV, and YCbCr color spaces to extract nine color features;
2-2. the texture feature is represented by the two-dimensional entropy of the image, variation in texture being determined by variation in entropy;
2-3. orientation features are obtained by applying Gabor filters with orientations θ ∈ {0°, 45°, 90°, 135°} to the grayscale input image;
2-4. the gradient feature is computed as the average of the horizontal and vertical gradients;
after computing the one-dimensional entropy of each feature map, nine optimal features, denoted {Fk}, k = 1, 2, ..., 9, are selected from the twelve extracted features {L, A, B, H, S, V, Y, Cb, Cr, E, O, G} as follows:
here pI denotes the proportion of pixels whose gray value is I; as a statistical form of the feature, the image entropy reflects the average information contained in the aggregation characteristic of the image's gray-level distribution; the larger the entropy of a feature map, the more effective the feature.
3. The saliency detection method for low-contrast images based on optimal feature selection according to claim 2, characterized in that:
the specific implementation of step (3) is as follows:
the saliency value of each superpixel is obtained from the global regional contrast and the spatial relationship, computed as follows:
here pos(si, sj) denotes the spatial distance between superpixels si and sj; c(si) computes the spatial distance between the pixel coordinates (xi, yi) and the image-center coordinates (x′, y′); vx and vy are variables determined by the horizontal and vertical information of the image.
4. The saliency detection method for low-contrast images based on optimal feature selection according to claim 3, characterized in that:
in step (4), two cost functions are used to recursively refine the initial saliency seeds and finally optimize the saliency map; the specific steps are as follows:
first, the initial saliency map, denoted Smapk with k = 0, is obtained by computing the saliency si of each superpixel; then Otsu's threshold is used to divide the initial saliency map into non-salient and salient regions, which are regarded as the background seeds BS and foreground seeds FS of the original image, and the saliency value si of each superpixel is recomputed based on BS and FS as:
this recomputation completes one iteration of the optimization and forms a new saliency map Smapk, k = 1; next, Otsu's method is reapplied to obtain new BS and FS, and the next-generation saliency map Smapk+1 is obtained through recurring formulas (4-6);
two cost functions are defined to determine whether the iteration has reached the termination condition;
the function f1(k) measures global fitness: the smaller the difference between the new-generation and previous-generation saliency maps, the more accurate the target; the function f2(k) measures local fitness: the smaller the variation between a superpixel and its adjacent superpixels, the larger the saliency value of each decision variable; the optimal superpixel-level saliency map is generated by minimizing f1(k) and f2(k).
CN201910086101.8A 2019-01-29 2019-01-29 A kind of soft image conspicuousness detection method based on optimal feature selection Pending CN109886267A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910086101.8A CN109886267A (en) 2019-01-29 2019-01-29 A kind of soft image conspicuousness detection method based on optimal feature selection


Publications (1)

Publication Number Publication Date
CN109886267A true CN109886267A (en) 2019-06-14

Family

ID=66927253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910086101.8A Pending CN109886267A (en) 2019-01-29 2019-01-29 A kind of soft image conspicuousness detection method based on optimal feature selection

Country Status (1)

Country Link
CN (1) CN109886267A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056590A (en) * 2016-05-26 2016-10-26 重庆大学 Manifold Ranking-based foreground- and background-characteristic combined saliency detection method
CN107169487A (en) * 2017-04-19 2017-09-15 西安电子科技大学 The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic
US20170351941A1 (en) * 2016-06-03 2017-12-07 Miovision Technologies Incorporated System and Method for Performing Saliency Detection Using Deep Active Contours
CN107992874A (en) * 2017-12-20 2018-05-04 武汉大学 Image well-marked target method for extracting region and system based on iteration rarefaction representation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MU NAN et al.: "A Multiscale Superpixel-level Salient Object Detection Model Using Local-Global Contrast Cue", 《JOURNAL OF SHANGHAI JIAOTONG UNIVERSITY (SCIENCE)》 *
NAN MU et al.: "Optimal Feature Selection for Saliency Seed Propagation in Low Contrast Images", 《ADVANCES IN MULTIMEDIA INFORMATION PROCESSING – PCM 2018》 *
ZHANG Yanbang et al.: "Saliency Detection Algorithm Based on Color and Texture Features", 《Application Research of Computers》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110390327A (en) * 2019-06-25 2019-10-29 北京百度网讯科技有限公司 Foreground extracting method, device, computer equipment and storage medium
CN110390327B (en) * 2019-06-25 2022-06-28 北京百度网讯科技有限公司 Foreground extraction method and device, computer equipment and storage medium
CN111008585A (en) * 2019-11-29 2020-04-14 西安电子科技大学 Ship target detection method based on self-adaptive layered high-resolution SAR image
CN111008585B (en) * 2019-11-29 2023-04-07 西安电子科技大学 Ship target detection method based on self-adaptive layered high-resolution SAR image
CN110991547A (en) * 2019-12-12 2020-04-10 电子科技大学 Image significance detection method based on multi-feature optimal fusion
CN111160180A (en) * 2019-12-16 2020-05-15 浙江工业大学 Night green apple identification method of apple picking robot
TWI739401B (en) * 2020-04-22 2021-09-11 國立中央大學 Object classification method and object classification device
CN111505038A (en) * 2020-04-28 2020-08-07 中国地质大学(北京) Implementation method for quantitatively analyzing sandstone cementation based on cathodoluminescence technology
CN111583279A (en) * 2020-05-12 2020-08-25 重庆理工大学 Super-pixel image segmentation method based on PCBA

Similar Documents

Publication Publication Date Title
CN109886267A (en) A kind of soft image conspicuousness detection method based on optimal feature selection
CN111062973B (en) Vehicle tracking method based on target feature sensitivity and deep learning
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
CN102214298B (en) Method for detecting and identifying airport target by using remote sensing image based on selective visual attention mechanism
CN104077605B (en) A kind of pedestrian's search recognition methods based on color topological structure
Hsu et al. An interactive flower image recognition system
CN109800629A (en) A kind of Remote Sensing Target detection method based on convolutional neural networks
CN106649487A (en) Image retrieval method based on interest target
CN113963222B (en) High-resolution remote sensing image change detection method based on multi-strategy combination
CN105528575B (en) Sky detection method based on Context Reasoning
CN104881671B (en) A kind of high score remote sensing image Local Feature Extraction based on 2D Gabor
CN109325484A (en) Flowers image classification method based on background priori conspicuousness
CN102496023A (en) Region of interest extraction method of pixel level
CN107633226A (en) A kind of human action Tracking Recognition method and system
CN108629783A (en) Image partition method, system and medium based on the search of characteristics of image density peaks
CN105493141A (en) Unstructured road boundary detection
CN108664838A (en) Based on the monitoring scene pedestrian detection method end to end for improving RPN depth networks
CN111860587B (en) Detection method for small targets of pictures
CN108596195B (en) Scene recognition method based on sparse coding feature extraction
CN107808376A (en) A kind of detection method of raising one's hand based on deep learning
CN114693616B (en) Rice disease detection method, device and medium based on improved target detection model and convolutional neural network
CN111199245A (en) Rape pest identification method
CN106446925A (en) Dolphin identity recognition method based on image processing
CN106650798A (en) Indoor scene recognition method combining deep learning and sparse representation
CN114495170A (en) Pedestrian re-identification method and system based on local self-attention inhibition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190614