CN106780450A - An image saliency detection method based on low-rank multiscale fusion - Google Patents

An image saliency detection method based on low-rank multiscale fusion Download PDF

Info

Publication number
CN106780450A
CN106780450A CN201611110790.4A
Authority
CN
China
Prior art keywords
conspicuousness
image
rank
low
saliency maps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611110790.4A
Other languages
Chinese (zh)
Inventor
冯伟
孙济洲
黄睿
刘烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201611110790.4A priority Critical patent/CN106780450A/en
Publication of CN106780450A publication Critical patent/CN106780450A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image saliency detection method based on low-rank multiscale fusion. Its technical features include: performing single-scale saliency detection on an input image; performing multiscale saliency fusion on the single-scale detection results to obtain a fused saliency map; and performing saliency refinement on the fused saliency map to obtain the final co-saliency image. The present invention applies saliency detection based on low-rank matrix recovery and multiscale saliency fusion to saliency detection, and, with a GMM-based co-saliency prior, generalizes multiscale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions appearing in several images. It solves the difficulty of scale selection, achieves more reliable saliency detection results, and helps further improve the processing capability of saliency detection.

Description

An image saliency detection method based on low-rank multiscale fusion
Technical field
The invention belongs to the field of computer vision detection technology, and in particular relates to an image saliency detection method based on low-rank multiscale fusion.
Background technology
In the field of computer vision, salient object detection methods fall into two major classes: bottom-up, scene-driven models and top-down, expectation-driven models. Bottom-up methods are mainly based on scene information of the pictured scenery, while top-down methods are determined by knowledge, expectation, and purpose. Many saliency detection methods, such as RC and CA, have been proposed. Most of these methods detect saliency on a single-scale picture and have achieved good results. However, they share a common problem: when the object lies in a natural scene of small scale and large contrast, the salient object in the picture usually cannot be detected well. For this situation there are generally two solutions: one is to continue searching for a better salient object model; the other is to use other pictures that also contain the same salient object to assist in detecting it, a method referred to as co-saliency detection.
Saliency detection methods based on low-rank matrix recovery rest on the following prior assumption: the salient target is sparse over the whole image, so an image can be regarded as a background plus some salient targets sparsely distributed on it, and the image background has a low-rank property. A natural image can then be decomposed into a low-rank matrix and a sparse matrix, converting saliency detection into a low-rank matrix recovery problem.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide an image saliency detection method based on low-rank multiscale fusion, solving the scale-selection difficulty and reliability problems of existing detection methods.
The present invention solves its technical problem by adopting the following technical scheme:
An image saliency detection method based on low-rank multiscale fusion comprises the following steps:
Step 1: performing single-scale saliency detection on the input image;
Step 2: performing multiscale saliency fusion on the image after single-scale saliency detection to obtain a fused saliency map;
Step 3: performing saliency refinement on the fused saliency map after multiscale saliency fusion to obtain the final co-saliency image.
Further, the specific processing of step 1 comprises the following steps:
(1) segmenting the image into multiscale segmentation maps and performing feature extraction;
(2) performing saliency prior processing using a background-prior method;
(3) performing the saliency computation.
Further, the method of step (1) is: for the input image, segmenting it into superpixels using the SLIC method, and extracting 122-dimensional position, color, and texture features.
Further, the saliency computation of step (3) is carried out using the following saliency model:

SP(i) = \frac{\|\hat{S}(:,i)\|_2}{\sum_i \|\hat{S}(:,i)\|_2} = \frac{\sqrt{\sum_j (\hat{S}(j,i))^2}}{\sum_i \sqrt{\sum_j (\hat{S}(j,i))^2}}

where SP(i) is the saliency value of the i-th superpixel, \hat{S}(j,i) is the saliency value of the i-th superpixel for the j-th feature, and \hat{S}(:,i) is the vector of saliency values of the i-th superpixel over all features.
Further, the specific method of step 2 is: first, dividing the image into different scales; then computing the saliency map on each scale; finally, computing the fused saliency map by multiplying the saliency value at each scale by the corresponding adaptive weight.
Further, the adaptive weight \omega_i is defined to decrease with the inconsistency of the i-th scale's saliency map, normalized by a partition function Z;
the fused saliency map is calculated using the following formula:

S_{map}^{fuse} = \sum_i \omega_i \cdot S_{map}^{i}

where \omega_i denotes the adaptive weight of the saliency map at the i-th scale, S_{map}^{i} denotes the saliency map at the i-th scale, and S_{map}^{fuse} denotes the saliency map after fusing multiple scales.
Further, the processing of step 3 includes:
(1) smoothing the current image so that the image achieves spatial smoothness;
(2) performing co-saliency detection on the images.
Further, the smoothing of step (1) is realized using the following energy function:

E = \sum_i \omega_i^{bg} s_i^2 + \sum_i \omega_i^{fg} (s_i - 1)^2 + \sum_{i,\, j \in Nei(i)} \omega_{ij} (s_i - s_j)^2

where s_i denotes the saliency value of each superpixel i, \omega_i^{bg} denotes the background probability, \omega_i^{fg} denotes the foreground probability, and Nei(i) denotes the neighborhood of the i-th superpixel; the weight \omega_{ij} is defined in terms of the L2 distance between the CIE-LAB color means of superpixels i and j.
Further, the co-saliency detection of step (2) comprises the following steps:
1. Single-image saliency detection: for a given set of images I_set = {I_1, I_2, ..., I_n}, compute the single saliency map of each image, with S_i denoting the single saliency map of the i-th image;
2. Binary segmentation: divide the single saliency map into a binary mask M_i using an adaptive threshold T_i, defined as:
T_i = α mean (S_i)
where α = 2;
3. Co-saliency prior estimation: the GMM algorithm uses 5 Gaussian components to build a color model G_i for the foreground pixels of the i-th picture, which is then used together with the estimated mask M_j to estimate the foreground probability in the j-th picture; each picture thus obtains n estimates of its foreground probability, and the co-saliency prior of each picture is computed as the average of these estimates;
4. Co-saliency computation: merge the co-saliency prior into the single-image saliency detection model to obtain the final co-saliency image.
The advantages and positive effects of the present invention are:
The present invention applies saliency detection based on low-rank matrix recovery and multiscale saliency fusion to saliency detection, and, with a GMM-based co-saliency prior, generalizes multiscale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions appearing in several images. The proposed low-rank multiscale superpixel fusion algorithm solves the scale-selection difficulty, achieves more reliable saliency detection results, and helps further improve the processing capability of saliency detection.
Brief description of the drawings
Fig. 1 is the flow chart of the image saliency detection method based on low-rank multiscale fusion of the present invention;
Fig. 2 is a schematic diagram of the saliency feature dimensions extracted by the present invention and their description;
Fig. 3 compares the performance of the present invention on the MSRA dataset;
Fig. 4 compares the performance of the present invention on the ECSSD dataset;
Fig. 5 compares the co-saliency performance of the present invention on the image-pair dataset.
Specific embodiments
Embodiments of the present invention are further described below with reference to the accompanying drawings:
An image saliency detection method based on low-rank multiscale fusion, as shown in Fig. 1, comprises the following steps:
Step 1: perform single-scale saliency detection on the input image. The specific method is:
(1) Segment the image into multiscale segmentation maps and perform feature extraction
For the input image, we segment it into superpixels using SLIC and extract 122-dimensional features covering position, color, and texture, as shown in Fig. 2. Specifically, we extract 40-dimensional color features, 12 steerable-pyramid features over 3 scales and 4 orientations, 36 Gabor features over 3 scales and 12 orientations, and 31-dimensional HOG features.
(2) Saliency prior processing
At present, some top-down methods are already used to further improve the performance of saliency detection. Representative methods employ a variety of saliency priors, such as the center prior, object prior, and background prior; these priors improve the estimate of where a salient object may be located in an image. In our method, we perform saliency prior processing using the background prior.
(3) Saliency computation
Since low-rank analysis is helpful for saliency detection, we can divide an image into a redundant part and a salient part. The redundant part exhibits high regularity, while the salient part exhibits novelty. We can express this decomposition as a low-rank matrix recovery problem:

\min_{B, S} \; rank(B) + \lambda \|S\|_0 \quad s.t. \; F = B + S

where F = [f_1, f_2, ..., f_n] is the feature matrix composed of the n feature vectors, B is the low-rank matrix obtained by background modeling, and S is the sparse matrix obtained by saliency modeling.
Since the above problem is NP-hard, we convert it into the following convex relaxation to solve:

\min_{B, S} \; \|B\|_* + \lambda \|S\|_1 \quad s.t. \; F = B + S
However, decomposing F in the original feature space always yields poor salient-object detection results. To obtain a good result, we first learn a transformation matrix T; left-multiplying the feature matrix F by T gives a transformed feature matrix TF. In the transformed space, the features of the image background lie in a low-dimensional subspace and can therefore be expressed as a low-rank matrix. The prior P is incorporated by multiplying TF by P, so the final saliency model is:

\min_{B, S} \; \|B\|_* + \lambda \|S\|_1 \quad s.t. \; TFP = B + S
Let \hat{S} be the optimal solution for S of this model. The saliency value SP(i) of the i-th superpixel is then:

SP(i) = \frac{\|\hat{S}(:,i)\|_2}{\sum_i \|\hat{S}(:,i)\|_2}
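The low-rank recovery above can be sketched with a standard inexact-ALM robust PCA routine. Note this is a generic RPCA solver, not the patent's learned-transform model: the parameters λ, μ, and ρ are conventional defaults, and the synthetic data is an assumption for demonstration.

```python
import numpy as np

def rpca(F, lam=None, tol=1e-7, max_iter=500):
    """Inexact-ALM robust PCA: min ||B||_* + lam*||S||_1  s.t.  F = B + S."""
    m, n = F.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_F = np.linalg.norm(F)
    Y = np.zeros_like(F)                    # Lagrange multiplier
    mu = 1.25 / np.linalg.norm(F, 2)        # penalty parameter
    rho = 1.5
    S = np.zeros_like(F)
    for _ in range(max_iter):
        # Singular value thresholding -> low-rank (background) part B
        U, sig, Vt = np.linalg.svd(F - S + Y / mu, full_matrices=False)
        B = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding -> sparse (salient) part S
        T = F - B + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        R = F - B - S
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R) / norm_F < tol:
            break
    return B, S

# Synthetic check: a rank-2 matrix corrupted by sparse spikes
rng = np.random.default_rng(0)
L0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 60))
S0 = np.zeros((60, 60))
idx = rng.choice(3600, size=180, replace=False)     # 5% corrupted entries
S0.flat[idx] = rng.choice([-5.0, 5.0], size=180)
B, S = rpca(L0 + S0)

# Per-column saliency scores in the spirit of the model: SP(i) ∝ ||S(:,i)||_2
sp = np.linalg.norm(S, axis=0)
sp = sp / sp.sum()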
Step 2: perform multiscale saliency fusion on the image after single-scale saliency detection to obtain the fused saliency map.
Because the saliency detection result on a single-scale image may be unsatisfactory, we use a multiscale fusion method to obtain a reliable saliency detection result: first, we divide the image into different scales; then we compute the saliency map on each scale with the method above; finally, we compute the fused saliency map by multiplying the saliency value at each scale by the corresponding adaptive weight.
The saliency value of a superpixel is the average of all saliency values within its region. We express the saliency values of all superpixels at each scale as a row vector; the saliency values of every superpixel over all scales then form a saliency indicator matrix S_I. Ideally the saliency detection results are consistent across all scales, so the rank of the indicator matrix should be 1. We can convert this into a low-rank matrix recovery problem:

\min_{L, E} \; \|L\|_* + \lambda \|E\|_1 \quad s.t. \; S_I = L + E

where the optimal solution E represents the discrepancies among the multiscale saliency detection results. Summing the absolute values of the elements in each row of E yields a vector [E_1, E_2, ..., E_n], where n is the number of scales. The larger E_i is, the higher the inconsistency of the i-th saliency map with the other saliency maps, so the corresponding saliency map should be assigned a very small weight. The adaptive weight \omega_i is defined accordingly, normalized by a partition function Z. Finally, the fused saliency map can be calculated with the following formula:

S_{map}^{fuse} = \sum_i \omega_i \cdot S_{map}^{i}
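A toy sketch of this fusion step follows. Two assumptions are made because the source elides the exact formulas: the low-rank part L is approximated by a rank-1 consensus (the mean row) rather than a full nuclear-norm recovery, and the weight form exp(-E_i)/Z is an assumed instance of "smaller weight for more inconsistent scales, normalized by a partition function Z".

```python
import numpy as np

# Hedged sketch of step 2: fuse per-scale saliency maps with adaptive
# weights. Each row of S_I holds one scale's saliency values over the
# superpixels; scales whose rows disagree with the consensus get small weight.
def fuse_scales(S_I):
    L = np.tile(S_I.mean(axis=0), (S_I.shape[0], 1))  # rank-1 consensus (assumed stand-in)
    E = np.abs(S_I - L).sum(axis=1)                   # per-scale row residual E_i
    w = np.exp(-E)                                    # assumed weight form
    w /= w.sum()                                      # Z: partition function
    return w @ S_I                                    # weighted fusion

S_I = np.array([[0.9, 0.1, 0.8],
                [0.8, 0.2, 0.9],
                [0.1, 0.9, 0.2]])   # the third scale disagrees with the others
fused = fuse_scales(S_I)
```

Because the third row is inconsistent with the first two, it receives the smallest weight, and the fused map follows the majority pattern.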
Step 3: perform saliency refinement on the fused saliency map after multiscale saliency fusion to obtain the final saliency image. This specifically includes:
(1) Smooth the current image so that the image achieves spatial smoothness
After fusion, we consider the smoothness between neighboring superpixels, optimizing the fused saliency map with an energy function:

E = \sum_i \omega_i^{bg} s_i^2 + \sum_i \omega_i^{fg} (s_i - 1)^2 + \sum_{i,\, j \in Nei(i)} \omega_{ij} (s_i - s_j)^2

where s_i denotes the saliency value of each superpixel i, \omega_i^{bg} denotes the background probability, \omega_i^{fg} denotes the foreground probability, and Nei(i) denotes the neighborhood of the i-th superpixel. The weight \omega_{ij} is defined in terms of the L2 distance between the CIE-LAB color means of superpixels i and j.
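Because this energy is quadratic in the saliency vector s, its minimizer is the solution of a sparse linear system obtained by setting dE/ds = 0. A sketch under toy weights follows; the actual ω terms come from the background/foreground probabilities and color distances, so every numeric value here is an illustrative assumption.

```python
import numpy as np

# Sketch of step 3(1): the smoothing energy
#   E = sum_i w_bg[i]*s_i^2 + sum_i w_fg[i]*(s_i-1)^2
#       + sum_{i, j in Nei(i)} W[i,j]*(s_i - s_j)^2
# is quadratic in s, so dE/ds = 0 gives the linear system
#   (diag(w_bg + w_fg) + 2*Lap) s = w_fg,  with Lap the graph Laplacian of W.
def smooth_saliency(w_bg, w_fg, W):
    Lap = np.diag(W.sum(axis=1)) - W
    A = np.diag(w_bg + w_fg) + 2.0 * Lap   # factor 2: each pair appears twice in the sum
    return np.linalg.solve(A, w_fg)

# Toy 3-superpixel chain: node 0 looks like foreground, node 2 like background
w_bg = np.array([0.1, 0.1, 1.0])
w_fg = np.array([1.0, 0.5, 0.1])
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
s = smooth_saliency(w_bg, w_fg, W)
```

The result keeps the foreground-like node most salient while the pairwise term pulls neighboring values toward each other.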
(2) Perform co-saliency detection on the images, including single-image saliency detection, binary segmentation, co-saliency prior estimation, and co-saliency computation, described as follows:
1. Single-image saliency detection
For a given set of images I_set = {I_1, I_2, ..., I_n}, compute the single saliency map of each image with the method described above, with S_i denoting the single saliency map of the i-th image.
2. Binary segmentation
We divide the single saliency map into a binary mask M_i using an adaptive threshold T_i, defined as:
T_i = α mean (S_i)
where α = 2 in our experiments. Pixels or superpixels whose saliency value exceeds this adaptive threshold are foreground; the rest are background.
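This thresholding rule is directly expressible in code; the sample values below are illustrative, not from the source.

```python
import numpy as np

# Step 3(2)-2: adaptive-threshold binarization, T_i = alpha * mean(S_i), alpha = 2.
def binary_mask(S, alpha=2.0):
    return (S > alpha * S.mean()).astype(np.uint8)

S = np.array([0.05, 0.1, 0.9, 0.8, 0.05])   # toy saliency values
M = binary_mask(S)                           # only the strongly salient entries survive
```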
3. Co-saliency prior estimation
We obtain the co-saliency prior using a GMM. Specifically, the GMM algorithm uses 5 Gaussian components to build a color model G_i for the foreground pixels of the i-th picture, which is then used together with the estimated mask M_j to estimate the foreground probability in the j-th picture. Each picture thus obtains n estimates of its foreground probability, and the co-saliency prior of each picture is computed as the average of these estimates.
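A hedged sketch of the cross-image color model: a 5-component GMM is fit to one image's foreground colors and used to score pixels of another image. The synthetic RGB samples are assumptions, and scikit-learn's score_samples (log-likelihood) stands in for the patent's foreground-probability estimate.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of step 3(2)-3: fit a 5-component GMM color model G_i on the
# foreground pixels of image i, then score candidate pixels of image j.
# Averaging such scores over the n images would give the co-saliency prior.
rng = np.random.default_rng(0)
fg_i = rng.normal(loc=[0.9, 0.1, 0.1], scale=0.05, size=(200, 3))  # reddish object in image i

gmm = GaussianMixture(n_components=5, random_state=0).fit(fg_i)

pix_j = np.array([[0.88, 0.12, 0.10],   # similar red pixel in image j
                  [0.10, 0.10, 0.90]])  # blue background pixel in image j
loglik = gmm.score_samples(pix_j)       # higher = more foreground-like
```

A pixel whose color matches the shared foreground scores much higher than a background pixel, which is exactly the signal the co-saliency prior aggregates.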
4. Co-saliency computation
Finally, we merge the co-saliency prior into the single-image saliency detection model to obtain the final co-saliency image.
Through the above steps, low-rank multiscale superpixel analysis for salient object detection is realized.
Fig. 3 compares the performance of the present invention on the MSRA dataset: compared with the prior art, the PR curve and ROC curve of our method are the best on this dataset, with the lowest MAE and the highest AUC. Fig. 4 compares the performance of the present invention on the ECSSD dataset: again, our method achieves the best PR and ROC curves, the lowest MAE, and the highest AUC. Fig. 5 compares the co-saliency performance of the present invention on the image-pair dataset: our method achieves the highest F-measure, precision, and recall, and the lowest MAE. On these different datasets, the detection performance of the present invention is therefore significantly improved over the prior art.
It should be emphasized that the embodiments described here are illustrative rather than limiting; the present invention therefore includes, but is not limited to, the embodiments described in the specific implementation. Other embodiments derived by those skilled in the art from the technical scheme of the present invention also belong to the protection scope of the present invention.

Claims (9)

1. An image saliency detection method based on low-rank multiscale fusion, characterized by comprising the following steps:
Step 1: performing single-scale saliency detection on the input image;
Step 2: performing multiscale saliency fusion on the image after single-scale saliency detection to obtain a fused saliency map;
Step 3: performing saliency refinement on the fused saliency map after multiscale saliency fusion to obtain the final co-saliency image.
2. The image saliency detection method based on low-rank multiscale fusion according to claim 1, characterized in that the specific processing of step 1 comprises the following steps:
(1) segmenting the image into multiscale segmentation maps and performing feature extraction;
(2) performing saliency prior processing using a background-prior method;
(3) performing the saliency computation.
3. The image saliency detection method based on low-rank multiscale fusion according to claim 2, characterized in that the method of step (1) is: for the input image, segmenting it into superpixels using the SLIC method, and extracting 122-dimensional position, color, and texture features.
4. The image saliency detection method based on low-rank multiscale fusion according to claim 2, characterized in that the saliency computation of step (3) is carried out using the following saliency model:

SP(i) = \frac{\|\hat{S}(:,i)\|_2}{\sum_i \|\hat{S}(:,i)\|_2} = \frac{\sqrt{\sum_j (\hat{S}(j,i))^2}}{\sum_i \sqrt{\sum_j (\hat{S}(j,i))^2}}

where SP(i) is the saliency value of the i-th superpixel, \hat{S}(j,i) is the saliency value of the i-th superpixel for the j-th feature, and \hat{S}(:,i) is the vector of saliency values of the i-th superpixel over all features.
5. The image saliency detection method based on low-rank multiscale fusion according to claim 1, characterized in that the specific method of step 2 is: first, dividing the image into different scales; then computing the saliency map on each scale; finally, computing the fused saliency map by multiplying the saliency value at each scale by the corresponding adaptive weight.
6. The image saliency detection method based on low-rank multiscale fusion according to claim 5, characterized in that the adaptive weight \omega_i is defined to decrease with the inconsistency of the i-th scale's saliency map, normalized by a partition function Z;
the fused saliency map is calculated using the following formula:

S_{map}^{fuse} = \sum_i \omega_i \cdot S_{map}^{i}

where \omega_i denotes the adaptive weight of the saliency map at the i-th scale, S_{map}^{i} denotes the saliency map at the i-th scale, and S_{map}^{fuse} denotes the saliency map after fusing multiple scales.
7. The image saliency detection method based on low-rank multiscale fusion according to claim 1, characterized in that the processing of step 3 includes:
(1) smoothing the current image so that the image achieves spatial smoothness;
(2) performing co-saliency detection on the images.
8. The image saliency detection method based on low-rank multiscale fusion according to claim 7, characterized in that the smoothing of step (1) is realized using the following energy function:

E = \sum_i \omega_i^{bg} s_i^2 + \sum_i \omega_i^{fg} (s_i - 1)^2 + \sum_{i,\, j \in Nei(i)} \omega_{ij} (s_i - s_j)^2

where s_i denotes the saliency value of each superpixel i, \omega_i^{bg} denotes the background probability, \omega_i^{fg} denotes the foreground probability, and Nei(i) denotes the neighborhood of the i-th superpixel; the weight \omega_{ij} is defined in terms of the L2 distance between the CIE-LAB color means of superpixels i and j, with σ = 10.
9. The image saliency detection method based on low-rank multiscale fusion according to claim 7, characterized in that the co-saliency detection of step (2) comprises the following steps:
1. single-image saliency detection: for a given set of images I_set = {I_1, I_2, ..., I_n}, computing the single saliency map of each image, with S_i denoting the single saliency map of the i-th image;
2. binary segmentation: dividing the single saliency map into a binary mask M_i using an adaptive threshold T_i, defined as:
T_i = α mean (S_i)
where α = 2;
3. co-saliency prior estimation: the GMM algorithm uses 5 Gaussian components to build a color model G_i for the foreground pixels of the i-th picture, which is then used together with the estimated mask M_j to estimate the foreground probability in the j-th picture; each picture thus obtains n estimates of its foreground probability, and the co-saliency prior of each picture is computed as the average of these estimates;
4. co-saliency computation: merging the co-saliency prior into the single-image saliency detection model to obtain the final co-saliency image.
CN201611110790.4A 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion Pending CN106780450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611110790.4A CN106780450A (en) 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611110790.4A CN106780450A (en) 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion

Publications (1)

Publication Number Publication Date
CN106780450A true CN106780450A (en) 2017-05-31

Family

ID=58874396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611110790.4A Pending CN106780450A (en) 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion

Country Status (1)

Country Link
CN (1) CN106780450A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527348A (en) * 2017-07-11 2017-12-29 湖州师范学院 Conspicuousness detection method based on multi-scale division
CN107909078A (en) * 2017-10-11 2018-04-13 天津大学 Conspicuousness detection method between a kind of figure
CN108437933A (en) * 2018-02-10 2018-08-24 深圳智达机械技术有限公司 A kind of vehicle startup system
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN108961196A (en) * 2018-06-21 2018-12-07 华中科技大学 A kind of 3D based on figure watches the conspicuousness fusion method of point prediction attentively
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN109978819A (en) * 2019-01-22 2019-07-05 安徽海浪智能技术有限公司 A method of segmentation retinal vessel is detected based on low scale blood vessel
CN116994006A (en) * 2023-09-27 2023-11-03 江苏源驶科技有限公司 Collaborative saliency detection method and system for fusing image saliency information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
CN103700091A (en) * 2013-12-01 2014-04-02 北京航空航天大学 Image significance object detection method based on multiscale low-rank decomposition and with sensitive structural information
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
CN104392463A (en) * 2014-12-16 2015-03-04 西安电子科技大学 Image salient region detection method based on joint sparse multi-scale fusion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
CN103700091A (en) * 2013-12-01 2014-04-02 北京航空航天大学 Image significance object detection method based on multiscale low-rank decomposition and with sensitive structural information
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
CN104392463A (en) * 2014-12-16 2015-03-04 西安电子科技大学 Image salient region detection method based on joint sparse multi-scale fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RUI HUANG et al.: "SALIENCY AND CO-SALIENCY DETECTION BY LOW-RANK MULTISCALE FUSION", 2015 IEEE International Conference on Multimedia and Expo *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527348B (en) * 2017-07-11 2020-10-30 湖州师范学院 Significance detection method based on multi-scale segmentation
CN107527348A (en) * 2017-07-11 2017-12-29 湖州师范学院 Conspicuousness detection method based on multi-scale division
CN107909078A (en) * 2017-10-11 2018-04-13 天津大学 Conspicuousness detection method between a kind of figure
CN107909078B (en) * 2017-10-11 2021-04-16 天津大学 Inter-graph significance detection method
CN108437933A (en) * 2018-02-10 2018-08-24 深圳智达机械技术有限公司 A kind of vehicle startup system
CN108437933B (en) * 2018-02-10 2021-06-08 聊城市敏锐信息科技有限公司 Automobile starting system
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN108549891B (en) * 2018-03-23 2019-10-01 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN108961196A (en) * 2018-06-21 2018-12-07 华中科技大学 A kind of 3D based on figure watches the conspicuousness fusion method of point prediction attentively
CN108961196B (en) * 2018-06-21 2021-08-20 华中科技大学 Significance fusion method for 3D fixation point prediction based on graph
CN109325507B (en) * 2018-10-11 2020-10-16 湖北工业大学 Image classification method and system combining super-pixel saliency features and HOG features
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN109978819A (en) * 2019-01-22 2019-07-05 安徽海浪智能技术有限公司 A method of segmentation retinal vessel is detected based on low scale blood vessel
CN116994006A (en) * 2023-09-27 2023-11-03 江苏源驶科技有限公司 Collaborative saliency detection method and system for fusing image saliency information
CN116994006B (en) * 2023-09-27 2023-12-08 江苏源驶科技有限公司 Collaborative saliency detection method and system for fusing image saliency information

Similar Documents

Publication Publication Date Title
CN106780450A (en) A kind of image significance detection method based on low-rank Multiscale Fusion
CN105574534A (en) Significant object detection method based on sparse subspace clustering and low-order expression
CN103745468B (en) Significant object detecting method based on graph structure and boundary apriority
CN102663400B (en) LBP (length between perpendiculars) characteristic extraction method combined with preprocessing
CN103218605B (en) A kind of fast human-eye positioning method based on integral projection and rim detection
CN104299009B (en) License plate character recognition method based on multi-feature fusion
CN102592136A (en) Three-dimensional human face recognition method based on intermediate frequency information in geometry image
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN104091157A (en) Pedestrian detection method based on feature fusion
CN110060286B (en) Monocular depth estimation method
CN104636732A (en) Sequence deeply convinced network-based pedestrian identifying method
CN109034035A (en) Pedestrian's recognition methods again based on conspicuousness detection and Fusion Features
CN102708370A (en) Method and device for extracting multi-view angle image foreground target
CN113744311A (en) Twin neural network moving target tracking method based on full-connection attention module
CN102426653B (en) Static human body detection method based on second generation Bandelet transformation and star type model
CN103093470A (en) Rapid multi-modal image synergy segmentation method with unrelated scale feature
CN103886585A (en) Video tracking method based on rank learning
CN104751111A (en) Method and system for recognizing human action in video
CN104809457A (en) Three-dimensional face identification method and system based on regionalization implicit function features
CN104966054A (en) Weak and small object detection method in visible image of unmanned plane
CN107301643A (en) Well-marked target detection method based on robust rarefaction representation Yu Laplce's regular terms
CN103745209A (en) Human face identification method and system
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN104299238A (en) Organ tissue contour extraction method based on medical image
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531