CN106203430A - Saliency object detection method based on foreground concentration and background prior - Google Patents
Saliency object detection method based on foreground concentration and background prior
- Publication number
- CN106203430A CN106203430A CN201610531085.5A CN201610531085A CN106203430A CN 106203430 A CN106203430 A CN 106203430A CN 201610531085 A CN201610531085 A CN 201610531085A CN 106203430 A CN106203430 A CN 106203430A
- Authority
- CN
- China
- Prior art keywords
- background
- pixel
- saliency
- superpixel
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Abstract
A saliency object detection method based on foreground concentration and a background prior, with the following steps: one, image preprocessing; two, saliency based on foreground concentration; three, saliency based on the background prior; four, saliency fusion by optimization. The method obtains a concentration feature by means of hash coding and concentration weights, and combines it with a contrast feature based on a center prior to produce a foreground saliency map. Background seeds are obtained by removing from the boundary seeds the superpixels that are too similar to the foreground, and background saliency is obtained by computing the dissimilarity between each superpixel and the background seeds. Finally, a cost function comprising a background term, a foreground term and a smoothness term is built, and the final saliency is obtained by minimizing this cost function. The saliency maps obtained by the present invention uniformly highlight foreground targets while suppressing background noise well; the method is widely applicable to natural images, benefits subsequent applications such as target detection and target segmentation, and has practical application value.
Description
(1) Technical field
The present invention relates to a saliency object detection method based on foreground concentration and a background prior, and belongs to the fields of computer vision and digital image processing. It has broad application prospects in fields such as target segmentation and target recognition.
(2) Background art
The human eye readily notices objects in a scene whose attributes differ markedly from their surroundings, and this automatic perception ability easily separates out the interesting parts of a scene. This significant difference between an object and its surrounding environment, together with the attention of the human eye that it causes, is referred to as visual saliency. According to classical saliency theory, visual attention mechanisms can be divided into two classes: top-down and bottom-up. The top-down mechanism is task-driven: in this mechanism, the consciousness and task of the observer determine which parts of the image are salient, so the subjective view of the observer plays a major role. The bottom-up mechanism is data-driven, i.e., determined by the information contained in the image itself; the saliency of an object is usually determined by its difference from its surroundings.
In the field of computer vision, the top-down attention mechanism is difficult to study because it rests on human subjective consciousness, so current saliency research mainly focuses on the bottom-up mechanism. A classic in bottom-up research is the Itti model proposed in 1998: it extracts three features (brightness, color and orientation) at multiple scales as image features, obtains a feature map for each feature by filtering and center-surround differences, and finally fuses the three feature maps by linear summation to obtain the final saliency map. In 2007, Liu et al. extracted three features from salient targets (multi-scale contrast, center-surround contrast and color spatial distribution) and combined them with a conditional random field model to obtain the final saliency detection result. In 2009, Hou et al. proposed a saliency computation model based on the spectral residual: in the Fourier domain, the difference between the original image information and its redundancy gives the spectral residual, which is transformed back to the spatial domain to obtain the saliency map. In 2013, Jiang et al. proposed the UFO model, which measures saliency by combining the uniqueness, focusness and objectness of the target. Zhu et al. computed the ratio between the length of a region's contour connected to the border and the region's area to obtain a boundary-connectivity feature, computed a background-weighted contrast on this basis, and estimated the saliency of each region by optimization.
Conventional saliency models usually start only from the target, or only from the background. The present invention combines the characteristics of the foreground target with the advantages of the background prior, proposes a computation method for foreground concentration and a selection method for background seeds, and fuses foreground with background by means of optimization, fully highlighting the foreground while suppressing the background.
(3) Summary of the invention
(1) Purpose of the present invention
To make up for the deficiencies of traditional methods, the present invention starts from a foreground-concentration prior and a background prior, and provides a saliency object detection method based on foreground concentration and a background prior.
The concentration prior in the present invention combines concentration with a center prior, while the background prior starts from the image boundary. Observation of a large number of images shows that salient targets tend to be compactly distributed within the image, whereas the background is distributed more broadly, often over the whole image; based on this finding, the present invention constructs the concentration prior. In addition, by photographic convention the target is usually located near the center of the image. Many existing methods use the image center as the center prior, but this is error-prone; to solve this problem, the present invention adopts a center prior based on a convex hull, which can adaptively select a more reliable center for each image. Observation of a large number of images also shows that the parts close to the image boundary are mostly background, so many existing methods select the image boundary as the background prior. In practice, however, there are cases where the boundary contains part of the salient target; to handle this situation, the present invention proposes a selection method for background seed points, thereby providing a more accurate background prior.
(2) Technical scheme
The saliency object detection method based on foreground concentration and a background prior of the present invention comprises the following concrete steps:
Step one: image preprocessing. For the subsequent steps, first, a Gaussian mixture model of the input image is built to divide the input image into multiple layers, and a hash transform is used to obtain a binary code for each layer. Next, superpixel segmentation divides the input image into many color-similar, boundary-preserving superpixels, and the mean position and mean color of each superpixel are computed. In addition, the convex hull containing the salient target is extracted from the input image, and the center of the convex hull is taken as the center prior.
In "using a hash transform to obtain the binary code of each layer" in step one, the procedure is as follows. First, a Gaussian mixture model of the input image is built; each component of the model represents one color, so the colors of the input image are divided into 6 classes, and the probability of each pixel belonging to each class is obtained. The probability that the pixels belong to each layer can be represented as an image, which amounts to decomposing the input image into 6 parts, i.e., 6 gray-level images whose gray values represent membership degrees. Each of these 6 images is then downsampled to a size of 8 × 8 and its gray mean is computed; pixels whose gray value exceeds the mean are labeled 1, the others 0, thus yielding the 64-bit binary code corresponding to each layer.
Step two: saliency based on foreground concentration. First, the similarity between the binary codes of the layers is used as a similarity measure to cluster the layers of the Gaussian mixture model of the input image; the concentration of each class, computed with respect to the center prior, is then used as a weight for fusing the classes into a concentration feature. Next, the global contrast of each superpixel, combined with the center prior, gives a contrast feature. Finally, the concentration feature is multiplied by the contrast feature to produce the foreground concentration saliency map.
In "clustering the layers of the Gaussian mixture model of the input image" in step two, the procedure is as follows. The reciprocal of the Euclidean distance between the binary codes corresponding to the layers of the Gaussian mixture model is used as the similarity measure, and Rodriguez's density-peak clustering method groups the 6 layers into 3 classes, representing the foreground, background and shadow parts of the image. The probability that each pixel belongs to class K among these three classes is then:
where p(k | I_x) is the probability that pixel I_x belongs to the k-th component of the Gaussian mixture model and this k-th component belongs to class K; this is equivalent to summing the layer images belonging to class K.
In "computing the concentration of each class with respect to the center prior and using it as a weight for fusion into the concentration feature" in step two, the computation is as follows: the three class images obtained by clustering are summed with their concentrations as weights, giving the concentration feature map:
Comp(K) is the concentration corresponding to the class-K image:
Step three: saliency based on the background prior. First, the superpixels connected to the image boundary are taken as background seeds. Then the foreground saliency map obtained in step two is binarized, the superpixels labeled 1 are taken as foreground seed points, the similarity between the other superpixels and the foreground seeds is computed, and a threshold is determined. The boundary superpixels whose similarity to the foreground seeds exceeds the threshold are removed from the background seeds, yielding the final background seed set. Finally, the contrast between each superpixel and the background seeds gives the background saliency.
In "computing the similarity between the other superpixels and the foreground seeds" in step three, the computation is as follows:
FS denotes the set of foreground seed points.
In "removing from the background seeds the boundary superpixels whose similarity to the foreground seeds exceeds the threshold" in step three, the removal proceeds as follows: the similarity threshold T is determined by the Otsu algorithm; the boundary superpixels whose similarity to the foreground seeds exceeds T are removed from the background seeds, yielding the final background seed set BS. Finally, the contrast between each superpixel and the background seeds is taken as the background saliency:
Step four: saliency fusion by optimization. The fusion problem is treated as an optimization problem: a cost function comprising a foreground term, a background term and a smoothness term is built to combine foreground and background, and the final saliency map is obtained by minimizing the cost function.
In said step four, the cost function combining foreground and background is first built:
Foreground denotes the foreground term, Background denotes the background term, and Smoothness is the smoothness term. S(i) is the final saliency mean of the i-th superpixel, and the final saliency map is obtained by minimizing the cost function. α is a weight balancing the influence of the foreground saliency and the background saliency on the final saliency, and λ is a weight adjusting the strength of the smoothness term, i.e., the smoothness of the final saliency.
Finally, the final saliency S is obtained by minimizing the cost function.
Through the above steps, the detection method combines the foreground concentration and the background prior of the image, highlights the foreground and suppresses the background well, and can detect image targets relatively accurately. It has practical application value for other image processing fields such as target segmentation, target tracking and target retrieval.
(3) Advantages of the present invention compared with the prior art:
First, taking the center of the convex hull as the center prior, the present invention proposes a method for computing the concentration feature relative to this center and combines it with a global contrast based on the center prior. This yields relatively complete salient objects and fully highlights the saliency of the foreground.
Second, the present invention proposes a foreground-based background seed selection algorithm, which avoids foreground parts located at the boundary being mistaken for background seeds, thus improving the accuracy of the background prior. The saliency computation based on the background prior effectively suppresses the background parts of the saliency map.
Finally, the present invention treats the fusion of foreground and background saliency as an optimization problem. By combining foreground and background through a cost function, it makes full use of the advantages of both the foreground and the background saliency maps, produces smooth transitions in the saliency map, fully highlights the foreground, and suppresses the background well.
(4) Brief description of the drawings
Fig. 1 is a flow chart of the detection method of the present invention.
(5) Detailed description of the invention
For a better understanding of the technical scheme of the present invention, embodiments of the present invention are described further below with reference to the accompanying drawings.
The flow chart of the present invention is shown in Fig. 1. The present invention is a saliency object detection method based on foreground concentration and a background prior; its concrete implementation steps are as follows:
Step one: image preprocessing
First, a Gaussian mixture model of the input image is built; each component of the model represents one color, so the colors of the input image can be divided into 6 classes, and the probability of each pixel belonging to the k-th color class is obtained:
{ω_k, μ_k, Σ_k} are the parameters of the Gaussian mixture model. The probability that the pixels belong to each layer can be represented as an image, which amounts to decomposing the input image into 6 parts, i.e., 6 gray-level images whose gray values represent membership degrees. Each of these 6 images is then downsampled to a size of 8 × 8 and its gray mean is computed; pixels whose gray value exceeds the mean are labeled 1, the others 0, so each layer yields a 64-bit binary code.
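The downsample-and-threshold step above is essentially an average hash. A minimal pure-Python sketch (the 8 × 8 grid is assumed to be already downsampled; the GMM layer decomposition is omitted, and `layer_hash` is a hypothetical helper name):

```python
def layer_hash(layer8x8):
    """Average-hash a downsampled 8x8 membership layer into a 64-bit code.

    layer8x8: list of 8 rows, each a list of 8 gray values.
    Returns a list of 64 bits: 1 where the value exceeds the layer mean.
    """
    values = [v for row in layer8x8 for v in row]
    mean = sum(values) / len(values)
    return [1 if v > mean else 0 for v in values]

# Illustrative layer: bright left half, dark right half.
layer = [[200] * 4 + [10] * 4 for _ in range(8)]
code = layer_hash(layer)
```

Each of the 6 membership layers produces one such 64-bit code, which is all the later clustering step needs.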
Then the SLIC algorithm is used to over-segment the input image into M = 200 superpixels, and the mean position μ_i and mean color c_i of each superpixel are computed:
where I_c is the color vector of pixel I_x, I_μ is the corresponding spatial coordinate vector, and q_i is the number of pixels contained in superpixel P_i.
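The per-superpixel means can be accumulated in one pass over the label map. A sketch under the assumption that the segmentation is already given as one label per pixel (the SLIC step itself is omitted):

```python
from collections import defaultdict

def superpixel_means(labels, colors, coords):
    """Mean color c_i and mean position mu_i for each superpixel.

    labels: superpixel index per pixel; colors: color vector per pixel;
    coords: coordinate vector per pixel.
    Returns {label: (mean_color, mean_position)}.
    """
    sums = defaultdict(lambda: [None, None, 0])  # color sum, pos sum, count q_i
    for lab, col, pos in zip(labels, colors, coords):
        entry = sums[lab]
        if entry[2] == 0:
            entry[0], entry[1] = list(col), list(pos)
        else:
            entry[0] = [a + b for a, b in zip(entry[0], col)]
            entry[1] = [a + b for a, b in zip(entry[1], pos)]
        entry[2] += 1
    return {lab: ([c / q for c in csum], [p / q for p in psum])
            for lab, (csum, psum, q) in sums.items()}

means = superpixel_means([0, 0, 1],
                         [(0, 0, 0), (2, 2, 2), (5, 5, 5)],
                         [(0, 0), (2, 0), (4, 4)])
```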
Finally, Harris corner detection is applied to the input color image: the Harris corner energy function of the input image gives an energy map, the points with the largest energy values are chosen, and points near the image boundary are rejected, yielding calibrated salient points. All salient points are enclosed by a convex hull that represents the salient region, and the center of the convex hull is taken as the center prior.
Step two: saliency based on foreground concentration
First, the reciprocal of the Euclidean distance between the binary codes corresponding to the layers of the Gaussian mixture model is used as the similarity measure, and Rodriguez's density-peak clustering method groups the 6 layers into 3 classes, representing the foreground, background and shadow parts of the image. The probability that each pixel belongs to class K among these three classes is:
where p(k | I_x) is the probability that pixel I_x belongs to the k-th component of the Gaussian mixture model and this k-th component belongs to class K; this is equivalent to summing the layer images belonging to class K. The three class images are then summed with their concentrations as weights to obtain the concentration feature:
Comp(K) is the concentration corresponding to the class-K image, with the concrete formula:
x is the coordinate of pixel I_x, and μ is the coordinate of the image center.
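The layer similarity can be computed directly on the 64-bit codes. For 0/1 vectors the squared Euclidean distance equals the Hamming distance, so a short sketch (the small ε guarding against identical codes is an implementation assumption, not from the patent):

```python
import math

def code_similarity(a, b, eps=1e-6):
    """Reciprocal Euclidean distance between two binary layer codes.

    For 0/1 vectors the squared Euclidean distance is the Hamming
    distance, so dist = sqrt(hamming); eps avoids division by zero
    when two layers hash identically.
    """
    hamming = sum(x != y for x, y in zip(a, b))
    return 1.0 / (math.sqrt(hamming) + eps)

a = [0] * 64
b = [0] * 60 + [1] * 4   # differs in 4 bits -> distance 2
```

These pairwise similarities are what the density-peak clustering step consumes when grouping the 6 layers into 3 classes.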
Then the global contrast combined with the center prior is computed by the formula:
c_i denotes the mean color of superpixel i and μ_i its mean position; σ_p is a weight adjusting the relative influence of color and spatial position, and σ_c is a weight controlling the influence of the center prior.
Finally, the concentration feature and the contrast feature are combined multiplicatively to obtain the final foreground saliency:
S_fg(i) = S_C(i) · S_U(i)    (7)
S_C(i) denotes the mean concentration feature value of superpixel i.
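The exact contrast formula is not reproduced in the patent text (the formula images are missing), so the sketch below assumes a common Gaussian-weighted global contrast consistent with the stated roles of c_i, μ_i, σ_p and σ_c: color difference weighted by spatial proximity, attenuated by distance to the center prior. Scalar colors and 1-D positions stand in for the real vectors to keep the sketch short; this is an illustration of the multiplicative fusion S_fg(i) = S_C(i)·S_U(i), not the patented formula itself:

```python
import math

def foreground_saliency(colors, positions, center,
                        sigma_p=0.25, sigma_c=0.33, concentration=None):
    """Assumed Gaussian-weighted global contrast with a center prior,
    fused multiplicatively with a per-superpixel concentration feature.

    colors/positions: per-superpixel mean color and position (scalars
    here for brevity); center: the convex-hull center prior.
    """
    n = len(colors)
    contrast = []
    for i in range(n):
        s = sum(abs(colors[i] - colors[j]) *
                math.exp(-(positions[i] - positions[j]) ** 2 / sigma_p ** 2)
                for j in range(n) if j != i)
        s *= math.exp(-(positions[i] - center) ** 2 / sigma_c ** 2)
        contrast.append(s)
    conc = concentration if concentration is not None else [1.0] * n
    return [u * c for u, c in zip(conc, contrast)]  # S_fg = S_C * S_U

# A color-distinct superpixel at the center prior scores highest.
sal = foreground_saliency([1.0, 0.0, 0.0], [0.5, 0.2, 0.8], 0.5)
```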
Step three: saliency based on the background prior
First, the superpixels connected to the image boundary are taken as background seeds. Then the foreground saliency map obtained in step two is binarized, the superpixels labeled 1 are taken as foreground seed points, and the similarity of the other superpixels to the foreground seeds is computed by the concrete formula:
where FS denotes the set of foreground seed points.
The similarity threshold T is determined by the Otsu algorithm; the boundary superpixels whose similarity to the foreground seeds exceeds T are removed from the background seeds, yielding the final background seed set BS. Finally, the contrast between each superpixel and the background seeds is computed to obtain the background saliency, by the concrete formula:
Step four: saliency fusion by optimization
A cost function comprising a foreground term, a background term and a smoothness term is built to combine foreground and background, with the concrete formula:
Foreground denotes the foreground term, Background denotes the background term, and Smoothness is the smoothness term. S(i) is the final saliency mean of the i-th superpixel, and the final saliency map is obtained by minimizing the cost function. α is a weight balancing the influence of the foreground saliency and the background saliency on the final saliency, and λ is a weight adjusting the strength of the smoothness term, i.e., the smoothness of the final saliency.
Finally, the final saliency S is obtained by minimizing the cost function.
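A quadratic foreground/background/smoothness cost of this kind has a closed-form minimizer (a linear system). The exact terms are not reproduced in the patent text, so the sketch below assumes the common least-squares form Σ_i [α·s_fg(i)·(S_i − 1)² + (1 − α)·s_bg(i)·S_i²] + λ·Σ_{i,j} w_ij·(S_i − S_j)², solved with numpy:

```python
import numpy as np

def fuse_saliency(s_fg, s_bg, w_smooth, alpha=0.5, lam=1.0):
    """Minimize an assumed quadratic cost combining foreground and
    background saliency with a pairwise smoothness term.

    Setting the gradient to zero gives
      (diag(alpha*s_fg + (1-alpha)*s_bg) + 2*lam*L) S = alpha*s_fg,
    where L is the graph Laplacian of the smoothness weights w_smooth.
    """
    s_fg = np.asarray(s_fg, float)
    s_bg = np.asarray(s_bg, float)
    w = np.asarray(w_smooth, float)
    lap = np.diag(w.sum(axis=1)) - w                     # graph Laplacian
    a = np.diag(alpha * s_fg + (1 - alpha) * s_bg) + 2 * lam * lap
    return np.linalg.solve(a, alpha * s_fg)

# Without smoothing, a pure foreground node goes to 1 and a pure
# background node to 0; smoothing pulls neighbors toward each other.
s_plain = fuse_saliency([1, 0], [0, 1], [[0, 0], [0, 0]])
s_smooth = fuse_saliency([1, 0], [0, 1], [[0, 1], [1, 0]], lam=10)
```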
Claims (6)
1. A saliency object detection method based on foreground concentration and a background prior, characterized in that the concrete steps of the method are as follows:
step one: image preprocessing; for the subsequent steps, first, a Gaussian mixture model of the input image is built to divide the input image into multiple layers, and a hash transform is used to obtain a binary code for each layer; next, superpixel segmentation divides the input image into many color-similar, boundary-preserving superpixels, and the mean position and mean color of each superpixel are computed; in addition, the convex hull containing the salient target is extracted from the input image, and the center of the convex hull is taken as the center prior;
step two: saliency based on foreground concentration; first, the similarity between the binary codes of the layers is used as a similarity measure to cluster the layers of the Gaussian mixture model of the input image, and the concentration of each class, computed with respect to the center prior, is used as a weight for fusing the classes into a concentration feature; next, the global contrast of each superpixel combined with the center prior gives a contrast feature; finally, the concentration feature is multiplied by the contrast feature to give the foreground concentration saliency map;
step three: saliency based on the background prior; first, the superpixels connected to the image boundary are taken as background seeds; then the foreground saliency map obtained in step two is binarized, the superpixels labeled 1 are taken as foreground seed points, the similarity between the other superpixels and the foreground seeds is computed, and a threshold is determined; the boundary superpixels whose similarity to the foreground seeds exceeds the threshold are removed from the background seeds, yielding the final background seed set; finally, the contrast between each superpixel and the background seeds gives the background saliency;
step four: saliency fusion by optimization; the fusion problem is treated as an optimization problem: a cost function comprising a foreground term, a background term and a smoothness term is built to combine foreground and background, and the final saliency map is obtained by minimizing the cost function;
in said step four, the cost function combining foreground and background is first built:
Foreground denotes the foreground term, Background denotes the background term, and Smoothness is the smoothness term; S(i) is the final saliency mean of the i-th superpixel, and the final saliency map is obtained by minimizing the cost function; α is a weight balancing the influence of the foreground saliency and the background saliency on the final saliency, and λ is a weight adjusting the strength of the smoothness term, i.e., the smoothness of the final saliency;
finally, the final saliency S is obtained by minimizing the cost function;
through the above steps, the detection method combines the foreground concentration and the background prior of the image, highlights the foreground and suppresses the background well, detects image targets relatively accurately, and has practical application value for other image processing fields such as target segmentation, target tracking and target retrieval.
2. The saliency object detection method based on foreground concentration and a background prior according to claim 1, characterized in that in "using a hash transform to obtain the binary code of each layer" in step one, the procedure is as follows: first, a Gaussian mixture model of the input image is built; each component of the model represents one color, so the colors of the input image are divided into 6 classes, and the probability of each pixel belonging to each class is obtained; the probability that the pixels belong to each layer can be represented as an image, which amounts to decomposing the input image into 6 parts, i.e., 6 gray-level images whose gray values represent membership degrees; each of these 6 images is then downsampled to a size of 8 × 8 and its gray mean is computed; pixels whose gray value exceeds the mean are labeled 1, the others 0, thus obtaining the 64-bit binary code corresponding to each layer.
3. The saliency object detection method based on foreground concentration and a background prior according to claim 1, characterized in that in "clustering the layers of the Gaussian mixture model of the input image" in step two, the procedure is as follows: first, the reciprocal of the Euclidean distance between the binary codes corresponding to the layers of the Gaussian mixture model is used as the similarity measure, and Rodriguez's density-peak clustering method groups the 6 layers into 3 classes, representing the foreground, background and shadow parts of the image; the probability that each pixel belongs to class K among these three classes is:
where p(k | I_x) is the probability that pixel I_x belongs to the k-th component of the Gaussian mixture model and this k-th component belongs to class K; this is equivalent to summing the layer images belonging to class K.
4. The saliency object detection method based on foreground concentration and a background prior according to claim 1, characterized in that in "computing the concentration of each class with respect to the center prior and using it as a weight for fusion into the concentration feature" in step two, the computation is as follows: the three class images obtained by clustering are summed with their concentrations as weights, giving the concentration feature map:
Comp(K) is the concentration corresponding to the class-K image:
5. The saliency object detection method based on foreground concentration and a background prior according to claim 1, characterized in that in "computing the similarity between the other superpixels and the foreground seeds" in step three, the computation is as follows:
FS denotes the set of foreground seed points.
6. The saliency object detection method based on foreground concentration and a background prior according to claim 1, characterized in that in "removing from the background seeds the boundary superpixels whose similarity to the foreground seeds exceeds the threshold" in step three, the removal proceeds as follows: the similarity threshold T is determined by the Otsu algorithm; the boundary superpixels whose similarity to the foreground seeds exceeds T are removed from the background seeds, yielding the final background seed set BS; finally, the contrast between each superpixel and the background seeds is taken as the background saliency:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610531085.5A CN106203430B (en) | 2016-07-07 | 2016-07-07 | Saliency object detection method based on foreground concentration and background prior |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106203430A true CN106203430A (en) | 2016-12-07 |
CN106203430B CN106203430B (en) | 2017-11-03 |
Family
ID=57472414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610531085.5A Expired - Fee Related CN106203430B (en) | 2016-07-07 | 2016-07-07 | Saliency object detection method based on foreground concentration and background prior |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106203430B (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651853A (en) * | 2016-12-28 | 2017-05-10 | 北京工业大学 | Establishment method for 3D saliency model based on prior knowledge and depth weight |
CN106778903A (en) * | 2017-01-09 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | Conspicuousness detection method based on Sugeno fuzzy integrals |
CN106780422A (en) * | 2016-12-28 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | A kind of notable figure fusion method based on Choquet integrations |
CN106846331A (en) * | 2016-12-22 | 2017-06-13 | 中国科学院文献情报中心 | Joint vision significance and figure cut the image automatic segmentation method of optimization |
- 2016-07-07: CN application CN201610531085.5A filed; granted as CN106203430B (status: Expired - Fee Related, not active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722891A (en) * | 2012-06-12 | 2012-10-10 | 大连理工大学 | Method for detecting image significance |
CN103914834A (en) * | 2014-03-17 | 2014-07-09 | 上海交通大学 | Salient object detection method based on foreground prior and background prior |
CN103996198A (en) * | 2014-06-04 | 2014-08-20 | 天津工业大学 | Method for detecting region of interest in complicated natural environment |
US20160004929A1 (en) * | 2014-07-07 | 2016-01-07 | Geo Semiconductor Inc. | System and method for robust motion detection |
US20160104054A1 (en) * | 2014-10-08 | 2016-04-14 | Adobe Systems Incorporated | Saliency Map Computation |
Non-Patent Citations (2)
Title |
---|
LU LI et al.: "Contrast and Distribution based Saliency Detection in Infrared Images", 2015 IEEE 17th International Workshop on Multimedia Signal Processing (MMSP) * |
WANGJIANG ZHU et al.: "Saliency Optimization from Robust Background Detection", 2014 IEEE Conference on Computer Vision and Pattern Recognition * |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846331A (en) * | 2016-12-22 | 2017-06-13 | 中国科学院文献情报中心 | Automatic image segmentation method jointly optimizing visual saliency and graph cut |
CN106651853A (en) * | 2016-12-28 | 2017-05-10 | 北京工业大学 | Establishment method for 3D saliency model based on prior knowledge and depth weight |
CN106780422A (en) * | 2016-12-28 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | A saliency map fusion method based on Choquet integrals |
CN106651853B (en) * | 2016-12-28 | 2019-10-18 | 北京工业大学 | The method for building up of 3D conspicuousness model based on priori knowledge and depth weight |
CN106778903A (en) * | 2017-01-09 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | Saliency detection method based on Sugeno fuzzy integrals |
CN108537242A (en) * | 2017-03-03 | 2018-09-14 | 防城港市港口区思达电子科技有限公司 | A novel saliency object detection method |
CN108537819A (en) * | 2017-03-03 | 2018-09-14 | 防城港市港口区思达电子科技有限公司 | Superpixel moving target detection method |
CN107133558A (en) * | 2017-03-13 | 2017-09-05 | 北京航空航天大学 | An infrared pedestrian saliency detection method based on probability propagation |
CN107133558B (en) * | 2017-03-13 | 2020-10-20 | 北京航空航天大学 | Infrared pedestrian significance detection method based on probability propagation |
CN107424142A (en) * | 2017-03-30 | 2017-12-01 | 上海万如科技发展有限公司 | A weld joint recognition method based on image saliency detection |
CN107424142B (en) * | 2017-03-30 | 2020-05-19 | 上海万如科技发展有限公司 | Weld joint identification method based on image significance detection |
CN107085725A (en) * | 2017-04-21 | 2017-08-22 | 河南科技大学 | A method for clustering image regions via LLC based on an adaptive codebook |
CN107085725B (en) * | 2017-04-21 | 2020-08-14 | 河南科技大学 | Method for clustering image areas through LLC based on self-adaptive codebook |
CN107766857A (en) * | 2017-10-17 | 2018-03-06 | 天津大学 | Visual saliency detection algorithm based on graph model structure and label propagation |
CN107886507A (en) * | 2017-11-14 | 2018-04-06 | 长春工业大学 | A salient region detection method based on image background and spatial position |
CN107886507B (en) * | 2017-11-14 | 2018-08-21 | 长春工业大学 | A kind of salient region detecting method based on image background and spatial position |
CN107833220B (en) * | 2017-11-28 | 2021-06-11 | 河海大学常州校区 | Fabric defect detection method based on deep convolutional neural network and visual saliency |
CN107833220A (en) * | 2017-11-28 | 2018-03-23 | 河海大学常州校区 | Fabric defect detection method based on deep convolutional neural networks and visual saliency |
CN107992874B (en) * | 2017-12-20 | 2020-01-07 | 武汉大学 | Image salient target region extraction method and system based on iterative sparse representation |
CN107992874A (en) * | 2017-12-20 | 2018-05-04 | 武汉大学 | Image salient target region extraction method and system based on iterative sparse representation |
CN108416347A (en) * | 2018-01-04 | 2018-08-17 | 天津大学 | Salient target detection algorithm based on boundary prior and iterative optimization |
CN108416768B (en) * | 2018-03-01 | 2021-05-25 | 南开大学 | Binary-based foreground image similarity evaluation method |
CN108416768A (en) * | 2018-03-01 | 2018-08-17 | 南开大学 | A binary-based foreground image similarity evaluation method |
CN108537816B (en) * | 2018-04-17 | 2021-08-31 | 福州大学 | Salient object segmentation method based on superpixel and background connection prior |
CN108537816A (en) * | 2018-04-17 | 2018-09-14 | 福州大学 | A salient object segmentation method based on superpixel and background connection prior |
CN108805139A (en) * | 2018-05-07 | 2018-11-13 | 南京理工大学 | An image similarity computation method based on frequency-domain visual saliency analysis |
US11151725B2 (en) | 2019-05-21 | 2021-10-19 | Beihang University | Image salient object segmentation method and apparatus based on reciprocal attention between foreground and background |
CN110245659B (en) * | 2019-05-21 | 2021-08-13 | 北京航空航天大学 | Image salient object segmentation method and device based on foreground and background interrelation |
CN110245659A (en) * | 2019-05-21 | 2019-09-17 | 北京航空航天大学 | Image salient object segmentation method and device based on foreground-background correlation |
CN110287802A (en) * | 2019-05-29 | 2019-09-27 | 南京邮电大学 | Human eye fixation point prediction method based on optimized image foreground and background seeds |
CN110287802B (en) * | 2019-05-29 | 2022-08-12 | 南京邮电大学 | Human eye gaze point prediction method based on optimized image foreground and background seeds |
CN110827309A (en) * | 2019-11-12 | 2020-02-21 | 太原理工大学 | Polarizer appearance defect segmentation method based on superpixels |
CN111144294A (en) * | 2019-12-26 | 2020-05-12 | 上海眼控科技股份有限公司 | Target identification method and device, computer equipment and readable storage medium |
CN111080748A (en) * | 2019-12-27 | 2020-04-28 | 北京工业大学 | Automatic picture synthesis system based on Internet |
CN111080748B (en) * | 2019-12-27 | 2023-06-02 | 北京工业大学 | Automatic picture synthesizing system based on Internet |
CN111815610A (en) * | 2020-07-13 | 2020-10-23 | 广东工业大学 | Lesion detection method and device for lesion images |
CN111815610B (en) * | 2020-07-13 | 2023-09-12 | 广东工业大学 | Lesion detection method and device for lesion image |
CN113379691A (en) * | 2021-05-31 | 2021-09-10 | 南方医科大学 | Breast lesion deep learning segmentation method based on prior guidance |
CN113379691B (en) * | 2021-05-31 | 2022-06-24 | 南方医科大学 | Breast lesion deep learning segmentation method based on prior guidance |
Also Published As
Publication number | Publication date |
---|---|
CN106203430B (en) | 2017-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106203430A (en) | A kind of significance object detecting method based on foreground focused degree and background priori | |
CN106909902B (en) | Remote sensing target detection method based on an improved hierarchical saliency model | |
CN109934200A (en) | RGB color remote sensing image cloud detection method and system based on improved M-Net | |
CN106462771A (en) | 3D image significance detection method | |
CN105138970B (en) | Polarimetric SAR image classification method based on spatial information | |
CN103971115B (en) | Automatic extraction method for newly-increased construction land image spots based on NDVI and PanTex index | |
CN109614985A (en) | An object detection method based on a densely connected feature pyramid network | |
CN101840581B (en) | Method for extracting profile of building from satellite remote sensing image | |
CN108009518A (en) | A hierarchical traffic sign recognition method based on fast binary convolutional neural networks | |
CN110909690A (en) | Method for detecting occluded face image based on region generation | |
CN106296638A (en) | Significance information acquisition device and significance information acquisition method | |
CN107292234A (en) | An indoor scene layout estimation method based on information edges and multi-modal features | |
CN103839267B (en) | Building extraction method based on morphological building indexes | |
CN110047139B (en) | Three-dimensional reconstruction method and system for specified target | |
CN102073995B (en) | Color constancy method based on texture pyramid and regularized local regression | |
CN107564022A (en) | Saliency detection method based on Bayesian Fusion | |
CN109766936A (en) | Image change detection method based on information transmission and attention mechanism | |
CN107330875A (en) | Water body surrounding environment change detection method based on forward and reverse heterogeneous remote sensing images | |
CN109948593A (en) | An MCNN-based crowd counting method combining global density features | |
CN106709517A (en) | Mangrove recognition method and system | |
CN110097101A (en) | A remote sensing image fusion and coastal zone classification method based on an improved reliability factor | |
CN107330861B (en) | Image salient object detection method based on diffusion distance high-confidence information | |
CN108446694A (en) | An object detection method and device | |
CN110263712A (en) | A coarse-to-fine pedestrian detection method based on region candidates | |
CN107886507B (en) | A kind of salient region detecting method based on image background and spatial position |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20171103; Termination date: 20200707 |