CN106780376A - Background image segmentation method based on saliency detection and a joint segmentation algorithm - Google Patents
- Publication number
- CN106780376A CN106780376A CN201611116554.3A CN201611116554A CN106780376A CN 106780376 A CN106780376 A CN 106780376A CN 201611116554 A CN201611116554 A CN 201611116554A CN 106780376 A CN106780376 A CN 106780376A
- Authority
- CN
- China
- Prior art keywords
- image
- saliency
- common
- background
- energy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Abstract
The present invention provides a background image segmentation method based on saliency detection and a joint segmentation algorithm, comprising the following steps: S1, processing the original images to obtain single-image saliency maps and a multi-image joint saliency map; S2, taking the salient features in the multi-image joint saliency map as the common saliency of each single image, and distinguishing foreground from background; S3, segmenting the background image. The method can segment the common foreground out of multiple images in one pass and effectively resolves the segmentation adhesion that conventional segmentation methods exhibit during segmentation, thereby achieving accurate segmentation; and because an unsupervised saliency detection algorithm is used, the segmentation process is also fully automated.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a background image segmentation method based on saliency detection and a joint segmentation algorithm.
Background technology
Disease image recognition comprises image preprocessing, image segmentation, feature extraction and pattern recognition. Image segmentation is one of the key steps: segmentation precision directly affects the reliability of feature extraction and the accuracy of pattern recognition. Classical segmentation methods include thresholding, edge detection, segmentation based on statistical pattern recognition, and segmentation based on artificial neural networks.
Each existing segmentation method suits a specific situation. Threshold segmentation, for example, is computationally simple, efficient and fast, but it considers only gray values, ignores spatial features, and is sensitive to noise. Once the threshold is determined it is compared with the gray value of each pixel one by one; the pixels can be processed in parallel, and the result directly yields the image regions. However, threshold segmentation applies only to images with strong foreground-background contrast; in practice the contrast between object background and foreground varies across the image, so it is difficult to find a single suitable threshold. Edge detection locates places where the gray level or structure changes abruptly and thereby determines the edge distribution; first-order differential operators such as the Roberts, Prewitt and Sobel operators, and second-order operators such as the Laplacian and Kirsch operators, are commonly used, but they are usually applicable only to simple images with little noise. Classical segmentation methods therefore have certain limitations and deficiencies.
Under natural conditions, complex backgrounds, weather and illumination all strongly affect image quality, often leaving weak foreground-background contrast and large variation between images; conventional methods then segment poorly and adapt badly. At present no general segmentation method exists that applies to images of all features and all situations.
Graph-cut-based segmentation is a new approach to the segmentation problem that has appeared in recent years; it is broadly applicable across segmentation problems and segments well. But when foreground-background contrast is low, its results are prone to adhesion, which harms segmentation precision. Some scholars have proposed detecting strong features from multiple images sharing a common target or common foreground to help distinguish foreground from background; such methods are called joint segmentation (co-segmentation). Joint segmentation typically first models the images with a Markov random field and constructs an energy equation, then optimizes it with a graph-cut method to realize the segmentation.
Summary of the invention
The present invention provides a background image segmentation method based on saliency detection and a joint segmentation algorithm that overcomes, or at least partly solves, the above problems: it effectively resolves the segmentation adhesion that conventional methods exhibit during segmentation, achieving accurate segmentation, and by using an unsupervised saliency detection algorithm it also automates the segmentation process.
According to an aspect of the present invention, there is provided a background image segmentation method comprising the following steps:
S1, processing the original images to obtain single-image saliency maps and a multi-image joint saliency map;
S2, taking the salient features in the multi-image joint saliency map as the common saliency of each single image, and distinguishing foreground from background;
S3, segmenting the background image.
Preferably, step S1 specifically comprises: processing the images with an unsupervised co-saliency algorithm to obtain the single-image saliency map of every image and the multi-image joint saliency map.
Preferably, step S2 comprises:
S21, labeling the group of images having common saliency with an optimal scheme, and using a Markov random field to establish a joint segmentation energy equation for the labeled images;
S22, via the established energy equation, labeling the local feature pixels with common saliency in the single-image saliency maps as foreground and the remaining pixels as background.
Preferably, step S21 specifically comprises: modeling, with a Gaussian mixture model, the difference between the multi-image joint saliency map and the single-image saliency maps, and using it as the global constraint of the Markov random field model to construct the joint segmentation energy equation.
Preferably, in step S21, for images I = {I_1, …, I_i, …, I_N}, the energy equation of the joint segmentation is:
E(S) = E_A(S) + E_i(S)
where E_A(S) is the intra-image energy term of the single images and E_i(S) is the global energy between images; E_i^C is the common-saliency energy term of image I_i, E_i^S is the smoothness energy in image I_i, and η is the weight of the common-saliency term E_i^C; P(·) denotes a Gaussian probability distribution, v_k^i is the feature of pixel k (or j) of image I_i, s_k^i is the label of the k-th pixel of I_i, Θ_i^F and Θ_i^B denote the Gaussian-mixture-model parameters of the foreground and background of image I_i, and Θ_com denotes the Gaussian-mixture-model parameters of the common target composed of the foregrounds of all images.
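Read together, the terms above suggest the following overall shape for the energy. The sub-term formulas are not reproduced in this text, so the grouping below is a reconstruction from the term descriptions (η weighting E_i^C, a Gaussian data term over Θ_i^F and Θ_i^B, and a global term tying each foreground to Θ_com), not the patent's verbatim equations:

```latex
% Reconstructed sketch of the joint-segmentation energy; an
% assumption consistent with the term descriptions, not verbatim.
E(S) = E_A(S) + E_i(S), \qquad
E_A(S) = \sum_{i=1}^{N}\Bigl(\eta\,E_i^{C}(S_i) + E_i^{S}(S_i)\Bigr)

E_i^{C}(S_i) = -\sum_{k}\log P\bigl(v_k^{i}\mid s_k^{i};\,
  \Theta_i^{F},\Theta_i^{B}\bigr), \qquad
E_i(S) = -\sum_{i=1}^{N}\,\sum_{k:\,s_k^{i}=1}
  \log P\bigl(v_k^{i}\mid \Theta_{com}\bigr)
```

Under this reading, minimizing E(S) trades off per-image evidence (data plus smoothness) against agreement of every foreground with the group-wide common-target mixture Θ_com.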
Preferably, step S22 specifically comprises: via the energy equation, labeling the local feature pixels with common saliency in each image as foreground using the intra-image energy term. During labeling, the data term of the intra-image energy extracts the saliency map of every leaf in the group of images with the unsupervised saliency detection algorithm, while the smoothness term encourages local regions with similar feature values to receive consistent labels; at the same time, the global energy measures the difference between each image's foreground and the common target, forcing the foregrounds of all images to be consistent with the common target of the group.
Preferably, in step S22, when assigning foreground and background labels, local regions with similar feature values on an image are encouraged to receive consistent labels; the smoothness energy is:
where [f] is an indicator function whose value is 1 or 0 according to whether the prediction f is correct, v_k^i is the feature of pixel k (or j) of image I_i, N is the neighborhood system in the image, and β is a scale coefficient, obtainable via β = 1/(2⟨‖v_k^i − v_j^i‖²⟩), with ⟨·⟩ the expectation over image I_i. When adjacent pixels of image I_i receive different labels in the Markov random field model, E_i^S penalizes the discontinuity.
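As a concrete illustration (not the patent's own code), the smoothness term described above can be sketched as follows. The formula image is absent from this text, so the standard contrast-sensitive Potts form exp(−β‖v_k − v_j‖²) over 4-neighborhoods, with β set from the expected squared feature difference as described, is assumed:

```python
import numpy as np

def smoothness_energy(labels, feats):
    """Pairwise smoothness term over horizontal/vertical neighbours.

    labels: (H, W) binary array (1 = foreground, 0 = background)
    feats:  (H, W, D) per-pixel feature vectors

    Assumes the contrast-sensitive Potts form
      sum_{(k,j) in N} [s_k != s_j] * exp(-beta * ||v_k - v_j||^2)
    with beta = 1 / (2 * <||v_k - v_j||^2>), as the text describes.
    """
    # squared feature differences along the two neighbour directions
    dx = ((feats[:, 1:] - feats[:, :-1]) ** 2).sum(axis=-1)
    dy = ((feats[1:, :] - feats[:-1, :]) ** 2).sum(axis=-1)
    # beta from the expectation of squared differences over the image
    beta = 1.0 / (2.0 * np.concatenate([dx.ravel(), dy.ravel()]).mean() + 1e-12)
    # indicator of differing labels between neighbouring pixels
    ix = labels[:, 1:] != labels[:, :-1]
    iy = labels[1:, :] != labels[:-1, :]
    return np.exp(-beta * dx)[ix].sum() + np.exp(-beta * dy)[iy].sum()
```

A uniform labeling incurs zero smoothness cost; every label change across a neighbor pair adds a penalty that shrinks where the feature contrast is high, so cuts are steered toward strong edges.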
Preferably, in step S3, a standard graph-cut algorithm is applied iteratively so that the energy function is minimized, realizing the segmentation of the entire set of images.
Preferably, step S3 specifically comprises:
S31, modeling the color data set of the images, so that each image consists of pixels z_n in RGB color space;
S32, performing iterations within the graph cut, segmenting the background image by iterative energy minimization.
Compared with the prior art, the beneficial effects of the present invention are as follows. The invention first generates, with an unsupervised co-saliency detection algorithm, a common saliency map for every image in a group; these saliency maps are used to construct the intra-image energy function in the Markov random field. A Gaussian mixture model models the difference between the common target of the whole group's saliency maps and the single-image leaves, and serves as a new global constraint of the Markov random field optimization model to construct the global energy. Finally a standard graph-cut algorithm is applied iteratively so that the energy function is minimized, realizing the segmentation of the cotton seedling leaf images. The common foreground of multiple images can thus be segmented out in one pass, the segmentation adhesion of conventional methods during segmentation is effectively resolved for accurate segmentation, and the unsupervised saliency detection algorithm automates the segmentation process.
Brief description of the drawings
Fig. 1 is the method flow block diagram of the embodiment of the present invention;
Fig. 2 is a schematic flow diagram of the method of the embodiment applied to cotton background image segmentation;
Fig. 3 shows the common saliency maps generated for every image of the cotton original images in the embodiment;
Fig. 4 is a schematic diagram of the segmentation results of the cotton original images in the embodiment.
Specific embodiment
With reference to the accompanying drawings and examples, specific embodiments of the invention are described in further detail. The following embodiments illustrate the invention and do not limit its scope.
Fig. 1 shows the flow chart of a background image segmentation method based on saliency detection and a joint segmentation algorithm, comprising the following steps:
S1, processing the original images to obtain single-image saliency maps and a multi-image joint saliency map;
S2, taking the salient features in the multi-image joint saliency map as the common saliency of each single image, and distinguishing foreground from background;
S3, segmenting the background image.
Given a group of original images I = {I_1, …, I_i, …, I_N}, pixels are denoted z_k^i, N_i denotes the pixel lattice of image I_i, and x_k^i denotes the normalized position of pixel k in image I_i. The binary labeling of image I_i is denoted s_i, where s_k^i is the label of the k-th pixel of I_i: s_k^i = 1 denotes foreground and s_k^i = 0 denotes background, and n_i is the total pixel count of I_i. The intra-image cluster number is K_1 and the inter-image cluster number is K_2.
Step S1 processes the images with the unsupervised co-saliency algorithm, obtaining the single-image saliency map of every image and the multi-image joint saliency map.
In this embodiment, step S1 specifically comprises:
S11, clustering each original image internally into K_1 classes, computing the contrast cue and spatial cue of each class by the co-saliency detection method, and obtaining the single-image saliency maps;
S12, clustering across the images into K_2 classes, computing the contrast cue, spatial cue and correspondence cue of each class by the co-saliency detection method, and combining them with the single-image saliency maps to obtain the multi-image joint saliency map.
Specifically, step S11 comprises:
S111, first clustering the N images of the group I = {I_1, …, I_i, …, I_N} into K_1 classes, obtaining K_1 clusters; each cluster C_k is represented by a D-dimensional vector μ_k, the cluster center associated with C_k, and a function ψ: L² → {1, …, K} associates each pixel with its cluster index;
S112, computing the contrast cue and spatial cue of each class.
The contrast cue of a cluster C_k aggregates the L_2-norm distances in feature space from its center to the other cluster centers, where n_i is the pixel count of cluster C_i and N is the total pixel count of all images.
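The contrast-cue formula image is not reproduced in this text; the sketch below assumes the usual cluster-based form w_c(k) = Σ_{i≠k} (n_i/N)·‖μ_k − μ_i‖₂, which matches the term description above (L2 feature-space distance, cluster pixel counts n_i, total count N):

```python
import numpy as np

def contrast_cue(centers, counts):
    """Contrast cue per cluster: clusters whose centers lie far (in
    feature space) from populous clusters score higher.

    centers: (K, D) cluster centers mu_k
    counts:  (K,)   pixel counts n_k
    """
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    # pairwise L2 distances between all cluster centers
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # (K, K), zero diagonal
    # weight each distance by the other cluster's pixel share n_i / N
    return (dist * (counts[None, :] / N)).sum(axis=1)
```

For two clusters of sizes 3 and 1 at unit distance, the small cluster scores 0.75 and the large one 0.25: an outlying cluster contrasted against a populous one is the more salient.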
The spatial cue is computed with a Kronecker delta function δ(·) and a Gaussian kernel N(·) that measures the Euclidean distance between a pixel and the image center o_i of I_i, with variance σ² the normalized radius of the image; the normalization coefficient n_k is the pixel count of cluster C_k. As with the single-image model, the spatial cue ω^s describes position at the cluster level and represents the global center bias across the images.
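The spatial-cue formula image is likewise absent; the sketch below assumes the Gaussian-kernel-of-center-distance form implied by the term description (Euclidean distance to the image center o_i, variance σ², averaged over the cluster via n_k):

```python
import numpy as np

def spatial_cue(positions, cluster_ids, center, sigma2, K):
    """Spatial cue per cluster: average, over the cluster's pixels, of a
    Gaussian kernel of the distance to the image centre (center bias).

    positions:   (n, 2) normalised pixel positions x_k
    cluster_ids: (n,)   cluster index of each pixel
    center:      (2,)   image centre o_i
    sigma2:      float  normalised image radius (variance)
    """
    w = np.zeros(K)
    for k in range(K):
        pts = positions[cluster_ids == k]
        if len(pts):                                    # n_k > 0
            d2 = ((pts - center) ** 2).sum(axis=1)      # squared distance
            w[k] = np.exp(-d2 / (2.0 * sigma2)).mean()  # mean kernel value
    return w
```

A cluster concentrated at the center scores near 1; clusters hugging the borders decay toward 0, encoding the center bias the text describes.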
The above cluster-based contrast-cue and spatial-cue computations are executed in a loop until the clusters of all images in I = {I_1, …, I_i, …, I_N} have been processed.
S113, combining the contrast-cue and spatial-cue computations by the definition below, obtaining the joint saliency set of the K_1 clusters, where w_i denotes a saliency cue. Before the saliency cues are combined, each cue map is normalized using the score distribution of all clusters to balance the Gaussian function;
S114, computing the cluster-level joint saliency values, which form a discrete distribution, and then smoothing the joint saliency value of each pixel: the saliency likelihood of pixel x follows the aggregated pattern of the C_k clusters under a Gaussian distribution N;
S115, obtaining the single saliency maps by the formula below, and thereby the pixel-level joint saliency;
S116, executing the above pixel-based joint-saliency procedure recursively until the pixels of all images in I = {I_1, …, I_i, …, I_N} have been processed.
In the present embodiment, step S12 specifically comprises:
S121, clustering the N images of the group I = {I_1, …, I_i, …, I_N} into K_2 classes, obtaining K_2 clusters; each cluster C_k is represented by a D-dimensional vector μ_k, the associated cluster center, and a function ψ: L² → {1, …, K} associates each pixel with its cluster index;
S122, computing the contrast cue ω^c(k), spatial cue ω^s(k) and correspondence cue ω^d(k) of each class.
The contrast cue is computed as in step S112: the L_2 norm measures distance in feature space, n_i is the pixel count of cluster C_i, and N is the pixel count of all images.
The spatial cue is likewise computed as in step S112, with the Kronecker delta δ(·), the Gaussian kernel N(·) measuring the Euclidean distance between a pixel and the image center o_i, variance σ² the normalized image radius, and normalization coefficient n_k the pixel count of cluster C_k; the cluster-level spatial cue ω^s represents the global center bias across the images.
The correspondence cue is computed as follows. First, an M-bin histogram q^k describes the distribution of cluster C_k over the N images, where n_k is the pixel count of C_k and the histogram is normalized to sum to one. The correspondence cue then records the variance var(q^k) of the histogram q^k: a high correspondence cue corresponds to a cluster whose pixels are distributed evenly over every image.
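The correspondence-cue formula image is absent. The text only states that var(q^k) is recorded and that an even spread over the images should score high; mapping the variance through exp(−var) is one common way to realize that and is assumed in this sketch:

```python
import numpy as np

def correspondence_cue(pixel_image_ids, cluster_ids, K, n_images):
    """Correspondence cue per cluster: build the per-cluster histogram
    q_k over the N images, then score evenness of the spread.

    pixel_image_ids: (n,) index of the image each pixel came from
    cluster_ids:     (n,) cluster index of each pixel
    """
    w = np.zeros(K)
    for k in range(K):
        imgs = pixel_image_ids[cluster_ids == k]
        q = np.bincount(imgs, minlength=n_images).astype(float)
        q /= q.sum() + 1e-12          # normalise the histogram to sum to 1
        w[k] = np.exp(-q.var())       # even spread (low variance) -> high cue
    return w
```

A cluster split evenly between every image gets variance 0 and cue 1; a cluster confined to a single image is penalized, matching the description above.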
S123, executing the contrast-cue, spatial-cue and correspondence-cue computations in a loop until the pixel clusters of all images in I = {I_1, …, I_i, …, I_N} have been processed;
S124, combining the single-image saliency maps with the three cue values ω^c(k), ω^s(k) and ω^d(k) in one computation;
S125, smoothing the resulting final joint saliency map.
The above pixel-based smoothing of the final joint saliency map is executed recursively until the pixels of all images in I = {I_1, …, I_i, …, I_N} have been processed.
In the present embodiment, step S2 comprises:
S21, labeling the group of images having common saliency with an optimal scheme, and using a Markov random field to establish a joint segmentation energy equation for the labeled images;
S22, via the established energy equation, labeling the local feature pixels with common saliency in the single-image saliency maps as foreground and the remaining pixels as background.
S21 specifically comprises: modeling, with a Gaussian mixture model, the difference between the multi-image joint saliency map and the single-image saliency maps, and using it as the global constraint of the Markov random field model to construct the joint segmentation energy equation.
To label these images with an optimal scheme of common saliency, a joint segmentation energy equation is established by the Markov random field as the sum of the intra-image energy terms of the single images and the global energy between images:
E(S) = E_A(S) + E_i(S)
where E_A(S) is the intra-image energy term of the single images and E_i(S) is the global energy between images; E_i^C is the common-saliency energy term of image I_i, E_i^S is the smoothness energy in image I_i, and η is the weight of the common-saliency term; P(·) denotes a Gaussian probability distribution, v_k^i is the feature of pixel k (or j) of image I_i, s_k^i is the label of the k-th pixel of I_i, Θ_i^F and Θ_i^B denote the GMM (Gaussian Mixture Model) parameters of the foreground and background of image I_i, and Θ_com denotes the GMM parameters of the common target composed of the foregrounds of all images.
Step S22 specifically comprises: via the established energy equation, labeling the local feature pixels with common saliency in the saliency maps as foreground and the remaining pixels as background.
Via the energy equation, the local feature pixels with common saliency in an image are labeled foreground by the intra-image energy term. During labeling, the data term of the co-saliency model's intra-image energy extracts the saliency map of every leaf in the group of images with the unsupervised saliency detection algorithm, while the smoothness term encourages local regions with similar feature values to receive consistent labels, highlighting the foreground features of the saliency maps and making them more distinct; at the same time, the global energy defined by the Gaussian mixture models measures the difference between each image's foreground and the common target, forcing the foregrounds of all images to be consistent with the common target of the group.
When assigning labels, local regions with similar feature values on an image are encouraged to receive consistent labels; the smoothness energy term is:
where [f] is an indicator function whose value is 1 or 0 according to whether the prediction f is correct, v_k^i is the feature of pixel k (or j) of image I_i, N is the neighborhood system in the image, and β is a scale coefficient obtainable via β = 1/(2⟨‖v_k^i − v_j^i‖²⟩), with ⟨·⟩ the expectation over image I_i; when adjacent pixels of I_i receive different labels in the MRF model, E_i^S penalizes the discontinuity.
In the present embodiment, step S3 specifically comprises: optimizing the energy equation with the standard graph-cut algorithm. The global energy constructed from the image feature data measures the difference between each image's foreground and the common target. Using an iterative method, for each segmentation result the background of every image is clustered with K-means into the number K_b of Gaussian kernels of Θ_i^B, the center and variance of each class being taken as the expectation and variance of the corresponding Gaussian kernel; meanwhile, in the iterative process the foregrounds of all images, taken together as the common target, yield the K_c groups of GMM parameters of Θ_com by the same method. Finally, the image is matted along the edges.
Fig. 2 is a schematic flow diagram of the method applied to cotton background image segmentation. For the unsupervised co-saliency detection algorithm, feature vectors are represented using CIE Lab color (in the CIE Lab color space the L component represents pixel lightness, a represents the range from red to green, and b the range from yellow to blue) together with Gabor filters responding in 8 orientations, selecting one bandwidth and extracting one scale. Combining the texture features of 8 orientations (Gabor wavelet energies whose filter direction angle Θ takes eight orientations: 45, 90, 135, 180, 225, 270, 315 and 360 degrees), the Gabor filter responses of the image are computed. K-means is applied in a two-layer clustering: for single images the cluster number of the co-saliency detection algorithm is set to 6, and for multiple images the cluster number is set to min{3M, 20}, where M is the number of images. This yields a group of gray-scale maps with common saliency, as shown in Fig. 3, which shows respectively the images before processing (a, b, c, d) and the common saliency gray maps after processing (e, f, g, h).
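An 8-orientation Gabor bank of the kind the embodiment describes can be sketched as below. Only the orientation list (45° … 360°) and the single-scale/single-bandwidth choice come from the text; the concrete sigma, wavelength and kernel-size values here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gabor_kernel(theta_deg, sigma=2.0, wavelength=4.0, size=9):
    """One real-valued Gabor kernel at orientation theta (degrees):
    a Gaussian envelope modulated by a cosine carrier along the
    rotated x axis.  sigma/wavelength/size are illustrative guesses."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    t = np.deg2rad(theta_deg)
    xr = x * np.cos(t) + y * np.sin(t)    # rotate coordinates by theta
    yr = -x * np.sin(t) + y * np.cos(t)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# the eight orientations listed in the embodiment
BANK = [gabor_kernel(a) for a in (45, 90, 135, 180, 225, 270, 315, 360)]
```

Convolving the L channel with each kernel and taking the response energy gives the 8 texture features that are stacked with the Lab color components into the per-pixel feature vector.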
The joint segmentation labels the common target in the N images with the optimal scheme, obtained by optimizing the sum of the intra-image energy term of every image and the external energy term between images. Writing the binary labeling of the N images as S = {s_1, …, s_N}, the optimal solution S* is obtained by minimizing the energy equation E(S) = E_A(S) + E_i(S), yielding the optimal segmentation result.
Via the established energy equation, the local feature pixels with common saliency in the saliency maps are labeled foreground and the remaining pixels background;
the energy equation is then minimized with an iterative method to realize the segmentation of the entire set of images.
First, the color data set z_n of the images is modeled so that each image consists of pixels z_n in RGB color space.
Iterations are performed within the graph cut; this mode of iterative energy minimization replaces the earlier one-shot segmentation algorithms.
First, for each pixel z_n in the T_U region of the initial images, a Gaussian-mixture-model component is created for the pixel;
then the Gaussian mixture model coefficients are learned from the data set z;
the segmentation equation is then established and solved with a minimum cut;
the procedure then restarts from the first step until the labeling converges.
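The loop above can be sketched as follows. This is a minimal illustration under stated simplifications, not the patent's implementation: a single diagonal Gaussian per side stands in for the K-means-initialized mixtures (K_b, K_c components), and a per-pixel likelihood test stands in for the minimum cut over the MRF, purely to keep the sketch self-contained:

```python
import numpy as np

def learn_params(z, fg):
    """Step 2 of the loop: learn Gaussian coefficients for each side.
    One diagonal Gaussian per side for brevity (the patent uses GMMs)."""
    out = {}
    for name, mask in (("fg", fg), ("bg", ~fg)):
        zs = z[mask]
        out[name] = (zs.mean(axis=0), zs.var(axis=0) + 1e-6)
    return out

def neg_loglik(z, mu, var):
    # negative log-likelihood of each pixel under a diagonal Gaussian
    return 0.5 * (np.log(2 * np.pi * var) + (z - mu) ** 2 / var).sum(axis=1)

def iterate_segmentation(z, fg0, iters=10):
    """Iterative energy minimisation: learn the models, relabel each
    pixel to the lower-energy side, repeat until the labelling
    converges.  The patent performs the relabelling with a minimum cut
    (data term plus smoothness term); a plain per-pixel data-term test
    replaces the cut here."""
    fg = fg0.copy()
    for _ in range(iters):
        p = learn_params(z, fg)
        new_fg = neg_loglik(z, *p["fg"]) < neg_loglik(z, *p["bg"])
        if np.array_equal(new_fg, fg):   # "until the set converges"
            break
        fg = new_fg
    return fg
```

Even from a rough initial labeling, alternating model fitting and relabeling pulls the foreground/background split toward the two color modes, which is the effect the iterative graph-cut scheme exploits.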
Finally the image is matted along the edges, giving the result shown in Fig. 4, a schematic of the segmentation results of a group of original images (the figure shows the gray-scale images of the originals and of the segmentation results).
In sum, the present invention first generates, with the unsupervised co-saliency detection algorithm, a common saliency map for every image in a group; these saliency maps are used to construct the intra-image energy function in the Markov random field. A Gaussian mixture model models the difference between the common target of the whole group's saliency maps and the single-image leaves, and serves as a new global constraint of the Markov random field optimization model to construct the global energy; finally a standard graph-cut algorithm is applied iteratively so that the energy function is minimized, realizing the segmentation of the cotton seedling leaf images. The common foreground of multiple images can be segmented out in one pass, the segmentation adhesion of conventional methods during segmentation is effectively resolved for accurate segmentation, and the unsupervised saliency detection algorithm automates the segmentation process.
Finally, the method of the present application is only a preferred embodiment and is not intended to limit the scope of the present invention. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (9)
1. A background image segmentation method, characterized by comprising the following steps:
S1, processing the original images to obtain single-image saliency maps and a multi-image joint saliency map;
S2, taking the salient features in the multi-image joint saliency map as the common saliency of each single image, and distinguishing foreground from background;
S3, segmenting the background image.
2. The background image segmentation method according to claim 1, characterized in that step S1 specifically comprises: processing the images with an unsupervised co-saliency algorithm to obtain the single-image saliency map of every image and the multi-image joint saliency map.
3. The background image segmentation method according to claim 1, characterized in that step S2 comprises:
S21, labeling the group of images having common saliency with an optimal scheme, and using a Markov random field to establish a joint segmentation energy equation for the labeled images;
S22, via the established energy equation, labeling the local feature pixels with common saliency in the single-image saliency maps as foreground and the remaining pixels as background.
4. The background image segmentation method according to claim 3, characterized in that step S21 specifically comprises: modeling, with a Gaussian mixture model, the difference between the multi-image joint saliency map and the single-image saliency maps, and using it as the global constraint of the Markov random field model to construct the joint segmentation energy equation.
5. The background image segmentation method according to claim 4, characterized in that, in step S21, for images I = {I_1, …, I_i, …, I_N}, the energy equation of the joint segmentation is:
E(S) = E_A(S) + E_i(S)
where E_A(S) is the intra-image energy term of the single images and E_i(S) is the global energy between images; E_i^C is the common-saliency energy term of image I_i, E_i^S is the smoothness energy in image I_i, and η is the weight of the common-saliency term E_i^C; P(·) denotes a Gaussian probability distribution, v_k^i is the feature of pixel k (or j) of image I_i, s_k^i is the label of the k-th pixel of I_i, Θ_i^F and Θ_i^B denote the Gaussian-mixture-model parameters of the foreground and background of image I_i, and Θ_com denotes the Gaussian-mixture-model parameters of the common target composed of the foregrounds of all images.
6. The background image segmentation method according to claim 5, characterized in that step S22 specifically comprises: via the energy equation, labeling the local feature pixels with common saliency in an image as foreground using the intra-image energy term; during labeling, the data term of the intra-image energy extracts the saliency map of every leaf in the group of images with the unsupervised saliency detection algorithm, while the smoothness term encourages local regions with similar feature values to receive consistent labels; at the same time, the global energy measures the difference between each image's foreground and the common target, forcing the foregrounds of all images to be consistent with the common target of the group.
7. The background image segmentation method according to claim 6, characterized in that, in step S22, when labels are assigned to foreground and background, local regions with similar feature values on an image are encouraged to receive consistent labels; the smoothness energy is:
E_i^S = Σ_{(k,j)∈N} [s_k^i ≠ s_j^i] · exp(−β‖z_k^i − z_j^i‖²)
where [·] is the indicator function, whose value is 1 or 0 according to whether its predicate holds; z_k^i and z_j^i are the features of pixels k and j in image I_i; N is the neighborhood system of the image; β is a scale coefficient, obtainable through β = (2⟨‖z_k^i − z_j^i‖²⟩)^(−1), where ⟨·⟩ denotes the expectation over image I_i. When adjacent pixels in image I_i are assigned different labels in the Markov random field model, E_i^S penalizes this discontinuity.
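The smoothness energy of claim 7 can be evaluated directly on 4-neighbour pixel pairs. This numpy sketch (illustrative names; features assumed to be an (H, W, D) array) estimates β from the mean squared neighbour difference, as the claim describes:

```python
import numpy as np

def smoothness_energy(feat, labels):
    """Contrast-sensitive smoothness term of claim 7's form:
    sum over 4-neighbour pairs of [s_k != s_j] * exp(-beta * ||z_k - z_j||^2),
    with beta = 1 / (2 * <||z_k - z_j||^2>) estimated from the image itself."""
    # squared feature differences over horizontal and vertical neighbour pairs
    dh = np.sum((feat[:, 1:] - feat[:, :-1]) ** 2, axis=-1)
    dv = np.sum((feat[1:, :] - feat[:-1, :]) ** 2, axis=-1)
    beta = 1.0 / (2.0 * np.mean(np.concatenate([dh.ravel(), dv.ravel()])) + 1e-8)
    # indicator [s_k != s_j]: only severed (differently labeled) pairs pay
    cut_h = labels[:, 1:] != labels[:, :-1]
    cut_v = labels[1:, :] != labels[:-1, :]
    return np.sum(np.exp(-beta * dh)[cut_h]) + np.sum(np.exp(-beta * dv)[cut_v])

# toy image: a feature edge between columns 1 and 2
feat = np.zeros((2, 3, 1)); feat[:, 2, 0] = 1.0
on_edge = np.array([[0, 0, 1], [0, 0, 1]])    # boundary follows the edge
off_edge = np.array([[0, 1, 1], [0, 1, 1]])   # boundary cuts a flat region
e_edge = smoothness_energy(feat, on_edge)
e_flat = smoothness_energy(feat, off_edge)
```

Cutting along a strong feature edge is cheap because exp(−β‖Δz‖²) is small there, while cutting through a flat region costs nearly 1 per severed pair, which is exactly the label-consistency pressure the claim describes.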
8. The background image segmentation method according to claim 1, characterized in that, in step S3, the energy function is minimized by the standard graph-cut algorithm together with iteration, thereby achieving segmentation of the entire set of images.
9. The background image segmentation method according to claim 8, characterized in that step S3 specifically comprises:
S31, modeling the color data set of the images, so that each image is composed of pixels in the RGB color space;
S32, performing iterations of graph cuts, and segmenting the background image by way of iterative energy minimization.
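Claim 9's iteration can be sketched as a GrabCut-flavoured loop: refit the foreground/background colour models from the current labels, relabel pixels to lower the energy, and repeat until nothing changes. Exact graph cut is replaced here by a per-pixel ICM-style update and the GMMs by single Gaussians, so this is a sketch of the control flow only; all names are illustrative:

```python
import numpy as np

def iterative_segmentation(img, init_mask, iters=5):
    """GrabCut-flavoured loop: alternately (1) refit foreground/background
    colour models from the current labels and (2) relabel each pixel to the
    model that explains it better. Single Gaussian means and a per-pixel
    decision stand in for the patent's GMMs and exact graph cut."""
    labels = init_mask.astype(bool)
    for _ in range(iters):
        fg, bg = img[labels], img[~labels]
        if len(fg) == 0 or len(bg) == 0:
            break  # degenerate labeling: one model has no support
        mf, mb = fg.mean(axis=0), bg.mean(axis=0)
        # relabel each pixel to the closer colour model (ICM-style,
        # no pairwise term in this simplified sketch)
        d_fg = np.sum((img - mf) ** 2, axis=-1)
        d_bg = np.sum((img - mb) ** 2, axis=-1)
        new = d_fg < d_bg
        if np.array_equal(new, labels):
            break  # converged: the energy can no longer decrease
        labels = new
    return labels

# toy RGB image with a bright top-left quadrant; seed a single foreground pixel
img = np.zeros((4, 4, 3)); img[:2, :2] = 1.0
seed = np.zeros((4, 4), dtype=bool); seed[0, 0] = True
seg = iterative_segmentation(img, seed)
```

Even from a one-pixel seed, the alternation grows the foreground to the full bright quadrant, illustrating why the iteration rather than a single cut is claimed.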
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611116554.3A CN106780376A (en) | 2016-12-07 | 2016-12-07 | Background image segmentation method based on saliency detection and joint segmentation algorithm
Publications (1)
Publication Number | Publication Date |
---|---|
CN106780376A (en) | 2017-05-31
Family
ID=58882231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611116554.3A Pending CN106780376A (en) | 2016-12-07 | 2016-12-07 | Background image segmentation method based on saliency detection and joint segmentation algorithm
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106780376A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013148485A2 (en) * | 2012-03-30 | 2013-10-03 | Clarient Diagnostic Services, Inc. | Detection of tissue regions in microscope slide images |
CN103810473A (en) * | 2014-01-23 | 2014-05-21 | 宁波大学 | Hidden Markov model based human body object target identification method |
Non-Patent Citations (6)
Title |
---|
CHANG K Y ET AL.: "From co-saliency to co-segmentation: an efficient and fully unsupervised energy minimization model", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition *
H. FU ET AL.: "Cluster-based co-saliency detection", IEEE Transactions on Image Processing *
YU H K ET AL.: "Unsupervised co-segmentation based on a new global GMM constraint in MRF", Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP) *
LIU Songtao et al.: "Image segmentation methods based on graph cuts and their recent advances", Acta Automatica Sinica *
ZANG Shunquan: "Survey of Markov random field image segmentation methods based on graph cut optimization", Video Engineering *
SHAO Haoyang et al.: "Co-segmentation of breast ultrasound images based on multi-domain priors", Acta Automatica Sinica *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229288A (en) * | 2017-06-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural network training and clothing color detection method, device, storage medium, and electronic device |
CN107481198A (en) * | 2017-07-27 | 2017-12-15 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and computer equipment |
CN107644429A (en) * | 2017-09-30 | 2018-01-30 | 华中科技大学 | Video segmentation method based on strong target-constrained video saliency |
CN107644429B (en) * | 2017-09-30 | 2020-05-19 | 华中科技大学 | Video segmentation method based on strong target-constrained video saliency |
CN108154488A (en) * | 2017-12-27 | 2018-06-12 | 西北工业大学 | Image motion blur removal method based on specific image-block analysis |
CN109448015A (en) * | 2018-10-30 | 2019-03-08 | 河北工业大学 | Image co-segmentation method based on saliency map fusion |
CN113190737A (en) * | 2021-05-06 | 2021-07-30 | 上海慧洲信息技术有限公司 | Website information acquisition system based on cloud platform |
CN113190737B (en) * | 2021-05-06 | 2024-04-16 | 上海慧洲信息技术有限公司 | Website information acquisition system based on cloud platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780376A (en) | Background image segmentation method based on saliency detection and joint segmentation algorithm | |
CN110363122B (en) | Cross-domain target detection method based on multi-layer feature alignment | |
CN104091321B (en) | Multi-level point-set feature extraction method applicable to terrestrial laser radar point cloud classification | |
CN104599275B (en) | The RGB-D scene understanding methods of imparametrization based on probability graph model | |
CN104537676B (en) | Gradual image segmentation method based on online learning | |
CN102651128B (en) | Image set partitioning method based on sampling | |
CN112750106B (en) | Nuclear staining cell counting method based on incomplete marker deep learning, computer equipment and storage medium | |
CN110929713B (en) | Steel seal character recognition method based on BP neural network | |
CN103886330A (en) | Classification method based on semi-supervised SVM ensemble learning | |
CN105005565B (en) | Live soles spoor decorative pattern image search method | |
CN106021406B (en) | Data-driven online iterative image annotation method | |
CN105488536A (en) | Agricultural pest image recognition method based on multi-feature deep learning technology | |
CN104966085A (en) | Remote sensing image region-of-interest detection method based on multi-significant-feature fusion | |
CN101526994B (en) | Fingerprint image segmentation method irrelevant to collecting device | |
CN102054170B (en) | Visual tracking method based on minimized upper bound error | |
CN103996018A (en) | Human-face identification method based on 4DLBP | |
CN109948625A (en) | Text image sharpness assessment method and system, computer readable storage medium | |
CN110807760A (en) | Tobacco leaf grading method and system | |
CN110097060A (en) | A kind of opener recognition methods towards trunk image | |
CN103177266A (en) | Intelligent stock pest identification system | |
CN111860459A (en) | Gramineous plant leaf stomata index measuring method based on microscopic image | |
CN110853070A (en) | Underwater sea cucumber image segmentation method based on significance and Grabcut | |
CN104392459A (en) | Infrared image segmentation method based on improved FCM (fuzzy C-means) and mean drift | |
CN105426924A (en) | Scene classification method based on middle level features of images | |
CN106228136A (en) | Panorama streetscape method for secret protection based on converging channels feature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170531 |