CN103425981A - Cluster-based block mass extraction device and method - Google Patents

Cluster-based block mass extraction device and method

Info

Publication number
CN103425981A
CN103425981A
Authority
CN
China
Prior art keywords
blob
image
pixel
module
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101596376A
Other languages
Chinese (zh)
Other versions
CN103425981B (en)
Inventor
张雪林
朱豪
吴贻刚
邓海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZTE Netview Technology Co Ltd
Original Assignee
Shenzhen ZTE Netview Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZTE Netview Technology Co Ltd filed Critical Shenzhen ZTE Netview Technology Co Ltd
Priority to CN201210159637.6A priority Critical patent/CN103425981B/en
Publication of CN103425981A publication Critical patent/CN103425981A/en
Application granted granted Critical
Publication of CN103425981B publication Critical patent/CN103425981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a cluster-based blob extraction device and method. Building on the existing image-pyramid blob extraction device and method, the image blob extraction module and the image blob merging module are improved by introducing models into both. In the blob extraction module, a model is built over the pixels of the down-sampling template, and pixel gray values and pixel blob numbers are computed from a probabilistic standpoint. In the blob merging module, a model is built for each blob, so that merging blobs reduces to judging the similarity among several models. This simplifies the logical decisions, classifies the data according to its distribution, guarantees the accuracy of blob merging, and streamlines the merging process.

Description

A cluster-based blob extraction device and method
Technical field
The present invention relates to the field of image processing, and in particular to a cluster-based blob extraction device and method.
Background technology
The blob feature of an image groups pixels with similar values into one blob according to connectivity during image processing. This method mimics the human visual system: after the brain receives an image, it collapses pixels of similar color into a single color, so the image becomes a few regions of distinct colors, and a person then attends to whichever color region is of interest.
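To make the connectivity idea concrete, here is a minimal sketch (not part of the patent) that groups 4-connected pixels of similar gray value into numbered blobs; the function name and the tolerance parameter are illustrative only:

```python
from collections import deque

def label_blobs(img, tol):
    """Group 4-connected pixels whose gray values differ from the
    blob's seed value by at most `tol` into numbered blobs."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 1
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue
            seed = img[sy][sx]          # gray value that defines this blob
            labels[sy][sx] = next_label
            q = deque([(sy, sx)])
            while q:                    # breadth-first flood fill
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not labels[ny][nx] \
                            and abs(img[ny][nx] - seed) <= tol:
                        labels[ny][nx] = next_label
                        q.append((ny, nx))
            next_label += 1
    return labels
```

On a tiny two-gray image, pixels near gray 10 and pixels near gray 200 end up in two different blobs, which is exactly the "regions of distinct colors" behavior described above.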
At present, using blob features in target-tracking algorithms can effectively improve tracking accuracy, but it places two requirements on the blob extraction method: algorithmic simplicity and real-time processing. Simplicity means the algorithm should be easy to use; an overly complex algorithm increases the complexity of target tracking and hinders its application. Real-time processing means the algorithm must compute quickly enough to handle a video stream, even at the cost of some accuracy if necessary.
Many blob extraction methods exist. They generally first preprocess the image to remove noise, then classify or cluster the image by color. The resulting classes can be numerous and disordered, so they must be merged further to keep only the important ones, which are finally described by shapes such as rectangles or ellipses.
At present, the image-pyramid-based blob extraction method is a fairly typical one: its computation is relatively simple and its results are satisfactory. As shown in Fig. 1, the corresponding blob extraction device comprises the following modules: an image preprocessing module, an image blob extraction module, an image blob merging module, and an image blob fitting module. Its blob extraction steps comprise:
In the first step, the image preprocessing module preprocesses the image, mainly suppressing noise and smoothing, so that blob extraction is less affected by noise.
In the second step, the image blob extraction module numbers each pixel of the image, where the number identifies the blob the pixel belongs to. The detailed steps of this module are as follows:
1. Compute the memory footprint of the image's hierarchical model and allocate sufficient memory; store the original image as layer 0 of the hierarchical model, and initialize the confidence of every pixel of layer 0 to 1. The hierarchical model is a stack of images: as shown in Fig. 2, each upper layer's width and height are half those of the layer below, the input image sits at layer 0, and every other layer of the hierarchical model is computed from it;
2. Down-sample layer n to generate layer n+1. The value of each pixel of layer n+1 is obtained by the hierarchical-model pixel computation method, and its confidence is set. During down-sampling, layer n+1 has a quarter of the pixels of layer n, i.e. every four pixels are represented by one pixel.
The hierarchical-model pixel computation uses formula (1-1), i.e. the gray value of the pixel at coordinates (i, j) is obtained from formula (1-1). As shown in Fig. 3, the coordinate mapping between an upper-layer pixel and its lower-layer pixels uses a 12-neighborhood sampling template; the set of 12 lower-layer pixels selected by the template is denoted R. The gray mean of the l-th pixel of layer n is computed as:
$$\mathrm{gray}_n^l=\frac{1}{\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k}\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k\,\mathrm{gray}_{n-1}^k \qquad (1\text{-}1)$$
where ω_k is a weight expressing the importance of each of the 12 samples of the template and can be set differently per application; t_k is the rejection factor (when it is 1, the pixel participates in the computation); and r denotes the pixel confidence, a binary array the size of the hierarchical model: when a layer n pixel's confidence is 1, its gray value can represent the pixels of its point set R; when it is 0, the pixel is considered unable to replace the pixels of its lower-layer point set R.
At initialization all t_k are set to 1 and gray_n^l is computed; the point set R is then traversed, comparing each gray_{n−1}^k with gray_n^l. If the Euclidean distance between the two gray values exceeds the preset threshold Dis, the corresponding t_k is set to 0 and formula (1-1) is evaluated again; it is then judged whether further rejection is needed, and if so the preceding step continues, otherwise the iteration ends. At the same time the sum of the retained t_k is judged: if it exceeds a preset threshold Ran, the confidence of this pixel is set to 1, otherwise it is set to 0.
3. If layer n+1 is the top layer of the pyramid, go to step 4; otherwise add 1 to n and go to step 2;
4. Initialize the blob number of the top-layer pixel of the hierarchical model to 0, where a blob number identifies the blob a pixel belongs to; blob 0 is the set of discarded pixels, which take no part in subsequent computation;
5. Compute the blob number of each pixel of layer n from the coordinate relation between layer n and layer n+1, using the pixel blob-numbering computation method, and save the numbering result;
The pixel blob-numbering computation is determined by the similarity between the two layers' pixels. First find the set S of upper-layer pixels corresponding to the lower-layer pixel and collect the blob numbers of all points in S. Compute the Euclidean distance between each of those blobs' pixel values and the lower-layer pixel's value, pick the smallest, and compare it with the preset threshold Dis: if it is below the threshold, the pixel belongs to that blob; otherwise the pixel's confidence decides whether it is numbered 0 or starts a new blob: confidence 0 gives blob number 0, while otherwise the pixel belongs to a new blob and the total blob count increases by one. Here a blob's pixel value is the gray value of the first pixel labeled with that blob; each blob has exactly one pixel value, assigned only when a new blob number is created and never changed afterwards. Likewise, class 0, being no real class, has no blob pixel value.
6. If n is 0, go to step 7; otherwise subtract 1 from n and go to step 5;
7. Output the blob numbering result of the original image.
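The hierarchical-model pixel computation of step 2, a weighted mean over the point set R that repeatedly discards outliers until it stabilizes, might be sketched as follows. This is a minimal illustration, not the patent's code; the exact form of the Ran test is garbled in the source, so the sketch assumes it thresholds the count of retained samples:

```python
def pyramid_pixel(grays, weights, r, dis, ran):
    """One application of formula (1-1): iteratively trimmed weighted mean
    of the lower-layer grays in the point set R.  `r` holds the 0/1
    confidences of the lower-layer pixels; `dis` and `ran` stand for the
    thresholds Dis and Ran.  Assumes at least one sample is retained."""
    t = [1] * len(grays)                       # rejection factors t_k
    while True:
        num = sum(w * tk * rk * g for w, tk, rk, g in zip(weights, t, r, grays))
        den = sum(w * tk * rk for w, tk, rk in zip(weights, t, r))
        mean = num / den                       # formula (1-1)
        rejected = False
        for k, g in enumerate(grays):
            if t[k] and abs(g - mean) > dis:   # too far from the mean
                t[k] = 0
                rejected = True
        if not rejected:                       # converged: no more rejection
            break
    conf = 1 if sum(t) > ran else 0            # assumed form of the Ran test
    return mean, conf
```

With one outlier among four samples, the outlier is rejected on the first pass and the mean settles on the remaining values.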
In the third step, the image blob merging module merges the output of the image blob extraction module: it builds a gray-distribution model for each blob and merges pairs of blobs according to the similarity of their gray-distribution models and their adjacency in the image.
In the fourth step, the image blob fitting module performs ellipse fitting on the output blobs and outputs the elliptical expressions.
However, this existing blob extraction method still has the following problems:
1. It has too many parameters, and the parameters do not generalize: different images generally need different settings, and during operation the parameters must also be adapted to optimize the result;
2. Although the blob merging method in this blob extraction scheme is quite accurate when the image situation is complex, it weighs many factors and its computation is comparatively involved, and for simple binary images its blob extraction results show a large deviation.
Summary of the invention
To solve the above problems in the prior art, the object of the present invention is to provide a cluster-based blob extraction device and method.
To achieve this object, the present invention is realized by the following technical solutions:
A cluster-based blob extraction device comprises an image preprocessing module, an image blob extraction module, an image blob merging module and an image blob fitting module, wherein:
the image preprocessing module preprocesses the acquired image;
the image blob extraction module, during the hierarchical-model pixel computation, processes the pixels of the current image with an adaptive gray model built over the gray values of the N pixels on a preset sampling template: a pixel that fits the model is retained, otherwise it is rejected; once the values converge, the gray value of the upper-layer pixel is computed and the adaptive gray-model parameters of that upper-layer pixel are saved; the module then derives the blob number of every pixel of the bottom layer of the hierarchical model: after obtaining the mapping from a lower-layer pixel to its upper-layer pixels, it determines the blob number of the lower-layer pixel from the probabilistic relation between the lower-layer pixel's gray value and the adaptive gray model of the upper-layer pixel;
the image blob merging module merges the output of the image blob extraction module, by building a gray-distribution model for each blob and merging pairs of blobs according to the similarity of their gray-distribution models and their adjacency in the image;
and the image blob fitting module fits the output of the image blob merging module and outputs the result.
In a preferred embodiment, the image blob extraction module further performs the following in the pixel blob-numbering computation:
obtain, from the down-sampling, the set S of all upper-layer pixels corresponding to the lower-layer pixel;
find the blob owning the point of S nearest in gray-value Euclidean distance to the lower-layer pixel, and decide from that blob's pixel value whether the lower-layer pixel belongs to it: if the lower-layer pixel's gray value fits the adaptive gray model, the pixel belongs to that blob; otherwise assess the pixel's confidence and number it accordingly: if the confidence is 1, the pixel starts a new blob, which receives a new blob number and whose blob pixel value and adaptive gray-model parameters are saved; if the confidence is 0, the pixel is deemed unclassifiable and its blob number is set to 0.
In a preferred embodiment, the image blob merging module merges the output of the image blob extraction module in the following steps:
build a blob match model for each blob from the output of the image blob extraction module;
take a pair of adjacent blobs and compare their blob match models: if the models are not close, take another pair of adjacent blobs and continue the comparison; if the models are close, merge the two blobs, update the match-model parameters of the merged blob, then search for another pair of adjacent blobs and compare them using the updated models; finish when all adjacent pairs have been examined and no further merge is possible.
In a preferred embodiment, the image blob extraction module computes the gray value of an upper-layer pixel by the following expression:
$$\mathrm{gray}_n^l=\frac{1}{\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k}\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k\,\mathrm{gray}_{n-1}^k;$$
where ω_k is a weight, t_k is the rejection factor, and r denotes the pixel confidence.
In a preferred embodiment, the image blob fitting module performs ellipse fitting on the output of the image blob merging module.
A cluster-based blob extraction method, whose cluster-based blob extraction device comprises an image preprocessing module, an image blob extraction module, an image blob merging module and an image blob fitting module, the method comprising:
the image preprocessing module preprocesses the acquired image;
during the hierarchical-model pixel computation, the image blob extraction module processes the pixels of the current image with an adaptive gray model built over the gray values of the N pixels on a preset sampling template: a pixel that fits the model is retained, otherwise it is rejected; once the values converge, the gray value of the upper-layer pixel is computed and the adaptive gray-model parameters of that upper-layer pixel are saved; the blob number of every pixel of the bottom layer of the hierarchical model is then derived: after obtaining the mapping from a lower-layer pixel to its upper-layer pixels, the blob number of the lower-layer pixel is determined from the probabilistic relation between its gray value and the adaptive gray model of the upper-layer pixel;
the image blob merging module merges the output of the image blob extraction module, by building a gray-distribution model for each blob and merging pairs of blobs according to the similarity of their gray-distribution models and their adjacency in the image;
the image blob fitting module fits the output of the image blob merging module and outputs the result.
In a preferred embodiment, the image blob extraction module further performs the following in the pixel blob-numbering computation:
obtain, from the down-sampling, the set S of all upper-layer pixels corresponding to the lower-layer pixel;
find the blob owning the point of S nearest in gray-value Euclidean distance to the lower-layer pixel, and decide from that blob's pixel value whether the lower-layer pixel belongs to it: if the lower-layer pixel's gray value fits the adaptive gray model, the pixel belongs to that blob; otherwise assess the pixel's confidence and number it accordingly: if the confidence is 1, the pixel starts a new blob, which receives a new blob number and whose blob pixel value and adaptive gray-model parameters are saved; if the confidence is 0, the pixel is deemed unclassifiable and its blob number is set to 0.
In a preferred embodiment, the image blob merging module merges the output of the image blob extraction module in the following steps:
build a blob match model for each blob from the output of the image blob extraction module;
take a pair of adjacent blobs and compare their blob match models: if the models are not close, take another pair of adjacent blobs and continue the comparison; if the models are close, merge the two blobs, update the match-model parameters of the merged blob, then search for another pair of adjacent blobs and compare them using the updated models; finish when all adjacent pairs have been examined and no further merge is possible.
In a preferred embodiment, the image blob extraction module computes the gray value of an upper-layer pixel by the following expression:
$$\mathrm{gray}_n^l=\frac{1}{\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k}\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k\,\mathrm{gray}_{n-1}^k;$$
where ω_k is a weight, t_k is the rejection factor, and r denotes the pixel confidence.
In a preferred embodiment, the image blob fitting module performs ellipse fitting on the output of the image blob merging module.
Compared with the prior art, the present invention has the following beneficial effects:
1. Blob merging is faster, guaranteeing real-time performance;
2. Models are built on the image data, and model parameters replace directly meaningful parameters, making the blob extraction method more general;
3. With fewer parameters, the whole blob extraction method is easier to configure, and ordinary users find it easier to use.
Brief description of the drawings
The realization of the object of the invention, its functional characteristics and advantageous effects are further described below with reference to specific embodiments and the accompanying drawings, in which:
Fig. 1 is a structural diagram of the existing blob extraction device;
Fig. 2 is a schematic diagram of the hierarchical model;
Fig. 3 is a schematic diagram of the 12-neighborhood sampling template;
Fig. 4 is a schematic diagram of the mapping from lower-layer pixels to upper-layer pixels obtained with the 12-neighborhood sampling template.
Embodiments
The technical solution of the present invention is described in further detail below with reference to the drawings and specific embodiments, so that those skilled in the art can better understand and implement the invention; the illustrated embodiments do not limit the invention.
The blob extraction method provided by the invention keeps the structure of the existing image-pyramid-based blob extraction device while improving the image blob extraction module and the image blob merging module, introducing models into both. In the image blob extraction module, a model is built over the pixels of the down-sampling template, and pixel gray values and pixel blob numbers are computed from a probabilistic standpoint. In the image blob merging module, a model is built for each blob and merging is recast as judging the similarity between several models, which simplifies the logic, classifies the data by its distribution, guarantees merging accuracy, and streamlines the merging process.
The blob extraction method provided by the invention mainly comprises the following steps:
In the first step, the image preprocessing module handles image preprocessing. The preprocessing may include suppressing noise and smoothing, so that blob extraction is less affected by noise. Other processing may also be included in this step; no restriction is placed on the preprocessing here.
In the second step, the image blob extraction module computes the blob number of each pixel of the image, where the blob number identifies the blob the pixel belongs to. Its core methods are the hierarchical-model pixel computation and the pixel blob-numbering computation. A probabilistic model is introduced into this module, which simplifies the computation while preserving the result.
During down-sampling, one upper-layer pixel is computed from N lower-layer pixels (N is determined by the sampling template size) by the hierarchical-model pixel computation: a model is built over the gray values of the N template pixels, pixels that fit the model are retained and the rest rejected; the model is then updated with formula (1-2), pixels are again retained or rejected, and this process repeats. When the values finally converge, formula (1-2) yields the gray value of the upper-layer pixel, and the model parameters of that pixel are saved for the later blob-numbering computation.
$$\mathrm{gray}_n^l=\frac{1}{\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k}\sum_{k\in R}\omega_k\, t_{n-1}^k\, r_{n-1}^k\,\mathrm{gray}_{n-1}^k \qquad (1\text{-}2)$$
where ω_k is a weight, t_k is the rejection factor, and r denotes the pixel confidence.
After the hierarchical model is computed, the subsequent pixel blob-numbering computation assumes the blob number of the top-layer pixel is 0, so the blob number of every lower-layer pixel can be determined from the blob numbers of the upper-layer pixels. First obtain, from the down-sampling template, the set S of all upper-layer pixels corresponding to the lower-layer pixel. Then find the blob owning the point of S nearest in gray-value Euclidean distance to the lower-layer pixel, and decide from the blob pixel value whether the lower-layer pixel belongs to it: if the lower-layer pixel's gray value fits the model of the blob pixel value, it belongs to that blob; otherwise assess the pixel's confidence and number it accordingly: if the confidence is 1, the pixel starts a new blob, the blob total increases by one, and the blob pixel value and model parameters are saved; if the confidence is 0, the pixel is deemed unclassifiable and its blob number is set to 0.
In the third step, the blob merging module merges the output of the image blob extraction module, combining small blobs, or blobs whose gray models are close yet were split into two classes. Introducing models here again avoids handling complex, scattered heaps of data: it suffices to build a model for each blob and merge according to the data distributions, which is both simple and effective; for tracking applications with modest demands on blob edges, model-based merging is a good approach. The concrete steps are:
first, build a model for each blob from the blob's data;
then find a pair of adjacent blobs and compare their models: if the two blob models are not close, pick another adjacent pair and continue the comparison; if the gray-distribution models of the two blobs are close, merge them, update the merged blob's model parameters, and search again for an adjacent pair. Finish when all adjacent blobs have been searched and no merge remains possible.
In the fourth step, the image blob fitting module fits the output blobs and outputs the fitted expressions.
To better explain the spirit of the invention, a specific embodiment is described in more detail below.
With continued reference to Fig. 1, the blob extraction device provided by this embodiment of the invention comprises an image preprocessing module, an image blob extraction module, an image blob merging module and an image blob fitting module, and its blob extraction steps comprise:
Step 1: preprocess the image, for example applying Gaussian filtering in the preprocessing stage to remove noise points and the like.
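The patent does not prescribe a particular filter for this stage; as a hedged illustration, the following sketch applies a 3x3 Gaussian blur built from the separable 1-2-1 kernel, with borders clamped to the nearest pixel:

```python
def gaussian3x3(img):
    """3x3 Gaussian smoothing with the outer product of the 1-2-1 kernel
    (total weight 16); edge pixels are clamped rather than skipped."""
    h, w = len(img), len(img[0])
    k = [1, 2, 1]                       # separable binomial kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)   # clamp to image border
                    xx = min(max(x + dx, 0), w - 1)
                    acc += k[dy + 1] * k[dx + 1] * img[yy][xx]
            out[y][x] = acc / 16.0
    return out
```

A constant image passes through unchanged, since the kernel weights sum to 16.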
Step 2: compute the memory size of the hierarchical model from the size of the input image and allocate sufficient storage, then copy the initial image into layer 0 of the hierarchical model. Suppose the input image's width and height, i.e. the size of layer 0, are W0 and H0. Then layer 1 is [W0/2+1] × [H0/2+1], where [ ] denotes rounding down; layer 2 is [W1/2+1] × [H1/2+1]; and so on, until the last layer holds a single value.
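The layer sizes can be computed mechanically from that recurrence. Taken literally, [W/2+1] stops shrinking once the size reaches 2, so this sketch (an illustration, with a halting guard added as an assumption) simply stops when the size no longer decreases:

```python
def pyramid_sizes(w0, h0):
    """Sizes of the hierarchical-model layers: each layer above is
    [W/2+1] x [H/2+1] of the one below ([] = round down)."""
    sizes = [(w0, h0)]
    while True:
        w, h = sizes[-1]
        nxt = (w // 2 + 1, h // 2 + 1)
        if nxt == sizes[-1]:       # guard: recurrence no longer shrinks
            return sizes
        sizes.append(nxt)
```

For an 8x8 input this gives layers 8x8, 5x5, 3x3 and 2x2, whose total cell count bounds the memory to allocate.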
Fill the hierarchical model by down-sampling. With reference to Fig. 3, set the weight of each of the 12 elements of the down-sampling template to 1/12 at initialization, build a Gaussian model over the data, and compute the hierarchical model according to formula (1-2). The hierarchical-model pixel computation is as follows: for the l-th pixel of layer n, its pixel value gray_n^l is computed by:
1. Initialize t_k to 1 for the 12 pixels of layer n−1; r is known;
2. Build a Gaussian model over the pixels with t_k equal to 1: compute the Gaussian center gray_n^l according to formula (1-2), and from gray_n^l and the pixel values of these pixels compute the standard deviation σ_n^l of the Gaussian;
3. Examine the values t_k (k = 1, 2, …, 12). When t_k is 0, add 1 to k; if k reaches 12, go to step 5, otherwise repeat step 3. When t_k is 1, go to step 4;
4. Compute the Euclidean distance between gray_{n−1}^k and gray_n^l; if this distance is less than β·σ_n^l (where β is an input parameter with range 0 to 4), add 1 to k and go to step 3; otherwise reject this pixel, i.e. set t_k = 0, and go to step 2;
5. Save the final β·σ_n^l (denoted, say, Dir) for use in the blob numbering. Then sum the t_k: if the sum is below a preset threshold, set r_n^l to 0, otherwise set it to 1, and go to step 6;
6. End.
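Steps 1 to 6 above can be sketched as an iterative Gaussian fit with β·σ rejection. This is an illustrative sketch, not the patent's code; where the source is garbled, it assumes the saved Dir threshold is β·σ and the confidence test counts retained samples:

```python
import math

def gauss_pixel(grays, weights, r, beta, tmin):
    """Fit a Gaussian to the template grays, rejecting points farther
    than beta*sigma from the center, until stable.
    Returns (center, dir_threshold, confidence)."""
    t = [1] * len(grays)
    while True:
        # Gaussian center via formula (1-2): weighted mean of retained points
        num = sum(w * tk * rk * g for w, tk, rk, g in zip(weights, t, r, grays))
        den = sum(w * tk * rk for w, tk, rk in zip(weights, t, r))
        mu = num / den
        kept = [g for g, tk, rk in zip(grays, t, r) if tk and rk]
        sigma = math.sqrt(sum((g - mu) ** 2 for g in kept) / len(kept))
        changed = False
        for k, g in enumerate(grays):
            if t[k] and sigma > 0 and abs(g - mu) >= beta * sigma:
                t[k] = 0               # step 4: reject and refit
                changed = True
        if not changed:
            break
    conf = 1 if sum(t) >= tmin else 0  # step 5: assumed retained-count test
    return mu, beta * sigma, conf      # beta*sigma saved as Dir
```

With four samples at gray 10 and one at 50, the outlier sits exactly at 2σ of the first fit, is rejected, and the center converges to 10.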
Alongside this, the pixel blob-numbering computation, for the blob number of the s-th pixel of layer n−1, proceeds as follows:
1. With reference to Fig. 4, determine the set S of layer-n pixels corresponding to the pixel gray_{n−1}^s;
2. Compute the Euclidean distance between the pixel's value and the blob pixel value of each blob appearing in S (blobs numbered 0 require no distance computation). Then compare the smallest distance dmin with the saved threshold Dir: if dmin < Dir, the pixel's blob number is that of the blob achieving dmin; go to step 5, otherwise go to step 3;
3. If the pixel's confidence is 0, its blob number is 0; go to step 5. If the confidence is not 0, go to step 4;
4, total agglomerate numbering adds 1, and the agglomerate pixel value of this newly-increased agglomerate is just picture point Pixel value, picture point
Figure BDA000016683172001012
The agglomerate numbering agglomerate numbering for newly increasing just, go to step 5;
5, finish.
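A minimal sketch of the numbering decision above; the `(blob_number, blob_gray)` pair representation and the function name are hypothetical:

```python
def assign_blob_number(gray, conf, upper_blobs, dir_threshold, next_number):
    """Assign a blob number to a lower-layer pixel.

    upper_blobs: (blob_number, blob_gray) pairs for the blobs of the
    upper-layer pixel set S; blob number 0 means "unclassified".
    Returns (blob_number, updated_next_number, new_blob_gray or None).
    """
    # distances to every numbered blob (number 0 is skipped)
    candidates = [(abs(gray - bg), num) for num, bg in upper_blobs if num != 0]
    if candidates:
        dmin, num = min(candidates)
        if dmin < dir_threshold:
            return num, next_number, None      # join the nearest blob
    if conf == 0:
        return 0, next_number, None            # unclassifiable pixel
    next_number += 1                           # open a new blob
    return next_number, next_number, gray
```

A pixel close to an existing blob inherits its number; a confident pixel far from all blobs opens a new one; a zero-confidence pixel gets number 0.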
Step 3: the blob merging module is responsible for merging the output of the image blob extraction module, combining smaller blobs, or blobs that are close to each other but were split into two classes. The computation steps of this module are as follows:
1. Traverse the blob results and build a Gaussian model for each blob, computing each blob's gray mean, gray variance and coordinate center. At the same time, judge whether each pair of blobs has intersection points, and how many, and save the result in an intersection array;
2. Traverse the intersection array, find the two blobs whose gray means are closest, and judge whether the Euclidean distance between their gray means is less than β·σ, where σ is the larger of the two blobs' standard deviations. If the distance is greater than β·σ, do not merge, remove this pair from the intersection array and go to step 2; if it is less than β·σ, merge and go to step 3. When the intersection array has been fully traversed, go to step 4;
3. If either of the two blobs has already been merged more than twice, cancel the current merge and go directly to step 2. Otherwise merge the two blobs into one, change the blob numbers, and update each entry of the intersection array; go to step 2;
4. Finish merging, recompute the blob results and output them.
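The merging steps can be sketched as a greedy loop over the intersection array; the dictionary-based blob representation and the per-blob merge counter are assumptions made for illustration:

```python
import math

def merge_blobs(blobs, pairs, beta=2.0):
    """Greedy blob merging by Gaussian-model similarity.

    blobs: dict blob_number -> {'mean', 'var', 'pixels' (gray values)}.
    pairs: set of frozensets {a, b} of adjacent (intersecting) blob numbers.
    Blobs already merged more than twice are skipped, per step 3.
    """
    merges = {n: 0 for n in blobs}
    pairs = set(pairs)
    while pairs:
        # closest remaining pair by gray-mean distance
        pair = min(pairs, key=lambda p: abs(blobs[min(p)]['mean'] - blobs[max(p)]['mean']))
        a, b = sorted(pair)
        dist = abs(blobs[a]['mean'] - blobs[b]['mean'])
        sigma = max(math.sqrt(blobs[a]['var']), math.sqrt(blobs[b]['var']))
        if dist > beta * sigma or merges[a] > 2 or merges[b] > 2:
            pairs.discard(pair)          # too far apart, or over-merged
            continue
        # merge b into a and refresh a's Gaussian model
        px = blobs[a]['pixels'] + blobs[b]['pixels']
        mean = sum(px) / len(px)
        blobs[a] = {'mean': mean,
                    'var': sum((g - mean) ** 2 for g in px) / len(px),
                    'pixels': px}
        merges[a] += merges.pop(b) + 1
        del blobs[b]
        # renumber b -> a in the intersection array and drop degenerate pairs
        pairs = {frozenset(a if n == b else n for n in p) for p in pairs if p != pair}
        pairs = {p for p in pairs if len(p) == 2}
    return blobs
```

Two overlapping blobs whose gray means differ by less than β·σ collapse into one blob with an updated mean and variance.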
Step 4: the image blob fitting module is responsible for fitting the output blob results and outputting the fitting result. For example, in the present embodiment, the image blob fitting module is an ellipse-fitting module that uses the commonly used least-squares ellipse-fitting method to compute an ellipse for each blob; the detailed computation is not elaborated here.
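The patent does not detail its least-squares ellipse fit; one common minimal variant (a linear least-squares conic fit, not necessarily the inventors' exact method) might look like:

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to a blob's pixel coordinates; returns the conic coefficients (a..e)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    return coeffs
```

For points on a circle of radius 2 the fit recovers 0.25·x² + 0.25·y² = 1, i.e. x² + y² = 4; ellipse center and axes can then be read off the conic coefficients.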
The foregoing are merely preferred embodiments of the present invention and do not thereby limit the claimed scope of the present invention; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A cluster-based blob extraction apparatus, comprising an image preprocessing module, an image blob extraction module, an image blob merging module and an image blob fitting module, characterized in that:
the image preprocessing module is configured to preprocess the acquired image;
the image blob extraction module is configured to: in the pyramid-model pixel computation, process the pixels of the current image according to an adaptive gray model built from the gray values of N pixels on a preset sampling template, retaining the current pixel when it satisfies the model and rejecting it otherwise; when the values finally converge, compute the gray value of the upper-layer pixel and save the adaptive gray model parameters of that upper-layer pixel; and deduce the blob number of each pixel of the bottom-layer image of the pyramid model by, after obtaining the mapping relation from a lower-layer pixel to its upper-layer pixels, determining the blob number of the lower-layer pixel according to the probabilistic relation between the gray value of the lower-layer pixel and the adaptive gray models of the upper-layer pixels;
the image blob merging module is configured to merge the output of the image blob extraction module, the merging method being: build a gray distribution model for each blob, and merge two blobs according to the similarity of their gray distribution models and their adjacency in the image;
and the image blob fitting module is configured to perform blob fitting on the output of the image blob merging module and output the result.
2. The cluster-based blob extraction apparatus according to claim 1, characterized in that, in the blob-number computation, the image blob extraction module further performs the following processing:
obtaining, according to the down-sampling, the set S of all upper-layer pixels corresponding to a lower-layer pixel;
finding the blob to which the point of the set S nearest in gray Euclidean distance to the lower-layer pixel belongs, and determining from that blob's gray value whether the lower-layer pixel belongs to it: if the gray value of the lower-layer pixel satisfies the adaptive gray model, the lower-layer pixel is judged to belong to that blob; otherwise the confidence of the lower-layer pixel is evaluated and the blob is numbered according to the evaluation result: if the confidence is 1, the lower-layer pixel belongs to a new blob, a new blob number is assigned to it, and the blob gray value and the adaptive gray model parameters are saved; if the confidence is 0, the lower-layer pixel is considered unclassifiable and its blob number is set to 0.
3. The cluster-based blob extraction apparatus according to claim 1, characterized in that the step of the image blob merging module merging the output of the image blob extraction module comprises:
building a blob matching model for each blob according to the output of the image blob extraction module;
taking a pair of adjacent blobs and comparing their blob matching models; if the models are not close, taking another pair of adjacent blobs and continuing the comparison; if the models are close, merging the two blobs, updating the blob matching model parameters of the new blob, then searching again for a pair of adjacent blobs and comparing them according to the updated blob matching models; and finishing when all adjacent blobs have been compared and no further merging is possible.
4. The cluster-based blob extraction apparatus according to claim 1, characterized in that the image blob extraction module computes the gray value of an upper-layer pixel with the following expression:
gray_n^l = ( 1 / ∑_{k∈R} ω_k · t_{n-1}^k · r_{n-1}^k ) · ∑_{k∈R} ω_k · t_{n-1}^k · r_{n-1}^k · gray_{n-1}^k ;
where ω_k is the weight, t_k is the rejection factor, and r denotes the pixel confidence.
5. The cluster-based blob extraction apparatus according to claim 1, characterized in that the image blob fitting module performs ellipse fitting on the output of the image blob merging module and outputs the result.
6. A cluster-based blob extraction method, whose cluster-based blob extraction apparatus comprises an image preprocessing module, an image blob extraction module, an image blob merging module and an image blob fitting module, characterized in that the method comprises:
the image preprocessing module preprocesses the acquired image;
in the pyramid-model pixel computation, the image blob extraction module processes the pixels of the current image according to an adaptive gray model built from the gray values of N pixels on a preset sampling template, retaining the current pixel when it satisfies the model and rejecting it otherwise; when the values finally converge, it computes the gray value of the upper-layer pixel and saves the adaptive gray model parameters of that upper-layer pixel; it deduces the blob number of each pixel of the bottom-layer image of the pyramid model by, after obtaining the mapping relation from a lower-layer pixel to its upper-layer pixels, determining the blob number of the lower-layer pixel according to the probabilistic relation between the gray value of the lower-layer pixel and the adaptive gray models of the upper-layer pixels;
the image blob merging module merges the output of the image blob extraction module, the merging method being: build a gray distribution model for each blob, and merge two blobs according to the similarity of their gray distribution models and their adjacency in the image;
the image blob fitting module performs blob fitting on the output of the image blob merging module and outputs the result.
7. The cluster-based blob extraction method according to claim 6, characterized in that, in the blob-number computation, the image blob extraction module further performs the following processing:
obtaining, according to the down-sampling, the set S of all upper-layer pixels corresponding to a lower-layer pixel;
finding the blob to which the point of the set S nearest in gray Euclidean distance to the lower-layer pixel belongs, and determining from that blob's gray value whether the lower-layer pixel belongs to it: if the gray value of the lower-layer pixel satisfies the adaptive gray model, the lower-layer pixel is judged to belong to that blob; otherwise the confidence of the lower-layer pixel is evaluated and the blob is numbered according to the evaluation result: if the confidence is 1, the lower-layer pixel belongs to a new blob, a new blob number is assigned to it, and the blob gray value and the adaptive gray model parameters are saved; if the confidence is 0, the lower-layer pixel is considered unclassifiable and its blob number is set to 0.
8. The cluster-based blob extraction method according to claim 6, characterized in that the step of the image blob merging module merging the output of the image blob extraction module comprises:
building a blob matching model for each blob according to the output of the image blob extraction module;
taking a pair of adjacent blobs and comparing their blob matching models; if the models are not close, taking another pair of adjacent blobs and continuing the comparison; if the models are close, merging the two blobs, updating the blob matching model parameters of the new blob, then searching again for a pair of adjacent blobs and comparing them according to the updated blob matching models; and finishing when all adjacent blobs have been compared and no further merging is possible.
9. The cluster-based blob extraction method according to claim 6, characterized in that the image blob extraction module computes the gray value of an upper-layer pixel with the following expression:
gray_n^l = ( 1 / ∑_{k∈R} ω_k · t_{n-1}^k · r_{n-1}^k ) · ∑_{k∈R} ω_k · t_{n-1}^k · r_{n-1}^k · gray_{n-1}^k ;
where ω_k is the weight, t_k is the rejection factor, and r denotes the pixel confidence.
10. The cluster-based blob extraction method according to claim 6, characterized in that the image blob fitting module performs ellipse fitting on the output of the image blob merging module and outputs the result.
CN201210159637.6A 2012-05-22 2012-05-22 A kind of agglomerate extraction element and method based on cluster Active CN103425981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210159637.6A CN103425981B (en) 2012-05-22 2012-05-22 A kind of agglomerate extraction element and method based on cluster


Publications (2)

Publication Number Publication Date
CN103425981A true CN103425981A (en) 2013-12-04
CN103425981B CN103425981B (en) 2017-09-15

Family

ID=49650692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210159637.6A Active CN103425981B (en) 2012-05-22 2012-05-22 A kind of agglomerate extraction element and method based on cluster

Country Status (1)

Country Link
CN (1) CN103425981B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886168A (en) * 2019-02-01 2019-06-14 淮阴工学院 A kind of traffic above-ground sign based on layer rank

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187220A1 (en) * 2006-12-04 2008-08-07 Lockheed Martin Corporation Device and method for fast computation of region based image features
CN102054269A (en) * 2009-10-27 2011-05-11 华为技术有限公司 Method and device for detecting feature point of image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KYUJIN CHO et al.: "Image Segmentation from Consensus Information", Computer Vision and Image Understanding *
SUI Shujuan: "Research on optical image retrieval methods based on shape features", China Master's Theses Full-text Database, Information Science and Technology Series *




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant