CN102855492B - Classification method based on mineral flotation foam image


Info

Publication number
CN102855492B
CN102855492B (application CN201210265125.8A; publication CN102855492A)
Authority
CN
China
Prior art keywords
image
sigma
vector
classification
foam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210265125.8A
Other languages
Chinese (zh)
Other versions
CN102855492A (en)
Inventor
王雅琳
张润钦
陈晓方
谢永芳
桂卫华
阳春华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201210265125.8A priority Critical patent/CN102855492B/en
Publication of CN102855492A publication Critical patent/CN102855492A/en
Application granted granted Critical
Publication of CN102855492B publication Critical patent/CN102855492B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a classification method based on mineral flotation froth images, which classifies froth images acquired in real time into different known working conditions. The method comprises the following steps: the notion of a vocabulary from text classification is introduced into flotation froth images; the froth image acquired by an industrial camera is divided into blocks and characteristic parameters are extracted; the color and texture parameters of the extracted froth image blocks are clustered with the K-means method to obtain a number of cluster centers, which constitute a foam-state vocabulary; the real-time froth image is then described with the bag-of-words method using the obtained vocabulary, forming a vector representation of the image; finally, the froth image is classified by measuring the similarity between vectors with a vector space model. Because different classes correspond to different working conditions, flotation working-condition recognition can be performed according to the classification result of the froth image, so that operation guidance is provided, production is optimized, and production efficiency is improved.

Description

Classification method based on mineral flotation froth images
[Technical field]
The invention belongs to the field of mineral flotation, and in particular relates to a classification method for mineral flotation froth images.
[Background]
In mineral flotation production, the process is generally controlled by experienced operators who observe the state of the flotation froth; the uncertainty of this practice makes it difficult to keep the flotation process at its optimum operating point. Using digital image processing to classify and interpret flotation froth images, obtain working-condition information of the flotation process, and then optimize control is an effective way to increase economic efficiency. Current research on the classification and recognition of flotation froth images mainly describes froth images by extracting low-level features such as texture, color and bubble size distribution, and then applies methods such as neural networks or support vector machines for classification and recognition, so as to obtain the working condition and guide production. However, the image is processed as a whole, and a froth image described only by low-level image features is not easy for people to understand; that is, classification results based on low-level features suffer from the semantic gap problem. In addition, because the local information of the froth image is not exploited, and because illumination changes and dust introduce noise on the industrial site, two completely different images may have very similar global low-level features. As a result, the accuracy of classification and recognition based on neural networks or SVMs applied to low-level froth-image features is low, production operations based on the recognized working condition are frequently wrong, and stable optimized operation is difficult to achieve.
The vector space model (VSM), proposed by Salton et al. in the 1970s, is an algebraic model for representing text. VSM reduces the processing of text content to vector operations in a vector space and expresses semantic similarity through spatial similarity, which is intuitive and easy to understand. For froth images, if an image can be understood at the semantic level and froth images are classified and recognized at that level, the classification result is closer to human understanding. Since froth images are affected by the illumination and dust of the industrial site, and all classes of froth images are inherently dark and highly similar, distinguishing the classes from local details becomes all the more important.
[Summary of the invention]
The object of the invention is to provide a classification method based on mineral flotation froth images that addresses the semantic gap and the inaccurate classification of image classification methods based on low-level flotation froth features.
The froth-image classification method provided by the present invention mainly comprises three stages: (1) generation of a foam-state vocabulary from the texture and color features of froth images; (2) bag-of-words description of the froth image; (3) froth-image classification using the vector space model. The stages are described in detail as follows:
(1) Generation of the foam-state vocabulary based on texture and color features
On a flotation site, froth images of different working conditions differ considerably, mainly in the coarseness of the texture and the depth of the grooves between bubbles. In addition, flotation froth has a distinct color, which to a large extent reflects information such as the type and amount of mineral carried by the froth. Texture and color features are therefore selected to describe the froth image. The gray-level co-occurrence matrix (GLCM) algorithm is used to extract the texture parameters of the froth image, namely angular second moment, entropy, contrast, inverse difference moment and correlation; the color feature of the froth image, the relative red component, is then extracted.
The gray-level co-occurrence matrix is the matrix formed by the joint probability densities P(i, j, d, θ) between gray levels of the image; it reflects, from a statistical point of view, the spatial correlation of the gray levels of any two pixels. P(i, j, d, θ), the co-occurrence matrix defined for direction θ and spacing d, is the value of the element in row i and column j of the matrix. θ takes the four directions 0°, 45°, 90° and 135°. Let I(x, y) be a two-dimensional digital image of size U × V pixels, with x and y the pixel coordinates. For the different values of θ, P(i, j, d, θ) is computed as follows:
P(i, j, d, 0°) = Num{ (x1, y1), (x2, y2) ∈ U × V | x1 − x2 = 0, |y1 − y2| = d; I(x1, y1) = i, I(x2, y2) = j }
P(i, j, d, 45°) = Num{ (x1, y1), (x2, y2) ∈ U × V | (x1 − x2 = d, y1 − y2 = d) or (x1 − x2 = −d, y1 − y2 = −d); I(x1, y1) = i, I(x2, y2) = j }
P(i, j, d, 90°) = Num{ (x1, y1), (x2, y2) ∈ U × V | |x1 − x2| = d, y1 − y2 = 0; I(x1, y1) = i, I(x2, y2) = j }
P(i, j, d, 135°) = Num{ (x1, y1), (x2, y2) ∈ U × V | (x1 − x2 = d, y1 − y2 = −d) or (x1 − x2 = −d, y1 − y2 = d); I(x1, y1) = i, I(x2, y2) = j }
where Num{X} denotes the number of elements in the set X.
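Purely as an illustration of the directional counting defined above, a minimal numpy sketch is given below; the quantization to a fixed number of gray levels, the symmetric counting of both offsets per direction, and the normalization to joint probabilities are implementation assumptions not fixed by the patent text.

import numpy as np

def glcm(img, d=1, theta=0, levels=16):
    # Gray-level co-occurrence matrix P(i, j, d, theta) for one gray image block.
    # Quantize the block to a small number of gray levels.
    q = (img.astype(np.float64) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    # Pixel offsets (dx, dy) corresponding to theta = 0, 45, 90, 135 degrees.
    offsets = {0: (0, d), 45: (d, d), 90: (d, 0), 135: (d, -d)}
    dx, dy = offsets[theta]
    P = np.zeros((levels, levels), dtype=np.float64)
    U, V = q.shape
    for x in range(U):
        for y in range(V):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < U and 0 <= y2 < V:
                P[q[x, y], q[x2, y2]] += 1
                P[q[x2, y2], q[x, y]] += 1   # count the opposite offset as well
    return P / P.sum()                        # normalize to joint probabilities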
The texture and color parameters are computed and interpreted as follows:
1. ASM (angular second moment)
ASM = Σ_{i,j} [P(i, j | d, θ)]²
ASM describes the uniformity of the gray-level distribution of the image; the coarser the texture, the larger the ASM, and conversely the smaller.
2. ENT (entropy)
ENT = −Σ_{i,j} P(i, j | d, θ) · log P(i, j | d, θ)
ENT describes the amount of information in the image; an image with rich texture has a large entropy, while a smooth image with little texture has a small entropy.
3. CON (contrast)
CON = Σ_{i,j} (i − j)² P(i, j | d, θ)
CON describes the clarity of the image and the depth of the texture grooves; the deeper the grooves, the larger the contrast and the clearer the image.
4. IDM (inverse difference moment)
IDM = Σ_{i,j} P(i, j | d, θ) / [1 + (i − j)²]
IDM describes the homogeneity of the image texture and measures the amount of local variation; the more the co-occurrence matrix is concentrated along its diagonal, the larger the IDM.
5. COR (correlation)
COR = Σ_{i,j} (i − μ_x)(j − μ_y) P(i, j | d, θ) / (σ_x σ_y)
μ_x = Σ_i i Σ_j P(i, j | d, θ),  μ_y = Σ_j j Σ_i P(i, j | d, θ)
σ_x² = Σ_i (i − μ_x)² Σ_j P(i, j | d, θ),  σ_y² = Σ_j (j − μ_y)² Σ_i P(i, j | d, θ)
COR describes the similarity of the co-occurrence matrix elements along rows or columns and reflects how far a gray level extends in a given direction; the longer the extension, the larger the COR.
6. R_relative (relative red component)
R_relative = R_red / R_gray
where R_red and R_gray denote the mean red component and the mean gray value, respectively.
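A sketch of how the five texture parameters and the relative red component of one image block might be computed from the matrix returned by the glcm() helper above; the epsilon terms that guard against log(0) and division by zero, and the use of the channel average as the gray value, are assumptions. A single direction is used here to keep the sketch short; the parameters can also be averaged over the four directions.

def glcm_features(P):
    # ASM, ENT, CON, IDM and COR of a normalized co-occurrence matrix P.
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)
    ent = -np.sum(P * np.log(P + 1e-12))
    con = np.sum((i - j) ** 2 * P)
    idm = np.sum(P / (1.0 + (i - j) ** 2))
    mu_x = np.sum(i * P)
    mu_y = np.sum(j * P)
    sigma_x = np.sqrt(np.sum((i - mu_x) ** 2 * P))
    sigma_y = np.sqrt(np.sum((j - mu_y) ** 2 * P))
    cor = np.sum((i - mu_x) * (j - mu_y) * P) / (sigma_x * sigma_y + 1e-12)
    return asm, ent, con, idm, cor

def relative_red(rgb_block):
    # R_relative = mean red component / mean gray value of an RGB block.
    r_red = rgb_block[..., 0].mean()
    r_gray = rgb_block.mean(axis=-1).mean()   # channel average used as the gray value (assumption)
    return r_red / (r_gray + 1e-12)

def block_descriptor(gray_block, rgb_block, d=1, theta=0):
    # 1 x 6 low-level feature vector of one block: five texture parameters plus R_relative.
    P = glcm(gray_block, d=d, theta=theta)
    return np.array([*glcm_features(P), relative_red(rgb_block)])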
The foam-state vocabulary is generated as follows (a sketch of the whole procedure follows the list):
Step 1: Choose N images from the image library (the N images should cover all image categories as widely as possible) and crop every image to the same pixel size L_x × L_y;
Step 2: Divide each of the N cropped images evenly into m × m blocks;
Step 3: For each block, compute its texture feature values and its relative red component value, forming a 1 × 6 low-level feature vector description;
Step 4: Apply K-means clustering to all the low-level feature vectors; the D cluster centers obtained constitute the foam-state vocabulary.
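A minimal sketch of this vocabulary generation, assuming the block_descriptor() helper above and scikit-learn's KMeans; the cropping and the even m × m partition follow steps 1 to 4, while the exact pre-processing of the images is left open by the patent.

from sklearn.cluster import KMeans

def build_vocabulary(images_gray, images_rgb, m=10, D=8, Lx=960, Ly=960):
    # Cluster the block descriptors of the N training images into D foam-state words.
    descriptors = []
    for g, c in zip(images_gray, images_rgb):
        g, c = g[:Lx, :Ly], c[:Lx, :Ly]        # crop every image to Lx x Ly pixels
        bx, by = Lx // m, Ly // m              # block size for an m x m partition
        for u in range(m):
            for v in range(m):
                gb = g[u*bx:(u+1)*bx, v*by:(v+1)*by]
                cb = c[u*bx:(u+1)*bx, v*by:(v+1)*by]
                descriptors.append(block_descriptor(gb, cb))
    km = KMeans(n_clusters=D, n_init=10, random_state=0).fit(np.array(descriptors))
    return km.cluster_centers_                  # D x 6 foam-state vocabulary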
(2) Bag-of-words description of the froth image
Once the foam-state vocabulary has been obtained, each froth image can be described with the bag-of-words method, yielding a vector representation of the image.
The bag-of-words description of a froth image proceeds as follows:
Step 1: Crop the image to L_x × L_y pixels;
Step 2: Divide the image evenly into m × m blocks;
Step 3: For each block, compute its texture features and relative red component value, forming a 1 × 6 low-level feature vector description;
Step 4: Measure the similarity between the feature vector of each block and every word in the foam-state vocabulary by computing their Euclidean distances; the block is labelled with the word it is most similar to;
Step 5: Count the number of occurrences of each foam-state word to obtain the bag-of-words vector representation of the image.
The value of m ranges from 5 to 20. A sketch of the description procedure is given below.
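A sketch of steps 1 to 5, again assuming the block_descriptor() helper sketched earlier; the returned histogram is the 1 × D bag-of-words vector of the image.

def bag_of_words(img_gray, img_rgb, vocabulary, m=10, Lx=960, Ly=960):
    # Histogram of foam-state words for one froth image.
    g, c = img_gray[:Lx, :Ly], img_rgb[:Lx, :Ly]
    bx, by = Lx // m, Ly // m
    hist = np.zeros(len(vocabulary))
    for u in range(m):
        for v in range(m):
            desc = block_descriptor(g[u*bx:(u+1)*bx, v*by:(v+1)*by],
                                    c[u*bx:(u+1)*bx, v*by:(v+1)*by])
            word = np.argmin(np.linalg.norm(vocabulary - desc, axis=1))  # nearest word by Euclidean distance
            hist[word] += 1
    return hist                                  # 1 x D word-bag vector of the image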
(3) Froth-image classification using the vector space model
Several training and classification methods based on the vector space model exist, such as the Bayes method, the K-nearest-neighbor method, support vector machines and neural networks. The present invention classifies froth images by two methods: vector inner-product classification and K-nearest-neighbor classification.
Method 1: vector inner-product classification
To keep the classification effect stable, the present invention does not characterize a class by its center vector, but classifies by computing the sum of the similarities between the image to be classified and all images of each class in the training set. The idea is as follows: the images of each class in the training set are described with the bag-of-words method, giving the bag-of-words vector sets C_i, i = 1, 2, …, N_S (N_S is the number of classes in the training set). When a new real-time image arrives, it is described with the bag-of-words method to obtain its vector representation, and it is classified by computing the similarity between this vector and the vector sets C_i of the classes. The detailed procedure is as follows:
Step 1: Describe the image set of each class in the training library with the bag-of-words model to obtain the vector set of that class; each class contributes M images (M is 10 to 200), so the bag-of-words vector set of class i is
C_i = [c_i1 c_i2 … c_iM]^T, i = 1, 2, …, N_S
where c_ij, the bag-of-words description of the j-th image of class i, is a 1 × D row vector;
Step 2: For the image to be classified, obtain its bag-of-words description Q = [q_1 q_2 … q_D], a 1 × D row vector;
Step 3: Compute the similarity between Q and each vector c_ij of class C_i (here measured by the vector inner product), and obtain the sum SC_i of the similarities between Q and all vectors of each class set C_i:
SC_i = Σ_{j=1}^{M} Sim(c_ij, Q), i = 1, 2, 3, …, N_S
where Sim(W, Q) = W · Q = Σ_{i=1}^{D} w_i × q_i and W = [w_1 w_2 … w_D] is a 1 × D row vector;
Step 4: Classify Q into the class k with the largest similarity, k = arg max_i SC_i (a compact sketch of this method is given below).
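A compact sketch of Method 1, under the assumption that each class set C_i is stored as an (M × D) array of word-bag vectors:

def classify_inner_product(Q, class_sets):
    # Method 1: assign Q to the class with the largest summed inner product SC_i.
    scores = [np.sum(C @ Q) for C in class_sets]   # SC_i = sum_j c_ij . Q
    return int(np.argmax(scores))                  # index of the winning class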
Method 2: K-nearest-neighbor classification
The idea of this method is: describe the image to be classified with the bag-of-words method, find the K training images most similar to the new image, and decide the class of the new image from the classes of those K images. The concrete steps are as follows:
Step 1: Describe the N training images with the bag-of-words method to obtain a vector set Z used as the training vector set, Z = [Z_1 Z_2 … Z_N]^T, where Z_i = [z_i1 z_i2 … z_iD] is the bag-of-words vector description of the i-th image.
Step 2: Describe the image to be classified with the bag-of-words method to obtain a vector Q = [q_1 q_2 … q_D], a 1 × D row vector.
Step 3: Select from the training images the K images most similar to the image Q to be classified. The similarity is computed as
Sim(Z_i, Q) = Z_i · Q = Σ_{j=1}^{D} z_ij × q_j, Z_i ∈ Z
Step 4: Determine the class of the image to be classified from the class distribution of these K images (a sketch follows).
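A sketch of Method 2; the value K = 5 and the majority vote used in step 4 are assumptions, since the patent leaves the exact decision rule open:

def classify_knn(Q, Z, labels, K=5):
    # Method 2: class of the new image decided by its K most similar training images.
    sims = Z @ Q                                   # Sim(Z_i, Q) = Z_i . Q for every training image
    nearest = np.argsort(sims)[-K:]                # indices of the K most similar images
    votes = np.bincount(np.asarray(labels)[nearest])
    return int(np.argmax(votes))                   # majority class among the K neighbors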
Exploiting these characteristics of froth images, the present invention abstracts a froth image into a text, after which the froth image can be classified and recognized at the semantic level just like a text. Because the local information of the image is fully exploited through block-wise processing and two levels of abstraction that yield a semantic-level description of the froth image, the invention on the one hand improves classification accuracy and on the other hand alleviates the semantic gap problem, providing a new approach to froth-image classification and working-condition recognition. Once the froth image is classified, the current working condition is identified; this is then used to guide and optimize production, which can reduce reagent dosage and improve concentrate grade and mineral recovery.
[Brief description of the drawings]
Fig. 1 shows froth image blocks belonging to different foam-state words;
Fig. 2 is a schematic diagram of the bag-of-words description of a froth image.
[Embodiment]
A specific embodiment of the invention is described below with reference to the accompanying drawings. According to the different working conditions, the froth images are divided into 4 classes, and the length of the foam-state vocabulary is chosen as 8.
1. Generation of the foam-state vocabulary based on froth texture and color features
A batch of 400 froth images was collected on site from the flotation process of a plant. Each of these 400 images was cropped to 960 × 960 pixels, pre-processed, and evenly divided into 10 × 10 blocks. For each block, its texture parameters and color parameter were extracted to obtain a 1 × 6 low-level feature vector, giving 40,000 vectors in total. K-means clustering (with randomly selected initial cluster centers) was applied to these 40,000 vectors to obtain D centers (here D = 8, since the chosen vocabulary length is 8). Table 1 lists the values obtained with 8 cluster centers, which constitute a foam-state vocabulary of 8 words. Fig. 1 shows the froth image sub-blocks corresponding to the 8 different words.
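Purely for illustration, the earlier sketches could be wired up with the parameters of this embodiment as follows (train_gray, train_rgb, test_gray and test_rgb are hypothetical variables holding the loaded images):

# 400 site images, 960 x 960 pixel crops, 10 x 10 blocks, an 8-word vocabulary
vocab = build_vocabulary(train_gray, train_rgb, m=10, D=8, Lx=960, Ly=960)
Q = bag_of_words(test_gray, test_rgb, vocab, m=10, Lx=960, Ly=960)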
Table 1. Foam-state vocabulary containing 8 words
Foam-state word    Word vector
Foam-state word 1    (0.4597, 1.5542, 0.2826, 0.3183, 0.9159, 1.0807)
Foam-state word 2    (0.2767, 2.0477, 0.4839, 0.2399, 0.8663, 1.0650)
Foam-state word 3    (0.2636, 2.1293, 0.5499, 0.3051, 0.8526, 1.0782)
Foam-state word 4    (0.4028, 1.8784, 0.5426, 0.3011, 0.8675, 1.0853)
Foam-state word 5    (0.5218, 1.5766, 0.6347, 0.2590, 0.8817, 1.0551)
Foam-state word 6    (0.6718, 1.1333, 0.4545, 0.3190, 0.9211, 1.0424)
Foam-state word 7    (0.6566, 1.0234, 0.3362, 0.4186, 0.9394, 1.0407)
Foam-state word 8    (0.3485, 1.5968, 0.1562, 0.6743, 0.9349, 1.0704)
2. Bag-of-words description of the training-set froth images
Historical images from the industrial site, manually divided by expert experience into 4 classes corresponding to different working conditions, were used, with 100 images per class, i.e. M = 100. First, every image was cropped to 960 × 960 pixels and evenly divided into 10 × 10 blocks. For each block, its texture parameters and color parameter were computed to obtain a 1 × 6 low-level feature vector. The feature vector of each block was then compared with the words of the foam-state vocabulary (by computing Euclidean distances), and the block was labelled with the word it is most similar to (smallest Euclidean distance). Finally, the bag-of-words method was used to obtain the vector sets C_1, C_2, C_3, C_4 of the four classes, where C_i = [c_i1 c_i2 … c_iM]^T and c_ij, the vector representation of the j-th image of class i, is a 1 × D row vector.
Fig. 2 illustrates the bag-of-words description of one image. In Fig. 2, ① is the image after blocking, ② is the foam-state word vector table, and ③ is the bag-of-words description of the image. After blocking (①), each block is compared with the vocabulary (②) to obtain the word corresponding to that block; counting the frequency of each word in ① gives the bag-of-words description ③ of the froth image.
3. Froth-image classification
For a froth image t to be classified, obtained in real time, the bag-of-words method is applied to obtain the vector Q_t, after which it can be classified as follows.
(1) Vector inner-product method: compute the sum of the inner products of Q_t with the vectors of each of the sets C_1, C_2, C_3, C_4; the image t is assigned to the class with the largest inner-product sum. The procedure is as follows:
The bag-of-words vector representation of the froth image to be classified is Q_t = [2, 76, 16, 4, 0, 2, 0, 0]; the vector sets of the training-set classes C_1, C_2, C_3, C_4 take the following values:
          Class 1 vectors (i=1)     Class 2 vectors (i=2)     Class 3 vectors (i=3)     Class 4 vectors (i=4)
c_i1      [2,64,22,4,2,2,4,0]       [0,4,2,3,0,2,82,7]        [2,0,2,4,15,65,12,0]      [32,0,2,4,0,0,2,60]
c_i2      [4,80,10,0,0,0,3,3]       [1,4,2,4,0,2,72,15]       [2,4,0,12,70,10,2,0]      [25,1,2,4,0,0,12,56]
c_i3      [0,68,21,1,4,2,1,3]       [2,4,2,0,4,11,62,15]      [1,10,2,1,45,40,1,0]      [52,10,2,0,1,0,2,33]
c_i4      [8,62,20,4,4,2,0,0]       [1,5,2,4,0,2,74,12]       [2,0,12,0,62,20,4,0]      [23,0,2,4,1,5,10,55]
c_i5      [5,72,15,0,0,4,2,2]       [0,0,5,2,2,1,68,22]       [0,0,10,3,80,6,0,1]       [42,1,2,6,0,0,12,37]
c_i6      [4,76,11,4,5,0,0,0]       [1,2,8,2,0,0,77,10]       [9,0,0,2,72,12,0,5]       [34,10,2,4,0,0,0,50]
c_i7      [4,70,26,0,0,0,0,0]       [6,0,3,3,0,12,66,10]      [0,0,11,5,22,60,2]        [56,0,0,4,0,0,12,28]
…         …                         …                         …                         …
c_i100    [2,76,18,2,0,0,1,1]       [4,4,3,0,0,9,67,13]       [2,4,2,0,35,45,11,1]      [45,10,2,0,0,1,2,40]
Compute the sum of the inner products of Q_t with every vector of each class, and assign Q_t to the class with the largest sum. Using SC_i = Σ_{j=1}^{M} Sim(c_ij, Q_t), i = 1, 2, 3, 4, and k = arg max_i SC_i, i = 1, 2, 3, 4, with M = 100, one obtains SC_1 = 532040, SC_2 = 23450, SC_3 = 4324, SC_4 = 5480. Q_t is therefore classified into the first class.
(2) K-nearest-neighbor method: select from the training sets C_1, C_2, C_3, C_4 the K row vectors most similar to Q_t, and decide the class of the new real-time image from the classes of those K images.
Tables 2 and 3 give the classification results of the vector inner-product method and the K-nearest-neighbor method, respectively, for different foam-state vocabulary lengths (D = 8, 12, 16) and different numbers of image blocks (m = 8, 10, 12). As can be seen from the tables, both the number of blocks and the size of the foam-state vocabulary affect the classification result. A larger vocabulary gives better classification, but more blocks are not always better: if there are too many blocks (i.e. each block is too small), the texture and color features extracted from a block cannot represent its true value because of noise in the image (stain regions, etc.), which leads to visual-word matching errors; and the larger the vocabulary, the larger the computational load. Real-time performance and accuracy therefore have to be balanced by choosing a vocabulary of suitable length and a reasonable image partition, generally m = 5 to 20.
Table 2. Vector inner-product classification results based on the VSM
Table 3. K-nearest-neighbor classification results based on the VSM
Conclusion: the froth-image classification method of the present invention can classify and recognize froth images accurately, and thus determine whether the current production working condition is excellent, good, medium or poor. This is then used to guide and optimize production, effectively reducing reagent dosage and improving concentrate grade and mineral recovery. The froth classification method of the present invention is effective.

Claims (2)

1. A method for flotation working-condition recognition based on the classification of mineral flotation froth images, characterized by comprising the following process:
(1) Generation of the foam-state vocabulary of flotation froth images based on texture and color features
The gray-level co-occurrence matrix algorithm is used to extract the texture parameters of the froth image, namely angular second moment, entropy, contrast, inverse difference moment and correlation; the color feature of the froth image, the relative red component, is extracted:
The gray-level co-occurrence matrix is the matrix formed by the joint probability densities P(i, j, d, θ) between gray levels of the image; P(i, j, d, θ), defined for direction θ and spacing d, is the value of the element in row i and column j of the matrix; θ takes the four directions 0°, 45°, 90° and 135°; let I(x, y) be a two-dimensional digital image of size U × V pixels, with x and y the pixel coordinates; then, for the different values of θ, P(i, j, d, θ) is computed as follows:
P(i, j, d, 0°) = Num{ (x1, y1), (x2, y2) ∈ U × V | x1 − x2 = 0, |y1 − y2| = d; I(x1, y1) = i, I(x2, y2) = j }
P(i, j, d, 45°) = Num{ (x1, y1), (x2, y2) ∈ U × V | (x1 − x2 = d, y1 − y2 = d) or (x1 − x2 = −d, y1 − y2 = −d); I(x1, y1) = i, I(x2, y2) = j }
P(i, j, d, 90°) = Num{ (x1, y1), (x2, y2) ∈ U × V | |x1 − x2| = d, y1 − y2 = 0; I(x1, y1) = i, I(x2, y2) = j }
P(i, j, d, 135°) = Num{ (x1, y1), (x2, y2) ∈ U × V | (x1 − x2 = d, y1 − y2 = −d) or (x1 − x2 = −d, y1 − y2 = d); I(x1, y1) = i, I(x2, y2) = j }
where Num{X} denotes the number of elements in the set X;
The texture and color parameters are computed as follows:
Angular second moment
ASM = Σ_{i,j} [P(i, j | d, θ)]²
Entropy
ENT = −Σ_{i,j} P(i, j | d, θ) · log P(i, j | d, θ)
Contrast
CON = Σ_{i,j} (i − j)² P(i, j | d, θ)
Inverse difference moment
IDM = Σ_{i,j} P(i, j | d, θ) / [1 + (i − j)²]
Correlation
COR = Σ_{i,j} (i − μ_x)(j − μ_y) P(i, j | d, θ) / (σ_x σ_y)
μ_x = Σ_i i Σ_j P(i, j | d, θ),  μ_y = Σ_j j Σ_i P(i, j | d, θ)
σ_x² = Σ_i (i − μ_x)² Σ_j P(i, j | d, θ),  σ_y² = Σ_j (j − μ_y)² Σ_i P(i, j | d, θ)
Relative red component
R_relative = R_red / R_gray
where R_red and R_gray denote the mean red component and the mean gray value, respectively;
The foam-state vocabulary is generated as follows:
Step 1: Choose N images from the image library, the N images covering all image categories as widely as possible, and crop every image to the same pixel size L_x × L_y;
Step 2: Divide each of the N cropped images evenly into m × m blocks;
Step 3: For each block, compute its texture feature values and relative red component value, forming a 1 × 6 low-level feature vector description;
Step 4: Apply K-means clustering to all the low-level feature vector descriptions; the D cluster centers obtained constitute the foam-state vocabulary;
(2) Bag-of-words description of the froth image
Once the foam-state vocabulary has been obtained, each froth image can be described with the bag-of-words method, yielding a vector representation of the image;
The bag-of-words description of a froth image proceeds as follows:
Step 1: Crop the image to L_x × L_y pixels;
Step 2: Divide the image evenly into m × m blocks;
Step 3: For each block, compute its texture features and relative red component value, forming a 1 × 6 low-level feature vector description;
Step 4: Measure the similarity between the feature vector description of each block and every word in the foam-state vocabulary by computing their Euclidean distances; the block is labelled with the word it is most similar to;
Step 5: Count the number of occurrences of each foam-state word to obtain the bag-of-words vector representation of the image;
(3) Froth-image classification using the vector space model
The vector inner-product method or the K-nearest-neighbor method is used to train and classify the froth images, giving the different classes of the images;
The procedure of the vector inner-product method is:
The images of each class in the training set are described with the bag-of-words method, giving the bag-of-words vector sets C_e, e = 1, 2, …, N_S, where N_S is the number of classes in the training set; when a new real-time image arrives, it is described with the bag-of-words method to obtain its vector representation, and it is classified by computing the similarity between this vector and the bag-of-words vector sets C_e of the classes;
Step 1: Describe the image set of each class in the training library with the bag-of-words model to obtain the vector set of that class; each class contributes M images, with M from 10 to 200, so the bag-of-words vector set of class e is
C_e = [c_e1 c_e2 … c_eM]^T, e = 1, 2, …, N_S
where c_ef, the bag-of-words description of the f-th image of class e, is a 1 × D row vector, f = 1, 2, …, M;
Step 2: For the image to be classified, obtain its bag-of-words description Q = [q_1 q_2 … q_D], a 1 × D row vector;
Step 3: Compute the similarity between Q and each vector c_ef of class C_e, measuring the similarity by the vector inner product, and obtain the sum SC_e of the similarities between Q and all vectors of each class set C_e:
SC_e = Σ_{f=1}^{M} Sim(c_ef, Q), e = 1, 2, 3, …, N_S
where Sim(W, Q) = W · Q = Σ_{g=1}^{D} w_g × q_g and W = [w_1 w_2 … w_D] is a 1 × D row vector;
Step 4: Classify Q into the class k with the largest similarity; flotation working-condition recognition is then carried out according to the classification result of the froth image.
2. The method for flotation working-condition recognition based on the classification of mineral flotation froth images according to claim 1, characterized in that: in step 2 of (1) or (2), the value of m is 5 to 20.
CN201210265125.8A 2012-07-27 2012-07-27 Classification method based on mineral flotation foam image Active CN102855492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210265125.8A CN102855492B (en) 2012-07-27 2012-07-27 Classification method based on mineral flotation foam image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210265125.8A CN102855492B (en) 2012-07-27 2012-07-27 Classification method based on mineral flotation foam image

Publications (2)

Publication Number Publication Date
CN102855492A CN102855492A (en) 2013-01-02
CN102855492B true CN102855492B (en) 2015-02-04

Family

ID=47402068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210265125.8A Active CN102855492B (en) 2012-07-27 2012-07-27 Classification method based on mineral flotation foam image

Country Status (1)

Country Link
CN (1) CN102855492B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345636B (en) * 2013-06-24 2016-04-13 中南大学 Based on the copper flotation site bubble condition recognition methods of multi-scale wavelet binaryzation
CN106257498B (en) * 2016-07-27 2019-12-17 中南大学 Zinc flotation working condition state division method based on heterogeneous texture characteristics
CN107766856A (en) * 2016-08-20 2018-03-06 湖南军芃科技股份有限公司 A kind of ore visible images method for separating based on Adaboost machine learning
CN106597898B (en) * 2016-12-16 2019-05-31 鞍钢集团矿业有限公司 A kind of the Floating Production Process control method and system of Behavior-based control portrait
CN106650823A (en) * 2016-12-30 2017-05-10 湖南文理学院 Probability extreme learning machine integration-based foam nickel surface defect classification method
CN108805148B (en) * 2017-04-28 2022-01-11 富士通株式会社 Method of processing image and apparatus for processing image
CN107392232B (en) * 2017-06-23 2020-09-29 中南大学 Flotation working condition classification method and system
CN109214235A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 outdoor scene classification method and system
CN109815963B (en) * 2019-01-28 2022-03-04 东北大学 Automatic flotation tailing foam image target area acquisition method based on sliding window technology
CN111028193B (en) * 2019-03-26 2020-09-04 三明市润泽环保科技有限公司 Real-time water surface data monitoring system
CN111860533B (en) * 2019-04-30 2023-12-12 深圳数字生命研究院 Image recognition method and device, storage medium and electronic device
CN110288260B (en) * 2019-07-02 2022-04-22 太原理工大学 Coal slime flotation reagent addition amount evaluation method based on semi-supervised clustering
CN110288592B (en) * 2019-07-02 2021-03-02 中南大学 Zinc flotation dosing state evaluation method based on probability semantic analysis model
CN110728677B (en) * 2019-07-22 2021-04-02 中南大学 Texture roughness defining method based on sliding window algorithm
CN110738674B (en) * 2019-07-22 2021-03-02 中南大学 Texture feature measurement method based on particle density
CN110942104B (en) * 2019-12-06 2023-08-25 中南大学 Mixed feature selection method and system for foam flotation working condition identification process
CN111709942B (en) * 2020-06-29 2022-04-12 中南大学 Zinc flotation dosing amount prediction control method based on texture degree optimization
CN112330588B (en) * 2020-08-07 2023-09-12 辽宁中新自动控制集团股份有限公司 Classification method for flotation foam images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1102180A1 (en) * 1999-11-16 2001-05-23 STMicroelectronics S.r.l. Content-based digital-image classification method
CN101036904A (en) * 2007-04-30 2007-09-19 中南大学 Flotation froth image recognition device based on machine vision and the mine concentration grade forecast method
EP2383680A1 (en) * 2010-04-29 2011-11-02 Dralle A/S Classification of objects in harvesting applications
CN102360435A (en) * 2011-10-26 2012-02-22 西安电子科技大学 Undesirable image detecting method based on connotative theme analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8705866B2 (en) * 2010-12-07 2014-04-22 Sony Corporation Region description and modeling for image subscene recognition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1102180A1 (en) * 1999-11-16 2001-05-23 STMicroelectronics S.r.l. Content-based digital-image classification method
CN101036904A (en) * 2007-04-30 2007-09-19 中南大学 Flotation froth image recognition device based on machine vision and the mine concentration grade forecast method
EP2383680A1 (en) * 2010-04-29 2011-11-02 Dralle A/S Classification of objects in harvesting applications
CN102360435A (en) * 2011-10-26 2012-02-22 西安电子科技大学 Undesirable image detecting method based on connotative theme analysis

Also Published As

Publication number Publication date
CN102855492A (en) 2013-01-02

Similar Documents

Publication Publication Date Title
CN102855492B (en) Classification method based on mineral flotation foam image
CN105957076A (en) Clustering based point cloud segmentation method and system
CN104615638B (en) A kind of distributed Density Clustering method towards big data
CN101980298B (en) Multi-agent genetic clustering algorithm-based image segmentation method
CN104268600B (en) A kind of mineral flotation foam image texture analysis and operating mode's switch method based on Minkowski distances
CN106257498A (en) Zinc flotation work condition state division methods based on isomery textural characteristics
CN111127499A (en) Security inspection image cutter detection segmentation method based on semantic contour information
CN105260738A (en) Method and system for detecting change of high-resolution remote sensing image based on active learning
CN107180426A (en) Area of computer aided Lung neoplasm sorting technique based on transportable multiple-model integration
CN110322453A (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN104463199A (en) Rock fragment size classification method based on multiple features and segmentation recorrection
CN103839065A (en) Extraction method for dynamic crowd gathering characteristics
CN105138982A (en) Crowd abnormity detection and evaluation method based on multi-characteristic cluster and classification
CN102842043B (en) Particle swarm classifying method based on automatic clustering
CN101251896B (en) Object detecting system and method based on multiple classifiers
CN102622753A (en) Semi-supervised spectral clustering synthetic aperture radar (SAR) image segmentation method based on density reachable measure
CN103425994A (en) Feature selecting method for pattern classification
CN104751175A (en) Multi-label scene classification method of SAR (Synthetic Aperture Radar) image based on incremental support vector machine
CN102902976A (en) Image scene classification method based on target and space relationship characteristics
CN102708367A (en) Image identification method based on target contour features
CN103020979A (en) Image segmentation method based on sparse genetic clustering
CN104050680A (en) Image segmentation method based on iteration self-organization and multi-agent inheritance clustering algorithm
CN114359654A (en) YOLOv4 concrete apparent disease detection method based on position relevance feature fusion
CN104574368A (en) Self-adaptive kernel cluster image partitioning method
CN102289661A (en) Method for matching three-dimensional grid models based on spectrum matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant