CN102122353A - Method for segmenting images by using increment dictionary learning and sparse representation - Google Patents

Method for segmenting images by using increment dictionary learning and sparse representation

Info

Publication number
CN102122353A
Authority
CN
China
Prior art keywords
image
formula
matrix
subband
split
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201110059202
Other languages
Chinese (zh)
Inventor
杨淑媛
焦李成
朱君林
韩月
胡在林
王爽
侯彪
刘芳
缑水平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN 201110059202 priority Critical patent/CN102122353A/en
Publication of CN102122353A publication Critical patent/CN102122353A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for segmenting images using incremental dictionary learning and sparse representation, which mainly addresses the problem in the prior art that the accuracy of sparse-representation segmentation drops quickly as the similarity between classes increases. The method comprises the following steps: obtaining a clustering of the image to be segmented into K classes with the K-means algorithm; taking part of the result of each class as a training sample set and training K dictionaries; sparsely representing the feature vector of each unlabelled point over the dictionaries to obtain K sparse representation errors; classifying by the magnitude of these errors; selecting, on the basis of this classification, a training sample set for incremental learning and retraining it to obtain K new dictionaries; sparsely representing each input signal over the new dictionaries to obtain K sparse representation errors; and performing the final segmentation of the image with these errors. Compared with the prior art, the method remarkably improves segmentation performance and can be used for target detection and background separation.

Description

Method for image segmentation using incremental dictionary learning and sparse representation
Technical field
The invention belongs to the technical field of image processing and relates to an image segmentation method that can be used to segment texture images.
Background technology
Image segmentation is one of the basic and crucial techniques in image processing and computer vision; its purpose is to separate target from background. Image segmentation refers to the technique and process of dividing an image into regions, each with its own characteristics, and extracting the targets of interest, providing the basis for subsequent classification, recognition and retrieval. The characteristics here can be the grey level, colour or texture of pixels, and a target may correspond to a single region or to several regions. Image segmentation is applied very widely and appears in almost every field that involves image processing, for example medical image analysis, military engineering, remote sensing and meteorology, traffic image analysis and so on; image segmentation methods have therefore become a focus of research. There are many segmentation methods, for example the sparse representation classifier and the K-means algorithm.
The sparse representation classifier proposed by Wright et al. uses the training data as a basis and classifies the image by the error of its sparse representation. Its shortcoming is that the accuracy depends strongly on the choice of basis; when good samples are chosen as the basis, the method can achieve a fairly good segmentation.
The K-means algorithm is fast and can achieve a fairly good segmentation, but because its initial points are chosen at random, the segmentation result is unstable.
Summary of the invention
The object of the invention is to overcome the shortcomings of the prior art described above by proposing a method for image segmentation using incremental dictionary learning and sparse representation, which realises a sparse representation of each image, thereby improving the segmentation quality and the stability of the segmentation result.
To achieve this object, a clustering of the image to be segmented into K classes is first obtained with the K-means algorithm; part of the result of each class is then taken as a training sample set and K dictionaries are trained; the dictionaries are used to sparsely represent the feature vector of each unlabelled point, giving K sparse representation errors; classification is performed by the magnitude of these errors; on the basis of this classification a training sample set for incremental learning is selected and retrained to obtain K new dictionaries; each input signal is sparsely represented over the new dictionaries, giving K sparse representation errors; and the final segmentation of the image is performed with these errors. The concrete steps comprise:
(1) Input an image to be segmented of size N × N. Apply a three-level wavelet transform to it, extract the wavelet features of the image, compute the grey-level co-occurrence matrix and extract the grey-level co-occurrence features from it. Each point yields a wavelet feature vector of dimension 10 × 1 and a co-occurrence feature vector of dimension 12 × 1, which are combined into a feature vector of total dimension 22 × 1, forming a feature matrix M of size 22 × N^2.
The grey-level co-occurrence features comprise three statistics on each of the four directions 0°, 45°, 90° and 135°: the angular second moment f_1, the homogeneity f_2 and the contrast f_3; computing these statistics in the 4 directions gives the feature vector v = (f_1, f_2, …, f_12);
(2) using the feature matrix M, cluster the image to be segmented into K classes with the K-means algorithm, and from the feature vectors of each of the K classes choose 50% as the corresponding one of the K training sample sets Y = Y_1, Y_2, …, Y_K, where Y_i consists of 50% of the feature vectors chosen from class i, i = 1, 2, …, K;
(3) with the K-SVD algorithm, solve
min_{D,X} ||Y - DX||_2^2   subject to   ||X_i||_0 ≤ T_0 for every i
to obtain the target dictionaries D = [D_1, D_2, …, D_K], where D_i is the dictionary trained from the training sample set Y_i of class i, i = 1, 2, …, K.
In the formula, X is the sparse coefficient matrix; min|| · || means minimising the value inside the brackets and "subject to" denotes the constraint; X_i is the i-th column of the sparse coefficient matrix; || · ||_0 is the 0-norm of a vector; || · ||_2^2 is the squared 2-norm of a matrix; and T_0 is the sparsity control coefficient;
(4) with the BP algorithm, solve min ||X_i||_1 subject to M_j = D_i X_i, i = 1, 2, …, K, to update the K sparse coefficient matrices X = [X_1, X_2, …, X_K], where X_i is the sparse coefficient matrix corresponding to the dictionary of class i, i = 1, 2, …, K.
In the formula, M_j is the j-th column vector of the feature matrix M, i.e. a feature vector, j = 1, 2, …, N^2, where N^2 is the number of pixels of the image to be segmented;
(5) using identify(M_j) = argmin_i ||M_j - D_i X_i||, i = 1, 2, …, K, compute the sparse representation error of the feature vector M_j under each of the K class dictionaries, and assign the point corresponding to M_j to the class of the dictionary with the smallest sparse representation error; in the formula, identify(M_j) is the class to which M_j belongs;
(6) repeat step (5) until all points of the image have been classified, then segment the image according to the class of each pixel, obtaining an initial segmentation into K classes;
(7) in the K-class initial segmentation result, pick out the correctly classified points according to the existing labels; among these points, when a point is consistent in class with its neighbourhood window, select its feature vector as a training sample of that class; choosing 50% of the total number of samples of each class as training samples gives the K training sample sets Y'_1, Y'_2, …, Y'_K;
(8) repeat steps (3) to (6) to obtain the final segmentation result of the image, and display the pixels of different classes in different colours.
Compared with the prior art, the present invention has the following advantages:
(1) because the dictionaries are trained with the K-SVD algorithm, the segmentation result of the method is more stable;
(2) because incremental learning is adopted, the segmentation result of the method is more accurate.
Description of drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 compares the segmentation results of the present invention with those of the existing K-means and KNSC algorithms on the two-class texture image test1;
Fig. 3 compares the segmentation results of the same methods on the three-class texture image test2;
Fig. 4 compares the segmentation results of the same methods on the four-class texture image test3;
Fig. 5 compares the segmentation results of the same methods on the four-class texture image test4;
Fig. 6 compares the segmentation results of the same methods on the four-class texture image test5.
Specific embodiments
Referring to Fig. 1, the concrete steps of the present invention comprise:
Step 1. Input an image to be segmented of size N × N and extract its feature matrix M.
The texture features that the invention extracts from the image to be segmented comprise grey-level co-occurrence features and wavelet features:
1a) Feature extraction with the grey-level co-occurrence matrix.
Generate the grey-level co-occurrence matrix p(s, θ) of the image to be segmented, where s is the distance between pixels x_i and x_j and θ takes 4 discrete directions: 0°, 45°, 90°, 135°. On each direction take three statistics, the angular second moment, the homogeneity and the contrast, each computed by the following formulas:
Angular second moment: f_1 = Σ_{i=0}^{N^2-1} Σ_{j=0}^{N^2-1} p(i, j)^2
Homogeneity: f_2 = Σ_{i=0}^{N^2-1} Σ_{j=0}^{N^2-1} p(i, j) / [1 + (i - j)^2]
Contrast: f_3 = Σ_{i=0}^{N^2-1} Σ_{j=0}^{N^2-1} |i - j| · p(i, j)
where N^2 is the total number of samples and p(i, j) is the element in row i, column j of the grey-level co-occurrence matrix p(s, θ). Computing the above statistics in each of the 4 directions gives the feature vector v = (f_1, f_2, …, f_12);
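The three co-occurrence statistics can be sketched in code. The following is a minimal illustration, not part of the patent: it accumulates p(s, θ) for the four directions on a grey-level-quantised image and evaluates f_1, f_2 and f_3 in each direction, giving the 12-dimensional vector v. The quantisation to 8 grey levels and the distance s = 1 are assumptions.

```python
import numpy as np

def glcm_features(img, s=1, levels=8):
    """Angular second moment, homogeneity and contrast of the grey-level
    co-occurrence matrix p(i, j), for the directions 0, 45, 90, 135 deg."""
    offsets = [(0, s), (-s, s), (-s, 0), (-s, -s)]   # 0, 45, 90, 135 degrees
    q = img.astype(np.int64) * levels // (int(img.max()) + 1)  # quantise grey levels
    H, W = q.shape
    feats = []
    for dr, dc in offsets:
        p = np.zeros((levels, levels))
        for r in range(H):                    # accumulate co-occurrence counts
            for c in range(W):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < H and 0 <= c2 < W:
                    p[q[r, c], q[r2, c2]] += 1
        p /= p.sum()                          # normalise counts to probabilities
        i, j = np.indices(p.shape)
        feats += [np.sum(p ** 2),                  # f1: angular second moment
                  np.sum(p / (1 + (i - j) ** 2)),  # f2: homogeneity
                  np.sum(np.abs(i - j) * p)]       # f3: contrast
    return np.array(feats)                    # v = (f_1, ..., f_12)
```

For a constant image the matrix concentrates in one cell, so f_1 = 1 and f_3 = 0 in every direction, which is a convenient sanity check.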
1b) Wavelet feature extraction.
In the image to be segmented, centre a characteristic window on the pixel whose features are to be extracted. Filter the image sub-block in the window one-dimensionally along the x direction and the y direction, decomposing each scale into four subbands LL, HL, LH and HH, which represent the low-frequency information of the image and the details in the horizontal, vertical and diagonal directions respectively. Decompose the LL subband of the first level to obtain the four subbands LL, HL, LH and HH of the second level, then decompose the LL subband of the second level to obtain the four subbands of the third level; the three levels yield 10 subbands in total.
Then, by the formula
W = (1/(A × B)) Σ_{a=1}^{A} Σ_{b=1}^{B} |coef(a, b)|
obtain the L_1 norm of each subband, where W denotes the L_1 norm of the subband, A is the number of rows and B the number of columns of the subband coefficients, A × B is the subband size, a and b are the row and column indices within the subband, and coef(a, b) is the coefficient value in row a, column b of the subband. This gives a 10-dimensional feature vector (W_0, W_1, …, W_9).
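The three-level decomposition above can be sketched as follows. This is a minimal illustration under stated assumptions: the patent does not name the wavelet, so a Haar (averaging/differencing) filter pair is used here, and the feature of each subband is the normalised L_1 norm W as in the formula above.

```python
import numpy as np

def haar_step(x):
    """One level of a 2-D Haar-like transform: returns LL, HL, LH, HH."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # lowpass along one axis
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # highpass along one axis
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, HL, LH, HH

def wavelet_features(block):
    """10-dim feature of one window: W = mean |coef| of the 10 subbands
    of a 3-level decomposition (HL/LH/HH of each level plus the final LL)."""
    feats, LL = [], np.asarray(block, dtype=float)
    for _ in range(3):
        LL, HL, LH, HH = haar_step(LL)
        feats += [np.abs(HL).mean(), np.abs(LH).mean(), np.abs(HH).mean()]
    feats.append(np.abs(LL).mean())       # LL of the third level
    return np.array(feats)
```

On a 16 × 16 window this yields exactly the 10 values (W_0, …, W_9); for a constant block all detail subbands vanish and only the final LL term is non-zero.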
Step 2. Cluster the feature matrix M with the K-means algorithm.
2a) From the N^2 columns of the feature matrix M randomly pick K columns as the K initial cluster centres, denoted C = [C_1, C_2, …, C_K], where every column vector C_i is of dimension 22 × 1, i = 1, 2, …, K;
2b) for each feature vector M_i of the feature matrix M, i = 1, 2, …, N^2, use the formula O_k = ||M_i - C_k||_2, k = 1, 2, …, K, to compute the Euclidean distances of M_i to the K initial cluster centres C = [C_1, C_2, …, C_K], obtaining K Euclidean distances (O_1, O_2, …, O_K); take the minimum distance O_j, j ∈ {1, 2, …, K}, and assign the pixel corresponding to M_i to the class of the centre C_j;
2c) repeat step 2b) for every column of the feature matrix M with respect to the K initial cluster centres C = [C_1, C_2, …, C_K], obtaining K clusters;
2d) in each of the K resulting clusters, add all the feature vectors of the class and divide by the number of feature vectors the class contains, obtaining the new cluster centres C' = [C'_1, C'_2, …, C'_K]; repeat steps 2b) to 2c) to update the clustering;
2e) repeat steps 2b) to 2d) until the K-means algorithm reaches its set number of iterations, finally obtaining K clusters, which are taken as the initial clustering of the image. The number of iterations is related to the similarity of the different texture classes; for example, for the image to be segmented in Fig. 3(a), choosing 1000 iterations already gives a fairly good result.
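Steps 2a) to 2e) amount to plain K-means on the columns of M and can be sketched as below. This is an illustrative sketch, not the patent's implementation; the re-seeding of an empty cluster is an added safeguard not mentioned in the text.

```python
import numpy as np

def kmeans(M, K, iters=20, seed=0):
    """Plain K-means on the columns of the feature matrix M (steps 2a-2e):
    random initial centres, nearest-centre assignment by Euclidean
    distance, centre update by the class mean."""
    rng = np.random.default_rng(seed)
    C = M[:, rng.choice(M.shape[1], K, replace=False)].astype(float)  # 2a)
    for _ in range(iters):
        # 2b) Euclidean distance of every column to every centre: K x N^2
        d = np.linalg.norm(M[:, None, :] - C[:, :, None], axis=0)
        labels = d.argmin(axis=0)
        for k in range(K):                    # 2d) recompute the centres
            if np.any(labels == k):
                C[:, k] = M[:, labels == k].mean(axis=1)
            else:                             # re-seed an empty cluster
                C[:, k] = M[:, rng.integers(M.shape[1])]
    return labels, C
```

On two well-separated groups of columns the labels stabilise within a few iterations, which illustrates the convergence behaviour (and, with a different random seed, the instability) that the background section attributes to K-means.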
Step 3. Extract the K training sample sets Y = Y_1, Y_2, …, Y_K from the initial clustering result.
3a) For any point assigned to class k, k = 1, 2, …, K, check whether the points in its neighbourhood window belong to the same class as the point itself; if so, put its 22 × 1 feature vector into the training sample set of class k;
3b) once the training samples chosen for class k reach the set proportion of the number of samples, move on to the other classes and select feature vectors as training samples with step 3a). This proportion is related to the similarity of the different texture classes; for example, for the image to be segmented in Fig. 3(a), a proportion of 50% already gives a fairly good result.
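The neighbourhood-consistency selection of steps 3a) and 3b) can be sketched on the label map itself. This is a hypothetical illustration: the patent does not fix the window size or the scan order, so a 3 × 3 window and a row-major scan are assumed here.

```python
import numpy as np

def select_confident(labels, ratio=0.5, win=3):
    """For each class, keep points whose whole win x win neighbourhood
    carries the same label, up to `ratio` of the class size (steps 3a-3b).
    `labels` is the N x N initial clustering map; returns a boolean mask
    marking the selected training pixels."""
    N = labels.shape[0]
    r = win // 2
    mask = np.zeros_like(labels, dtype=bool)
    counts = {k: 0 for k in np.unique(labels)}
    quota = {k: int(ratio * np.sum(labels == k)) for k in counts}
    for i in range(r, N - r):
        for j in range(r, N - r):
            k = labels[i, j]
            if counts[k] < quota[k] and np.all(labels[i-r:i+r+1, j-r:j+r+1] == k):
                mask[i, j] = True             # window agrees: confident sample
                counts[k] += 1
    return mask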
Step 4. Train the K class training sample sets Y = Y_1, Y_2, …, Y_K with the K-SVD algorithm to obtain the K target dictionaries D_1, D_2, …, D_K.
4a) The overall optimisation problem of the K-SVD algorithm is
min_{D,X} ||Y - DX||_2^2   subject to   ||X_i||_0 ≤ T_0 for every i
where D = [D_1, D_2, …, D_K] is initialised as a random dictionary, X is the sparse coefficient matrix, min|| · || means minimising the value inside the brackets, "subject to" denotes the constraint, X_i is the i-th column of the sparse coefficient matrix, || · ||_0 is the 0-norm of a vector, || · ||_2^2 is the squared 2-norm of a matrix, and T_0 is the sparsity control coefficient;
4b) rewrite the term ||Y - DX||_2^2 of the overall optimisation problem as
||Y - DX||_2^2 = ||Y - Σ_{j=1}^{L} d_j x_T^j||_2^2 = ||(Y - Σ_{j≠z} d_j x_T^j) - d_z x_T^z||_2^2 = ||E_z - d_z x_T^z||_2^2
where d_j is the j-th column atom of D, x_T^j is the j-th row of X, L is the total number of columns of D, d_z is the z-th column atom of D, x_T^z is the z-th row of X, and E_z is the error matrix produced by sparse decomposition without the z-th atom d_z of D;
4c) multiply the rewritten expression ||E_z - d_z x_T^z||_2^2 by the matrix Ω_z, obtaining the target decomposition formula
||E_z Ω_z - d_z x_T^z Ω_z||_2^2 = ||E_z^R - d_z x_R^z||_2^2
where the restricted error matrix E_z^R = E_z Ω_z is a restriction of the error matrix E_z; Ω_z is of size P × |ω_z|, where P is the number of columns of the training sample set Y, ω_z = {j | 1 ≤ j ≤ P, x_T^z(j) ≠ 0} is the set of indices of the samples that use the atom d_z, and |ω_z| is its size; Ω_z has a 1 at every position (ω_z(j), j), 1 ≤ j ≤ |ω_z|, where ω_z(j) is the j-th element of ω_z, and 0 everywhere else;
4d) apply an SVD decomposition to the restricted error matrix E_z^R in the target decomposition formula, obtaining E_z^R = U Δ V^T, where U is the left singular matrix, V^T is the right singular matrix, and Δ is the singular value matrix;
4e) update the z-th column atom d_z of the target dictionary D with the first column of the left singular matrix U;
4f) repeat steps 4c) to 4e) to update all the atoms of D, obtaining the K new dictionaries D'_1, D'_2, …, D'_K.
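One atom update of steps 4b) to 4e) can be sketched as follows. This is an illustrative sketch of the standard K-SVD update, not the patent's code: restricting E_z to the columns in ω_z replaces the explicit multiplication by Ω_z, and the sketch also refreshes the used coefficients with σ_1·v_1 as in the usual K-SVD formulation, which the patent text does not spell out.

```python
import numpy as np

def ksvd_atom_update(Y, D, X, z):
    """One K-SVD atom update (steps 4b-4e): form the residual
    E_z = Y - sum_{j != z} d_j x_T^j, restrict it to the samples that
    actually use atom z (the set omega_z), take the rank-1 SVD of the
    restricted error E_z^R, and replace d_z by the first left singular
    vector."""
    omega = np.nonzero(X[z, :])[0]        # omega_z: samples using atom z
    if omega.size == 0:
        return D, X                       # atom unused, nothing to update
    Xz = X.copy()
    Xz[z, :] = 0
    Ez = Y - D @ Xz                       # error without atom z (step 4b)
    EzR = Ez[:, omega]                    # restricted error E_z^R (step 4c)
    U, s, Vt = np.linalg.svd(EzR, full_matrices=False)   # step 4d
    D, X = D.copy(), X.copy()
    D[:, z] = U[:, 0]                     # step 4e: new atom, unit norm
    X[z, omega] = s[0] * Vt[0, :]         # refreshed coefficients (assumed)
    return D, X
```

Because the rank-1 SVD is the best rank-1 approximation of E_z^R, one such update can never increase the overall representation error ||Y - DX||, which is the property that makes the sweep over all atoms in step 4f) converge.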
Step 5. Sparsely decompose the feature matrix M over the K dictionaries D'_1, D'_2, …, D'_K with the BP algorithm to obtain the sparse coefficient matrices corresponding to the K class dictionaries and the sparse representation error of each class dictionary, and use these errors to cluster the image.
5a) For a column vector M_i of the feature matrix M, use the BP algorithm to solve
min ||X_j||_1   subject to   M_i = D'_j X_j
obtaining the K sparse coefficient matrices X_j, j = 1, 2, …, K, corresponding to the K class dictionaries, where M_i is the i-th feature vector of the feature matrix, D'_j is the dictionary of class j, and X_j is the corresponding sparse coefficient matrix;
5b) use the formula R_j = ||M_i - D'_j X_j||, j = 1, 2, …, K, to compute the error of each class dictionary, take the minimum error value R_min, min ∈ {1, 2, …, K}, and re-cluster the point corresponding to the feature M_i into class min, i.e. set the label of the point to min;
5c) repeat steps 5a) and 5b) for every column vector of the feature matrix M until all points have been re-clustered.
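The residual-based classification of steps 5a) and 5b) can be sketched as below. This is an illustration under a stated substitution: the patent solves the sparse coding with basis pursuit (an ℓ1 program), while the sketch uses a greedy orthogonal-matching-pursuit coder as a simpler stand-in; the classification rule itself, smallest reconstruction error across the K dictionaries, is the patent's.

```python
import numpy as np

def omp(D, m, T0):
    """Greedy orthogonal matching pursuit, a stand-in for the BP solver:
    pick at most T0 atoms of D to approximate the signal m."""
    residual, idx = m.astype(float).copy(), []
    for _ in range(T0):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))  # best new atom
        x, *_ = np.linalg.lstsq(D[:, idx], m, rcond=None)   # refit coefficients
        residual = m - D[:, idx] @ x
    coef = np.zeros(D.shape[1])
    coef[idx] = x
    return coef

def classify(m, dicts, T0=3):
    """Step 5: sparse-code m over every class dictionary and return the
    label identify(m) = argmin_j ||m - D_j x_j|| of the dictionary with
    the smallest representation error."""
    errs = [np.linalg.norm(m - D @ omp(D, m, T0)) for D in dicts]
    return int(np.argmin(errs))
```

A signal that lies in the span of one class dictionary but not the other is reconstructed with zero error by the first and a large error by the second, so the argmin picks the right class.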
Step 6. According to the cluster labels obtained in step 5, perform an initial segmentation of the image to be segmented: pixels with the same cluster label are grouped into the same class, giving the initial segmentation result.
Step 7. Extract the K sample sets Y'_1, Y'_2, …, Y'_K for incremental learning according to the initial segmentation result.
7a) Using the given correct labels, pick out the correctly segmented points;
7b) for each correctly segmented point assigned to class k, k = 1, 2, …, K, if the points in its neighbourhood window all belong to the same class as the point itself, put its feature vector into the training sample set of that class;
7c) examine the points of the same class in turn until the number of training samples reaches the set proportion, at which point the extraction of training samples for that class is finished; then move on to the other classes and select feature vectors with step 7b) to compose their training sample sets. This proportion is related to the similarity of the different texture classes; for example, for the image to be segmented in Fig. 3(a), a proportion of 50% already gives a fairly good result.
Step 8. Repeat steps 4 to 6 to obtain the final segmentation result.
Using the K class training sample sets Y'_1, Y'_2, …, Y'_K, repeat steps 4 to 6 to obtain the final segmentation result, and display the pixels of different classes in different colours.
The effect of the present invention can be further illustrated by the following experiments:
1) experiment condition
The simulation environment is: MATLAB 7.1.0.246 (R14) Service Pack 3, 4 CPU 2.4 GHz, Windows XP Professional. The experimental images are: the two-class texture image test1, the three-class texture image test2, and three four-class texture images test3, test4 and test5; all five texture images are of size 256 × 256. The test feature parameters are: grey-level co-occurrence features and wavelet features, feature vector dimension 22 × 1, co-occurrence feature window size 17 × 17, wavelet feature window size 16 × 16.
2) experiment content
2.1) The present invention and the existing K-means and KNSC methods are tested on the texture image test1; the results of the three segmentation methods are shown in Fig. 2, where Fig. 2(a) is the original image of test1, Fig. 2(b) is the segmentation template of Fig. 2(a), Fig. 2(c) is the segmentation result of the existing K-means algorithm on Fig. 2(a), Fig. 2(d) is the segmentation result of the existing KNSC algorithm, and Fig. 2(e) is the segmentation result of the algorithm of the present invention;
2.2) the same three methods are tested on the texture image test2, with the results shown in Fig. 3: Fig. 3(a) is the original image of test2, Fig. 3(b) the segmentation template, Fig. 3(c) the result of K-means, Fig. 3(d) the result of KNSC, and Fig. 3(e) the result of the present invention;
2.3) the same three methods are tested on the texture image test3, with the results shown in Fig. 4: Fig. 4(a) is the original image of test3, Fig. 4(b) the segmentation template, Fig. 4(c) the result of K-means, Fig. 4(d) the result of KNSC, and Fig. 4(e) the result of the present invention;
2.4) the same three methods are tested on the texture image test4, with the results shown in Fig. 5: Fig. 5(a) is the original image of test4, Fig. 5(b) the segmentation template, Fig. 5(c) the result of K-means, Fig. 5(d) the result of KNSC, and Fig. 5(e) the result of the present invention;
2.5) the same three methods are tested on the texture image test5, with the results shown in Fig. 6: Fig. 6(a) is the original image of test5, Fig. 6(b) the segmentation template, Fig. 6(c) the result of K-means, Fig. 6(d) the result of KNSC, and Fig. 6(e) the result of the present invention.
3) interpretation
As can be seen from Figs. 2 to 6, the segmentation of the K-means and KNSC algorithms is not very good at the edges, whereas the result of the present invention is comparatively good at the edges. In addition, compared with the present invention, the two former methods do not preserve region consistency very well.
Table 1 gives the segmentation results of the different algorithms on test1, test2, test3, test4 and test5; the data in Table 1 are the percentages of correctly classified pixels over the total number of image pixels, i.e. number of correctly classified pixels / total number of image pixels × 100%.
Table 1. Comparison of misclassification rates on the texture images
As can be seen from Table 1, compared with the segmentation results of the other two algorithms, the accuracy of the segmentation result of the present invention is obviously improved.

Claims (6)

1. A method for image segmentation using incremental dictionary learning and sparse representation, comprising the following steps:
(1) Input an image to be segmented of size N × N. Apply a three-level wavelet transform to it, extract the wavelet features of the image, compute the grey-level co-occurrence matrix and extract the grey-level co-occurrence features from it. Each point yields a wavelet feature vector of dimension 10 × 1 and a co-occurrence feature vector of dimension 12 × 1, which are combined into a feature vector of total dimension 22 × 1, forming a feature matrix M of size 22 × N^2.
The grey-level co-occurrence features comprise three statistics on each of the four directions 0°, 45°, 90° and 135°: the angular second moment f_1, the homogeneity f_2 and the contrast f_3; computing these statistics in the 4 directions gives the feature vector v = (f_1, f_2, …, f_12);
(2) using the feature matrix M, cluster the image to be segmented into K classes with the K-means algorithm, and from the feature vectors of each of the K classes choose 50% as the corresponding one of the K training sample sets Y = Y_1, Y_2, …, Y_K, where Y_i consists of 50% of the feature vectors chosen from class i, i = 1, 2, …, K;
(3) with the K-SVD algorithm, solve
min_{D,X} ||Y - DX||_2^2   subject to   ||X_i||_0 ≤ T_0 for every i
to obtain the target dictionaries D = [D_1, D_2, …, D_K], where D_i is the dictionary trained from the training sample set Y_i of class i, i = 1, 2, …, K.
In the formula, X is the sparse coefficient matrix; min|| · || means minimising the value inside the brackets and "subject to" denotes the constraint; X_i is the i-th column of the sparse coefficient matrix; || · ||_0 is the 0-norm of a vector; || · ||_2^2 is the squared 2-norm of a matrix; and T_0 is the sparsity control coefficient;
(4) with the BP algorithm, solve min ||X_i||_1 subject to M_j = D_i X_i, i = 1, 2, …, K, to update the K sparse coefficient matrices X = [X_1, X_2, …, X_K], where X_i is the sparse coefficient matrix corresponding to the dictionary of class i, i = 1, 2, …, K.
In the formula, M_j is the j-th column vector of the feature matrix M, i.e. a feature vector, j = 1, 2, …, N^2, where N^2 is the number of pixels of the image to be segmented;
(5) using identify(M_j) = argmin_i ||M_j - D_i X_i||, i = 1, 2, …, K, compute the sparse representation error of the feature vector M_j under each of the K class dictionaries, and assign the point corresponding to M_j to the class of the dictionary with the smallest sparse representation error; in the formula, identify(M_j) is the class to which M_j belongs;
(6) repeat step (5) until all points of the image have been classified, then segment the image according to the class of each pixel, obtaining an initial segmentation into K classes;
(7) in the K-class initial segmentation result, pick out the correctly classified points according to the existing labels; among these points, when a point is consistent in class with its neighbourhood window, select its feature vector as a training sample of that class; choosing 50% of the total number of samples of each class as training samples gives the K training sample sets Y'_1, Y'_2, …, Y'_K;
(8) repeat steps (3) to (6) to obtain the final segmentation result of the image, and display the pixels of different classes in different colours.
2. The image segmentation method according to claim 1, wherein applying a three-level wavelet transform to the image to be segmented and extracting its wavelet features in step (1) is carried out according to the following steps:
2a) in the image to be segmented, centre a characteristic window on the pixel whose features are to be extracted; filter the image sub-block in the window one-dimensionally along the x direction and the y direction, decomposing each scale into four subbands LL, HL, LH and HH, which represent the low-frequency information of the image and the details in the horizontal, vertical and diagonal directions respectively; decompose the LL subband of the first level to obtain the four subbands LL, HL, LH and HH of the second level, then decompose the LL subband of the second level to obtain the four subbands of the third level; the three levels yield 10 subbands in total;
2b) by the formula
W = (1/(A × B)) Σ_{a=1}^{A} Σ_{b=1}^{B} |coef(a, b)|
obtain the L_1 norm of each subband, where W denotes the L_1 norm of the subband, A is the number of rows and B the number of columns of the subband coefficients, A × B is the subband size, a and b are the row and column indices within the subband, and coef(a, b) is the coefficient value in row a, column b of the subband, thereby obtaining a 10-dimensional feature vector (W_0, W_1, …, W_9).
3. The image segmentation method according to claim 1, wherein the angular second moment f_1 of step (1) is computed as follows:
f_1 = Σ_{i=0}^{N^2-1} Σ_{j=0}^{N^2-1} p(i, j)^2
where N^2 is the number of pixels of the image to be segmented, p(i, j) is the element in row i, column j of the grey-level co-occurrence matrix p(s, θ) of the image to be segmented, s is the distance between pixels x_i and x_j, and θ takes 4 discrete directions: 0°, 45°, 90°, 135°.
4. The image segmentation method according to claim 1, wherein the homogeneity f_2 of step (1) is computed as follows:
f_2 = Σ_{i=0}^{N^2-1} Σ_{j=0}^{N^2-1} p(i, j) / [1 + (i - j)^2].
5. The image segmentation method according to claim 1, wherein the contrast f_3 of step (1) is computed as follows:
f_3 = Σ_{i=0}^{N^2-1} Σ_{j=0}^{N^2-1} |i - j| · p(i, j).
6. The image segmentation method according to claim 1, wherein the KSVD algorithm used in step (3) solves the formula:

min_{D,X} ||Y − DX||₂²  s.t.  ∀i, ||x_i||₀ ≤ T₀

and proceeds according to the following steps:

6a) Transform the overall optimization formula ||Y − DX||₂² to obtain:

||Y − DX||₂² = ||Y − Σ_{j=1}^{L} d_j·x_jᵀ||₂² = ||(Y − Σ_{j≠z} d_j·x_jᵀ) − d_z·x_zᵀ||₂² = ||E_z − d_z·x_zᵀ||₂²

in the formula, Y = [Y₁, Y₂, …, Y_K] is the training sample set, D = [D₁, D₂, …, D_K] is the randomly initialized dictionary, X is the sparse coefficient matrix, ||·||₂² is the squared 2-norm of a matrix, T₀ is the sparsity control coefficient, d_j is the j-th column atom of D, x_jᵀ is the j-th row of X, L is the total number of columns of D, d_z is the z-th column atom of D, x_zᵀ is the z-th row of X, and E_z is the error matrix produced by the sparse decomposition without using the z-th column atom d_z of D;

6b) Multiply the transformed formula ||E_z − d_z·x_zᵀ||₂² by the matrix Ω_z to obtain the target decomposition formula:

||E_z·Ω_z − d_z·x_zᵀ·Ω_z||₂² = ||E_zᴿ − d_z·x_zᴿ||₂²

in the formula, the transformed error matrix E_zᴿ = E_z·Ω_z is the transformation of the error matrix E_z; ω_z = {i | 1 ≤ i ≤ P, x_zᵀ(i) ≠ 0}; the size of Ω_z is P × |ω_z|, where P is the number of columns of the training sample set Y and |ω_z| is the cardinality of ω_z; Ω_z is 1 at position (ω_z(j), j) and 0 everywhere else, where 1 ≤ j ≤ |ω_z| and ω_z(j) is the j-th element of ω_z;

6c) Perform an SVD decomposition of the transformed error matrix E_zᴿ in the target decomposition formula:

E_zᴿ = U·Δ·Vᵀ

where U is the left singular matrix, Vᵀ is the right singular matrix, and Δ is the singular value matrix;

6d) Update the z-th column atom d_z of the target training dictionary D with the first column of the left singular matrix U;

6e) Repeat steps 6b) to 6d) to update all atoms in D, obtaining K new dictionaries D′₁, D′₂, …, D′_K.
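Steps 6a)–6e) describe the standard K-SVD atom update, which can be sketched as follows. Claim 6d) only recites replacing d_z with the first column of U; the companion update of the coefficient row x_zᴿ with the leading singular value times the first row of Vᵀ is the usual K-SVD step and is included here as an assumption:

```python
import numpy as np

def ksvd_atom_update(Y, D, X, z):
    """One K-SVD dictionary-atom update: recompute atom d_z (column z of D)
    from the restricted error matrix E_z^R = E_z * Omega_z."""
    omega = np.nonzero(X[z, :])[0]           # omega_z: samples that use atom z
    if omega.size == 0:
        return D, X                          # atom unused; leave it unchanged
    # E_z: residual when atom z is excluded from the reconstruction (step 6a)
    E_z = Y - D @ X + np.outer(D[:, z], X[z, :])
    E_zR = E_z[:, omega]                     # restriction to used columns (step 6b)
    U, s, Vt = np.linalg.svd(E_zR, full_matrices=False)   # step 6c
    D[:, z] = U[:, 0]                        # step 6d: first left singular vector
    X[z, omega] = s[0] * Vt[0, :]            # coefficient-row update (assumed)
    return D, X
```

Because the SVD gives the best rank-1 approximation of E_zᴿ, this update never increases the representation error on the columns that use atom z, which is why sweeping it over all atoms (step 6e) monotonically improves the dictionary fit.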
CN 201110059202 2011-03-11 2011-03-11 Method for segmenting images by using increment dictionary learning and sparse representation Pending CN102122353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110059202 CN102122353A (en) 2011-03-11 2011-03-11 Method for segmenting images by using increment dictionary learning and sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110059202 CN102122353A (en) 2011-03-11 2011-03-11 Method for segmenting images by using increment dictionary learning and sparse representation

Publications (1)

Publication Number Publication Date
CN102122353A true CN102122353A (en) 2011-07-13

Family

ID=44250907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110059202 Pending CN102122353A (en) 2011-03-11 2011-03-11 Method for segmenting images by using increment dictionary learning and sparse representation

Country Status (1)

Country Link
CN (1) CN102122353A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763514A (en) * 2010-01-15 2010-06-30 西安电子科技大学 Image segmentation method based on characteristic importance sorting spectral clustering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shuyuan Yang et al., "Cooperative Synthetic Aperture Radar Image Segmentation Using Learning Sparse Representation Based Clustering Scheme", 2011 International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM), 2011-01-12, pp. 1-6, relevant to claims 1-6 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436645A (en) * 2011-11-04 2012-05-02 西安电子科技大学 Spectral clustering image segmentation method based on MOD dictionary learning sampling
CN102521382A (en) * 2011-12-21 2012-06-27 中国科学院自动化研究所 Method for compressing video dictionary
CN102592267A (en) * 2012-01-06 2012-07-18 复旦大学 Medical ultrasonic image filtering method based on sparse representation
CN102592267B (en) * 2012-01-06 2014-09-03 复旦大学 Medical ultrasonic image filtering method based on sparse representation
CN103955915A (en) * 2014-03-17 2014-07-30 西安电子科技大学 SAR image segmentation based on sparse expression and multiple dictionaries
CN104376565B (en) * 2014-11-26 2017-03-29 西安电子科技大学 Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation
CN104376565A (en) * 2014-11-26 2015-02-25 西安电子科技大学 Non-reference image quality evaluation method based on discrete cosine transform and sparse representation
CN104572930A (en) * 2014-12-29 2015-04-29 小米科技有限责任公司 Data classifying method and device
CN104572930B (en) * 2014-12-29 2017-10-17 小米科技有限责任公司 Data classification method and device
CN106023221A (en) * 2016-05-27 2016-10-12 哈尔滨工业大学 Remote sensing image segmentation method based on nonnegative low-rank sparse correlated drawing
CN107292272A (en) * 2017-06-27 2017-10-24 广东工业大学 A kind of method and system of the recognition of face in the video of real-time Transmission
CN107506761A (en) * 2017-08-30 2017-12-22 山东大学 Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN107506761B (en) * 2017-08-30 2020-01-17 山东大学 Brain image segmentation method and system based on significance learning convolutional neural network
CN108537227A (en) * 2018-03-21 2018-09-14 华中科技大学 A kind of offline false distinguishing method of commodity based on width study and wide-angle micro-image
CN110443194A (en) * 2019-08-05 2019-11-12 哈尔滨工业大学 Time varying signal component extracting method based on SPI sparse constraint
CN110443194B (en) * 2019-08-05 2021-09-07 哈尔滨工业大学 Time-varying signal component extraction method based on SPI sparse constraint
CN114255540A (en) * 2022-01-25 2022-03-29 中国农业银行股份有限公司 Method, device, equipment and storage medium for identifying stained paper money

Similar Documents

Publication Publication Date Title
CN102096819B (en) Method for segmenting images by utilizing sparse representation and dictionary learning
CN102122353A (en) Method for segmenting images by using increment dictionary learning and sparse representation
CN104408478B (en) A kind of hyperspectral image classification method based on the sparse differentiation feature learning of layering
CN105389550B (en) It is a kind of based on sparse guide and the remote sensing target detection method that significantly drives
CN108876796A (en) A kind of lane segmentation system and method based on full convolutional neural networks and condition random field
CN110796667B (en) Color image segmentation method based on improved wavelet clustering
CN112132014B (en) Target re-identification method and system based on non-supervised pyramid similarity learning
CN109583483A (en) A kind of object detection method and system based on convolutional neural networks
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN105427309A (en) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN111753828A (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN104021396A (en) Hyperspectral remote sensing data classification method based on ensemble learning
CN108537102A (en) High Resolution SAR image classification method based on sparse features and condition random field
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN101901343A (en) Remote sensing image road extracting method based on stereo constraint
CN104036289A (en) Hyperspectral image classification method based on spatial and spectral features and sparse representation
CN106600595A (en) Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm
CN104598920B (en) Scene classification method based on Gist feature and extreme learning machine
CN106257498A (en) Zinc flotation work condition state division methods based on isomery textural characteristics
CN108154158B (en) Building image segmentation method for augmented reality application
CN104268552B (en) One kind is based on the polygonal fine classification sorting technique of part
CN101986295B (en) Image clustering method based on manifold sparse coding
CN104699781B (en) SAR image search method based on double-deck anchor figure hash
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN105654122A (en) Spatial pyramid object identification method based on kernel function matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110713