A woollen and cashmere recognition algorithm based on a gray level co-occurrence matrix model
Technical field
The invention belongs to the technical field of woollen and cashmere identification, and in particular relates to a woollen and cashmere recognition algorithm based on a gray level co-occurrence matrix model.
Background technology
Cashmere fibre is slender, uniform and soft; textiles made from it are smooth and warm, making it a first choice for the face fabric of high-grade garments. Because its yield is low and its price is high, manufacturers conventionally blend cashmere with wool in different proportions. Wool and cashmere are both natural protein fibres, and their structure and morphology are very similar, so it is often difficult to judge the fibre type accurately.
The fibre identification method currently in common use is microscopy. A technician observes features such as the scale shape and texture details of wool and cashmere under a microscope and classifies the composition of the blend qualitatively according to personal experience. This approach is time-consuming and laborious, highly subjective, and yields poor measurement consistency.
A fully automatic, intelligent wool/cashmere recognition method has been provided in which the wool and cashmere fibres are first digitized with a microscope and CCD image acquisition, the fibre images are then convolved with wavelets at different scales to extract features, and an artificial neural network is used to build a classifier model, realizing intelligent classification and identification of wool and cashmere.
A gray level co-occurrence matrix is defined by the joint distribution probabilities of pixel pairs. It is a symmetric matrix that reflects not only the combined information of image gray levels across adjacent directions, adjacent intervals and variation amplitudes, but also the positional distribution of pixels with the same gray level, and it is the basis for computing texture features. The present invention computes the co-occurrence matrix of a wool/cashmere gray-level image, then derives a set of feature values from that matrix to represent the texture of the different fibres. These features are input to a three-layer artificial neural network to classify the wool and cashmere fibres.
Summary of the invention
The present invention provides a woollen and cashmere recognition algorithm based on a gray level co-occurrence matrix model, which achieves the beneficial effect of intelligently and accurately judging the fibre type of wool and cashmere.
The technical problem solved by the invention is realized by the following technical solution: the present invention provides a woollen and cashmere recognition algorithm based on a gray level co-occurrence matrix model, comprising an online recognition flow and a model learning flow:
The online recognition flow performs qualitative analysis on fibre images collected in real time, and comprises the following steps:
(1) Image acquisition: a 3-megapixel industrial-grade CCD is used together with an Olympus CX41 biological microscope to capture images of the wool and cashmere fibres;
(2) Preprocessing, which includes two aspects:
a. A Gaussian filter is applied to smooth the image and remove noise. The Gaussian filter is a low-pass filter; its action can be expressed formally as the convolution of the input image I(x, y) with a Gaussian kernel G(x, y; σ):
S(x, y) = I(x, y) * G(x, y; σ), where G(x, y; σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²));
b. Gray-level adjustment is used to enhance the image: if xij is the element in row i, column j of image X, and maxX, minX are the maximum and minimum values in X respectively, each pixel is linearly stretched over the full gray range: x′ij = (xij − minX)/(maxX − minX);
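The two preprocessing operations above can be sketched as follows. This is a minimal Python/NumPy illustration, not the patent's implementation: the kernel size and σ are arbitrary choices (the patent does not specify them), and the function names are hypothetical.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel G(x, y; sigma)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    """Low-pass filtering: convolve the image with the Gaussian kernel."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    return out

def stretch_gray_levels(img):
    """Linearly map pixel values so min -> 0 and max -> 255 (enhancement)."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.zeros(img.shape, dtype=float)
    return (img - lo) / (hi - lo) * 255.0
```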
(3) Image target extraction: edge detection based on the Canny operator and contour extraction are adopted. Canny edge detection is a multi-stage edge detection algorithm whose basic steps are: a. computing the gradients in x and y; b. non-maximum suppression; c. edge tracking. Here the Canny operator provided by OpenCV is used directly;
(4) Feature extraction: the gray level co-occurrence matrix is the joint probability distribution of pairs of gray-scale pixels a distance D apart in the image; this conditional-probability description reflects the texture of the image, i.e. the gray-level correlation of neighbouring pixels;
First the gray level co-occurrence matrix is computed. Let f(x, y) be a digital image of size M × N with Ng gray levels; the gray level co-occurrence matrix satisfying a given spatial relation is:
P(i, j) = #{(x1, y1), (x2, y2) ∈ M × N | f(x1, y1) = i, f(x2, y2) = j}
Here #(S) denotes the number of elements in the set S, so P is clearly an Ng × Ng matrix. If the distance between (x1, y1) and (x2, y2) is d and the angle of their connecting line with the horizontal axis is θ, gray level co-occurrence matrices P(i, j; d, θ) are obtained for various distances and angles; the value of element (i, j) is the number of occurrences of a pixel pair at distance d in the direction of angle θ in which one pixel has gray level i and the other has gray level j. Here the angle takes the values (0°, 30°, 60°, 90°, 120°, 150°) and the distance D takes the values (2, 4, 8), giving 18 parameter groups in total. Each input fibre image is uniformly rescaled to 48 × 48, so each input fibre image yields 18 corresponding gray level co-occurrence matrix images, with a total dimensionality of 48 × 48 × 18 = 41472;
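The co-occurrence matrix construction above can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code: for brevity the gray levels are quantized to 16 here (rather than 48), the (d, θ) offset is rounded to the nearest integer pixel step, and pairs are counted symmetrically, reflecting the statement that the matrix is symmetric.

```python
import numpy as np

def glcm(img, d, theta_deg, levels=16):
    """Normalized gray level co-occurrence matrix P(i, j; d, theta)."""
    # Quantize 8-bit gray values down to `levels` bins.
    q = np.clip((img.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    # Integer pixel offset for distance d at angle theta.
    dy = int(round(d * np.sin(np.deg2rad(theta_deg))))
    dx = int(round(d * np.cos(np.deg2rad(theta_deg))))
    P = np.zeros((levels, levels), dtype=float)
    H, W = q.shape
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W:
                P[q[y, x], q[y2, x2]] += 1
                P[q[y2, x2], q[y, x]] += 1  # symmetric counting
    return P / P.sum() if P.sum() > 0 else P

# The 18 parameter groups named above: 6 angles x 3 distances.
ANGLES = (0, 30, 60, 90, 120, 150)
DISTANCES = (2, 4, 8)
```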
On this basis the texture feature quantities are computed: four statistics, contrast, energy, entropy and correlation, are adopted to represent the texture features (an unsupervised dimensionality-reduction method such as PCA or ICA could also be adopted). The specific formulas are as follows:
Contrast: also called inertia, Con = ΣiΣj (i − j)² p(i, j)
Energy: the sum of the squares of the elements of the gray level co-occurrence matrix, Asm = ΣiΣj p(i, j)²
Entropy: a measure of the amount of information in the image, reflecting its randomness, Ent = −ΣiΣj p(i, j) log p(i, j)
Correlation: also referred to as homogeneity, Cor = ΣiΣj (i − μi)(j − μj) p(i, j) / (σi σj), where μi, μj and σi, σj are the means and standard deviations of the marginal distributions of p;
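The four statistics can be computed from a normalized co-occurrence matrix as sketched below. The patent omits the contrast and correlation formulas, so the usual Haralick forms are assumed here; the function name is hypothetical.

```python
import numpy as np

def texture_features(P):
    """Contrast, energy (ASM), entropy and correlation of a normalized GLCM."""
    levels = P.shape[0]
    i, j = np.indices((levels, levels))
    contrast = ((i - j) ** 2 * P).sum()           # Con = sum (i-j)^2 p(i,j)
    energy = (P ** 2).sum()                       # Asm = sum p(i,j)^2
    nz = P[P > 0]                                 # skip zero entries in the log
    entropy = -(nz * np.log(nz)).sum()            # Ent = -sum p log p
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    if sd_i == 0 or sd_j == 0:
        corr = 0.0
    else:
        corr = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j)
    return np.array([contrast, energy, entropy, corr])
```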
(5) The feature vector obtained in the previous step (18 × 4 = 72 dimensions) is used directly as the input feature vector of the artificial neural network;
The model learning flow serves to obtain a classifier; a classifier model based on an artificial neural network is adopted, comprising the following steps:
(1) The premise of model learning is the accumulation of a large database of wool and cashmere images;
(2) On this basis, manual annotation is adopted so that the machine learns the type and the location of the target fibres; this is a form of supervised learning;
(3) The fibre images in the database are preprocessed and features are extracted; this step is identical to steps (2)–(4) of the online recognition flow;
(4) The learning process adopts a three-layer artificial neural network with 72 input-layer nodes, 60 hidden-layer nodes and 2 output-layer nodes; the activation function is a radial basis function (RBF).
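A forward pass of the described 72-60-2 network with a Gaussian RBF hidden layer might look as follows. The centres, width and output weights here are random placeholders standing in for learned parameters, and the class labelling is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 72 input features, 60 hidden units, 2 outputs.
N_IN, N_HID, N_OUT = 72, 60, 2

# Hypothetical parameters: hidden-unit centres, a shared width, output weights.
centres = rng.normal(size=(N_HID, N_IN))
width = 1.0
W_out = rng.normal(size=(N_HID, N_OUT))

def rbf_forward(x):
    """Three-layer RBF network: each hidden unit applies a Gaussian radial
    basis function to the distance between the input and its centre."""
    d2 = ((centres - x) ** 2).sum(axis=1)      # squared distances to centres
    h = np.exp(-d2 / (2.0 * width ** 2))       # RBF hidden activations
    return h @ W_out                           # linear output layer

def classify(x):
    """Class 0 = wool, class 1 = cashmere (illustrative labelling)."""
    return int(np.argmax(rbf_forward(x)))
```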
The invention has the following beneficial effects:
1. The present invention can intelligently and accurately judge the fibre type of wool and cashmere.
2. The measurement results of the present invention are more objective and have good consistency.
Accompanying drawing explanation
Fig. 1 is a schematic flow chart of the present invention.
Detailed description of the invention
The present invention is described further below in conjunction with the accompanying drawings:
Embodiment:
The present invention comprises an online recognition flow and a model learning flow:
The online recognition flow performs qualitative analysis on fibre images collected in real time, and comprises the following steps:
(1) Image acquisition: a 3-megapixel industrial-grade CCD is used together with an Olympus CX41 biological microscope to capture images of the wool and cashmere fibres;
(2) Preprocessing, which includes two aspects:
a. A Gaussian filter is applied to smooth the image and remove noise. The Gaussian filter is a low-pass filter; its action can be expressed formally as the convolution of the input image I(x, y) with a Gaussian kernel G(x, y; σ):
S(x, y) = I(x, y) * G(x, y; σ), where G(x, y; σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²));
b. Gray-level adjustment is used to enhance the image: if xij is the element in row i, column j of image X, and maxX, minX are the maximum and minimum values in X respectively, each pixel is linearly stretched over the full gray range: x′ij = (xij − minX)/(maxX − minX);
(3) Image target extraction: edge detection based on the Canny operator and contour extraction are adopted. Canny edge detection is a multi-stage edge detection algorithm whose basic steps are: a. computing the gradients in x and y; b. non-maximum suppression; c. edge tracking. Here the Canny operator provided by OpenCV is used directly;
(4) Feature extraction: the gray level co-occurrence matrix is the joint probability distribution of pairs of gray-scale pixels a distance D apart in the image; this conditional-probability description reflects the texture of the image, i.e. the gray-level correlation of neighbouring pixels;
First the gray level co-occurrence matrix is computed. Let f(x, y) be a digital image of size M × N with Ng gray levels; the gray level co-occurrence matrix satisfying a given spatial relation is:
P(i, j) = #{(x1, y1), (x2, y2) ∈ M × N | f(x1, y1) = i, f(x2, y2) = j}
Here #(S) denotes the number of elements in the set S, so P is clearly an Ng × Ng matrix. If the distance between (x1, y1) and (x2, y2) is d and the angle of their connecting line with the horizontal axis is θ, gray level co-occurrence matrices P(i, j; d, θ) are obtained for various distances and angles; the value of element (i, j) is the number of occurrences of a pixel pair at distance d in the direction of angle θ in which one pixel has gray level i and the other has gray level j. Here the angle takes the values (0°, 30°, 60°, 90°, 120°, 150°) and the distance D takes the values (2, 4, 8), giving 18 parameter groups in total. Each input fibre image is uniformly rescaled to 48 × 48, so each input fibre image yields 18 corresponding gray level co-occurrence matrix images, with a total dimensionality of 48 × 48 × 18 = 41472;
On this basis the texture feature quantities are computed: four statistics, contrast, energy, entropy and correlation, are adopted to represent the texture features (an unsupervised dimensionality-reduction method such as PCA or ICA could also be adopted). The specific formulas are as follows:
Contrast: also called inertia, Con = ΣiΣj (i − j)² p(i, j)
Energy: the sum of the squares of the elements of the gray level co-occurrence matrix, Asm = ΣiΣj p(i, j)²
Entropy: a measure of the amount of information in the image, reflecting its randomness, Ent = −ΣiΣj p(i, j) log p(i, j)
Correlation: also referred to as homogeneity, Cor = ΣiΣj (i − μi)(j − μj) p(i, j) / (σi σj), where μi, μj and σi, σj are the means and standard deviations of the marginal distributions of p;
(5) The feature vector obtained in the previous step (18 × 4 = 72 dimensions) is used directly as the input feature vector of the artificial neural network;
The model learning flow serves to obtain a classifier; a classifier model based on an artificial neural network is adopted, comprising the following steps:
(1) The premise of model learning is the accumulation of a large database of wool and cashmere images;
(2) On this basis, manual annotation is adopted so that the machine learns the type and the location of the target fibres; this is a form of supervised learning;
(3) The fibre images in the database are preprocessed and features are extracted; this step is identical to steps (2)–(4) of the online recognition flow;
(4) The learning process adopts a three-layer artificial neural network with 72 input-layer nodes, 60 hidden-layer nodes and 2 output-layer nodes; the activation function is a radial basis function (RBF).
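The patent does not specify how the RBF network is trained; one common recipe, sketched here purely as an assumption, selects the hidden-unit centres from the training samples and solves the output-layer weights by least squares against one-hot wool/cashmere targets.

```python
import numpy as np

def train_rbf(X, y, n_hidden=60, width=1.0, seed=0):
    """Fit a three-layer RBF classifier: centres drawn from the training
    set, output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_hidden, len(X)), replace=False)
    centres = X[idx]
    # Hidden-layer activations for every training sample.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * width ** 2))
    # One-hot targets for the 2 output nodes (wool / cashmere).
    T = np.eye(2)[y]
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return centres, W

def predict_rbf(X, centres, W, width=1.0):
    """Forward pass + argmax over the 2 output nodes."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * width ** 2))
    return (H @ W).argmax(axis=1)
```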
The present invention has been described in detail above by way of embodiments, but the content described is only a preferred embodiment of the invention and cannot be regarded as limiting its practical scope. Any similar technical scheme designed by those skilled in the art using the technical solutions of the invention, or under the inspiration of those solutions, that achieves the above technical effects, and any equivalent change or improvement made within the scope of application, shall still fall within the protection scope covered by the invention.