CN111861103A - Fresh tea leaf classification method based on multiple features and multiple classifiers - Google Patents

Fresh tea leaf classification method based on multiple features and multiple classifiers

Info

Publication number
CN111861103A
CN111861103A (application CN202010505288.3A; granted publication CN111861103B)
Authority
CN
China
Prior art keywords
tea
image
distance matrix
classification
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010505288.3A
Other languages
Chinese (zh)
Other versions
CN111861103B (en)
Inventor
毛腾跃
黄印
帖军
张雯娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South Central Minzu University
Original Assignee
South Central University for Nationalities
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South Central University for Nationalities filed Critical South Central University for Nationalities
Priority to CN202010505288.3A priority Critical patent/CN111861103B/en
Publication of CN111861103A publication Critical patent/CN111861103A/en
Application granted granted Critical
Publication of CN111861103B publication Critical patent/CN111861103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395 Quality analysis or management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention provides a fresh tea leaf classification method based on multiple features and multiple classifiers, which comprises the following steps: extracting geometric features and texture features from the tea image training samples and inputting them into a support vector machine (SVM) for training to obtain a trained SVM model; predicting the tea image to be predicted with the trained SVM model to obtain the classification probability of each type of tea based on the SVM classifier; performing corner detection and distance matrix similarity calculation on the tea image training samples, and obtaining the classification probability of each type of tea based on the special corner count and the distance matrix from the corner detection result and the distance matrix similarity; fusing the SVM-based classification probabilities and the special-corner-count and distance-matrix-based classification probabilities with KNN-based result fusion to obtain the final classification probability of each type of tea; among the final classification probabilities of the tea types, the category label corresponding to the maximum probability value is the final classification result. The beneficial effect provided by the invention is that the accuracy of tea leaf classification is improved.

Description

Fresh tea leaf classification method based on multiple features and multiple classifiers
Technical Field
The invention relates to the field of image classification, in particular to a fresh tea leaf classification method based on multiple features and multiple classifiers.
Background
In recent years, with rising labor costs and a shortage of tea pickers, more and more tea plantations have adopted mechanical plucking to reduce production costs and improve efficiency. However, mechanical plucking cannot collect fresh leaves of a specific grade according to production and processing requirements; fresh leaves of various grades (such as single bud, one bud one leaf, one bud two leaves and one bud three leaves) are mixed together, which greatly reduces tea quality, so the leaves can only be processed and sold as low-priced tea. This has become a main factor restricting the development and popularization of mechanical tea plucking.
At present, some physical tea grading methods exist. In a vibration-screening tea grader, as the tea leaves vibrate forward from the feeding end to the output end, leaves of different grades fall under gravity through sorting apertures of different sizes onto receivers below, achieving fresh leaf grading with an accuracy of about 70%. A tea winnowing machine separates fresh leaves of different weights by blowing: lighter bud leaves are thrown farther and heavier ones land nearer, also with an accuracy of about 70%. The classification accuracy of such physical approaches is therefore low and the error is large. In addition, photoelectric-sensor-based tea color sorters can effectively separate impurities, leaf stalks and the like after tea processing with an accuracy above 90%, but they can only remove impurities and still cannot perform effective grade classification.
With the rapid development of computer technology, image classification based on computer vision has been applied in many fields, and studies at home and abroad on classifying plant leaves with computer vision have achieved good results. For example, Dong Hua et al. combined shape and texture features and used a BP feed-forward neural network to classify 300 sample leaves of 6 species, reaching an accuracy of 98.4%. Zheng et al. [8] proposed a transfer-learning-based convolutional neural network method for plant leaf image recognition, addressing the small-sample nature of plant leaf databases; their pre-trained AlexNet and Inception V3 models reached recognition rates of 95.31% and 95.40% on the ICL database. Turkoglu et al., in order to increase the similarity among leaves of the same species, divided each leaf into two and four parts, extracted texture features, color features, gray-level co-occurrence matrices and Fourier descriptors, and performed leaf identification with an extreme learning machine, reaching 99.10% accuracy on the Flavia leaf data set.
Research on and application of computer vision to tea leaf classification are still limited and fall mainly into two categories. The first separates impurities from processed tea: for example, processed tea leaves have been classified with a BP neural network based on color and shape features with an accuracy above 90%; Wu Zhengmin et al. weighted morphological features with a random forest and classified with a support vector machine, reaching 93.8%; Song et al. proposed a tea grade identification method combining a shape feature histogram with a support vector machine, reaching 95.71%. The second classifies plucked fresh leaves: one study selected 6 geometric features and 2 texture features of the tea image as classification features and built a BP neural network model with an accuracy above 90%, but did not distinguish one bud two leaves from one bud three leaves; Gao et al. designed an intelligent fresh tea sorting system with a 7-layer convolutional neural network recognition model, reaching a classification accuracy of 92.25% for single bud, one bud one leaf, one bud two leaves and one bud three leaves; Wu Zhengmin et al. classified green tea using morphological features such as convex hull area, convex hull perimeter, major axis length and minor axis length combined with a BP neural network, with an accuracy above 90%, again without distinguishing one bud two leaves from one bud three leaves.
Disclosure of Invention
In view of the above, the invention extracts geometric and texture features of fresh tea leaves and classifies them with a support vector machine, adds a classification method based on the special corner points of the tea leaf and their distance matrix, obtains the classification result through result fusion, and thereby provides a new approach for fresh tea leaf classification research.
The invention provides a fresh tea leaf classification method based on multiple features and multiple classifiers, which comprises the following steps:
s101: acquiring a tea image as a training data set;
s102: preprocessing the tea image to obtain a preprocessed tea image;
s103: extracting geometric features of the preprocessed tea images to obtain 5 relative geometric features and 7 Hu invariant moment relative geometric features of the preprocessed tea images;
carrying out contour extraction on the preprocessed tea leaf image to obtain a tea leaf contour, and carrying out polygon fitting according to the tea leaf contour to obtain a fitted polygon;
s104: extracting texture features of the gray level image of the tea image to obtain 1 texture feature of the tea image;
carrying out angular point detection on the fitted polygons, and obtaining the tea category and the classification probability corresponding to the tea category by combining the tea sample data statistics according to the number of special angular points obtained by angular point detection;
S105: training an SVM classifier by using 12 relative geometric features and 1 texture feature to obtain a fresh tea classification model based on an SVM;
measuring the distance matrix of the special angular points to obtain a distance matrix characteristic library of a special angular point sequence;
carrying out angular point detection on the tea image to be predicted, and carrying out distance matrix measurement according to the special angular points of the sample to be predicted to obtain a distance matrix of the sample to be predicted;
calculating the similarity of the distance matrix according to the distance matrix of the tea image to be predicted and the distance matrix feature library of the special angular point sequence;
s106: classifying and predicting the tea images to be predicted by using the SVM-based fresh tea classification model to obtain the classification probability of each category of tea based on relative geometric features, textural features and the SVM;
classifying and predicting the tea images to be predicted according to the number of special angular points and the distance matrix similarity of the tea images to be predicted to obtain classification probabilities of various classes of tea based on the special angular points and the distance matrix similarity;
s107: and performing KNN-based result fusion on the classification probability of each class of tea based on the relative geometric features, the textural features and the SVM and the classification probability of each class of tea based on the special angular points and the distance matrix to obtain a final classification result of the tea image to be predicted.
Further, in step S102, the tea image is preprocessed to obtain a preprocessed tea image, specifically: carrying out gray level transformation on the tea image to obtain a gray level image; performing Gaussian filtering on the gray level image to obtain a filtered and denoised image; and processing the filtered and denoised image by adopting an Otsu algorithm to obtain a preprocessed tea image, namely a binary image.
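As a concrete illustration of this preprocessing chain, the following OpenCV sketch performs the gray-level transformation, Gaussian filtering and Otsu binarization; the 5 × 5 Gaussian kernel is an assumed value, since the patent does not specify the filter parameters.

```python
import cv2

def preprocess_tea_image(bgr_image):
    """Step S102: gray-level transformation, Gaussian filtering, Otsu binarization."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)       # gray-level transformation
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)              # Gaussian filtering (assumed 5x5 kernel)
    # Otsu's algorithm selects the threshold automatically; the result is the binary image
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return gray, binary
```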
Further, in step S103, the 5 relative geometric features are the rectangularity, circularity, sphericity, eccentricity and perimeter concave-convex ratio, calculated by equations (1), (2), (3), (4) and (5):

rectangularity = A / A_MER    (1)

circularity = 4πA / P²    (2)

sphericity = R_MIC / R_MCC    (3)

eccentricity = A / B    (4)

perimeter concave-convex ratio = P / P_CH    (5)

In equation (1), A is the leaf area and A_MER is the area of the minimum circumscribed rectangle; in equation (2), A is the leaf area and P is the leaf perimeter; in equation (3), R_MIC is the maximum inscribed circle radius of the leaf and R_MCC is the minimum circumscribed circle radius of the leaf; in equation (4), A is the long axis of the leaf and B is the short axis of the leaf; in equation (5), P is the leaf perimeter and P_CH is the perimeter of the convex hull of the leaf.
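The following OpenCV/Python sketch computes the five relative geometric features from a leaf contour and its binary mask. The use of the minimum-area rectangle for the long and short axes, and the orientation of the ratios in equations (4) and (5), are assumptions, since those formulas are reproduced only as images in the original publication.

```python
import math
import cv2

def relative_geometric_features(contour, binary):
    """binary: mask of the preprocessed image with leaf pixels non-zero."""
    A = cv2.contourArea(contour)                              # leaf area
    P = cv2.arcLength(contour, True)                          # leaf perimeter
    (_, _), (w, h), _ = cv2.minAreaRect(contour)              # minimum circumscribed rectangle
    A_mer = max(w * h, 1e-6)
    _, R_mcc = cv2.minEnclosingCircle(contour)                # minimum circumscribed circle radius
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)      # distance of each leaf pixel to the background
    R_mic = float(dist.max())                                 # maximum inscribed circle radius
    P_ch = cv2.arcLength(cv2.convexHull(contour), True)       # convex hull perimeter
    long_axis, short_axis = max(w, h), max(min(w, h), 1e-6)
    return {
        "rectangularity": A / A_mer,                          # equation (1)
        "circularity": 4 * math.pi * A / max(P ** 2, 1e-6),   # equation (2)
        "sphericity": R_mic / max(R_mcc, 1e-6),               # equation (3)
        "eccentricity": long_axis / short_axis,               # equation (4), assumed orientation
        "perimeter_ratio": P / max(P_ch, 1e-6),               # equation (5), assumed orientation
    }
```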
Further, in step S103, the 7 Hu invariant moment relative geometric features H1-H7 are constructed from the second- and third-order normalized central moments; the calculation formulas are given in equations (6), (7), (8), (9), (10), (11) and (12):

H1 = η20 + η02    (6)

H2 = (η20 - η02)² + 4η11²    (7)

H3 = (η30 - 3η12)² + (3η21 - η03)²    (8)

H4 = (η30 + η12)² + (η21 + η03)²    (9)

H5 = (η30 - 3η12)(η30 + η12)[(η30 + η12)² - 3(η21 + η03)²] + (3η21 - η03)(η21 + η03)[3(η30 + η12)² - (η21 + η03)²]    (10)

H6 = (η20 - η02)[(η30 + η12)² - (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)    (11)

H7 = (3η21 - η03)(η30 + η12)[(η30 + η12)² - 3(η21 + η03)²] - (η30 - 3η12)(η21 + η03)[3(η30 + η12)² - (η21 + η03)²]    (12)

In equations (6) to (12), η_pq = μ_pq / μ_00^((p+q)/2 + 1) is the normalized central moment, where p and q are the orders, p, q = 0, 1, 2, 3 and p + q takes the value 2 or 3; μ_pq = Σ_{x=1..N} Σ_{y=1..M} (x - x̄)^p (y - ȳ)^q f(x, y) is the central moment, where x̄ = m_10 / m_00 and ȳ = m_01 / m_00 are the coordinates of the center of gravity of the image, N and M are the height and width of the image, and f(x, y) is the digital image function of the preprocessed and discretized tea image; m_pq = Σ_{x=1..N} Σ_{y=1..M} x^p y^q f(x, y) is the geometric moment of order p + q of the discretized tea digital image.
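A minimal sketch of the Hu moment extraction follows; it assumes OpenCV, whose cv2.HuMoments implements the same construction from normalized central moments described above.

```python
import cv2

def hu_moment_features(binary):
    m = cv2.moments(binary, binaryImage=True)   # geometric, central and normalized central moments
    return cv2.HuMoments(m).flatten()           # H1..H7, shape (7,)
```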
Further, in step S104, performing texture feature extraction on the gray level image of the tea image, specifically including:
s201: carrying out color space normalization on the gray level image of the tea image by adopting a Gamma correction method, reducing the influence caused by local shadow and illumination change of the image and obtaining the corrected tea image;
s202: dividing the corrected tea image into a plurality of blocks, wherein each block consists of a plurality of cells; each cell size is n × n pixels; each block is m × m cells;
s203: calculating the gradient of each pixel point in each cell, and counting a histogram of the gradient direction of each pixel point of each cell to obtain a descriptor of each cell;
s204: connecting descriptors of each cell in series to obtain HOG descriptors of each block, and meanwhile normalizing gradient strength in each block;
S205: concatenating the HOG descriptors of all blocks of the corrected tea image to obtain the HOG feature of the corrected tea image, namely the 1 texture feature of the tea image.
Further, in step S105, training an SVM classifier using 12 relative geometric features and 1 texture feature to obtain a fresh tea classification model based on an SVM, specifically: performing parameter optimization over different penalty coefficients, different kernel functions and the parameters corresponding to the kernel functions by grid search and cross validation to determine the optimal penalty coefficient and kernel function of the SVM fresh tea classification model; the different kernel functions include a linear kernel function, a polynomial kernel function and a radial basis kernel function; the parameters corresponding to the kernel functions comprise the kernel parameter gamma and the highest polynomial degree.
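A sketch of this grid search is given below, assuming scikit-learn; the candidate parameter values in param_grid are illustrative assumptions rather than the exact grids used in the patent.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Candidate values are illustrative; the patent reports C = 0.25 with a linear kernel as optimal.
param_grid = [
    {"kernel": ["linear"], "C": [0.25, 0.5, 1, 2, 4]},
    {"kernel": ["poly"], "C": [0.25, 0.8, 2], "gamma": [0.25, 1.67], "degree": [3, 5]},
    {"kernel": ["rbf"], "C": [0.25, 1, 3], "gamma": [0.25, 1.67]},
]

def train_svm(X, y):
    # X: one row per sample, the 12 relative geometric features concatenated with the HOG texture feature
    search = GridSearchCV(SVC(probability=True), param_grid, cv=10)
    search.fit(X, y)
    return search.best_estimator_               # SVM-based fresh tea classification model
```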
Further, in step S103, performing contour extraction on the preprocessed tea image to obtain the tea leaf contour, and performing polygon fitting according to the tea leaf contour to obtain a fitted polygon, specifically: the contour extraction uses the OpenCV FindContours method, and the polygon is fitted with the OpenCV implementation of the Douglas-Peucker algorithm.
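The contour extraction and polygon fitting can be sketched as follows with OpenCV; the approximation accuracy of 0.02 times the contour perimeter is the value given later in the embodiment, and keeping only the largest contour is an assumption.

```python
import cv2

def fit_polygon(binary):
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)           # assume the largest contour is the tea leaf
    epsilon = 0.02 * cv2.arcLength(contour, True)          # approximation accuracy from the embodiment
    polygon = cv2.approxPolyDP(contour, epsilon, True)     # Douglas-Peucker polygon fitting
    return contour, polygon
```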
Further, in step S104, performing corner detection on the fitted polygon, and according to the number of special corners obtained by the corner detection, the method for distinguishing the special corners specifically includes: in the fitted polygons, the corner point corresponding to the corner P of the polygon satisfying the condition of the formula (13) is the special corner point:
∠P ≤ A (0° ≤ A ≤ 180°)    (13)

In equation (13), ∠P is the angle of corner P, and A is a preset angle threshold.
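The special corner test of equation (13) can be sketched as below: the interior angle at each vertex of the fitted polygon is computed from its two adjacent edges and compared with the threshold A (100° in the embodiment). Treating the vertex angle as the angle between the two edge vectors is an assumption about how ∠P is measured.

```python
import numpy as np

def special_corners(polygon, angle_threshold_deg=100.0):
    """Return the vertices of the fitted polygon whose interior angle is <= the threshold A."""
    pts = polygon.reshape(-1, 2).astype(float)
    corners = []
    for i in range(len(pts)):
        prev_pt, cur, next_pt = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
        v1, v2 = prev_pt - cur, next_pt - cur
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if angle <= angle_threshold_deg:                   # condition of equation (13)
            corners.append(cur)
    return np.array(corners)                               # special corner point sequence
```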
In step S105, the distance matrix of the special corner points is measured and the distance matrix similarity is calculated, specifically: the distance matrix is as shown in equation (14):

[Equation (14): the distance matrix of the special corner point sequence, shown as an image in the original publication]

In equation (14), d_ij denotes the Euclidean distance between the i-th point and the j-th point in the special corner point sequence, d_ij = sqrt((x_i - x_j)² + (y_i - y_j)²), and d_max denotes the maximum of all d_ij in the matrix.

The distance matrix similarity is as shown in equation (15):

[Equation (15): the distance matrix similarity S, shown as an image in the original publication]

In equation (15), S denotes the distance matrix similarity, D_A denotes the distance matrix of the tea image to be predicted, and D_n denotes the corresponding distance matrix in the distance matrix feature library of the special corner point sequence, which is a known value.
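Because equations (14) and (15) are reproduced only as images, the following sketch is a hedged reconstruction: it assumes the matrix entries are the pairwise Euclidean distances normalized by d_max, and it uses one plausible similarity score (1 minus the mean absolute element-wise difference) in place of the patent's exact formula (15). The input corners is the n × 2 array of special corner coordinates.

```python
import numpy as np

def distance_matrix(corners):
    """corners: n x 2 array of special corner coordinates."""
    diff = corners[:, None, :] - corners[None, :, :]
    d = np.linalg.norm(diff, axis=-1)          # Euclidean distances d_ij
    d_max = d.max() if d.size and d.max() > 0 else 1.0
    return d / d_max                           # assumed normalization by d_max

def matrix_similarity(d_a, d_n):
    """Assumed similarity score between two distance matrices of the same size."""
    return 1.0 - float(np.mean(np.abs(d_a - d_n)))
```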
The technical scheme provided by the invention has the beneficial effects that: the accuracy of tea classification is improved.
Drawings
FIG. 1 is a flow chart of a fresh tea leaf classification method based on multiple features and multiple classifiers according to the present invention;
FIG. 2 is a schematic diagram illustrating the effect of image preprocessing according to the present invention;
FIG. 3 shows penalty coefficient C and classification accuracy under a linear kernel function based on an SVM classifier according to an embodiment of the present invention;
FIG. 4 is a line graph of the highest classification accuracy of different types of kernel functions of the SVM-based classifier according to the present invention;
FIG. 5 is a schematic view of a process of fitting polygons to tea leaves according to the present invention;
FIG. 6 is a diagram illustrating a relationship between a value of an angle A and a total number N of special corner points in an embodiment of the present invention;
FIG. 7 is a flow chart of the classification of fresh tea leaves based on the similarity of the special corner points and the distance matrix according to the present invention;
FIG. 8 is a schematic diagram of a selection relationship between a distance calculation formula and K in KNN fusion according to the present invention;
FIG. 9 is a schematic view of the fresh tea leaf classification data set of the invention;
FIG. 10 is a schematic diagram of a classification result confusion matrix of an SVM based classification model according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a classification result confusion matrix of the SVM + classification model based on the special corner points and the distance matrix thereof in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention provides a fresh tea leaf classification method based on multiple features and multiple classifiers, including the following steps:
s101: acquiring a tea image as a training data set; the training data set comprises a plurality of classes of tea images, and each class of tea image comprises a plurality of tea images;
s102: preprocessing the tea image to obtain a preprocessed tea image;
S103: extracting geometric features of the preprocessed tea images to obtain 5 relative geometric features and 7 Hu invariant moment relative geometric features of the preprocessed tea images;
carrying out contour extraction on the preprocessed tea leaf image to obtain a tea leaf contour, and carrying out polygon fitting according to the tea leaf contour to obtain a fitted polygon;
s104: extracting texture features of the gray level image of the tea image to obtain 1 texture feature of the tea image;
carrying out angular point detection on the fitted polygons, and obtaining the tea category and the classification probability corresponding to the tea category by combining the tea sample data statistics according to the number of special angular points obtained by angular point detection;
s105: training an SVM classifier by using 12 relative geometric features and 1 texture feature to obtain a fresh tea classification model based on an SVM;
measuring the distance matrix of the special angular points to obtain a distance matrix characteristic library of a special angular point sequence;
carrying out angular point detection on the tea image to be predicted, and carrying out distance matrix measurement according to the special angular points of the sample to be predicted to obtain a distance matrix of the sample to be predicted;
calculating the similarity of the distance matrix according to the distance matrix of the tea image to be predicted and the distance matrix feature library of the special angular point sequence;
S106: classifying and predicting the tea images to be predicted by using the SVM-based fresh tea classification model to obtain the classification probability of each category of tea based on relative geometric features, textural features and the SVM;
classifying and predicting the tea images to be predicted according to the number of special angular points and the distance matrix similarity of the tea images to be predicted to obtain classification probabilities of various classes of tea based on the special angular points and the distance matrix similarity;
s107: and performing KNN-based result fusion on the classification probability of each class of tea based on the relative geometric features, the textural features and the SVM and the classification probability of each class of tea based on the special angular points and the distance matrix to obtain a final classification result of the tea image to be predicted.
Referring to fig. 2, fig. 2 is a diagram illustrating the image preprocessing effect of the method;
in step S102, the tea image is preprocessed to obtain a preprocessed tea image, which specifically includes: carrying out gray level transformation on the tea image to obtain a gray level image; performing Gaussian filtering on the gray level image to obtain a filtered and denoised image; and processing the filtered and denoised image by adopting an Otsu algorithm to obtain a preprocessed tea image, namely a binary image.
In step S103, geometric feature extraction is performed on the preprocessed tea image, so as to obtain 5 relative geometric features and 7 Hu invariant moment relative geometric features of the preprocessed tea image.
Based on the binary image obtained by image preprocessing, the 5 relative geometric features of rectangularity, circularity, sphericity, eccentricity and perimeter concave-convex ratio are extracted; the calculation formulas are listed in Table 1.

TABLE 1 Relative geometric feature calculation formulas

[Table 1 is shown as an image in the original publication; it lists the formulas of equations (1)-(5) above.]
Based on the binary image obtained by image preprocessing, the 7 Hu invariant moment relative geometric features are extracted. Specifically, H1-H7 are constructed from the second- and third-order normalized central moments, with calculation formulas as given in equations (6), (7), (8), (9), (10), (11) and (12):

H1 = η20 + η02    (6)

H2 = (η20 - η02)² + 4η11²    (7)

H3 = (η30 - 3η12)² + (3η21 - η03)²    (8)

H4 = (η30 + η12)² + (η21 + η03)²    (9)

H5 = (η30 - 3η12)(η30 + η12)[(η30 + η12)² - 3(η21 + η03)²] + (3η21 - η03)(η21 + η03)[3(η30 + η12)² - (η21 + η03)²]    (10)

H6 = (η20 - η02)[(η30 + η12)² - (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)    (11)

H7 = (3η21 - η03)(η30 + η12)[(η30 + η12)² - 3(η21 + η03)²] - (η30 - 3η12)(η21 + η03)[3(η30 + η12)² - (η21 + η03)²]    (12)

In equations (6) to (12), η_pq = μ_pq / μ_00^((p+q)/2 + 1) is the normalized central moment, where p and q are the orders, p, q = 0, 1, 2, 3 and p + q takes the value 2 or 3; μ_pq = Σ_{x=1..N} Σ_{y=1..M} (x - x̄)^p (y - ȳ)^q f(x, y) is the central moment, where x̄ = m_10 / m_00 and ȳ = m_01 / m_00 are the coordinates of the center of gravity of the image, N and M are the height and width of the image, and f(x, y) is the digital image function of the preprocessed and discretized tea image; m_pq = Σ_{x=1..N} Σ_{y=1..M} x^p y^q f(x, y) is the geometric moment of order p + q of the discretized tea digital image.
In step S104, texture feature extraction is performed on the grayscale image of the tea image to obtain 1 texture feature of the tea image, specifically:
The texture feature is extracted from the grayscale image obtained in the preprocessing stage using the histogram of oriented gradients (HOG). First, the color space of the input image is normalized with a Gamma correction method to adjust contrast, reduce the influence of local shadows and illumination changes, and suppress noise. The image is then divided into blocks, each consisting of several cells, with the cell size set to 8 × 8 pixels and each block set to 4 × 4 cells. The gradient (or edge) direction histogram of each pixel in a cell is computed, and the gradient direction histogram of each cell is accumulated to obtain the cell descriptor; the cell feature vectors are concatenated to obtain the HOG feature of the block, and the gradient strength within the block is normalized. Finally, the HOG descriptors of all blocks in the image are concatenated to obtain the HOG feature of the image, i.e. the tea texture feature used for image classification.
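A sketch of the HOG texture feature with the layout given above (8 × 8-pixel cells, 4 × 4 cells per block) follows, assuming scikit-image; the 9 orientation bins, the L2-Hys block normalization and the use of transform_sqrt as a stand-in for the Gamma correction step are assumptions.

```python
from skimage.feature import hog

def hog_texture_feature(gray):
    return hog(gray,
               orientations=9,                 # assumed number of gradient-direction bins
               pixels_per_cell=(8, 8),         # cell size from the embodiment
               cells_per_block=(4, 4),         # block size from the embodiment
               block_norm="L2-Hys",            # assumed in-block gradient-strength normalization
               transform_sqrt=True,            # rough stand-in for the Gamma correction step
               feature_vector=True)            # concatenate all block descriptors
```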
In step S106, based on the 12 relative geometric features and 1 texture feature, the fresh tea leaf classification model based on the SVM is used to perform classification prediction on the tea leaf image, so as to obtain classification probabilities of each category of tea leaves based on the relative geometric features, the texture features and the SVM.
In this embodiment, the penalty coefficient C, the different kernel functions (linear kernel function, polynomial kernel function, radial basis kernel function) and the parameters (kernel parameter gamma, highest power degree) corresponding to the kernel functions are optimized in a grid search and cross validation based manner, and the optimal combination is determined for the classification of fresh tea: the kernel function kernel is a linear kernel function, the penalty coefficient C is 0.25, the relationship between the penalty coefficient C and the classification accuracy under the linear kernel function is shown in fig. 3, and the highest classification accuracy of each kernel function is shown in fig. 4. In fig. 4, when kernel is polynomial kernel, C is 0.8, gamma is 1.67, and degree is 5; when kernel is a linear kernel, C is 0.25; when kernel is a radial basis kernel function, C is 3 and gamma is 0.25;
referring to fig. 5, fig. 5 is a schematic view illustrating a process of fitting a polygon of tea leaves according to an embodiment of the present invention; taking a one-bud two-leaf sample as an example, based on a binary image obtained in an image preprocessing stage, performing contour extraction by using an OPENCV (open content circuit) findcours method, performing polygon fitting on a tea contour by using a Douglas-Peucker algorithm, setting approximation accuracy as the contour perimeter multiplied by 0.02, and performing color filling on the obtained fitting polygon.
The number of special corner points of the obtained fitted polygon is then detected. A special corner point is defined as any corner point of the fitted polygon whose corner P satisfies

∠P ≤ A (0° ≤ A ≤ 180°)

where ∠P is the angle of corner P.
A is first set to 90°, and special corner point detection and counting are performed on 1755 fresh tea leaf samples of the four types (single bud, one bud one leaf, one bud two leaves, one bud three leaves). The number of special corner points detected per sample lies between 1 and 8, mostly between 2 and 5. Specifically, 2 special corner points are detected in 424 (N1) of the 450 single-bud samples, 3 in 355 (N2) of the 432 one-bud-one-leaf samples, 4 in 291 (N3) of the 441 one-bud-two-leaf samples, and 5 in 222 (N4) of the 432 one-bud-three-leaf samples, giving a total N = N1 + N2 + N3 + N4 = 1292, which is consistent with direct observation of the fitted polygons of the samples. When A changes, the total N changes accordingly, and the value of A that maximizes N is taken as the optimal value of the special corner point condition. Experiments on the value of A show that N is maximal when A = 100°, i.e. a special corner point is a vertex of the fitted polygon whose angle is at most 100°. The relationship between the value of A and the total number N is shown in FIG. 6.
With the special corner point condition ∠P ≤ 100°, special corner point detection and counting are performed on the samples. An 8 × 4 matrix is formed from the special corner point counts (1-8) and the corresponding number of samples in each tea category (single bud, one bud one leaf, one bud two leaves, one bud three leaves), and each row of the matrix is normalized to obtain the classification probabilities corresponding to each special corner point count (a sketch of this construction in code follows Table 1). Taking the 1755 samples in the data set as an example, the results are shown in Table 1.
TABLE 1 Classification probabilities corresponding to the number of special corners
[Table 1 is shown as an image in the original publication; rows correspond to special corner point counts 1-8 and columns to the four tea categories.]
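The construction of this 8 × 4 count-to-probability matrix can be sketched as follows; the class indices and the handling of empty rows are assumptions.

```python
import numpy as np

def count_probability_matrix(corner_counts, labels, n_classes=4, max_corners=8):
    # corner_counts: special corner count per training sample (1..8)
    # labels: class index per sample (0 = single bud, 1 = one bud one leaf,
    #         2 = one bud two leaves, 3 = one bud three leaves)
    counts = np.zeros((max_corners, n_classes))
    for c, y in zip(corner_counts, labels):
        if 1 <= c <= max_corners:
            counts[c - 1, y] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```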
Through the analysis of the data in the table 1, for the samples with the number of the special corner points between 3 and 5, although most of the samples with the detected 3 special corner points are one bud and one leaf, part of the samples are other types. Similarly, the samples with 4 special corner points detected have other categories besides one bud and two leaves, and the samples with 5 special corner points detected also have other categories besides one bud and three leaves. In order to reduce the influence on the classification accuracy, the similarity of the distance matrix of the special corner sequence is used for comparison again.
In step S105, the distance matrix of the special corner points is measured and the distance matrix similarity is calculated, specifically: the distance matrix is as shown in equation (14):

[Equation (14): the distance matrix of the special corner point sequence, shown as an image in the original publication]

In equation (14), d_ij denotes the Euclidean distance between the i-th point and the j-th point in the special corner point sequence, d_ij = sqrt((x_i - x_j)² + (y_i - y_j)²), and d_max denotes the maximum of all d_ij in the matrix.

The distance matrix similarity is as shown in equation (15):

[Equation (15): the distance matrix similarity S, shown as an image in the original publication]

In equation (15), S denotes the distance matrix similarity, D_A denotes the distance matrix of the tea image to be predicted, and D_n denotes the corresponding distance matrix in the distance matrix feature library of the special corner point sequence, which is a known value.
Referring to FIG. 7, FIG. 7 is a flowchart of the fresh tea leaf classification process based on the special corner points and the distance matrix similarity. For the tea image to be predicted, the number of special corner points (denoted count) is first detected, and then (a sketch of this decision procedure in code is given after case 3) below):

1) if 1 ≤ count ≤ 2 or 6 ≤ count ≤ 8, the four values of the row of Table 1 whose special corner point count equals count are taken as the classification probabilities of the four categories;

2) if 2 ≤ count ≤ 5, the distance matrix of the special corner point sequence of the tea image to be predicted is calculated and its similarity with the corresponding matrix in the distance matrix feature library of the special corner point sequence is computed: when count is 2 the similarity is computed with D0, when count is 3 with D1, when count is 4 with D2, and when count is 5 with D3. When the similarity S is greater than a threshold t, the four values of the row of Table 1 whose special corner point count equals count are taken as the classification probabilities of the four categories; otherwise the classification probabilities of the four categories are set to 0. The threshold t is obtained by grid search and cross validation; the similarity thresholds are shown in Table 2.
TABLE 2 distance matrix similarity threshold
[Table 2 is shown as an image in the original publication; it lists the similarity thresholds t0-t3.]
Note: t0 is a threshold for calculating the degree of similarity with D0, t1 is a threshold for calculating the degree of similarity with D1, t2 is a threshold for calculating the degree of similarity with D2, and t3 is a threshold for calculating the degree of similarity with D3.
3) if count is 0 or count > 8, the classification probabilities of the four categories are all set to 0.
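A sketch of cases 1)-3) is given below. Because cases 1) and 2) overlap at count = 2 in the text above, the sketch gives the distance-matrix comparison priority for that count, which is an assumption; prob_table is the 8 × 4 matrix of Table 1, feature_library maps counts 2-5 to D0-D3, and thresholds maps them to t0-t3.

```python
import numpy as np

def matrix_similarity(d_a, d_n):
    # assumed similarity score, as in the earlier distance-matrix sketch
    return 1.0 - float(np.mean(np.abs(d_a - d_n)))

def corner_branch_probabilities(count, d_sample, prob_table, feature_library, thresholds):
    if count == 0 or count > 8:
        return np.zeros(4)                                        # case 3)
    if 2 <= count <= 5:                                           # case 2): distance-matrix comparison
        s = matrix_similarity(d_sample, feature_library[count])   # D0..D3 for counts 2..5
        return prob_table[count - 1] if s > thresholds[count] else np.zeros(4)  # thresholds t0..t3
    return prob_table[count - 1]                                  # case 1): count of 1 or 6..8
```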
Result fusion is then performed on the per-class classification probabilities from the SVM classifier (based on the geometric and texture features) and the per-class classification probabilities based on the special corner points and their distance matrix, and the final classification result is obtained from the fused classification probabilities.
KNN is a common method in data mining and classification; its idea is that if most of the k nearest samples of a sample in feature space belong to a certain class, the sample is assigned to that class. Two factors mainly affect the classification performance of KNN: the choice of distance formula and the choice of the parameter K. Experiments with the Euclidean distance, the Manhattan distance and different values of K show that the best result is obtained with the Manhattan distance and K set to 30; the results are shown in FIG. 8.
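A sketch of the KNN-based result fusion follows, assuming scikit-learn and the Manhattan distance with K = 30 reported above. Concatenating the two four-class probability vectors into one eight-dimensional KNN input is an assumption about the fusion scheme, since the patent does not spell out the exact feature layout.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_fusion_knn(svm_probs, corner_probs, labels):
    fused = np.hstack([svm_probs, corner_probs])   # n_samples x 8: the two 4-class probability vectors
    knn = KNeighborsClassifier(n_neighbors=30, metric="manhattan")
    knn.fit(fused, labels)
    return knn                                     # final class = knn.predict(fused_sample)
```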
To better verify the proposed method, the experiments use a self-constructed data set of 1973 images of fresh Longjing tea leaves, divided into the four types shown in FIG. 9: 502 single bud, 488 one bud one leaf, 494 one bud two leaves and 489 one bud three leaves.
To verify the effectiveness of the fresh tea leaf classification model based on multiple features and multiple classifiers, classification experiments are carried out with the SVM fresh tea classification model, the classification model based on the special corner points and their distance matrix, and the KNN result-fusion model. The classification models are implemented with Python and the scikit-learn library; 10-fold cross validation is used, dividing 1950 samples of the data set into 10 folds and taking 9 folds as the training set and 1 fold as the test set in turn. The experimental results are shown in Table 3, and the classification result confusion matrices of the SVM model and of the SVM combined with the special-corner-point and distance-matrix model are shown in FIG. 10 and FIG. 11.
TABLE 3 Classification accuracy of different classification models
[Table 3 is shown as an image in the original publication; it lists the classification accuracy of each model.]
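A sketch of the 10-fold cross-validation protocol follows, assuming scikit-learn; evaluate_fold is a hypothetical helper standing for whichever of the three models is trained on the training folds and scored on the test fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def ten_fold_accuracy(X, y, evaluate_fold):
    # X, y: numpy arrays of features and labels; evaluate_fold(X_tr, y_tr, X_te, y_te) -> accuracy
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = [evaluate_fold(X[tr], y[tr], X[te], y[te]) for tr, te in skf.split(X, y)]
    return float(np.mean(scores))
```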
As the data in Table 3 show, the proposed fresh tea leaf classification model based on multiple features and multiple classifiers achieves higher classification accuracy than the SVM model used alone, an improvement of 3.16%. Moreover, existing methods mainly classify using absolute geometric features of the tea leaves; although their three-class results (single bud, one bud one leaf, one bud with multiple leaves) are good, their four-class results are poorer, and one bud two leaves is not easily distinguished from one bud three leaves. As can be seen from the confusion matrices in FIG. 10 and FIG. 11, the classification accuracy of the proposed model is improved for every class, with the improvement being most obvious for one bud one leaf and one bud two leaves.
The beneficial effects of the implementation of the invention are as follows: the accuracy of tea classification is improved.
The features of the embodiments and embodiments described herein above may be combined with each other without conflict.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A fresh tea leaf classification method based on multiple features and multiple classifiers is characterized in that: the method specifically comprises the following steps:
s101: acquiring a tea image as a training data set;
s102: preprocessing the tea image to obtain a preprocessed tea image;
s103: extracting geometric features of the preprocessed tea images to obtain 5 relative geometric features and 7 Hu invariant moment relative geometric features of the preprocessed tea images;
carrying out contour extraction on the preprocessed tea leaf image to obtain a tea leaf contour, and carrying out polygon fitting according to the tea leaf contour to obtain a fitted polygon;
s104: extracting texture features of the gray level image of the tea image to obtain 1 texture feature of the tea image;
Carrying out angular point detection on the fitted polygons, and obtaining the tea category and the classification probability corresponding to the tea category by combining the tea sample data statistics according to the number of special angular points obtained by angular point detection;
s105: training an SVM classifier by using 12 relative geometric features and 1 texture feature to obtain a fresh tea classification model based on an SVM;
measuring the distance matrix of the special angular points to obtain a distance matrix characteristic library of a special angular point sequence;
carrying out angular point detection on the tea image to be predicted, and carrying out distance matrix measurement according to the special angular points of the sample to be predicted to obtain a distance matrix of the sample to be predicted;
calculating the similarity of the distance matrix according to the distance matrix of the tea image to be predicted and the distance matrix feature library of the special angular point sequence;
s106: classifying and predicting the tea images to be predicted by using the SVM-based fresh tea classification model to obtain the classification probability of each category of tea based on relative geometric features, textural features and the SVM;
classifying and predicting the tea images to be predicted according to the number of special angular points and the distance matrix similarity of the tea images to be predicted to obtain classification probabilities of various classes of tea based on the special angular points and the distance matrix similarity;
S107: and performing KNN-based result fusion on the classification probability of each class of tea based on the relative geometric features, the textural features and the SVM and the classification probability of each class of tea based on the special angular points and the distance matrix to obtain a final classification result of the tea image to be predicted.
2. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S102, the tea image is preprocessed to obtain a preprocessed tea image, which specifically includes: carrying out gray level transformation on the tea image to obtain a gray level image; performing Gaussian filtering on the gray level image to obtain a filtered and denoised image; and processing the filtered and denoised image by adopting an Otsu algorithm to obtain a preprocessed tea image, namely a binary image.
3. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S103, the 5 relative geometric features are the rectangularity, circularity, sphericity, eccentricity and perimeter concave-convex ratio, calculated by equations (1), (2), (3), (4) and (5):

rectangularity = A / A_MER    (1)

circularity = 4πA / P²    (2)

sphericity = R_MIC / R_MCC    (3)

eccentricity = A / B    (4)

perimeter concave-convex ratio = P / P_CH    (5)

in equation (1), A is the leaf area and A_MER is the area of the minimum circumscribed rectangle; in equation (2), A is the leaf area and P is the leaf perimeter; in equation (3), R_MIC is the maximum inscribed circle radius of the leaf and R_MCC is the minimum circumscribed circle radius of the leaf; in equation (4), A is the long axis of the leaf and B is the short axis of the leaf; in equation (5), P is the leaf perimeter and P_CH is the perimeter of the convex hull of the leaf.
4. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S103, the 7 Hu invariant moment relative geometric features H1-H7 are constructed from the second- and third-order normalized central moments, with calculation formulas as given in equations (6), (7), (8), (9), (10), (11) and (12):

H1 = η20 + η02    (6)

H2 = (η20 - η02)² + 4η11²    (7)

H3 = (η30 - 3η12)² + (3η21 - η03)²    (8)

H4 = (η30 + η12)² + (η21 + η03)²    (9)

H5 = (η30 - 3η12)(η30 + η12)[(η30 + η12)² - 3(η21 + η03)²] + (3η21 - η03)(η21 + η03)[3(η30 + η12)² - (η21 + η03)²]    (10)

H6 = (η20 - η02)[(η30 + η12)² - (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)    (11)

H7 = (3η21 - η03)(η30 + η12)[(η30 + η12)² - 3(η21 + η03)²] - (η30 - 3η12)(η21 + η03)[3(η30 + η12)² - (η21 + η03)²]    (12)

in equations (6) to (12), η_pq = μ_pq / μ_00^((p+q)/2 + 1) is the normalized central moment, where p and q are the orders, p, q = 0, 1, 2, 3 and p + q takes the value 2 or 3; μ_pq = Σ_{x=1..N} Σ_{y=1..M} (x - x̄)^p (y - ȳ)^q f(x, y) is the central moment, where x̄ = m_10 / m_00 and ȳ = m_01 / m_00 are the coordinates of the center of gravity of the image, N and M are the height and width of the image, and f(x, y) is the digital image function of the preprocessed and discretized tea image; m_pq = Σ_{x=1..N} Σ_{y=1..M} x^p y^q f(x, y) is the geometric moment of order p + q of the discretized tea digital image.
5. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S104, extracting texture features of the gray scale image of the tea image, specifically comprising the following steps:
S201: carrying out color space normalization on the gray level image of the tea image by adopting a Gamma correction method, reducing the influence caused by local shadow and illumination change of the image and obtaining the corrected tea image;
s202: dividing the corrected tea image into a plurality of blocks, wherein each block consists of a plurality of cells; each cell size is n × n pixels; each block is m × m cells;
s203: calculating the gradient of each pixel point in each cell, and counting a histogram of the gradient direction of each pixel point of each cell to obtain a descriptor of each cell;
s204: connecting descriptors of each cell in series to obtain HOG descriptors of each block, and meanwhile normalizing gradient strength in each block;
s205: concatenating the HOG descriptors of all blocks of the corrected tea image to obtain the HOG feature of the corrected tea image, namely the 1 texture feature of the tea image.
6. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S105, training an SVM classifier using 12 relative geometric features and 1 texture feature to obtain a fresh tea classification model based on an SVM, specifically: performing parameter optimization over different penalty coefficients, different kernel functions and the parameters corresponding to the kernel functions by grid search and cross validation to determine the optimal penalty coefficient and kernel function of the SVM fresh tea classification model; the different kernel functions include a linear kernel function, a polynomial kernel function and a radial basis kernel function; the parameters corresponding to the kernel functions comprise the kernel parameter gamma and the highest polynomial degree.
7. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S103, performing contour extraction on the preprocessed tea image to obtain the tea leaf contour, and performing polygon fitting according to the tea leaf contour to obtain a fitted polygon, specifically: the contour extraction uses the OpenCV FindContours method, and the polygon is fitted with the Douglas-Peucker algorithm.
8. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S104, performing corner detection on the fitted polygon, and according to the number of special corners obtained by the corner detection, the method for distinguishing the special corners specifically includes: in the fitted polygons, the corner point corresponding to the corner P of the polygon satisfying the condition of the formula (13) is the special corner point:
∠P ≤ A (0° ≤ A ≤ 180°)    (13)

in equation (13), ∠P is the angle of corner P, and A is a preset angle threshold.
9. The method for classifying fresh tea leaves based on multiple features and multiple classifiers according to claim 1, wherein: in step S105, performing distance matrix measurement on the special corner points, and calculating distance matrix similarity, specifically: the distance matrix is as shown in equation (14):
[Equation (14): the distance matrix of the special corner point sequence, shown as an image in the original publication]

in equation (14), d_ij denotes the Euclidean distance between the i-th point and the j-th point in the special corner point sequence, d_ij = sqrt((x_i - x_j)² + (y_i - y_j)²), and d_max denotes the maximum of all d_ij in the matrix;

the distance matrix similarity is as shown in equation (15):

[Equation (15): the distance matrix similarity S, shown as an image in the original publication]

in equation (15), S denotes the distance matrix similarity, D_A denotes the distance matrix of the tea image to be predicted, and D_n denotes the corresponding distance matrix in the distance matrix feature library of the special corner point sequence.
CN202010505288.3A 2020-06-05 2020-06-05 Fresh tea classification method based on multiple features and multiple classifiers Active CN111861103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010505288.3A CN111861103B (en) 2020-06-05 2020-06-05 Fresh tea classification method based on multiple features and multiple classifiers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010505288.3A CN111861103B (en) 2020-06-05 2020-06-05 Fresh tea classification method based on multiple features and multiple classifiers

Publications (2)

Publication Number Publication Date
CN111861103A true CN111861103A (en) 2020-10-30
CN111861103B CN111861103B (en) 2024-01-12

Family

ID=72984994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010505288.3A Active CN111861103B (en) 2020-06-05 2020-06-05 Fresh tea classification method based on multiple features and multiple classifiers

Country Status (1)

Country Link
CN (1) CN111861103B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644235A (en) * 2017-10-24 2018-01-30 广西师范大学 Image automatic annotation method based on semi-supervised learning
CN108154195A (en) * 2018-01-19 2018-06-12 镇江思泊丽农业有限公司 Tealeaves recognition methods and the tealeaves sorting equipment using this method
CN108664927A (en) * 2018-05-10 2018-10-16 林丽惠 Wuyi cliff tea leaf image sorting technique based on full-automatic support vector machines
CN109308697A (en) * 2018-09-18 2019-02-05 安徽工业大学 A kind of leaf disease recognition method based on machine learning algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
燕娅; 周晓锋; 汤哲; 张立; 陈华荣; 周建勇: "Tea image texture classification based on SVM-KNN", China Tea Processing, no. 06
高良; 闫民; 赵方: "Research on plant leaf recognition based on multi-feature fusion", Acta Agriculturae Zhejiangensis, no. 04

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883835A (en) * 2021-01-29 2021-06-01 中南民族大学 Tea quality grade determining method, device and equipment based on computer vision
CN113177103A (en) * 2021-04-13 2021-07-27 广东省农业科学院茶叶研究所 Evaluation comment-based tea sensory quality comparison method and system
CN113477555A (en) * 2021-07-22 2021-10-08 西华大学 Fresh tea sorting machine based on image processing
CN113838123A (en) * 2021-08-16 2021-12-24 湖南磐钴传动科技有限公司 Method for measuring tobacco shred morphology characteristics based on image processing
CN113838123B (en) * 2021-08-16 2024-03-19 湖南磐钴传动科技有限公司 Method for measuring appearance characteristics of cut tobacco based on image processing
CN114022714A (en) * 2021-11-11 2022-02-08 哈尔滨工程大学 Harris-based data enhanced image classification method and system
CN114022714B (en) * 2021-11-11 2024-04-16 哈尔滨工程大学 Harris-based data enhanced image classification method and system
CN115049853A (en) * 2022-04-14 2022-09-13 鼎云(上海)科技有限公司 Tobacco leaf curl invariant characteristic feature extraction method and storage medium
CN115439524A (en) * 2022-09-07 2022-12-06 北京爱科农科技有限公司 Blade parameter calculation method, medium, and computer device
CN116935235A (en) * 2023-09-19 2023-10-24 深圳市索威尔科技开发有限公司 Fresh tea leaf identification method and related device based on unmanned tea picking machine
CN116935235B (en) * 2023-09-19 2024-04-05 深圳市索威尔科技开发有限公司 Fresh tea leaf identification method and related device based on unmanned tea picking machine

Also Published As

Publication number Publication date
CN111861103B (en) 2024-01-12

Similar Documents

Publication Publication Date Title
CN111861103B (en) Fresh tea classification method based on multiple features and multiple classifiers
CN112418117B (en) Small target detection method based on unmanned aerial vehicle image
CN105488536B (en) A kind of agricultural pests image-recognizing method based on multiple features depth learning technology
CN104091321B (en) It is applicable to the extracting method of the multi-level point set feature of ground laser radar point cloud classifications
CN102842032B (en) Method for recognizing pornography images on mobile Internet based on multi-mode combinational strategy
Wahab et al. Detecting diseases in chilli plants using K-means segmented support vector machine
CN108197538A (en) A kind of bayonet vehicle searching system and method based on local feature and deep learning
CN102324032B (en) Texture feature extraction method for gray level co-occurrence matrix in polar coordinate system
CN109784392A (en) A kind of high spectrum image semisupervised classification method based on comprehensive confidence
CN107679509A (en) A kind of small ring algae recognition methods and device
CN106845528A (en) A kind of image classification algorithms based on K means Yu deep learning
CN103679187B (en) Image-recognizing method and system
Alsmadi et al. Fish recognition based on the combination between robust feature selection, image segmentation and geometrical parameter techniques using Artificial Neural Network and Decision Tree
CN112016574B (en) Image classification method based on feature fusion
CN112733936A (en) Recyclable garbage classification method based on image recognition
CN104899595A (en) Male and female silkworm chrysalis sorting and counting device based on SIFT (Scale Invariant Feature Transform) feature image
CN116612307A (en) Solanaceae disease grade identification method based on transfer learning
CN109347719A (en) A kind of image junk mail filtering method based on machine learning
CN111046838A (en) Method and device for identifying wetland remote sensing information
Nga et al. Combining binary particle swarm optimization with support vector machine for enhancing rice varieties classification accuracy
CN104732246B (en) A kind of semi-supervised coorinated training hyperspectral image classification method
CN112014804B (en) Radar signal sorting method based on bionic pattern recognition algorithm of ball covering
CN109829511B (en) Texture classification-based method for detecting cloud layer area in downward-looking infrared image
Bai et al. Recognition of bovine milk somatic cells based on multi-feature extraction and a GBDT-AdaBoost fusion model
CN105844299A (en) Image classification method based on bag of words

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared