CN113361655B - Differential fiber classification method based on residual error network and characteristic difference fitting - Google Patents


Info

Publication number
CN113361655B
CN113361655B (application CN202110783128.XA)
Authority
CN
China
Prior art keywords
network
classification
cottonnet
layer
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110783128.XA
Other languages
Chinese (zh)
Other versions
CN113361655A (en
Inventor
魏巍
曾霖
张晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhimu Intelligent Technology Partnership LP
Original Assignee
Wuhan Zhimu Intelligent Technology Partnership LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhimu Intelligent Technology Partnership LP filed Critical Wuhan Zhimu Intelligent Technology Partnership LP
Priority to CN202110783128.XA priority Critical patent/CN113361655B/en
Publication of CN113361655A publication Critical patent/CN113361655A/en
Application granted granted Critical
Publication of CN113361655B publication Critical patent/CN113361655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a differential fiber classification method based on a residual network and feature difference fitting, relating to the field of cotton foreign fiber classification. It aims to solve the problem that foreign fibers in raw cotton are very similar to cotton in image characteristics, which makes classification and identification difficult. The proposed scheme comprises the following steps: S1, designing a foreign fiber classification base network, CottonNet, for classifying and identifying foreign fibers and cotton from image features; S2, introducing a residual network on the basis of the CottonNet in S1 and defining a residual module to obtain an improved network structure, CottonNet-Res; and S3, for foreign fiber classification in complex environments, obtaining an improved algorithm, CottonNet-Fusion, through feature difference fitting. The classification accuracy of the proposed method is improved to 97.4%, and it still maintains a Top-1 classification accuracy of 90.3% on foreign fiber data from complex environments.

Description

Differential fiber classification method based on residual error network and characteristic difference fitting
Technical Field
The invention relates to the field of cotton foreign fiber classification, in particular to a foreign fiber classification method based on residual error network and characteristic difference fitting.
Background
Cotton is one of the nation's important strategic materials; it is closely tied to people's daily life and plays an important role in the national economy. Various foreign fibers are inevitably mixed in during the planting, transportation and production of cotton. If they are not removed in time, they can damage textile machinery, reduce the quality of the final cotton textile products, and cause economic loss.
The cotton foreign fiber detection and identification method based on machine vision has received much attention, and from the perspective of machine vision, foreign fiber classification is a typical image classification problem.
At present, image classification techniques in deep learning are attracting increasingly wide attention. Such methods automatically perform feature extraction and mathematical modeling through a given algorithm and learn features from the raw training data end to end. Unlike the manually designed features of classical image processing algorithms, the shallow layers of a deep network detect low-order features such as edges and textures, and as the network deepens, more abstract information in the image features can be learned.
The task of identifying and classifying foreign fibers in a real environment is not simple: a certain proportion of foreign fiber images are disturbed by the complex environment, with heavy noise interference and images weakened by insufficient cleaning, so manually designed features struggle to describe the targets accurately. Designing a foreign fiber feature extraction operator with traditional methods therefore faces great challenges, and it is difficult to build a sufficiently general operator that extracts foreign fiber image features against all kinds of backgrounds. Meeting application requirements this way would demand tuning too many hyper-parameters for foreign fiber classification in each actual scene, greatly reducing the algorithm's applicability.
For categories such as mulching film and polypropylene fiber, the image characteristics of pseudo-foreign-fiber impurities such as yellow cotton and cotton stalks are very close to those of real foreign fibers, so attention must be paid to fine target features in the image, which places certain requirements on network depth. Generally speaking, the deeper the network, the more high-level the extracted semantic features and the more meaningful the discrimination of details. However, deepening the network also increases the computation and brings problems such as vanishing gradients and reduced prediction accuracy. To solve these problems, a foreign fiber classification method based on a residual network and feature difference fitting is designed.
Disclosure of Invention
The invention aims to solve the prior-art problem that foreign fibers in raw cotton are very similar to cotton in image characteristics, making classification and identification difficult, and provides a foreign fiber classification method based on a residual network and feature difference fitting.
In order to achieve the purpose, the invention adopts the following technical scheme:
a differential fiber classification method based on residual error network and characteristic difference fitting comprises the following steps:
s1, designing a foreign fiber classification basic network CottonET for classifying and identifying foreign fibers and cotton on the image characteristics;
the CottonNet comprises 6 convolutional layers, parameters of the CottonNet adopt a tower structure, the front two layers of the CottonNet are mainly used for extracting bottom layer features, the four convolutional layers are used for extracting high layer features of an image, the last convolutional layer is not subjected to pooling, and a nonlinear activation function LeakyReLU is added to the output part of each convolutional layer;
the CottonNet classification model uses Softmax as a classifier, an input sample is assumed to be I, the input sample is an image or a vector, and after convolutional neural network operation, an output vector f ═ a is obtained 1 ,a 2 ,...,a N ]The probability of belonging to a given category k is:
Figure GDA0003766667160000031
s2, introducing a residual error network on the basis of the CottonNet in S1, and defining a residual error module to obtain an improved network structure CottonNet-Res, wherein the CottonNet-Res adopts the last four layers of convolutional layers to enhance the high-layer feature extraction capability and improve the accuracy of the different fiber classification;
the residual error module cascades the module input characteristics and the final output characteristics, wherein two layers of convolution are selected as a basic residual error module, the two layers of convolution are respectively defined as Layer1 and Layer2, and the output of the input characteristics after two layers of processing is respectively omega 1 And omega 2 The output is characterized by omega 0 =ω 12
S3, aiming at the different fiber classification under the complex environment, obtaining an improved algorithm CottonNet-Fusion through feature difference fitting on the basis of CottonNet-Res in S2;
the CottonNet-Fusion is characterized in that extra network branch training is designed for samples in a complex environment, feature Fusion is carried out on the samples and normal picture training branches in the training process, the mean square error of feature output of a main network and branch networks is calculated in the output stage and serves as a feature difference to fit two branch output feature vectors, feature Fusion is carried out on the output of the two branches through a convolution layer, the final classification result uses the label of a normal sample, namely the whole network uses the feature difference and the classification loss to constrain training, and the improvement of the classification accuracy in the complex environment is achieved.
Preferably, in the CottonNet-Res, assuming there is no cascaded residual branch, the output of an ordinary single-layer network can be expressed as formula (2-2):

$$a_l = g(H(a_{l-1})) \quad (2\text{-}2)$$
h is convolution related operation of a single-layer network, g represents a single-layer output activation function, the convolution neural network is composed of an expression (2-2), and more characteristic operators about the image can be obtained by adding the expression (2-2), namely, the neural network is deepened; the parameter training of the network is optimized by a back propagation algorithm based on gradient, and the specific training process is as follows: the current layer uses a forward propagation input signal, then a backward propagation error and a function derivation method are used to obtain a gradient, so that the current layer parameters are updated, the parameter update of the current layer l needs to calculate the derivative with the loss being equal to, and an error term is defined as:
Figure GDA0003766667160000042
According to the chain rule of back propagation, $\delta_l$ depends on the error term $\delta_{l+1}$ of the next layer, and (2-3) can be rewritten as:

$$\delta_l = \delta_{l+1} \cdot \frac{\partial z_{l+1}}{\partial z_l} \quad (2\text{-}4)$$
Defining the propagation gradient:

$$\gamma_l = \left\lVert \frac{\partial z_{l+1}}{\partial z_l} \right\rVert \quad (2\text{-}5)$$
When $\gamma_l < 1$, the error term of layer l is smaller than that of layer l+1; under the back-propagation algorithm the gradient shrinks layer by layer until the parameters stop updating, i.e. the gradient vanishes. When $\gamma_l > 1$, the gradient grows layer by layer, i.e. the gradient explosion phenomenon. As the network deepens, the value of $\gamma_l$ is uncertain; the residual network was proposed to address this problem. The residual network constructs a residual branch within the base network, directly concatenating input and output features through a skip connection; the output of a residual module is expressed as formula (2-6):

$$z_l = H(a_{l-1}) = a_{l-1} + F(a_{l-1}) \quad (2\text{-}6)$$

where $F(a_{l-1})$ is the residual function. The input feature $a_{l-1}$ can be propagated directly from any lower layer of the network to a higher layer.
Preferably, the network input of the CottonNet-Fusion is a picture pair consisting of a sample from a normal environment and a sample from a complex environment. Assume the input picture pair is (s, c), where s is the normal sample and c is the complex sample image, generated from s by a series of random image-blurring and image-noise perturbation methods. The feature output vectors learned by the two network branches are $(f_s, f_c)$, and the mean square error loss between them is computed as formula (2-7):

$$L_{\mathrm{mse}} = \frac{1}{n} \sum_{i=1}^{n} \left( f_s^{(i)} - f_c^{(i)} \right)^2 \quad (2\text{-}7)$$
The final loss function is defined as formulas (2-8) and (2-9):

$$L_{\mathrm{cls}} = -\sum_{k=1}^{N} y_k \log P(k \mid I) \quad (2\text{-}8)$$

$$L = \alpha L_{\mathrm{cls}} + \beta L_{\mathrm{mse}} \quad (2\text{-}9)$$

where α and β are the weights of the two loss terms.
The beneficial effects of the invention are as follows: the invention designs a basic foreign fiber classification network, CottonNet, that balances performance and efficiency, reaching 94.2% classification accuracy on the validation set; to enhance high-level feature extraction, the invention improves the classification network on the basis of residual-network feature fusion, proposing CottonNet-Res and raising the accuracy to 95.1%; and for the problem of foreign fiber classification in complex environments, a classification model based on feature difference fitting, CottonNet-Fusion, is proposed, raising the classification accuracy to 97.4% while still maintaining a Top-1 accuracy of 90.3% on foreign fiber data from complex environments.
Drawings
FIG. 1 is a schematic diagram of a heterogeneous fiber classification infrastructure according to the present invention;
FIG. 2 is a schematic diagram of a residual module according to the present invention;
FIG. 3 is a schematic diagram of a CottonNet-Res network structure based on a residual error module according to the present invention;
FIG. 4 is a schematic diagram of the CottonNet-Fusion network structure based on feature difference fitting according to the present invention;
fig. 5 shows examples of pseudo-foreign-fiber images of the present invention (cnf_0: yellow cotton; cnf_1: crushed cotton leaves; cnf_2: cotton stalks);
FIG. 6 is a diagram showing the classification accuracy of CottonNet-Fusion models with different weights according to the present invention;
FIG. 7 is a diagram illustrating the effect of Gaussian noise of different variances on normal samples according to the present invention;
FIG. 8 is a schematic diagram of Gaussian noise accuracy ratio comparisons for different variances according to the present invention;
FIG. 9 is a schematic illustration of different levels of image blur according to the present invention;
FIG. 10 is a diagram illustrating the effect of different levels of blurring on classification accuracy according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
A differential fiber classification method based on residual error network and characteristic difference fitting comprises the following steps:
s1, designing a foreign fiber classification basic network CottonNet for classifying and identifying foreign fibers and cotton on the image characteristics;
s2, introducing a residual error network on the basis of the CottonNet in S1, and defining a residual error module to obtain an improved network structure CottonNet-Res, wherein the CottonNet-Res adopts the last four layers of convolutional layers to enhance the high-level feature extraction capability and improve the accuracy of foreign fiber classification;
s3, aiming at the different fiber classification under the complex environment, obtaining an improved algorithm CottonNet-Fusion through feature difference fitting on the basis of CottonNet-Res in S2;
referring to fig. 1, the invention designs a foreign fiber classification basic network CottonNet, which mainly comprises 6 convolutional layers, and related parameters use a tower structure and are suitable for dynamic adjustment. The first two layers are mainly used for extracting bottom layer features, the four layers of convolution layers are used for extracting high-layer features of the image, and in order to keep more high-layer information, the last convolution layer is not pooled. The nonlinear activation function LeakyReLU is added to the output part of each convolution layer, and compared with the traditional ReLU, the phenomenon that when the convolution output is negative, neurons die is solved
The CottonNet classification model uses Softmax as the classifier. Assume the input sample is I (an image or a vector); after the convolutional neural network operations, an output vector $f = [a_1, a_2, \ldots, a_N]$ is obtained, and the probability of belonging to a given category k is:

$$P(k \mid I) = \frac{e^{a_k}}{\sum_{j=1}^{N} e^{a_j}} \quad (2\text{-}1)$$
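The Softmax mapping above can be sketched in a few lines of NumPy; the example vector is illustrative:

```python
import numpy as np

def softmax(f):
    # Eq. (2-1): P(k | I) = exp(a_k) / sum_j exp(a_j), in a numerically stable form.
    f = np.asarray(f, dtype=float)
    e = np.exp(f - f.max())   # subtracting the max does not change the result
    return e / e.sum()

f = np.array([2.0, 1.0, 0.1])  # example output vector [a_1, ..., a_N]
p = softmax(f)                 # class probabilities, summing to 1
```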
referring to fig. 2, the invention defines a residual module, generally speaking, the deeper the network layer number, the stronger the nonlinear expression capability, and the more complex expression function can be learned, so as to fit finer features. But the deepening of the network can bring the problems of unstable gradient and network degradation. The introduction of the residual error network well solves the problem of gradient disappearance caused by network deepening, and meanwhile, the network learning capability is enhanced. The method can learn more comprehensive and finer image characteristics by means of the idea of the residual error network, improve the classification accuracy and meet the detection and classification requirements under the cotton textile industry environment.
The main idea of the residual module is to concatenate the module's input features with its final output features. In theory the number of stacked convolutional layers in the module is not limited, but considering the computation cost, two convolutional layers are chosen as the basic residual block. The two layers are defined as Layer1 and Layer2 respectively, and the outputs of the input feature after the two layers of processing are $\omega_1$ and $\omega_2$ respectively. The output feature $\omega_0 = \omega_1 + \omega_2$ transfers the identity-mapping learning approach to subsequent network layers and achieves finer image feature extraction.
Assuming there is no cascaded residual branch, the output of an ordinary single-layer network can be expressed as formula (2-2):

$$a_l = g(H(a_{l-1})) \quad (2\text{-}2)$$
where H is the convolution-related operation of the single-layer network (different convolution modules have different operation rules) and g is the single-layer output activation function. The convolutional neural network is composed of instances of formula (2-2); to obtain more feature operators describing the image, more instances of (2-2) are stacked, i.e. the neural network is deepened. The network parameters are optimized by a gradient-based back-propagation algorithm; the specific training process is: the current layer takes the forward-propagated input signal, then uses the back-propagated error and function differentiation to obtain the gradient and update the current layer's parameters. Updating the parameters of the current layer l requires the derivative of the loss $\ell$ with respect to that layer, and the error term is defined as:

$$\delta_l = \frac{\partial \ell}{\partial z_l} \quad (2\text{-}3)$$
According to the chain rule of back propagation, $\delta_l$ depends on the error term $\delta_{l+1}$ of the next layer, and (2-3) can be rewritten as:

$$\delta_l = \delta_{l+1} \cdot \frac{\partial z_{l+1}}{\partial z_l} \quad (2\text{-}4)$$
Defining the propagation gradient:

$$\gamma_l = \left\lVert \frac{\partial z_{l+1}}{\partial z_l} \right\rVert \quad (2\text{-}5)$$
When $\gamma_l < 1$, the error term of layer l is smaller than that of layer l+1; under the back-propagation algorithm the gradient shrinks layer by layer until the parameters stop updating, i.e. the gradient vanishes. When $\gamma_l > 1$, the gradient grows layer by layer, i.e. the gradient explosion phenomenon. As the network deepens, the value of $\gamma_l$ is uncertain; the residual network was proposed to address this problem. The residual network constructs a residual branch within the base network, directly concatenating input and output features through a skip connection; the output of a residual module is expressed as formula (2-6):

$$z_l = H(a_{l-1}) = a_{l-1} + F(a_{l-1}) \quad (2\text{-}6)$$

where $F(a_{l-1})$ is the residual function. As formula (2-6) shows, the input feature $a_{l-1}$ can be propagated directly from any lower layer of the network to any higher layer, which alleviates, to a certain extent, the gradient instability that arises as the network deepens. With the residual module added, feature information in the convolutional neural network propagates more smoothly in both the forward and backward directions.
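A two-convolution residual block of this shape can be sketched in PyTorch as follows; the channel count and activation placement are assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two stacked convolutions (Layer1, Layer2) with a skip connection,
    # computing z_l = a_{l-1} + F(a_{l-1}) as in Eq. (2-6).
    def __init__(self, channels=64):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1))
        self.layer2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        residual = self.layer2(self.layer1(x))  # F(a_{l-1})
        return self.act(x + residual)           # identity path + residual path

block = ResidualBlock(channels=64)
x = torch.randn(2, 64, 16, 16)
y = block(x)   # same shape as the input, thanks to the padding
```

Because the identity path carries the input through unchanged, gradients can flow back to shallow layers even when the convolutional path contributes little.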
Referring to fig. 3, the improved network structure is named CottonNet-Res. The bottom-level feature extraction network (Lowlevel-CONV) is unchanged; the residual modules are mainly fused into the high-level feature extraction part. Because conventional foreign fiber input pictures are small in scale, interpolation-based enlargement easily distorts the image and loses details. Therefore, the model directly uses foreign fiber pictures at their original scale during training and inference.
In an actual production environment, a great number of different-fiber images are acquired under a complex light path environment, such as different-fiber images under the scenes of cleaning failure, aging of a light supplement lamp, aging of a camera and the like.
The condition that cleaning is not in place generally means that due to the influence of dust in the actual environment, workers cannot clean timely, image blurring is caused, and details are lost or even cannot be recognized completely. The light supplement lamp of the collecting device is generally a common fluorescent lamp, and the light intensity can be attenuated along with the increase of the working time, so that the image background is dark integrally. The collection camera is used as an electronic device, and the aging phenomenon also exists, so that the camera gain is too large, the background noise is increased, and the target details are seriously interfered.
In the training algorithm of the foreign fiber classification network, foreign fiber images from complex environments are trained together with normal images, alleviating the drop in accuracy under complex environments by increasing sample diversity. In many cases, however, the details of complex-environment images are so blurred that network training becomes difficult and classification accuracy improves little; the complex-environment foreign fiber images can even mislead the classification network and harm the recognition of normal samples.
Aiming at the problem of different fiber classification in a complex environment, the invention provides a model improvement algorithm based on feature difference fitting. The main strategy of the algorithm is to design an additional network branch training for a sample in a complex environment and perform feature fusion with a normal picture training branch in the training process. In the output stage, calculating the feature output mean square error of the main network and the branch network, and fitting two branch output feature vectors as a feature difference; the outputs of the two branches are feature fused by one convolutional layer. The final classification result uses the label of a normal sample, namely the whole network uses the characteristic difference and the classification loss to constrain the training, and the improvement of the classification accuracy under the complex environment is realized.
Referring to fig. 4, the network structure based on feature difference fitting is named CottonNet-Fusion. The improved network input is a picture pair consisting of a sample from a normal environment and a sample from a complex environment. Assume the input image pair is (s, c), where s is the normal sample and c is the complex sample image, generated from s by a series of random image-blurring and image-noise perturbation methods. The feature output vectors learned by the two network branches are $(f_s, f_c)$, and the mean square error loss between them is computed as formula (2-7):

$$L_{\mathrm{mse}} = \frac{1}{n} \sum_{i=1}^{n} \left( f_s^{(i)} - f_c^{(i)} \right)^2 \quad (2\text{-}7)$$
The final loss function is defined as formulas (2-8) and (2-9):

$$L_{\mathrm{cls}} = -\sum_{k=1}^{N} y_k \log P(k \mid I) \quad (2\text{-}8)$$

$$L = \alpha L_{\mathrm{cls}} + \beta L_{\mathrm{mse}} \quad (2\text{-}9)$$

where α and β are the weights of the two loss terms.
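A minimal NumPy sketch of this combined objective, assuming a cross-entropy classification term and placeholder weight values (the patent does not give concrete α, β):

```python
import numpy as np

def feature_mse(f_s, f_c):
    # Eq. (2-7): mean square error between the two branches' feature vectors.
    f_s = np.asarray(f_s, dtype=float)
    f_c = np.asarray(f_c, dtype=float)
    return float(np.mean((f_s - f_c) ** 2))

def cross_entropy(probs, label):
    # Classification loss on the fused output, using the normal sample's label.
    return float(-np.log(probs[label]))

def total_loss(probs, label, f_s, f_c, alpha=1.0, beta=0.5):
    # Eq. (2-9)-style combination: L = alpha * L_cls + beta * L_mse.
    # alpha and beta here are placeholder values, not the patent's settings.
    return alpha * cross_entropy(probs, label) + beta * feature_mse(f_s, f_c)
```

The feature-difference term pulls the complex-branch features toward the normal-branch features, while the classification term keeps the fused output tied to the normal sample's label.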
The CottonNet-Fusion itself does not improve the network module, and the core network structure is still CottonNet-Res. The method based on the feature difference fitting is mainly inspired by semi-supervised learning, a teacher model is classified by normal samples to supervise a student model of the complex environment samples, and features of the two models are further fused so as to improve the classification accuracy under the complex environment.
In order to prove the accuracy of the method for classifying the foreign fibers, the inventor carries out a large number of experiments, and experimental data and processes are as follows;
as shown in fig. 5, there are a lot of false foreign fiber impurities in the raw cotton, which do not affect the quality of the cotton, but they are closer to the real foreign fiber in the image, and if there is no good image classification algorithm, it is easy to cause too many false detections in various foreign fiber sorting devices, resulting in more waste of the raw cotton. Therefore, a large number of false foreign fiber images are collected, a reasonable classification algorithm is designed, and the removal of false foreign fiber impurities is very necessary for the image interference of real foreign fibers. FIG. 5 is a picture of a collected pseudo-foreign fiber sample, which is mainly composed of yellow cotton, broken cotton leaves, cotton stalks and the like.
Compared with the traditional algorithm
In order to verify the effectiveness of the algorithm, the inventor selects a relatively representative standard foreign fiber classification method at home and abroad for comparison. The different fiber classification methods are based on a classical machine learning algorithm, and establish a different fiber data knowledge base through an artificial feature design mode, so that the algorithm classification capability is enhanced while the knowledge base is perfected. Conventional different fiber artificial features include color feature, shape feature and texture feature information.
The color characteristics comprehensively consider the color data of each pixel in the different fiber image, and belong to a global characteristic information extraction operator in the field of classical image processing. Common color feature extraction methods are histogram statistics and color moment calculation. The histogram is based on the RGB space, and the number of pixel points in each color interval is counted. The color moment makes up the defect of incomplete distribution information in a common color histogram, and the first-order moment and the second-order moment of each channel of the RGB space are calculated to reflect the color distribution information in the image. The color moment calculation formulas are shown as formulas (2-12) and (2-13).
$$\mu_i = \frac{1}{N} \sum_{j=1}^{N} L_{ij} \quad (2\text{-}12)$$

$$V_i = \left( \frac{1}{N} \sum_{j=1}^{N} (L_{ij} - \mu_i)^2 \right)^{1/2} \quad (2\text{-}13)$$
where i denotes a color channel (R, G or B) of the image, j denotes a pixel location, $L_{ij}$ is the brightness value of pixel j in channel i, and N is the total number of pixels. The computed second moment $V_i$ serves as the color-moment grade of a foreign fiber belonging to a given target class.
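The two color moments can be computed per channel with NumPy; the toy image below is illustrative:

```python
import numpy as np

def color_moments(img):
    # Per-channel color moments of an H x W x 3 RGB image:
    # first moment  mu_i = (1/N) * sum_j L_ij                   (Eq. 2-12)
    # second moment V_i  = sqrt((1/N) * sum_j (L_ij - mu_i)^2)  (Eq. 2-13)
    pix = img.reshape(-1, 3).astype(float)  # N x 3 list of pixels
    mu = pix.mean(axis=0)
    V = np.sqrt(((pix - mu) ** 2).mean(axis=0))
    return mu, V

img = np.zeros((4, 4, 3))
img[:, :2, 0] = 255.0   # toy image: half of the red channel saturated
mu, V = color_moments(img)
```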
Texture features of an image reflect the roughness of an object surface, directional characteristics across the image, and the regularity of certain distributions. Texture is described using the gray values of the image; common description operators include gray-level histogram statistics, low-order gray moments, and gray distribution entropy. The calculation formulas are given as formulas (2-14) to (2-17):

$$m = \sum_{i=0}^{L-1} L_i H(L_i) \quad (2\text{-}14)$$

$$\sigma^2 = \sum_{i=0}^{L-1} (L_i - m)^2 H(L_i) \quad (2\text{-}15)$$

$$R(L) = 1 - \frac{1}{1 + \sigma^2} \quad (2\text{-}16)$$

$$e(L) = -\sum_{i=0}^{L-1} H(L_i) \log_2 H(L_i) \quad (2\text{-}17)$$
where $L_i$ is the gray-level vector of the image, $H(L_i)$ is the normalized histogram statistic corresponding to gray level $L_i$, and L denotes the number of gray levels. Formula (2-14) gives the average gray level, representing the smoothness of the target among the texture features. Formula (2-15) gives the gray-level second moment, used to measure the contrast of the input image. Formula (2-16) reflects the roughness of the image through gray-level statistics: the coarser the foreign fiber target texture, the larger R(L), i.e. the greater the degree of gray-level variation. Formula (2-17) uses the average gray-level entropy to characterize the distribution of the image, i.e. a statistic of the image's information content: the richer the image information, the larger the computed e(L).
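These histogram-based texture statistics can be sketched with NumPy; the variance normalization inside the roughness measure is a common convention assumed here, not stated in the patent:

```python
import numpy as np

def texture_features(gray, levels=256):
    # Histogram-based texture statistics of a grayscale image, following
    # Eqs. (2-14)-(2-17): average gray m, gray second moment sigma^2,
    # roughness R(L), and gray-level entropy e(L).
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                   # normalized H(L_i)
    L = np.arange(levels, dtype=float)
    m = float(np.sum(L * p))                # (2-14) average gray level
    var = float(np.sum((L - m) ** 2 * p))   # (2-15) gray second moment
    # normalize the variance so R stays in [0, 1) -- an assumed convention
    R = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)   # (2-16) roughness
    nz = p[p > 0]
    e = float(-np.sum(nz * np.log2(nz)))    # (2-17) entropy
    return m, var, R, e
```

A perfectly uniform image has zero variance, zero roughness and zero entropy, as the formulas predict.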
Morphological features of an image are straightforward to describe; they are generally computed from the target's contour and perimeter. In addition, the connectivity of foreign fiber targets can be counted from their distribution characteristics. Common shape feature extraction operators are given by formulas (2-18), (2-19) and (2-20).
F = P² / (4πA)    (2-18)
Where P is the sample target pixel perimeter, A is the target pixel area, and F represents the morphological coefficient.
E_c = c / a    (2-19)
Wherein c is the distance from the center of the target to its centroid, and a is the distance from the center to a vertex of the target's bounding rectangle. The computed E_c is the eccentricity of the image.
E_u = 1 − H    (2-20)
Wherein H is the number of holes in the target foreign fiber (as in some block-shaped foreign fibers); the computed E_u, called the Euler number, is typically used for statistics of target connectivity.
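The three shape operators (2-18) to (2-20) can be sketched directly; the function names below are illustrative, and the arguments follow the definitions in the text (perimeter P, area A, distances c and a, hole count H).

```python
# Illustrative sketch of the shape operators (2-18)-(2-20); not patent code.
import math

def form_factor(perimeter, area):
    """(2-18): morphological coefficient F; equals 1.0 for a perfect circle."""
    return perimeter ** 2 / (4 * math.pi * area)

def eccentricity(c, a):
    """(2-19): ratio of center-to-centroid distance to center-to-vertex distance."""
    return c / a

def euler_number(holes):
    """(2-20): Euler number of a single connected target with `holes` holes."""
    return 1 - holes
```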
The classification algorithm based on the convolutional neural network exceeds the traditional algorithm in every category. The traditional SVM algorithm maintains over 80% classification accuracy in categories with salient color features, such as cloth blocks and paper scraps, but cannot meet application requirements in categories whose features are difficult to extract, such as yellow cotton, mulching film and cotton stalks.
This comparison with the traditional algorithm accords with the current trend in image processing: on a data set of a certain scale that contains easily confused image classes, processing algorithms based on convolutional neural networks outperform traditional processing algorithms.
Comparison with classical networks
Table 2-1 lists the comparison between the network of the present invention and its improvements and the classical networks AlexNet, VGG16, ResNet18, ResNet50, DenseNet201, etc., including model size, parameter computation amount and classification accuracy. The parameter computation amount mainly counts the number of multiply-accumulate operations (MACs) in the convolutional neural network. The networks used by the invention comprise the basic structure CottonNet, a 6-layer convolutional neural network formed by serially connecting ordinary tower-shaped convolutional layers; CottonNet-Add, which adds two additional high-level layers (1024 kernels each) to the basic structure, forming a tower structure with the preceding layer, to evaluate the influence of a deepened network on classification accuracy; and CottonNet-Res, which improves the high-level structure of the basic network CottonNet by using residual modules to enhance the network's ability to extract image details.
TABLE 2-1 comparison of the results of the classification on the differential fiber data set (best results are shown bold)
The following conclusions are drawn from Table 2-1: (1) Compared with the large classical networks, the foreign-fiber classification model CottonNet-Fusion designed on the basis of the residual network has the highest accuracy, exceeding AlexNet by 9.6%, with Top-1 accuracy reaching 97.4%. (2) A deeper network does not guarantee higher model accuracy; for example, CottonNet has only 1/4 the depth of VGG16 yet exceeds its accuracy by 3.7%. (3) Residual or bypass feature fusion improves classification accuracy: the ResNet and DenseNet series are much more accurate than traditional stacking of single convolutional layers. (4) CottonNet-Add adds a high-level feature extractor, but simply deepening the network does not improve accuracy; visual analysis shows that the deepened layers do not attend to the foreign-fiber target. (5) Adding the residual network strengthens the model's attention to target details, which is the key to the final accuracy improvement. (6) The basic network structure CottonNet has the smallest parameter count and higher computational efficiency than the classical networks, and CottonNet-Res with the added residual module meets the requirements of foreign-fiber classification.
Model anti-interference capability test and comparison under complex environment
To evaluate the performance and robustness of the model in complex environments, the inventors designed an anti-interference test experiment. The experimental data set comprises real foreign-fiber images captured in complex environments, with scenes including darkened images, blurred images, abnormally increased background noise, etc.; it also comprises simulated complex-environment data obtained by blurring and adding noise to normal samples. The anti-interference performance of the model is comprehensively evaluated by testing on both data sets.
The feature-difference-fitting model CottonNet-Fusion contains two hyper-parameters, α and β, which weight the classification losses of normal and complex samples. The influence of the two hyper-parameters on network performance is evaluated by classification accuracy on the complex-environment foreign-fiber data set. Five combinations represent the hyper-parameter values that might be chosen in practice: α = 0, β = 1 classifies foreign-fiber samples with the complex-environment samples removed entirely; α = 0.3, β = 0.7 adds a partial complex-sample weight while retaining most of the normal-sample classification result, i.e., the supervisory information comes mainly from normal samples; α = 0.5, β = 0.5 gives complex and normal samples equal weight; α = 0.7, β = 0.3 draws the supervisory information mainly from complex samples; α = 1, β = 0 no longer uses any normal-sample classification results.
As can be seen from FIG. 6: (1) Reducing β (the normal-sample classification weight) strongly affects classification accuracy; the accuracy is highest (90.3%) when α = 0.3 and β = 0.7, with the classification model weighted heavily toward the labels of normal samples, i.e., the normal samples provide the network supervision. (2) When α = 1 and β = 0, the accuracy is lowest: the classification model then takes the labels and features of complex samples entirely as supervisory information, degrading its performance. Completely removing the complex-sample classification labels also gives slightly low accuracy (84.5%), indicating that fusing the classification features of complex samples is meaningful for improving accuracy. Based on these experimental conclusions, the subsequent research of the invention takes α = 0.3 and β = 0.7 as the default hyper-parameters.
The model comparison experiment mainly targets two models, CottonNet-Res and CottonNet-Fusion: CottonNet-Res is trained with the traditional method, CottonNet-Fusion with the feature-difference fitting method, and the classical baseline is ResNet18, which has excellent overall performance.
(1) Testing of heterogeneous data sets in real complex environments
The results of the classification accuracy comparisons are shown in Table 2-2.
TABLE 2-2 comparison of results of classification of different fiber data in complex environment (best results are shown in bold)
Table 2-2 shows that the accuracy of foreign-fiber classification in the complex environment drops relative to all the previous data. CottonNet-Fusion, trained with feature-difference fitting, performs best, exceeding the second-place classical network ResNet18 by approximately 10 percentage points. On the full data set the classification performance of CottonNet-Res is 97.6% of that of CottonNet-Fusion, but on the complex-environment data set it falls to 89% of CottonNet-Fusion. The feature-difference fitting training of CottonNet-Fusion therefore provides stronger anti-interference capability on the complex-environment foreign-fiber data set.
(2) Normal sample additive noise test
Images are typically corrupted with additive Gaussian noise, which matches the distribution of camera sensor noise. The classification accuracy of each model is evaluated under Gaussian noise of different levels. This test is limited to normal-sample scenarios.
The Gaussian noise is calculated as:

P(z) = (1 / (√(2π) · δ)) · exp(−(z − μ)² / (2δ²))    (2-21)
Referring to FIG. 7, I(i, j) denotes the pixel value at input position (i, j), and μ and δ denote the mean and variance of the Gaussian distribution function, respectively; the variance has the greater influence on normal samples. FIG. 7 shows the effect of Gaussian noise of different variances on normal samples. The image is visibly distorted at δ = 8; the invention sets δ from level 1 to level 12 and observes the noise resistance of the models.
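The noise-injection step above can be sketched as follows; this is an illustrative example, assuming each pixel is perturbed by a sample from a Gaussian distribution and clipped to the 8-bit range (the clipping is an assumption not stated in the text).

```python
# Hedged sketch of additive Gaussian noise per equation (2-21).
import random

def add_gaussian_noise(image, mu=0.0, delta=8.0, seed=0):
    """image: 2-D list of pixel values I(i, j); delta: noise level."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [[min(255, max(0, round(v + rng.gauss(mu, delta))))
             for v in row] for row in image]

noisy = add_gaussian_noise([[100, 200], [0, 255]], delta=8.0)
```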
FIG. 8 shows the effect of Gaussian noise of different variance levels on model classification accuracy. The CottonNet-Fusion curve drops most slowly: even with the Gaussian noise variance level raised to 12, its classification accuracy approaches 80% (77.8%), while the other models have already fallen below 70%. Its noise immunity far exceeds that of the other models.
(3) Normal sample fuzz testing
Image blurring is performed by taking the inner product of a Gaussian kernel of radius R with the image; the calculation is given in equations (2-22) and (2-23).
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (2-22)

I'(i, j) = Σ_{x=−R}^{R} Σ_{y=−R}^{R} G(x, y) · I(i + x, j + y)    (2-23)
Where R is the radius of the Gaussian kernel and σ is its standard deviation, typically 1.0; x and y are index values within the kernel radius in the horizontal and vertical directions, and i and j are the corresponding position indices in the original image. I(i, j) is the pixel brightness value at that position.
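Equations (2-22) and (2-23) can be sketched as below; the kernel is normalized so its weights sum to 1, and image borders are handled by clamping indices (both are assumptions not specified in the text).

```python
# Illustrative Gaussian-blur sketch per equations (2-22)/(2-23).
import math

def gaussian_kernel(radius, sigma=1.0):
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)
          for x in range(-radius, radius + 1)] for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]  # normalize to sum 1

def gaussian_blur(image, radius, sigma=1.0):
    k = gaussian_kernel(radius, sigma)
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ii = min(h - 1, max(0, i + dy))  # clamp at the borders
                    jj = min(w - 1, max(0, j + dx))
                    acc += k[dy + radius][dx + radius] * image[ii][jj]
            out[i][j] = acc
    return out
```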
A blur of radius 3 is barely perceptible to the eye, which poses a challenge to the anti-interference capability of the classification models; FIG. 9 compares the accuracy of the classification models at different blur levels.
The following conclusions are drawn from FIG. 10: (1) Image blur strongly affects classification accuracy: a Gaussian blur of radius 1.5 drives the accuracy of all classification models below 80%, and under a radius-3 Gaussian blur all models are essentially unusable (below 60%). (2) CottonNet-Fusion still degrades relatively smoothly under the blur test, maintaining 71% classification accuracy at a blur radius of 2, which is 1.16 times the accuracy (61.3%) of the second-place ResNet18.
In conclusion, the experimental comparisons under complex environments show that CottonNet-Fusion has the most prominent anti-interference capability. Through feature-difference fitting, the network is trained under the joint constraint of the feature difference and the classification result; the regression loss jointly trains the features, further improving classification accuracy, and the model's anti-interference capability exceeds that of the conventionally trained model (CottonNet-Res).
The invention combines the characteristics of foreign-fiber images with the application scenario to optimize and improve the classical convolutional neural network. Feature maps of each layer of the convolutional neural network are visualized with a class-activation heat-map algorithm to observe the classical networks' ability to extract foreign-fiber targets; after adjustment and pruning, a basic foreign-fiber classification network, CottonNet, is designed that balances performance and efficiency, with classification accuracy on the validation set reaching 94.2%.
To enhance high-level feature extraction, the invention improves the foreign-fiber classification network by drawing on the idea of residual networks, proposing CottonNet-Res and raising classification accuracy to 95.1%.
For foreign-fiber classification in complex environments, the invention proposes the feature-difference-fitting classification model CottonNet-Fusion, which improves classification accuracy to 97.4% and still maintains 90.3% accuracy on the complex-environment foreign-fiber data set.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the equipment or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or to implicitly indicate the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description covers only the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or change of the technical solutions and inventive concepts of the present invention made by a person skilled in the art shall fall within the scope of the present invention.

Claims (2)

1. A differential fiber classification method based on residual error network and characteristic difference fitting is characterized by comprising the following steps:
s1, designing a foreign fiber classification basic network CottonNet for classifying and identifying foreign fibers and cotton on the image characteristics;
the CottonNet comprises 6 convolutional layers whose parameters adopt a tower structure; the first two layers mainly extract bottom-level features, the latter four convolutional layers extract high-level features of the image, the last convolutional layer is not followed by pooling, and a nonlinear activation function, LeakyReLU, is added at the output of each convolutional layer;
the CottonNet classification model uses Softmax as a classifier, an input sample is assumed to be I, the input sample is an image or a vector, and after convolutional neural network operation, an output vector f ═ a is obtained 1 ,a 2 ,...,a N ]The probability of belonging to a given category k is:
P(k | I) = e^{a_k} / Σ_{j=1}^{N} e^{a_j}    (2-1)
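The Softmax classifier described above can be sketched as follows; this is a minimal illustrative example (the max-shift is a standard numerical-stability trick, not part of the claim).

```python
# Minimal sketch of the Softmax classifier: maps the network output vector
# f = [a_1, ..., a_N] to class probabilities.
import math

def softmax(f):
    m = max(f)                                # shift for numerical stability
    exps = [math.exp(a - m) for a in f]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([1.0, 2.0, 3.0])
```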
s2, introducing a residual error network on the basis of the CottonNet in S1, and defining a residual error module to obtain an improved network structure CottonNet-Res, wherein the CottonNet-Res adopts the last four layers of convolutional layers to enhance the high-layer feature extraction capability and improve the accuracy of the different fiber classification;
the residual module cascades the module's input features with its final output features; two convolutional layers are selected as the basic residual module and defined as Layer1 and Layer2; the outputs of the input features after the two layers of processing are ω_1 and ω_2 respectively, and the output feature is ω_0 = ω_1 + ω_2;
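The two-layer residual module can be sketched as below. This is an illustrative example, not the patented network: layer1 and layer2 stand in for the convolutions (here simple elementwise maps), and the module output cascades the two intermediate features as ω_0 = ω_1 + ω_2.

```python
# Hedged sketch of the two-layer residual module with LeakyReLU activations.
def leaky_relu(x, slope=0.01):
    return x if x > 0 else slope * x

def residual_module(features, layer1, layer2):
    w1 = [leaky_relu(layer1(v)) for v in features]  # output of Layer1 (omega_1)
    w2 = [leaky_relu(layer2(v)) for v in w1]        # output of Layer2 (omega_2)
    return [a + b for a, b in zip(w1, w2)]          # omega_0 = omega_1 + omega_2

out = residual_module([1.0, -1.0], lambda v: 2 * v, lambda v: v + 1)
```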
S3, aiming at the different fiber classification under the complex environment, obtaining an improved algorithm CottonNet-Fusion through feature difference fitting on the basis of CottonNet-Res in S2;
the CottonNet-Fusion designs an extra network branch trained on complex-environment samples and fuses its features with the normal-picture training branch during training; in the output stage, the mean square error of the feature outputs of the main network and the branch network is calculated, fitting the two branches' output feature vectors as a feature difference, and the outputs of the two branches are feature-fused through a convolutional layer; the final classification result uses the label of the normal sample, i.e., the whole network is trained under the constraint of the feature difference and the classification loss, thereby improving classification accuracy in complex environments;
in CottonNet-Res, assuming no cascaded residual branch, the output of a normal single-layer network can be expressed as equation (2-2):
a_l = g(h(a_{l−1}))    (2-2)
where h is the convolution-related operation of the single layer and g is the single-layer output activation function; the convolutional neural network is composed of instances of equation (2-2), and stacking more instances of (2-2), i.e., deepening the neural network, yields more feature operators over the image; the network parameters are trained and optimized by a gradient-based back-propagation algorithm, with the following training process: the current layer uses the forward-propagated input signal, then obtains a gradient from the back-propagated error by differentiation, and thereby updates the current layer's parameters; updating the parameters of the current layer l requires the derivative of the loss ε, and the error term is defined as:
δ_l = ∂ε / ∂z_l    (2-3)
according to the chain rule of back-propagation, δ_l depends on the error term δ_{l+1} of the next layer, and (2-3) becomes:
δ_l = δ_{l+1} · (∂z_{l+1} / ∂z_l)    (2-4)
defining a propagation gradient:
γ_l = ∂z_{l+1} / ∂z_l    (2-5)
when γ_l < 1, the error term of layer l shrinks relative to layer l+1; under the back-propagation algorithm the gradient gradually decreases and the parameters are no longer updated, i.e., the gradient vanishes; when γ_l > 1, the gradient gradually increases, i.e., the gradient-explosion phenomenon occurs; as the network deepens, the value of γ_l is uncertain, and the residual network was proposed for this problem; the residual network constructs a residual branch in the basic network and directly cascades input and output features via skip connections, and the output of a residual network module is expressed as equation (2-6):
z_l = H(a_{l−1}) = a_{l−1} + F(a_{l−1})    (2-6)
wherein F(a_{l−1}) is the residual function; the input feature a_{l−1} can propagate directly from any lower network layer to a higher layer.
2. The method for classifying foreign fibers based on a residual network and feature-difference fitting according to claim 1, wherein the network input of the CottonNet-Fusion is a picture pair consisting of a sample in a normal environment and a sample in a complex environment; denote the input picture pair by (s, c), where s is the normal sample and c is the complex-sample image generated from s by a series of random image-blurring and image-noise perturbations; the feature output vectors learned by the two networks are (f_s, f_c), and the mean square error loss between them is calculated as in equation (2-7):
L_mse = (1/n) · Σ_{i=1}^{n} (f_{s,i} − f_{c,i})²    (2-7)
the final loss function is defined as equations (2-8) and (2-9):
L_cls = α · L_c + β · L_s    (2-8)

L_total = L_cls + L_mse    (2-9)
where α and β are the weights of the two types of labels.
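The combined loss of equations (2-7) to (2-9) can be sketched as follows. This is a hedged illustration: the helper names (mse, cross_entropy, total_loss) are hypothetical, per-branch cross-entropy is assumed for the label losses, and the defaults α = 0.3, β = 0.7 follow the experiments described earlier.

```python
# Illustrative sketch of the feature-difference-fitting loss (2-7)-(2-9).
import math

def mse(fs, fc):
    """(2-7): mean square error between the two branches' feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(fs, fc)) / len(fs)

def cross_entropy(probs, label):
    """Assumed per-branch classification loss on Softmax probabilities."""
    return -math.log(probs[label])

def total_loss(fs, fc, probs_s, probs_c, label, alpha=0.3, beta=0.7):
    # (2-8): weighted label losses; (2-9): add the feature-difference term.
    cls = alpha * cross_entropy(probs_c, label) + beta * cross_entropy(probs_s, label)
    return cls + mse(fs, fc)
```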
CN202110783128.XA 2021-07-12 2021-07-12 Differential fiber classification method based on residual error network and characteristic difference fitting Active CN113361655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110783128.XA CN113361655B (en) 2021-07-12 2021-07-12 Differential fiber classification method based on residual error network and characteristic difference fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110783128.XA CN113361655B (en) 2021-07-12 2021-07-12 Differential fiber classification method based on residual error network and characteristic difference fitting

Publications (2)

Publication Number Publication Date
CN113361655A CN113361655A (en) 2021-09-07
CN113361655B true CN113361655B (en) 2022-09-27

Family

ID=77539093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110783128.XA Active CN113361655B (en) 2021-07-12 2021-07-12 Differential fiber classification method based on residual error network and characteristic difference fitting

Country Status (1)

Country Link
CN (1) CN113361655B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858462A (en) * 2019-02-21 2019-06-07 武汉纺织大学 A kind of Fabric Recognition Method and system based on convolutional neural networks
CN111767959A (en) * 2020-06-30 2020-10-13 创新奇智(广州)科技有限公司 Method and device for classifying pile fibers
CN112465752A (en) * 2020-11-16 2021-03-09 电子科技大学 Improved Faster R-CNN-based small target detection method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102879401B (en) * 2012-09-07 2015-06-24 西安工程大学 Method for automatically detecting and classifying textile flaws based on pattern recognition and image processing
US10605798B2 (en) * 2017-12-26 2020-03-31 Petr PERNER Method and device for optical yarn quality monitoring
CN108284079B (en) * 2018-02-13 2019-05-07 南京林业大学 A kind of residual film identification sorting algorithm of the unginned cotton residual film sorting unit based on high light spectrum image-forming and deep learning
CN108805200B (en) * 2018-06-08 2022-02-08 中国矿业大学 Optical remote sensing scene classification method and device based on depth twin residual error network
CN109740673A (en) * 2019-01-02 2019-05-10 天津工业大学 A kind of neural network smog image classification method merging dark
CN111612739B (en) * 2020-04-16 2023-11-03 杭州电子科技大学 Deep learning-based cerebral infarction classification method
CN111815592A (en) * 2020-06-29 2020-10-23 郑州大学 Training method of pulmonary nodule detection model
CN112381764A (en) * 2020-10-23 2021-02-19 西安科锐盛创新科技有限公司 Crop disease and insect pest detection method
CN112446350B (en) * 2020-12-09 2022-07-19 武汉工程大学 Improved method for detecting cotton in YOLOv3 complex cotton field background
CN112507898B (en) * 2020-12-14 2022-07-01 重庆邮电大学 Multi-modal dynamic gesture recognition method based on lightweight 3D residual error network and TCN

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858462A (en) * 2019-02-21 2019-06-07 武汉纺织大学 A kind of Fabric Recognition Method and system based on convolutional neural networks
CN111767959A (en) * 2020-06-30 2020-10-13 创新奇智(广州)科技有限公司 Method and device for classifying pile fibers
CN112465752A (en) * 2020-11-16 2021-03-09 电子科技大学 Improved Faster R-CNN-based small target detection method

Also Published As

Publication number Publication date
CN113361655A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
Faisal et al. Deep learning and computer vision for estimating date fruits type, maturity level, and weight
CN112435221A (en) Image anomaly detection method based on generative confrontation network model
CN107481231A (en) A kind of handware defect classifying identification method based on depth convolutional neural networks
CN108446729A (en) Egg embryo classification method based on convolutional neural networks
CN104992223A (en) Dense population estimation method based on deep learning
CN107220649A (en) A kind of plain color cloth defects detection and sorting technique
CN114092389A (en) Glass panel surface defect detection method based on small sample learning
CN109657978A (en) A kind of Risk Identification Method and system
CN108876781A (en) Surface defect recognition method based on SSD algorithm
CN109191430A (en) A kind of plain color cloth defect inspection method based on Laws texture in conjunction with single classification SVM
CN106529568A (en) Pearl multi-classification method based on BP neural network
Ghazvini et al. Defect detection of tiles using 2D-wavelet transform and statistical features
CN111932639B (en) Detection method of unbalanced defect sample based on convolutional neural network
CN111145145A (en) Image surface defect detection method based on MobileNet
CN111507227A (en) Multi-student individual segmentation and state autonomous identification method based on deep learning
CN115294109A (en) Real wood board production defect identification system based on artificial intelligence, and electronic equipment
CN116152498A (en) Metal surface defect semantic segmentation network and training method based on data driving
CN113361655B (en) Differential fiber classification method based on residual error network and characteristic difference fitting
Wen et al. Multi-scene citrus detection based on multi-task deep learning network
CN108932471A (en) A kind of vehicle checking method
CN108363967A (en) A kind of categorizing system of remote sensing images scene
CN110033443A (en) A kind of feature extraction network and its defects of display panel detection method
JP3344766B2 (en) Image processing apparatus and beef carcass grading system using this apparatus
CN116188516A (en) Training method of defect data generation model
CN115761451A (en) Pollen classification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant