CN113887317A - Semi-supervised classification method for fabric fiber components based on pi model - Google Patents


Info

Publication number
CN113887317A
CN113887317A (application CN202111040595.XA)
Authority
CN
China
Prior art keywords
data
training
loss
fabric
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111040595.XA
Other languages
Chinese (zh)
Inventor
池明旻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111040595.XA priority Critical patent/CN113887317A/en
Publication of CN113887317A publication Critical patent/CN113887317A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a semi-supervised classification method for fabric fiber components based on the pi model. It comprises a method for acquiring and cleaning near-infrared hyperspectral data of fabrics, a neural network and classifier for feature extraction from that data, and a model training method applying pi-model-based semi-supervision. By combining these components, the method analyzes fabric fiber components from near-infrared hyperspectral data and outputs the fiber component classification of a target fabric. Training the neural network with a semi-supervised method alleviates the difficulty, long turnaround, and high cost of acquiring fabric material label data, and achieves good results on practical problems.

Description

Semi-supervised classification method for fabric fiber components based on pi model
Technical Field
The invention relates to a fabric fiber component classification method, in particular to a semi-supervised learning method based on a pi model.
Background
The production process of the fabric is easily influenced by various external factors, which causes great quality fluctuation of the finished product and requires strict quality detection. Conventional analysis of the fiber composition of fabrics is divided into two steps: qualitative and quantitative.
The qualitative analysis methods include the combustion method, the melting-point method, hand-feel and visual inspection, microscopic cross-section analysis, and others. Microscopic cross-section analysis is generally adopted: a fiber is sectioned with a slicer and observed under a microscope, and the fiber type is judged from its appearance.
The quantitative analysis method first dissolves the different fibers with different solvents, then calculates the specific component contents.
The traditional fabric fiber component analysis method involves many steps, is time-consuming, demands highly skilled technicians, and is costly; it is in urgent need of technical innovation.
Recently there have also been traditional machine learning methods based on SVMs, decision trees, etc., and deep neural network methods based on LSTMs, RNNs, etc. However, the traditional machine learning methods perform poorly, and deep neural networks need a large amount of labeled data obtained through traditional material analysis, which is too expensive, time-consuming, labor-intensive, and hard to obtain in quantity. As a result, deep neural networks have been difficult to apply to the material analysis problem and have performed poorly.
Semi-supervised learning differs from supervised learning, which uses only labeled data: it introduces a large amount of unlabeled data and constructs an unsupervised signal from the inherent properties of that data. Combined with the supervised signal constructed from labeled data, the model can fully exploit the unlabeled data through continued iteration, ultimately strengthening generalization and greatly alleviating the over-fitting caused by scarce labels when training a deep neural network with supervision alone.
Meanwhile, semi-supervised learning uses stronger data enhancement and introduces a noisy training mechanism, so the trained model is more robust and its results remain stable under the noise caused by real-world equipment and environment differences.
The pi model is an effective semi-supervised learning method: cross-entropy loss is computed for labeled data, while, based on the consistency regularization principle, two copies of each unlabeled sample receive different data enhancements and the MSE between their predictions is computed as the consistency loss. The cross-entropy loss of the labeled data and the consistency loss of the unlabeled data are combined into a total loss to train the neural network model.
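The pi-model objective just described can be sketched in a few lines of numpy. This is an illustrative, framework-agnostic sketch (the probability arrays stand in for classifier outputs; `w_t` is the unsupervised weight discussed later):

```python
import numpy as np

def pi_model_loss(p_labeled, y_onehot, p_unlab_a, p_unlab_b, w_t):
    """pi-model objective: supervised cross-entropy on labeled predictions
    plus w_t times the MSE consistency loss between predictions for two
    differently enhanced copies of the same unlabeled samples."""
    ce = -np.mean(np.sum(y_onehot * np.log(p_labeled + 1e-12), axis=1))
    mse = np.mean(np.sum((p_unlab_a - p_unlab_b) ** 2, axis=1))
    return ce + w_t * mse

# toy check: identical unlabeled predictions contribute zero consistency loss
p = np.array([[0.9, 0.1]])
y = np.array([[1.0, 0.0]])
total = pi_model_loss(p, y, p, p, w_t=1.0)
```

When the two unlabeled predictions agree, only the cross-entropy term remains, which is exactly the consistency-regularization intuition.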
Disclosure of Invention
The pi model was originally applied to image classification and performs poorly when applied directly to fabric fiber classification: near-infrared spectra differ from image data, so data enhancement methods based on visual invariance priors cannot be used; moreover, spectral data has low dimensionality and is easily distorted by enhancement, after which it no longer belongs to its original category, making consistency regularization inapplicable.
In order to overcome the defects and shortcomings of a pi model algorithm in the prior art, the invention provides a fabric fiber component semi-supervised classification method based on a pi model, wherein on the basis of pi model semi-supervised loss, cosine distance screening is applied to data enhancement samples, so that data distortion samples are prevented from participating in calculation loss; meanwhile, a confidence detection mechanism is added, so that low prediction confidence samples are prevented from participating in training, the stability of model training is improved, the problem of network overfitting caused by too little labeled data is solved, and the generalization capability of the model is improved. The model training mode comprises the following steps:
S1: collect supervised and unsupervised fabric fiber data;
S2: using the feature extraction neural network and classifier for fabric near-infrared hyperspectral data, perform supervised training on the labeled training data of the fabric near-infrared hyperspectral sequence dataset;
S3: using the same feature extraction neural network and classifier, introduce the semi-supervised training method and perform unsupervised training with the unlabeled training data of the dataset;
S4: with the semi-supervised training method, combine the supervised and unsupervised training to update the model;
S5: to analyze the material of a target fabric, acquire and clean its near-infrared hyperspectral data, extract features with the neural network, classify them, and obtain the material category components of the target fabric.
Further, the step S1 is specifically to collect the supervised and unsupervised data of the fabric fiber as follows:
S11: collect fabric near-infrared hyperspectral sequence data; S12: acquire supervised data labels; S13: clean the fabric near-infrared hyperspectral sequence data and remove anomalies; S14: produce training and test data sets from the fabric near-infrared hyperspectral sequence data.
Further, the step S11 is specifically to collect the near-infrared hyperspectral sequence data of the fabric as follows:
S111: to ensure the collected data reflects the overall condition of a piece of cloth and to expand the data set, each piece is sampled multiple times and the results stored in a database; S112: collect the fabric near-infrared hyperspectral sequence data with a near-infrared hyperspectral imager, and retain the sequence data in the 900nm-1700nm wavelength range.
Further, the step S12 of supervised data label acquisition specifically comprises the following steps:
S121: for the fabric data collected in S111, if the fabric sample's material components have already been obtained through a traditional fabric component analysis method, label the data, constructing a mapping between the fabric near-infrared hyperspectral sequence and the material components; S122: the material components include common pure materials such as cotton, rayon, modal, tencel, polyester, wool, spandex, hemp, nylon, acrylic, silk, cashmere, etc., and blended fabrics composed of these materials. The material components refer to the list of materials making up a fabric, independent of the blending ratio of each material.
Further, in step S13, the data cleaning and anomaly removal for the fabric near-infrared hyperspectral sequence data proceeds as follows:
S131: because some cloth spectral sequences are noisy and unsmooth, a smoothed curve is computed for each sequence with the Savitzky-Golay smoothing algorithm; the mean square error between the smoothed curve and the original curve is computed, and sampling points whose mean square error exceeds a threshold w are excluded from the data set.
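The S131 screening step can be sketched with SciPy's Savitzky-Golay filter. The window length, polynomial order, and the threshold value are assumptions for illustration; the patent only fixes the algorithm and the MSE criterion:

```python
import numpy as np
from scipy.signal import savgol_filter

def is_noisy(spectrum, window=11, polyorder=3, threshold=0.01):
    """Flag a spectral curve whose deviation from its Savitzky-Golay
    smoothed version exceeds an MSE threshold (the patent's threshold w;
    0.01 here is an assumed value)."""
    smoothed = savgol_filter(spectrum, window_length=window, polyorder=polyorder)
    mse = np.mean((spectrum - smoothed) ** 2)
    return mse > threshold

# a smooth curve passes the screen; a heavily noisy one is excluded
x = np.linspace(0.0, 1.0, 64)
clean = np.sin(2 * np.pi * x)
noisy = clean + np.random.default_rng(0).normal(0.0, 0.5, size=64)
```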
Further, in step S14, the production of the training and test data sets from the fabric near-infrared hyperspectral sequence data proceeds as follows:
S141: the data cleaned and de-anomalized in step S131 are divided into labeled and unlabeled data according to whether labels exist. Labeled data is split into a training set and a test set at a 7:3 ratio; all unlabeled data is placed in the training set. S142: because the spectral waveforms of multiple sampling points from the same piece of cloth are similar, the sampling points of one cloth sample must not be scattered across both the training and test sets; otherwise the sets are mixed and the test results deviate from the actual performance.
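The leakage-free 7:3 split of S141/S142 amounts to splitting by cloth identifier rather than by sampling point. A minimal sketch (the `(cloth_id, spectrum)` tuple layout is an assumption):

```python
import random

def split_by_cloth(samples, train_ratio=0.7, seed=0):
    """Split labeled samples 7:3 so that all sampling points from the same
    piece of cloth land on one side, avoiding train/test leakage caused by
    their similar waveforms. samples: list of (cloth_id, spectrum) tuples."""
    cloth_ids = sorted({cid for cid, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(cloth_ids)
    cut = int(len(cloth_ids) * train_ratio)
    train_ids = set(cloth_ids[:cut])
    train = [s for s in samples if s[0] in train_ids]
    test = [s for s in samples if s[0] not in train_ids]
    return train, test

# 10 cloths, 3 sampling points each -> 7 cloths (21 points) train, 3 (9) test
data = [(cid, None) for cid in range(10) for _ in range(3)]
train, test = split_by_cloth(data)
```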
Further, the step S2 of performing supervised training on the labeled data includes the following specific steps:
S21: construct a feature extraction network based on a one-dimensional convolutional neural network, suited to the characteristics of near-infrared hyperspectral sequence data; S22: input the labeled training data obtained in step S1 into the one-dimensional convolutional feature extraction network to obtain feature vectors, obtain the output probability distribution from a Softmax classifier, and compare it with the real labels to compute the supervised cross-entropy loss, recorded as the supervised loss; S23: based on the output probability distribution vector obtained from the classifier in step S22, set a threshold e to counter the over-fitting caused by the small amount of labeled data: if the maximum value of a sampling point's probability distribution vector exceeds e, its supervised loss is not counted; S24: in the early stage of training, draw the labeled training data by uniform sampling; in the late stage, draw it by randomly weighted sampling.
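The S23 confidence cutoff can be sketched as a masked cross-entropy: labeled samples the model already predicts above e are excluded from the loss. The threshold value 0.95 is an assumption for illustration (the patent leaves e unspecified):

```python
import numpy as np

def masked_supervised_loss(probs, labels, e=0.95):
    """Cross-entropy over labeled samples, skipping those whose maximum
    predicted probability exceeds e (assumed value), to curb over-fitting
    on the small labeled set."""
    keep = probs.max(axis=1) <= e
    if not keep.any():
        return 0.0
    p_true = probs[keep, labels[keep]]
    return float(-np.mean(np.log(p_true + 1e-12)))

probs = np.array([[0.99, 0.01],   # over-confident -> excluded from the loss
                  [0.60, 0.40]])  # kept
labels = np.array([0, 0])
loss = masked_supervised_loss(probs, labels)
```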
Further, step S3 introduces the semi-supervised training method and performs unsupervised training as follows:
S31: for the unlabeled training data produced by the data cleaning and anomaly removal of step S1, apply two independent random enhancements to each spectral sequence to form a sample pair; extract features with the one-dimensional convolutional network of step S21, obtain output probability distributions from the classifier, and compute the MSE between the two outputs of each pair as the unsupervised loss; S32: for the unsupervised data pairs of step S31, the classifier following the S21 feature extraction network applies a Softmax function with a temperature coefficient τ in order to obtain a steeper distribution that more easily satisfies the confidence threshold β; τ is generally taken as 0.5. The formula is as follows:
$$P_i = \frac{\exp(z_i/\tau)}{\sum_j \exp(z_j/\tau)}$$

where z is the logit vector output by the network.
S33: for the unsupervised loss of the unsupervised data obtained in step S31, set a threshold β, taken as 0.6; if the maximum value of the unsupervised probability distribution does not reach the threshold, the loss is not counted. The decision function can be formulated as:
$$I(x) = \begin{cases} 1, & \max P(x) \ge \beta \\ 0, & \max P(x) < \beta \end{cases}$$
where x is the input data and P(x) is the probability distribution vector obtained by applying the temperature-scaled Softmax to the model's output feature vector.
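The temperature-scaled Softmax of S32 is a one-liner; lowering τ below 1 sharpens the distribution so confident predictions clear β more easily:

```python
import numpy as np

def sharpened_softmax(logits, tau=0.5):
    """Softmax with temperature tau < 1, producing a steeper distribution
    (the patent takes tau = 0.5)."""
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()                      # subtract max for numerical stability
    exp = np.exp(z)
    return exp / exp.sum()

plain = sharpened_softmax([2.0, 1.0], tau=1.0)   # ordinary softmax
sharp = sharpened_softmax([2.0, 1.0], tau=0.5)   # steeper distribution
```

With τ = 0.5 the top-class probability rises (here from about 0.73 to about 0.88), which is exactly why the confidence threshold β is easier to satisfy.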
S34: for the input sequence pairs produced by the random enhancement of S31, set a threshold r to avoid problems such as data distortion caused by overly aggressive enhancement; if the absolute value of the cosine distance between a sampling point and its enhanced copy is less than r, no unsupervised loss is counted. This can be formulated as:
$$R(X) = \begin{cases} 1, & |\cos(X, X')| \ge r \\ 0, & |\cos(X, X')| < r \end{cases}$$

wherein

$$\cos(X, X') = \frac{X \cdot X'}{\|X\|\,\|X'\|}$$
Where X is an unlabeled training data sequence vector and X' is a data-enhanced copy of X.
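The S34 screen can be sketched directly from these formulas. The threshold value 0.9 is an assumption (the patent leaves r unspecified):

```python
import numpy as np

def passes_cosine_screen(x, x_aug, r=0.9):
    """Return True when the enhanced copy is still similar enough to the
    original (|cosine| >= r, r assumed), i.e. the enhancement has not
    distorted the spectrum out of its class."""
    cos = np.dot(x, x_aug) / (np.linalg.norm(x) * np.linalg.norm(x_aug))
    return abs(cos) >= r

x = np.array([1.0, 2.0, 3.0])
# pure scaling preserves direction -> passes; a scrambled vector fails
ok = passes_cosine_screen(x, 1.1 * x)
bad = passes_cosine_screen(x, np.array([3.0, -2.0, 1.0]))
```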
S35: based on the unlabeled training data obtained in step S1, data is always obtained as uniform samples during training.
Further, in step S4, the semi-supervised training method combines the supervised and unsupervised training to update the model as follows:
S41: the total loss is the supervised loss plus the unsupervised loss multiplied by a coefficient w(t) that depends on the training epoch. It can be calculated by the following formula:
$$L_{total} = L_{sup} + w(t)\,L_{unsup}, \qquad w(t) = \mu \exp\!\left(-5\left(1 - \frac{t}{T}\right)^2\right)$$
wherein t is the current training epoch, T is the total number of training epochs, and μ is a constant, generally taken as 10, that scales up the unsupervised loss.
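The ramp-up coefficient can be sketched as below. The Gaussian ramp shape is an assumption carried over from the original pi-model literature; the text here only states that w(t) depends on the epoch and is scaled by μ:

```python
import math

def rampup_weight(t, T, mu=10.0):
    """Ramp the unsupervised weight from near zero up to mu over T epochs
    (Gaussian ramp-up, an assumed form)."""
    if t >= T:
        return mu
    return mu * math.exp(-5.0 * (1.0 - t / T) ** 2)
```

Early in training the consistency term barely contributes, so the network first learns from the scarce labels before the unlabeled signal dominates.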
According to the pi-model-based semi-supervised classification method for fabric fiber components, a convolutional neural network extracts features from the input spectral sequence data; combined with unsupervised learning, the consistency prior of a large amount of unlabeled sample data from practical applications trains the feature extraction capability of the network, enhancing the model's generalization, improving robustness to noise introduced during instrument acquisition, and alleviating the severe over-fitting previously seen when processing fabric spectral sequences with neural networks.
Drawings
FIG. 1 is a flow chart of model training provided by the present invention;
FIG. 2 is a comparison graph of waveforms of multiple sampling points on the same cloth according to the present invention;
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings. However, the present invention should be understood not to be limited to such an embodiment described below, and the technical idea of the present invention may be implemented in combination with other known techniques or other techniques having the same functions as those of the known techniques.
In the following description of the embodiments, for purposes of clearly illustrating the structure and operation of the present invention, directional terms are used, but the terms "front", "rear", "left", "right", "outer", "inner", "outward", "inward", "axial", "radial", and the like are to be construed as words of convenience and are not to be construed as limiting terms.
The relevant terms are explained as follows:
data cleaning: data cleansing-a process of re-examining and verifying Data with the aim of deleting duplicate information, correcting existing errors, and providing Data consistency.
Data enhancement: data enhancement (DataAugmentation) -a process of varying and expanding data in order to expand the data set and prevent over-fitting of the model.
Loss of consistency: based on the clustering assumption, i.e. the assumption that the learned decision boundary must be located in a low density region: if the actual perturbation is applied to an unmarked datum, the prediction should not change significantly.
Referring to fig. 1, the model training process includes steps S1-S4 as follows:
1. step S1-Collection of supervised and unsupervised data of textile fibers
Data on the reflectance, absorbance, intensity, etc. of cloth in the near-infrared band are collected with a hyperspectral near-infrared instrument on site at textile and clothing factories, cloth manufacturers, and similar locations; the data are cleaned, and only spectral sequence data in the 900nm-1700nm wavelength range are retained.
For cloth whose material and proportions have been obtained through the traditional fabric fiber component analysis procedure, the material and proportions are collected and matched one-to-one with the spectral sequences.
During acquisition with the near-infrared spectrometer, light leakage may occur due to irregular operation and similar causes; such records are marked as abnormal data and must be excluded from the data set. The specific judgment method: if the maximum value of the cloth's reflectance does not exceed 0.1, the record is abnormal data.
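The light-leak rule stated above is a one-line filter:

```python
def is_light_leak(reflectance, max_valid=0.1):
    """Flag a spectrum as abnormal (likely light leakage during capture)
    when its maximum reflectance does not exceed 0.1."""
    return max(reflectance) <= max_valid

# an almost-flat, near-zero spectrum is flagged; a normal one passes
leak = is_light_leak([0.02, 0.05, 0.03])
normal = is_light_leak([0.02, 0.40, 0.35])
```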
2. Step S2-training part of Supervisory data
In order to expand a data set and enhance the generalization capability of a model, a near-infrared hyperspectral sequence in the data set needs to be subjected to data enhancement and then input into the model.
There are many ways to enhance data; experimental tests show the more effective enhancements for fabric spectral sequence data are:
a. Vertical translation, i.e. adding a constant one-dimensional sequence to the one-dimensional input sequence. Referring to fig. 2, the translation between sampling points of the near-infrared hyperspectral sequence is significant in the 900nm-1600nm wavelength range, so only data in this range are translated vertically; experiments show this works better than translating the whole sequence.
b. Added Gaussian white noise. For the collected near-infrared hyperspectral sequences, experiments show that zero-mean Gaussian white noise with standard deviation 0.2 works best.
c. Random mixing. Based on the clustering assumption, samples of the same class have similar features, so samples of one class can be randomly drawn and mixed, and the class of the result should be unchanged.
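Enhancements a and b can be sketched together for a 1-D spectrum. The index range standing in for 900-1600nm and the shift magnitude are assumptions for illustration; only the noise standard deviation 0.2 is stated in the text:

```python
import numpy as np

def augment(spectrum, rng, shift_band=(0, 700), noise_std=0.2):
    """Randomly enhance a 1-D spectral sequence: a vertical shift applied
    only to the 900-1600nm portion (here assumed to be the first 700 of
    800 samples) plus zero-mean Gaussian white noise on the whole curve."""
    out = spectrum.copy()
    lo, hi = shift_band
    out[lo:hi] += rng.uniform(-0.05, 0.05)   # assumed shift magnitude
    out += rng.normal(0.0, noise_std, size=out.shape)
    return out

rng = np.random.default_rng(0)
x = np.linspace(0.2, 0.8, 800)   # toy stand-in for a reflectance curve
x_aug = augment(x, rng)
```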
And inputting the enhanced input data into a one-dimensional convolutional neural network to obtain characteristics, obtaining output probability distribution by a classifier, and comparing the output probability distribution with a real label to obtain supervised cross entropy loss. In the actual production process, the number of the labeled samples is far smaller than that of the unlabeled samples, so that the model is easy to overfit, and in order to inhibit the trend, when the maximum value of the output probability distribution vector is larger than the set threshold value, the loss is not counted.
Research shows that uniform sampling is helpful for training the feature extraction capability of the neural network, and balanced sampling is helpful for training the classification capability of the neural network, so that in order to further reduce the problem of unbalanced data categories caused by the long tail effect in data concentration, training data is obtained by uniform sampling in the early training stage. In the late training phase, training data is obtained with randomly weighted samples.
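The early-uniform / late-weighted schedule can be sketched as follows. The switch point at half the epochs and the inverse-frequency weighting are assumptions; the text only specifies uniform sampling early and randomly weighted sampling late:

```python
import random

def sample_indices(labels, epoch, total_epochs, k, seed=0):
    """Uniform sampling in the early training phase; class-frequency-weighted
    sampling (rarer classes up-weighted) in the late phase, to offset the
    long-tail class imbalance. The half-way switch point is an assumption."""
    rng = random.Random(seed)
    n = len(labels)
    if epoch < total_epochs // 2:
        return [rng.randrange(n) for _ in range(k)]
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    weights = [1.0 / counts[y] for y in labels]   # rare classes drawn more often
    return rng.choices(range(n), weights=weights, k=k)

# late phase: the single class-1 sample (index 3) is drawn about half the time
idx = sample_indices([0, 0, 0, 1], epoch=90, total_epochs=100, k=1000)
```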
3. Step S3-training part of unsupervised data
Each unsupervised near-infrared hyperspectral sequence is randomly enhanced twice to form a pair (the enhancement modes are the same as for the supervised data, but with larger amplitude); the pair is input into the one-dimensional convolutional neural network to obtain features, output probability distributions are obtained through Softmax, and the MSE between the outputs of the two differently enhanced copies of the same sequence gives the unsupervised consistency loss.
Because unsupervised partial training enhances the feature extraction capability of the model, data should be obtained by uniform sampling all the time to achieve the best training effect.
The MSE between the probability distributions output by the network for different enhancements of the same data is small and needs to be enlarged. A temperature τ is therefore added to the Softmax function to obtain a steeper distribution and increase the MSE loss; the calculation formula is:
$$P_i = \frac{\exp(z_i/\tau)}{\sum_j \exp(z_j/\tau)}$$
where τ is the temperature constant, and is taken to be 0.5 to obtain a steeper profile.
Because of the randomness added to the data, the output may not belong to any known class. We set a threshold β, taken as 0.6; if the maximum value of the unsupervised probability distribution does not reach the threshold, the loss is not counted. The decision function can be formulated as:
$$I(x) = \begin{cases} 1, & \max P(x) \ge \beta \\ 0, & \max P(x) < \beta \end{cases}$$
where P is the probability distribution vector obtained by applying the temperature-scaled Softmax to the embedding output by the model.
Data enhancement of the unsupervised training data is the key to obtaining the consistency loss; however, excessive enhancement causes problems such as data distortion, and a change of data category destroys the consistency assumption, so a threshold r is set. If the absolute value of the cosine distance between a sampling point and its enhanced copy is less than r, no unsupervised loss is counted. This can be formulated as:
$$R(X) = \begin{cases} 1, & |\cos(X, X')| \ge r \\ 0, & |\cos(X, X')| < r \end{cases}$$

wherein

$$\cos(X, X') = \frac{X \cdot X'}{\|X\|\,\|X'\|}$$
Where X is an unlabeled training data sequence vector and X' is a data-enhanced copy of X.
4. Step S4 semi-supervised training method
To integrate the results of supervised and unsupervised training, the total loss is the supervised loss plus the unsupervised loss multiplied by a coefficient w(t) that depends on the training epoch.
$$L_{total} = L_{sup} + w(t)\,L_{unsup}, \qquad w(t) = \mu \exp\!\left(-5\left(1 - \frac{t}{T}\right)^2\right)$$
Wherein t is the current training epoch, T is the total number of training epochs, and μ is a constant, generally taken as 10, that scales up the unsupervised loss.
The overall loss function is:
$$L = \frac{1}{S}\sum_{i=1}^{S} I(x_i)\,CE\big(y_i, f(x_i)\big) + \mu\,e^{-5(1-t/T)^2}\,\frac{1}{U}\sum_{k=1}^{U} R(u_k)\,\big\|f(A_1(u_k)) - f(A_2(u_k))\big\|_2^2$$
wherein S is the number of supervised samples, CE is the cross-entropy function, U is the number of unsupervised samples, μ is the unsupervised loss weight, t is the current training epoch, T is the total number of training epochs, I(·) is the confidence decision function, R(·) is the cosine distance decision function, f(·) is the model output passed through the temperature-scaled Softmax, A(·) is random data enhancement, and A1(u) and A2(u) are two independently enhanced copies of the same unlabeled sample u.
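Putting the pieces together, the overall objective can be sketched as below; `keep_sup` and `keep_unsup` stand for the 0/1 outputs of the I(·) and R(·) decision functions, and the 1/S, 1/U normalization follows the variable definitions above (an assumption, since the equation image did not survive extraction):

```python
import numpy as np

def total_loss(sup_probs, sup_labels, p1, p2, keep_sup, keep_unsup, w_t):
    """Overall objective: (1/S) * sum of I-masked cross-entropy over labeled
    samples plus w(t) * (1/U) * sum of R-masked squared-L2 consistency terms
    over unlabeled sample pairs."""
    S, U = len(sup_labels), len(p1)
    ce = -np.log(sup_probs[np.arange(S), sup_labels] + 1e-12)
    sup_term = float(np.sum(keep_sup * ce)) / S
    mse = np.sum((p1 - p2) ** 2, axis=1)
    unsup_term = float(np.sum(keep_unsup * mse)) / U
    return sup_term + w_t * unsup_term

# one labeled sample, one unlabeled pair with identical predictions
probs = np.array([[0.5, 0.5]])
labels = np.array([0])
loss = total_loss(probs, labels, probs, probs,
                  np.array([1.0]), np.array([1.0]), w_t=2.0)
```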
Through the training process, a fabric fiber component qualitative method utilizing both supervision data and unsupervised data can be obtained, a hyperspectral sequence of a fabric to be detected is input into a model which is trained, and the material class of the fabric can be output.
Compared with the prior art, the invention achieves the following effects: the pi-model-based semi-supervised classification method for fabric fiber components extracts features from the input spectral sequence data with a convolutional neural network and uses semi-supervised learning, training the feature extraction capability of the network with the consistency prior of a large amount of unlabeled sample data from practical applications. This enhances the model's generalization capability, improves robustness to noise introduced during instrument acquisition, alleviates the severe over-fitting previously seen when processing fabric spectral sequences with neural networks, and further improves the prediction accuracy of fabric fiber component classification.

Claims (4)

1. The semi-supervised classification method of the fabric fiber components based on the pi model is characterized by comprising the following steps of:
collecting and cleaning fabric near-infrared hyperspectral data, wherein the hyperspectral data comprises tag data and non-tag data;
inputting the labeled fabric data into the feature extraction neural network and classifier to obtain the probability distribution output by the model, and computing the cross-entropy loss against the corresponding label; applying random data enhancement twice to the unlabeled fabric data to obtain an unlabeled sample pair, inputting the pair into the feature extraction neural network and classifier, and obtaining the sample-pair probability distributions output by the model; measuring the L2 distance between the two output probability distributions of the same sample pair to obtain the unsupervised MSE loss; combining the cross-entropy loss and the MSE loss into a total loss, and updating the feature extraction neural network model;
and after the neural network model is trained, classifying the hyperspectral data of the fabric by using the neural network model.
2. The method of claim 1, wherein the step of labeled data training further comprises:
constructing a feature extraction network based on a one-dimensional convolution neural network aiming at the characteristics of near-infrared hyperspectral sequence data;
the fabric near-infrared hyperspectral data has the characteristic of small label data quantity, so that the neural network model is easy to generate the problem of overfitting, a threshold value e is set, and if the maximum value of a probability distribution vector output by inputting label data into the neural network model is larger than e, supervision loss is not included;
for the labeled data training process, uniformly sampling to obtain labeled training data in the early training stage; during the late training period, labeled training data is obtained with randomly weighted sampling.
3. The method of claim 1, wherein the unlabeled data training further comprises:
the characteristic extraction neural network model shares parameters with a neural network model in label training;
the label-free sample pair is obtained by randomly enhancing data of the same sample, and a threshold value r is set for avoiding the problem of data distortion caused by too intense data enhancement; and if the absolute value of the cosine distance between a certain sampling point and the sample after data enhancement is less than r, no unsupervised loss is counted. Can be formulated as:
$$R(X) = \begin{cases} 1, & |\cos(X, X')| \ge r \\ 0, & |\cos(X, X')| < r \end{cases}$$

wherein

$$\cos(X, X') = \frac{X \cdot X'}{\|X\|\,\|X'\|}$$
Wherein X is an unlabeled training data sequence vector and X' is a data-enhanced copy of X;
setting a threshold β, taken as 0.6, for the MSE loss calculated from the unlabeled data pair; if the maximum value of the unsupervised probability distribution does not reach the threshold, the loss is not counted. The decision function can be formulated as:
$$I(x) = \begin{cases} 1, & \max p(x) \ge \beta \\ 0, & \max p(x) < \beta \end{cases}$$
wherein x is input data and p is a probability distribution vector output by the classifier;
the unlabeled samples are passed through the feature extraction neural network and the classifier to obtain an output probability distribution; to make it easier for this distribution to satisfy the confidence threshold β, a classifier incorporating a temperature coefficient is used, with the formula:

$$p_i = \frac{\exp(z_i / \tau)}{\sum_j \exp(z_j / \tau)}$$

where τ is the temperature coefficient and z is the classifier's logit vector;
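The temperature-scaled classifier output can be sketched as below: a τ below 1 sharpens the softmax, raising max(p) so the confidence threshold β is reached more easily. The function name and the example logits and τ = 0.5 are illustrative assumptions:

```python
import numpy as np

def softmax_with_temperature(logits, tau=0.5):
    """Softmax over logits divided by the temperature tau; a small tau
    sharpens the distribution, increasing its maximum probability."""
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()                     # shift for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

logits = np.array([2.0, 1.0, 0.5])
p_plain = softmax_with_temperature(logits, tau=1.0)   # ordinary softmax
p_sharp = softmax_with_temperature(logits, tau=0.5)   # sharpened
```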
and for the unlabeled data training process, unlabeled training data are always obtained by uniform sampling.
4. The method of claim 1, wherein the combining of tagged data loss and untagged data loss further comprises:
the total loss is the labeled-data loss plus the unlabeled-data loss weighted by a coefficient w(t) that depends on the training epoch. It can be calculated by the following formula:

$$L = L_{sup} + w(t)\, L_{unsup}, \qquad w(t) = \mu \exp\!\left(-5\left(1 - \frac{t}{T}\right)^{2}\right)$$

where t is the current training epoch, T is the total number of training epochs, and μ is a constant, generally 10, used to scale up the unlabeled-data loss.
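The epoch-dependent weight can be sketched as follows, assuming the Gaussian ramp-up exp(-5(1 - t/T)^2) commonly used with the pi model; the exact form of w(t) in the patent's formula image may differ, so treat this as an illustration rather than the patented schedule:

```python
import math

def unsup_weight(t, T, mu=10.0):
    """Weight w(t) for the unlabeled loss: ramps up from nearly 0 at
    epoch 0 to mu at the final epoch T (Gaussian ramp-up)."""
    return mu * math.exp(-5.0 * (1.0 - t / T) ** 2)

def total_loss(sup_loss, unsup_loss, t, T, mu=10.0):
    """Total loss = supervised loss + w(t) * unsupervised loss."""
    return sup_loss + unsup_weight(t, T, mu) * unsup_loss
```

Early in training the unlabeled term is almost switched off, so the network first learns from the scarce labeled data before the consistency term dominates.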
CN202111040595.XA 2021-09-06 2021-09-06 Semi-supervised classification method for fabric fiber components based on pi model Pending CN113887317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040595.XA CN113887317A (en) 2021-09-06 2021-09-06 Semi-supervised classification method for fabric fiber components based on pi model


Publications (1)

Publication Number Publication Date
CN113887317A true CN113887317A (en) 2022-01-04

Family

ID=79008380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040595.XA Pending CN113887317A (en) 2021-09-06 2021-09-06 Semi-supervised classification method for fabric fiber components based on pi model

Country Status (1)

Country Link
CN (1) CN113887317A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117786617A (en) * 2024-02-27 2024-03-29 南京信息工程大学 Cloth component analysis method and system based on GA-LSTM hyperspectral quantitative inversion
CN117786617B (en) * 2024-02-27 2024-04-30 南京信息工程大学 Cloth component analysis method and system based on GA-LSTM hyperspectral quantitative inversion

Similar Documents

Publication Publication Date Title
Jahanbakhshi et al. Classification of sour lemons based on apparent defects using stochastic pooling mechanism in deep convolutional neural networks
CN110717368A (en) Qualitative classification method for textiles
Yu et al. Recognition method of soybean leaf diseases using residual neural network based on transfer learning
CN113887317A (en) Semi-supervised classification method for fabric fiber components based on pi model
CN113970532B (en) Fabric fiber component detection system and prediction method based on near infrared spectrum
Sun et al. A method of information fusion for identification of rice seed varieties based on hyperspectral imaging technology
Mustafic et al. Cotton contamination detection and classification using hyperspectral fluorescence imaging
Fabijańska A survey of thresholding algorithms on yarn images
Anami et al. Comparative analysis of SVM and ANN classifiers for defective and non-defective fabric images classification
CN113408616B (en) Spectral classification method based on PCA-UVE-ELM
CN114112982A (en) Fabric fiber component qualitative method based on k-Shape
Islam et al. Nitrogen fertilizer recommendation for paddies through automating the leaf color chart (LCC)
Cheng et al. Fabric material identification based on Densenet variant networks
Yildirim et al. Discovering the relationships between yarn and fabric properties using association rule mining
CN111275131A (en) Chemical image classification and identification method based on infrared spectrum
CN110542659A (en) pearl luster detection method based on visible light spectrum
CN115406852A (en) Fabric fiber component qualitative method based on multi-label convolutional neural network
Amin et al. SAPS: Automatic Saffron Adulteration Prediction Systems, research issues, and prospective solutions
Turner et al. Training a new instrument to measure cotton fiber maturity using transfer learning
Soleymanian Moghadam et al. Classification of Persian carpet patterns based on quantitative aesthetic‐related features
Bitrus et al. Enhancing classification in correlative microscopy using multiple classifier systems with dynamic selection
Bhugra et al. Use of leaf colour for drought stress analysis in rice
Madgi et al. Recognition of Green Colour Vegetables' Images Using an Artificial Neural Network
Manga Plant Disease Classification using Residual Networks with MATLAB
Huang et al. A deep multi-instance neural network for dyeing-free inspection of yarn dyeing uniformity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination