CN117892203A - Defective gear classification method, device and computer readable storage medium - Google Patents


Info

Publication number
CN117892203A
CN117892203A (application CN202410293547.9A)
Authority
CN
China
Prior art keywords
classification
domain
target domain
enhanced
loss
Prior art date
Legal status
Granted
Application number
CN202410293547.9A
Other languages
Chinese (zh)
Other versions
CN117892203B (en
Inventor
李可
高琼华
宿磊
顾杰斐
赵新维
Current Assignee
Jiangnan University
Original Assignee
Jiangnan University
Priority date
Filing date
Publication date
Application filed by Jiangnan University
Priority to CN202410293547.9A
Publication of CN117892203A
Application granted
Publication of CN117892203B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention relates to the technical field of gear surface defect detection, and in particular to a defective gear classification method, a defective gear classification device, and a computer-readable storage medium. The method comprises: constructing a defective gear deep migration network model comprising a feature extractor, a domain-conditional channel attention module, and a classifier; inputting training-set samples into the feature extractor and then into the domain-conditional channel attention module, and calculating the distance loss between the classification input features of the target domain and those of the source domain at the feature layer, as well as the distance loss between the classification outputs of the target domain and the source domain at the classification layer; constructing a loss function and updating the network parameters of the model; and inputting unlabeled target-domain samples into the trained defective gear deep migration network model to obtain classification results. The method not only improves the accuracy of classifying defective gears in the target domain, but also performs more stably when classifying and detecting small batches of defective gears.

Description

Defective gear classification method, device and computer readable storage medium
Technical Field
The invention relates to the technical field of gear surface defect detection, in particular to a defective gear classification method, a defective gear classification device and a computer readable storage medium.
Background
As an essential mechanical transmission element, the gear is widely used in all kinds of machinery in industrial production, and its performance and reliability directly affect the operating efficiency and service life of a mechanical system. However, owing to complex manufacturing processes and high-strength working environments, various defects such as fatigue cracks, wear and scratches often appear on the gear surface; under adverse conditions these defects may develop gradually and eventually lead to gear failure. Defect detection on the gear surface is therefore particularly important.
As a non-contact, fast-response solution, computer vision offers higher efficiency, higher precision, a higher degree of automation, and adaptability to a wide variety of surface materials and defect types compared with traditional manual inspection and mechanical detection.
Traditional machine-vision surface defect detection generally combines image processing with shallow machine learning; it plays a vital role in many industrial fields and has become a key link in ensuring product quality. Its core challenge is extracting feature representations good enough to accurately distinguish defective from non-defective areas. Engineers in the relevant field must therefore manually select and design feature extraction methods according to actual conditions, and design appropriate classifiers for defect detection, which limits applicability and extensibility to some extent. Moreover, traditional techniques often perform poorly on complex and varied defect types, because hand-designed features may not cover all cases.
With the rapid development of deep learning, defect detection methods based on deep learning have been attracting increasing attention. A deep learning model can automatically learn feature representations from data without relying on hand-designed features, giving it stronger generalization and adaptability. However, in actual industrial inspection, interference from external factors such as illumination, resolution, and equipment makes the training scene differ from the target task scene, and directly applying a trained model to the target task scene severely degrades its performance.
To address this, unsupervised domain adaptation trains a model on labeled source-domain data and unlabeled target-domain data and extends the model from the source domain to the target domain, so that knowledge from the source domain compensates for the lack of labeled data in the target domain and improves the generalization and adaptability of the model. However, in existing unsupervised domain adaptation methods, the large difference between the source and target domains makes the features difficult to align while the model migrates from the source domain to the target domain, which greatly reduces classification accuracy.
Disclosure of Invention
Therefore, the invention aims to solve the technical problem that, in the prior art, features are difficult to align while a model migrates from a source domain to a target domain, which greatly reduces the classification accuracy of the model.
To solve this technical problem, the invention provides a defective gear classification method, comprising the following steps:
constructing a defective gear deep migration network model, wherein the defective gear deep migration network model comprises a feature extractor, a domain-conditional channel attention module, and a classifier; the domain-conditional channel attention module comprises a first attention sub-module, a classification layer, and a second attention sub-module connected in series along the forward-propagation direction; the first attention sub-module and the second attention sub-module each comprise two fully connected layers, and the classification layer has the same structure and parameters as the classifier;
inputting a labeled source domain sample and a label-free target domain sample as training sets into a feature extractor of a defect gear depth migration network model to respectively obtain source domain feature data and target domain feature data;
inputting the target domain feature data into a first attention sub-module of a domain condition channel attention module, and adding the target domain feature data and the output of the first attention sub-module to obtain the classified input features of the target domain; inputting the classified input features of the target domain into a classification layer, and inputting the output of the classification layer into a second attention sub-module; adding the output of the second attention sub-module and the output of the classification layer to obtain the classification output of the target domain;
the method comprises the steps of taking source domain feature data as classification input features of a source domain, inputting the classification input features of the source domain, and obtaining classification output of the source domain;
calculating the distance loss between the classification input features of the target domain and those of the source domain at the feature layer, and the distance loss between the classification outputs of the target domain and the source domain at the classification layer, respectively;
Inputting the target domain feature data and the source domain feature data into a classifier to respectively obtain classification results of a source domain sample and a target domain sample; calculating the classification loss between the classification result of the source domain sample and the real label;
Constructing a loss function of a defect gear depth migration network model according to the distance loss of the classification input features of the target domain and the classification input features of the source domain on the feature layer, the distance loss of the classification output of the target domain and the classification output of the source domain on the classification layer and the classification loss between the classification result of the source domain sample and the real label, and updating network parameters of a feature extractor and a classifier in the defect gear depth migration network model in back propagation;
and inputting the unlabeled target-domain samples into the feature extractor and classifier of the trained defective gear deep migration network model to obtain the classification results of the unlabeled target-domain samples.
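The supervised classification loss between the source-domain classification results and the true labels is not spelled out above; cross-entropy is the usual choice for such a classifier, so a minimal NumPy sketch is given here under that assumption (the function name and interface are illustrative, not from the patent):

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean negative log-likelihood of the true class.
    probs: (n, num_classes) predicted class probabilities; labels: (n,) ints."""
    picked = probs[np.arange(len(labels)), labels]   # probability of the true class
    return -np.mean(np.log(np.clip(picked, eps, 1.0)))
```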
Preferably, the distance loss between the classification input features of the target domain and those of the source domain at the feature layer is expressed as:

$$L_{mmd}^{f} = \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k\!\left(z_i^t, z_j^t\right) + \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k\!\left(z_i^s, z_j^s\right) - \frac{2}{n_t n_s}\sum_{i=1}^{n_t}\sum_{j=1}^{n_s} k\!\left(z_i^t, z_j^s\right)$$

where $L_{mmd}^{f}$ is the distance loss between the classification input features $z^t$ of the target domain and the classification input features $z^s$ of the source domain at the feature layer; $n_t$ is the total number of target-domain samples in the training set, and $z_i^t$ and $z_j^t$ are the classification input features of the $i$-th and $j$-th target-domain samples, with $1 \le i \le n_t$ and $1 \le j \le n_t$; $n_s$ is the total number of source-domain samples in the training set, and $z_i^s$ and $z_j^s$ are the classification input features of the $i$-th and $j$-th source-domain samples, with $1 \le i \le n_s$ and $1 \le j \le n_s$; $k(\cdot,\cdot)$ is a kernel function.
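A distance loss of this pairwise-kernel form is the squared maximum mean discrepancy (MMD). A minimal NumPy sketch with a Gaussian kernel (the kernel type and bandwidth `sigma` are assumptions; the patent only requires some kernel function $k$):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd_loss(zt, zs, sigma=1.0):
    """Squared MMD between target features zt (n_t, d) and source features zs (n_s, d)."""
    n_t, n_s = len(zt), len(zs)
    return (gaussian_kernel(zt, zt, sigma).sum() / n_t**2
            + gaussian_kernel(zs, zs, sigma).sum() / n_s**2
            - 2.0 * gaussian_kernel(zt, zs, sigma).sum() / (n_t * n_s))
```

The same function serves for both the feature-layer and classification-layer distance losses, applied to different inputs.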
Preferably, the classification input features $z^t$ of the target domain are calculated as:

$$z^t = f^t + f^t \odot \mathrm{softmax}\!\left(W_2\,\mathrm{ReLU}\!\left(W_1 f^t\right)\right)$$

where $f^t$ is the target-domain feature data, $f^t \odot \mathrm{softmax}(W_2\,\mathrm{ReLU}(W_1 f^t))$ is the output of the target domain at the first attention sub-module, $W_1$ is the first fully connected layer in the first attention sub-module, $W_2$ is the second fully connected layer in the first attention sub-module, $\mathrm{ReLU}(\cdot)$ is the ReLU activation function, and $\mathrm{softmax}(\cdot)$ is the softmax function.
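The attention sub-module can be sketched as a residual channel-attention block. In this NumPy sketch, `w1` and `w2` stand in for the two fully connected layers; whether the attention weights multiply the input before the residual addition is an assumption consistent with the description, not a detail the patent states:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def channel_attention_block(f, w1, w2):
    """Residual channel attention: two fully connected layers (ReLU between
    them, softmax after) produce per-channel weights a; the reweighted
    features f * a are added back to the input, giving z = f + f * a."""
    a = softmax(np.maximum(f @ w1, 0.0) @ w2)  # attention weights, rows sum to 1
    return f + f * a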
Preferably, the distance loss between the classification output of the target domain and that of the source domain at the classification layer is expressed as:

$$L_{mmd}^{c} = \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k\!\left(y_i^t, y_j^t\right) + \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k\!\left(y_i^s, y_j^s\right) - \frac{2}{n_t n_s}\sum_{i=1}^{n_t}\sum_{j=1}^{n_s} k\!\left(y_i^t, y_j^s\right)$$

where $L_{mmd}^{c}$ is the distance loss between the classification output $y^t$ of the target domain and the classification output $y^s$ of the source domain at the classification layer; $n_t$ is the total number of target-domain samples in the training set, and $y_i^t$ and $y_j^t$ are the classification outputs of the $i$-th and $j$-th target-domain samples, with $1 \le i \le n_t$ and $1 \le j \le n_t$; $n_s$ is the total number of source-domain samples in the training set, and $y_i^s$ and $y_j^s$ are the classification outputs of the $i$-th and $j$-th source-domain samples, with $1 \le i \le n_s$ and $1 \le j \le n_s$; $k(\cdot,\cdot)$ is a kernel function.
Preferably, the classification output $y^t$ of the target domain is calculated as:

$$y^t = h^t + h^t \odot \mathrm{softmax}\!\left(W_2'\,\mathrm{ReLU}\!\left(W_1' h^t\right)\right)$$

where $h^t$ is the output of the target domain at the classification layer, $h^t \odot \mathrm{softmax}(W_2'\,\mathrm{ReLU}(W_1' h^t))$ is the output of the target domain at the second attention sub-module, $W_1'$ is the first fully connected layer in the second attention sub-module, $W_2'$ is the second fully connected layer in the second attention sub-module, $\mathrm{ReLU}(\cdot)$ is the ReLU activation function, and $\mathrm{softmax}(\cdot)$ is the softmax function.
Preferably, the loss function $L$ of the defective gear deep migration network model is:

$$L = L_{cls} + \lambda\left(L_{mmd}^{f} + L_{mmd}^{c}\right)$$

where $L_{cls}$ is the classification loss between the classification results of the source-domain samples and the true labels, $L_{mmd}^{f}$ is the distance loss between the classification input features of the target domain and those of the source domain at the feature layer, $L_{mmd}^{c}$ is the distance loss between the classification outputs of the target domain and the source domain at the classification layer, and $\lambda$ is the loss weight applied to the distance loss $L_{mmd}^{f}$ at the feature layer and the distance loss $L_{mmd}^{c}$ at the classification layer.
Preferably, in each iteration of training the defective gear deep migration network model, after the network parameters of the model are updated in back-propagation, the classification loss between the classification results of the source-domain samples and the true labels is dynamically optimized by combining stochastic gradient descent with a sharpness-aware minimization method, and the network parameters are updated again.
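Sharpness-aware minimization (SAM) takes a two-step update: first ascend to the worst-case parameters within a small neighborhood, then descend using the gradient computed there, so that both the loss value and the loss sharpness are reduced. A toy NumPy sketch of one such step (the quadratic loss below is illustrative only; it is not the patent's classification loss, and `lr`, `rho` are assumed hyperparameters):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization update:
    (1) perturb w toward the locally sharpest point (gradient ascent of
        radius rho), (2) apply the SGD step with the gradient taken there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * grad_fn(w + eps)             # descend using sharpened gradient

# Illustrative quadratic loss L(w) = ||w||^2 with gradient 2w.
grad = lambda w: 2.0 * w
w = np.array([3.0, -4.0])
for _ in range(50):
    w = sam_step(w, grad)
```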
Preferably, before inputting the labeled source domain samples and the unlabeled target domain samples as the training set into the feature extractor of the defect gear deep migration network model, the method further comprises:
performing significance enhancement operation on a source domain sample and a target domain sample in a training set to obtain an enhanced source domain sample set and an enhanced target domain sample set;
Inputting the enhanced source domain sample set and the enhanced target domain sample set into a feature extractor of a defect gear depth migration network model to respectively obtain enhanced source domain feature data and enhanced target domain feature data;
inputting the enhanced source-domain feature data and the enhanced target-domain feature data into the classifier to obtain the classification results of the enhanced source-domain sample set and the enhanced target-domain sample set, respectively, and calculating the class consistency loss of the enhanced source-domain sample set and the class consistency loss of the enhanced target-domain sample set through the KL divergence;
inputting the enhanced source-domain feature data and the enhanced target-domain feature data into a domain classifier, which classifies samples by domain according to the feature data, and calculating the domain consistency loss of the enhanced source-domain sample set and the domain consistency loss of the enhanced target-domain sample set through the KL divergence;
And constructing a loss function of the defective gear depth migration network model according to the field consistency loss and the category consistency loss of the enhanced source domain sample set, the field consistency loss and the category consistency loss of the enhanced target domain sample set, the distance loss of the classification input features of the enhanced target domain and the classification input features of the enhanced source domain on the feature layer, the distance loss of the classification output of the enhanced target domain and the classification output of the enhanced source domain on the classification layer, and the classification loss between the classification result of the enhanced source domain sample set and the real label, and updating the network parameters of the defective gear depth migration network model in the back propagation.
Preferably, performing the saliency enhancement operation on the source-domain samples and target-domain samples in the training set to obtain the enhanced source-domain sample set and the enhanced target-domain sample set comprises:

applying the saliency enhancement operation with two different data enhancement modes to the source-domain sample set $X_s$ in the training set, to obtain enhanced source-domain sample sets $X_{s1}$ and $X_{s2}$; the enhanced source-domain sample set is then $\{X_s, X_{s1}, X_{s2}\}$;

applying the saliency enhancement operation with two different data enhancement modes to the target-domain sample set $X_t$ in the training set, to obtain enhanced target-domain sample sets $X_{t1}$ and $X_{t2}$; the enhanced target-domain sample set is then $\{X_t, X_{t1}, X_{t2}\}$.
Preferably, the formula of the saliency enhancement operation is:

$$\tilde{X} = \lambda X + (1-\lambda)\sum_{m=1}^{M} w_m\, A_m\!\left(\frac{X-\mu}{\sigma}\right)$$

where $A_m$ denotes the different data enhancement modes, $w_m$ is the weight of enhancement mode $A_m$, $M$ is the total number of data enhancement modes, $X$ is the original sample set, $\mu$ is the mean of the original sample set $X$, $\sigma$ is the standard deviation of the original sample set $X$, $\lambda$ follows a Beta distribution and takes values in $[0,1]$, and $\tilde{X}$ is the enhanced sample set.
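One reading of the saliency enhancement, given the symbols it defines (augmentation modes, mode weights, sample mean and standard deviation, and a Beta-distributed coefficient), is an AugMix-style mix of weighted augmented views with the original sample. A NumPy sketch under that assumption (the operator set, weights, and Beta parameters are illustrative, not fixed by the patent):

```python
import numpy as np

def saliency_enhance(x, modes, weights, alpha=1.0, rng=None):
    """Mix weighted augmented views of the normalized sample back into the
    original with a Beta-distributed coefficient lam in [0, 1]."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_norm = (x - x.mean()) / (x.std() + 1e-8)      # normalize by sample mean/std
    mixed = sum(w * mode(x_norm) for w, mode in zip(weights, modes))
    return lam * x + (1.0 - lam) * mixed
```

Usage with two toy "enhancement modes" (a flip and a sign inversion, purely illustrative): `saliency_enhance(x, [lambda a: a[::-1], lambda a: -a], [0.5, 0.5])`.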
Preferably, the class consistency loss of the enhanced source-domain sample set and the class consistency loss of the enhanced target-domain sample set are calculated through the KL divergence, as:

$$L_{cc}^{s} = \frac{1}{3}\Big[\mathrm{KL}\big(P_s\,\|\,\bar{P}_s\big) + \mathrm{KL}\big(P_{s1}\,\|\,\bar{P}_s\big) + \mathrm{KL}\big(P_{s2}\,\|\,\bar{P}_s\big)\Big]$$

$$L_{cc}^{t} = \frac{1}{3}\Big[\mathrm{KL}\big(P_t\,\|\,\bar{P}_t\big) + \mathrm{KL}\big(P_{t1}\,\|\,\bar{P}_t\big) + \mathrm{KL}\big(P_{t2}\,\|\,\bar{P}_t\big)\Big]$$

where $L_{cc}^{s}$ is the class consistency loss of the enhanced source-domain sample set, $L_{cc}^{t}$ is the class consistency loss of the enhanced target-domain sample set, and $\mathrm{KL}(\cdot\|\cdot)$ denotes the KL divergence, calculated as $\mathrm{KL}(P\|Q)=\sum P\log(P/Q)$; $P_s$, $P_{s1}$ and $P_{s2}$ are the class probability distributions of the source-domain sample set $X_s$ and the enhanced source-domain sample sets $X_{s1}$ and $X_{s2}$, respectively, and $\bar{P}_s$ is the class mixture probability distribution of the enhanced source-domain sample set; $P_t$, $P_{t1}$ and $P_{t2}$ are the class probability distributions of the target-domain sample set $X_t$ and the enhanced target-domain sample sets $X_{t1}$ and $X_{t2}$, respectively, and $\bar{P}_t$ is the class mixture probability distribution of the enhanced target-domain sample set.
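A class consistency loss of this kind averages the KL divergence of each view's class distribution against their mixture, i.e. a Jensen-Shannon-style consistency term over the original sample and its two enhanced views. A NumPy sketch under that reading:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) = sum p * log(p / q), clipped for numerical safety."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q))

def consistency_loss(p, p1, p2):
    """Consistency between predictions on the original sample (p) and its
    two enhanced views (p1, p2), via their mixture distribution."""
    p_bar = (p + p1 + p2) / 3.0        # mixture distribution
    return (kl(p, p_bar) + kl(p1, p_bar) + kl(p2, p_bar)) / 3.0
```

The same function applies unchanged to the domain consistency loss, with domain-classifier outputs in place of class distributions.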
Preferably, the class probability distributions $P_s$, $P_{s1}$ and $P_{s2}$ of the source-domain sample set $X_s$ and the enhanced source-domain sample sets $X_{s1}$ and $X_{s2}$, and the class mixture probability distribution $\bar{P}_s$ of the enhanced source-domain sample set, are given by:

$$P_s = \mathrm{softmax}\big(C(X_s)\big),\quad P_{s1} = \mathrm{softmax}\big(C(X_{s1})\big),\quad P_{s2} = \mathrm{softmax}\big(C(X_{s2})\big),\quad \bar{P}_s = \frac{P_s + P_{s1} + P_{s2}}{3}$$

where $\mathrm{softmax}(\cdot)$ denotes normalizing the data by softmax, and $C(X_s)$, $C(X_{s1})$ and $C(X_{s2})$ are the outputs of the source-domain sample set $X_s$ and the enhanced source-domain sample sets $X_{s1}$ and $X_{s2}$ in the classifier of the domain-conditional channel attention module;

the class probability distributions $P_t$, $P_{t1}$ and $P_{t2}$ of the target-domain sample set $X_t$ and the enhanced target-domain sample sets $X_{t1}$ and $X_{t2}$, and the class mixture probability distribution $\bar{P}_t$ of the enhanced target-domain sample set, are given by:

$$P_t = \mathrm{softmax}\big(C(X_t)\big),\quad P_{t1} = \mathrm{softmax}\big(C(X_{t1})\big),\quad P_{t2} = \mathrm{softmax}\big(C(X_{t2})\big),\quad \bar{P}_t = \frac{P_t + P_{t1} + P_{t2}}{3}$$

where $C(X_t)$, $C(X_{t1})$ and $C(X_{t2})$ are the outputs of the target-domain sample set $X_t$ and the enhanced target-domain sample sets $X_{t1}$ and $X_{t2}$ in the classifier of the domain-conditional channel attention module.
Preferably, the domain consistency loss of the enhanced source-domain sample set and the domain consistency loss of the enhanced target-domain sample set are calculated through the KL divergence, as:

$$L_{dc}^{s} = \frac{1}{3}\Big[\mathrm{KL}\big(Q_s\,\|\,\bar{Q}_s\big) + \mathrm{KL}\big(Q_{s1}\,\|\,\bar{Q}_s\big) + \mathrm{KL}\big(Q_{s2}\,\|\,\bar{Q}_s\big)\Big]$$

$$L_{dc}^{t} = \frac{1}{3}\Big[\mathrm{KL}\big(Q_t\,\|\,\bar{Q}_t\big) + \mathrm{KL}\big(Q_{t1}\,\|\,\bar{Q}_t\big) + \mathrm{KL}\big(Q_{t2}\,\|\,\bar{Q}_t\big)\Big]$$

where $L_{dc}^{s}$ is the domain consistency loss of the enhanced source-domain sample set, $L_{dc}^{t}$ is the domain consistency loss of the enhanced target-domain sample set, and $\mathrm{KL}(\cdot\|\cdot)$ denotes the KL divergence, calculated as $\mathrm{KL}(P\|Q)=\sum P\log(P/Q)$; $Q_s$, $Q_{s1}$ and $Q_{s2}$ are the outputs of the source-domain sample set $X_s$ and the enhanced source-domain sample sets $X_{s1}$ and $X_{s2}$ in the domain classifier, and $\bar{Q}_s$ is the domain mixture probability distribution of the enhanced source-domain sample set; $Q_t$, $Q_{t1}$ and $Q_{t2}$ are the outputs of the target-domain sample set $X_t$ and the enhanced target-domain sample sets $X_{t1}$ and $X_{t2}$ in the domain classifier, and $\bar{Q}_t$ is the domain mixture probability distribution of the enhanced target-domain sample set.
Preferably, the domain mixture probability distribution $\bar{Q}_s$ of the enhanced source-domain sample set is:

$$\bar{Q}_s = \frac{Q_s + Q_{s1} + Q_{s2}}{3}$$

where $Q_s$, $Q_{s1}$ and $Q_{s2}$ are the outputs of the source-domain sample set $X_s$ and the enhanced source-domain sample sets $X_{s1}$ and $X_{s2}$ in the domain classifier;

the domain mixture probability distribution $\bar{Q}_t$ of the enhanced target-domain sample set is:

$$\bar{Q}_t = \frac{Q_t + Q_{t1} + Q_{t2}}{3}$$

where $Q_t$, $Q_{t1}$ and $Q_{t2}$ are the outputs of the target-domain sample set $X_t$ and the enhanced target-domain sample sets $X_{t1}$ and $X_{t2}$ in the domain classifier.
Preferably, the loss function $L$ of the defective gear deep migration network model is constructed according to the domain consistency loss and class consistency loss of the enhanced source-domain sample set, the domain consistency loss and class consistency loss of the enhanced target-domain sample set, the distance loss between the classification input features of the enhanced target domain and the enhanced source domain at the feature layer, the distance loss between the classification outputs of the enhanced target domain and the enhanced source domain at the classification layer, and the classification loss between the classification results of the enhanced source domain and the true labels, as:

$$L = L_{cls} + \lambda_1\left(L_{mmd}^{f} + L_{mmd}^{c}\right) + \lambda_2\left(L_{cc}^{s} + L_{cc}^{t} + L_{dc}^{s} + L_{dc}^{t}\right)$$

where $L_{cls}$ is the classification loss between the classification results of the enhanced source-domain sample set and the true labels; $L_{cc}^{s}$ and $L_{cc}^{t}$ are the class consistency losses of the enhanced source-domain and enhanced target-domain sample sets; $L_{dc}^{s}$ and $L_{dc}^{t}$ are the domain consistency losses of the enhanced source-domain and enhanced target-domain sample sets; $L_{mmd}^{f}$ is the distance loss between the classification input features of the enhanced target domain and the enhanced source domain at the feature layer; $L_{mmd}^{c}$ is the distance loss between the classification outputs of the enhanced target domain and the enhanced source domain at the classification layer; $\lambda_1$ is the loss weight of the distance losses $L_{mmd}^{f}$ and $L_{mmd}^{c}$; and $\lambda_2$ is the loss weight of the class consistency losses and domain consistency losses.
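The aggregation of the loss terms is simple weighted addition; a small sketch makes the role of the two weights explicit (the weight values and argument names are illustrative; the patent leaves the hyperparameters open):

```python
def total_loss(l_cls, l_mmd_f, l_mmd_c, l_cc_s, l_cc_t, l_dc_s, l_dc_t,
               lam1=1.0, lam2=1.0):
    """Weighted sum of the supervised, distance, and consistency terms;
    lam1 weights the two distance losses, lam2 the four consistency losses."""
    return (l_cls
            + lam1 * (l_mmd_f + l_mmd_c)
            + lam2 * (l_cc_s + l_cc_t + l_dc_s + l_dc_t))
```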
The invention also provides a defective gear classification device, comprising:
The model construction module is used for constructing a defective gear deep migration network model, which comprises a feature extractor, a domain-conditional channel attention module, and a classifier; the domain-conditional channel attention module comprises a first attention sub-module, a classification layer, and a second attention sub-module connected in series along the forward-propagation direction; the first attention sub-module and the second attention sub-module each comprise two fully connected layers, and the classification layer has the same structure and parameters as the classifier;
The feature extraction module is used for inputting a labeled source domain sample and a label-free target domain sample as training sets into a feature extractor of the depth migration network model of the defect gear to respectively obtain source domain feature data and target domain feature data;
The distance loss acquisition module is used for inputting the target domain characteristic data into a first attention sub-module of the domain condition channel attention module, and obtaining the classified input characteristics of the target domain after adding the target domain characteristic data and the output of the first attention sub-module; inputting the classified input features of the target domain into a classification layer, and inputting the output of the classification layer into a second attention sub-module; adding the output of the second attention sub-module and the output of the classification layer to obtain the classification output of the target domain; the method comprises the steps of taking source domain feature data as classification input features of a source domain, inputting the classification input features of the source domain, and obtaining classification output of the source domain; respectively calculating the distance loss of the classified input features of the target domain and the classified input features of the source domain on the feature layer, and the distance loss of the classified output of the target domain and the classified output of the source domain on the classified layer;
The classification loss acquisition module is used for inputting the target domain characteristic data and the source domain characteristic data into the classifier to respectively obtain classification results of the source domain sample and the target domain sample; calculating the classification loss between the classification result of the source domain sample and the real label;
The loss function construction module is used for constructing a loss function of the defect gear depth migration network model according to the distance loss of the classification input features of the target domain and the classification input features of the source domain on the feature layer, the distance loss of the classification output of the target domain and the classification output of the source domain on the classification layer and the classification loss between the classification result of the source domain sample and the real label, and updating network parameters of a feature extractor and a classifier in the defect gear depth migration network model in back propagation;
The classification result acquisition module is used for inputting the unlabeled target domain sample into the feature extractor and the classifier of the trained defective gear deep migration network model to obtain the classification result of the unlabeled target domain sample.
The present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of a defective gear classification method as described above.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
According to the defective gear classification method, a defective gear deep migration network model is constructed and trained with labeled source-domain samples, and a domain-conditional channel attention module is added during training; the module comprises a first attention sub-module, a classification layer, and a second attention sub-module connected in series along the forward-propagation direction. The first attention sub-module dynamically adjusts the attention paid to the classification input features so that they concentrate on regions or features important to the task; the distance loss between the target-domain and source-domain classification input features at the feature layer is then computed to domain-match the target-domain and source-domain feature data, so that both domains retain domain-specific information at the feature layer. Similarly, the second attention sub-module dynamically adjusts the attention paid to the classification outputs so that they concentrate on classification regions important to the task, and the distance loss between the target-domain and source-domain classification outputs at the classification layer is computed to domain-match the two domains' classification data, so that both domains retain domain-specific information at the classification layer.
The domain condition channel attention module can retain specific information of each domain in the training process of the defective gear depth migration network model, and is beneficial to solving the problem caused by the data distribution difference of the source domain and the target domain. According to the method, the defective gears are classified by using the defective gear deep migration network model, so that the accuracy of classifying the defective gears in the target domain is improved.
According to the defective gear classification method, during training of the defective gear depth migration network model the original source domain samples and original target domain samples are processed by a saliency enhancement technique, which enlarges the defect features and increases the number of samples, alleviating the shortage of defect samples; class consistency loss and domain consistency loss are calculated to ensure the class consistency and domain consistency of the defect samples obtained by the saliency enhancement technique. This addresses the over-fitting of the defective gear depth migration network model caused by too few samples and improves its classification performance.
According to the defective gear classification method, during training of the defective gear depth migration network model a sharpness-aware minimization method is introduced for the classification loss between the classification output of the source domain and the real labels; by optimizing the loss value and the loss sharpness simultaneously, the generalization performance of the model is enhanced, further improving the generalization capability and stability of the defective gear depth migration network model.
In conclusion, the method not only improves the precision of classifying the target domain defect gears, but also shows more stable performance in the aspect of classifying and detecting small batches of defect gears.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings, in which,
FIG. 1 is a block diagram of a domain condition channel attention module in a defective gear depth migration network model of the present invention;
FIG. 2 is a flow chart of the training of a defective gear depth migration network model according to the present invention;
FIG. 3 is a block diagram of a defective gear depth migration network model of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the invention and practice it.
Example 1
Because of the shortage of labeled samples in the target domain, the method constructs the depth migration network model of the defect gear, trains the depth migration network model of the defect gear by using labeled source domain samples and unlabeled target domain samples, expands the depth migration network model of the defect gear from the source domain to the target domain, so that the depth migration network model of the defect gear can classify the defect gear in the target domain, and specifically comprises the following steps:
A defective gear depth migration network model is constructed, comprising a feature extractor, a domain conditional channel attention module and a classifier. Referring to FIG. 1, a block diagram of the domain conditional channel attention module in the defective gear depth migration network model of the present invention, the domain conditional channel attention module comprises a first attention sub-module, a classification layer and a second attention sub-module connected in series along the forward propagation direction; the first attention sub-module and the second attention sub-module each comprise two fully connected layers, and the classification layer is consistent with the classifier in structure and parameters.
The labeled source domain samples and the unlabeled target domain samples are used as the training set and input into the feature extractor of the defective gear depth migration network model, respectively yielding the source domain feature data $f_s$ and the target domain feature data $f_t$. The source domain feature data $f_s$ and the target domain feature data $f_t$ are then input to the domain conditional channel attention module.
In order to make the input features of the classification layer focus more on the specific regions or features important to the task, namely the gear defects, the invention provides the domain conditional channel attention module. The classification layer of the domain conditional channel attention module shares parameters with the classifier, and a first attention sub-module, namely the F-layer, is connected in front of the classification layer to dynamically adjust the degree of attention the network pays to the classification input features, concentrating it on the regions or features important to the task.
The target domain feature data $f_t$ are input to the first attention sub-module of the domain conditional channel attention module to obtain the output $a_t$ of the target domain at the first attention sub-module. The output $a_t$ of the target domain at the first attention sub-module and the target domain feature data $f_t$ are added to obtain the classification input features $\tilde{f}_t$ of the target domain. The formula is:

$$a_t = \sigma\big(W_2\,\delta(W_1 f_t)\big)\otimes f_t,\qquad \tilde{f}_t = f_t + a_t$$

where $f_t$ denotes the target domain feature data, $a_t$ the output of the target domain at the first attention sub-module, $W_1$ the first fully connected layer in the first attention sub-module, $W_2$ the second fully connected layer in the first attention sub-module, $\delta(\cdot)$ the ReLU activation function, and $\sigma(\cdot)$ the softmax function.
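As an illustrative sketch only (not the patent's implementation: the weight shapes, the channel-wise gating, and the NumPy setting are assumptions), the residual attention computation described above can be written as:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_submodule(f, W1, W2):
    # gate = softmax(W2 . ReLU(W1 . f)), used to reweight the feature channels
    gate = softmax(np.maximum(f @ W1, 0.0) @ W2)
    a = gate * f            # output of the attention sub-module
    return f + a            # residual addition -> classification input features

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8))          # 4 samples, 8 feature channels (assumed sizes)
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 8))
f_tilde = attention_submodule(f, W1, W2)
```

Because the softmax gate lies in (0, 1), each channel of the result is the original feature scaled by a factor between 1 and 2, so the residual path never suppresses the raw features.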
The classification input features $\tilde{f}_t$ of the target domain are input to the classification layer $C'$ to obtain the output $y_t$ of the target domain at the classification layer. In order to make the classification output of the target domain at the classification layer concentrate more on the classification regions important to the task, the invention connects a second attention sub-module, namely the C-layer, after the classification layer of the domain conditional channel attention module, dynamically adjusting the degree of attention the network pays to the classification output so that it concentrates on the classification regions important to the task.
The output $y_t$ of the target domain at the classification layer is input to the second attention sub-module to obtain the output $b_t$ of the target domain at the second attention sub-module. The output $b_t$ of the target domain at the second attention sub-module and the output $y_t$ of the target domain at the classification layer are added to obtain the classification output $\tilde{y}_t$ of the target domain. The formula is:

$$b_t = \sigma\big(V_2\,\delta(V_1 y_t)\big)\otimes y_t,\qquad \tilde{y}_t = y_t + b_t$$

where $y_t$ denotes the output of the target domain at the classification layer, $b_t$ the output of the target domain at the second attention sub-module, $V_1$ the first fully connected layer in the second attention sub-module, $V_2$ the second fully connected layer in the second attention sub-module, $\delta(\cdot)$ the ReLU activation function, and $\sigma(\cdot)$ the softmax function.
The source domain feature data are used directly as the classification input features $\tilde{f}_s$ of the source domain and input to the classification layer $C'$ of the domain conditional channel attention module, obtaining the classification output $\tilde{y}_s$ of the source domain; the classification layer $C'$ and the classifier $C$ share parameters.
In order to perform domain matching between the target domain feature data and the source domain feature data, so that the source domain data and target domain data retain domain-specific information at the feature layer during training, the invention calculates the distance loss of the classification input features $\tilde{f}_t$ of the target domain and the classification input features $\tilde{f}_s$ of the source domain at the feature layer. The formula is:

$$L_{mmd}^{f} = \frac{1}{n_t^{2}}\sum_{i=1}^{n_t}\sum_{i'=1}^{n_t} k\big(\tilde{f}_t^{\,i},\tilde{f}_t^{\,i'}\big) - \frac{2}{n_t n_s}\sum_{i=1}^{n_t}\sum_{j=1}^{n_s} k\big(\tilde{f}_t^{\,i},\tilde{f}_s^{\,j}\big) + \frac{1}{n_s^{2}}\sum_{j=1}^{n_s}\sum_{j'=1}^{n_s} k\big(\tilde{f}_s^{\,j},\tilde{f}_s^{\,j'}\big)$$

where $L_{mmd}^{f}$ is the maximum mean discrepancy loss, i.e. the distance loss, of the classification input features of the target domain and the classification input features of the source domain at the feature layer; $\tilde{f}_t$ denotes the classification input features of the target domain, $n_t$ the total number of target domain samples in the training set, and $\tilde{f}_t^{\,i}$ and $\tilde{f}_t^{\,i'}$ the classification input features of the $i$-th and $i'$-th target domain samples; $\tilde{f}_s$ denotes the classification input features of the source domain, $n_s$ the total number of source domain samples in the training set, and $\tilde{f}_s^{\,j}$ and $\tilde{f}_s^{\,j'}$ the classification input features of the $j$-th and $j'$-th source domain samples; $k(\cdot,\cdot)$ is a kernel function.
In order to perform domain matching between the target domain classification data and the source domain classification data, so that the source domain data and target domain data retain domain-specific information at the classification layer during training, the invention calculates the distance loss of the classification output $\tilde{y}_t$ of the target domain and the classification output $\tilde{y}_s$ of the source domain at the classification layer. The formula is:

$$L_{mmd}^{c} = \frac{1}{n_t^{2}}\sum_{i=1}^{n_t}\sum_{i'=1}^{n_t} k\big(\tilde{y}_t^{\,i},\tilde{y}_t^{\,i'}\big) - \frac{2}{n_t n_s}\sum_{i=1}^{n_t}\sum_{j=1}^{n_s} k\big(\tilde{y}_t^{\,i},\tilde{y}_s^{\,j}\big) + \frac{1}{n_s^{2}}\sum_{j=1}^{n_s}\sum_{j'=1}^{n_s} k\big(\tilde{y}_s^{\,j},\tilde{y}_s^{\,j'}\big)$$

where $L_{mmd}^{c}$ is the maximum mean discrepancy loss, i.e. the distance loss, of the classification output of the target domain and the classification output of the source domain at the classification layer; $\tilde{y}_t$ denotes the classification output of the target domain, $n_t$ the total number of target domain samples in the training set, and $\tilde{y}_t^{\,i}$ and $\tilde{y}_t^{\,i'}$ the classification outputs of the $i$-th and $i'$-th target domain samples; $\tilde{y}_s$ denotes the classification output of the source domain, $n_s$ the total number of source domain samples in the training set, and $\tilde{y}_s^{\,j}$ and $\tilde{y}_s^{\,j'}$ the classification outputs of the $j$-th and $j'$-th source domain samples; $k(\cdot,\cdot)$ is a kernel function.
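The two distance losses above are instances of the same kernel maximum mean discrepancy estimator. A minimal sketch (NumPy; the Gaussian kernel, array shapes and bandwidth value are assumptions for illustration) is:

```python
import numpy as np

def gaussian_kernel(U, V, gamma=1.0):
    # pairwise Gaussian kernel matrix: k(u, v) = exp(-||u - v||^2 / (2 gamma^2))
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * gamma ** 2))

def mmd_loss(Xt, Xs, gamma=1.0):
    # biased squared-MMD estimate: mean k(t,t) - 2 mean k(t,s) + mean k(s,s)
    return (gaussian_kernel(Xt, Xt, gamma).mean()
            - 2.0 * gaussian_kernel(Xt, Xs, gamma).mean()
            + gaussian_kernel(Xs, Xs, gamma).mean())

rng = np.random.default_rng(1)
Xt = rng.normal(size=(16, 4))              # target-domain features
Xs = rng.normal(loc=3.0, size=(16, 4))     # source-domain features, shifted distribution
```

`mmd_loss` is exactly zero when both sample sets are identical and grows with the gap between the two distributions, which is what makes it usable as a domain-matching penalty.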
The domain conditional channel attention module participates in the computation only during training of the defective gear depth migration network model. It is used to calculate the distance loss of the classification input features of the target domain and the classification input features of the source domain at the feature layer, and the distance loss of the classification output of the target domain and the classification output of the source domain at the classification layer, so that domain-specific information is retained when the network model is trained. This alleviates the problem caused by the data distribution difference between the source domain and the target domain and improves the accuracy of classifying defective gears in the target domain.
The target domain feature data and the source domain feature data are input into the classifier to respectively obtain the classification results of the source domain samples and the target domain samples, and the classification loss between the classification results of the source domain samples and the real labels is calculated by cross entropy. The formula is:

$$L_{cls} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_s^{\,i}\,\log \hat{y}_s^{\,i}$$

where $L_{cls}$ is the classification loss between the classification results of the source domain samples and the real labels, $y_s^{\,i}$ the real label of the $i$-th source domain sample, $\hat{y}_s^{\,i}$ the classification result, i.e. the predicted label, of the $i$-th source domain sample, and $n_s$ the total number of source domain samples in the training set.
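As a sketch of the cross-entropy classification loss above (NumPy; the one-hot label encoding and the small numerical epsilon are assumptions):

```python
import numpy as np

def cross_entropy(y_true, y_prob, eps=1e-12):
    # mean over the batch of -sum_c y_c * log(p_c); y_true is one-hot encoded
    return float(-np.mean(np.sum(y_true * np.log(y_prob + eps), axis=1)))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
perfect = cross_entropy(y_true, y_true)                  # perfectly confident and correct
uniform = cross_entropy(y_true, np.full((2, 2), 0.5))    # maximally uncertain
```

A perfect prediction drives the loss to zero, while a uniform prediction over two classes gives log 2.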
The loss function $L$ of the defective gear depth migration network model is constructed from the distance loss of the classification input features of the target domain and the classification input features of the source domain at the feature layer, the distance loss of the classification output of the target domain and the classification output of the source domain at the classification layer, and the classification loss between the classification results of the source domain samples and the real labels. The formula is:

$$L = L_{cls} + \lambda\,\big(L_{mmd}^{f} + L_{mmd}^{c}\big)$$

where $\lambda$ is the loss weight of the distance loss $L_{mmd}^{f}$ of the classification input features of the target domain and the source domain at the feature layer and of the distance loss $L_{mmd}^{c}$ of the classification output of the target domain and the source domain at the classification layer.
And updating network parameters of the feature extractor and the classifier in the depth migration network model of the defect gear in back propagation, and updating network parameters of the classification layer in the domain condition channel attention module according to the updated network parameters of the classifier in each iteration so that the classification layer is consistent with the network parameters of the classifier.
And taking the target domain sample with the label as a test set for testing the accuracy of the trained defective gear depth migration network model.
Inputting the unlabeled target domain sample into a feature extractor of the trained defective gear deep migration network model to obtain feature data of the target domain sample, and inputting the feature data of the target domain sample into a classifier to obtain a classification result of the unlabeled target domain sample.
Example two
In practical situations, the lack of sufficient defect data leads to insufficient migration information, making it difficult to cover the various defect features, which may cause over-fitting of the defective gear depth migration network model. Therefore, in this embodiment, a saliency enhancement operation is performed on the samples in the training set to increase the number of samples and alleviate the shortage of defect samples.
In this embodiment, a flow chart for training the depth migration network model of the defective gear is shown in fig. 2, and fig. 2 is a flow chart for training the depth migration network model of the defective gear according to the present invention, specifically including:
S1, respectively selecting the same number of labeled source domain samples and unlabeled target domain samples from the gear defect data set as a training set, and then selecting the labeled target domain samples according to the proportion as a test set.
In this embodiment, a type i gear is used as a source domain, and a type ii gear is used as a target domain, to obtain a defective gear with four defects of cracks, bumps, missing teeth, and powder removal. The training set and the test set are segmented in a ratio of 5:1, but due to limited actual collected data, the segmented training set comprises 500 crack type I gear images, 142 bump type I gear images, 500 missing tooth type I gear images, 300 powder removal type I gear images, 500 crack type II gear images, 200 bump type II gear images, 300 missing tooth type II gear images and 300 powder removal type II gear images; the test set comprises 100 crack type II gear images, 50 bump type II gear images, 70 missing tooth type II gear images and 70 powder removal type II gear images.
The samples in the training set are classified according to the gear domain to construct a source domain sample set $X_s$ and a target domain sample set $X_t$.
S2, constructing a defective gear depth migration network model, the structure of which is shown in FIG. 3, a structural diagram of the defective gear depth migration network model. The source domain sample set $X_s$ and the target domain sample set $X_t$ in the training set are input into the defective gear depth migration network model, and the saliency enhancement operation is performed on the source domain sample set and the target domain sample set to obtain an enhanced source domain sample set and an enhanced target domain sample set.
The saliency enhancement operations include rotation, scaling, cropping, weighting and the like, used to generate new sample data. The formula of the operation is:

$$X_e = \lambda\,X + (1-\lambda)\sum_{i=1}^{N} w_i\, T_i\!\left(\frac{X-\mu}{\sigma}\right)$$

where $T_i$ denotes the different data enhancement modes, $w_i$ the weight of enhancement mode $T_i$, $N$ the total number of data enhancement modes, $X$ the original sample set, $\mu$ the mean of the original sample set $X$, $\sigma$ the standard deviation of the original sample set $X$, $\lambda$ a coefficient obeying the Beta distribution with value range $(0,1)$, and $X_e$ the enhanced sample set.
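A hedged sketch of the enhancement step above (NumPy; the concrete transforms, their weights, and the Beta parameters are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def saliency_enhance(X, transforms, weights, alpha=2.0, beta=2.0, rng=None):
    # X_e = lam * X + (1 - lam) * sum_i w_i * T_i((X - mu) / sigma), lam ~ Beta(alpha, beta)
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, beta)          # mixing coefficient in (0, 1)
    mu, sd = X.mean(), X.std()
    Xn = (X - mu) / sd                   # normalize before transforming
    mixed = sum(w * T(Xn) for T, w in zip(transforms, weights))
    return lam * X + (1.0 - lam) * mixed

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 16, 16))         # 3 toy "gear images"
transforms = [lambda a: np.flip(a, axis=-1),   # stand-ins for the patent's
              lambda a: np.flip(a, axis=-2)]   # rotation/scaling/cropping modes
weights = [0.5, 0.5]
X_e = saliency_enhance(X, transforms, weights)
```

Each call draws a fresh Beta-distributed mixing coefficient, so repeated application of the same transforms still yields distinct enhanced samples.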
Different data enhancement modes are respectively applied to the source domain sample set $X_s$ in the training set during the saliency enhancement operation, obtaining the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$, and then the enhanced source domain sample set $X_{se} = \{X_s, X_s^{1}, X_s^{2}\}$.

Different data enhancement modes are respectively applied to the target domain sample set $X_t$ in the training set during the saliency enhancement operation, obtaining the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$, and then the enhanced target domain sample set $X_{te} = \{X_t, X_t^{1}, X_t^{2}\}$.
S3, inputting the enhanced source domain sample set $X_{se}$ and the enhanced target domain sample set $X_{te}$ into the feature extractor $G$ of the defective gear depth migration network model, respectively obtaining the enhanced source domain feature data $f_{se}$ and the enhanced target domain feature data $f_{te}$.
S4, inputting the enhanced source domain feature data $f_{se}$ and the enhanced target domain feature data $f_{te}$ into the domain conditional channel attention module. The structure and data processing of the domain conditional channel attention module are the same as in the first embodiment, except that the inputs change from the source domain feature data $f_s$ and target domain feature data $f_t$ to the enhanced source domain feature data $f_{se}$ and enhanced target domain feature data $f_{te}$, obtaining the classification input features $\tilde{f}_{te}$ of the enhanced target domain, the classification input features $\tilde{f}_{se}$ of the enhanced source domain, the classification output $\tilde{y}_{te}$ of the enhanced target domain and the classification output $\tilde{y}_{se}$ of the enhanced source domain.
The distance loss $L_{mmd}^{f}$ of the classification input features of the enhanced target domain and the classification input features of the enhanced source domain at the feature layer is calculated. The formula is:

$$L_{mmd}^{f} = \frac{1}{n_{te}^{2}}\sum_{i=1}^{n_{te}}\sum_{i'=1}^{n_{te}} k\big(\tilde{f}_{te}^{\,i},\tilde{f}_{te}^{\,i'}\big) - \frac{2}{n_{te} n_{se}}\sum_{i=1}^{n_{te}}\sum_{j=1}^{n_{se}} k\big(\tilde{f}_{te}^{\,i},\tilde{f}_{se}^{\,j}\big) + \frac{1}{n_{se}^{2}}\sum_{j=1}^{n_{se}}\sum_{j'=1}^{n_{se}} k\big(\tilde{f}_{se}^{\,j},\tilde{f}_{se}^{\,j'}\big)$$

where $L_{mmd}^{f}$ is the maximum mean discrepancy loss of the classification input features $\tilde{f}_{te}$ of the enhanced target domain and the classification input features $\tilde{f}_{se}$ of the enhanced source domain at the feature layer; $\tilde{f}_{te}$ denotes the classification input features of the enhanced target domain, $n_{te}$ the total number of samples in the enhanced target domain sample set $X_{te}$; $\tilde{f}_{se}$ denotes the classification input features of the enhanced source domain, i.e. the enhanced source domain feature data $f_{se}$, and $n_{se}$ the total number of samples in the enhanced source domain sample set $X_{se}$; $k(\cdot,\cdot)$ is a kernel function.
Preferably, the kernel function is a Gaussian kernel, with formula $k(u,v)=\exp\!\left(-\dfrac{\lVert u-v\rVert^{2}}{2\gamma^{2}}\right)$, where $u$ and $v$ denote data points and $\gamma$ is the bandwidth of the Gaussian kernel.
The distance loss $L_{mmd}^{c}$ of the classification output of the enhanced target domain and the classification output of the enhanced source domain at the classification layer is calculated. The formula is:

$$L_{mmd}^{c} = \frac{1}{n_{te}^{2}}\sum_{i=1}^{n_{te}}\sum_{i'=1}^{n_{te}} k\big(\tilde{y}_{te}^{\,i},\tilde{y}_{te}^{\,i'}\big) - \frac{2}{n_{te} n_{se}}\sum_{i=1}^{n_{te}}\sum_{j=1}^{n_{se}} k\big(\tilde{y}_{te}^{\,i},\tilde{y}_{se}^{\,j}\big) + \frac{1}{n_{se}^{2}}\sum_{j=1}^{n_{se}}\sum_{j'=1}^{n_{se}} k\big(\tilde{y}_{se}^{\,j},\tilde{y}_{se}^{\,j'}\big)$$

where $L_{mmd}^{c}$ is the maximum mean discrepancy loss of the classification output $\tilde{y}_{te}$ of the enhanced target domain and the classification output $\tilde{y}_{se}$ of the enhanced source domain at the classification layer; $\tilde{y}_{te}$ denotes the classification output of the enhanced target domain, $n_{te}$ the total number of samples in the enhanced target domain sample set $X_{te}$; $\tilde{y}_{se}$ denotes the classification output of the enhanced source domain, $n_{se}$ the total number of samples in the enhanced source domain sample set $X_{se}$; $k(\cdot,\cdot)$ is a kernel function.
S5, inputting the target domain feature data and the source domain feature data into the classifier to respectively obtain the classification results $\hat{Y}_{se}$ of the enhanced source domain sample set and the classification results $\hat{Y}_{te}$ of the enhanced target domain sample set; where $\hat{Y}_{se}=\{\hat{y}_s,\hat{y}_s^{1},\hat{y}_s^{2}\}$ are the classification results of the source domain sample set $X_s$ and the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$ through the classifier $C$, and $\hat{Y}_{te}=\{\hat{y}_t,\hat{y}_t^{1},\hat{y}_t^{2}\}$ are the classification results of the target domain sample set $X_t$ and the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$ through the classifier $C$; the classifier $C$ and the classification layer $C'$ share parameters.
S6, respectively calculating the class consistency loss of the enhanced source domain sample set and the class consistency loss of the enhanced target domain sample set through the divergence so as to ensure the class consistency of the newly-added data information obtained through the saliency enhancement operation, wherein the method comprises the following steps:
S601, calculating the class probability distribution $p_s$ of the source domain sample set $X_s$, the class probability distributions $p_s^{1}$ and $p_s^{2}$ of the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$, and the class mixture probability distribution $\bar{p}_s$ of the enhanced source domain sample set $X_{se}$. The formula is:

$$p_s = \mathrm{softmax}(\hat{y}_s),\quad p_s^{1} = \mathrm{softmax}(\hat{y}_s^{1}),\quad p_s^{2} = \mathrm{softmax}(\hat{y}_s^{2}),\quad \bar{p}_s = \tfrac{1}{3}\big(p_s + p_s^{1} + p_s^{2}\big)$$

where $\mathrm{softmax}(\cdot)$ denotes normalization of the data by softmax.
S602, calculating the class probability distribution $p_t$ of the target domain sample set $X_t$, the class probability distributions $p_t^{1}$ and $p_t^{2}$ of the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$, and the class mixture probability distribution $\bar{p}_t$ of the enhanced target domain sample set $X_{te}$. The formula is:

$$p_t = \mathrm{softmax}(\hat{y}_t),\quad p_t^{1} = \mathrm{softmax}(\hat{y}_t^{1}),\quad p_t^{2} = \mathrm{softmax}(\hat{y}_t^{2}),\quad \bar{p}_t = \tfrac{1}{3}\big(p_t + p_t^{1} + p_t^{2}\big)$$
S603, respectively calculating the class consistency loss of the enhanced source domain sample set and the class consistency loss of the enhanced target domain sample set through the divergence, so as to ensure the class consistency of the newly added data obtained by the saliency enhancement operation. The formula is:

$$L_{cc}^{s} = \tfrac{1}{3}\big[\mathrm{KL}(p_s\,\|\,\bar{p}_s) + \mathrm{KL}(p_s^{1}\,\|\,\bar{p}_s) + \mathrm{KL}(p_s^{2}\,\|\,\bar{p}_s)\big]$$
$$L_{cc}^{t} = \tfrac{1}{3}\big[\mathrm{KL}(p_t\,\|\,\bar{p}_t) + \mathrm{KL}(p_t^{1}\,\|\,\bar{p}_t) + \mathrm{KL}(p_t^{2}\,\|\,\bar{p}_t)\big]$$

where $L_{cc}^{s}$ is the class consistency loss of the enhanced source domain sample set, $L_{cc}^{t}$ the class consistency loss of the enhanced target domain sample set, and $\mathrm{KL}(\cdot\,\|\,\cdot)$ denotes the divergence, with calculation formula $\mathrm{KL}(P\,\|\,Q)=\sum P\log(P/Q)$; $p_s$, $p_s^{1}$ and $p_s^{2}$ are respectively the class probability distributions of the source domain sample set $X_s$ and the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$, and $\bar{p}_s$ is the class mixture probability distribution of the enhanced source domain sample set $X_{se}$; $p_t$, $p_t^{1}$ and $p_t^{2}$ are respectively the class probability distributions of the target domain sample set $X_t$ and the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$, and $\bar{p}_t$ is the class mixture probability distribution of the enhanced target domain sample set $X_{te}$.
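A minimal sketch of the divergence-based consistency loss (NumPy; the row-wise probability layout and the averaging over the three views are assumptions consistent with the step above):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(P || Q) = sum_c P log(P / Q), averaged over the batch
    return float(np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)))

def consistency_loss(logits_list):
    probs = [softmax(z) for z in logits_list]
    mix = np.mean(probs, axis=0)                 # class mixture probability distribution
    return sum(kl_div(p, mix) for p in probs) / len(probs)

rng = np.random.default_rng(3)
z = rng.normal(size=(5, 4))                       # 5 samples, 4 classes
same = consistency_loss([z, z, z])                # identical predictions on all views
diff = consistency_loss([z, 2 * z, -z])           # disagreeing predictions
```

The loss vanishes when the original and enhanced views agree, and grows as their class predictions drift apart, which is exactly the consistency pressure the saliency enhancement step needs.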
S7, inputting the source domain feature data and the target domain feature data into a domain classifier $D$, which classifies the samples by domain category according to the feature data, respectively obtaining the outputs $d_t$, $d_t^{1}$ and $d_t^{2}$ of the target domain sample set $X_t$ and the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$ in the domain classifier, and the outputs $d_s$, $d_s^{1}$ and $d_s^{2}$ of the source domain sample set $X_s$ and the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$ in the domain classifier. The domain consistency loss of the enhanced source domain sample set and the domain consistency loss of the enhanced target domain sample set are then respectively calculated through the divergence, so as to ensure the domain consistency of the newly added data obtained by the saliency enhancement operation, as follows:
S701, calculating the domain mixture probability distribution $\bar{d}_s$ of the enhanced source domain sample set $X_{se}$. The formula is:

$$\bar{d}_s = \tfrac{1}{3}\big(d_s + d_s^{1} + d_s^{2}\big)$$

where $d_s$, $d_s^{1}$ and $d_s^{2}$ are respectively the outputs of the source domain sample set $X_s$ and the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$ in the domain classifier.
S702, calculating the domain mixture probability distribution $\bar{d}_t$ of the enhanced target domain sample set $X_{te}$. The formula is:

$$\bar{d}_t = \tfrac{1}{3}\big(d_t + d_t^{1} + d_t^{2}\big)$$

where $d_t$, $d_t^{1}$ and $d_t^{2}$ are respectively the outputs of the target domain sample set $X_t$ and the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$ in the domain classifier.
S703, respectively calculating the domain consistency loss of the enhanced source domain sample set and the domain consistency loss of the enhanced target domain sample set through the divergence. The formula is:

$$L_{dc}^{s} = \tfrac{1}{3}\big[\mathrm{KL}(d_s\,\|\,\bar{d}_s) + \mathrm{KL}(d_s^{1}\,\|\,\bar{d}_s) + \mathrm{KL}(d_s^{2}\,\|\,\bar{d}_s)\big]$$
$$L_{dc}^{t} = \tfrac{1}{3}\big[\mathrm{KL}(d_t\,\|\,\bar{d}_t) + \mathrm{KL}(d_t^{1}\,\|\,\bar{d}_t) + \mathrm{KL}(d_t^{2}\,\|\,\bar{d}_t)\big]$$

where $L_{dc}^{s}$ is the domain consistency loss of the enhanced source domain sample set, $L_{dc}^{t}$ the domain consistency loss of the enhanced target domain sample set, and $\mathrm{KL}(\cdot\,\|\,\cdot)$ denotes the divergence, with calculation formula $\mathrm{KL}(P\,\|\,Q)=\sum P\log(P/Q)$; $d_s$, $d_s^{1}$ and $d_s^{2}$ are respectively the outputs of the source domain sample set $X_s$ and the enhanced source domain sample sets $X_s^{1}$ and $X_s^{2}$ in the domain classifier, and $\bar{d}_s$ is the domain mixture probability distribution of the enhanced source domain sample set $X_{se}$; $d_t$, $d_t^{1}$ and $d_t^{2}$ are respectively the outputs of the target domain sample set $X_t$ and the enhanced target domain data sets $X_t^{1}$ and $X_t^{2}$ in the domain classifier, and $\bar{d}_t$ is the domain mixture probability distribution of the enhanced target domain sample set $X_{te}$.
S8, calculating the classification loss between the classification results of the enhanced source domain sample set and the real labels by cross entropy. The formula is:

$$L_{cls} = -\frac{1}{n_{se}}\sum_{i=1}^{n_{se}} y_{se}^{\,i}\,\log \hat{y}_{se}^{\,i}$$

where $L_{cls}$ is the classification loss between the classification output of the enhanced source domain and the real labels, $y_{se}^{\,i}$ the real label of the $i$-th sample in the enhanced source domain sample set $X_{se}$, $\hat{y}_{se}^{\,i}$ the classification result, i.e. the predicted label, of the $i$-th sample in $X_{se}$, and $n_{se}$ the total number of samples in the enhanced source domain sample set $X_{se}$.
S9, constructing the loss function $L$ of the defective gear depth migration network model. The formula is:

$$L = L_{cls} + \lambda_1\big(L_{mmd}^{f} + L_{mmd}^{c}\big) + \lambda_2\big(L_{cc}^{s} + L_{cc}^{t} + L_{dc}^{s} + L_{dc}^{t}\big)$$

where $L_{cls}$ is the classification loss between the classification results of the enhanced source domain sample set and the real labels, $L_{cc}^{s}$ the class consistency loss of the enhanced source domain sample set, $L_{cc}^{t}$ the class consistency loss of the enhanced target domain sample set, $L_{dc}^{s}$ the domain consistency loss of the enhanced source domain sample set, $L_{dc}^{t}$ the domain consistency loss of the enhanced target domain sample set, $L_{mmd}^{f}$ the distance loss of the classification input features of the enhanced target domain and the enhanced source domain at the feature layer, and $L_{mmd}^{c}$ the distance loss of the classification output of the enhanced target domain and the enhanced source domain at the classification layer; $\lambda_1$ is the loss weight of the distance losses $L_{mmd}^{f}$ and $L_{mmd}^{c}$, and $\lambda_2$ the loss weight of the class consistency losses and domain consistency losses.
S10, updating the network parameters of the defective gear depth migration network model by back propagation.
Because the loss function used to train the defective gear depth migration network model may have multiple local and global minima, and the geometry around these minima differs, models of different generalization capability can result, making the training process unstable. To solve this problem, after updating the network parameters in back propagation, this embodiment dynamically optimizes the classification loss between the classification output of the source domain and the real labels by stochastic gradient descent combined with the sharpness-aware minimization method, and updates the network parameters again, as follows:
A first-order Taylor expansion of the classification loss function $L$ between the classification output of the enhanced source domain and the real labels is performed to approximate the inner maximization problem, yielding the following formula:

$$\epsilon^{*}(w) \triangleq \arg\max_{\lVert\epsilon\rVert_p \le \rho} L(w+\epsilon) \;\approx\; \arg\max_{\lVert\epsilon\rVert_p \le \rho}\ \epsilon^{\top}\nabla_w L(w)$$

where $w$ is the current network parameter, $\rho$ is the step length restricting the inner maximization to a certain range, with $\lVert\epsilon\rVert_p \le \rho$, and $\nabla_w L(w)$ denotes the gradient of the classification loss function at $w$. In turn, the value $\hat{\epsilon}(w)$ solving this approximation is given by the solution of the dual norm problem:

$$\hat{\epsilon}(w) = \rho\,\mathrm{sign}\big(\nabla_w L(w)\big)\,\frac{\big|\nabla_w L(w)\big|^{\,q-1}}{\Big(\big\lVert\nabla_w L(w)\big\rVert_q^{q}\Big)^{1/p}}$$

where $p$ and $q$ are the norm orders of the dual norm problem, satisfying $1/p + 1/q = 1$.
The gradient of the loss function at the new network parameter position $w+\hat{\epsilon}(w)$ of the defective gear depth migration network model is calculated. The formula is:

$$g = \nabla_w L(w)\big|_{w+\hat{\epsilon}(w)}$$

where $g$ is the gradient of the loss function at the new network parameter position $w+\hat{\epsilon}(w)$, i.e. the gradient after a slight adjustment of the parameters.
According to the determined gradient $g$, the network parameters $w$ of the defective gear depth migration network model are updated. The updated network parameter values are:

$$w_{t+1} = w_t - \eta\, g$$

where $t$ is the iteration number, $w_t$ the network parameters of the last iteration, $w_{t+1}$ the updated network parameters, and $\eta$ the step length, with $\eta > 0$.
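For the common case $p = q = 2$, the perturbation reduces to $\rho\,\nabla_w L / \lVert\nabla_w L\rVert_2$, and the two-step update can be sketched as follows (NumPy; the toy quadratic loss and the step sizes are assumptions for illustration, not the patent's values):

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, eta=0.1):
    # 1) ascend to the sharpest nearby point: eps = rho * g / ||g||_2
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # 2) descend using the gradient evaluated at the perturbed parameters
    g_adv = grad_fn(w + eps)
    return w - eta * g_adv

# toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w (illustrative only)
grad = lambda w: w
w0 = np.array([2.0, -1.0])
w1 = sam_step(w0, grad)
```

Evaluating the descent gradient at the perturbed point, rather than at the current parameters, is what penalizes sharp minima while still decreasing the loss value.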
During training of the defective gear depth migration network model, the original source domain samples and original target domain samples are processed by the saliency enhancement technique to enlarge the defect features and increase the number of samples, alleviating the shortage of defect samples; class consistency loss and domain consistency loss are calculated to ensure the class consistency and domain consistency of the defect samples obtained by the saliency enhancement technique, addressing the over-fitting of the defective gear depth migration network model caused by too few samples and improving its classification performance. Furthermore, the sharpness-aware minimization method is introduced for the classification loss between the classification output of the source domain and the real labels; by optimizing the loss value and the loss sharpness simultaneously, the generalization performance of the defective gear depth migration network model is enhanced, further improving its generalization capability and stability.
Example III
Based on the defective gear classification methods in the first and second embodiments, the present embodiment provides a defective gear classification device, including:
The model construction module is used for constructing a defective gear depth migration network model, which comprises a feature extractor, a domain conditional channel attention module and a classifier; the domain conditional channel attention module comprises a first attention sub-module, a classification layer and a second attention sub-module connected in series along the forward propagation direction; the first attention sub-module and the second attention sub-module each comprise two fully connected layers, and the classification layer is consistent with the classifier in structure and parameters;
The feature extraction module is used for inputting a labeled source domain sample and a label-free target domain sample as training sets into a feature extractor of the depth migration network model of the defect gear to respectively obtain source domain feature data and target domain feature data;
The distance loss acquisition module is used for inputting the target domain feature data into the first attention sub-module of the domain conditional channel attention module, and adding the target domain feature data and the output of the first attention sub-module to obtain the classification input features of the target domain; inputting the classification input features of the target domain into the classification layer, and inputting the output of the classification layer into the second attention sub-module; adding the output of the second attention sub-module and the output of the classification layer to obtain the classification output of the target domain; taking the source domain feature data as the classification input features of the source domain and inputting them into the classification layer to obtain the classification output of the source domain; and respectively calculating the distance loss of the classification input features of the target domain and the classification input features of the source domain at the feature layer, and the distance loss of the classification output of the target domain and the classification output of the source domain at the classification layer;
The classification loss acquisition module is used for inputting the target domain characteristic data and the source domain characteristic data into the classifier to respectively obtain classification results of the source domain sample and the target domain sample; calculating the classification loss between the classification result of the source domain sample and the real label;
The loss function construction module is used for constructing a loss function of the defect gear depth migration network model according to the distance loss of the classification input features of the target domain and the classification input features of the source domain on the feature layer, the distance loss of the classification output of the target domain and the classification output of the source domain on the classification layer and the classification loss between the classification result of the source domain sample and the real label, and updating network parameters of a feature extractor and a classifier in the defect gear depth migration network model in back propagation;
The classification result acquisition module is used for inputting the unlabeled target domain sample into the feature extractor and the classifier of the trained defective gear deep migration network model to obtain the classification result of the unlabeled target domain sample.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the defective gear classification method described above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and do not limit the embodiments. Other variations and modifications of the present invention will be apparent to those of ordinary skill in the art in light of the foregoing description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom are contemplated as falling within the scope of the present invention.

Claims (17)

1. A defective gear classifying method, comprising:
Constructing a defect gear depth migration network model, wherein the defect gear depth migration network model comprises a feature extractor, a domain condition channel attention module and a classifier; the domain condition channel attention module comprises a first attention sub-module, a classification layer and a second attention sub-module which are sequentially connected in series along the forward propagation direction; the first attention sub-module and the second attention sub-module each comprise two fully connected layers, and the classification layer is identical in structure and parameters to the classifier;
inputting a labeled source domain sample and a label-free target domain sample as training sets into a feature extractor of a defect gear depth migration network model to respectively obtain source domain feature data and target domain feature data;
inputting the target domain feature data into a first attention sub-module of a domain condition channel attention module, and adding the target domain feature data and the output of the first attention sub-module to obtain the classified input features of the target domain; inputting the classified input features of the target domain into a classification layer, and inputting the output of the classification layer into a second attention sub-module; adding the output of the second attention sub-module and the output of the classification layer to obtain the classification output of the target domain;
taking the source domain feature data as the classification input features of the source domain, inputting them into the classification layer, and obtaining the classification output of the source domain;
Respectively calculating the distance loss of the classified input features of the target domain and the classified input features of the source domain on the feature layer, and the distance loss of the classified output of the target domain and the classified output of the source domain on the classified layer;
Inputting the target domain feature data and the source domain feature data into a classifier to respectively obtain classification results of a source domain sample and a target domain sample; calculating the classification loss between the classification result of the source domain sample and the real label;
Constructing a loss function of a defect gear depth migration network model according to the distance loss of the classification input features of the target domain and the classification input features of the source domain on the feature layer, the distance loss of the classification output of the target domain and the classification output of the source domain on the classification layer and the classification loss between the classification result of the source domain sample and the real label, and updating network parameters of a feature extractor and a classifier in the defect gear depth migration network model in back propagation;
And inputting the unlabeled target domain sample into a feature extractor and a classifier of the trained defective gear deep migration network model to obtain a classification result of the unlabeled target domain sample.
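The forward pass recited in claim 1 can be sketched end to end. The following is a minimal NumPy illustration, not the patented implementation: all layer sizes are invented, each attention sub-module is taken as two fully connected layers with a ReLU between them and a softmax at the end, and "adding ... and the output" is read as a residual addition.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer."""
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_submodule(x, w1, b1, w2, b2):
    """Two fully connected layers with ReLU between and softmax at the end
    (the structure recited for both attention sub-modules)."""
    return softmax(dense(relu(dense(x, w1, b1)), w2, b2))

# Hypothetical sizes: 8 target-domain samples, 16-dim features, 4 defect classes.
n, d, c = 8, 16, 4
f_t = rng.normal(size=(n, d))                       # target domain feature data

# First attention sub-module parameters (d -> d).
w1a, b1a = rng.normal(size=(d, d)) * 0.1, np.zeros(d)
w2a, b2a = rng.normal(size=(d, d)) * 0.1, np.zeros(d)
# Classification layer (same structure/parameters as the classifier), d -> c.
wc, bc = rng.normal(size=(d, c)) * 0.1, np.zeros(c)
# Second attention sub-module parameters (c -> c).
w1b, b1b = rng.normal(size=(c, c)) * 0.1, np.zeros(c)
w2b, b2b = rng.normal(size=(c, c)) * 0.1, np.zeros(c)

# Classification input features: feature data + first sub-module output.
F_t = f_t + attention_submodule(f_t, w1a, b1a, w2a, b2a)
# Classification layer output, then second sub-module, then residual add.
g_t = dense(F_t, wc, bc)
G_t = g_t + attention_submodule(g_t, w1b, b1b, w2b, b2b)

print(F_t.shape, G_t.shape)   # (8, 16) (8, 4)
```

On this reading, the source domain features bypass both attention sub-modules and go straight to the classification layer, which is what makes the channel attention domain-conditional.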
2. The method according to claim 1, wherein the expression of the distance loss between the classification input features of the target domain and the classification input features of the source domain at the feature layer is:

$$D_F=\frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}k\left(F_i^t,F_j^t\right)+\frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}k\left(F_i^s,F_j^s\right)-\frac{2}{n_t n_s}\sum_{i=1}^{n_t}\sum_{j=1}^{n_s}k\left(F_i^t,F_j^s\right)$$

wherein $D_F$ is the distance loss between the classification input features $F^t$ of the target domain and the classification input features $F^s$ of the source domain at the feature layer; $n_t$ is the total number of target domain samples in the training set; $F_i^t$ and $F_j^t$ are the $i$-th and $j$-th classification input features of the target domain, with $1\le i\le n_t$ and $1\le j\le n_t$; $n_s$ is the total number of source domain samples in the training set; $F_i^s$ and $F_j^s$ are the $i$-th and $j$-th classification input features of the source domain, with $1\le i\le n_s$ and $1\le j\le n_s$; and $k(\cdot,\cdot)$ is a kernel function.
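The kernel-based distance in claim 2 is the standard (biased) squared maximum mean discrepancy estimate. A NumPy sketch with a Gaussian kernel follows; the kernel choice and bandwidth are assumptions, since the claim only requires some kernel function $k$:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) between all row pairs."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(ft, fs, sigma=1.0):
    """Squared maximum mean discrepancy between two feature sets:
    mean within-target kernel + mean within-source kernel - 2 * cross kernel."""
    ktt = gaussian_kernel(ft, ft, sigma).mean()
    kss = gaussian_kernel(fs, fs, sigma).mean()
    kts = gaussian_kernel(ft, fs, sigma).mean()
    return ktt + kss - 2 * kts

rng = np.random.default_rng(1)
a = rng.normal(size=(32, 8))            # stand-in target-domain features
b = rng.normal(loc=2.0, size=(32, 8))   # shifted stand-in source-domain features
print(mmd2(a, a))                        # identical sets -> 0.0
print(mmd2(a, b) > mmd2(a, a + 0.01))    # larger shift -> larger discrepancy
```

In training, this quantity would be computed once on the classification input features ($D_F$) and once on the classification outputs ($D_G$).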
3. The defective gear classifying method according to claim 2, wherein the classification input features $F^t$ of the target domain are calculated as:

$$F^t=f^t+a^t,\qquad a^t=\sigma\left(W_2\,\delta\left(W_1 f^t\right)\right)$$

wherein $f^t$ is the target domain feature data; $a^t$ is the output of the target domain at the first attention sub-module; $W_1$ is the first fully connected layer in the first attention sub-module; $W_2$ is the second fully connected layer in the first attention sub-module; $\delta$ is the ReLU activation function; and $\sigma$ is the softmax function.
4. The method of claim 1, wherein the expression of the distance loss between the classification output of the target domain and the classification output of the source domain at the classification layer is:

$$D_G=\frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}k\left(G_i^t,G_j^t\right)+\frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}k\left(G_i^s,G_j^s\right)-\frac{2}{n_t n_s}\sum_{i=1}^{n_t}\sum_{j=1}^{n_s}k\left(G_i^t,G_j^s\right)$$

wherein $D_G$ is the distance loss between the classification output $G^t$ of the target domain and the classification output $G^s$ of the source domain at the classification layer; $n_t$ is the total number of target domain samples in the training set; $G_i^t$ and $G_j^t$ are the $i$-th and $j$-th classification outputs of the target domain, with $1\le i\le n_t$ and $1\le j\le n_t$; $n_s$ is the total number of source domain samples in the training set; $G_i^s$ and $G_j^s$ are the $i$-th and $j$-th classification outputs of the source domain, with $1\le i\le n_s$ and $1\le j\le n_s$; and $k(\cdot,\cdot)$ is a kernel function.
5. The method of claim 4, wherein the classification output $G^t$ of the target domain is calculated as:

$$G^t=g^t+b^t,\qquad b^t=\sigma\left(W_2'\,\delta\left(W_1' g^t\right)\right)$$

wherein $g^t$ is the output of the target domain at the classification layer; $b^t$ is the output of the target domain at the second attention sub-module; $W_1'$ is the first fully connected layer in the second attention sub-module; $W_2'$ is the second fully connected layer in the second attention sub-module; $\delta$ is the ReLU activation function; and $\sigma$ is the softmax function.
6. The method of claim 1, wherein the loss function $L$ of the defect gear depth migration network model is formulated as:

$$L=L_{cls}+\lambda\left(D_F+D_G\right)$$

wherein $L_{cls}$ is the classification loss between the classification result of the source domain samples and the real labels; $D_F$ is the distance loss of the classification input features of the target domain and of the source domain at the feature layer; $D_G$ is the distance loss of the classification output of the target domain and of the source domain at the classification layer; and $\lambda$ is the loss weight applied to $D_F$ and $D_G$.
7. The method for classifying defective gears according to claim 1, wherein, in each iteration of training the defective gear depth migration network model, after the network parameters of the model are updated in back propagation, the classification loss between the classification result of the source domain samples and the real labels is further optimized dynamically by stochastic gradient descent combined with sharpness-aware minimization, and the network parameters are updated again.
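The optimization in claim 7, stochastic gradient descent combined with sharpness-aware minimization (SAM), first ascends to a worst-case point within a small ball around the current weights and then descends using the gradient taken there. Below is a toy NumPy sketch on a stand-in least-squares loss; the learning rate, the radius `rho`, and the loss itself are illustrative assumptions, not the patented training setup.

```python
import numpy as np

def loss(w, X, y):
    """Simple least-squares loss used as a stand-in for the classification loss."""
    r = X @ w - y
    return 0.5 * (r ** 2).mean()

def grad(w, X, y):
    """Gradient of the stand-in loss with respect to the weights."""
    return X.T @ (X @ w - y) / len(y)

def sam_sgd_step(w, X, y, lr=0.1, rho=0.05):
    """One SGD step with sharpness-aware minimization: perturb the weights to
    the (approximate) worst case within an rho-ball, then apply the gradient
    evaluated at that perturbed point."""
    g = grad(w, X, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent direction, norm rho
    g_sam = grad(w + eps, X, y)                   # gradient at perturbed weights
    return w - lr * g_sam

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true
w = np.zeros(4)
for _ in range(200):
    w = sam_sgd_step(w, X, y)
print(loss(w, X, y))   # converges close to the minimum
```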
8. The method of claim 1, wherein before inputting the labeled source domain samples and the unlabeled target domain samples as the training set into the feature extractor of the deep migration network model of the defective gear, further comprising:
performing a saliency enhancement operation on the source domain samples and target domain samples in the training set to obtain an enhanced source domain sample set and an enhanced target domain sample set;
Inputting the enhanced source domain sample set and the enhanced target domain sample set into a feature extractor of a defect gear depth migration network model to respectively obtain enhanced source domain feature data and enhanced target domain feature data;
inputting the enhanced source domain characteristic data and the enhanced target domain characteristic data into a classifier to respectively obtain classification results of an enhanced source domain sample set and an enhanced target domain sample set, and respectively calculating the class consistency loss of the enhanced source domain sample set and the class consistency loss of the enhanced target domain sample set through the divergence;
Inputting the enhanced source domain characteristic data and the enhanced target domain characteristic data into a domain classifier, classifying samples according to domain categories according to the characteristic data, and respectively calculating the domain consistency loss of an enhanced source domain sample set and the domain consistency loss of an enhanced target domain sample set through the divergence;
And constructing a loss function of the defective gear depth migration network model according to the field consistency loss and the category consistency loss of the enhanced source domain sample set, the field consistency loss and the category consistency loss of the enhanced target domain sample set, the distance loss of the classification input features of the enhanced target domain and the classification input features of the enhanced source domain on the feature layer, the distance loss of the classification output of the enhanced target domain and the classification output of the enhanced source domain on the classification layer, and the classification loss between the classification result of the enhanced source domain sample set and the real label, and updating the network parameters of the defective gear depth migration network model in the back propagation.
9. The method of claim 8, wherein performing a saliency enhancement operation on source domain samples and target domain samples in a training set to obtain an enhanced source domain sample set and an enhanced target domain sample set, comprises:
performing the saliency enhancement operation on the source domain sample set $X^s$ in the training set with different data enhancement modes, respectively, to obtain enhanced sample sets $X^{s1}$ and $X^{s2}$, and forming the enhanced source domain sample set $\hat X^s=\{X^s,X^{s1},X^{s2}\}$;
performing the saliency enhancement operation on the target domain sample set $X^t$ in the training set with different data enhancement modes, respectively, to obtain enhanced data sets $X^{t1}$ and $X^{t2}$, and forming the enhanced target domain sample set $\hat X^t=\{X^t,X^{t1},X^{t2}\}$.
10. The method of claim 9, wherein the saliency enhancement operation is formulated as:

$$\hat X=\lambda X+(1-\lambda)\sum_{m=1}^{M}w_m\,T_m\!\left(\frac{X-\mu}{\sigma}\right)$$

wherein $T_m$ is the $m$-th data enhancement mode; $w_m$ is the weight of enhancement mode $T_m$; $M$ is the total number of data enhancement modes; $X$ is the original sample set; $\mu$ is the mean of the original sample set $X$; $\sigma$ is the standard deviation of the original sample set $X$; $\lambda$ obeys a Beta distribution, with values in the range $(0,1)$; and $\hat X$ is the enhanced sample set.
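Since the original formula image is not reproduced in the text, the sketch below follows only the symbols claim 10 defines (enhancement modes $T_m$ with weights $w_m$, normalization by the sample mean and standard deviation, and a mixing coefficient $\lambda$ drawn from a Beta distribution), combined in an AugMix-style mixing rule; the concrete enhancement modes and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "enhancement modes" standing in for T_m in the claim.
modes = [
    lambda x: x + rng.normal(scale=0.05, size=x.shape),  # small jitter
    lambda x: np.flip(x, axis=-1),                        # signal reversal
]
weights = np.array([0.5, 0.5])   # assumed w_m, summing to 1

def saliency_enhance(X, lam=None):
    """Mix the original samples with a weighted sum of enhancement modes
    applied to the normalized samples; lam ~ Beta(1, 1) as recited."""
    if lam is None:
        lam = rng.beta(1.0, 1.0)              # lam in (0, 1)
    Xn = (X - X.mean()) / (X.std() + 1e-12)   # normalize by mean/std of X
    aug = sum(w * T(Xn) for w, T in zip(weights, modes))
    return lam * X + (1 - lam) * aug

X = rng.normal(size=(16, 32))          # 16 one-dimensional gear signals
X_hat = saliency_enhance(X, lam=0.7)   # fixed lam for reproducibility
print(X_hat.shape)   # (16, 32)
```

Running the operation twice with different mode choices would give the two enhanced sets $X^{s1}$ and $X^{s2}$ of claim 9.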
11. The method of claim 9, wherein the class consistency loss of the enhanced source domain sample set and the class consistency loss of the enhanced target domain sample set are respectively calculated through the KL divergence as:

$$L_{cc}^s=\frac{1}{3}\left[KL\!\left(p^s\,\|\,\bar p^s\right)+KL\!\left(p^{s1}\,\|\,\bar p^s\right)+KL\!\left(p^{s2}\,\|\,\bar p^s\right)\right]$$

$$L_{cc}^t=\frac{1}{3}\left[KL\!\left(p^t\,\|\,\bar p^t\right)+KL\!\left(p^{t1}\,\|\,\bar p^t\right)+KL\!\left(p^{t2}\,\|\,\bar p^t\right)\right]$$

wherein $L_{cc}^s$ is the class consistency loss of the enhanced source domain sample set; $L_{cc}^t$ is the class consistency loss of the enhanced target domain sample set; $KL(P\,\|\,Q)=\sum P\log(P/Q)$ is the KL divergence; $p^s$, $p^{s1}$ and $p^{s2}$ are the class probability distributions of the source domain sample set $X^s$ and the enhanced sample sets $X^{s1}$ and $X^{s2}$, respectively; $\bar p^s$ is the class mix probability distribution of the enhanced source domain sample set $\hat X^s$; $p^t$, $p^{t1}$ and $p^{t2}$ are the class probability distributions of the target domain sample set $X^t$ and the enhanced data sets $X^{t1}$ and $X^{t2}$, respectively; and $\bar p^t$ is the class mix probability distribution of the enhanced target domain sample set $\hat X^t$.
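A consistency loss of this Jensen-Shannon form, measuring each view's class probability distribution against the views' mixture with the KL divergence, can be sketched in NumPy; the three-view averaging below is an assumption consistent with the symbols of claims 11 and 12:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL(P || Q) = sum P log(P / Q), summed over classes, averaged over samples."""
    p, q = p + eps, q + eps
    return (p * np.log(p / q)).sum(axis=-1).mean()

def consistency_loss(p0, p1, p2):
    """Measure each view's class distribution against the three-view mixture."""
    p_bar = (p0 + p1 + p2) / 3.0
    return (kl(p0, p_bar) + kl(p1, p_bar) + kl(p2, p_bar)) / 3.0

rng = np.random.default_rng(4)
z = rng.normal(size=(8, 4))            # classifier logits: 8 samples, 4 classes
p = softmax(z)                         # original view
p1 = softmax(z + rng.normal(scale=0.5, size=z.shape))   # first enhanced view
p2 = softmax(z + rng.normal(scale=0.5, size=z.shape))   # second enhanced view

print(consistency_loss(p, p, p))        # identical views -> 0.0
print(consistency_loss(p, p1, p2) > 0)  # disagreeing views -> positive
```

The domain consistency loss of claim 13 has the same shape, with domain-classifier outputs in place of class probabilities.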
12. The method of claim 11, wherein the class probability distributions $p^s$, $p^{s1}$ and $p^{s2}$ of the source domain sample set $X^s$ and the enhanced sample sets $X^{s1}$ and $X^{s2}$, and the class mix probability distribution $\bar p^s$ of the enhanced source domain sample set $\hat X^s$, are formulated as:

$$p^s=\mathrm{softmax}\left(z^s\right),\quad p^{s1}=\mathrm{softmax}\left(z^{s1}\right),\quad p^{s2}=\mathrm{softmax}\left(z^{s2}\right),\quad \bar p^s=\frac{1}{3}\left(p^s+p^{s1}+p^{s2}\right)$$

wherein $\mathrm{softmax}(\cdot)$ represents data normalized by softmax, and $z^s$, $z^{s1}$ and $z^{s2}$ are respectively the outputs of the source domain sample set $X^s$ and the enhanced sample sets $X^{s1}$ and $X^{s2}$ in the classifier;
the class probability distributions $p^t$, $p^{t1}$ and $p^{t2}$ of the target domain sample set $X^t$ and the enhanced data sets $X^{t1}$ and $X^{t2}$, and the class mix probability distribution $\bar p^t$ of the enhanced target domain sample set $\hat X^t$, are formulated as:

$$p^t=\mathrm{softmax}\left(z^t\right),\quad p^{t1}=\mathrm{softmax}\left(z^{t1}\right),\quad p^{t2}=\mathrm{softmax}\left(z^{t2}\right),\quad \bar p^t=\frac{1}{3}\left(p^t+p^{t1}+p^{t2}\right)$$

wherein $z^t$, $z^{t1}$ and $z^{t2}$ are respectively the outputs of the target domain sample set $X^t$ and the enhanced data sets $X^{t1}$ and $X^{t2}$ in the classifier.
13. The method of claim 9, wherein the domain consistency loss of the enhanced source domain sample set and the domain consistency loss of the enhanced target domain sample set are respectively calculated through the KL divergence as:

$$L_{dc}^s=\frac{1}{3}\left[KL\!\left(d^s\,\|\,\bar d^s\right)+KL\!\left(d^{s1}\,\|\,\bar d^s\right)+KL\!\left(d^{s2}\,\|\,\bar d^s\right)\right]$$

$$L_{dc}^t=\frac{1}{3}\left[KL\!\left(d^t\,\|\,\bar d^t\right)+KL\!\left(d^{t1}\,\|\,\bar d^t\right)+KL\!\left(d^{t2}\,\|\,\bar d^t\right)\right]$$

wherein $L_{dc}^s$ is the domain consistency loss of the enhanced source domain sample set; $L_{dc}^t$ is the domain consistency loss of the enhanced target domain sample set; $KL(P\,\|\,Q)=\sum P\log(P/Q)$ is the KL divergence; $d^s$, $d^{s1}$ and $d^{s2}$ are respectively the outputs of the source domain sample set $X^s$ and the enhanced sample sets $X^{s1}$ and $X^{s2}$ in the domain classifier; $\bar d^s$ is the domain mix probability distribution of the enhanced source domain sample set $\hat X^s$; $d^t$, $d^{t1}$ and $d^{t2}$ are respectively the outputs of the target domain sample set $X^t$ and the enhanced data sets $X^{t1}$ and $X^{t2}$ in the domain classifier; and $\bar d^t$ is the domain mix probability distribution of the enhanced target domain sample set $\hat X^t$.
14. The method of claim 13, wherein the domain mix probability distribution $\bar d^s$ of the enhanced source domain sample set $\hat X^s$ and the domain mix probability distribution $\bar d^t$ of the enhanced target domain sample set $\hat X^t$ are formulated as:

$$\bar d^s=\frac{1}{3}\left(d^s+d^{s1}+d^{s2}\right),\qquad \bar d^t=\frac{1}{3}\left(d^t+d^{t1}+d^{t2}\right)$$

wherein $d^s$, $d^{s1}$ and $d^{s2}$ are respectively the outputs of the source domain sample set $X^s$ and the enhanced sample sets $X^{s1}$ and $X^{s2}$ in the domain classifier, and $d^t$, $d^{t1}$ and $d^{t2}$ are respectively the outputs of the target domain sample set $X^t$ and the enhanced data sets $X^{t1}$ and $X^{t2}$ in the domain classifier.
15. The method according to claim 8, wherein the loss function $L$ of the defective gear depth migration network model, constructed according to the domain consistency loss and class consistency loss of the enhanced source domain sample set, the domain consistency loss and class consistency loss of the enhanced target domain sample set, the distance loss of the classification input features of the enhanced target domain and the enhanced source domain at the feature layer, the distance loss of the classification output of the enhanced target domain and the enhanced source domain at the classification layer, and the classification loss between the classification result of the enhanced source domain sample set and the real labels, is formulated as:

$$L=L_{cls}+\lambda\left(D_F+D_G\right)+\beta\left(L_{cc}^s+L_{cc}^t+L_{dc}^s+L_{dc}^t\right)$$

wherein $L_{cls}$ is the classification loss between the classification result of the enhanced source domain sample set and the real labels; $L_{cc}^s$ is the class consistency loss of the enhanced source domain sample set; $L_{cc}^t$ is the class consistency loss of the enhanced target domain sample set; $L_{dc}^s$ is the domain consistency loss of the enhanced source domain sample set; $L_{dc}^t$ is the domain consistency loss of the enhanced target domain sample set; $D_F$ is the distance loss of the classification input features of the enhanced target domain and the enhanced source domain at the feature layer; $D_G$ is the distance loss of the classification output of the enhanced target domain and the enhanced source domain at the classification layer; $\lambda$ is the loss weight of $D_F$ and $D_G$; and $\beta$ is the loss weight of the class consistency losses and domain consistency losses.
16. A defective gear sorting apparatus, comprising:
The model construction module is used for constructing a defective gear depth migration network model, wherein the defective gear depth migration network model comprises a feature extractor, a domain condition channel attention module and a classifier; the domain condition channel attention module comprises a first attention sub-module, a classification layer and a second attention sub-module which are sequentially connected in series along the forward propagation direction; the first attention sub-module and the second attention sub-module each comprise two fully connected layers, and the classification layer is identical in structure and parameters to the classifier;
The feature extraction module is used for inputting a labeled source domain sample and a label-free target domain sample as training sets into a feature extractor of the depth migration network model of the defect gear to respectively obtain source domain feature data and target domain feature data;
The distance loss acquisition module is used for inputting the target domain feature data into the first attention sub-module of the domain condition channel attention module, and adding the target domain feature data and the output of the first attention sub-module to obtain the classification input features of the target domain; inputting the classification input features of the target domain into the classification layer, and inputting the output of the classification layer into the second attention sub-module; adding the output of the second attention sub-module and the output of the classification layer to obtain the classification output of the target domain; taking the source domain feature data as the classification input features of the source domain, inputting them into the classification layer, and obtaining the classification output of the source domain; and respectively calculating the distance loss between the classification input features of the target domain and of the source domain at the feature layer, and the distance loss between the classification output of the target domain and of the source domain at the classification layer;
The classification loss acquisition module is used for inputting the target domain characteristic data and the source domain characteristic data into the classifier to respectively obtain classification results of the source domain sample and the target domain sample; calculating the classification loss between the classification result of the source domain sample and the real label;
The loss function construction module is used for constructing a loss function of the defect gear depth migration network model according to the distance loss of the classification input features of the target domain and the classification input features of the source domain on the feature layer, the distance loss of the classification output of the target domain and the classification output of the source domain on the classification layer and the classification loss between the classification result of the source domain sample and the real label, and updating network parameters of a feature extractor and a classifier in the defect gear depth migration network model in back propagation;
The classification result acquisition module is used for inputting the unlabeled target domain sample into the feature extractor and the classifier of the trained defective gear deep migration network model to obtain the classification result of the unlabeled target domain sample.
17. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of a defective gear classifying method according to any of claims 1 to 15.
CN202410293547.9A 2024-03-14 2024-03-14 Defective gear classification method, device and computer readable storage medium Active CN117892203B (en)

Publications (2)

Publication Number Publication Date
CN117892203A true CN117892203A (en) 2024-04-16
CN117892203B CN117892203B (en) 2024-06-07


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210390355A1 (en) * 2020-06-13 2021-12-16 Zhejiang University Image classification method based on reliable weighted optimal transport (rwot)
CN115049627A (en) * 2022-06-21 2022-09-13 江南大学 Steel surface defect detection method and system based on domain self-adaptive deep migration network
CN116227578A (en) * 2022-12-13 2023-06-06 浙江工业大学 Unsupervised domain adaptation method for passive domain data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG SIYU: "Research on Defect Detection Methods for Precision Parts Based on Deep Domain Adaptive Networks", Jiangnan University, 31 December 2023 (2023-12-31), pages 1-108 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant