CN118194165A - Assembly robot fault diagnosis feature transformation method based on transfer learning - Google Patents

Assembly robot fault diagnosis feature transformation method based on transfer learning

Info

Publication number
CN118194165A
Authority
CN
China
Prior art keywords
representing
moment
domain
fault
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410613776.4A
Other languages
Chinese (zh)
Other versions
CN118194165B (en)
Inventor
毛建旭
李卓维
贺文斌
刘彩苹
王耀南
张辉
朱青
李哲
方遒
冯运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202410613776.4A priority Critical patent/CN118194165B/en
Publication of CN118194165A publication Critical patent/CN118194165A/en
Application granted granted Critical
Publication of CN118194165B publication Critical patent/CN118194165B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/096 Transfer learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)

Abstract

The invention relates to an assembly robot fault diagnosis feature transformation method based on transfer learning, which specifically comprises the following steps. S1: extracting original features: collecting one-dimensional data and labeling the collected data with the corresponding normal or fault labels; S2: feature transformation; S3: feature fusion: fusing the features obtained from the feature transformation in step S2; S4: fault diagnosis using a transfer learning network: dividing the feature-fused data from step S3 into labeled source domain data and unlabeled target domain data, taking all the source domain data and part of the target domain data as the training set and the remaining target domain data as the verification set, and obtaining the similarity probability of a target domain sample relative to the source domain samples through transfer learning; S5: judging whether the accuracy and loss of the transfer learning algorithm reach the preset precision. The invention can solve the problem of insufficient samples, improve algorithm performance and improve practicability.

Description

Assembly robot fault diagnosis feature transformation method based on transfer learning
Technical Field
The invention relates to the technical field of robot fault diagnosis, in particular to an assembly robot fault diagnosis feature transformation method based on transfer learning.
Background
An assembly robot is a robot specifically designed to perform assembly tasks. It is widely used in the manufacturing industry to assemble parts into final products and generally consists of a mechanical arm, sensors, actuators, a control system, a human-machine interface, and the like.
In actual industrial production, the working environment is complex and harsh and the robot body structure is complicated, which places high demands on real-time, efficient fault diagnosis of the assembly robot. Fault diagnosis techniques for assembly robots are evolving towards greater intelligence and automation: data fusion and multi-modal methods make it easier to collect and process large numbers of labeled fault signals; increasingly powerful deep learning methods make data-driven fault diagnosis more and more accurate; and the continued development of transfer learning enables cross-dataset diagnosis between different data sets.
Deep learning has significant application value for fault diagnosis of assembly robots deployed on industrial sites. Deep learning algorithms typically require a large number of labeled samples for training; however, on-site hardware conditions may limit the collection of rich fault samples, resulting in an insufficient number of samples. Meanwhile, because field conditions are changeable and the robot structure is complex, fault types that were not anticipated during model training can be encountered during actual operation of the assembly robot, so the target domain data set contains fault states that do not occur in the source domain data set, which greatly affects the effect of transfer learning.
Disclosure of Invention
The invention aims to provide an assembly robot fault diagnosis feature transformation method based on transfer learning, which can solve the problem of insufficient samples, improve algorithm performance and improve practicability.
To achieve this aim, the transfer learning-based assembly robot fault diagnosis feature transformation method of the invention specifically comprises the following steps:
S1: extracting original characteristics: installing force and moment sensors at different positions of the assembly robot, collecting one-dimensional data, and respectively labeling the collected data with labels corresponding to normal or faults;
S2: feature transformation: for the raw data acquired in S1, extracting the mean, derivative and monotonicity in the orthogonal planes and in three-dimensional space, and extracting the harmonic amplitudes generated in the frequency domain by using the Fourier transform;
S3: feature fusion: fusing the features obtained from the feature transformation in step S2;
S4: fault diagnosis using a transfer learning network: the data subjected to feature fusion in the step S3 is divided into labeled source domain data and unlabeled target domain data, all the source domain data and part of the target domain data are used as training sets, the rest of the target domain data are used as verification sets, and the similarity probability of the target domain sample relative to the source domain sample is obtained through transfer learning: when the probability is lower than a set threshold value, classifying the sample of the target domain as an unknown sample; when the probability is higher than a set threshold value, performing fault classification by using a fault classification algorithm trained by the source domain sample;
S5: judging whether the accuracy and the loss of the transfer learning algorithm reach the preset precision.
As a further improvement of the present invention, S1 comprises the steps of:
Selecting a time window in which the assembly robot may fail for data acquisition and dividing the window into d time periods; acquiring the forces and moments along the X, Y and Z axis directions at each moment i with the force sensor and the moment sensor, recording the forces and the moments respectively as f ∈ R^m and t ∈ R^n with components f_x^i, f_y^i, f_z^i and t_x^i, t_y^i, t_z^i, wherein f_x^i represents the force at moment i in the x-axis direction; f_y^i the force at moment i in the y-axis direction; f_z^i the force at moment i in the z-axis direction; R^m an m-dimensional real space; t_x^i the moment at moment i in the x-axis direction; t_y^i the moment at moment i in the y-axis direction; t_z^i the moment at moment i in the z-axis direction; and R^n an n-dimensional real space. The in-window expressions are as follows: f_x, f_y and f_z represent the sums of the forces at the end of each of the d time periods in the x-, y- and z-axis directions respectively, and t_x, t_y and t_z represent the sums of the moments at the end of each of the d time periods in the x-, y- and z-axis directions respectively.
As a further improvement of the present invention, S2 comprises the steps of:
2.1: expanding the force vector f and the moment vector t in the orthogonal planes and in three-dimensional space, and combining them into a vector p whose components are: the two forces along x and y transformed into one force in the xy plane; the two forces along x and z transformed into one force in the xz plane; the two forces along y and z transformed into one force in the yz plane; the three forces along x, y and z transformed into one force in xyz three-dimensional space; the two moments along x and y transformed into one moment in the xy plane; the two moments along x and z transformed into one moment in the xz plane; the two moments along y and z transformed into one moment in the yz plane; and the three moments along x, y and z transformed into one moment in xyz three-dimensional space;
2.2: calculating the mean value of the force and the moment in each window as A(p, m, n) = (1/(n − m + 1)) · Σ_{i=m..n} p_i, wherein A(p, m, n) represents the mean value of the vector p over the (m, n) time period and p_i represents the vector p at the i-th moment;
2.3: calculating the summary feature vector S(p) of the vector p in each window through three general indexes, namely the mean value, the derivative and the monotonicity (the summary feature vector is the vector covering the profile of the vector p at the starting moment, the middle moment and the ending moment), wherein D(p, m, n) and M(p) are the derivative and the monotonicity of the vector p over (m, n) respectively, mon(p, i) reflects the monotonicity of the vector p over the interval (i, i+1), and trend(p) is the trend function of the vector p;
2.4: using the Fourier transform, extracting the harmonic amplitudes F(p) generated in the frequency domain by the in-window vector p, based on the discrete Fourier transform dft((p, p_d), v) = Σ_{r=0..N−1} p_r · e^(−j·2π·v·r/N), wherein dft((p, p_d), v) denotes the discrete Fourier transform of the input signal, p_d denotes the value of the vector p at moment d, N denotes the period of the Fourier transform, r denotes the index of discrete time or space, j denotes the imaginary unit, v denotes the frequency, p_r denotes the value of the vector p at the discrete time or space point r, and e denotes the natural constant.
As a further improvement of the present invention, S3 includes the steps of:
fusing p, S(p) and F(p) to obtain a feature fusion vector B(p), which collects the in-window vector p together with its summary feature vector S(p) and its harmonic amplitudes F(p).
As a further improvement of the present invention, S4 includes the steps of:
B(p) is divided, according to a data standardization processing rule, into labeled source domain data B_s(p) and unlabeled target domain data B_t(p); B_s(p) and part of B_t(p) are used as the training set to train the DANN (domain-adversarial neural network), and the remaining B_t(p) is used as test data. The goal of the network training is to separate the unknown fault types in the target domain and classify them as a separate class, and to classify the remaining target domain samples according to the source domain.
As a further improvement of the invention, the DANN training process is as follows:
S4.1: the feature extractor G_fn, whose mapping function is built from residual blocks and non-local connection networks, extracts the deep abstract features of the input data; the formulation involves the added residual-block mapping function, the network parameters, and the m-th sample of the extracted deep abstract features taken from the source domain, from the target domain, and from the union of the source and target domains respectively;
S4.2: taking the source domain samples as the research object, a fault classifier is built to acquire the source domain sample fault label probability matrix, and a loss function of the fault classifier is constructed; the source domain sample fault probability is calculated by optimizing this loss function, which combines a cross entropy loss over the fault labels of the source domain input features with weights on the predicted probability and on the label distribution respectively;
S4.3: taking the target domain samples and the union of the source and target domain samples as the research object, a domain classifier is built to obtain the similarity probability of the target domain relative to the source domain, where 0 represents completely different and 1 represents completely the same; the optimization target of the domain classifier is established from this probability using a two-class cross entropy loss function, the real domain labels corresponding to the input features, and the total number of samples in the union feature set;
taking the similarity probability as input, a threshold obtained through experiments is set: if the similarity probability is smaller than the set threshold, the sample is considered unknown-set data, removed from the target domain and marked as an unknown fault; if it is larger than the set threshold, the fault is considered to exist in the source domain and the fault classifier obtained in step S4.2 is used for fault diagnosis, yielding the fault label probability matrix of the m-th sample in the union of the source domain and the target domain.
As a further improvement of the present invention, S5 includes the steps of:
if the preset precision is reached, the training process ends; if the preset precision is not reached, the probability threshold in the algorithm is adjusted continuously until the training index reaches the preset precision.
As a further improvement of the present invention, the transfer learning network in S4 includes:
a feature extractor G_fn, a fault classifier and a domain classifier;
the feature extractor G_fn consists of 6 residual blocks and 2 non-local connection networks, the numbers of channels of the residual blocks are 64, 128 and 256 in sequence, the convolution kernel size is 3×1, and one non-local connection network is added after every two residual blocks;
the fault classifier consists of 1 input layer, 6 hidden layers and 1 softmax output layer, the fault probability prediction matrix of the source domain features is obtained through the output layer, and the hidden layers are composed of 3 fully connected layers, 3 activation layers and 1 pooling layer;
the domain classifier consists of 2 fully connected layers, 2 activation layers, 1 pooling layer and 1 sigmoid output layer, and the target domain sample similarity probability is obtained through the output layer.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment.
Fig. 2 is a schematic diagram of a feature extractor architecture.
Fig. 3 is a schematic diagram of a fault classifier structure.
Fig. 4 is a schematic diagram of a domain classifier structure.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be noted that, directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., are based on directions or positional relationships shown in the drawings, are merely for convenience of description and simplification of description, and do not indicate or imply that the apparatus or element to be referred to must have a specific direction, be constructed and operated in the specific direction, and thus should not be construed as limiting the present invention; the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance; furthermore, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
The fault diagnosis feature transformation method of the assembly robot based on transfer learning comprises the following specific processes:
1: extracting original features
A time window in which the assembly robot may fail is selected for data acquisition, and the time window is divided into d time periods. The force sensor and the moment sensor are used to collect the forces and moments in the X, Y and Z axis directions at each moment i, recorded respectively as f ∈ R^m and t ∈ R^n with components f_x^i, f_y^i, f_z^i and t_x^i, t_y^i, t_z^i, wherein f_x^i represents the force at moment i in the x-axis direction; f_y^i the force at moment i in the y-axis direction; f_z^i the force at moment i in the z-axis direction; R^m an m-dimensional real space; t_x^i the moment at moment i in the x-axis direction; t_y^i the moment at moment i in the y-axis direction; t_z^i the moment at moment i in the z-axis direction; and R^n an n-dimensional real space. The in-window expressions are as follows: f_x, f_y and f_z represent the sums of the forces at the end of each of the d time periods in the x-, y- and z-axis directions respectively, and t_x, t_y and t_z represent the sums of the moments at the end of each of the d time periods in the x-, y- and z-axis directions respectively.
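For illustration, the windowing step can be sketched in NumPy as follows. The (T, 6) array layout of [fx, fy, fz, tx, ty, tz] samples, the function name and the per-period summation are assumptions made for this sketch, not details fixed by the patent.

```python
import numpy as np

def window_features(raw, d):
    """Split one fault time window into d periods and accumulate the force
    and moment components of each period.

    raw : array of shape (T, 6) with columns [fx, fy, fz, tx, ty, tz]
          (this column layout is an assumption of the sketch).
    """
    periods = np.array_split(raw, d, axis=0)                       # d time periods
    per_period = np.stack([seg.sum(axis=0) for seg in periods])    # (d, 6)
    return {"f": per_period[:, :3], "t": per_period[:, 3:]}

# Example: a 2 s recording at 1 kHz split into d = 10 periods
raw = np.random.randn(2000, 6)
w = window_features(raw, d=10)
print(w["f"].shape, w["t"].shape)   # (10, 3) (10, 3)
```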
2: Feature transformation
Abstract feature vectors and harmonic amplitudes are obtained from the raw sensor data through feature transformation, with the following specific steps:
2.1: The force vector f and the moment vector t are expanded in the orthogonal planes and in three-dimensional space and combined into a vector p, whose components are: the two forces along x and y transformed into one force in the xy plane; the two forces along x and z transformed into one force in the xz plane; the two forces along y and z transformed into one force in the yz plane; the three forces along x, y and z transformed into one force in xyz three-dimensional space; the two moments along x and y transformed into one moment in the xy plane; the two moments along x and z transformed into one moment in the xz plane; the two moments along y and z transformed into one moment in the yz plane; and the three moments along x, y and z transformed into one moment in xyz three-dimensional space.
2.2: The mean value of the force and the moment in each window is calculated as A(p, m, n) = (1/(n − m + 1)) · Σ_{i=m..n} p_i, where A(p, m, n) represents the mean value of the vector p over the (m, n) time period and p_i represents the vector p at the i-th moment.
2.3: The abstract feature vector S(p) of the vector p in each window is calculated through three general indexes, namely the mean value, the derivative and the monotonicity, where D(p, m, n) and M(p) are the derivative and the monotonicity of the vector p over (m, n) respectively, mon(p, i) reflects the monotonicity of the vector p over the interval (i, i+1), and trend(p) is the trend function of the vector p.
2.4: Using the Fourier transform, the harmonic amplitudes F(p) generated in the frequency domain by the in-window vector p are extracted, based on the discrete Fourier transform dft((p, p_d), v) = Σ_{r=0..N−1} p_r · e^(−j·2π·v·r/N), where dft((p, p_d), v) denotes the discrete Fourier transform of the input signal, p_d denotes the value of the vector p at moment d, N denotes the period of the Fourier transform, r denotes the index of discrete time or space, j denotes the imaginary unit, v denotes the frequency, p_r denotes the value of the vector p at the discrete time or space point r, and e denotes the natural constant.
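A minimal NumPy sketch of this feature transformation is given below. The planar and three-dimensional combinations are computed here with the Euclidean norm, the derivative as the mean first difference and the monotonicity as the mean sign of the first differences; these concrete formulas, like the function and variable names, are illustrative assumptions rather than the patent's exact expressions.

```python
import numpy as np

def transform_window(f, t):
    """Feature transformation for one window.

    f, t : arrays of shape (d, 3) holding the per-period force and moment
           components. Euclidean-norm combinations and the monotonicity
           proxy below are illustrative assumptions.
    """
    def combine(v):                            # xy, xz, yz planes and xyz space
        return np.stack([np.hypot(v[:, 0], v[:, 1]),
                         np.hypot(v[:, 0], v[:, 2]),
                         np.hypot(v[:, 1], v[:, 2]),
                         np.linalg.norm(v, axis=1)], axis=1)

    p = np.concatenate([combine(f), combine(t)], axis=1)    # (d, 8)

    mean = p.mean(axis=0)                                   # A(p, 1, d)
    deriv = np.diff(p, axis=0).mean(axis=0)                 # average derivative
    mono = np.sign(np.diff(p, axis=0)).mean(axis=0)         # monotonicity proxy
    S = np.concatenate([mean, deriv, mono])                 # abstract features S(p)

    F = np.abs(np.fft.rfft(p, axis=0))[1:].ravel()          # harmonic amplitudes F(p)
    return {"p": p.ravel(), "S": S, "F": F}
```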
3: Feature fusion
p, S(p) and F(p) are fused to obtain the feature fusion vector B(p), which collects the in-window vector p together with its abstract feature vector S(p) and its harmonic amplitudes F(p). By the expressions in step 1, B(p) can equivalently be written out in terms of all in-window force and moment components together with S(p) and F(p).
The fusion vector B(p) therefore has more features than the original feature p.
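Continuing the sketches above, feature fusion can be illustrated as a simple concatenation of p, S(p) and F(p); treating the fusion as plain concatenation is an assumption of this sketch.

```python
import numpy as np

def fuse(features):
    """Feature fusion: concatenate the in-window vector p, the abstract
    feature vector S(p) and the harmonic amplitudes F(p) into B(p)."""
    return np.concatenate([features["p"], features["S"], features["F"]])

# Reusing the earlier sketches:
# B = fuse(transform_window(w["f"], w["t"]))   # one fused sample B(p) per window
```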
4: Fault diagnosis is performed using an improved transfer learning network.
B(p) is divided, according to a data standardization processing rule, into labeled source domain data B_s(p) and unlabeled target domain data B_t(p). B_s(p) and 60% of B_t(p) are used as the training set to train the DANN (domain-adversarial neural network), and the remaining 40% of B_t(p) is used as test data. The goals of the network training are: to separate the unknown fault types in the target domain and classify them as a separate class, and to classify the remaining target domain samples according to the source domain. The basic training flow is as follows:
4.1: The feature extractor G_fn, whose mapping function is built from residual blocks and non-local connection networks, extracts the deep abstract features of the input data; the formulation involves the added residual-block mapping function, the network parameters, and the m-th sample of the extracted deep abstract features taken from the source domain, from the target domain, and from the union of the source and target domains respectively.
4.2: Taking the source domain samples as the research object, a fault classifier is built to acquire the source domain sample fault label probability matrix, and a loss function of the fault classifier is constructed; the source domain sample fault probability is calculated by optimizing this loss function, which combines a cross entropy loss over the fault labels of the source domain input features with weights on the predicted probability and on the label distribution respectively.
4.3: Taking the target domain samples and the union of the source and target domain samples as the research object, a domain classifier is built to obtain the similarity probability of the target domain relative to the source domain, where 0 represents completely different and 1 represents completely the same; the optimization target of the domain classifier is established from this probability using a two-class cross entropy loss function, the real domain labels corresponding to the input features, and the total number of samples in the union feature set.
Taking the similarity probability as input, a threshold obtained through experiments is set: if the similarity probability is smaller than the set threshold, the sample is considered unknown-set data, removed from the target domain and marked as an unknown fault; if it is larger than the set threshold, the fault is considered to exist in the source domain and the fault classifier from step 4.2 is used for fault diagnosis, yielding the fault label probability matrix of the m-th sample in the union of the source domain and the target domain.
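The open-set decision rule of step 4.3 can be sketched as follows in PyTorch. The module names (feat, fault_clf, domain_clf), the default threshold value and the use of -1 as the unknown-fault label are assumptions for illustration only.

```python
import torch

def diagnose(x_target, feat, fault_clf, domain_clf, p0=0.5):
    """Open-set decision rule: target samples whose domain-similarity
    probability is below the threshold p0 are marked as unknown faults (-1);
    the rest are classified by the source-trained fault classifier."""
    with torch.no_grad():
        z = feat(x_target)                     # deep abstract features
        sim = domain_clf(z).squeeze(-1)        # similarity probability in [0, 1]
        labels = fault_clf(z).argmax(dim=1)    # source-domain fault labels
        labels[sim < p0] = -1                  # -1 marks the unknown-fault class
    return labels, sim
```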
5: Judging whether the accuracy and loss of the transfer learning algorithm meet the specified requirements:
If the preset precision is reached, the training process ends; if the preset precision is not reached, the probability threshold in the algorithm continues to be fine-tuned until the training index reaches the preset precision.
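One possible realization of this step is a simple sweep over candidate thresholds, keeping the first one whose verification metrics reach the preset precision; the helper below is a sketch under that assumption, and candidates, evaluate and target_acc are hypothetical names.

```python
def tune_threshold(candidates, evaluate, target_acc=0.95):
    """Sweep candidate probability thresholds and return the first one whose
    verification-set accuracy reaches the preset precision; otherwise return
    the best threshold seen."""
    best = None
    for p0 in candidates:
        acc, _ = evaluate(p0)                  # accuracy/loss on the verification set
        if best is None or acc > best[1]:
            best = (p0, acc)
        if acc >= target_acc:
            return p0
    return best[0]

# e.g. tune_threshold([0.3, 0.4, 0.5, 0.6, 0.7], evaluate=my_validation_fn)
```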
Example 2
The fault diagnosis method for the assembly robot provided by the invention comprises three stages: data acquisition, feature transformation and transfer learning. First, signals over a specific period are acquired through the force and moment sensors mounted on the assembly robot to obtain the original signal. The original signal then undergoes feature transformation to obtain a fused signal which, compared with the original signal, contains more hidden high-dimensional and frequency domain features. Finally, the fused signals are divided into a source domain and a target domain, and a transfer learning algorithm is trained for fault diagnosis of the assembly robot.
The specific principle is described as follows:
3.1: Data acquisition principle: force and moment sensors are installed at different positions of the assembly robot to collect one-dimensional data, and the collected data are labeled with the corresponding normal or fault labels.
3.2: Feature transformation principle: for the raw signal acquired in 3.1, the mean, derivative and monotonicity are extracted in the orthogonal planes and in three-dimensional space, and the harmonic amplitudes generated in the frequency domain are extracted using the Fourier transform. Through these operations, the data features of the samples are expanded.
3.3: Improved transfer learning algorithm principle: the data expanded in 3.2 are divided into labeled source domain data and unlabeled target domain data; all source domain data and 40% of the target domain data are used as the training set, and the remaining 60% of the target domain data are used as the verification set. The purpose of the algorithm is to obtain, through transfer learning, the similarity probability of each target domain sample relative to the source domain samples: when the probability is lower than the set threshold, the target domain sample is classified as an unknown sample; when the probability is higher than the set threshold, fault classification is performed with the fault classification algorithm trained on the source domain samples, thereby diagnosing unknown faults in actual factory production.
The designed transfer learning network comprises three parts: a feature extractor G_fn, a fault classifier and a domain classifier.
The feature extractor is used to further extract deep abstract features of the fused signal. It consists of six residual blocks and two non-local connection networks; the numbers of channels of the residual blocks are 64, 128 and 256 in sequence, the convolution kernel size is 3×1, and one non-local connection network is added after every two residual blocks. The structure is shown in fig. 2.
The function of the fault classifier is to obtain an algorithm model with fault type classification capability using the labeled source domain samples. The classifier consists of an input layer, 6 hidden layers and 1 softmax output layer, and the fault probability prediction matrix of the source domain features is obtained through the output layer, where the hidden layers are composed of 3 fully connected layers, 3 activation layers and 1 pooling layer. The structure is shown in fig. 3.
The domain classifier is used to compare the target domain samples with the source domain samples, generate a quantifiable similarity probability, and classify each target domain sample as a known fault in the source domain or an unknown fault according to whether the probability reaches the set threshold. The classifier consists of 2 fully connected layers, 2 activation layers, 1 pooling layer and 1 sigmoid output layer, and the target domain sample similarity probability is obtained through the output layer. The structure is shown in fig. 4.
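A compact PyTorch sketch of the three components is given below. The pairing of channels (64, 64, 128, 128, 256, 256), the placement of the two non-local blocks, the simplified layer counts inside the classifiers and the treatment of the fused vector B(p) as a single-channel 1-D signal are assumptions made for this sketch and do not reproduce the patent's exact layer configuration.

```python
import torch
import torch.nn as nn

class NonLocalBlock1d(nn.Module):
    """Simplified non-local (self-attention) block over a 1-D feature map."""
    def __init__(self, ch):
        super().__init__()
        self.theta, self.phi, self.g = (nn.Conv1d(ch, ch // 2, 1) for _ in range(3))
        self.out = nn.Conv1d(ch // 2, ch, 1)

    def forward(self, x):                                    # x: (batch, ch, length)
        q, k, v = self.theta(x), self.phi(x), self.g(x)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # (batch, L, L)
        return x + self.out(v @ attn.transpose(1, 2))        # residual connection

class ResBlock1d(nn.Module):
    """Residual block with 3x1 convolutions."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(c_in, c_out, 3, padding=1), nn.BatchNorm1d(c_out), nn.ReLU(),
            nn.Conv1d(c_out, c_out, 3, padding=1), nn.BatchNorm1d(c_out))
        self.skip = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class FeatureExtractor(nn.Module):
    """Six residual blocks with a non-local block after each of the first two pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            ResBlock1d(1, 64), ResBlock1d(64, 64), NonLocalBlock1d(64),
            ResBlock1d(64, 128), ResBlock1d(128, 128), NonLocalBlock1d(128),
            ResBlock1d(128, 256), ResBlock1d(256, 256),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())

    def forward(self, x):                      # x: (batch, 1, len(B(p)))
        return self.net(x)                     # -> (batch, 256)

class FaultClassifier(nn.Module):
    """Predicts source-domain fault classes from the deep features."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes))          # softmax is applied inside the loss

    def forward(self, z):
        return self.net(z)

class DomainClassifier(nn.Module):
    """Outputs the similarity probability of a sample relative to the source domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)
```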
The feature transformation algorithm for assembly robot fault diagnosis is of great significance for solving the problem of insufficient samples, improving algorithm performance, improving practicability and facilitating popularization and application, and it provides a new method for research and practice in the field of fault diagnosis.
1. Solves the problem of insufficient sample: in the field of fault diagnosis, it is often difficult to obtain a sufficient number of fault samples. The acquired original data is transformed and expanded through the feature conversion algorithm, so that the diversity and the number of samples are increased, the problem of insufficient samples is solved, and the accuracy and the robustness of the fault diagnosis algorithm can be improved.
2. Improving the performance of the fault diagnosis algorithm: through the feature conversion algorithm, the original data can be transformed and expanded to obtain more feature representations. These new features can capture more information about the robot failure, thereby improving the performance of the failure diagnosis algorithm. Through reasonable feature conversion of the data, the feature and mode of the fault can be better revealed, so that the algorithm can more accurately conduct fault identification and diagnosis.
3. The practicability of fault diagnosis is improved: the feature conversion algorithm can perform dimension lifting processing on the original data, so that the expression of the data meets the requirements of the machine learning algorithm. Therefore, the complexity of data processing can be simplified, and the practicability of the fault diagnosis algorithm is improved. The original data is converted into the characteristic representation with more characterization, so that the influence of noise and redundant information can be reduced, and the algorithm is more robust and efficient.
Aiming at the actual running condition of the assembly robot, the invention provides a fault diagnosis feature transformation method based on improved transfer learning, which has the following beneficial effects:
1: Considering that under actual factory operating conditions a large number of samples generally cannot be collected, a feature transformation algorithm is provided that converts the one-dimensional original data into two-dimensional orthogonal planes, three-dimensional space and the frequency domain, expanding the feature components of the original data and providing a guarantee for running a data-driven fault diagnosis algorithm.
2: Considering the actual operating conditions of factories and the complexity of the assembly robot structure, a transfer learning-based method is provided that effectively classifies faults that do not occur in the training set. The algorithm uses the known characteristics of the source domain and quantifies the similarity between target domain samples and source domain samples by constructing a similarity probability, thereby achieving classification.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several equivalent substitutions and obvious modifications can be made without departing from the spirit of the invention, and the same should be considered to be within the scope of the invention.

Claims (8)

1. The fault diagnosis feature transformation method of the assembly robot based on transfer learning is characterized by comprising the following steps of:
S1: extracting original characteristics: installing force and moment sensors at different positions of the assembly robot, collecting one-dimensional data, and respectively labeling the collected data with labels corresponding to normal or faults;
S2: feature transformation: for the raw data acquired in S1, extracting the mean, derivative and monotonicity in the orthogonal planes and in three-dimensional space, and extracting the harmonic amplitudes generated in the frequency domain by using the Fourier transform;
S3: feature fusion: fusing the features obtained from the feature transformation in step S2;
S4: fault diagnosis using a transfer learning network: the data subjected to feature fusion in the step S3 is divided into labeled source domain data and unlabeled target domain data, all the source domain data and part of the target domain data are used as training sets, the rest of the target domain data are used as verification sets, and the similarity probability of the target domain sample relative to the source domain sample is obtained through transfer learning: when the probability is lower than a set threshold value, classifying the sample of the target domain as an unknown sample; when the probability is higher than a set threshold value, performing fault classification by using a fault classification algorithm trained by the source domain sample;
S5: judging whether the accuracy and the loss of the transfer learning algorithm reach the preset precision.
2. The transfer learning-based assembly robot fault diagnosis feature transformation method according to claim 1, wherein S1 comprises the steps of:
Selecting a time window in which the assembly robot may fail for data acquisition and dividing the window into d time periods; acquiring the forces and moments along the X, Y and Z axis directions at each moment i with the force sensor and the moment sensor, recording the forces and the moments respectively as f ∈ R^m and t ∈ R^n with components f_x^i, f_y^i, f_z^i and t_x^i, t_y^i, t_z^i, wherein f_x^i represents the force at moment i in the x-axis direction; f_y^i the force at moment i in the y-axis direction; f_z^i the force at moment i in the z-axis direction; R^m an m-dimensional real space; t_x^i the moment at moment i in the x-axis direction; t_y^i the moment at moment i in the y-axis direction; t_z^i the moment at moment i in the z-axis direction; and R^n an n-dimensional real space; the in-window expressions are as follows: f_x, f_y and f_z represent the sums of the forces at the end of each of the d time periods in the x-, y- and z-axis directions respectively, and t_x, t_y and t_z represent the sums of the moments at the end of each of the d time periods in the x-, y- and z-axis directions respectively.
3. The transfer learning-based assembly robot fault diagnosis feature transformation method according to claim 1, wherein S2 comprises the steps of:
2.1: expanding the force vector f and the moment vector t in the orthogonal planes and in three-dimensional space, and combining them into a vector p whose components are: the two forces along x and y transformed into one force in the xy plane; the two forces along x and z transformed into one force in the xz plane; the two forces along y and z transformed into one force in the yz plane; the three forces along x, y and z transformed into one force in xyz three-dimensional space; the two moments along x and y transformed into one moment in the xy plane; the two moments along x and z transformed into one moment in the xz plane; the two moments along y and z transformed into one moment in the yz plane; and the three moments along x, y and z transformed into one moment in xyz three-dimensional space;
2.2: calculating the mean value of the force and the moment in each window as A(p, m, n) = (1/(n − m + 1)) · Σ_{i=m..n} p_i, wherein A(p, m, n) represents the mean value of the vector p over the (m, n) time period and p_i represents the vector p at the i-th moment;
2.3: calculating the summary feature vector S(p) of the vector p in each window through three general indexes, namely the mean value, the derivative and the monotonicity, wherein D(p, m, n) and M(p) are the derivative and the monotonicity of the vector p over (m, n) respectively, mon(p, i) reflects the monotonicity of the vector p over the interval (i, i+1), and trend(p) is the trend function of the vector p;
2.4: using the Fourier transform, extracting the harmonic amplitudes F(p) generated in the frequency domain by the in-window vector p, based on the discrete Fourier transform dft((p, p_d), v) = Σ_{r=0..N−1} p_r · e^(−j·2π·v·r/N), wherein dft((p, p_d), v) denotes the discrete Fourier transform of the input signal, p_d denotes the value of the vector p at moment d, N denotes the period of the Fourier transform, r denotes the index of discrete time or space, j denotes the imaginary unit, v denotes the frequency, p_r denotes the value of the vector p at the discrete time or space point r, and e denotes the natural constant.
4. The transfer learning-based assembly robot fault diagnosis feature transformation method according to claim 3, wherein S3 comprises the steps of:
fusing p, S(p) and F(p) to obtain a feature fusion vector B(p), which collects the in-window vector p together with its summary feature vector S(p) and its harmonic amplitudes F(p).
5. The transfer learning-based assembly robot fault diagnosis feature transformation method according to claim 4, wherein S4 comprises the steps of:
B(p) is divided, according to a data standardization processing rule, into labeled source domain data B_s(p) and unlabeled target domain data B_t(p); B_s(p) and part of B_t(p) are used as the training set to train the DANN (domain-adversarial neural network), and the remaining B_t(p) is used as test data; the goal of the network training is to separate the unknown fault types in the target domain and classify them as a separate class, and to classify the remaining target domain samples according to the source domain.
6. The transfer learning-based assembly robot fault diagnosis feature transformation method of claim 5, wherein the DANN training flow is as follows:
S4.1: the feature extractor G_fn, whose mapping function is built from residual blocks and non-local connection networks, extracts the deep abstract features of the input data; the formulation involves the added residual-block mapping function, the network parameters, and the m-th sample of the extracted deep abstract features taken from the source domain, from the target domain, and from the union of the source and target domains respectively;
S4.2: taking the source domain samples as the research object, a fault classifier is built to acquire the source domain sample fault label probability matrix, and a loss function of the fault classifier is constructed; the source domain sample fault probability is calculated by optimizing this loss function, which combines a cross entropy loss over the fault labels of the source domain input features with weights on the predicted probability and on the label distribution respectively;
S4.3: taking the target domain samples and the union of the source and target domain samples as the research object, a domain classifier is built to obtain the similarity probability of the target domain relative to the source domain, where 0 represents completely different and 1 represents completely the same; the optimization target of the domain classifier is established from this probability using a two-class cross entropy loss function, the real domain labels corresponding to the input features, and the total number of samples in the union feature set;
taking the similarity probability as input, a threshold obtained through experiments is set: if the similarity probability is smaller than the set threshold, the sample is considered unknown-set data, removed from the target domain and marked as an unknown fault; if it is larger than the set threshold, the fault is considered to exist in the source domain and the fault classifier obtained in step S4.2 is used for fault diagnosis, yielding the fault label probability matrix of the m-th sample in the union of the source domain and the target domain.
7. The transfer learning-based assembly robot fault diagnosis feature transformation method according to claim 6, wherein S5 comprises the steps of:
if the preset precision is reached, the training process ends; if the preset precision is not reached, the probability threshold in the algorithm is adjusted continuously until the training index reaches the preset precision.
8. The transfer learning-based assembly robot fault diagnosis feature transformation method according to claim 1, wherein the transfer learning network in S4 includes:
a feature extractor G_fn, a fault classifier and a domain classifier;
the feature extractor G_fn consists of 6 residual blocks and 2 non-local connection networks, the numbers of channels of the residual blocks are 64, 128 and 256 in sequence, the convolution kernel size is 3×1, and one non-local connection network is added after every two residual blocks;
the fault classifier consists of 1 input layer, 6 hidden layers and 1 softmax output layer, the fault probability prediction matrix of the source domain features is obtained through the output layer, and the hidden layers are composed of 3 fully connected layers, 3 activation layers and 1 pooling layer;
the domain classifier consists of 2 fully connected layers, 2 activation layers, 1 pooling layer and 1 sigmoid output layer, and the target domain sample similarity probability is obtained through the output layer.
CN202410613776.4A 2024-05-17 2024-05-17 Assembly robot fault diagnosis feature transformation method based on transfer learning Active CN118194165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410613776.4A CN118194165B (en) 2024-05-17 2024-05-17 Assembly robot fault diagnosis feature transformation method based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410613776.4A CN118194165B (en) 2024-05-17 2024-05-17 Assembly robot fault diagnosis feature transformation method based on transfer learning

Publications (2)

Publication Number Publication Date
CN118194165A true CN118194165A (en) 2024-06-14
CN118194165B CN118194165B (en) 2024-08-09

Family

ID=91402106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410613776.4A Active CN118194165B (en) 2024-05-17 2024-05-17 Assembly robot fault diagnosis feature transformation method based on transfer learning

Country Status (1)

Country Link
CN (1) CN118194165B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469082A (en) * 2021-07-08 2021-10-01 南京航空航天大学 Satellite actuator fault detection method based on migration component analysis
US11169288B1 (en) * 2017-12-07 2021-11-09 Triad National Security, Llc Failure prediction and estimation of failure parameters
CN115600150A (en) * 2022-09-26 2023-01-13 郑州大学(Cn) Multi-mode gearbox fault diagnosis method based on deep migration learning
CN116793682A (en) * 2023-07-07 2023-09-22 武汉理工大学 Bearing fault diagnosis method based on iCORAL-MMD and anti-migration learning
CN116805051A (en) * 2023-06-21 2023-09-26 杭州电子科技大学 Double convolution dynamic domain adaptive equipment fault diagnosis method based on attention mechanism
CN117330315A (en) * 2023-12-01 2024-01-02 智能制造龙城实验室 Rotary machine fault monitoring method based on online migration learning
CN117786461A (en) * 2023-12-27 2024-03-29 三峡陆上新能源投资有限公司 Water pump fault diagnosis method, control device and storage medium thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11169288B1 (en) * 2017-12-07 2021-11-09 Triad National Security, Llc Failure prediction and estimation of failure parameters
CN113469082A (en) * 2021-07-08 2021-10-01 南京航空航天大学 Satellite actuator fault detection method based on migration component analysis
CN115600150A (en) * 2022-09-26 2023-01-13 郑州大学(Cn) Multi-mode gearbox fault diagnosis method based on deep migration learning
CN116805051A (en) * 2023-06-21 2023-09-26 杭州电子科技大学 Double convolution dynamic domain adaptive equipment fault diagnosis method based on attention mechanism
CN116793682A (en) * 2023-07-07 2023-09-22 武汉理工大学 Bearing fault diagnosis method based on iCORAL-MMD and anti-migration learning
CN117330315A (en) * 2023-12-01 2024-01-02 智能制造龙城实验室 Rotary machine fault monitoring method based on online migration learning
CN117786461A (en) * 2023-12-27 2024-03-29 三峡陆上新能源投资有限公司 Water pump fault diagnosis method, control device and storage medium thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IRANI F N et al.: "Deep transfer learning strategy in intelligent fault diagnosis of gas turbines based on the Koopman operator", Applied Energy, 31 January 2024 (2024-01-31), pages 1-19 *
YANG Yi et al.: "Fault phase selection model for transmission lines based on deep transfer learning and its transferability", Electric Power Automation Equipment, vol. 40, no. 10, 31 October 2020 (2020-10-31), pages 165-172 *

Also Published As

Publication number Publication date
CN118194165B (en) 2024-08-09

Similar Documents

Publication Publication Date Title
CN109655259B (en) Compound fault diagnosis method and device based on deep decoupling convolutional neural network
CN108614548B (en) Intelligent fault diagnosis method based on multi-mode fusion deep learning
Langarica et al. An industrial internet application for real-time fault diagnosis in industrial motors
Zhang et al. DeepHealth: A self-attention based method for instant intelligent predictive maintenance in industrial Internet of Things
CN108178037A (en) A kind of elevator faults recognition methods based on convolutional neural networks
CN112200032A (en) Attention mechanism-based high-voltage circuit breaker mechanical state online monitoring method
CN112947385B (en) Aircraft fault diagnosis method and system based on improved Transformer model
CN117784710B (en) Remote state monitoring system and method for numerical control machine tool
CN117009916A (en) Actuator fault diagnosis method based on multi-sensor information fusion and transfer learning
CN117009770A (en) Bearing fault diagnosis method based on SDP and visual transducer codes
CN116593157A (en) Complex working condition gear fault diagnosis method based on matching element learning under small sample
CN116929815A (en) Equipment working state monitoring system and method based on Internet of things
Wang et al. Auto-embedding transformer for interpretable few-shot fault diagnosis of rolling bearings
CN117034003A (en) Full life cycle self-adaptive fault diagnosis method, system, equipment and medium for aerospace major product manufacturing equipment
Zhang et al. CarNet: A dual correlation method for health perception of rotating machinery
CN116012681A (en) Method and system for diagnosing motor faults of pipeline robot based on sound vibration signal fusion
Lu et al. Rotating Machinery Fault Diagnosis Under Multiple Working Conditions via A Time Series Transformer Enhanced by Convolutional Neural Network
CN118296452A (en) Industrial equipment fault diagnosis method based on transducer model optimization
CN118194165B (en) Assembly robot fault diagnosis feature transformation method based on transfer learning
CN113469013A (en) Motor fault prediction method and system based on transfer learning and time sequence
CN116610940A (en) Equipment fault diagnosis system based on wavelet transformation and deep neural network
Zhu et al. Bidirectional Current WP and CBAR Neural Network Model Based Bearing Fault Diagnosis
KR20070049460A (en) On-line fault detecting method for 3-phase induction motor which is based on statistical pattern recognition technique
Yang et al. A review of fault diagnosis based on Siamese neural networks
Wang et al. Adversarial based unsupervised domain adaptation for bearing fault diagnosis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant