CN114161228A - Tool wear prediction method, device, equipment and storage medium - Google Patents

Tool wear prediction method, device, equipment and storage medium Download PDF

Info

Publication number
CN114161228A
CN114161228A (application CN202210131513.0A)
Authority
CN
China
Prior art keywords
tool
independent
data
model
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210131513.0A
Other languages
Chinese (zh)
Other versions
CN114161228B (en)
Inventor
陆绍飞
朱雅君
杨贯中
李军义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202210131513.0A priority Critical patent/CN114161228B/en
Publication of CN114161228A publication Critical patent/CN114161228A/en
Application granted granted Critical
Publication of CN114161228B publication Critical patent/CN114161228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/09Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/09Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool
    • B23Q17/0904Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool before or after machining
    • B23Q17/0914Arrangements for measuring or adjusting cutting-tool geometry machine tools
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/09Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool
    • B23Q17/0952Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool during machining
    • B23Q17/0957Detection of tool breakage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/04Ageing analysis or optimisation against ageing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Numerical Control (AREA)

Abstract

The application relates to a tool wear prediction method, device, equipment and storage medium. The method collects original tool data, introduces a Hadamard product to improve an attention gate, fuses the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, stacks these base models to construct a deep independent recurrent network model, combines the deep independent recurrent network model with a convolutional neural network, and constructs, from the original tool data, a tool wear prediction model for predicting the wear result of a tool. Because the Hadamard product introduced into the deep independent recurrent network model adjusts the original input, the influence of the more important input elements on the model is strengthened and the less important elements are suppressed, which improves the accuracy of the tool wear prediction.

Description

Tool wear prediction method, device, equipment and storage medium
Technical Field
The application relates to the technical field of equipment condition early warning based on computational models, and in particular to a tool wear prediction method, device, equipment and storage medium.
Background
In recent years, the rise of "Industry 4.0" and intelligent manufacturing has triggered a new round of industrial transformation worldwide; the level of industrialization has gradually become an important marker of national comprehensive strength, and Internet-enabled industrial processing technology is developing rapidly. Within this context, condition monitoring of mechanical equipment, an important component of industrial processing, is also an important means of ensuring machining quality and reducing machining cost.
Milling is a machining method that uses a milling cutter as the tool to process various parts and is common in modern manufacturing. The tool is a key part of milling-machine cutting, and its condition directly affects the quality of the machined part, damage to equipment, machining precision and efficiency, and the economic benefit of the enterprise. To reduce the losses caused by tool damage, researchers have in recent years explored different tool condition monitoring methods.
Currently, tool wear prediction falls into two broad categories: direct and indirect methods. Direct methods measure the wear of the tool directly by means of a laser beam, a microscope, or similar instruments. Indirect methods predict tool wear by collecting wear-related information such as cutting force signals, vibration signals, and acoustic emission signals. Both kinds of prediction method, however, have their own disadvantages.
In recent years, with the development of data acquisition and storage technology, traditional machine learning algorithms have struggled to learn patterns from massive data, so deep learning models are increasingly used in tool wear prediction. A deep learning model can extract features adaptively, without excessive processing of the signal data, and achieve high prediction accuracy.
At present, a great deal of research has applied deep-learning-based tool wear prediction methods, but work in this field still faces many challenges, such as the prediction accuracy of the model.
Disclosure of Invention
To solve, or at least partially solve, the problems in the related art, the application provides a tool wear prediction method, device, equipment and storage medium that can improve the accuracy with which the model predicts tool wear.
A first aspect of the present application provides a tool wear prediction method, including:
collecting original tool data;
introducing a Hadamard product to improve an attention gate, and fusing the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
stacking a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
and combining the deep independent recurrent network model with a convolutional neural network, and constructing a tool wear prediction model from the original tool data, wherein the tool wear prediction model is used for predicting the wear result of the tool.
Preferably, after collecting the original tool data, the method further includes:
sequentially removing invalid data from the original tool data, then down-sampling and normalizing, to obtain initial tool data;
introducing a Hadamard product to improve the attention vector of an attention gate, fusing the improved attention gate with an independent recurrent neural network, and constructing a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
stacking a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
and combining the deep independent recurrent network model with a convolutional neural network, and constructing a tool wear prediction model from the initial tool data, wherein the tool wear prediction model is used for predicting the wear result of the tool.
Preferably, after sequentially removing the invalid data from the original tool data, down-sampling and normalizing to obtain the initial tool data, the method further includes:
dividing the initial tool data into initial tool training data and initial tool testing data by means of K-fold cross-validation, wherein K is a positive integer greater than or equal to 5;
introducing a Hadamard product to improve an attention gate, and fusing the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other.
Preferably, introducing the Hadamard product to improve the attention gate includes:
introducing a Hadamard product to improve the attention vector of the attention gate, so that, before the attention gate is fused with the independent recurrent neural network, its attention vector is consistent with the recurrent input unit of the independent recurrent neural network.
Preferably, using the tool wear prediction model to predict the wear result of a tool includes:
extracting local features of the original tool data with the convolutional network part of the tool wear prediction model;
extracting temporal features of the original tool data with the deep independent recurrent network part of the tool wear prediction model;
and inputting the local features and the temporal features of the original tool data into the output layer part of the tool wear prediction model to obtain the tool wear result.
A second aspect of the present application provides a tool wear prediction device including:
the acquisition module is used for acquiring original tool data;
the fusion module is used for introducing a Hadamard product to improve an attention gate and fusing the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
the stacking module is used for stacking a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
and the construction module is used for combining the deep independent recurrent network model with a convolutional neural network and constructing a tool wear prediction model from the original tool data, wherein the tool wear prediction model is used for predicting the wear result of the tool.
Preferably, the device further comprises a preprocessing module;
the preprocessing module is used for sequentially removing invalid data from the original tool data, then down-sampling and normalizing, to obtain initial tool data.
Preferably, the fusion module introducing a Hadamard product to improve the attention gate includes:
introducing a Hadamard product to improve the attention vector of the attention gate, so that, before the attention gate is fused with the independent recurrent neural network, its attention vector is consistent with the recurrent input unit of the independent recurrent neural network.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the tool wear prediction method as described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform a tool wear prediction method as described above.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the technical scheme, original cutter data are collected, Hadamard products are introduced to improve an attention gate, the improved attention gate is fused with an independent circulation neural network, a plurality of independent circulation network basic models with fused attention mechanisms are obtained, a plurality of independent circulation network basic models are stacked to construct a deep independent circulation network model, the deep independent circulation network model is combined with a convolution neural network, and a cutter wear prediction model for predicting the wear result of a cutter is constructed according to the original cutter data. As the Hadamard product is introduced into the depth independent circulation network model in the tool wear prediction model for improvement, the original input can be adjusted, the influence of the input elements with stronger importance on the model is enhanced, and the elements with weaker importance are inhibited, so that the prediction accuracy of the model on the tool wear result is improved.
In addition, when the tool wear prediction model of the technical scheme of the application predicts the wear result of the tool, the convolution network model part is used for extracting the local features of the original tool data, and the depth independent cycle network model part is used for extracting the time sequence features of the original tool data, so that data information loss can be avoided, and the subsequent model can better predict the tool wear result by using the output layer part.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
FIG. 1 is a schematic flow chart diagram illustrating a tool wear prediction method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of a tool wear prediction method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of the architecture of the independent recurrent network base model with a fused attention mechanism (AIndRNN) shown in the present application;
FIG. 4 is a schematic diagram of a tool wear prediction model shown in the present application;
FIG. 5 is a schematic structural diagram of a tool wear prediction device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a tool wear prediction device according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the related art, a great deal of research has applied deep-learning-based tool wear prediction methods, but work in this field still faces many challenges, such as the prediction accuracy of the model.
In view of the above problems, the application provides a tool wear prediction method, device, equipment and storage medium that can improve the accuracy with which a model predicts tool wear. To facilitate understanding, the technical solutions of the application are described in detail below with reference to the accompanying drawings.
Fig. 1 shows a schematic flow chart of a tool wear prediction method according to an embodiment of the present application.
Referring to fig. 1, a method for predicting tool wear includes the following steps:
and step S11, collecting original tool data.
And collecting milling force signals, vibration acceleration signals, acoustic emission signals and tool wear values along an x axis, a y axis and a z axis, and sorting to obtain original tool data. The original cutter data records the full life cycle information of each milling cutter, including the cutter feeding and retracting processes, and is positioned at the head end and the tail end of the original cutter data.
And S12, introducing Hadamard products to improve the attention gate, and fusing the improved attention gate with the independent circulation neural network to obtain a plurality of independent circulation network basic models with fused attention mechanisms, wherein each independent circulation network basic model comprises a plurality of independent neural units connected with each other.
1) Independent recurrent neural network (IndRNN)
IndRNN is a recurrent neural network in which the neurons within each layer are independent of one another. This independence allows IndRNN to regulate how gradients propagate back through time, effectively alleviating vanishing and exploding gradients; compared with LSTM, IndRNN also handles sequences with longer time steps better. Its unit is computed as:
h_t = σ(W x_t + u ⊙ h_{t-1} + b)    (1)
where u denotes the recurrent weight, h_{t-1} the output at the previous time step, ⊙ the Hadamard (element-wise) product, W the input weight at the current time step, x_t the input value at time t, b the bias vector, σ the activation function, and h_t the hidden-state output at time t.
Each neuron in an IndRNN layer is independent; connections between neurons are realized by stacking two or more IndRNN layers. For a neuron in the n-th layer, the hidden state at time t can be expressed as:
h_{n,t} = σ(W_n x_{n,t} + u_n ⊙ h_{n,t-1} + b_n)    (2)
where W_n and u_n denote the input weight and the recurrent (hidden-layer) weight of the n-th layer neuron, respectively.
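For concreteness, a minimal NumPy sketch of the IndRNN update in equation (1) is given below; the ReLU activation and the illustrative dimensions are assumptions of the sketch rather than details taken from the application.

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    # eq. (1): h_t = σ(W x_t + u ⊙ h_{t-1} + b); ReLU is used for σ here (an assumption)
    return np.maximum(0.0, x_t @ W + u * h_prev + b)

rng = np.random.default_rng(0)
W = rng.standard_normal((7, 4)) * 0.1   # input weights: 7 signal channels -> 4 hidden units
u = rng.uniform(-1.0, 1.0, size=4)      # recurrent weight vector, one entry per neuron
b = np.zeros(4)
h = np.zeros(4)
for t in range(10):                     # unroll ten illustrative time steps
    x_t = rng.standard_normal(7)
    h = indrnn_step(x_t, h, W, u, b)
```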
2) Attention gate (EleAttG)
EleAttG is an element-wise attention gate based on the attention mechanism: it adjusts the original input by means of an attention vector, enhancing or suppressing each element according to its importance.
EleAttG can be embedded into a generic RNN block, giving attention to the RNN neurons, and the RNN part can be replaced by other RNN variants such as LSTM or GRU. EleAttG can be expressed as:
a_t = φ(W_{xa} x_t + W_{ha} h_{t-1} + b_a)    (3)
x̃_t = a_t ⊙ x_t    (4)
where φ denotes the sigmoid activation function, x_t the current input value, and h_{t-1} the previous hidden-state output. a_t is the attention vector; the Hadamard product of a_t and the original input x_t gives x̃_t, which has the same dimension as x_t. In the subsequent RNN unit, x_t is replaced by x̃_t.
For example, combining EleAttG with a GRU replaces the GRU input x_t with x̃_t:
r_t = φ(W_{xr} x̃_t + W_{hr} h_{t-1} + b_r)    (5)
z_t = φ(W_{xz} x̃_t + W_{hz} h_{t-1} + b_z)    (6)
h̃_t = tanh(W_{xh} x̃_t + W_{hh}(r_t ⊙ h_{t-1}) + b_h)    (7)
h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t    (8)
where r_t and z_t denote the output vectors of the GRU reset gate and update gate, respectively, and h_t denotes the hidden-state output vector. By analogy, EleAttG can be added to other RNN variants with good results; RNN layers equipped with EleAttG can replace the original RNN layers, and multiple EleAttG-RNN layers can be stacked.
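As a small illustration of this element-wise modulation, the NumPy sketch below computes the attention vector of equations (3)-(4) and applies it to an input; the weight names and sizes are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def eleattg_modulate(x_t, h_prev, W_xa, W_ha, b_a):
    # eqs. (3)-(4): attention vector from current input and previous hidden state
    a_t = sigmoid(x_t @ W_xa + h_prev @ W_ha + b_a)   # a_t has the same dimension as x_t
    return a_t * x_t                                  # x̃_t, which replaces x_t in the RNN unit

# Tiny usage with hypothetical sizes: 7-dimensional input, 4-dimensional hidden state.
rng = np.random.default_rng(1)
x_t, h_prev = rng.standard_normal(7), rng.standard_normal(4)
W_xa = rng.standard_normal((7, 7)) * 0.1
W_ha = rng.standard_normal((4, 7)) * 0.1
b_a = np.zeros(7)
x_mod = eleattg_modulate(x_t, h_prev, W_xa, W_ha, b_a)   # shape (7,)
```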
3) Constructing the attention-fused independent recurrent network base model (AIndRNN)
Referring to FIG. 3, the RNN variants LSTM and GRU extend the RNN computation element-wise without changing the basic recurrent component W h_{t-1}, so the original EleAttG attention vector a_t can modulate the input variables of such RNN variants. The recurrent input unit of IndRNN, however, replaces the matrix multiplication in this component with a Hadamard product, u ⊙ h_{t-1}, which differs to some extent from the LSTM and GRU variants, so the original EleAttG built around W h_{t-1} does not apply.
To address this, the original EleAttG is modified by introducing a Hadamard product, and the modified EleAttG is combined with IndRNN to obtain AIndRNN. AIndRNN is computed as:
a_t = φ(W_a x_t + u_a ⊙ h_{t-1} + b_a)    (9)
x̃_t = a_t ⊙ x_t    (10)
h_t = σ(W x̃_t + u ⊙ h_{t-1} + b)    (11)
where W_a and W denote weight matrices, h_{t-1} the hidden-state value at time t−1, and a_t the modified attention vector, whose dimension is the same as that of the original input x_t; ⊙ denotes the Hadamard product of matrices, and u ⊙ h_{t-1} means that the recurrent weight u is multiplied element by element with the previous hidden-state value.
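A minimal NumPy sketch of one AIndRNN update, following equations (9)-(11), might look as follows; the activation choices and the illustrative dimensions are assumptions of the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def aindrnn_step(x_t, h_prev, W_a, u_a, b_a, W, u, b):
    # The attention gate uses the Hadamard recurrence u_a ⊙ h_{t-1}, matching IndRNN's
    # recurrent input unit, which requires the input dimension to equal the unit count.
    a_t = sigmoid(x_t @ W_a + u_a * h_prev + b_a)          # eq. (9): modified attention vector
    x_tilde = a_t * x_t                                    # eq. (10): element-level adjustment
    return np.maximum(0.0, x_tilde @ W + u * h_prev + b)   # eq. (11): IndRNN update (ReLU assumed)

# Tiny usage: input dimension and unit count are both 4, as the construction requires.
rng = np.random.default_rng(2)
d = 4
W_a, u_a, b_a = rng.standard_normal((d, d)) * 0.1, rng.uniform(-1, 1, d), np.zeros(d)
W, u, b = rng.standard_normal((d, d)) * 0.1, rng.uniform(-1, 1, d), np.zeros(d)
h = aindrnn_step(rng.standard_normal(d), np.zeros(d), W_a, u_a, b_a, W, u, b)
```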
Step S13: stack a plurality of the independent recurrent network base models to construct a deep independent recurrent network model.
Because the matrix multiplication is changed into a Hadamard product, the modified attention vector a_t stays consistent in form with the IndRNN recurrent input unit u ⊙ h_{t-1}. The elements of a_t are multiplied point by point with those of the original input x_t, realizing element-level attention adjustment of x_t and yielding the adjusted input x̃_t. Since a_t and the recurrent unit are computed in the same way, the adjusted input x̃_t suits the IndRNN computation better, letting the network concentrate on learning the more important information and suppressing the influence of unnecessary information.
Therefore, by stacking a plurality of AIndRNN layers, a deep independent recurrent network model (deep AIndRNN) is obtained. The deep AIndRNN can process time-series data on its own or be combined with other networks to handle more complex sequence problems.
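The stacking of AIndRNN layers described here can be sketched in NumPy as below; the parameter shapes, initialization and two-layer configuration are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def aindrnn_layer(X, params):
    """Run one AIndRNN layer over a sequence X of shape (T, d); d must equal the unit count."""
    W_a, u_a, b_a, W, u, b = params
    h = np.zeros(W.shape[1])
    outputs = []
    for x_t in X:
        a_t = sigmoid(x_t @ W_a + u_a * h + b_a)            # modified attention gate
        h = np.maximum(0.0, (a_t * x_t) @ W + u * h + b)    # IndRNN update on the adjusted input
        outputs.append(h)
    return np.stack(outputs)

def deep_aindrnn(X, layer_params):
    """Stack several AIndRNN layers: the output sequence of one layer feeds the next."""
    for params in layer_params:
        X = aindrnn_layer(X, params)
    return X

# Hypothetical two-layer stack with 8 units and an 8-dimensional input sequence.
rng = np.random.default_rng(0)
def make_params(d):
    return (rng.standard_normal((d, d)) * 0.1, rng.uniform(-1, 1, d), np.zeros(d),
            rng.standard_normal((d, d)) * 0.1, rng.uniform(-1, 1, d), np.zeros(d))
X = rng.standard_normal((20, 8))
H = deep_aindrnn(X, [make_params(8), make_params(8)])   # output shape (20, 8)
```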
Step S14: combine the deep independent recurrent network model with the convolutional neural network, and construct a tool wear prediction model from the original tool data, where the tool wear prediction model is used to predict the wear result of the tool.
On the basis of the proposed deep AIndRNN, a tool wear prediction model, CNN-AIndRNN, is proposed to address the accuracy of tool wear prediction.
Referring to FIG. 4, the CNN-AIndRNN consists mainly of three parts. The first part is the convolutional network part, composed of a BN layer and CNN layers. The BN layer accelerates the convergence of the model and makes training more stable. A CNN can automatically extract data-related features and is currently used mainly in fields such as computer vision and speech recognition; its shared weights and local connections allow large amounts of two-dimensional data, such as image signals, to be processed and training to be completed quickly with very few parameters. Here, the convolutional layers extract local features from the original tool data, and the pooling layers reduce the dimensionality. At the same time, because the dimension of the improved attention vector a_t must be identical to that of the original input x_t, the second dimension of the input data must match the number of AIndRNN units, and the CNN adjusts the second dimension of its output to ensure that it equals the number of cells in the AIndRNN layer.
After the CNN extracts local features, the data enter the AIndRNN. With the improved attention unit added, the AIndRNN can weight the input elements according to their importance before the subsequent recurrent-unit computation. The deep AIndRNN obtained by stacking AIndRNN layers better extracts the temporal features of the tool wear process.
Finally, the data enter the output layer, which consists of fully connected layers and produces the tool wear prediction result.
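As a rough illustration of how these three parts could be wired together, the tf.keras sketch below builds a CNN-AIndRNN-style model with the layer sizes mentioned in the later experiments (32 filters doubling per CNN layer, pool sizes 5/5/4/4, two 256-unit AIndRNN layers, fully connected layers of 500 and 600 units). The kernel sizes, the sigmoid/ReLU activations inside the cell, and the 1×1 convolution used to match the feature dimension to the AIndRNN unit count are assumptions of this sketch, not details taken from the patent.

```python
import tensorflow as tf

class AIndRNNCell(tf.keras.layers.Layer):
    """Minimal attention-fused IndRNN cell, a sketch of eqs. (9)-(11)."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units
        self.output_size = units

    def build(self, input_shape):
        dim = int(input_shape[-1])
        # The Hadamard attention recurrence requires the input feature dimension
        # to equal the number of units, as discussed in the text above.
        assert dim == self.units, "input feature dimension must equal the unit count"
        self.W_a = self.add_weight(name="W_a", shape=(dim, dim))
        self.u_a = self.add_weight(name="u_a", shape=(dim,))
        self.b_a = self.add_weight(name="b_a", shape=(dim,), initializer="zeros")
        self.W = self.add_weight(name="W", shape=(dim, self.units))
        self.u = self.add_weight(name="u", shape=(self.units,))
        self.b = self.add_weight(name="b", shape=(self.units,), initializer="zeros")

    def call(self, inputs, states):
        h_prev = states[0]
        a = tf.sigmoid(tf.matmul(inputs, self.W_a) + self.u_a * h_prev + self.b_a)  # eq. (9)
        x_tilde = a * inputs                                                        # eq. (10)
        h = tf.nn.relu(tf.matmul(x_tilde, self.W) + self.u * h_prev + self.b)       # eq. (11)
        return h, [h]

def build_cnn_aindrnn(seq_len=20000, n_channels=7, rnn_units=256):
    inputs = tf.keras.Input(shape=(seq_len, n_channels))
    x = tf.keras.layers.BatchNormalization()(inputs)               # BN layer of the first part
    for filters, pool in zip((32, 64, 128, 256), (5, 5, 4, 4)):    # four CNN blocks
        x = tf.keras.layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling1D(pool_size=pool)(x)
    # Match the feature dimension to the AIndRNN unit count (assumed 1x1 projection).
    x = tf.keras.layers.Conv1D(rnn_units, kernel_size=1)(x)
    x = tf.keras.layers.RNN(AIndRNNCell(rnn_units), return_sequences=True)(x)  # AIndRNN layer 1
    x = tf.keras.layers.RNN(AIndRNNCell(rnn_units))(x)                         # AIndRNN layer 2
    x = tf.keras.layers.Dense(500, activation="relu")(x)           # output part: FC layers
    x = tf.keras.layers.Dense(600, activation="relu")(x)
    outputs = tf.keras.layers.Dense(1)(x)                          # predicted wear value
    return tf.keras.Model(inputs, outputs)

model = build_cnn_aindrnn()
model.compile(optimizer="adam", loss="mse")
```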
The tool wear prediction model is thus constructed by the technical scheme of this embodiment. Because the Hadamard product introduced into the deep independent recurrent network model adjusts the original input, the influence of the more important input elements on the model is strengthened and the less important elements are suppressed, which improves the accuracy with which the model predicts the tool wear result.
In addition, when the tool wear prediction model of this embodiment predicts the wear result of a tool, the convolutional network part extracts the local features of the original tool data and the deep independent recurrent network part extracts the temporal features, so loss of data information is avoided and the output layer part can better predict the tool wear result.
Fig. 2 shows a schematic flow chart of a tool wear prediction method according to another embodiment of the present application.
Referring to fig. 2, a method for predicting tool wear includes the following steps:
and step S21, collecting original tool data.
Please refer to the related description in step S11 for step S21, which is not described herein.
And step S22, sequentially removing invalid data in the original tool data, down-sampling and normalizing to obtain the original tool data.
The original cutter data records the full life cycle information of each milling cutter, including the cutter feeding and retracting processes, and is positioned at the head end and the tail end of the original cutter data. And the vibration amplitude of the head and tail data is too small, so that the normal milling process of the milling cutter cannot be reflected. In order to avoid the influence of head and tail signals on the prediction model, 2.5% of head and tail of the total sampling data is respectively intercepted in each milling process, and head and tail invalid data are removed.
The sampling frequency of the original tool data is too high, the number of signal samples acquired by milling each time is over one hundred thousand, and the excessive data not only can lead the model training time to be too long, but also can lead the model precision to be influenced. Therefore, the first 20000 pieces of signal data are mainly adopted and normalized, and finally three original tool data with the size of 315 × 200000 × 7 are obtained.
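A rough preprocessing sketch along these lines is shown below; the 2.5% trim follows the text, while the down-sampling factor and the min-max normalization are assumptions of the sketch.

```python
import numpy as np

def preprocess_cut(raw, down_factor=5):
    # raw: array of shape (n_samples, 7) for one milling pass
    n = raw.shape[0]
    trim = int(n * 0.025)                    # drop 2.5% at each end (tool entry / retraction)
    signal = raw[trim:n - trim]
    signal = signal[::down_factor]           # simple decimation as the down-sampling step (assumed)
    mins, maxs = signal.min(axis=0), signal.max(axis=0)
    return (signal - mins) / (maxs - mins + 1e-12)   # per-channel min-max normalization (assumed)
```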
Step S23: divide the initial tool data into initial tool training data and initial tool testing data using K-fold cross-validation, where K is a positive integer greater than or equal to 5.
It should be noted that the initial tool data can be divided into training data and testing data with K-fold cross-validation (K ≥ 5). The principle of K-fold cross-validation is as follows: 1. Divide the full initial tool data set S into k disjoint subsets; if S contains m training samples, each subset contains m/k samples, and the subsets are denoted {S1, S2, …, Sk}. 2. Each time, take one of the divided subsets as initial tool testing data and the remaining k−1 subsets as initial tool training data. For example, if the divided subsets together contain 10 units of initial tool data, 2 are taken as initial tool testing data and the other 8 as initial tool training data, i.e., a split of 8:2.
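The split can be sketched with scikit-learn's KFold as follows; the array shapes are illustrative placeholders only.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical placeholders: X holds the preprocessed per-cut samples, y the wear values.
X = np.zeros((315, 2000, 7), dtype="float32")   # shapes reduced here for illustration
y = np.zeros(315, dtype="float32")

kf = KFold(n_splits=5, shuffle=True, random_state=0)   # K >= 5, as required above
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # k-1 subsets train the model, the held-out subset tests it (an 8:2 split when k = 5)
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test samples")
```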
Step S24: introduce a Hadamard product to improve the attention gate, and fuse the improved attention gate with the independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, each comprising a plurality of independent neural units connected with each other.
For step S24, refer to the related description of step S12, which is not repeated here.
Step S25: stack a plurality of the independent recurrent network base models to construct a deep independent recurrent network model.
For step S25, refer to the related description of step S13, which is not repeated here.
Step S26: combine the deep independent recurrent network model with the convolutional neural network, and construct a tool wear prediction model from the initial tool data, where the tool wear prediction model is used to predict the wear result of the tool.
For step S26, refer to the related description of step S14, which is not repeated here.
The initial tool wear prediction model is then tested and verified on the initial tool testing data to complete the final construction of the tool wear prediction model.
To better understand the technical solution of the application, a specific test experiment is described below.
1. Raw tool data
The data used in this experiment come from the milling tool wear data set of the 2010 PHM Society data challenge. The data were acquired on a high-speed CNC milling machine with a three-flute carbide ball-end milling cutter; the workpiece material was stainless steel, the spindle speed was 10400 rpm, the feed rate was 1555 mm/min, and the radial and axial cutting depths were 0.125 mm and 0.2 mm, respectively. The sampling frequency was 50 kHz. The experiment mainly collected milling force, vibration acceleration and acoustic emission signals during machining; a three-component dynamometer and three piezoelectric accelerometers acquired the milling force and vibration acceleration signals along the x, y and z axes, respectively.
The data set contains 7 channels of sensor information over the full life cycle of 6 milling cutters (c1–c6) under the same operating conditions, with 315 cuts per cutter; each cut records the wear values flute1, flute2 and flute3 of the cutter's three edges. Because c1, c4 and c6 served as the competition training sets while c2, c3 and c5 served as the test sets without reference wear values, only c1, c4 and c6 are selected as experimental data, and flute1 is chosen as the target wear value.
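For orientation, the following is a rough loading sketch for one cutter of this data set; all file paths, file-name patterns and column names here are hypothetical placeholders and would need to be matched to the actual data layout.

```python
import glob
import numpy as np
import pandas as pd

def load_cutter(cutter_dir, wear_csv):
    # One CSV of 7 signal channels per cut (assumed layout), plus a wear file whose
    # "flute_1" column is taken as the target (assumed column name).
    cuts = sorted(glob.glob(f"{cutter_dir}/c_*_*.csv"))
    X = np.stack([pd.read_csv(path, header=None).values[:20000, :7] for path in cuts])
    y = pd.read_csv(wear_csv)["flute_1"].values[: len(cuts)]
    return X.astype("float32"), y.astype("float32")

X_c1, y_c1 = load_cutter("PHM2010/c1", "PHM2010/c1_wear.csv")   # hypothetical paths
```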
2. Experimental Environment and parameter settings
The hardware environment is mainly a PC host with an Intel(R) Core(TM) i5-10210U CPU @ 1.60 GHz (2.11 GHz), 16 GB RAM and a 64-bit operating system. The software is implemented in Python in the PyCharm environment on Windows 10, with Python version 3.6.0 and TensorFlow version 1.15.0.
The tool wear prediction model in this experiment is a hybrid model consisting of CNN and AIndRNN. The CNN part consists of 4 CNN layers: the first convolutional layer has 32 filters, each subsequent CNN layer has twice as many filters as the previous one, and all CNN layers use the ReLU activation function. For the pooling layers, the pool size is 5 in the first and second layers and 4 in the third and fourth layers. The model has two AIndRNN layers, each with 256 memory cells. The output layer consists of two fully connected layers with 500 and 600 units, using the ReLU activation function. The specific parameter settings are shown in Table 1.
Table 1. Structural parameters of the tool wear prediction model.
3. Evaluation index
The tool wear prediction problem in this experiment is a regression problem, so the root mean square error (RMSE) and the mean absolute error (MAE) are used as evaluation indices. They are calculated as follows.
RMSE measures the average magnitude of the error and is the square root of the mean of the squared differences between predicted and actual observations:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)² )    (12)
MAE is the mean of the absolute errors:
MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|    (13)
where n denotes the number of samples, y_i the actual value, ŷ_i the predicted value, and i the index of the i-th sample.
4. Evaluation of Experimental results
To compare the effects of IndRNN and AIndRNN, and of the original and modified EleAttG on IndRNN, three sets of experiments were performed: CNN-IndRNN, CNN-EleAttG-IndRNN and CNN-AIndRNN were each evaluated on tools c1, c4 and c6.
Table 2. Comparison of the CNN-AIndRNN model with the CNN-IndRNN and CNN-EleAttG-IndRNN models.
In this comparison, the RMSE and MAE of CNN-AIndRNN are generally lower than those of the other two models across the three data sets c1, c4 and c6, ruling out the possibility that the CNN-AIndRNN predictions are merely fortuitous. Compared with the CNN-IndRNN model, the RMSE of CNN-AIndRNN decreases by 0.809 on average and the MAE by 0.531, showing that adding the improved attention unit clearly improves the prediction accuracy of IndRNN on the tool wear prediction problem. After the original EleAttG unit is added to the IndRNN model, the RMSE and MAE of CNN-EleAttG-IndRNN merely fluctuate around the results of CNN-IndRNN with no clear difference, indicating that the original EleAttG unit, which is designed around the matrix-product recurrent component, is not applicable to the IndRNN model, whose recurrence is based on the Hadamard product u ⊙ h_{t-1}. Finally, comparing the predictions of CNN-AIndRNN and CNN-EleAttG-IndRNN, the RMSE and MAE decrease by 1.0813 and 0.8523, respectively, showing that the improved EleAttG unit is better suited to the IndRNN model and that CNN-AIndRNN achieves higher prediction accuracy.
Corresponding to the foregoing method embodiments, the application also provides a tool wear prediction device.
Fig. 5 is a schematic structural diagram illustrating a tool wear prediction device in an embodiment of the present application.
Referring to fig. 5, a tool wear prediction apparatus 50 includes an acquisition module 510, a fusion module 520, a stacking module 530, and a construction module 540.
The acquisition module 510 is used for acquiring original tool data;
The fusion module 520 is configured to introduce a Hadamard product to improve an attention gate and fuse the improved attention gate with an independent recurrent neural network, obtaining a plurality of independent recurrent network base models with a fused attention mechanism, where each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
the stacking module 530 is configured to stack a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
the construction module 540 is configured to combine the deep independent recurrent network model with the convolutional neural network and construct, from the original tool data, a tool wear prediction model used to predict the wear result of the tool.
In the device of this embodiment, the acquisition module 510 collects original tool data; the fusion module 520 introduces a Hadamard product to improve the attention gate and fuses the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism; the stacking module 530 stacks the base models to construct a deep independent recurrent network model; and the construction module 540 combines the deep independent recurrent network model with a convolutional neural network and constructs, from the original tool data, a tool wear prediction model for predicting the tool wear result. Because the Hadamard product introduced into the deep independent recurrent network model adjusts the original input, the influence of the more important input elements on the model is strengthened and the less important elements are suppressed, which improves the accuracy with which the model predicts the tool wear result.
In addition, when the tool wear prediction model of this technical scheme predicts the wear result of a tool, the convolutional network part extracts the local features of the original tool data and the deep independent recurrent network part extracts the temporal features, so loss of data information is avoided and the output layer part can better predict the tool wear result.
Fig. 6 shows a schematic structural diagram of a tool wear prediction device in an embodiment of the present application.
Referring to fig. 6, a tool wear prediction apparatus 60 includes an acquisition module 610, a preprocessing module 620, a dividing module 630, a fusion module 640, a stacking module 650, and a construction module 660.
For the acquisition module 610, fusion module 640, stacking module 650, and construction module 660, refer to the description of the corresponding modules in fig. 5, which is not repeated here.
The preprocessing module 620 is configured to sequentially remove invalid data from the original tool data, then down-sample and normalize, to obtain initial tool data.
The dividing module 630 is configured to divide the initial tool data into initial tool training data and initial tool testing data using K-fold cross-validation, where K is a positive integer greater than or equal to 5.
With regard to the apparatus in the above embodiments, the specific manner in which each module and unit performs operations has been described in detail in relation to the method embodiment corresponding to the apparatus, and will not be described in detail herein.
Referring to fig. 7, an electronic device 700 includes a processor 710 and a memory 720.
The Processor 710 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 720 may include various types of storage units, such as system memory, read-only memory (ROM) and permanent storage. The ROM may store static data or instructions required by the processor 710 or other modules of the computer. The permanent storage device may be a readable and writable storage device and may be non-volatile, retaining stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage is a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a readable and writable, volatile memory device, such as dynamic random access memory, and may store the instructions and data that some or all of the processors require at runtime. In addition, the memory 720 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) as well as magnetic and/or optical disks. The memory 720 stores executable code which, when executed by the processor 710, may cause the processor 710 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A tool wear prediction method, comprising:
collecting original tool data;
introducing a Hadamard product to improve an attention gate, and fusing the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
stacking a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
and combining the deep independent recurrent network model with a convolutional neural network, and constructing a tool wear prediction model from the original tool data, wherein the tool wear prediction model is used for predicting the wear result of the tool.
2. The tool wear prediction method of claim 1, further comprising, after collecting the original tool data:
sequentially removing invalid data from the original tool data, then down-sampling and normalizing, to obtain initial tool data;
introducing a Hadamard product to improve the attention vector of an attention gate, fusing the improved attention gate with an independent recurrent neural network, and constructing a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
stacking a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
and combining the deep independent recurrent network model with a convolutional neural network, and constructing a tool wear prediction model from the initial tool data, wherein the tool wear prediction model is used for predicting the wear result of the tool.
3. The tool wear prediction method of claim 2, further comprising, after sequentially removing the invalid data from the original tool data, down-sampling and normalizing to obtain the initial tool data:
dividing the initial tool data into initial tool training data and initial tool testing data by means of K-fold cross-validation, wherein K is a positive integer greater than or equal to 5;
introducing a Hadamard product to improve an attention gate, and fusing the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other.
4. The tool wear prediction method of claim 1, wherein introducing the Hadamard product to improve the attention gate comprises:
introducing a Hadamard product to improve the attention vector of the attention gate, so that, before the attention gate is fused with the independent recurrent neural network, its attention vector is consistent with the recurrent input unit of the independent recurrent neural network.
5. The tool wear prediction method of claim 1, wherein using the tool wear prediction model to predict the wear result of a tool comprises:
extracting local features of the original tool data with a convolutional network part of the tool wear prediction model;
extracting temporal features of the original tool data with a deep independent recurrent network part of the tool wear prediction model;
and inputting the local features and the temporal features of the original tool data into an output layer part of the tool wear prediction model to obtain the tool wear result.
6. A tool wear prediction device, comprising:
an acquisition module, used for collecting original tool data;
a fusion module, used for introducing a Hadamard product to improve an attention gate and fusing the improved attention gate with an independent recurrent neural network to obtain a plurality of independent recurrent network base models with a fused attention mechanism, wherein each independent recurrent network base model comprises a plurality of independent neural units connected with each other;
a stacking module, used for stacking a plurality of the independent recurrent network base models to construct a deep independent recurrent network model;
and a construction module, used for combining the deep independent recurrent network model with a convolutional neural network and constructing a tool wear prediction model from the original tool data, wherein the tool wear prediction model is used for predicting the wear result of the tool.
7. The tool wear prediction device of claim 6, further comprising a preprocessing module;
the preprocessing module being used for sequentially removing invalid data from the original tool data, then down-sampling and normalizing, to obtain initial tool data.
8. The tool wear prediction device of claim 6, wherein the fusion module introducing a Hadamard product to improve the attention gate comprises:
introducing a Hadamard product to improve the attention vector of the attention gate, so that, before the attention gate is fused with the independent recurrent neural network, its attention vector is consistent with the recurrent input unit of the independent recurrent neural network.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the tool wear prediction method of any one of claims 1 to 5.
10. A computer-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the tool wear prediction method of any one of claims 1 to 5.
CN202210131513.0A 2022-02-14 2022-02-14 Tool wear prediction method, device, equipment and storage medium Active CN114161228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210131513.0A CN114161228B (en) 2022-02-14 2022-02-14 Tool wear prediction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210131513.0A CN114161228B (en) 2022-02-14 2022-02-14 Tool wear prediction method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114161228A true CN114161228A (en) 2022-03-11
CN114161228B CN114161228B (en) 2022-06-21

Family

ID=80489948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210131513.0A Active CN114161228B (en) 2022-02-14 2022-02-14 Tool wear prediction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114161228B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115971970A (en) * 2022-12-02 2023-04-18 西南交通大学 Milling cutter wear monitoring method based on multi-parameter guide space attention mechanism

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027457A (en) * 2019-12-06 2020-04-17 电子科技大学 Independent circulation neural network based on gate control and skeleton action identification method
CN113378725A (en) * 2021-06-15 2021-09-10 山东大学 Cutter fault diagnosis method, equipment and storage medium based on multi-scale-channel attention network
CN113780153A (en) * 2021-09-07 2021-12-10 北京理工大学 Cutter wear monitoring and predicting method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027457A (en) * 2019-12-06 2020-04-17 电子科技大学 Independent circulation neural network based on gate control and skeleton action identification method
CN113378725A (en) * 2021-06-15 2021-09-10 山东大学 Cutter fault diagnosis method, equipment and storage medium based on multi-scale-channel attention network
CN113780153A (en) * 2021-09-07 2021-12-10 北京理工大学 Cutter wear monitoring and predicting method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG GAO-PENG et al.: "Silicon content prediction of hot metal in blast furnace based on", E3S Web of Conferences *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115971970A (en) * 2022-12-02 2023-04-18 西南交通大学 Milling cutter wear monitoring method based on multi-parameter guide space attention mechanism
CN115971970B (en) * 2022-12-02 2024-03-26 西南交通大学 Milling cutter abrasion monitoring method based on multi-parameter guiding spatial attention mechanism

Also Published As

Publication number Publication date
CN114161228B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
Sener et al. A novel chatter detection method for milling using deep convolution neural networks
CN108709745B (en) Rapid bearing fault identification method based on enhanced LPP algorithm and extreme learning machine
Zhou et al. A tool condition monitoring method based on two-layer angle kernel extreme learning machine and binary differential evolution for milling
CN111633467B (en) Cutter wear state monitoring method based on one-dimensional depth convolution automatic encoder
CN113065581B (en) Vibration fault migration diagnosis method for reactance domain self-adaptive network based on parameter sharing
CN114161228B (en) Tool wear prediction method, device, equipment and storage medium
Nunes et al. Am-mobilenet1d: A portable model for speaker recognition
Zhang et al. A novel data-driven method based on sample reliability assessment and improved CNN for machinery fault diagnosis with non-ideal data
CN108857577A (en) Cutting-tool wear state monitoring method and equipment
CN115688040A (en) Mechanical equipment fault diagnosis method, device, equipment and readable storage medium
Naveen Venkatesh et al. Transfer Learning‐Based Condition Monitoring of Single Point Cutting Tool
Amin et al. Development of intelligent fault-tolerant control systems with machine learning, deep learning, and transfer learning algorithms: A review
Han et al. Chatter detection in milling of thin-walled parts using multi-channel feature fusion and temporal attention-based network
CN115753101A (en) Bearing fault diagnosis method based on weight adaptive feature fusion
Hundi et al. Deep learning to speed up the development of structure–property relations for hexagonal boron nitride and graphene
KR102546340B1 (en) Method and apparatus for detecting out-of-distribution using noise filter
Kaur et al. Analyzing various machine learning algorithms with smote and adasyn for image classification having imbalanced data
Wang et al. Improved bilayer convolution transfer learning neural network for industrial fault detection
Djaballah et al. Deep transfer learning for bearing fault diagnosis using CWT time–frequency images and convolutional neural networks
CN112846938A (en) Main shaft rotation precision degradation traceability system under cutting working condition
Dharwadkar et al. Customer retention and credit risk analysis using ANN, SVM and DNN
CN113469263B (en) Prediction model training method and device suitable for small samples and related equipment
Kushwaha et al. Analysis of CNN Model with Traditional Approach and Cloud AI based Approach
Naing et al. Images retrieval and classification for acute myeloid leukemia blood cell using deep metric learning
Junjun et al. One-dimensional residual neural network-based for tool wear condition monitoring

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant