CN110851654A - Industrial equipment fault detection and classification method based on tensor data dimension reduction - Google Patents
- Publication number
- CN110851654A CN110851654A CN201910852739.8A CN201910852739A CN110851654A CN 110851654 A CN110851654 A CN 110851654A CN 201910852739 A CN201910852739 A CN 201910852739A CN 110851654 A CN110851654 A CN 110851654A
- Authority
- CN
- China
- Prior art keywords
- data
- matrix
- tensor
- layer
- encoder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/906—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a tensor-data-dimension-reduction-based industrial equipment fault detection and classification method in the field of the Internet of Things, which comprises the following steps: 1) data acquisition: various sensors are used to collect data, providing data sources for prediction; the data comprise structured data such as operating parameters in the production process of the industrial equipment and unstructured data such as videos or images captured during operation; 2) data preprocessing: data of different structures are fused, and the fused data are reduced in dimension; 3) data analysis: after being trained on a large amount of data, the stacked denoising autoencoder on the server can perform health detection on the production status of the industrial equipment according to the data sent by the sensors around the production process, improving processing efficiency and the accuracy of the detection results.
Description
Technical Field
The invention relates to a fault detection method, in particular to a tensor-data-dimension-reduction-based industrial equipment fault detection and classification method, and belongs to the technical field of the Internet of Things.
Background
Industrial Internet of Things and data-driven technologies have revolutionized the manufacturing industry by enabling computer networks to collect large amounts of data from connected machines and translate them into actionable information. Machine health monitoring, a key component of modern manufacturing systems, has fully embraced the big data revolution. In contrast to the top-down modeling provided by traditional physics-based models, data-driven machine health monitoring systems provide a bottom-up solution paradigm for predicting future operating conditions and remaining useful life, and for diagnosing faults when they occur. Machine learning techniques are considered a powerful solution for extracting useful knowledge from big data and making appropriate decisions. As the hottest sub-field of machine learning, deep learning can serve as a bridge connecting large-scale machine data and intelligent machine health monitoring.
As a branch of machine learning, deep learning attempts to model hierarchical representations behind data and classify (predict) patterns by stacking multiple layers of information processing modules in a hierarchical architecture. Recently, deep learning has been successfully applied to various fields such as computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics.
Deep learning techniques also show good application prospects for machine health monitoring systems. Layer-by-layer pre-training of a deep neural network based on autoencoders or RBMs can facilitate training and improve the network's ability to discriminatively represent mechanical data. Convolutional neural networks and recurrent neural networks can provide more advanced and complex composition mechanisms for learning representations of mechanical data. In contrast to traditional data-driven health detection systems, deep-learning-based health detection systems do not require extensive manual, hand-crafted feature design. All model parameters, including the feature module and the pattern classification/regression module, can be trained jointly.
However, as data volumes grow enormous, the data types that neural networks must process have extended from single structured data to semi-structured and unstructured data, and a network may even process data of multiple structure types at the same time. Moreover, because of the increase in data types, the input dimension of the neural network often exceeds hundreds or even thousands, and such high dimensionality causes potential problems, such as a large computational cost and overfitting due to huge numbers of model parameters. How to preprocess the data has therefore become a major direction for future big data research.
The document "Method, system, device, and storage medium for evaluating the health state of industrial equipment" (application No. 201910376003.8) provides a method, system, device, and storage medium for evaluating the health state of industrial equipment. The evaluation method comprises: acquiring historical operating parameter data, and the corresponding historical operating code data, collected by sensors in the industrial equipment within a historical set time period; establishing a parameter prediction model; acquiring a predicted parameter value corresponding to the target sensor according to the parameter prediction model; and evaluating the health state of the industrial equipment within a target set time period according to the target operating parameter data and the predicted parameter value.
That document establishes a prediction model from historical data with a machine learning method and compares actual sensor parameters with predicted values to judge the health condition of the target industrial equipment. The scheme is grounded in practice, simple and practical.
Its drawback is that only continuous or discrete structured data are collected as the basis for evaluating the equipment's health state, so the reliability is limited. Because of the single data type, the prediction model uses only a linear regression algorithm to predict the target parameter data, which is overly simple. For health state evaluation, the predicted value is compared with the actual value, and whether the industrial equipment is healthy is decided by whether their residual exceeds a certain threshold; the degree of health therefore cannot be judged accurately, i.e. the health state of the industrial equipment cannot be judged finely.
Disclosure of Invention
The invention aims to provide a tensor data dimension reduction-based industrial equipment fault detection and classification method, which improves the processing efficiency and increases the accuracy of detection results.
The purpose of the invention is realized as follows: a tensor data dimension reduction-based industrial equipment fault detection and classification method comprises the following steps:
1) data acquisition: various sensors are used to collect data, providing data sources for prediction; the data comprise structured data such as operating parameters in the production process of the industrial equipment and unstructured data such as videos or images captured during operation;
2) data preprocessing: fusing data of different structures, and reducing the dimension of the fused data;
3) data analysis: after being trained on a large amount of data, the stacked denoising autoencoder on the server can perform health detection on the production status of the industrial equipment according to the data sent by the sensors around the production process.
As a further limitation of the present invention, the data fusion process in the data preprocessing of step 2) includes: first quantitatively and uniformly representing industrial big data, applying different quantitative representation methods to structured, semi-structured and unstructured data; then fusing tensor data of different orders with a tensor expansion operator, generally first fusing data of the same structural type and then fusing data of different structural types.
As a further limitation of the present invention, the incremental dimension reduction algorithm is adopted in the data dimension reduction process in the data preprocessing of step 2), and specifically includes:
2-1) recursive matrix singular value decomposition, the recursive formula is as follows
In the formula, the mix function, described in step 2-2), is used to combine the increment matrix with the decomposition result; in the recursive process, the function f repeatedly calls itself to decompose the matrices M_i and C_i, each call advancing one step toward the final singular value decomposition, finally obtaining the matrix M_1;
2-2) combining the increment matrix and the decomposition result, described in detail as follows: the decomposition results of matrix M_i and matrix C_i are merged with matrix M_{i-1} and matrix C_{i-1} as the new input original matrix and increment matrix; the increment matrix C_{i-1} is then projected onto the orthogonal space U_i; through the orthogonality relation, U_i and the unit orthogonal basis J of H can be obtained by calculation; U_i and J are combined into a new matrix, and higher-order singular value decomposition of the new matrix yields the updated left unitary matrix U, positive semi-definite diagonal matrix Σ and right unitary matrix V, completing the merging of the newly added matrix with the original matrix and then the dynamic update of the decomposition;
2-3) incremental tensor singular value decomposition: cut the higher-order tensor into an increment tensor and an original tensor; first expand the increment tensor and the original tensor to the same dimensionality (tensors of different dimensionality yield mode-unfolding matrices of different dimensions when expanded); on the one hand, unfold the tensor X by mode and update the tensor T to obtain the core tensor S; on the other hand, decompose the original tensor T to obtain the core tensor and the left unitary matrices U_1, U_2 … U_i; combining S with U_1, U_2 … U_i gives the new approximate tensor, realizing the dimensionality reduction of the original tensor.
As a further limitation of the present invention, step 3) specifically comprises:
3-1) selecting a proper denoising self-encoder:
the stacked denoising autoencoder has 3 hidden layers, and the output of each layer serves as the input of the next layer; it can also be regarded as being composed of 3 autoencoders, where the first layer (the input layer) and the second layer constitute the first autoencoder, the second and third layers the second autoencoder, and the third and fourth layers the third autoencoder;
3-2) the coding function and reconstruction function of this self-encoder are given:
assuming that the stacked autoencoders have l layers in total, let ω^(k,l) and b^(k,l) respectively denote the weight and bias parameters of the k-th autoencoder; the encoding process of each layer is set as:
a^(k+1) = f(ω^(k,l) x^(k) + b^(k,l))
where a^(k+1) and x^(k) represent the output and input of the encoder, respectively, and f(·) represents the sigmoid transformation from one layer to the next;
the reconstruction process is represented as follows:
z = g_θ′(a) = s(ω^T a + b^T), θ′ = {ω^T, b^T}
where z and a represent the output and the output of the previous hidden layer, respectively, and θ′ represents the connection parameters; the function s(·) represents the reconstruction function, whose goal is that the output z equals the input data;
3-3) adding sparsity to the self-encoder;
let a_j and m respectively denote the activation of the j-th hidden unit and the number of input nodes, where an output of 1 represents activation and an output of 0 represents suppression; the average activation ρ_j in the encoder is expressed as
ρ_j = (1/m) Σ_{i=1}^{m} a_j(x^(i))
3-4) the cost function of the self-encoder is expressed as follows:
assuming a training set with m input samples, the overall cost function of a denoising autoencoder with n layers is set as:
C_total(ω, b) = C(ω, b; x, a) + β Σ_j KL(ρ‖ρ_j)
In the above formula, C(ω, b; x^(i), a^(i)) is the cost function of a single autoencoder, and KL(ρ‖ρ_j) is the Kullback-Leibler (KL) divergence, intended to measure the difference between ρ_j and ρ; it is put into the cost function as an extra penalty term in the present invention and is specifically expressed as:
KL(ρ‖ρ_j) = ρ log(ρ/ρ_j) + (1 − ρ) log((1 − ρ)/(1 − ρ_j))
ω_ij^(l) represents the synaptic weight between the i-th neuron in layer l and the j-th neuron in layer l+1, and s_l represents the total number of neurons in layer l.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects. Aiming at the large data volume and diverse structures in industrial production, the method uniformly expresses heterogeneous data in tensor form and fuses them, so that the neural network can detect the health condition of the industrial equipment from larger and richer data. Compared with a general tensor dimension reduction method such as plain singular value decomposition, the incremental higher-order singular value decomposition algorithm achieves a better dimension reduction effect, improving the training and classification efficiency of the neural network and the accuracy of the analysis results. Moreover, the stacked denoising autoencoder combines the advantages of the denoising autoencoder and the sparse autoencoder, performs better, and can effectively classify the health condition of the industrial equipment; the method can thus improve the accuracy of health detection of industrial equipment in the production process. In addition, data preprocessing is carried out in the fog nodes, which reduces latency and facilitates the operation of large-scale sensor networks with many nodes.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic flow chart of the incremental tensor dimension reduction process in the present invention.
FIG. 3 is a flow chart of the continuous learning process of fault diagnosis based on the stacked denoising autoencoder in the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail by combining the attached drawings:
as shown in fig. 1, the present embodiment provides a method for detecting and classifying faults of industrial equipment based on tensor data dimension reduction, which includes the following steps.
The method comprises the following steps: collecting data from various sensors associated with an industrial device; the collected data will be put into the fog node for processing.
Step two: respectively converting the collected structured, semi-structured and unstructured data into consistent high-order tensors; fusing different tensors of different orders by using a tensor expansion operator;
2.1 Structured data are usually stored in a database and are logically expressed and implemented with a 2-dimensional table structure; the data are therefore converted into a multi-order tensor based on the number of table columns: if the data of a database contain 5 types of attributes, i.e. the corresponding table has 5 columns, the data are converted into a 5th-order tensor;
2.2 A certain relationship often exists between the items of semi-structured data; the semi-structured data can therefore be represented in a tree structure, and after tensor conversion one piece of semi-structured data is represented as a 3rd-order tensor;
2.3 Unstructured data are converted into tensors of different orders according to their feature quantities; taking video data as an example, the main features are the time frame, the width and height of each frame picture, and the color, so one piece of video data can be expressed as a 4th-order tensor;
2.4 The data are fused according to the tensor expansion operator: first data of the same structural type are fused, and then the 3 types of fused data of different structures are fused;
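As an illustrative sketch of steps 2.1–2.4 (the patent does not define its tensor expansion operator, so the `expand_to_order` helper and the stacking step below are assumptions, not the patented method), lifting tensors of different orders to a common order before fusion might look like:

```python
import numpy as np

def expand_to_order(t, order):
    """Append singleton axes until the tensor reaches the target order."""
    t = np.asarray(t)
    while t.ndim < order:
        t = t[..., np.newaxis]
    return t

# Same-type fusion: stack several structured records (5 attributes each).
records = [np.random.rand(5) for _ in range(3)]
structured = np.stack(records)            # 2nd-order tensor, shape (3, 5)

# A video clip as a 4th-order tensor: (frames, height, width, color).
video = np.random.rand(8, 4, 4, 3)

# Cross-type fusion step: lift both tensors to the same order before
# any joint operation, as step 2.4 requires fusing different structures.
order = max(structured.ndim, video.ndim)
s4 = expand_to_order(structured, order)   # shape (3, 5, 1, 1)
```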
step three: and (3) performing dimensionality reduction on the high-order tensor obtained in the last step by using an incremental tensor dimensionality reduction algorithm, and transmitting the processed tensor data to a server, namely an input layer of a self-encoder. The whole algorithm flow is shown in FIG. 2;
3.1 Cut the higher-order tensor into an increment tensor and an original tensor; first expand the increment tensor and the original tensor to the same dimensionality (tensors of different dimensionality yield mode-unfolding matrices of different dimensions when expanded);
3.2 Unfold the tensor X by mode and perform recursive singular value decomposition on X to obtain the core tensor; in the recursive process the function f repeatedly calls itself to decompose the matrices M_i and C_i, each call advancing one step toward the final singular value decomposition, finally obtaining the matrix M_1; the recursive formula is as follows
For the mix function in the formula, it is described in detail as follows: the decomposition results of matrix M_i and matrix C_i are merged with matrix M_{i-1} and matrix C_{i-1} as the new input original matrix and increment matrix; the increment matrix C_{i-1} is then projected onto the orthogonal space U_i; through the orthogonality relation, U_i and the unit orthogonal basis J of H can be obtained by calculation; U_i and J are combined into a new matrix, and higher-order singular value decomposition of the new matrix yields the updated left unitary matrix U, positive semi-definite diagonal matrix Σ and right unitary matrix V, completing the merging of the newly added matrix with the original matrix and then the dynamic update of the decomposition;
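The merge step described above corresponds closely to the classical incremental (updatable) SVD: project the new columns onto the current basis, take an orthonormal basis of the residual, decompose a small augmented matrix, and combine. A minimal numpy sketch under that reading (the function name `update_svd` and the exact block layout are our assumptions, not the patent's mix function verbatim):

```python
import numpy as np

def update_svd(U, S, V, C):
    """Merge new columns C into an existing thin SVD M = U @ diag(S) @ V.T."""
    L = U.T @ C                     # projection of C onto the current basis U
    H = C - U @ L                   # residual component orthogonal to U
    J, K = np.linalg.qr(H)          # unit orthogonal basis J of the residual H
    r, c = S.size, C.shape[1]
    R = np.block([[np.diag(S),        L],
                  [np.zeros((c, r)),  K]])      # small augmented matrix
    Ur, Sr, VrT = np.linalg.svd(R, full_matrices=False)
    U_new = np.hstack([U, J]) @ Ur              # updated left unitary matrix
    V_aug = np.block([[V,                np.zeros((V.shape[0], c))],
                      [np.zeros((c, V.shape[1])), np.eye(c)]])
    V_new = V_aug @ VrT.T                       # updated right unitary matrix
    return U_new, Sr, V_new

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))
C = rng.standard_normal((6, 2))
U, S, Vt = np.linalg.svd(M, full_matrices=False)
U2, S2, V2 = update_svd(U, S, Vt.T, C)
# The updated factors reproduce the concatenated matrix [M C].
```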
3.3 Decompose the original tensor T to obtain the core tensor and the left unitary matrices U_1, U_2 … U_i; here singular value decomposition is applied directly to the original tensor T, i.e. T = UΣV^T, so as to obtain the left unitary matrix U;
3.4 Combine S with U_1, U_2 … U_i to obtain the new approximate tensor, realizing the dimensionality reduction of the original tensor.
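Steps 3.1–3.4 taken together amount to a (truncated) higher-order SVD: an SVD of each mode unfolding gives the left unitary matrices, and multiplying the tensor by their transposes gives the core. A self-contained numpy sketch of that standard construction (helper names `unfold`, `mode_multiply`, `hosvd` are ours; full ranks are used here so the reconstruction is lossless):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Higher-order SVD: one left unitary factor per mode plus a core tensor."""
    Us = [np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :r]
          for mode, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(Us):
        core = mode_multiply(core, U.T, mode)   # S = T x_1 U1^T x_2 U2^T ...
    return core, Us

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 3))
core, Us = hosvd(T, ranks=T.shape)              # full ranks: lossless
approx = core
for mode, U in enumerate(Us):                   # combine S with U_1 ... U_i
    approx = mode_multiply(approx, U, mode)
```

Choosing `ranks` smaller than `T.shape` yields the dimensionality-reduced approximate tensor of step 3.4.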
Step four: the stacked denoising self-encoder of the server is trained by using the input data, and the training process is as shown in fig. 3:
4.1, setting the number of layers of the neural network, the number of neurons in each layer, the maximum batch (batch) number and the maximum generation (epoch) number;
4.2 Starting from layer 1, randomly initialize the weights and biases and begin training the first batch of data of the first generation, i.e. initialize epoch = 1 and batch = 1;
4.3 For each batch of data in each generation, the following training process is performed:
4.3.1 unsupervised learning phase;
4.3.1.1 sets the encoding and reconstruction functions of this self-encoder:
assuming that the stacked autoencoders have l layers in total, let ω^(k,l) and b^(k,l) respectively denote the weight and bias parameters of the k-th autoencoder; the encoding process of each layer is set as:
a^(k+1) = f(ω^(k,l) x^(k) + b^(k,l))
where a^(k+1) and x^(k) represent the output and input of the encoder, respectively, and f(·) represents the sigmoid transformation from one layer to the next;
the reconstruction process is represented as follows:
z = g_θ′(a) = s(ω^T a + b^T), θ′ = {ω^T, b^T}
where z and a represent the output and the output of the previous hidden layer, respectively, θ′ represents the connection parameters, and the function s(·) represents the reconstruction function, whose goal is that the output z equals the input data;
4.3.1.2 adding sparsity to the self-encoder;
let a_j and m respectively denote the activation of the j-th hidden unit and the number of input nodes, where an output of 1 represents activation and an output of 0 represents suppression; the average activation ρ_j in the encoder is expressed as
ρ_j = (1/m) Σ_{i=1}^{m} a_j(x^(i))
4.3.1.3 setting cost function of self-encoder
Assuming a training set with m input samples, the overall cost function of a denoised self-encoder with n layers is set as:
in the above formula, C (omega, b; x)(i),a(i)) Is the cost function of a single auto-encoder,
KL(ρ||ρj) Is the Kullback-Leibler (KL) divergence, intended to measure ρjAnd p, which is put in place as an additional penalty term in the present inventionIn this function, it is specifically expressed as:
represents the synaptic weight, s, between the ith neuron in layer l and the jth neuron in layer l +1lRepresents the number of total neurons in layer l; the whole learning process is to minimize the cost function so as to obtain a coding function and a reconstruction function parameter;
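A sketch of the cost computed in 4.3.1.1–4.3.1.3: a sigmoid encode/decode pass, the average activation ρ_j per hidden unit, and the KL sparsity penalty added to the reconstruction error. The weight shapes, the squared-error form of the single-autoencoder cost, and the penalty weight β are assumptions for illustration, not values from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_ae_cost(W, b, W2, b2, X, rho=0.05, beta=3.0):
    """Reconstruction cost plus KL sparsity penalty for one autoencoder."""
    A = sigmoid(X @ W + b)                   # hidden activations a, (m, h)
    Z = sigmoid(A @ W2 + b2)                 # reconstruction z, (m, d)
    m = X.shape[0]
    recon = 0.5 * np.sum((Z - X) ** 2) / m   # C(w, b; x, a): mean squared error
    rho_j = A.mean(axis=0)                   # average activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_j)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_j)))
    return recon + beta * kl, rho_j          # extra KL penalty term added

rng = np.random.default_rng(2)
m, d, h = 20, 6, 3
X = rng.random((m, d))
W = rng.standard_normal((d, h)) * 0.1
W2 = rng.standard_normal((h, d)) * 0.1
cost, rho_j = sparse_ae_cost(W, np.zeros(h), W2, np.zeros(d), X)
```

The KL term vanishes exactly when every ρ_j equals the target ρ, which is what drives hidden units toward sparse average activation.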
4.3.2 fine tuning by using back propagation learning;
4.3.2.1 calculating the layer output in the forward direction from the first hidden layer to the output layer;
4.3.2.2 calculating the residual error of each unit in the output layer;
4.3.2.3 calculating the residual error of each unit in the hidden layer from back to front;
4.3.2.4 calculating an expected partial derivative of the corresponding cost function;
4.3.2.5 updating the residual for each layer based on the partial derivatives;
4.3.2.6 updating the initial weights and biases using gradient descent;
4.3.2.7 tuning based on conjugate gradient method;
the weights and biases are updated through the above two stages;
4.4 When a batch of data has been trained, set batch = batch + 1; when training reaches the set maximum batch number (maxbatch), i.e. batch > maxbatch, start the next generation of training, i.e. set epoch = epoch + 1;
4.5 Cyclically train all the data of each generation until the last generation is finished, i.e. until epoch > maxepoch; then start training the next layer following the same loop, until the training of the last layer is finished.
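The epoch/batch/layer bookkeeping of steps 4.2–4.5 can be sketched as follows; the `train_batch` callback is a placeholder standing in for the unsupervised pass and back-propagation fine-tuning of step 4.3:

```python
def train_stacked(layers, maxepoch, maxbatch, train_batch):
    """Greedy layer-by-layer loop over generations (epochs) and batches."""
    for layer in range(layers):          # 4.5: move to the next layer when done
        epoch = 1                        # 4.2: first generation
        while epoch <= maxepoch:
            batch = 1                    # 4.2: first batch
            while batch <= maxbatch:
                train_batch(layer, epoch, batch)   # 4.3: train this batch
                batch += 1               # 4.4: batch = batch + 1
            epoch += 1                   # 4.4: epoch = epoch + 1

calls = []
train_stacked(2, maxepoch=2, maxbatch=3,
              train_batch=lambda l, e, b: calls.append((l, e, b)))
```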
Step five: process new data according to steps one to three and feed them into the autoencoder; the trained neural network performs fault detection on the industrial equipment according to the input data and thereby classifies the fault types.
After tensor fusion and dimension reduction, the real-time data are input into the autoencoder; according to the encoding and reconstruction functions obtained after training, the output layer of the autoencoder uses a softmax regression algorithm to obtain the probability of each classification. The number and types of classes can be set differently depending on the industrial equipment. For transformers, for example, the output categories may be classified as healthy, partial discharge, spark discharge, high-temperature superheat, low-temperature superheat, and the like. Finally, the staff infer the health condition of the equipment from the obtained probability of each classification. If the training effect is good, the probability value of one classification in each result will normally be very high, and the equipment can be judged to be in that condition.
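A minimal sketch of the softmax output stage for the transformer example above; the class list follows the text, but the logit values are made-up output-layer scores for illustration:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class scores."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Transformer-state classes from the text.
classes = ["healthy", "partial discharge", "spark discharge",
           "high temperature superheat", "low temperature superheat"]
logits = np.array([3.1, 0.2, -0.5, 0.4, -1.0])   # illustrative scores
probs = softmax(logits)                          # probability per class
best = classes[int(np.argmax(probs))]            # most likely condition
```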
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can understand that the modifications or substitutions within the technical scope of the present invention are included in the scope of the present invention, and therefore, the scope of the present invention should be subject to the protection scope of the claims.
Claims (4)
1. A tensor data dimension reduction-based industrial equipment fault detection and classification method is characterized by comprising the following steps:
1) data acquisition: various sensors are used to collect data, providing data sources for prediction; the data comprise structured data such as operating parameters in the production process of the industrial equipment and unstructured data such as videos or images captured during operation;
2) data preprocessing: fusing data of different structures, and reducing the dimension of the fused data;
3) data analysis: after being trained on a large amount of data, the stacked denoising autoencoder on the server can perform health detection on the production status of the industrial equipment according to the data sent by the sensors around the production process.
2. The method for detecting and classifying the faults of the industrial equipment based on tensor data dimension reduction as claimed in claim 1, wherein the data fusion process in the data preprocessing of step 2) comprises: first quantitatively and uniformly representing industrial big data, applying different quantitative representation methods to structured, semi-structured and unstructured data; then fusing tensor data of different orders with a tensor expansion operator, generally first fusing data of the same structural type and then fusing data of different structural types.
3. The tensorial data dimension reduction-based industrial equipment fault detection and classification method according to claim 2, wherein an incremental dimension reduction algorithm is adopted in the data dimension reduction process in the data preprocessing of the step 2), and specifically comprises the following steps:
2-1) recursive matrix singular value decomposition, the recursive formula is as follows
In the formula, the mix function, described in step 2-2), is used to combine the increment matrix with the decomposition result; in the recursive process, the function f repeatedly calls itself to decompose the matrices M_i and C_i, each call advancing one step toward the final singular value decomposition, finally obtaining the matrix M_1;
2-2) combining the increment matrix and the decomposition result, described in detail as follows: the decomposition results of matrix M_i and matrix C_i are merged with matrix M_{i-1} and matrix C_{i-1} as the new input original matrix and increment matrix; the increment matrix C_{i-1} is then projected onto the orthogonal space U_i; through the orthogonality relation, U_i and the unit orthogonal basis J of H can be obtained by calculation; U_i and J are combined into a new matrix, and higher-order singular value decomposition of the new matrix yields the updated left unitary matrix U, positive semi-definite diagonal matrix Σ and right unitary matrix V, completing the merging of the newly added matrix with the original matrix and then the dynamic update of the decomposition;
2-3) incremental tensor singular value decomposition: cut the higher-order tensor into an increment tensor and an original tensor; first expand the increment tensor and the original tensor to the same dimensionality (tensors of different dimensionality yield mode-unfolding matrices of different dimensions when expanded); on the one hand, unfold the tensor X by mode and update the tensor T to obtain the core tensor S; on the other hand, decompose the original tensor T to obtain the core tensor and the left unitary matrices U_1, U_2 … U_i; combining S with U_1, U_2 … U_i gives the new approximate tensor, realizing the dimensionality reduction of the original tensor.
4. The method for detecting and classifying faults of industrial equipment based on tensor data dimension reduction as claimed in claim 1, wherein the step 3) specifically comprises the following steps:
3-1) selecting a proper denoising self-encoder:
the stacked denoising autoencoder has 3 hidden layers, and the output of each layer serves as the input of the next layer; it can also be regarded as being composed of 3 autoencoders, where the first layer (the input layer) and the second layer constitute the first autoencoder, the second and third layers the second autoencoder, and the third and fourth layers the third autoencoder;
3-2) the coding function and reconstruction function of this self-encoder are given:
assuming that the stacked autoencoders have l layers in total, let ω^(k,l) and b^(k,l) respectively denote the weight and bias parameters of the k-th autoencoder; the encoding process of each layer is set as:
a^(k+1) = f(ω^(k,l) x^(k) + b^(k,l))
where a^(k+1) and x^(k) represent the output and input of the encoder, respectively, and f(·) represents the sigmoid transformation from one layer to the next;
the reconstruction process is represented as follows:
z = g_θ′(a) = s(ω^T a + b^T), θ′ = {ω^T, b^T}
where z and a represent the output and the output of the previous hidden layer, respectively, and θ′ represents the connection parameters; the function s(·) represents the reconstruction function, whose goal is that the output z equals the input data;
3-3) adding sparsity to the self-encoder;
let a_j and m respectively denote the activation of the j-th hidden unit and the number of input nodes, where an output of 1 represents activation and an output of 0 represents suppression; the average activation ρ_j in the encoder is expressed as
ρ_j = (1/m) Σ_{i=1}^{m} a_j(x^(i))
3-4) the cost function of the autoencoder is expressed as follows:
assuming a training set with m input samples, the overall cost function of a denoising autoencoder with n layers is set as:
C_sparse(ω, b) = (1/m) Σ_{i=1}^{m} C(ω, b; x^(i), a^(i)) + β Σ_j KL(ρ ∥ ρ_j)
in the above formula, C(ω, b; x^(i), a^(i)) is the cost function of a single autoencoder, β weights the sparsity penalty, and KL(ρ ∥ ρ_j) is the Kullback-Leibler (KL) divergence, intended to measure the difference between ρ_j and ρ; in the present invention it is put into the cost function as an extra penalty term, and is specifically expressed as:
KL(ρ ∥ ρ_j) = ρ log(ρ/ρ_j) + (1 − ρ) log((1 − ρ)/(1 − ρ_j))
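A sketch of step 3-4) under stated assumptions: the patent leaves the single-autoencoder cost C(ω, b; x, a) unspecified, so squared reconstruction error is assumed here, and the penalty weight `beta` and target sparsity `rho` are hypothetical hyperparameter names:

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    # KL(rho || rho_j) = rho*log(rho/rho_j) + (1-rho)*log((1-rho)/(1-rho_j))
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sparse_cost(X, Z, rho_hat, rho=0.05, beta=3.0):
    # Reconstruction term averaged over the m samples
    # (squared error is an assumption; the patent does not fix C).
    recon = 0.5 * np.mean(np.sum((Z - X) ** 2, axis=1))
    # Extra penalty term: KL divergence summed over hidden units.
    penalty = beta * np.sum(kl_divergence(rho, rho_hat))
    return recon + penalty

X = np.array([[0.1, 0.9], [0.8, 0.2]])       # inputs
Z = np.array([[0.2, 0.8], [0.7, 0.3]])       # reconstructions
rho_hat = np.array([0.04, 0.10, 0.06])       # average activations rho_j
cost = sparse_cost(X, Z, rho_hat)
```

The KL term is zero exactly when ρ_j = ρ and grows as the average activation drifts from the target, which is what makes it usable as a penalty.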
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910852739.8A CN110851654A (en) | 2019-09-10 | 2019-09-10 | Industrial equipment fault detection and classification method based on tensor data dimension reduction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110851654A true CN110851654A (en) | 2020-02-28 |
Family
ID=69595481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910852739.8A Withdrawn CN110851654A (en) | 2019-09-10 | 2019-09-10 | Industrial equipment fault detection and classification method based on tensor data dimension reduction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110851654A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111235709A (en) * | 2020-03-18 | 2020-06-05 | 东华大学 | Online detection system for spun yarn evenness of ring spinning based on machine vision |
CN111428000A (en) * | 2020-03-20 | 2020-07-17 | 华泰证券股份有限公司 | Method, system and storage medium for quantizing unstructured text data |
CN112087443A (en) * | 2020-09-04 | 2020-12-15 | 浙江大学 | Intelligent detection method for sensing data abnormity under large-scale industrial sensing network information physical attack |
CN112087443B (en) * | 2020-09-04 | 2021-06-04 | 浙江大学 | Sensing data anomaly detection method under physical attack of industrial sensing network information |
CN112819176A (en) * | 2021-01-22 | 2021-05-18 | 烽火通信科技股份有限公司 | Data management method and data management device suitable for machine learning |
CN114742182A (en) * | 2022-06-15 | 2022-07-12 | 深圳市明珞锋科技有限责任公司 | Intelligent equipment output data information processing method and operation evaluation method |
WO2024141501A1 (en) | 2022-12-29 | 2024-07-04 | Thales | Method for reducing the size of a numerical representation of a computer log file and method for analysing such a file |
FR3144678A1 (en) | 2022-12-29 | 2024-07-05 | Thales | Method for reducing the dimension of a digital representation of a computer log file and method for analyzing such a file |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110851654A (en) | Industrial equipment fault detection and classification method based on tensor data dimension reduction | |
CN111079836B (en) | Process data fault classification method based on pseudo label method and weak supervised learning | |
CN109389171B (en) | Medical image classification method based on multi-granularity convolution noise reduction automatic encoder technology | |
CN105606914A (en) | IWO-ELM-based Aviation power converter fault diagnosis method | |
CN114048468A (en) | Intrusion detection method, intrusion detection model training method, device and medium | |
CN116680105A (en) | Time sequence abnormality detection method based on neighborhood information fusion attention mechanism | |
CN115510950A (en) | Aircraft telemetry data anomaly detection method and system based on time convolution network | |
CN113076545A (en) | Deep learning-based kernel fuzzy test sequence generation method | |
CN114330650A (en) | Small sample characteristic analysis method and device based on evolutionary element learning model training | |
CN115905848A (en) | Chemical process fault diagnosis method and system based on multi-model fusion | |
CN116007937A (en) | Intelligent fault diagnosis method and device for mechanical equipment transmission part | |
CN115809596A (en) | Digital twin fault diagnosis method and device | |
CN113824575A (en) | Method and device for identifying fault node, computing equipment and computer storage medium | |
CN117474529A (en) | Intelligent operation and maintenance system for power grid | |
CN117743933A (en) | Method and device for determining invalid alarm information, storage medium and electronic device | |
CN117236374A (en) | Layering interpretation method based on fully developed material graph neural network | |
CN115174421B (en) | Network fault prediction method and device based on self-supervision unwrapping hypergraph attention | |
CN116720095A (en) | Electrical characteristic signal clustering method for optimizing fuzzy C-means based on genetic algorithm | |
CN114330500B (en) | Online parallel diagnosis method and system for power grid power equipment based on storm platform | |
CN115408693A (en) | Malicious software detection method and system based on self-adaptive computing time strategy | |
CN114565051A (en) | Test method of product classification model based on neuron influence degree | |
CN115100599A (en) | Mask transform-based semi-supervised crowd scene abnormality detection method | |
CN113688989A (en) | Deep learning network acceleration method, device, equipment and storage medium | |
CN117010459B (en) | Method for automatically generating neural network based on modularization and serialization | |
CN118468197B (en) | Multichannel feature fusion vehicle networking abnormality detection method and system |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20200228 |