CN117171625B - Intelligent classification method and device for working conditions, electronic equipment and storage medium - Google Patents

Intelligent classification method and device for working conditions, electronic equipment and storage medium

Info

Publication number
CN117171625B
Authority
CN
China
Prior art keywords
data
sample
original
sample data
working condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311369467.9A
Other languages
Chinese (zh)
Other versions
CN117171625A (en)
Inventor
张勤勤
徐培
张永飞
杨尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunhe Enmo Beijing Information Technology Co ltd
Original Assignee
Yunhe Enmo Beijing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunhe Enmo Beijing Information Technology Co ltd filed Critical Yunhe Enmo Beijing Information Technology Co ltd
Priority to CN202311369467.9A priority Critical patent/CN117171625B/en
Publication of CN117171625A publication Critical patent/CN117171625A/en
Application granted granted Critical
Publication of CN117171625B publication Critical patent/CN117171625B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The embodiment of the application provides an intelligent classification method and device for working conditions, electronic equipment and storage media, and belongs to the technical field of artificial intelligence. The method comprises the following steps: obtaining original sample data, wherein the original sample data comprises first original sample data of abnormal working conditions and second original sample data of normal working conditions; inputting the original sample data into a preset original working condition classification model to classify working conditions, and obtaining a predicted working condition label; calculating according to a sample label corresponding to the original sample data and a predicted working condition label to obtain original gradient data; gradient update data are obtained through calculation according to the original sample data; updating the original gradient data according to the gradient update data to obtain target gradient update data; training the original working condition classification model to obtain a target working condition classification model; and classifying the working conditions of the target equipment according to the target working condition classification model. According to the embodiment of the application, the accuracy of classification and identification of the working conditions can be improved under the condition of unbalanced samples.

Description

Intelligent classification method and device for working conditions, electronic equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an intelligent classification method and device for working conditions, electronic equipment and a storage medium.
Background
Currently, in industrial internet of things scenarios, "negative sample" data only appears when a machine fails; in most cases, only "positive sample" data from normal machine operation is available. As a result, when the operating condition of a machine is judged by a classification model, sample imbalance may occur, which affects the classification accuracy of the model. Therefore, how to provide a working condition classification method under unbalanced samples, so as to improve the accuracy of the classification model in classifying and identifying working conditions, is a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the application mainly aims to provide an intelligent working condition classification method and device, electronic equipment and storage medium, and aims to improve accuracy of working condition classification and identification.
In order to achieve the above object, a first aspect of an embodiment of the present application provides an intelligent classification method for working conditions, where the method includes:
obtaining original sample data of target equipment, wherein the original sample data comprises first original sample data corresponding to abnormal working conditions of the target equipment and second original sample data corresponding to normal working conditions of the target equipment;
Inputting the original sample data into a preset original working condition classification model to perform working condition classification to obtain a predicted working condition label, wherein the predicted working condition label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
calculating to obtain original gradient data according to a sample label corresponding to the original sample data and the predicted working condition label, wherein the sample label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
gradient update data of the first original sample data are obtained through calculation according to the original sample data;
updating the original gradient data corresponding to the first original sample data according to the gradient update data to obtain target gradient update data;
training the original working condition classification model according to the target gradient update data, the first original sample data, the second original sample data and the original gradient data corresponding to the second original sample to obtain a target working condition classification model;
and classifying the working conditions of the target equipment according to the target working condition classification model.
In some embodiments, the gradient update data includes gradient adjustment data and gradient expansion data, the gradient update data of the first raw sample data calculated from the raw sample data includes:
Determining a degree of deviation of the first raw sample data;
calculating the gradient adjustment data according to the deviation degree and the first total sample size of the first original sample data;
and obtaining a second total sample size of the second original sample data, and calculating the gradient expansion data according to the second total sample size, the first total sample size, a preset sampling rate of the second original sample data and the gradient regulation data.
In some embodiments, the determining the degree of deviation of the first raw sample data includes:
acquiring a first sample vector of the first original sample data and acquiring a second sample vector of the second original sample data;
calculating a median eigenvector according to the first sample vector and the second sample vector;
calculating according to the median eigenvector, the first sample vector and the second sample vector to obtain a sample covariance matrix;
and calculating the deviation according to the sample covariance matrix.
In some embodiments, the obtaining the first sample vector of the first raw sample data includes:
carrying out box division processing on the first original sample data to obtain a target data interval corresponding to the first original sample data;
And obtaining the first sample vector according to the target data interval.
In some embodiments, the calculating the median eigenvector according to the first sample vector and the second sample vector includes:
obtaining a target total sample size according to the first total sample size and the second total sample size;
if the target total sample size is odd, calculating a first median index according to the target total sample size;
performing sequencing calculation on the first sample vector according to the first median index to obtain a first median vector;
performing sequencing calculation on the second sample vector according to the first median index to obtain a second median vector;
and obtaining the median characteristic vector according to the first median vector and the second median vector.
In some embodiments, the calculating the median eigenvector according to the first sample vector and the second sample vector further includes:
if the target total sample size is even, calculating to obtain the first median index and the second median index according to the target total sample size;
sorting and calculating the first sample vector according to the first median index and the second median index to obtain a third median vector;
Sorting and calculating the second sample vector according to the first median index and the second median index to obtain a fourth median vector;
and obtaining the median characteristic vector according to the third median vector and the fourth median vector.
In some embodiments, the obtaining raw sample data of the target device includes:
acquiring initial data of the target equipment, wherein the initial data comprises the first original sample data and initial sample data corresponding to normal working conditions of the target equipment;
inputting the initial sample data into a preset original working condition classification model to classify working conditions, and obtaining an initial gradient of the initial sample data;
sorting the initial sample data according to the initial gradient to obtain a sorting result;
and obtaining the second original sample data according to the sorting result, the first total sample quantity of the first original sample data and a preset sampling multiple.
To achieve the above object, a second aspect of the embodiments of the present application provides an intelligent working condition classification device, where the device includes:
the data acquisition module is used for acquiring original sample data of target equipment, wherein the original sample data comprise first original sample data corresponding to abnormal working conditions of the target equipment and second original sample data corresponding to normal working conditions of the target equipment;
The first working condition classification module is used for inputting the original sample data into a preset original working condition classification model to perform working condition classification to obtain a predicted working condition label, wherein the predicted working condition label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
the gradient calculation module is used for calculating to obtain original gradient data according to a sample label corresponding to the original sample data and the predicted working condition label, wherein the sample label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
the updating data calculation module is used for calculating gradient updating data of the first original sample data according to the original sample data;
the gradient self-adaptation module is used for updating the original gradient data corresponding to the first original sample data according to the gradient update data to obtain target gradient update data;
the model training module is used for training the original working condition classification model according to the target gradient update data, the first original sample data, the second original sample data and the original gradient data corresponding to the second original sample to obtain a target working condition classification model;
And the second working condition classification module is used for classifying the working conditions of the target equipment according to the target working condition classification model.
To achieve the above object, a third aspect of the embodiments of the present application proposes an electronic device, which includes a memory and a processor, the memory storing a computer program, the processor implementing the method according to the first aspect when executing the computer program.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method of the first aspect.
According to the working condition intelligent classification method and device, the electronic equipment and the storage medium, gradient update data of first original sample data are calculated through the first original sample data and second original sample data, and original gradient data corresponding to the first original sample data are updated according to the gradient update data, so that target gradient update data are obtained. Therefore, when the original working condition classification model is trained according to the original gradient data corresponding to the target gradient update data, the first original sample data, the second original sample data and the second original sample data, the obtained target working condition classification model can improve the learning capacity of the first original sample data, namely the learning capacity of few types of samples, and further the classification accuracy of the target working condition classification model can be improved under the condition of unbalanced samples.
Drawings
FIG. 1 is a flow chart of an intelligent classification method for working conditions according to an embodiment of the present application;
fig. 2 is a flowchart of step S101 in fig. 1;
fig. 3 is a flowchart of step S104 in fig. 1;
fig. 4 is a flowchart of step S301 in fig. 3;
fig. 5 is a flowchart of step S401 in fig. 4;
fig. 6 is a flowchart of step S402 in fig. 4;
FIG. 7 is a flowchart of another embodiment of step S402 in FIG. 4;
FIG. 8 is a schematic structural diagram of an intelligent working condition classification device according to an embodiment of the present application;
fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several terms referred to in this application are explained:
artificial intelligence (artificial intelligence, AI): is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding the intelligence of people; artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce a new intelligent machine that can react in a manner similar to human intelligence, research in this field including robotics, language recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information process of consciousness and thinking of people. Artificial intelligence is also a theory, method, technique, and application system that utilizes a digital computer or digital computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Currently, in industrial internet of things scenarios, the "negative sample" data will only appear when a machine device fails. That is, in most cases, only "positive sample" data is obtained by proper machine operation. Based on the above situation, when the machine operation condition is judged according to the classification model, the situation of sample imbalance may occur, thereby affecting the classification accuracy of the classification model.
In the related art, the method for solving the sample imbalance comprises the following two methods:
first, SMOTE (Synthetic Minority Over-sampling Technique) algorithm: the algorithm balances the data set by synthesizing new minority class samples, thereby solving the problem of sample imbalance. However, such algorithms may suffer from the problem of overfitting the generated samples, resulting in classification models that are too sensitive to a small number of classes of samples. Furthermore, the samples generated by such algorithms may differ significantly from the true samples.
Second, adaBoost (Adaptive Boosting) algorithm: the algorithm weights the misclassified samples by iteratively training the weak classifier, thereby improving the classification capability of the classifier (i.e., the classification model) on the special samples. However, this algorithm is relatively sensitive to noise and outliers, which can easily lead to overfitting. Furthermore, when there is a sample imbalance condition, the method of misclassification weighting may not help in the identification of minority class samples because of the high proportion of majority class samples.
Based on the above, the embodiment of the application provides a working condition intelligent classification method and device, electronic equipment and storage medium, aiming at improving the learning ability of a model to few types of samples under the condition of unbalanced samples, and further improving the classification accuracy of the model.
The embodiment of the application provides a working condition intelligent classification method and device, electronic equipment and storage medium, and specifically, the following embodiment is used for explaining, and first describes the working condition intelligent classification method in the embodiment of the application.
The embodiment of the application can acquire and process the related data based on the artificial intelligence technology. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiment of the application provides an intelligent classification method for working conditions, which relates to the technical field of artificial intelligence. The intelligent classification method for the working conditions can be applied to a terminal, a server side and software running in the terminal or the server side. In some embodiments, the terminal may be a smart phone, tablet, notebook, desktop, etc.; the server side can be configured as an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and a cloud server for providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligent platforms and the like; the software may be an application for implementing the intelligent classification method of the working conditions, etc., but is not limited to the above form.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Fig. 1 is an optional flowchart of a method for intelligently classifying working conditions according to an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S101 to S107.
Step S101, obtaining original sample data of target equipment, wherein the original sample data comprise first original sample data corresponding to abnormal working conditions of the target equipment and second original sample data corresponding to normal working conditions of the target equipment;
step S102, inputting original sample data into a preset original working condition classification model to perform working condition classification to obtain a predicted working condition label, wherein the predicted working condition label is used for indicating whether target equipment is in a normal working condition or in an abnormal working condition;
step S103, original gradient data is obtained through calculation according to sample labels corresponding to the original sample data and predicted working condition labels, wherein the sample labels are used for indicating whether target equipment is in a normal working condition or in an abnormal working condition;
step S104, calculating gradient update data of first original sample data according to the original sample data;
step S105, updating original gradient data corresponding to the first original sample data according to the gradient update data to obtain target gradient update data;
step S106, training an original working condition classification model according to the target gradient update data, the first original sample data, the second original sample data and the original gradient data corresponding to the second original sample to obtain a target working condition classification model;
And step S107, classifying the working conditions of the target equipment according to the target working condition classification model.
In step S101 to step S107 illustrated in the embodiment of the present application, gradient update data of first original sample data is calculated through the first original sample data and second original sample data, and original gradient data corresponding to the first original sample data is updated according to the gradient update data, so as to obtain target gradient update data. Therefore, when the original working condition classification model is trained according to the original gradient data corresponding to the target gradient update data, the first original sample data, the second original sample data and the second original sample data, the obtained target working condition classification model can improve the learning capacity of the first original sample data, namely the learning capacity of few types of samples, and further the classification accuracy of the target working condition classification model can be improved under the condition of unbalanced samples.
In step S101 of some embodiments, the target device refers to a device whose working conditions need to be classified. When the embodiments of the present application are applied to different fields, the target device may differ. For convenience of explanation, the embodiment of the present application takes the target device as a rocket, and the working conditions as those corresponding to a rocket launch task, as an example. The original sample data refers to data corresponding to the target device at runtime, for example data corresponding to a temperature sensor, data corresponding to a pressure sensor, data corresponding to a height sensor, and the like. An abnormal working condition refers to a working condition in which the target device is abnormal, for example a fault that prevents launch. A normal working condition refers to a working condition in which the target device is normal, for example a normal launch. The original sample data includes the first original sample data and the second original sample data. The first original sample data of the target device under abnormal working conditions and the second original sample data under normal working conditions can be obtained through the corresponding sensors. It will be appreciated that the target device usually operates under normal working conditions, so the first original sample data is typically far less numerous than the second original sample data; the first original sample data can therefore be treated as the minority-class samples and the second original sample data as the majority-class samples. Depending on the actual situation, however, the second original sample data may instead serve as the minority-class samples and the first original sample data as the majority-class samples, which is not particularly limited in this embodiment of the present application. For convenience of explanation, the embodiment of the present application takes the first original sample data as the minority-class samples and the second original sample data as the majority-class samples as an example. It should be understood that, when the second original sample data are used as the minority-class samples and the first original sample data as the majority-class samples, the corresponding adaptation of the embodiments of the present application shall also fall within the scope of protection of the present application.
Referring to fig. 2, in some embodiments, step S101 includes, but is not limited to including, step S201 through step S204.
Step S201, initial data of target equipment is obtained, wherein the initial data comprises first original sample data and initial sample data corresponding to normal working conditions of the target equipment;
step S202, inputting initial sample data into a preset original working condition classification model to perform working condition classification, and obtaining initial gradients of the initial sample data;
step S203, sorting the initial sample data according to the initial gradient to obtain a sorting result;
step S204, obtaining second original sample data according to the sorting result, the first total sample size of the first original sample data and the preset sampling multiple.
In step S201 of some embodiments, the initial data refers to data corresponding to the target device at runtime, and may be, for example, data corresponding to a temperature sensor, data corresponding to a pressure sensor, data corresponding to a height sensor, and so on. The initial data includes first original sample data and initial sample data. The first original sample data of the target equipment under the abnormal working condition can be obtained through the corresponding sensor, and the initial sample data corresponding to the target equipment under the normal working condition can be obtained through the corresponding sensor. It will be appreciated that the first raw sample data may be taken as minority class samples and the initial sample data as majority class samples.
In step S202 of some embodiments, the initial sample data is used as input data of an original working condition classification model, and working condition classification is performed on the initial sample data according to the original working condition classification model, so as to obtain an initial tag. The initial tag is used for indicating that the target equipment is in a normal working condition or in an abnormal working condition under the condition of initial sample data. And calculating to obtain the initial gradient of the initial sample data according to the initial label and the real label corresponding to the initial sample data.
In step S203 of some embodiments, the plurality of initial sample data is sorted from large to small or from small to large according to the initial gradient, and a sorting result is obtained.
In step S204 of some embodiments, the first total sample size refers to the total number of the first original sample data. The preset sampling multiple refers to a preset multiple used for sampling; for example, it may be set to 1.5 or another value. Sampling is carried out starting from the initial sample data with the highest initial gradient: 1.5 times the first total sample size is drawn, and 0.9 times that amount is obtained as the second original sample data. It can be understood that the preset sampling multiples 1.5 and 0.9 are configurable parameters and can be set adaptively according to the actual situation, where 1.5 is the critical value for a slight sample imbalance.
It will be appreciated that after each round of training, the gradients of the sample data are recalculated and reordered according to the gradients. Thus, the majority class samples that participate in the training for each round can be updated.
The benefit of steps S201 to S204 is that the minority class samples and the majority class samples can be sampled differently, i.e. in each round of training, all the minority class samples are sampled, and the majority class samples are sampled randomly after gradient sequencing, so that the imbalance of the sample data can be reduced.
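A minimal Python sketch of the gradient-ranked sampling of majority-class samples described in steps S201 to S204 is given below. The function name, the use of absolute gradient values, and the exact interplay of the 1.5 and 0.9 multiples are illustrative assumptions rather than the patent's precise sampling rule.

import numpy as np

def sample_majority_by_gradient(majority_gradients, first_total_sample_size,
                                sampling_multiple=1.5, keep_rate=0.9, rng=None):
    # Rank majority-class samples by gradient magnitude (largest first), take the
    # top sampling_multiple * first_total_sample_size candidates, then randomly keep
    # a keep_rate fraction of them as the second original sample data for this round.
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(-np.abs(majority_gradients))
    top_k = min(int(sampling_multiple * first_total_sample_size), order.size)
    candidates = order[:top_k]
    keep_n = max(1, int(keep_rate * candidates.size))
    return rng.choice(candidates, size=keep_n, replace=False)

After each training round the gradients are recomputed and the ranking is refreshed, so the subset of majority-class samples participating in training changes from round to round, as noted above.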
In step S102 of some embodiments, the raw condition classification model is a model with the capability of classifying conditions based on data. And taking the original sample data as input data of an original working condition classification model, namely respectively taking the first original sample data and the second original sample data as input data of the original working condition classification model, so as to identify working conditions corresponding to the first original sample data (or the second original sample data) through the original working condition classification model, and obtaining a prediction working condition label. It is understood that the predicted operating condition labels include labels predicted from the first raw sample data and labels predicted from the second raw sample data. The predicted condition label is used for indicating that the target device is in a normal condition or in an abnormal condition under the condition of the first original sample data (or the second original sample data).
In step S103 of some embodiments, the sample tag refers to a real working condition corresponding to the target device in the case of the original sample data (i.e., the first original sample data or the second original sample data), so the sample tag may also indicate that the target device is in a normal working condition or in an abnormal working condition. And calculating according to the sample label, the predicted working condition label and a preset loss function to obtain a predicted loss value. And calculating the original gradient data of the corresponding first original sample data (or the second original sample data) according to the predicted loss value.
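The patent does not name the preset loss function; the sketch below assumes a binary cross-entropy loss on a sigmoid output purely to illustrate how original gradient data can be derived from the sample label and the predicted working condition label in step S103.

import numpy as np

def original_gradient_data(predicted_probability, sample_label):
    # Per-sample gradient (and second-order term) of binary cross-entropy with
    # respect to the raw model output; this plays the role of the original
    # gradient data under the stated assumption.
    p = np.clip(np.asarray(predicted_probability, dtype=float), 1e-7, 1 - 1e-7)
    y = np.asarray(sample_label, dtype=float)
    grad = p - y
    hess = p * (1.0 - p)
    return grad, hess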
In step S104 of some embodiments, the gradient update data refers to data for updating original gradient data corresponding to the first original sample data, i.e., the gradient update data may be regarded as the weight of the original gradient data corresponding to the first original sample data. Gradient update data may be calculated from the first raw sample data and the second raw sample data. In this way, the gradient update data may reflect the distribution of the first raw sample data and the second raw sample data.
Referring to fig. 3, in some embodiments, the gradient update data includes gradient adjustment data and gradient expansion data, and step S104 includes, but is not limited to including, step S301 through step S303.
Step S301, determining the deviation degree of the first original sample data;
step S302, gradient adjustment data are obtained according to the deviation degree and the first total sample amount of the first original sample data;
step S303, a second total sample size of the second original sample data is obtained, and gradient expansion data is obtained by calculation according to the second total sample size, the first total sample size, a preset sampling rate of the second original sample data and gradient adjustment data.
In step S301 of some embodiments, the degree of deviation refers to the degree of dispersion or degree of abnormality of the first original sample data. The degree of deviation may be calculated from data having statistical properties such as median, covariance, etc. of the plurality of first raw sample data.
Referring to fig. 4, in some embodiments, step S301 includes, but is not limited to including, step S401 through step S404.
Step S401, obtaining a first sample vector of first original sample data and obtaining a second sample vector of second original sample data;
step S402, calculating a median eigenvector according to the first sample vector and the second sample vector;
step S403, calculating to obtain a sample covariance matrix according to the median eigenvector, the first sample vector and the second sample vector;
Step S404, calculating the deviation according to the sample covariance matrix.
In step S401 of some embodiments, a first sample vector of the first original sample data is obtained according to the data form of the first original sample data, and a second sample vector of the second original sample data is obtained according to the data form of the second original sample data. The data form describes whether the first original sample data (or the second original sample data) is discrete data or continuous data. When the first original sample data (or the second original sample data) is continuous data, the first sample vector (or the second sample vector) is obtained by a binning (split-box) discretization method. When the first original sample data (or the second original sample data) is discrete data, the first sample vector (or the second sample vector) is obtained by means of one-hot encoding or the like. From the first sample vector and the second sample vector, a vector set Z = {z_1, ..., z_N} can be obtained, where z_k denotes the sample vector with index k (i.e., either a first sample vector or a second sample vector). It will be appreciated that obtaining the first and second sample vectors by the binning discretization method can reduce, to some extent, the sensitivity of the model to noise in the samples.
Referring to fig. 5, in some embodiments, taking the first original sample data as continuous data as an example, the "obtaining the first sample vector of the first original sample data" in step S401 includes, but is not limited to, steps S501 to S502.
Step S501, carrying out box division processing on first original sample data to obtain a target data interval corresponding to the first original sample data;
step S502, a first sample vector is obtained according to the target data interval.
In step S501 of some embodiments, the data range of the plurality of first original sample data is determined, and binning, such as equidistant binning, equal-frequency binning, or cluster binning, is performed according to the data range to obtain a plurality of original data intervals. The original data interval into which a given first original sample data falls is taken as its target data interval. Taking the first original sample data as data corresponding to the temperature sensor as an example, assume that the data range determined from a plurality of temperature data is [35, 46]. Two original data intervals [35, 40] and [41, 45] can be obtained from this data range. If a certain first original sample data is 35.5, the original data interval [35, 40] can be taken as the target data interval of that first original sample data.
In some embodiments, in step S502, an index may be assigned to each original data interval, e.g., the index of original data interval [35,40] is 0, and the index of original data interval [41,45] is 1. The first sample vector may be an index corresponding to the target data interval.
It can be understood that the second sample vector is obtained in a similar manner to the first sample vector, which is not described in detail in this embodiment of the present application.
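The following Python sketch illustrates equidistant binning of one continuous feature into interval indices, in the spirit of steps S501 to S502; the function name and the fixed number of bins are assumptions made for the example.

import numpy as np

def equidistant_bin_indices(values, n_bins=2):
    # Split the observed data range into n_bins equal-width intervals and map each
    # value to the 0-based index of the interval it falls into, e.g. interval
    # [35, 40] -> index 0 and interval [41, 45] -> index 1 in the temperature example.
    values = np.asarray(values, dtype=float)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    return np.digitize(values, edges[1:-1], right=True)

temperatures = np.array([35.5, 38.0, 41.2, 45.9])
print(equidistant_bin_indices(temperatures, n_bins=2))   # -> [0 0 1 1]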
In step S402 of some embodiments, a median of the first sample vector is determined, and a median of the second sample vector is determined. The median feature vector is used to represent a set of the two medians.
Referring to fig. 6, in some embodiments, step S402 includes, but is not limited to including, step S601 through step S605.
Step S601, obtaining a target total sample size according to the first total sample size and the second total sample size;
step S602, if the target total sample size is odd, calculating a first median index according to the target total sample size;
step S603, performing sorting calculation on the first sample vector according to the first median index to obtain a first median vector;
step S604, performing sorting calculation on the second sample vector according to the first median index to obtain a second median vector;
Step S605, a median feature vector is obtained according to the first median vector and the second median vector.
In step S601 of some embodiments, the target total sample size is used to represent the total data of the first original sample data and the second original sample data, so the target total sample size can be obtained by summing the first total sample size and the second total sample size.
In step S602 of some embodiments, the first median index is used to represent the position index of the median over all sample data. When the target total sample size N is odd, the first median index is q1 = (N − 1)/2 (using 0-based position indexing).
For ease of understanding, the position index is illustrated by the following example. Taking all sample data as [2, 4, 6, 8, ...] as an example, the position index of the sample value 2 is 0, and the position index of the sample value 6 is 2.
In steps S603 to S604 of some embodiments, taking the first sample vector as an example, in the case where the target total sample size N is odd, the first median vector of the first sample vector may be calculated from the first median index and the following formula (1):

m1 = sort(X1)[(N − 1)/2]   (1)

where X1 denotes the first sample vector, N denotes the target total sample size, and sort(·) denotes the ranking function.
It is understood that when the target total sample size N is an odd number, the second median vector calculating method of the second sample vector is similar to the calculation of the first median vector in the formula (1), and will not be repeated in this embodiment of the present application.
In step S605 of some embodiments, the first median vector and the second median vector are aggregated according to their corresponding position indexes to obtain a set of medians, i.e., the median feature vector, each element of which is either the first median vector or the second median vector.
Referring to fig. 7, in some embodiments, step S402 further includes, but is not limited to including, step S701 through step S704.
Step S701, if the target total sample size is even, calculating to obtain a first median index and a second median index according to the target total sample size;
step S702, sorting and calculating the first sample vector according to the first median index and the second median index to obtain a third median vector;
step S703, performing sorting calculation on the second sample vector according to the first median index and the second median index to obtain a fourth median vector;
in step S704, a median feature vector is obtained according to the third median vector and the fourth median vector.
In step S701 of some embodiments, the method of determining the first median index from the target total sample size is similar to step S602 and is not described in detail in this embodiment of the present application. When the target total sample size N is even, the first median index is q1 = N/2 − 1 and the second median index is q2 = N/2 (again using 0-based position indexing).
In steps S702 to S703 of some embodiments, taking the first sample vector as an example, in the case where the target total sample size N is even, the third median vector of the first sample vector may be calculated from the first median index, the second median index, and the following formula (2):

m3 = ( sort(X1)[N/2 − 1] + sort(X1)[N/2] ) / 2   (2)

where X1 denotes the first sample vector and sort(·) denotes the ranking function.
It is understood that when the target total sample size N is even, the fourth median vector calculation method of the second sample vector is similar to the calculation of the third median vector in the formula (2), and will not be described in detail in this embodiment of the present application.
In step S704 of some embodiments, the third median vector and the fourth median vector are aggregated according to their corresponding position indexes to obtain a set of medians (i.e., the median feature vector), each element of which is either the third median vector or the fourth median vector.
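A compact Python sketch of the median computation in steps S601 to S605 and S701 to S704 follows. It uses 0-based position indexing; deriving the index from each vector's own length (rather than from the combined target total sample size, as the text literally states) is a simplifying assumption.

import numpy as np

def median_of_sample_vector(sample_vector):
    # Sort the vector (the ranking function), then take the middle element for an
    # odd length, or the average of the two middle elements for an even length.
    ranked = np.sort(np.asarray(sample_vector, dtype=float))
    n = ranked.size
    if n % 2 == 1:
        return ranked[(n - 1) // 2]
    return 0.5 * (ranked[n // 2 - 1] + ranked[n // 2])

# The median feature vector then collects the median of the first sample vector
# and the median of the second sample vector, for example:
# median_feature_vector = np.array([median_of_sample_vector(x1), median_of_sample_vector(x2)])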
In step S403 of some embodiments, a sample covariance matrix S is calculated from the median eigenvector, the first sample vector, and the second sample vector according to the following formula (3):

S = E[ (z_k − M)(z_k − M)^T ]   (3)

where z_k denotes a sample vector from the vector set, M denotes the median eigenvector, and E[·] denotes the expectation calculation.
In step S404 of some embodiments, the degree of deviation of the first original sample data is calculated from the sample covariance matrix, the median eigenvector, and the following formula (4):

d_k = sqrt( (v_k − M)^T S^(−1) (v_k − M) )   (4)

where d_k denotes the degree of deviation of the k-th first original sample data, and v_k denotes the feature vector obtained by discretizing the k-th minority-class sample.
It will be appreciated that the first sample vector and the feature vector v_k are different. For example, when the first original sample data is represented in table form, the data collected at the same time (data corresponding to the temperature sensor, the pressure sensor, the height sensor, and so on) constitute one sample, i.e., one row of the table, while the temperature-sensor data, pressure-sensor data, and height-sensor data occupy different columns. The sample vector is obtained from one column of data (for example, the data corresponding to the temperature sensor), whereas the feature vector v_k is obtained from one row of data (for example, the temperature, pressure, and height readings of one sample).
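The sketch below shows one way to realize steps S403 and S404 in Python: a covariance matrix centred on the per-feature median, followed by a Mahalanobis-type distance as the degree of deviation, consistent with formulas (3) and (4) as reconstructed above. The per-feature median shortcut and the pseudo-inverse are assumptions made for the example.

import numpy as np

def deviation_degrees(minority_feature_vectors, majority_feature_vectors):
    # Rows are samples, columns are discretized features.
    all_vectors = np.vstack([minority_feature_vectors, majority_feature_vectors])
    median_eigenvector = np.median(all_vectors, axis=0)
    centred = all_vectors - median_eigenvector
    covariance = centred.T @ centred / (len(all_vectors) - 1)      # sample covariance, cf. formula (3)
    covariance_inv = np.linalg.pinv(covariance)                    # pseudo-inverse for numerical robustness
    diffs = np.asarray(minority_feature_vectors) - median_eigenvector
    # One deviation degree per minority-class sample, cf. formula (4)
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, covariance_inv, diffs))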
In step S302 of some embodiments, the gradient adjustment data refers to data used to adjust the original gradient data, and is calculated according to formula (5) from the first total sample size, the degree of deviation of the k-th first original sample data, and the sum of the deviations of all the first original sample data. As can be seen from formula (5), the gradient adjustment data of a first original sample data is inversely proportional to its degree of deviation, i.e., the larger the degree of deviation, the smaller the gradient adjustment data.
In step S303 of some embodiments, the second total sample size refers to the total amount of the second original sample data. The gradient expansion data of the k-th first original sample data is calculated according to formula (6) from the second total sample size, the first total sample size, the preset sampling rate of the second original sample data, and the gradient adjustment data. In formula (6), a denotes the ratio of the minority-class samples to the total number of samples and can be calculated from the first total sample size and the second total sample size, while b denotes the sampling rate of the second original sample data, whose specific value can be set adaptively according to actual needs and is not particularly limited in this embodiment of the present application. The values of a and b each lie between 0 and 1, and a and b jointly satisfy a constraint on their sum a + b.
The benefit of step S301 to step S303 is that the gradient adjustment data and gradient expansion data of the minority sample (i.e. the first original sample data) are obtained by calculation, so that the learning efficiency of the model to the minority sample in the subsequent model training can be improved, and the problem of low classification accuracy caused by sample imbalance is reduced.
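Because formulas (5) and (6) are only characterised in the text (gradient adjustment inversely proportional to the deviation; gradient expansion depending on the minority ratio and the majority sampling rate), the concrete expressions in the Python sketch below are assumptions in the spirit of GOSS-style amplification rather than the patent's exact formulas.

import numpy as np

def gradient_update_data(deviations, first_total_sample_size, second_total_sample_size, sampling_rate_b):
    deviations = np.asarray(deviations, dtype=float)
    # Assumed form of (5): larger deviation -> smaller adjustment, normalised over the minority samples.
    adjustment = (deviations.sum() / np.maximum(deviations, 1e-12)) / first_total_sample_size
    # Assumed form of (6): rescale for the under-sampled majority class, where a is the minority ratio.
    a = first_total_sample_size / (first_total_sample_size + second_total_sample_size)
    expansion = ((1.0 - a) / sampling_rate_b) * adjustment
    return adjustment, expansion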
In step S105 of some embodiments, the original gradient data corresponding to the first original sample data is updated according to the gradient update data; that is, the gradient update data can be regarded as the weight of the first original sample data, so that the gradient distribution of the first original sample data is updated and the target gradient update data is obtained. It can be understood that the gradient update data adaptively adjusts the original gradient data corresponding to the first original sample data, i.e., updates the gradient distribution of the first original sample data, thereby increasing the weight of the first original sample data in the training process of the original working condition classification model. This improves the learning ability of the original working condition classification model with respect to the first original sample data, i.e., the minority-class samples, and improves the classification accuracy of the model under the condition of unbalanced samples.
In step S106 of some embodiments, training the original condition classification model according to the target gradient update data and the first original sample data, and training the original condition classification model according to the second original sample data and the original gradient data corresponding to the second original sample data to obtain the target condition classification model.
In step S107 of some embodiments, in actual situations, real data of the target working condition (such as data corresponding to a temperature sensor, data corresponding to a pressure sensor, data corresponding to a height sensor, etc.) may be obtained, and the real data is used as input data of a target working condition classification model to predict the working condition of the target device.
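One way to realize steps S105 to S107 on top of an LGBM-style learner is a custom objective whose per-sample gradients are rescaled by the target gradient update data (weight 1.0 for majority-class samples). The Python sketch below shows how this could be wired up with the LightGBM library; it is an illustrative approximation, not the patent's own reconstruction of the LGBM algorithm.

import numpy as np
import lightgbm as lgb

def make_weighted_objective(gradient_weights):
    # Binary log-loss whose gradients and Hessians are multiplied by precomputed
    # per-sample weights (the target gradient update data for minority samples).
    def objective(raw_predictions, train_data):
        y = train_data.get_label()
        p = 1.0 / (1.0 + np.exp(-raw_predictions))
        grad = (p - y) * gradient_weights
        hess = p * (1.0 - p) * gradient_weights
        return grad, hess
    return objective

# train_set = lgb.Dataset(X_train, label=y_train)
# Recent LightGBM versions accept a callable objective in the params dict
# (older versions pass it via the fobj argument of lgb.train):
# booster = lgb.train({"objective": make_weighted_objective(weights)}, train_set, num_boost_round=100)
# condition_scores = booster.predict(X_test)   # step S107: classify the target device's working conditions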
With reference to the descriptions of the above embodiments, the working condition intelligent classification method provided by the embodiment of the application has the following advantages:
(1) Compared with the SMOTE algorithm in the related art, the embodiment of the application does not need to generate new sample data, but fully utilizes the data corresponding to the known few types of samples in a differential sampling mode. In this way, the complexity of the embodiments of the present application is low, and the risk of sample distortion can be avoided to some extent.
(2) Compared with the AdaBoost algorithm in the related art, the embodiment of the application does not focus on weight amplification of misclassified samples, but increases the learning efficiency of the model on minority-class samples through differential sampling. In addition, the method provided by the embodiment of the application is a reconstruction of the LGBM (Light Gradient Boosting Machine) algorithm: it adopts the binning processing of the LGBM algorithm to smooth the influence of extreme data features, and at the same time alleviates the problem of overfitting on minority-class samples through the adaptive gradient expansion method. In this way, the embodiment of the application reduces the sensitivity of the classification model to noise and outliers and alleviates overfitting in sample-imbalance scenarios.
Referring to fig. 8, an embodiment of the present application further provides an intelligent working condition classification device, which may implement the intelligent working condition classification method, where the device includes:
the data obtaining module 801 is configured to obtain original sample data of a target device, where the original sample data includes first original sample data corresponding to an abnormal working condition of the target device and second original sample data corresponding to a normal working condition of the target device;
the first working condition classification module 802 is configured to input the original sample data to a preset original working condition classification model to perform working condition classification, so as to obtain a predicted working condition label, where the predicted working condition label is used to indicate that the target device is in a normal working condition or in an abnormal working condition;
The gradient calculation module 803 is configured to calculate, according to a sample tag corresponding to the original sample data and a predicted working condition tag, to obtain the original gradient data, where the sample tag is used to indicate that the target device is in a normal working condition or in an abnormal working condition;
an update data calculation module 804, configured to calculate gradient update data of the first original sample data according to the original sample data;
the gradient self-adaptation module 805 is configured to update original gradient data corresponding to the first original sample data according to gradient update data, so as to obtain target gradient update data;
the model training module 806 is configured to train the original working condition classification model according to the target gradient update data, the first original sample data, the second original sample data, and the original gradient data corresponding to the second original sample, to obtain a target working condition classification model;
and the second working condition classification module 807 is configured to classify the working condition of the target device according to the target working condition classification model.
The specific implementation of the working condition intelligent classification device is basically the same as the specific embodiment of the working condition intelligent classification method, and is not repeated here.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the intelligent classification method of the working conditions when executing the computer program. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 901 may be implemented by a general purpose CPU (central processing unit), a microprocessor, an application specific integrated circuit (ApplicationSpecificIntegratedCircuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solutions provided by the embodiments of the present application;
the memory 902 may be implemented in the form of read-only memory (ReadOnlyMemory, ROM), static storage, dynamic storage, or random access memory (RandomAccessMemory, RAM). The memory 902 may store an operating system and other application programs, and when the technical solutions provided in the embodiments of the present disclosure are implemented by software or firmware, relevant program codes are stored in the memory 902, and the processor 901 invokes the working condition classification method for executing the embodiments of the present disclosure;
an input/output interface 903 for inputting and outputting information;
the communication interface 904 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
A bus 905 that transfers information between the various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively coupled to each other within the device via a bus 905.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the intelligent classification method of the working conditions when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not constitute limitations of the embodiments of the present application, and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The apparatus embodiments described above are merely illustrative; the units illustrated as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" or similar expressions refer to any combination of these items, including any combination of a single item or a plurality of items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may be singular or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes multiple instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
Preferred embodiments of the present application are described above with reference to the accompanying drawings, which does not thereby limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (8)

1. An intelligent classification method for working conditions is characterized by comprising the following steps:
obtaining original sample data of target equipment, wherein the original sample data comprises first original sample data corresponding to abnormal working conditions of the target equipment and second original sample data corresponding to normal working conditions of the target equipment;
inputting the original sample data into a preset original working condition classification model to perform working condition classification to obtain a predicted working condition label, wherein the predicted working condition label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
calculating to obtain original gradient data according to a sample label corresponding to the original sample data and the predicted working condition label, wherein the sample label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
calculating gradient update data of the first original sample data according to the original sample data, wherein the gradient update data includes gradient adjustment data and gradient expansion data;
updating the original gradient data corresponding to the first original sample data according to the gradient update data to obtain target gradient update data;
training the original working condition classification model according to the target gradient update data, the first original sample data, the second original sample data and the original gradient data corresponding to the second original sample data, to obtain a target working condition classification model;
classifying the working conditions of the target equipment according to the target working condition classification model;
the calculating the gradient update data of the first original sample data according to the original sample data includes:
determining a degree of deviation of the first original sample data;
calculating the gradient adjustment data according to the degree of deviation and a first total sample size of the first original sample data;
obtaining a second total sample size of the second original sample data, and calculating the gradient expansion data according to the second total sample size, the first total sample size, a preset sampling rate of the second original sample data and the gradient adjustment data;
the determining the degree of deviation of the first original sample data includes:
acquiring a first sample vector of the first original sample data and acquiring a second sample vector of the second original sample data;
calculating a median eigenvector according to the first sample vector and the second sample vector;
calculating a sample covariance matrix according to the median eigenvector, the first sample vector and the second sample vector;
and calculating the degree of deviation according to the sample covariance matrix.
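
For illustration only, the deviation-degree and gradient-update calculations of claim 1 can be sketched in Python as below. The per-feature median, the Mahalanobis-style distance and the weighting expressions are assumptions, since the claim names the inputs of each calculation but not closed-form formulas.

import numpy as np

def deviation_degree(first_vectors, second_vectors):
    # first_vectors:  (n1, d) feature vectors of the abnormal-condition samples
    # second_vectors: (n2, d) feature vectors of the normal-condition samples
    first_vectors = np.asarray(first_vectors, dtype=float)
    second_vectors = np.asarray(second_vectors, dtype=float)
    pooled = np.vstack([first_vectors, second_vectors])
    median_vec = np.median(pooled, axis=0)           # median eigenvector as the per-feature median (assumption)
    centered = pooled - median_vec
    cov = centered.T @ centered / (len(pooled) - 1)  # sample covariance taken around the median vector
    inv_cov = np.linalg.pinv(cov)
    diff = first_vectors - median_vec
    # Mahalanobis-style degree of deviation of each abnormal sample (assumption)
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

def gradient_update_data(deviation, n1, n2, sampling_rate):
    # Illustrative expressions only; n1 and n2 are the first and second total sample sizes.
    adjustment = deviation / n1                                  # gradient adjustment data
    expansion = adjustment * (n2 * (1.0 - sampling_rate)) / n1   # gradient expansion data
    return adjustment, expansion
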
2. The method of claim 1, wherein the obtaining the first sample vector of the first original sample data comprises:
carrying out box division processing on the first original sample data to obtain a target data interval corresponding to the first original sample data;
and obtaining the first sample vector according to the target data interval.
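
A minimal sketch of the box-division processing in claim 2, assuming equal-width bins and midpoint representatives; the claim does not fix the binning scheme or the bin count.

import numpy as np

def binned_sample_vector(first_original, n_bins=10):
    # first_original: (n1, d) first original sample data; n_bins is an assumed value
    first_original = np.asarray(first_original, dtype=float)
    binned = np.empty_like(first_original)
    for j in range(first_original.shape[1]):
        column = first_original[:, j]
        # Equal-width target data intervals for this feature (assumption)
        edges = np.linspace(column.min(), column.max(), n_bins + 1)
        idx = np.clip(np.digitize(column, edges[1:-1]), 0, n_bins - 1)
        # Represent each value by the midpoint of its target data interval
        binned[:, j] = (edges[idx] + edges[idx + 1]) / 2.0
    return binned
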
3. The method of claim 1, wherein the calculating a median eigenvector according to the first sample vector and the second sample vector comprises:
obtaining a target total sample size according to the first total sample size and the second total sample size;
if the target total sample size is odd, calculating a first median index according to the target total sample size;
performing a sorting calculation on the first sample vector according to the first median index to obtain a first median vector;
performing a sorting calculation on the second sample vector according to the first median index to obtain a second median vector;
and obtaining the median eigenvector according to the first median vector and the second median vector.
4. The method according to claim 3, wherein the calculating a median eigenvector according to the first sample vector and the second sample vector further comprises:
if the target total sample size is even, calculating the first median index and a second median index according to the target total sample size;
performing a sorting calculation on the first sample vector according to the first median index and the second median index to obtain a third median vector;
performing a sorting calculation on the second sample vector according to the first median index and the second median index to obtain a fourth median vector;
and obtaining the median eigenvector according to the third median vector and the fourth median vector.
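
The odd/even median-index logic of claims 3 and 4 can be sketched as follows. How the index derived from the target total sample size is applied inside each sorted class vector, and how the two per-class median vectors are merged into the median eigenvector, are assumptions (clipping and simple averaging below).

import numpy as np

def median_eigenvector(first_vectors, second_vectors):
    first_vectors = np.asarray(first_vectors, dtype=float)
    second_vectors = np.asarray(second_vectors, dtype=float)
    n_total = len(first_vectors) + len(second_vectors)    # target total sample size

    def class_median(vectors):
        ordered = np.sort(vectors, axis=0)                # per-feature sorting calculation
        last = len(vectors) - 1
        if n_total % 2 == 1:
            i = min((n_total - 1) // 2, last)             # first median index (0-based, clipped: assumption)
            return ordered[i]
        i1 = min(n_total // 2 - 1, last)                  # first median index
        i2 = min(n_total // 2, last)                      # second median index
        return (ordered[i1] + ordered[i2]) / 2.0

    # Merge the two per-class median vectors (simple averaging is an assumption)
    return (class_median(first_vectors) + class_median(second_vectors)) / 2.0
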
5. The method of claim 1, wherein the obtaining raw sample data for the target device comprises:
acquiring initial data of the target equipment, wherein the initial data comprises the first original sample data and initial sample data corresponding to normal working conditions of the target equipment;
inputting the initial sample data into a preset original working condition classification model to perform working condition classification, and obtaining an initial gradient of the initial sample data;
sorting the initial sample data according to the initial gradient to obtain a sorting result;
and obtaining the second original sample data according to the sorting result, the first total sample quantity of the first original sample data and a preset sampling multiple.
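
A sketch of the gradient-sorted sampling in claim 5, in the spirit of gradient-based one-side sampling; the descending sort on absolute gradient and the example sampling multiple are assumptions.

import numpy as np

def sample_second_original_data(initial_samples, initial_gradients, n_first, sampling_multiple=5):
    # initial_samples:   (n, d) normal-condition initial sample data
    # initial_gradients: (n,)   initial gradients from the original working condition classification model
    # n_first:           first total sample quantity (abnormal-condition samples)
    initial_samples = np.asarray(initial_samples, dtype=float)
    gradients = np.asarray(initial_gradients, dtype=float)
    order = np.argsort(-np.abs(gradients))                    # hardest normal samples first (assumption)
    keep = min(len(initial_samples), sampling_multiple * n_first)
    return initial_samples[order[:keep]]                      # second original sample data
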
6. A working condition classification device, the device comprising:
the data acquisition module is used for acquiring original sample data of target equipment, wherein the original sample data comprise first original sample data corresponding to abnormal working conditions of the target equipment and second original sample data corresponding to normal working conditions of the target equipment;
the first working condition intelligent classification module is used for inputting the original sample data into a preset original working condition classification model to perform working condition classification to obtain a predicted working condition label, wherein the predicted working condition label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
the gradient calculation module is used for calculating to obtain original gradient data according to a sample label corresponding to the original sample data and the predicted working condition label, wherein the sample label is used for indicating whether the target equipment is in a normal working condition or in an abnormal working condition;
the update data calculation module is used for calculating gradient update data of the first original sample data according to the original sample data, wherein the gradient update data includes gradient adjustment data and gradient expansion data; the calculating the gradient update data of the first original sample data according to the original sample data includes: determining a degree of deviation of the first original sample data; calculating the gradient adjustment data according to the degree of deviation and a first total sample size of the first original sample data; obtaining a second total sample size of the second original sample data, and calculating the gradient expansion data according to the second total sample size, the first total sample size, a preset sampling rate of the second original sample data and the gradient adjustment data; the determining the degree of deviation of the first original sample data includes: acquiring a first sample vector of the first original sample data and acquiring a second sample vector of the second original sample data; calculating a median eigenvector according to the first sample vector and the second sample vector; calculating a sample covariance matrix according to the median eigenvector, the first sample vector and the second sample vector; and calculating the degree of deviation according to the sample covariance matrix;
the gradient self-adaptation module is used for updating the original gradient data corresponding to the first original sample data according to the gradient update data to obtain target gradient update data;
the model training module is used for training the original working condition classification model according to the target gradient update data, the first original sample data, the second original sample data and the original gradient data corresponding to the second original sample data, to obtain a target working condition classification model;
and the second working condition classification module is used for classifying the working conditions of the target equipment according to the target working condition classification model.
7. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the method of any one of claims 1 to 5 when executing the computer program.
8. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 5.
CN202311369467.9A 2023-10-23 2023-10-23 Intelligent classification method and device for working conditions, electronic equipment and storage medium Active CN117171625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311369467.9A CN117171625B (en) 2023-10-23 2023-10-23 Intelligent classification method and device for working conditions, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311369467.9A CN117171625B (en) 2023-10-23 2023-10-23 Intelligent classification method and device for working conditions, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117171625A (en) 2023-12-05
CN117171625B (en) 2024-02-06

Family

ID=88930038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311369467.9A Active CN117171625B (en) 2023-10-23 2023-10-23 Intelligent classification method and device for working conditions, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117171625B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111397896A (en) * 2020-03-08 2020-07-10 华中科技大学 Fault diagnosis method and system for rotary machine and storage medium
CN113610350A (en) * 2021-07-08 2021-11-05 中南民族大学 Complex working condition fault diagnosis method, equipment, storage medium and device
CN113834656A (en) * 2021-08-27 2021-12-24 西安电子科技大学 Bearing fault diagnosis method, system, equipment and terminal
CN114637847A (en) * 2022-03-15 2022-06-17 平安科技(深圳)有限公司 Model training method, text classification method and device, equipment and medium
CN114925728A (en) * 2022-05-24 2022-08-19 武汉工程大学 Rolling bearing fault diagnosis method, rolling bearing fault diagnosis device, electronic device and storage medium
CN116738338A (en) * 2023-05-31 2023-09-12 华南理工大学 Small sample fault diagnosis method based on multi-scale integrated LightGBM

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11720962B2 (en) * 2020-11-24 2023-08-08 Zestfinance, Inc. Systems and methods for generating gradient-boosted models with improved fairness

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Improved LightGBM Algorithm for Online Fault Detection of Wind Turbine Gearboxes; Mingzhu Tang et al.; Energies; pp. 1-16 *
Ensemble Learning VI - LightGBM; Semeron; Zhihu; https://zhuanlan.zhihu.com/p/627313748; pp. 1-25 *

Also Published As

Publication number Publication date
CN117171625A (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN106611052B (en) The determination method and device of text label
US20190034497A1 (en) Data2Data: Deep Learning for Time Series Representation and Retrieval
Wang et al. Efficient learning by directed acyclic graph for resource constrained prediction
US20200311613A1 (en) Connecting machine learning methods through trainable tensor transformers
CN114330713B (en) Convolutional neural network model pruning method and device, electronic equipment and storage medium
CN110377587B (en) Migration data determination method, device, equipment and medium based on machine learning
US11416717B2 (en) Classification model building apparatus and classification model building method thereof
CN113095370A (en) Image recognition method and device, electronic equipment and storage medium
CN110708285A (en) Flow monitoring method, device, medium and electronic equipment
EP3940600A1 (en) Method and apparatus with neural network operation processing background
CN117171625B (en) Intelligent classification method and device for working conditions, electronic equipment and storage medium
WO2022162427A1 (en) Annotation-efficient image anomaly detection
CN112784102A (en) Video retrieval method and device and electronic equipment
CN112541530A (en) Data preprocessing method and device for clustering model
CN116346640A (en) Network index prediction method and device, electronic equipment and storage medium
CN105205487B (en) A kind of image processing method and device
CN111260074A (en) Method for determining hyper-parameters, related device, equipment and storage medium
US20220083843A1 (en) System and method for balancing sparsity in weights for accelerating deep neural networks
CN116049536A (en) Recommendation method and related device
US11289202B2 (en) Method and system to improve clinical workflow
CN116664958B (en) Image classification method based on binary neural network model and related equipment
CN117150326B (en) New energy node output power prediction method, device, equipment and storage medium
CN114581252B (en) Target case prediction method and device, electronic equipment and storage medium
JP2020047219A (en) Device and method for processing information, program, and storage medium
US20220092425A1 (en) System and method for pruning filters in deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant