CN118032327A - Equipment intelligent lubrication monitoring method and device based on artificial intelligence - Google Patents


Info

Publication number
CN118032327A
Authority
CN
China
Prior art keywords
data
sample set
lubrication
feature
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410445565.4A
Other languages
Chinese (zh)
Inventor
马兵
尹旭
王玉石
朱运恒
蔡钧泽
韩明宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Energy Shuzhiyun Technology Co ltd
Original Assignee
Shandong Energy Shuzhiyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Energy Shuzhiyun Technology Co ltd filed Critical Shandong Energy Shuzhiyun Technology Co ltd
Priority to CN202410445565.4A priority Critical patent/CN118032327A/en
Publication of CN118032327A publication Critical patent/CN118032327A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an artificial-intelligence-based intelligent equipment lubrication monitoring method and device in the technical field of data processing. After attribute information is acquired by several types of sensors on a lubrication part during operation, the acquired data is identified by a monitoring model, which outputs a classification result from which the lubrication state of the lubrication part is determined. The monitoring model is constructed on a classifier whose network parameters are optimized by introducing the concept of energy conservation and an annealing strategy, so that a better model configuration can be found and the accuracy and robustness of classification are improved. The training sample set used to train the classifier undergoes feature extraction, data dimension reduction and dynamic feature reconstruction, so that key feature information is retained, the ability of the features to describe the equipment state is enhanced, and the requirements of the classifier are better met.

Description

Equipment intelligent lubrication monitoring method and device based on artificial intelligence
Technical Field
The invention relates to the technical field of data processing, in particular to an intelligent lubrication monitoring method and device for equipment based on artificial intelligence.
Background
In the current industrial field, the stable operation and maintenance management of equipment are key to ensuring production efficiency and safety. In lubrication management in particular, proper lubrication not only reduces equipment wear and prolongs service life, but also prevents unexpected faults, so intelligent lubrication monitoring of equipment has become an important field of research and application. With the development of artificial intelligence technology, particularly advances in machine learning and deep learning, how to effectively apply these technologies to intelligent lubrication monitoring of equipment and improve the accuracy and efficiency of monitoring has become a research hotspot in the field.
However, existing classification methods suffer from inadequate accuracy and robustness in processing complex and variable lubrication state data, particularly when novel or rare failure modes are encountered, with limited adaptability and predictive capabilities of the model.
Disclosure of Invention
In view of the above, the invention aims to provide an intelligent lubrication monitoring method and device for equipment based on artificial intelligence, which can improve the accuracy of intelligent lubrication monitoring of the equipment.
In a first aspect, an embodiment of the present invention provides an intelligent lubrication monitoring method for an apparatus based on artificial intelligence, where the method includes: data acquisition is carried out on the lubrication part of the target equipment to obtain a sample to be monitored; the sample to be monitored comprises attribute information acquired by various sensors on the lubricating part in the running process; assembling the sample to be monitored to generate a feature vector to be monitored; inputting the feature vector to be monitored into a pre-constructed monitoring model, and outputting a classification result; the monitoring model is constructed based on classification training of the classifier, and an energy conservation concept and an annealing strategy are introduced to optimize network parameters of the classifier; the training sample set of the training classifier carries out classification training after carrying out feature extraction, data reduction and feature dynamic reconstruction on the original training sample set; the method comprises the steps that an original training sample set is generated based on diversity measurement, the training sample set comprises a plurality of training samples and corresponding sample labels, and the training samples comprise sensor data acquired by a plurality of types of sensors on equipment lubrication parts; and determining the lubrication state corresponding to the lubrication part based on the classification result.
In a second aspect, an embodiment of the present invention provides an intelligent lubrication monitoring device for an apparatus based on artificial intelligence, where the device includes: the data acquisition module is used for acquiring data of a lubrication part of the target equipment to obtain a sample to be monitored; the sample to be monitored comprises attribute information acquired by various sensors on the lubricating part in the running process; the data processing module is used for assembling the sample to be monitored and generating a feature vector to be monitored; the execution module is used for inputting the feature vector to be monitored into a pre-constructed monitoring model and outputting a classification result; the monitoring model is constructed based on classification training of the extreme learning machine algorithm, and an energy conservation concept and an annealing strategy are introduced to optimize network parameters of the extreme learning machine algorithm; the training sample set for training the extreme learning machine algorithm carries out classification training after carrying out feature extraction, data reduction and feature dynamic reconstruction on the original training sample set; the method comprises the steps that an original training sample set is generated based on diversity measurement, the training sample set comprises a plurality of training samples and corresponding sample labels, and the training samples comprise sensor data acquired by a plurality of types of sensors on equipment lubrication parts; and the output module is used for determining the lubrication state corresponding to the lubrication part based on the classification result.
The embodiment of the invention has the following beneficial effects: according to the intelligent lubrication monitoring method and device for the equipment based on the artificial intelligence, after sensor data of various sensors of a lubrication part of the equipment are obtained, the data are identified by a pre-constructed monitoring model, a classification result is determined, and then the lubrication state of the lubrication part is determined, wherein the model introduces an energy conservation concept and an annealing strategy to optimize network parameters of a classifier. The parameters of the classifier can be dynamically adjusted through the energy conservation principle and the simulated annealing strategy, so that more optimal model configuration can be found, and the classification accuracy and robustness are improved, so that the intelligent lubrication monitoring accuracy of the equipment is improved. In addition, the original training sample set is generated based on the diversity measurement, so that the model can identify novel or rare data, the lubrication state of the lubrication part can be accurately identified, and the problem of a fault mode caused by poor lubrication is better solved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings. In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an artificial-intelligence-based intelligent equipment lubrication monitoring method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another artificial-intelligence-based intelligent equipment lubrication monitoring method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a third intelligent lubrication monitoring method for an artificial intelligence based device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an artificial-intelligence-based intelligent equipment lubrication monitoring device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another artificial-intelligence-based intelligent equipment lubrication monitoring device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purposes of clarity, technical solutions, and advantages of the embodiments of the present disclosure, the following description describes embodiments of the present disclosure with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure herein. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It should be apparent that the aspects described in this disclosure may be embodied in a wide variety of forms and that any specific structure and/or function described in this disclosure is illustrative only. Based on the present disclosure, one skilled in the art will appreciate that one aspect described in this disclosure may be implemented independently of any other aspects, and that two or more of these aspects may be combined in various ways. For example, apparatus may be implemented and/or methods practiced using any number of the aspects set forth in this disclosure. In addition, such apparatus may be implemented and/or such method practiced using other structure and/or functionality in addition to one or more of the aspects set forth in the disclosure.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the disclosure by way of illustration, and only the components related to the disclosure are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated. In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the foregoing aspects may be practiced without these specific details.
The embodiment of the invention provides an intelligent equipment lubrication monitoring method and device based on artificial intelligence, which can improve the accuracy of intelligent equipment lubrication monitoring.
For the sake of understanding the present embodiment, first, a detailed description is given of an intelligent lubrication monitoring method for an apparatus based on artificial intelligence disclosed in the present embodiment, and fig. 1 shows a flowchart of an intelligent lubrication monitoring method for an apparatus based on artificial intelligence provided in the present embodiment, and as shown in fig. 1, the method includes the following specific steps:
step S102, data acquisition is carried out on the lubrication part of the target equipment, and a sample to be monitored is obtained.
Step S104, assembling the sample to be monitored to generate a feature vector to be monitored.
In particular, the sample to be monitored includes attribute information collected by various types of sensors during operation of the lubrication part. In one embodiment, the data originates from multiple types of sensors installed at critical lubrication nodes of the device, including temperature sensors, pressure sensors and vibration sensors. In this embodiment, the data attributes include: a sensor reading timestamp; a temperature sensor reading; a pressure sensor reading; a vibration sensor reading; a sound sensor reading; a rotational speed sensor reading; a flow sensor reading; a lubricating oil quality index; a device operating state indicator; and an environmental factor reading.
After the sample to be monitored is collected, it is assembled to generate the feature vector to be monitored. In this embodiment, five data records collected from the key lubrication points of a high-speed rotating apparatus are taken as an example:
It should be emphasized that this embodiment merely illustrates one data format and kind used by the present invention; in practical applications the data generally has more than 10 attributes, and the number of attributes may reach tens or hundreds.
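As a hedged illustration of the assembly step, the sketch below builds a feature vector from one sample's sensor readings; the field names, ordering and values are hypothetical assumptions, not the patent's actual schema:

```python
# Hypothetical sketch of feature-vector assembly; the field names and
# values below are illustrative, not the patent's schema.
FIELDS = ["temperature", "pressure", "vibration", "sound",
          "rotation_speed", "flow", "oil_quality", "environment"]

def assemble_feature_vector(sample: dict) -> list:
    """Order one sample's sensor readings into a fixed-length vector."""
    return [float(sample[name]) for name in FIELDS]

sample = {"temperature": 68.5, "pressure": 2.1, "vibration": 0.32,
          "sound": 71.0, "rotation_speed": 2980, "flow": 14.2,
          "oil_quality": 0.93, "environment": 24.0}
vector = assemble_feature_vector(sample)
```

A fixed field order keeps every assembled vector aligned dimension-by-dimension, which the downstream classifier requires.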
And S106, inputting the feature vector to be monitored into a pre-constructed monitoring model, and outputting a classification result.
Step S108, determining the lubrication state corresponding to the lubrication part based on the classification result.
The embodiment of the invention identifies the sensor data of the lubrication part of the equipment through a pre-constructed monitoring model, and determines the lubrication state of the lubrication part.
The monitoring model is constructed after classification training based on the classifier, and the training sample set used to train the classifier undergoes feature extraction, data dimension reduction and dynamic feature reconstruction of the original training sample set before classification training. Specifically, the new data collected from the sensors of the lubrication part of the device includes information such as temperature, pressure and vibration, and the feature vector of the new sample is denoted x_new. First, feature extraction is performed on the new sample data x_new to obtain a feature vector f_new. Next, the trained feature dimension reduction model is applied to convert f_new to a low-dimensional representation h_new. Finally, the reduced feature h_new is input into the trained classifier to obtain a prediction of the lubrication state, ŷ_new.
According to the embodiment of the invention, assessment of the lubrication state of the equipment is realized by collecting and labeling sensor data from the lubrication part of the equipment. The original training sample set is generated based on a diversity measure; the training sample set of the classifier comprises a plurality of training samples and corresponding sample labels, and the training samples comprise sensor data acquired by several types of sensors on the lubrication parts of the equipment. On this basis, the lubrication state of the lubrication site is determined from the lubrication state category indicated by the classification result (i.e., the labeled category). In one embodiment, the output categories of the classifier are: category 1: normal lubrication; category 2: insufficient lubrication; category 3: excessive lubrication; category 4: aged lubricating oil; category 5: potential failure. If the classification result indicates category 1, the lubrication site is in a normal lubrication state.
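The category-to-state mapping above can be sketched as a simple lookup (a minimal illustration, not the patent's implementation):

```python
# Map the classifier's output category to a lubrication-state label,
# per the five categories listed in the description.
LUBRICATION_STATES = {
    1: "normal lubrication",
    2: "insufficient lubrication",
    3: "excessive lubrication",
    4: "aged lubricating oil",
    5: "potential failure",
}

def lubrication_state(category: int) -> str:
    # Unknown categories fall back to "unknown" rather than raising.
    return LUBRICATION_STATES.get(category, "unknown")
```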
The embodiment of the invention optimizes the network parameters of the classifier by introducing the concept of energy conservation and an annealing strategy. Specifically, an extreme learning machine (ELM) classification algorithm is adopted as the classifier. The energy-conservation annealing ELM is an improvement on the traditional ELM, in which the weights and biases of the hidden-layer nodes are randomly generated; here these parameters are dynamically adjusted through the energy conservation principle and a simulated annealing strategy to find a better model configuration, which improves the convergence rate of the model parameter search as well as the accuracy and robustness of the classifier.
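A minimal sketch of the idea, assuming a standard ELM with a Metropolis-style simulated-annealing search over the hidden-layer weights; the "energy" here is simply the training error, an assumption, since the patent does not specify its energy-conservation formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    # Hidden-layer output with a tanh activation.
    return np.tanh(X @ W + b)

def fit_output_weights(H, Y):
    # Classic ELM step: output weights via least squares (pseudo-inverse).
    return np.linalg.pinv(H) @ Y

def energy(X, Y, W, b):
    # "Energy" of a hidden-layer configuration = training MSE after
    # solving the output layer in closed form (an assumption).
    H = hidden(X, W, b)
    beta = fit_output_weights(H, Y)
    return np.mean((H @ beta - Y) ** 2)

def train_elm_annealed(X, Y, n_hidden=10, steps=100, T0=1.0, cooling=0.95):
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    E, T = energy(X, Y, W, b), T0
    for _ in range(steps):
        # Propose a perturbed configuration of the hidden layer.
        W2 = W + rng.normal(scale=0.1, size=W.shape)
        b2 = b + rng.normal(scale=0.1, size=b.shape)
        E2 = energy(X, Y, W2, b2)
        # Metropolis rule: always accept improvements; accept worse
        # moves with probability exp((E - E2) / T) as T cools.
        if E2 < E or rng.random() < np.exp((E - E2) / T):
            W, b, E = W2, b2, E2
        T *= cooling
    H = hidden(X, W, b)
    return W, b, fit_output_weights(H, Y)

# Toy regression stand-in for lubrication-state data.
X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
Y = np.sin(2 * np.pi * X)
W, b, beta = train_elm_annealed(X, Y)
mse = np.mean((hidden(X, W, b) @ beta - Y) ** 2)
```

The annealing loop only searches the randomly initialized hidden layer; the output layer is still solved in closed form at every step, preserving the ELM's fast training property.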
Furthermore, on the basis of the above embodiment, the embodiment of the invention provides another artificial-intelligence-based intelligent equipment lubrication monitoring method, which mainly describes the construction of the monitoring model. The monitoring model is trained on a pre-constructed training sample set; the training sample set is constructed by labeling and preprocessing an original sample set and then expanding the data with a preset data expansion method. It will be understood that, in the task of the invention, acquisition, labeling and preprocessing of training data are time-consuming and labor-intensive, and insufficient training samples easily result in poor generalization capability and reduced accuracy of the model. Many existing approaches have difficulty addressing insufficient data volume and lack of data diversity, especially for complex or infrequent equipment failure modes, which limits the generalization ability and prediction accuracy of the model.
The invention adopts a generative adversarial network (GAN) algorithm based on quantum sparse coding, using concepts from quantum computing to enhance the sparse coding capability, thereby generating high-quality and diversified data, alleviating the problem of insufficient training data in intelligent equipment lubrication monitoring, and improving the generalization capability and precision of the model. Compared with a traditional GAN, the quantum sparse coding GAN has higher efficiency and generation capacity when processing complex data structures, and is particularly suitable for the field of equipment lubrication monitoring, where data volumes are small and data structures are complex. Specifically, FIG. 2 shows a flowchart of the steps of constructing a training sample set according to an embodiment of the present invention; as shown in FIG. 2, it includes the following steps:
Step S202, sensor data of a lubrication part of the equipment are obtained, and sample labeling is carried out on the sensor data to construct an initial sample set.
In a specific implementation, data acquisition is performed on the lubrication part of the equipment, the acquired data are labeled, and an initial sample set is constructed to train the model. For the data sources in the embodiment of the present invention, reference may be made to the above embodiment; the labeled category is used to characterize the lubrication state of the lubrication site. In this embodiment, the labeling categories of the data include: category 1: normal lubrication; category 2: insufficient lubrication; category 3: excessive lubrication; category 4: aged lubricating oil; category 5: potential failure.
Step S204, based on the generative adversarial network and a preset qubit-simulated sparse coding process, data expansion is performed on the initial sample set to obtain generated data based on quantum state coding.
In the embodiment of the invention, the generator of the generative adversarial network receives random noise as input; the sparse coding process is simulated through quantum gate operations, the random noise is mapped onto the qubits, and data expansion is performed on the random noise based on the qubits to obtain generated data based on quantum state coding.
Specifically, the parameters of the generator and the discriminator are initialized first, the number of qubits is set to simulate the sparse coding process, and the distance metric parameters are initialized. That is, the parameters θ_G of the generator G and θ_D of the discriminator D are randomly initialized. At the same time, the qubits q are set to simulate a sparse coding process in which each qubit represents one feature dimension of the data.
Further, the generator receives random noise as input and simulates a sparse coding process through quantum gate operations, generating preliminary data samples. Specifically, the generator receives a random noise vector z as input, simulates the sparse coding process through quantum gate operations, and generates a preliminary data sample x'. The quantum sparse coding can be expressed as:

x' = U(φ(z))

wherein U(·) represents the quantum gate operation and φ(·) represents the process of mapping the input noise z onto the qubits q. In one embodiment, the number of qubits n is set to 10 to simulate the data distribution in a 10-dimensional feature space, providing sufficient expressive power for the generator to simulate complex equipment lubrication data.
Further, the quantum gate operation U can be calculated as:

U = exp(−iHθ)

wherein H is the Hamiltonian and θ is a parameter related to the quantum state, preset manually.
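For a single qubit this unitary can be computed directly; the sketch below uses the Pauli-X matrix as an illustrative Hamiltonian (an assumption, since the patent does not specify H) and the closed-form identity exp(−iθX) = cos θ·I − i sin θ·X:

```python
import numpy as np

# Illustrative Hamiltonian: Pauli-X (an assumption). Because X @ X = I,
# the matrix exponential reduces to a closed form:
#   exp(-i*theta*X) = cos(theta) * I - 1j * sin(theta) * X
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
theta = np.pi / 2
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * H

# A quantum gate must be unitary: U @ U† = I.
unitary_ok = np.allclose(U @ U.conj().T, np.eye(2))
```

For a general Hamiltonian without such an identity, `scipy.linalg.expm` computes the matrix exponential numerically.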
Step S206, performing distance measurement on the generated data, and introducing quantum entropy to perform diversity measurement on the generated data.
In a specific implementation, the initially generated data samples are processed with an improved distance metric method to preserve the internal structural characteristics of the data and enhance its diversity. The improved metric considers not only the Euclidean distance between data points but also their topological relations in the high-dimensional feature space, so that the internal structural characteristics of the data are better captured and preserved. This effectively improves the quality of the generated data during data expansion and ensures that the expanded data better reflects the distribution of lubrication characteristics in the real operating state of the equipment.
Specifically, the objective of distance metric learning is to optimize the internal structure of the generated data so that it is closer to the distribution of the real data. Let x be a real data sample and x' a generated sample, and introduce a distance metric M. The distance metric learning loss function L_M can then be expressed as:

L_M = E[ d_M(f(x), f(x')) ]

wherein f(·) represents the feature extraction function and d_M represents the distance measure obtained by distance metric learning.
Further, the distance measure can be calculated as:

d_M(a, b) = sqrt( (a − b)^T M (a − b) )

wherein M is a symmetric positive definite matrix whose elements are obtained through learning and represent the importance of, and the correlations between, the different dimensions of the feature space.
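A small sketch of this learned (Mahalanobis-style) distance, with a hand-picked positive definite M standing in for the learned matrix:

```python
import numpy as np

def metric_distance(a, b, M):
    # d_M(a, b) = sqrt((a - b)^T M (a - b)) for symmetric positive
    # definite M; here M is hand-picked rather than learned.
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.sqrt(d @ M @ d))

M = np.diag([2.0, 0.5])  # per-dimension importance weights (SPD)
d_weighted = metric_distance([1.0, 0.0], [0.0, 0.0], M)
```

With M equal to the identity matrix, the measure reduces to the ordinary Euclidean distance.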
Furthermore, the invention adopts a loss function that fuses quantum information theory with adversarial learning characteristics: it considers not only the similarity between the generated data and the real data, but also introduces the concept of quantum entropy to evaluate the diversity and information richness of the generated data, promoting the generation of high-quality and diversified data. In one embodiment, the quantum entropy uses the von Neumann entropy as a measure of quantum state uncertainty. For a quantum state ρ, its von Neumann entropy can be expressed as:

S(ρ) = −Tr(ρ log ρ)

wherein Tr(·) denotes the trace of the matrix, used to extract the physically observable quantities from quantum operations. Further, the generated data is regarded as an encoding of quantum states, whereby the quantum entropy is introduced as a measure of the diversity of the generated data.
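The von Neumann entropy can be computed from the eigenvalues of the density matrix; a minimal sketch (a pure state gives S = 0, the maximally mixed qubit gives S = ln 2):

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log rho), via the eigenvalues of rho;
    # zero eigenvalues contribute 0 (the limit p -> 0 of p*log p).
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state
mixed = np.eye(2) / 2.0                      # maximally mixed qubit
```

Higher entropy corresponds to a more mixed (less certain) state, which is why the text uses it as a proxy for the diversity of the generated data.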
Step S208, a loss function is calculated from the distance metric and the diversity metric, and the generative adversarial network is optimized based on the loss function and the adversarial learning characteristics, so as to improve the quality of the generated data.
In a specific implementation, adversarial training is performed based on the distance metric and the diversity metric. The discriminator tries to distinguish real data from generated data, while the generator tries to fool the discriminator; both parties continuously update their parameters in this contest until a Nash equilibrium is reached. The parameters are updated based on the value of the loss function computed at each iteration.
Specifically, adversarial training updates the parameters by minimizing the loss functions of the generator and the discriminator. The goal of the discriminator is to maximize the probability of distinguishing real data from generated data, while the goal of the generator is to minimize the probability that the discriminator correctly recognizes the generated data. The loss of the discriminator, L_D, can be expressed as:

L_D = −E_{x∼p_data}[ log D(x) ] − E_{z∼p_z}[ log(1 − D(G(z))) ]

The loss of the generator, L_G, can be expressed as:

L_G = −E_{z∼p_z}[ log D(G(z)) ]

The loss function integrating quantum information theory can then be calculated as:

L = L_G + λ₁ L_M − λ₂ S(ρ_{G(z)})

wherein λ₁ and λ₂ are hyperparameters that trade off the importance of the different terms, ρ_{G(z)} denotes the quantum state encoding of the data generated from noise z, E denotes expectation, and ∼ denotes that a variable follows a particular distribution.
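The two standard GAN loss terms can be sketched as follows; the quantum-entropy and distance-metric terms are omitted for brevity, and the discriminator scores are hypothetical:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # L_D = -E[log D(x)] - E[log(1 - D(G(z)))]
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    # L_G = -E[log D(G(z))]
    return float(-np.mean(np.log(d_fake)))

# Hypothetical discriminator scores (probabilities of "real").
d_real = np.array([0.9, 0.8])   # scores on real samples
d_fake = np.array([0.1, 0.2])   # scores on generated samples
L_D = discriminator_loss(d_real, d_fake)
L_G = generator_loss(d_fake)
```

Note that the generator's loss shrinks as the discriminator is fooled (d_fake → 1), which is exactly the adversarial dynamic described above.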
Further, feedback adjustment is carried out: according to the feedback of the discriminator, the parameters of the generator, the quantum sparse coding strategy and the parameters of distance metric learning are adjusted, optimizing the quality and diversity of the generated data. Let η be the learning rate. The generator parameters are updated as:

θ_G ← θ_G − η_G ∇_{θ_G} L_G

and the discriminator parameters are updated as:

θ_D ← θ_D − η_D ∇_{θ_D} L_D

wherein θ_G and θ_D on the right-hand side are the generator and discriminator parameters before the update, η_G and η_D are the learning rates of the generator and the discriminator, and ∇ is the gradient operator.
Further, the learning rate η_G of the generator and the learning rate η_D of the discriminator are set in a dynamically adjusted manner. Specifically, a dynamic parameter adjustment mechanism introduced in the adversarial training stage monitors the gradient change of the loss function in real time during training and dynamically adjusts the learning rates of the generator and the discriminator, thereby achieving a more efficient and stable training process.
In a specific implementation, before updating the parameters of the generator G and the discriminator D at each iteration, the dynamic parameter adjustment mechanism adjusts the learning rates η_G and η_D according to the gradient information of the loss function. Define ΔL_G and ΔL_D as the changes in the generator and discriminator loss functions between the current iteration and the previous one:

ΔL_G = L_G^(t) − L_G^(t−1),  ΔL_D = L_D^(t) − L_D^(t−1)

wherein L_G^(t) and L_D^(t) are the loss function values of the generator and the discriminator at the current iteration, and L_G^(t−1) and L_D^(t−1) are the corresponding values of the previous iteration.
Further, based on the change in the loss function, the strategy for dynamically adjusting the learning rate can be defined as:

η_G^(t) = g(ΔL_G) · η_G^(t−1),  η_D^(t) = g(ΔL_D) · η_D^(t−1)

wherein g(·) is an adjustment function that adjusts the learning rate according to the change in the loss function, η_G^(t) and η_D^(t) are the learning rates of the generator and the discriminator at the current iteration, and η_G^(t−1) and η_D^(t−1) are those of the previous iteration.
In one embodiment, the adjustment function takes the form:

g(ΔL) = exp(−κ · ΔL)

wherein ΔL is the change in the loss function and κ is a hyperparameter controlling the sensitivity of the learning rate adjustment. When the change in the loss function is positive, the adjustment function decreases the learning rate; otherwise it increases the learning rate, achieving adaptive learning rate adjustment during training.
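A minimal sketch of this adaptive rule, assuming the exponential form g(ΔL) = exp(−κ·ΔL) for the adjustment function:

```python
import math

def adjust_learning_rate(lr_prev, loss_prev, loss_curr, kappa=0.5):
    # eta^(t) = g(dL) * eta^(t-1) with g(dL) = exp(-kappa * dL):
    # a rising loss (dL > 0) shrinks the rate, a falling loss grows it.
    d_loss = loss_curr - loss_prev
    return lr_prev * math.exp(-kappa * d_loss)
```

kappa trades off responsiveness against stability: larger values react more strongly to each loss fluctuation.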
Further, it is determined whether the algorithm satisfies the end condition. In one embodiment, a maximum number of iterations is set; when the maximum number of iterations is reached, the final generated data is output; otherwise, the method returns to step S204 to continue training.
Step S210, the generated data of acceptable quality are combined with the initial sample set to obtain the training sample set.
This embodiment adopts a generative adversarial network algorithm based on quantum sparse coding and uses concepts from quantum computing to enhance the sparse coding capability, thereby generating high-quality and diversified data, alleviating the problem of insufficient training data in intelligent equipment lubrication monitoring, and improving the generalization capability and precision of the model. Compared with a traditional GAN, the quantum sparse coding GAN has higher efficiency and generation capacity when processing complex data structures, and is particularly suitable for the field of equipment lubrication monitoring, where data volumes are small and data structures are complex.
Further, a monitoring model is built based on the current training sample set. Fig. 3 shows a flowchart of a third artificial-intelligence-based equipment intelligent lubrication monitoring method according to an embodiment of the present invention; as shown in Fig. 3, the method includes the following steps:
step S302, a pre-constructed training sample set is obtained.
Step S304, feature extraction is carried out on the training sample set through a pre-built feature extraction model, data dimension reduction is carried out on the training sample set after feature extraction through a pre-built feature dimension reduction model, and feature dynamic reconstruction is carried out on the training sample set after data dimension reduction through a preset dynamic feature reconstruction network, so that a target training sample is generated.
According to the embodiment of the invention, after the feature extraction, the data dimension reduction and the feature dynamic reconstruction are carried out on the training sample set, the classifier is subjected to classification training. The generated target training sample keeps key characteristic information, and simultaneously removes redundant and unimportant information, so that the data representation efficiency can be remarkably improved, and more accurate input is provided for subsequent classification and monitoring tasks. Moreover, the method can better adapt to the requirements of the classifier, and can also enhance the description capability of the characteristics on the equipment state.
However, the prior art is often inefficient and inaccurate when extracting key features from complex equipment lubrication monitoring data, especially in environments with high data noise, which directly affects the accuracy and reliability of the monitoring. In the face of high-dimensional monitoring data, the prior art often cannot effectively reserve key information during dimension reduction processing, so that the processing complexity is high and the model performance is poor, which is challenging to computational resources and processing efficiency. In this regard, the embodiment of the present invention solves the above-mentioned problems by constructing a feature extraction model and a feature dimension reduction model through the following steps.
Specifically, in the embodiment of the invention, the feature extraction is performed through the pre-constructed feature extraction model, and the feature extraction model is constructed through the following steps:
1) Initializing a neural network architecture, and integrating a local sensitive hash function into an input layer of the neural network; and setting a honeypot node in the neural network.
2) Mapping the training sample set to a low-dimensional hash space through a local sensitive hash function, and screening the training sample set through a honey pot node to obtain a target training sample.
In a specific implementation, the invention provides a neural network algorithm based on a locally sensitive honeypot optimization algorithm, which combines locality-sensitive hashing with honeypot technology to realize efficient and robust feature extraction. The algorithm identifies and extracts features critical to determining the equipment lubrication state, while filtering out irrelevant noise, by exploiting the approximate nearest-neighbor search capability of locality-sensitive hashing on high-dimensional data and the strategies that honeypot technology uses in the security domain for trapping and analyzing malicious behavior.
In specific implementation, the neural network architecture is initialized, and a local sensitive hash function is integrated into an input layer of the network, so that high-dimensional equipment lubrication monitoring data is mapped into a lower-dimensional hash space, and the computational complexity is reduced.
Let the data vector input into the feature extraction model be $c_i$, where $i$ denotes the $i$-th equipment lubrication monitoring data point, and let the set of locality-sensitive hash functions be $H = \{h_1, h_2, \ldots, h_k\}$, where each hash function $h_j$ maps the input vector into a hash bucket. The mapping process can be expressed as:

$$v_i = h_1(c_i) \oplus h_2(c_i) \oplus \cdots \oplus h_k(c_i)$$

where $v_i$ is the mapped hash vector and $\oplus$ denotes the concatenation operation on the bucket values.
Further, the locality-sensitive hash function $h_j$, which maps the high-dimensional data into the low-dimensional hash space, can be calculated as:

$$h_j(c_i) = \left\lfloor \frac{a_j \cdot c_i + b_j}{w} \right\rfloor$$

where $a_j$ is the normal vector of a randomly selected hyperplane, used to project data points into a one-dimensional space; $b_j$ is an offset parameter of the hash bucket, used to adjust the placement of the hash buckets; and $w$ is the width of the hash window, used to control the granularity of the hash space.
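A minimal sketch of this hash-mapping step, assuming the standard p-stable projection scheme (random normals, offsets in $[0, w)$) and illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lsh_family(dim, k, w, rng):
    """Sample k hash functions h_j(x) = floor((a_j . x + b_j) / w)."""
    A = rng.standard_normal((k, dim))  # random hyperplane normals a_j
    b = rng.uniform(0.0, w, size=k)    # random offsets b_j in [0, w)
    return A, b

def lsh_map(x, A, b, w):
    """Map one data vector to its concatenated bucket-index vector v_i."""
    return np.floor((A @ x + b) / w).astype(int)

A, b = make_lsh_family(dim=16, k=4, w=2.0, rng=rng)
x = rng.standard_normal(16)
v = lsh_map(x, A, b, 2.0)   # k bucket indices for this data point
```

Nearby inputs tend to fall into the same buckets, which is what lets the network treat the k-dimensional bucket vector as a compact surrogate for the high-dimensional monitoring data.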
Further, honeypot feature screening is performed:
Honeypot nodes are arranged in the neural network and are specially used for capturing the characteristics which are most critical to the prediction of the lubrication state of the equipment. Wherein the honeypot node is provided with a sensitivity parameter. The embodiment of the invention adjusts the sensitivity of the honeypot nodes through feedback in the training process so as to identify and strengthen the characteristics which have the greatest contribution to the improvement of the model performance.
Specifically, let $F$ be the original feature matrix extracted by the network. The honeypot nodes screen key features through a weighting process with weighting matrix $W^{h}$, in which each element $w_{jk}$ represents the importance of the $j$-th feature at the $k$-th honeypot node. The weighted feature matrix $F'$ is calculated as:

$$F' = F\,W^{h}$$

Further, the elements $w_{jk}$ of the honeypot-node weighting matrix, used to identify and weight key features, can be expressed as:

$$w_{jk} = \frac{1}{1 + e^{-\gamma\,(s_{jk} - \theta)}}$$

where $s_{jk}$ is the score of the $j$-th feature at the $k$-th honeypot node, calculated from the contribution of that feature to the improvement of model performance, and $\gamma$ is a parameter controlling the steepness of the curve, used to adjust the weighting intensity. In one embodiment, the sensitivity parameter $\theta$ of the honeypot nodes is set to 0.5.
3) Training the neural network by using the target training sample, and adjusting network parameters of the neural network by using a preset optimization algorithm.
In a specific implementation, the neural network is trained using the data after locality-sensitive hash mapping and honeypot screening, and the network parameters are adjusted through optimization algorithms such as back-propagation and gradient descent, so that the most representative features are extracted. Specifically, the neural network optimizes feature extraction by minimizing a loss function. Let $y_i$ be the true label and $\hat{y}_i$ the network output, obtained from a preset Softmax classification function; the loss function $L$ can be expressed as:

$$L = \frac{1}{N}\sum_{i=1}^{N} \ell\left(y_i, \hat{y}_i\right) + \lambda\, R(W)$$

where $\hat{y}_i$ is the network output of the $i$-th sample, $y_i$ is the true label of the $i$-th sample, $\ell(\cdot,\cdot)$ is the loss term, $R(W)$ is the regularization term, $W$ denotes the network weights, and $\lambda$ is the regularization coefficient.
Further, the regularization term $R(W)$ is used to prevent the network from overfitting, and can be calculated as:

$$R(W) = \sum_{l} \left\| W^{(l)} \right\|_2^2$$

where $W^{(l)}$ is the weight matrix of the $l$-th layer of the neural network and $\|\cdot\|_2$ denotes the L2 norm.
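The regularized training objective described above can be sketched numerically; the cross-entropy data term (implied by the Softmax output) and all tensor shapes are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

def loss_with_l2(logits, y_onehot, weights, lam=1e-3):
    """Mean cross-entropy data term plus lam * sum_l ||W_l||_2^2."""
    p = softmax(logits)
    n = logits.shape[0]
    data_term = -np.sum(y_onehot * np.log(p + 1e-12)) / n
    reg_term = sum(float(np.sum(W * W)) for W in weights)
    return data_term + lam * reg_term

logits = np.array([[2.0, 0.1], [0.2, 1.5]])
y = np.array([[1.0, 0.0], [0.0, 1.0]])
layer_weights = [np.ones((3, 2)), np.ones((2, 2))]
L = loss_with_l2(logits, y, layer_weights)
```

Setting `lam=0` recovers the pure data term; the L2 penalty only adds the summed squared weights of each layer, matching the regularizer's role of discouraging large weights.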
4) And dynamically adjusting sensitivity parameters of the local sensitive hash function and the honeypot node according to the performance of the neural network on a preset verification set.
In order to further optimize the feature extraction process, the embodiment of the invention verifies the neural network on a preset verification set to determine its current performance, and dynamically adjusts the parameters of the locality-sensitive hash function and the honeypot nodes if the performance does not meet the requirements. Specifically, let the performance evaluation index be $P$; according to the change $\Delta P$, the adjustment policy can be expressed as:

$$(\theta', h') = g(\theta, h, \Delta P)$$

where $\Delta P$ is the performance change and $g(\cdot)$ is an adjustment function; $\theta'$ is the adjusted honeypot-node sensitivity parameter, $h'$ is the adjusted locality-sensitive hash function, $\theta$ is the honeypot-node sensitivity parameter before adjustment, and $h$ is the locality-sensitive hash function before adjustment. In one embodiment, the adjustment function fine-tunes the parameters using a gradient descent method.
5) And constructing a feature extraction model based on the current neural network until the neural network meets the performance requirement.
In summary, the feature extraction model is constructed through the above steps. The trained neural network outputs a set of optimized feature vectors, which are then passed to a feature dimension reduction model that performs feature dimension reduction on the training sample set.
In specific implementation, the construction steps of the feature dimension reduction model are as follows:
1) And acquiring a preset training sample set.
2) And performing dimension reduction processing on the training sample set through a preset self-coding neural network and a network flow optimization mechanism to obtain feature representation of a low-dimension feature space and reconstruction data of an original data space.
In specific implementation, the data after feature extraction is further input into a feature dimension reduction module for dimension reduction operation, and the self-coding neural network algorithm based on network flow optimization is adopted to effectively reduce the data dimension after feature extraction in a mode of encoding before decoding, and meanwhile key information is reserved to improve the accuracy of equipment lubrication monitoring. The self-coding neural network based on network flow optimization combines the network flow optimization theory and the self-coder structure, key characteristic information is reserved through optimizing a data flow path, redundant and unimportant information is removed, the data representation efficiency can be remarkably improved, and more accurate input is provided for subsequent classification and monitoring tasks.
In specific implementation, mapping the training sample set to a low-dimensional feature space through an encoder to obtain feature representation; the feature representation is restored to the original data space by a decoder to obtain reconstructed data. The embodiment of the invention introduces a network flow optimization mechanism into the low-dimensional feature space to optimize the feature representation through the network flow optimization mechanism and adjust the connection weight among the network nodes of the encoder.
In particular, the encoder section is designed as a multi-layer neural network architecture that maps high-dimensional input data into a low-dimensional feature space; the encoder design is intended to capture the inherent structure and patterns in the data. The input is the high-dimensional data vector $x$ obtained after feature extraction, and the output is the reduced feature vector $z$. The encoding process can be expressed as:

$$z = f_{enc}(x;\, \theta_{enc})$$

where $f_{enc}$ is the encoder function and $\theta_{enc}$ are the parameters of the encoder.
Further, the encoder function $f_{enc}$ consists of a multi-layer neural network, and the computation of each layer can be expressed as:

$$h^{(l+1)} = \sigma\!\left( W^{(l)} h^{(l)} + b^{(l)} \right)$$

where $h^{(l)}$ is the output of the $l$-th layer (for the first layer, $h^{(0)} = x$), $h^{(l+1)}$ is the output of the $(l+1)$-th layer, $W^{(l)}$ and $b^{(l)}$ are respectively the weight matrix and bias vector of the $l$-th layer, and $\sigma$ is the ReLU activation function.
In one embodiment, both the encoder and decoder are designed as a 3-layer fully connected neural network. The number of neurons in the first layer is set to 128, the second layer is 64, and the last layer (i.e., the coding layer and decoding initiation layer) is 32.
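The encoder forward pass with the embodiment's layer widths (128 → 64 → 32) can be sketched as follows; the input dimension of 256 and the weight initialization scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

# Layer widths from the embodiment: input -> 128 -> 64 -> 32 (coding layer).
sizes = [256, 128, 64, 32]
params = [(rng.standard_normal((m, n)) * 0.05, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def encode(x, params):
    """Forward pass h <- ReLU(W h + b) through the encoder layers."""
    h = x
    for W, b in params:
        h = relu(h @ W + b)
    return h

x = rng.standard_normal(256)
z = encode(x, params)   # 32-dimensional reduced feature vector
```

The decoder mirrors this structure in reverse (32 → 64 → 128 → output), so the coding layer of 32 units is the bottleneck that forces a compact representation.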
In the encoded low-dimensional feature space, a network flow optimization mechanism is introduced. By modeling the data flow process as a minimum-cost maximum-flow problem, the feature representation is optimized: after the data are encoded by the encoder, the connection weights among the nodes in the network are adjusted so that key information is transmitted preferentially during the data flow process, and the result is then input into the decoder for decoding.
Specifically, data optimization is performed by constructing the features into a graph comprising nodes and edges. Let $G = (V, E)$ be the feature flow graph, where $V$ and $E$ respectively represent the nodes and edges of the graph. The flow $f_{uv}$ and cost $c_{uv}$ on each edge $(u, v)$ are calculated as:

$$f_{uv} = \min\!\left(C_{uv},\, d_u,\, d_v\right), \qquad c_{uv} = \alpha f_{uv} + \beta f_{uv}^2$$

where $C_{uv}$ is the capacity of the edge $(u, v)$, $d_u$ and $d_v$ are respectively the demand and supply of nodes $u$ and $v$, and $\alpha$ and $\beta$ are parameters controlling the cost of the flow.
In one embodiment, the capacity of the edgeThe initial value is set to 1.5 times the average intensity of the data stream according to the average intensity dynamic setting of the data stream. Flow cost parameter/>And/>Initial values are set to 0.01 and 0.001, respectively, to balance flow distribution and optimize cost.
Further, the calculation of the flow $f_{uv}$ needs to consider the supply-demand balance of the nodes, which can be expressed as:

$$\sum_{u:(u,v)\in E} f_{uv} = d_v, \qquad 0 \le f_{uv} \le C_{uv}$$

where $\sum_{u:(u,v)\in E} f_{uv}$ represents the total flow into node $v$; the constraints ensure that the flow does not exceed the capacity of the edges while the demand of each node is met.
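A toy sketch of the capacity and demand constraints follows. The greedy fill order and the linear-plus-quadratic cost form are assumptions for illustration, not the patent's full min-cost max-flow solver; only the initial cost parameters (0.01 and 0.001) come from the text:

```python
# Two edges flowing into a node v whose demand is 1.2.
capacity = {("u1", "v"): 1.0, ("u2", "v"): 0.5}   # capacities C_uv
alpha, beta = 0.01, 0.001                          # initial cost parameters

def assign_inflow(capacity, demand):
    """Fill edges in order without exceeding capacity until demand is met."""
    flow, remaining = {}, demand
    for edge, cap in capacity.items():
        f = min(cap, remaining)
        flow[edge] = f
        remaining -= f
    return flow

def edge_cost(f, alpha, beta):
    """Assumed cost form c(f) = alpha*f + beta*f^2."""
    return alpha * f + beta * f * f

flow = assign_inflow(capacity, demand=1.2)
total_in = sum(flow.values())
total_cost = sum(edge_cost(f, alpha, beta) for f in flow.values())
```

A real solver would choose the flow split that minimizes the total cost subject to the same constraints; the sketch only demonstrates that the inflow meets the demand while respecting each edge's capacity.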
Further, the decoder section is designed; it also adopts a multi-layer neural network structure and restores the data in the low-dimensional feature space to the original data space. The objective of the decoder is to reconstruct the original data as accurately as possible, so as to verify that key information is retained during encoding. Specifically, the low-dimensional feature vector $z$ is mapped back to the original data space to obtain the reconstructed data vector $\hat{x}$; the decoding process can be expressed as:

$$\hat{x} = f_{dec}(z;\, \theta_{dec})$$

where $f_{dec}$ is the decoder function and $\theta_{dec}$ are the parameters of the decoder.
Further, the decoder function $f_{dec}$ is similar in structure to the encoder and is used to recover from the low-dimensional feature space to the original data space; the computation of each layer can be expressed as:

$$g^{(l+1)} = \sigma_{dec}\!\left( W_{dec}^{(l)} g^{(l)} + b_{dec}^{(l)} \right)$$

where $g^{(l)}$ is the output of the $l$-th decoder layer (for the first layer, $g^{(0)} = z$), $g^{(l+1)}$ is the output of the $(l+1)$-th decoder layer, $W_{dec}^{(l)}$ and $b_{dec}^{(l)}$ are the weights and biases of the $l$-th decoder layer, and $\sigma_{dec}$ is the activation function used by the decoder.
3) And adjusting network parameters of the self-coding neural network according to the difference between the reconstructed data and the training sample set, and adjusting parameters in a network flow optimization mechanism.
Specifically, by comparing the difference between the original data and the reconstructed data, the parameters in the encoder and the decoder are dynamically adjusted, so that the feature dimension reduction process can be ensured to effectively reduce the data dimension, and information critical to the monitoring task can be reserved.
Specifically, the invention adopts an adaptive mapping adjustment mode that compares the original data vector $x$ with the reconstructed data vector $\hat{x}$ to realize adaptive adjustment. The difference metric $D$ can be expressed as a mean square error:

$$D = \frac{1}{n} \sum_{i=1}^{n} \left( x_i - \hat{x}_i \right)^2$$

Based on the difference metric $D$, the parameters of the encoder and the decoder, as well as the flow and cost parameters in the network flow optimization, are dynamically adjusted to tune the connection weights among the network nodes, so that key information can be effectively retained during feature dimension reduction.
4) And constructing a characteristic dimension reduction model based on the current self-coding neural network until the difference meets the preset difference requirement.
To sum up, it is determined whether the self-encoder is trained based on the difference metric, and a feature dimension reduction model is constructed based on the trained self-encoder. Further, the feature vector after the dimension reduction is output through the model. In order to enable the training samples to better adapt to the requirements of the classifier, the embodiment of the invention further carries out feature dynamic reconstruction on the feature vector.
The invention adopts a preset dynamic feature reconstruction network to further refine and reconstruct the features so as to better adapt to the requirements of the classifier, and the network realizes the self-adaptive reconstruction of the features by dynamically adjusting parameters in the feature reconstruction process so as to enhance the description capability of the features on the equipment state.
Specifically, let the feature vector obtained after feature dimension reduction be $z$. The goal of the dynamic feature reconstruction network $R$ is to convert $z$ into the reconstructed feature vector $z'$; the conversion process can be expressed as:

$$z' = R(z;\, \theta_R)$$

where $\theta_R$ denotes the parameters of the dynamic feature reconstruction network. The adaptive tuning of the network is realized through a loss function $L_R$, which is intended to minimize the difference between the reconstructed feature and the original feature while optimizing the contribution of the reconstructed feature to the classification result; it can be expressed as:

$$L_R = \left\| z - z' \right\|_2^2 + \lambda\, L_{cls}(z')$$

where $\|\cdot\|_2$ denotes the L2 norm, used to calculate the difference between the original and reconstructed feature vectors; $L_{cls}(z')$ is the classification loss based on the reconstructed features, used to guide the feature reconstruction process; and $\lambda$ is a hyperparameter used to balance the two loss terms.
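The two-part reconstruction objective can be sketched as below; the vectors, the fixed classification-loss value, and λ = 0.5 are illustrative assumptions:

```python
import numpy as np

def reconstruction_loss(z, z_rec, cls_loss, lam=0.5):
    """Squared L2 reconstruction error plus lam-weighted classification loss."""
    return float(np.sum((z - z_rec) ** 2) + lam * cls_loss)

z = np.array([1.0, 0.0, 2.0])       # dimension-reduced feature vector
z_rec = np.array([0.9, 0.1, 2.1])   # output of the reconstruction network
# cls_loss would come from the downstream classifier; fixed here for the demo.
L_R = reconstruction_loss(z, z_rec, cls_loss=0.3, lam=0.5)
```

With a perfect reconstruction and zero classification loss the objective vanishes, so minimizing it pulls the reconstructed features toward the originals while steering them to help the classifier.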
In conclusion, the dimension-reduced feature vector after feature dynamic reconstruction is obtained, and the vector (namely the target training sample) is used as a sample of a training classifier for classification training.
The invention adopts an extreme learning machine classification algorithm based on an energy-conservation annealing strategy. When processing intelligent equipment lubrication monitoring data, the weight and bias parameters of the extreme learning machine are optimized by introducing the concept of energy conservation together with an annealing strategy, improving classification accuracy and robustness. The extreme learning machine with the energy-conservation annealing strategy is an improvement on the conventional extreme learning machine algorithm: in a conventional extreme learning machine, the weights and biases of the hidden-layer nodes are randomly generated and then fixed, whereas in the invention these parameters are dynamically adjusted through the energy conservation principle and a simulated annealing strategy to find a better model configuration.
Specifically, the training process of the extreme learning machine classification algorithm based on the energy conservation annealing strategy is as follows:
step S306, inputting the target training sample into a preset classifier for classification training, and calculating classifier energy according to the current classification performance of the classifier.
First, the weights $W$ and biases $b$ of the extreme learning machine are randomly initialized, and the initial annealing temperature $T_0$ and cooling rate $\alpha_{ELM}$ are set. The weight matrix $W$ and bias vector $b$ are initialized to random values following a standard normal distribution, which can be expressed as:

$$W \sim \mathcal{N}(0, 1), \qquad b \sim \mathcal{N}(0, 1)$$

In one embodiment, $T_0 = 1000$ and $\alpha_{ELM} = 0.95$.
Further, the dimension-reduced feature data are input into the extreme learning machine, and the hidden-layer output and the final classification result are computed by forward propagation. Specifically, the hidden-layer output $H$ is calculated from the input data $X$, the weights $W$, and the biases $b$ using the activation function $g$:

$$H = g\!\left( XW + b \right), \qquad g(x) = \frac{1}{1 + e^{-x}}$$

where $g$ is the Sigmoid activation function.
Further, energy calculation is carried out: the energy of the system is computed according to the classification performance of the current model. In one embodiment, the energy of the system is represented by a mean square error loss function, which can be calculated as:

$$E = \frac{1}{N} \sum_{i=1}^{N} \left( t_i - H_i \right)^2$$

where $N$ is the number of samples, $t_i$ is the target output, and $H_i$ is the hidden-layer output of the $i$-th sample.
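The forward pass and energy computation can be sketched as follows. All dimensions and data are illustrative; the mean-over-hidden-units readout is a stand-in, since a full extreme learning machine would solve output weights by least squares before scoring:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_samples, n_features, n_hidden = 8, 5, 10
X = rng.standard_normal((n_samples, n_features))   # dimension-reduced input
t = rng.uniform(size=(n_samples, 1))               # target outputs

# Hidden-layer weights/biases drawn from a standard normal distribution.
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)

H = sigmoid(X @ W + b)                   # hidden-layer output
out = H.mean(axis=1, keepdims=True)      # stand-in readout (illustrative)
energy = float(np.mean((t - out) ** 2))  # mean-square-error "energy"
```

The scalar `energy` plays the role of $E$ in the annealing step: each candidate $(W, b)$ configuration is scored this way before the accept/reject decision.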
Step S308, optimizing network parameters of the classifier based on the classifier energy and a preset energy conservation annealing strategy.
After the classifier energy $E$ is determined, the network parameters are optimized: the parameters are perturbed, the corresponding energy change is calculated, and the adjusted network parameters are accepted according to a preset acceptance probability. Further, classification performance evaluation is performed on the classifier containing the current network parameters, and the weights of the classifier are adjusted by a dynamic weight adjustment mechanism so as to optimize the network parameters.
1) Under the simulated annealing strategy, a configuration with higher energy (i.e., worse performance) is accepted with a certain probability so as to jump out of locally optimal solutions. The weights $W$ and biases $b$ are randomly perturbed and the energy of the new configuration is calculated. Specifically, the acceptance probability is determined by the current temperature $T$ and can be expressed as:

$$P_{accept} = \exp\!\left( -\frac{\Delta E}{T} \right)$$

where $\Delta E$ is the amount of energy change. If $\Delta E \le 0$, the new solution is always accepted; if $\Delta E > 0$, the new solution is accepted with probability $P_{accept}$.

In one embodiment, the amount of energy change $\Delta E$ is calculated as the energy difference between the current state and the new state. Let $E_{current}$ be the energy of the current state and $E_{new}$ the energy of the new state; then:

$$\Delta E = E_{new} - E_{current}$$
In the present embodiment, the acceptance probability $P_{accept}$ used in the simulated annealing adjustment step is selected as follows: if the energy of the new state is lower than that of the current state (i.e., $\Delta E < 0$), the new state is accepted unconditionally; if the energy of the new state is higher, the new state is accepted with a probability related to the amount of the energy increase and the current temperature.
Annealing update: the temperature $T$ is updated according to the cooling rate $\alpha_{ELM}$, and steps S306-S308 are repeated until the temperature falls below a preset threshold or the maximum number of iterations is reached. Specifically, after a certain number of iterations, the current temperature is updated as:

$$T_{new} = \alpha_{ELM}\, T_{old}$$

where $T_{new}$ is the updated temperature and $T_{old}$ is the temperature before the update.
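The whole annealing loop (perturb, accept with probability $\exp(-\Delta E/T)$, cool by $T \leftarrow \alpha T$) can be sketched generically; the 1-D objective and perturbation stand in for the classifier's energy and parameter perturbation:

```python
import math
import random

random.seed(0)

def anneal(energy_fn, perturb, state, t0=1000.0, alpha=0.95, t_min=1e-3):
    """Accept worse configurations with prob exp(-dE/T); cool by T <- alpha*T."""
    temp = t0
    current, current_e = state, energy_fn(state)
    best, best_e = current, current_e
    while temp > t_min:
        cand = perturb(current)
        cand_e = energy_fn(cand)
        d_e = cand_e - current_e
        if d_e <= 0 or random.random() < math.exp(-d_e / temp):
            current, current_e = cand, cand_e
            if current_e < best_e:            # track the lowest-energy state
                best, best_e = current, current_e
        temp *= alpha
    return best, best_e

# Toy 1-D objective standing in for the classifier energy E.
best, best_e = anneal(lambda x: (x - 3.0) ** 2,
                      lambda x: x + random.uniform(-0.5, 0.5),
                      state=0.0)
```

Early on (high $T$) nearly every move is accepted, which lets the search escape local optima; as $T$ decays only improving moves survive, and the recorded best state is what the final model keeps.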
2) And (3) carrying out weight adjustment by adopting a dynamic weight adjustment mechanism, evaluating classification performance after each iteration is finished, and correspondingly adjusting the weight, wherein the dynamic weight adjustment mechanism can enable the model to pay more attention to samples with wrong classification.
Specifically, let $W$ be the weight matrix of the classifier, $X$ the feature matrix input to the classifier, $Y$ the true label matrix, and $\hat{Y}$ the label matrix predicted by the model. The dynamic weight adjustment mechanism first calculates the classification error $e_i$ of each sample:

$$e_i = \left\| y_i - \hat{y}_i \right\|$$
Further, the classification error is converted into a weight adjustment amount $\Delta w_i$. In one embodiment, a linear transformation is used:

$$\Delta w_i = \kappa\, e_i$$

where $\kappa$ is the conversion coefficient. Further, the weight matrix is updated using the weight adjustment amounts:

$$W_{new} = W_{old} + \overline{\Delta w}, \qquad \overline{\Delta w} = \frac{1}{N} \sum_{i=1}^{N} \Delta w_i$$

where $\overline{\Delta w}$ is the average of all sample weight adjustment amounts, $W_{old}$ is the weight matrix before the update, and $W_{new}$ is the updated weight matrix.
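A minimal sketch of the error-driven update follows. Treating the averaged adjustment as a scalar added to the whole weight matrix is one reading of the averaged-update rule; the per-sample error norm, the linear conversion with κ, and all data are illustrative:

```python
import numpy as np

def update_weights(W, Y_true, Y_pred, kappa=0.1):
    """W_new = W_old + mean_i(kappa * ||y_i - yhat_i||)."""
    e = np.linalg.norm(Y_true - Y_pred, axis=1)   # per-sample error e_i
    delta = kappa * e                             # linear transformation
    return W + delta.mean()

W = np.zeros((4, 2))
Y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
Y_pred = np.array([[1.0, 0.0], [1.0, 0.0]])       # second sample misclassified
W_new = update_weights(W, Y_true, Y_pred, kappa=0.1)
```

Correctly classified samples contribute nothing, so the update grows with the misclassified fraction, which is how the mechanism makes the model attend more to wrongly classified samples.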
And step S310, until the classifier meets the preset performance requirement, constructing a monitoring model based on the current classifier.
In summary, the classifier is trained using the target training samples and the weights and biases at the lowest energy (i.e., best performance) are recorded as parameters of the final model. Further, a monitoring model is constructed based on the classifier containing the parameters.
In summary, according to the artificial-intelligence-based equipment intelligent lubrication monitoring method provided by the embodiment of the invention, when intelligent equipment lubrication monitoring data are processed, the weight and bias parameters of the extreme learning machine are optimized by introducing the concept of energy conservation together with an annealing strategy, improving classification accuracy and robustness. In addition, the training sample set used to train the classifier undergoes feature extraction, data dimension reduction, and dynamic feature reconstruction, which retains key feature information while removing redundant and unimportant information, significantly improves the efficiency of the data representation, and provides more accurate input for subsequent classification and monitoring tasks. Moreover, the processed samples better fit the requirements of the classifier and enhance the ability of the features to describe the equipment state.
The neural network algorithm based on the locally sensitive honeypot optimization algorithm identifies and extracts features critical to determining the equipment lubrication state and filters out irrelevant noise, by utilizing the approximate nearest-neighbor search capability of locality-sensitive hashing on high-dimensional data and the strategies that honeypot technology uses in the security domain for trapping and analyzing malicious behavior. The self-coding neural network algorithm based on network flow optimization effectively reduces the data dimension after feature extraction by encoding before decoding, while retaining key information to improve the accuracy of equipment lubrication monitoring. Adaptive reconstruction of the features is realized by dynamically adjusting parameters during the feature reconstruction process, enhancing the ability of the features to describe the equipment state.
Further, on the basis of the above method embodiment, the embodiment of the present invention further provides an artificial-intelligence-based equipment intelligent lubrication monitoring device, and fig. 4 shows a schematic structural diagram of the device provided in the embodiment of the present invention; as shown in fig. 4, the device includes: the data acquisition module 100, used for acquiring data of a lubrication part of the target equipment to obtain a sample to be monitored, where the sample to be monitored comprises attribute information acquired by various types of sensors on the lubrication part during operation; the data processing module 200, configured to assemble the sample to be monitored and generate a feature vector to be monitored; the execution module 300, configured to input the feature vector to be monitored into a pre-constructed monitoring model and output a classification result, where the monitoring model is constructed based on classification training of the extreme learning machine algorithm, an energy conservation concept and an annealing strategy are introduced to optimize network parameters of the extreme learning machine algorithm, and the training sample set for training the extreme learning machine algorithm is obtained by carrying out feature extraction, data dimension reduction and dynamic feature reconstruction on the original training sample set before classification training; the training sample set comprises a plurality of training samples and corresponding sample labels, wherein the training samples comprise sensor data acquired by a plurality of types of sensors on a lubrication part of the equipment; and the output module 400, used for determining the lubrication state corresponding to the lubrication part based on the classification result.
The intelligent lubrication monitoring device for the equipment based on the artificial intelligence has the same technical characteristics as the intelligent lubrication monitoring method for the equipment based on the artificial intelligence provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
Further, based on the above embodiment, the embodiment of the present invention further provides another intelligent lubrication monitoring device for an apparatus based on artificial intelligence, and fig. 5 shows a schematic structural diagram of another intelligent lubrication monitoring device for an apparatus based on artificial intelligence provided in the embodiment of the present invention, as shown in fig. 5, where the output module 400 is further configured to determine a lubrication status category indicated by a classification result; and determining the lubrication state of the lubrication part according to the lubrication state type.
The executing module 300 is further configured to obtain a pre-constructed training sample set; performing feature extraction on the training sample set through a pre-constructed feature extraction model, performing data dimension reduction on the training sample set after feature extraction through a pre-constructed feature dimension reduction model, and performing feature dynamic reconstruction on the training sample set after data dimension reduction through a pre-set dynamic feature reconstruction network to generate a target training sample; inputting the target training sample into an extreme learning machine algorithm for classification training, and calculating learning energy according to the current classification performance of the extreme learning machine algorithm; optimizing network parameters of an extreme learning machine algorithm based on learning energy and a preset energy conservation annealing strategy; and constructing a monitoring model based on the current extreme learning machine algorithm until the extreme learning machine algorithm meets the preset performance requirement.
The execution module 300 is further configured to adjust network parameters of the extreme learning machine algorithm by using a preset simulated annealing strategy, and to accept the adjusted network parameters according to a preset acceptance probability after calculating the corresponding energy change; and to carry out classification performance evaluation on the extreme learning machine algorithm containing the current network parameters, and adjust the weights of the extreme learning machine algorithm by adopting a dynamic weight adjustment mechanism so as to optimize the network parameters.
The execution module 300 is further configured to initialize a neural network architecture and integrate a locally sensitive hash function into an input layer of the neural network; and, setting a honeypot node in the neural network; the honey pot nodes are provided with sensitivity parameters; mapping the training sample set to a low-dimensional hash space through a local sensitive hash function, and screening the training sample set through a honey pot node to obtain a target training sample; training a neural network by using a target training sample, and adjusting network parameters of the neural network by using a preset optimization algorithm; dynamically adjusting sensitivity parameters of a local sensitive hash function and honey nodes according to the performance of the neural network on a preset verification set; and constructing a feature extraction model based on the current neural network until the neural network meets the performance requirement.
The execution module 300 is further configured to obtain a preset training sample set; perform dimension reduction on the training sample set through a preset self-coding neural network and a network flow optimization mechanism to obtain a feature representation in a low-dimensional feature space and reconstructed data in the original data space; adjust the network parameters of the self-coding neural network and the parameters in the network flow optimization mechanism according to the difference between the reconstructed data and the training sample set; and construct a feature dimension reduction model based on the current self-coding neural network once the difference meets the preset difference requirement.
The execution module 300 is further configured to map, by an encoder, the training sample set to a low-dimensional feature space to obtain a feature representation; introduce a network flow optimization mechanism into the low-dimensional feature space to optimize the feature representation and adjust the connection weights among the network nodes of the encoder; and restore the feature representation to the original data space by a decoder to obtain reconstructed data.
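The encoder/decoder round trip described above can be illustrated with a closed-form linear autoencoder (equivalent to PCA): the encoder projects onto the top-k principal directions and the decoder is its transpose. The network flow optimization mechanism is outside this sketch, and the function names are assumptions.

```python
import numpy as np

def fit_linear_autoencoder(X, k=2):
    """Closed-form linear autoencoder: encoder = top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:k].T  # d x k encoder weights; the decoder reuses W.T
    return W, mean

def encode(X, W, mean):
    # Low-dimensional feature representation
    return (X - mean) @ W

def decode(Z, W, mean):
    # Reconstruction back in the original data space
    return Z @ W.T + mean
```

When the data truly lies in a k-dimensional subspace, the reconstruction is exact; otherwise the residual is the "difference" that the training loop above drives down.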
The device further comprises a construction module 500, configured to acquire sensor data of a lubrication part of the equipment and perform sample labeling on the sensor data to construct an initial sample set, the labeled categories being used for representing the lubrication state of the lubrication part; perform data expansion on the initial sample set based on a generation countermeasure network and a preset qubit-simulated sparse coding process to obtain generated data based on quantum state coding; perform a distance measurement on the generated data, and introduce quantum entropy to perform a diversity measurement on the generated data; calculate a loss function according to the distance measure and the diversity measure, and optimize the generation countermeasure network based on the loss function and an adversarial learning characteristic so as to optimize the quality of the generated data; and merge the generated data of qualified quality with the initial sample set to obtain a training sample set.
The above construction module 500 is further configured such that a generator of the generation countermeasure network receives random noise as an input; and to simulate a sparse coding process through quantum gate operations, map the random noise onto qubits, and perform data expansion on the random noise based on the qubits to obtain generated data based on quantum state coding.
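As an illustrative classical analogue of the generator step: random noise passes through a small MLP, is sparsified by soft-thresholding (standing in for the sparse coding process), and is L2-normalized so each sample resembles a unit-norm quantum-state amplitude vector. The quantum gate simulation itself is abstracted away, and every name and dimension here is an assumption.

```python
import numpy as np

def generator(noise, W1, W2, threshold=0.1):
    h = np.tanh(noise @ W1)                                   # hidden layer
    x = h @ W2                                                # raw outputs
    x = np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)   # soft threshold -> sparse
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                                   # guard against all-zero rows
    return x / norms                                          # unit-norm "state" vectors

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))   # noise dim 4 -> hidden dim 16
W2 = rng.normal(size=(16, 8))   # hidden dim 16 -> output dim 8
samples = generator(rng.normal(size=(5, 4)), W1, W2)
```

The unit-norm constraint mirrors the normalization condition on quantum state amplitudes, which is presumably what "quantum state coding" refers to here.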
An embodiment of the invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the methods shown in figs. 1 to 3. Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, performs the steps of the methods shown in figs. 1 to 3 described above.
An embodiment of the present invention further provides an electronic device, a schematic structural diagram of which is shown in fig. 6. The electronic device includes a processor 61 and a memory 60, where the memory 60 stores computer-executable instructions that can be executed by the processor 61, and the processor 61 executes the computer-executable instructions to implement the methods shown in figs. 1 to 3.
In the embodiment shown in fig. 6, the electronic device further comprises a bus 62 and a communication interface 63, where the processor 61, the communication interface 63 and the memory 60 are connected by the bus 62. The memory 60 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is achieved via at least one communication interface 63 (which may be wired or wireless), and may use the internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 62 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or an AMBA (Advanced Microcontroller Bus Architecture) on-chip bus, where AMBA defines three types of buses: the APB (Advanced Peripheral Bus), the AHB (Advanced High-performance Bus), and the AXI (Advanced eXtensible Interface) bus. The bus 62 may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one bi-directional arrow is shown in fig. 6, but this does not mean that there is only one bus or only one type of bus.
The processor 61 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 61 or by instructions in the form of software. The processor 61 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or another storage medium well known in the art. The storage medium is located in the memory, and the processor 61 reads the information in the memory and, in combination with its hardware, performs the method shown in any of the foregoing figs. 1 to 3.
An embodiment of the invention further provides a computer program product of the artificial-intelligence-based equipment intelligent lubrication monitoring method and device, comprising a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the methods described in the foregoing method embodiments, and specific implementations can be found in the method embodiments, which are not repeated herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working process of the above-described system, which is not described herein again. In addition, in the description of embodiments of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly, and may indicate, for example, a fixed connection, a detachable connection, or an integral connection; a mechanical connection or an electrical connection; a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention will be understood by those skilled in the art on a case-by-case basis.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the orientations or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the orientations or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention for illustrating its technical solution, not for limiting its scope. Although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that any person skilled in the art may still modify or easily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. An intelligent lubrication monitoring method for equipment based on artificial intelligence, which is characterized by comprising the following steps:
Data acquisition is carried out on the lubrication part of the target equipment to obtain a sample to be monitored; the sample to be monitored comprises attribute information acquired by various types of sensors on the lubricating part in the running process;
Assembling the sample to be monitored to generate a feature vector to be monitored;
Inputting the feature vector to be monitored into a pre-constructed monitoring model, and outputting a classification result; the monitoring model is constructed based on classification training of the classifier, and an energy conservation concept and an annealing strategy are introduced to optimize the network parameters of the classifier; the training sample set used to train the classifier is subjected to classification training after feature extraction, data dimension reduction and feature dynamic reconstruction are performed on an original training sample set; the original training sample set is generated based on a diversity measure, the training sample set comprises a plurality of training samples and corresponding sample labels, and the training samples comprise sensor data acquired by a plurality of types of sensors on equipment lubrication parts;
and determining the lubrication state corresponding to the lubrication part based on the classification result.
2. The method of claim 1, wherein the step of determining a lubrication state corresponding to the lubrication site based on the classification result comprises:
determining a lubrication state category indicated by the classification result;
And determining the lubrication state of the lubrication part according to the lubrication state category.
3. The method according to claim 1, wherein the method for constructing the monitoring model comprises:
Acquiring a pre-constructed training sample set;
Performing feature extraction on the training sample set through a pre-constructed feature extraction model, performing data dimension reduction on the training sample set after feature extraction through a pre-constructed feature dimension reduction model, and performing feature dynamic reconstruction on the training sample set after data dimension reduction through a pre-set dynamic feature reconstruction network to generate a target training sample;
inputting the target training sample into a preset classifier for classification training, and calculating classifier energy according to the current classification performance of the classifier;
optimizing network parameters of the classifier based on the classifier energy and a preset energy conservation annealing strategy;
And constructing a monitoring model based on the current classifier until the classifier meets the preset performance requirement.
4. A method according to claim 3, wherein the step of optimizing network parameters of the classifier based on the classifier energy and a preset energy conservation annealing strategy comprises:
Adjusting the network parameters of the classifier by using a preset simulated annealing strategy, and, after calculating a corresponding energy change rate, accepting the adjusted network parameters according to a preset acceptance probability;
and carrying out classification performance evaluation on the classifier containing the current network parameters, and adopting a dynamic weight adjustment mechanism to adjust the weight of the classifier so as to optimize the network parameters.
5. A method according to claim 3, wherein the method of constructing the feature extraction model comprises:
Initializing a neural network architecture, and integrating a locality-sensitive hash function into an input layer of the neural network; setting a honeypot node in the neural network, the honeypot node being provided with a sensitivity parameter;
Mapping the training sample set to a low-dimensional hash space through the locality-sensitive hash function, and screening the training sample set through the honeypot node to obtain a target training sample;
Training the neural network by using the target training sample, and adjusting network parameters of the neural network by using a preset optimization algorithm;
Dynamically adjusting the sensitivity parameters of the locality-sensitive hash function and the honeypot node according to the performance of the neural network on a preset verification set;
And constructing a feature extraction model based on the current neural network until the neural network meets the performance requirement.
6. A method according to claim 3, wherein the method for constructing the feature dimension reduction model comprises:
Acquiring a preset training sample set;
Performing dimension reduction processing on the training sample set through a preset self-coding neural network and a network flow optimization mechanism to obtain feature representation of a low-dimension feature space and reconstruction data of an original data space;
According to the difference between the reconstruction data and the training sample set, adjusting network parameters of the self-coding neural network and adjusting parameters in the network flow optimization mechanism;
and constructing a characteristic dimension reduction model based on the current self-coding neural network until the difference meets the preset difference requirement.
7. The method of claim 6, wherein the step of performing the dimension reduction process on the training sample set through a preset self-coding neural network and a network flow optimization mechanism to obtain the feature representation of the low-dimensional feature space and the reconstructed data of the original data space comprises the steps of:
Mapping the training sample set to a low-dimensional feature space through an encoder to obtain feature representation;
introducing a network flow optimization mechanism into the low-dimensional feature space to optimize the feature representation through the network flow optimization mechanism, and adjusting the connection weight between network nodes of the encoder;
and restoring the characteristic representation to the original data space through a decoder to obtain reconstruction data.
8. The method of claim 1, wherein the step of constructing the training sample set comprises:
acquiring sensor data of a lubrication part of the equipment, and carrying out sample labeling on the sensor data to construct an initial sample set; the marked category is used for representing the lubrication state of the lubrication part;
Performing data expansion on the initial sample set based on a generation countermeasure network and a preset qubit simulation sparse coding process to obtain generation data based on quantum state coding;
Performing distance measurement on the generated data, and introducing quantum entropy to perform diversity measurement on the generated data;
Calculating a loss function from the distance metric and the diversity metric, and optimizing the generation countermeasure network based on the loss function and an adversarial learning characteristic to optimize the quality of the generated data;
and merging the generated data with qualified quality with the initial sample set to obtain a training sample set.
9. The method of claim 8, wherein the step of performing data expansion on the initial sample set based on a generation countermeasure network and a preset qubit simulation sparse coding process to obtain generated data based on quantum state coding comprises:
a generator of the generation countermeasure network receives random noise as an input;
and simulating a sparse coding process through quantum gate operations, mapping the random noise onto qubits, and performing data expansion on the random noise based on the qubits to obtain generated data based on quantum state coding.
10. An intelligent lubrication monitoring device for an artificial intelligence-based apparatus, the device comprising:
The data acquisition module is used for acquiring data of a lubrication part of the target equipment to obtain a sample to be monitored; the sample to be monitored comprises attribute information acquired by various types of sensors on the lubricating part in the running process;
The data processing module is used for assembling the sample to be monitored and generating a feature vector to be monitored;
The execution module is used for inputting the feature vector to be monitored into a pre-constructed monitoring model and outputting a classification result; the monitoring model is constructed based on classification training of an extreme learning machine algorithm, and an energy conservation concept and an annealing strategy are introduced to optimize the network parameters of the extreme learning machine algorithm; the training sample set used to train the extreme learning machine algorithm is subjected to classification training after feature extraction, data dimension reduction and feature dynamic reconstruction are performed on an original training sample set; the original training sample set is generated based on a diversity measure, the training sample set comprises a plurality of training samples and corresponding sample labels, and the training samples comprise sensor data acquired by a plurality of types of sensors on equipment lubrication parts;
And the output module is used for determining the lubrication state corresponding to the lubrication part based on the classification result.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410445565.4A CN118032327A (en) 2024-04-15 2024-04-15 Equipment intelligent lubrication monitoring method and device based on artificial intelligence


Publications (1)

Publication Number Publication Date
CN118032327A true CN118032327A (en) 2024-05-14

Family

ID=91000993



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210048806A1 (en) * 2019-08-16 2021-02-18 Arizona Board Of Regents On Behalf Of Arizona State University System and methods for gray-box adversarial testing for control systems with machine learning components
WO2021139235A1 (en) * 2020-06-30 2021-07-15 平安科技(深圳)有限公司 Method and apparatus for system exception testing, device, and storage medium
CN113328992A (en) * 2021-04-23 2021-08-31 国网辽宁省电力有限公司电力科学研究院 Dynamic honey net system based on flow analysis
CN114565006A (en) * 2021-12-03 2022-05-31 浙江运达风电股份有限公司 Wind driven generator blade damage detection method and system based on deep learning
CN114878162A (en) * 2022-05-18 2022-08-09 武汉科技大学 Ship bearing lubrication state on-line monitoring system based on deep learning
CN114941796A (en) * 2022-07-26 2022-08-26 启东普力马机械有限公司 Intelligent lubrication equipment fault monitoring method based on industrial big data
CN117648643A (en) * 2024-01-30 2024-03-05 山东神力索具有限公司 Rigging predictive diagnosis method and device based on artificial intelligence
CN117874639A (en) * 2024-03-12 2024-04-12 山东能源数智云科技有限公司 Mechanical equipment service life prediction method and device based on artificial intelligence


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Liu Feng: "Research on intelligent lubrication and online monitoring system for the slewing mechanism of portal cranes", Tianjin Science & Technology, no. 1, 15 November 2018 (2018-11-15) *
Sun Zhiyuan; Lu Chengxiang; Shi Zhongzhi; Ma Gang: "Research and progress of deep learning", Computer Science, no. 02, 15 February 2016 (2016-02-15) *
Zhu Songhao; Zhao Yunbin: "Abnormal behavior detection based on semi-supervised generative adversarial networks", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition), no. 04, 31 August 2020 (2020-08-31) *
Li Junliang; Teng Kenan; Xia Fei: "Prediction of state parameters of military aircraft components based on deep learning", Journal of Vibration and Shock, no. 06, 28 March 2018 (2018-03-28) *
Lin Guoying et al.: "Parameter identification of the Jiles-Atherton model for instrument transformers based on an improved artificial fish swarm algorithm", Electrical Measurement & Instrumentation, 10 December 2018 (2018-12-10) *
Wang Yixin et al.: "Predicting the maximum leakage rate of supercritical carbon dioxide pipelines with a physics-model-driven machine learning method", Petroleum Science Bulletin, 28 February 2023 (2023-02-28) *
Wang Taiyong; Wang Tinghu; Wang Peng; Qiao Huihui; Xu Mingda: "Intelligent fault diagnosis method for equipment based on attention-mechanism BiLSTM", Journal of Tianjin University (Science and Technology), no. 06, 27 April 2020 (2020-04-27) *

Similar Documents

Publication Publication Date Title
Reddy et al. A deep neural networks based model for uninterrupted marine environment monitoring
CN108881196A (en) The semi-supervised intrusion detection method of model is generated based on depth
CN117648643B (en) Rigging predictive diagnosis method and device based on artificial intelligence
CN117851921B (en) Equipment life prediction method and device based on transfer learning
CN116881832B (en) Construction method and device of fault diagnosis model of rotary mechanical equipment
CN117892251B (en) Rigging forging process parameter monitoring and early warning method and device based on artificial intelligence
Chu et al. Neural batch sampling with reinforcement learning for semi-supervised anomaly detection
CN113283909B (en) Ether house phishing account detection method based on deep learning
CN117077871B (en) Method and device for constructing energy demand prediction model based on big data
CN116934385B (en) Construction method of user loss prediction model, user loss prediction method and device
CN117892182B (en) Rope durability testing method and device based on artificial intelligence
CN115718826A (en) Method, system, device and medium for classifying target nodes in graph structure data
CN117874639B (en) Mechanical equipment service life prediction method and device based on artificial intelligence
CN117668622B (en) Training method of equipment fault diagnosis model, fault diagnosis method and device
Qin et al. Remaining useful life prediction using temporal deep degradation network for complex machinery with attention-based feature extraction
CN117436595A (en) New energy automobile energy consumption prediction method, device, equipment and storage medium
CN117312865A (en) Nonlinear dynamic optimization-based data classification model construction method and device
CN117578438A (en) Generating countermeasure network method and system for predicting new energy power generation
CN117592595A (en) Method and device for building and predicting load prediction model of power distribution network
CN117349494A (en) Graph classification method, system, medium and equipment for space graph convolution neural network
CN118032327A (en) Equipment intelligent lubrication monitoring method and device based on artificial intelligence
CN117291314B (en) Construction method of energy risk identification model, energy risk identification method and device
CN116155755B (en) Link symbol prediction method based on linear optimization closed sub-graph coding
CN115174421B (en) Network fault prediction method and device based on self-supervision unwrapping hypergraph attention
CN117650528A (en) Photovoltaic power generation prediction method and device based on data mining

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination