CN115796407A - Production line fault prediction method and related equipment - Google Patents

Production line fault prediction method and related equipment

Info

Publication number
CN115796407A
Authority
CN
China
Prior art keywords
sequence
self-attention
module
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310103876.8A
Other languages
Chinese (zh)
Other versions
CN115796407B (en)
Inventor
张高峰
林满满
戴雨卉
雷俊
黄欣莹
周扬迈
薛亚飞
方舟
田璐璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Science and Technology Group Co Ltd
Original Assignee
China Construction Science and Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Science and Technology Group Co Ltd
Priority to CN202310103876.8A
Publication of CN115796407A
Application granted
Publication of CN115796407B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a production line fault prediction method and related equipment. The method comprises: acquiring input sequence data, wherein the input sequence data comprises operation data of the production line at each moment within a preset time period, ordered in time sequence; inputting the input sequence data into an initial feature extraction layer and acquiring the initial sequence feature output by the initial feature extraction layer, wherein the initial sequence feature comprises a feature vector corresponding to the operation data at each moment; generating N split sequence features based on the initial sequence feature; and inputting the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, wherein each self-attention module comprises at least one self-attention layer, and the outputs of the self-attention modules are concatenated and then input into a prediction module to obtain the fault prediction result sequence output by the prediction module. The invention enables fault prediction of the production line without stopping it.

Description

Production line fault prediction method and related equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a production line fault prediction method and related equipment.
Background
A production line comprises various mechanical devices that cooperate to carry out a complex production process. Faults can arise from component aging, the production environment (dust, delayed cleaning) and similar problems affecting the many components that make up the line, and a breakdown followed by shutdown maintenance seriously reduces production efficiency. In the prior art, however, faults are avoided by stopping the line after production finishes and having technicians inspect it; fault prediction cannot be performed while the production line is running normally.
Thus, there is a need for improvements and enhancements in the art.
Disclosure of Invention
In view of the above defects in the prior art, the invention provides a production line fault prediction method, so as to solve the problem that fault prediction cannot be carried out while the production line is running normally.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
In a first aspect of the present invention, a production line fault prediction method is provided, the method comprising:
acquiring input sequence data, wherein the input sequence data comprises operation data of the production line at each moment within a preset time period, ordered in time sequence;
inputting the input sequence data into an initial feature extraction layer, and acquiring the initial sequence feature output by the initial feature extraction layer, wherein the initial sequence feature comprises a feature vector corresponding to the operation data at each moment;
generating N split sequence features based on the initial sequence feature, wherein N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature;
inputting the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, wherein each self-attention module comprises at least one self-attention layer, and the outputs of the self-attention modules are concatenated and then input into a prediction module to obtain the fault prediction result sequence output by the prediction module;
wherein the fault prediction result sequence comprises fault prediction results at each moment within a preset duration after the preset time period.
In the production line fault prediction method described above, generating the N split sequence features based on the initial sequence feature comprises:
selecting the feature vectors of the last 1/M of the (n-1)-th split sequence feature to obtain the n-th split sequence feature, wherein M is a positive integer greater than 1.
In the production line fault prediction method described above, the self-attention module corresponding to the first split sequence feature includes more self-attention layers than the self-attention modules corresponding to the other split sequence features.
In the production line fault prediction method described above, each self-attention module further comprises a down-sampling module, and inputting the N split sequence features into the respective self-attention modules and obtaining the output of each self-attention module comprises:
in each self-attention layer, performing the following operations on the input sequence feature:
acquiring the query vector corresponding to each feature vector in the input sequence feature to obtain a query set;
down-sampling the query vectors in the query set through the down-sampling module to obtain a plurality of target query vectors, wherein at least one element in each target query vector is 0, and obtaining the output of the self-attention layer based on the target query vectors.
In the production line fault prediction method described above, obtaining the output of the self-attention layer based on the target query vectors comprises:
acquiring a key vector and a value vector corresponding to each feature vector in the input sequence features;
obtaining an output of the self-attention layer based on each of the target query vectors, each of the key vectors, and each of the value vectors.
In the production line fault prediction method described above, the parameters of the initial feature extraction layer, the self-attention modules and the prediction module are determined based on the following steps:
determining sample training data in a plurality of training data, wherein each training data comprises sample input sequence data and a fault prediction result label corresponding to the sample input sequence data;
acquiring a sample fault prediction result sequence corresponding to the sample input sequence data according to the initial feature extraction layer, the self-attention module and the prediction module;
and acquiring training loss according to the fault prediction result label and the sample fault prediction result sequence, and updating parameters of the initial feature extraction layer, the self-attention module and the prediction module according to the training loss.
In the production line fault prediction method described above, obtaining the training loss according to the fault prediction result label and the sample fault prediction result sequence comprises:
acquiring a first loss according to the difference between the fault prediction result label and the sample fault prediction result sequence;
acquiring the target query vectors corresponding to the sample input sequence data, acquiring the outputs of the self-attention layers corresponding to all the query vectors in the sample input sequence data, and acquiring a second loss according to the outputs of the self-attention layers corresponding to all the query vectors and the outputs of the self-attention layers corresponding to all the target query vectors;
determining the training loss from the first loss and the second loss.
In a second aspect of the present invention, a production line fault prediction apparatus is provided, comprising:
an input data acquisition unit, configured to acquire input sequence data, where the input sequence data includes operation data of the production line at each moment within a preset time period, ordered in time sequence;
an initial feature extraction unit, configured to input the input sequence data into an initial feature extraction layer and acquire the initial sequence feature output by the initial feature extraction layer, where the initial sequence feature includes a feature vector corresponding to the operation data at each moment;
a feature splitting unit, configured to generate N split sequence features based on the initial sequence feature, where N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature;
a prediction unit, configured to input the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, where each self-attention module includes at least one self-attention layer, and to concatenate the outputs of the self-attention modules and input them into a prediction module to obtain the fault prediction result sequence output by the prediction module;
wherein the fault prediction result sequence includes fault prediction results at each moment within a preset duration after the preset time period.
In a third aspect of the present invention, a terminal is provided, which includes a processor and a computer-readable storage medium communicatively connected to the processor, the computer-readable storage medium being adapted to store a plurality of instructions, and the processor being adapted to call the instructions in the computer-readable storage medium to perform the steps of the production line fault prediction method described in any one of the above.
In a fourth aspect of the present invention, there is provided a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the production line failure prediction method of any one of the above.
Compared with the prior art, the production line fault prediction method provided by the invention collects the operation data of the production line and inputs it into a neural network model for processing to obtain the fault prediction result, so that fault prediction can be realized without stopping the production line.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for production line fault prediction provided by the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of a production line fault prediction device provided by the present invention;
FIG. 3 is a schematic diagram illustrating the principle of an embodiment of the terminal according to the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The production line fault prediction method provided by the invention can be applied to a terminal with computing capability, and the terminal can be, but is not limited to various computers, servers, mobile equipment and the like.
As shown in FIG. 1, one embodiment of a method for predicting a failure of a production line includes the steps of:
s100, input sequence data are obtained, wherein the input sequence data comprise operation data of the production line sequenced according to the time sequence at each moment in a preset time period.
The operation data of the production line comprises operation data of equipment on the production line, such as operation temperature, working voltage and the like of the equipment, and can be acquired through a sensor arranged on the equipment and an acquisition device in communication connection with the equipment.
Arranging the running data of each moment in a period of time according to a time sequence to obtain input sequence data, wherein the interval between every two adjacent moments is fixed for a long time.
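As an illustration only (not part of the original disclosure), the following sketch shows one way such time-ordered input sequence data could be assembled; the dictionary layout, field meanings and sampling interval are assumptions.

```python
# Hypothetical sketch: assemble per-moment operation data (e.g. temperature,
# voltage) sampled at a fixed interval into a time-ordered array of shape (T, D).
import numpy as np

def build_input_sequence(readings: dict, t_start: int, t_end: int, interval_s: int = 60):
    """`readings` is assumed to map a timestamp to a list of D operation values."""
    times = range(t_start, t_end, interval_s)   # fixed interval between adjacent moments
    return np.stack([np.asarray(readings[t], dtype=np.float32) for t in times])
```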
S200, inputting the input sequence data into an initial feature extraction layer, and acquiring the initial sequence feature output by the initial feature extraction layer, wherein the initial sequence feature comprises a feature vector corresponding to the operation data at each moment.
In this embodiment, the input sequence data is processed by a neural network model that outputs the fault prediction result. The neural network model includes an initial feature extraction layer, into which the input sequence data is first input. The initial feature extraction layer may adopt the structure of an existing feature extraction network; through it, the operation data at each moment in the input sequence data is mapped to a feature vector, and these feature vectors form the initial sequence feature.
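A minimal sketch of such an initial feature extraction layer is given below, assuming a simple per-moment linear embedding in PyTorch; the patent leaves the concrete network structure open, so the layer shown here is only one possible choice.

```python
import torch
import torch.nn as nn

class InitialFeatureExtractor(nn.Module):
    """Maps the operation data at each moment to a feature vector (assumed linear embedding)."""
    def __init__(self, num_channels: int, feat_dim: int):
        super().__init__()
        self.proj = nn.Linear(num_channels, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, num_channels) -> initial sequence feature: (batch, T, feat_dim)
        return torch.relu(self.proj(x))
```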
S300, generating N split sequence features based on the initial sequence feature, wherein N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature.
Line maintenance must be scheduled in advance once a fault is predicted, so this embodiment generates a long prediction output sequence, and an accurate long output sequence in turn requires a long input sequence to provide sufficient information. Detail information, however, is easily lost when a long sequence is processed, and the detail information closest in time to the prediction period is the most helpful to the prediction result. To retain this detail information, the initial sequence feature is split into split sequence features, specifically: selecting the feature vectors of the last 1/M of the (n-1)-th split sequence feature to obtain the n-th split sequence feature.
M is a positive integer greater than 1. Taking M = 2 as an example, a plurality of split sequence features are generated from the complete initial sequence feature: the first split sequence feature is the same as the initial sequence feature, the second split sequence feature comprises the feature vectors of the last 1/2 of the initial sequence feature, and the third split sequence feature comprises the feature vectors of the last 1/4 of the initial sequence feature.
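To make the splitting rule concrete, the sketch below generates the split sequence features for the M = 2, N = 3 example above; the tensor shapes and the function name are assumptions for illustration.

```python
import torch

def make_split_features(init_feat: torch.Tensor, n_splits: int = 3, m: int = 2):
    # init_feat: (batch, T, feat_dim) initial sequence feature
    splits = [init_feat]                      # first split = the whole initial sequence feature
    for _ in range(n_splits - 1):
        prev = splits[-1]
        keep = prev.shape[1] // m             # length of the last 1/M of the previous split
        splits.append(prev[:, -keep:, :])     # keep the most recent (detail-rich) time steps
    return splits                             # lengths T, T/2, T/4 for M = 2, N = 3
```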
S400, inputting the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, wherein each self-attention module comprises at least one self-attention layer; the outputs of the self-attention modules are concatenated and then input into the prediction module to obtain the fault prediction result sequence output by the prediction module.
In this embodiment, the prediction module generates the fault prediction result sequence in a single step to reduce prediction time complexity. In the neural network model of this embodiment, a corresponding self-attention module is set for each split sequence feature, and each self-attention module includes at least one self-attention layer. Because the first split sequence feature contains the most data, its self-attention module includes more self-attention layers than the self-attention modules corresponding to the other split sequence features; the self-attention modules are used to extract higher-level information.
In the conventional self-attention mechanism, a query vector, a key vector and a value vector are first calculated for each input feature vector, and each query vector takes the dot product with all key vectors, which requires a large amount of computation and reduces efficiency. To improve efficiency, in this embodiment the self-attention module includes a down-sampling module, and each self-attention layer performs the following operations on the sequence feature input to it:
acquiring the query vector corresponding to each feature vector in the input sequence feature to obtain a query set;
down-sampling the query vectors in the query set through the down-sampling module to obtain a plurality of target query vectors, wherein at least one element in each target query vector is 0, and obtaining the output of the self-attention layer based on the target query vectors.
The query vector corresponding to a feature vector is obtained from the feature vector and the query matrix of the self-attention layer. Because the feature sequences input to the self-attention layers in this embodiment are long, a down-sampling module is provided: the query vector corresponding to each feature vector in the feature sequence is down-sampled, only part of the elements in each query vector are retained, and the remaining elements are set to 0.
It can be seen that the target query vector obtained by down-sampling has the same size as the original query vector, but contains zero elements. Obtaining the output of the self-attention layer based on the target query vectors comprises:
acquiring a key vector and a value vector corresponding to each feature vector in the input sequence features;
the output from the attention layer is obtained based on the respective target query vectors, the respective key vectors and the respective value vectors.
Specifically, the output from the attention layer in the present embodiment is acquired in such a manner that the self-attention output is acquired based on the query vector query, the key vector key, and the value vector value in the conventional self-attention mechanism.
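The sketch below illustrates one self-attention layer with a down-sampling module in PyTorch. The selection rule used for down-sampling (keeping the k largest-magnitude elements of each query vector and zeroing the rest) is an assumption; the patent only requires that part of the elements be retained and the remaining elements be set to 0.

```python
import math
import torch
import torch.nn as nn

class DownsampledSelfAttention(nn.Module):
    def __init__(self, feat_dim: int, keep: int):
        super().__init__()
        self.q = nn.Linear(feat_dim, feat_dim)
        self.k = nn.Linear(feat_dim, feat_dim)
        self.v = nn.Linear(feat_dim, feat_dim)
        self.keep = keep                       # elements retained per query vector (keep <= feat_dim)

    def downsample(self, queries: torch.Tensor) -> torch.Tensor:
        # Zero all but the `keep` largest-magnitude elements of each query vector (assumed rule).
        idx = queries.abs().topk(self.keep, dim=-1).indices
        mask = torch.zeros_like(queries).scatter_(-1, idx, 1.0)
        return queries * mask                  # target query vectors with zero elements

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, T, feat_dim) sequence feature input to this self-attention layer
        q = self.downsample(self.q(feats))     # target query vectors
        k, v = self.k(feats), self.v(feats)
        scores = q @ k.transpose(-2, -1) / math.sqrt(feats.shape[-1])
        return torch.softmax(scores, dim=-1) @ v   # output of the self-attention layer
```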
The output of a self-attention layer is likewise a sequence of feature vectors, which is passed to the next part of the neural network. If a self-attention module contains only one self-attention layer, the output of that layer is the output of the module; if a self-attention module contains several self-attention layers, the input of each self-attention layer is the output of the preceding layer, and the output of the final layer is the output of the module.
The features output by the self-attention modules are concatenated and then input into the prediction module; the prediction module may adopt the structure of an existing decoder and outputs the fault prediction result sequence.
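Putting the pieces together, a minimal end-to-end sketch is given below, reusing the classes sketched above. The layer depths, the mean-pooling of each module's output before concatenation, and the linear head standing in for the decoder-style prediction module are all illustrative assumptions, not the patent's prescribed implementation.

```python
import torch
import torch.nn as nn

class LineFaultPredictor(nn.Module):
    def __init__(self, num_channels: int, feat_dim: int, pred_len: int,
                 keep: int = 16, depths=(3, 2, 1)):
        super().__init__()
        self.extract = InitialFeatureExtractor(num_channels, feat_dim)
        # one self-attention module per split; the first (full-length) split gets the most layers
        self.attn_modules = nn.ModuleList(
            nn.ModuleList(DownsampledSelfAttention(feat_dim, keep) for _ in range(d))
            for d in depths
        )
        self.predict = nn.Linear(feat_dim * len(depths), pred_len)  # stand-in prediction module

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.extract(x)                                     # (batch, T, feat_dim)
        outs = []
        for split, layers in zip(make_split_features(feats, len(self.attn_modules)),
                                 self.attn_modules):
            h = split
            for layer in layers:                                    # each layer feeds the next
                h = layer(h)
            outs.append(h.mean(dim=1))                              # pool the module's output (assumption)
        joint = torch.cat(outs, dim=-1)                             # concatenate the module outputs
        return self.predict(joint)                                  # one prediction per future moment
```

Here the fault prediction result sequence is represented as one value per future moment; a per-moment classification head would work analogously.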
The parameters of the initial feature extraction layer, the self-attention module and the prediction module in the foregoing are determined based on the training process of the neural network model. The specific training process is as follows:
determining sample training data in the plurality of training data, wherein each training data comprises sample input sequence data and a fault prediction result label corresponding to the sample input sequence data;
acquiring a sample fault prediction result sequence corresponding to sample input sequence data according to the initial feature extraction layer, the self-attention module and the prediction module;
and acquiring training loss according to the fault prediction result label and the sample fault prediction result sequence, and updating parameters of the initial feature extraction layer, the self-attention module and the prediction module according to the training loss.
To further improve training efficiency and the learning capability of the neural network model, this embodiment sets a corresponding loss term for the output of the down-sampling module, so that the model can maintain its prediction accuracy even though the down-sampling reduces computational complexity by discarding part of the information. Specifically, obtaining the training loss according to the fault prediction result label and the sample fault prediction result sequence includes:
acquiring a first loss according to the difference between the fault prediction result label and the sample fault prediction result sequence;
acquiring target query vectors corresponding to the sample input sequence data, acquiring the output of the self-attention layer corresponding to all the query vectors in the sample input sequence data, and acquiring a second loss according to the output of the self-attention layer corresponding to all the query vectors and the output of the self-attention layer corresponding to the target query vectors;
a training loss is determined based on the first loss and the second loss.
It can be understood that, to ensure that information important to the prediction result is not lost when the query vectors are down-sampled, the attention output computed with the down-sampled query vectors should be as close as possible to the attention output computed with the original query vectors. Therefore, during training, in addition to the self-attention layer output corresponding to the down-sampled target query vectors, the output corresponding to the original query vectors is also computed, and the second loss is obtained from the difference between the two. The training loss is then obtained from the second loss together with the difference between the sample fault prediction result sequence and the fault prediction result label, and the parameters of each module of the neural network model are updated accordingly.
Further, to prevent training from drifting toward setting as few elements of the target query vectors to 0 as possible merely to minimize the first loss, this embodiment also provides a third loss; that is, determining the training loss according to the first loss and the second loss includes:
determining a third loss based on the number of 0 elements in the target query vector corresponding to the sample input sequence;
a training loss is determined based on the first loss, the second loss, and the third loss.
That is, for the sake of computational efficiency it is desirable that the number of 0 elements in the target query vectors be as large as possible, so that the time complexity of the computation is reduced.
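A minimal sketch of the overall training loss is given below, assuming mean-squared error for the first and second losses and taking the fraction of non-zero elements in the target query vectors as the third loss, so that more zero elements lower the loss; the weights lambda2 and lambda3 are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def training_loss(pred_seq, label_seq, attn_out_full, attn_out_downsampled,
                  target_queries, lambda2: float = 1.0, lambda3: float = 0.01):
    # first loss: difference between the sample fault prediction result sequence and its label
    loss1 = F.mse_loss(pred_seq, label_seq)
    # second loss: difference between attention outputs for the original and the target queries
    loss2 = F.mse_loss(attn_out_downsampled, attn_out_full)
    # third loss: fewer non-zero (i.e. more zero) elements in the target query vectors is better
    loss3 = (target_queries != 0).float().mean()
    return loss1 + lambda2 * loss2 + lambda3 * loss3
```

With a learned down-sampling rule (for example a gating network) the third term would be trainable; with the fixed top-k rule sketched earlier it only monitors sparsity.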
In summary, this embodiment provides a production line fault prediction method, which collects the operation data of a production line and inputs it into a neural network model for processing to obtain the fault prediction result, so that fault prediction can be realized without stopping the production line.
It should be understood that, although the steps in the flowcharts of this specification are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least part of the steps in the flowcharts may include multiple sub-steps or stages; these are not necessarily completed at the same moment but may be performed at different moments, and they are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Based on the above embodiment, the present invention further provides a production line fault prediction apparatus, as shown in fig. 2, the production line fault prediction apparatus includes:
an input data acquisition unit, configured to acquire input sequence data, where the input sequence data includes operation data of the production line at each moment within a preset time period, ordered in time sequence, as specifically described in the first embodiment;
an initial feature extraction unit, configured to input the input sequence data into an initial feature extraction layer and acquire the initial sequence feature output by the initial feature extraction layer, where the initial sequence feature includes a feature vector corresponding to the operation data at each moment, as specifically described in the first embodiment;
a feature splitting unit, configured to generate N split sequence features based on the initial sequence feature, where N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature, as specifically described in the first embodiment;
a prediction unit, configured to input the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, where each self-attention module includes at least one self-attention layer, and to concatenate the outputs of the self-attention modules and input them into a prediction module to obtain the fault prediction result sequence output by the prediction module, as specifically described in the first embodiment;
wherein the fault prediction result sequence includes fault prediction results at each moment within a preset duration after the preset time period.
Based on the above embodiment, the present invention further provides a terminal, as shown in fig. 3, where the terminal includes a processor 10 and a memory 20. Fig. 3 shows only some of the components of the terminal, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or memory of the terminal. In other embodiments the memory 20 may also be an external storage device of the terminal, such as a plug-in hard disk provided on the terminal, a smart media card (SMC), a secure digital (SD) card, a flash card, and the like. Further, the memory 20 may also include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing application software installed in the terminal and various kinds of data, and may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a production line fault prediction program 30, and the production line fault prediction program 30 can be executed by the processor 10 to implement the production line fault prediction method of the present application.
The processor 10 may in some embodiments be a central processing unit (CPU), microprocessor or other chip, and is used to run the program code stored in the memory 20 or to process data, for example to execute the production line fault prediction method.
In one embodiment, the following steps are implemented when the processor 10 executes the production line fault prediction program 30 in the memory 20:
acquiring input sequence data, wherein the input sequence data comprises operation data of the production line at each moment within a preset time period, ordered in time sequence;
inputting the input sequence data into an initial feature extraction layer, and acquiring the initial sequence feature output by the initial feature extraction layer, wherein the initial sequence feature comprises a feature vector corresponding to the operation data at each moment;
generating N split sequence features based on the initial sequence feature, wherein N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature;
inputting the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, wherein each self-attention module comprises at least one self-attention layer, and the outputs of the self-attention modules are concatenated and then input into a prediction module to obtain the fault prediction result sequence output by the prediction module;
wherein the fault prediction result sequence comprises fault prediction results at each moment within a preset duration after the preset time period.
Wherein generating the N split sequence features based on the initial sequence feature comprises:
selecting the feature vectors of the last 1/M of the (n-1)-th split sequence feature to obtain the n-th split sequence feature.
Wherein the self-attention module corresponding to the first split sequence feature includes more self-attention layers than the self-attention modules corresponding to the other split sequence features.
Wherein each self-attention module further includes a down-sampling module, and inputting the N split sequence features into the respective self-attention modules and obtaining the output of each self-attention module includes:
in each self-attention layer, performing the following operations on the input sequence feature:
acquiring the query vector corresponding to each feature vector in the input sequence feature to obtain a query set;
down-sampling the query vectors in the query set through the down-sampling module to obtain a plurality of target query vectors, wherein at least one element in each target query vector is 0, and obtaining the output of the self-attention layer based on the target query vectors.
Wherein the obtaining the output of the self-attention layer based on the target query vector comprises:
acquiring a key vector and a value vector corresponding to each feature vector in the input sequence features;
obtaining an output of the self-attention layer based on each of the target query vectors, each of the key vectors, and each of the value vectors.
Wherein the parameters of the initial feature extraction layer, the self-attention module, and the prediction module are determined based on:
determining sample training data in a plurality of training data, wherein each training data comprises sample input sequence data and a fault prediction result label corresponding to the sample input sequence data;
acquiring a sample fault prediction result sequence corresponding to the sample input sequence data according to the initial feature extraction layer, the self-attention module and the prediction module;
and acquiring training loss according to the fault prediction result label and the sample fault prediction result sequence, and updating parameters of the initial feature extraction layer, the self-attention module and the prediction module according to the training loss.
Wherein the obtaining of the training loss according to the failure prediction result label and the sample failure prediction result sequence comprises:
acquiring a first loss according to the difference between the fault prediction result label and the sample fault prediction result sequence;
acquiring the target query vectors corresponding to the sample input sequence data, acquiring the outputs of the self-attention layers corresponding to all the query vectors in the sample input sequence data, and acquiring a second loss according to the outputs of the self-attention layers corresponding to all the query vectors and the outputs of the self-attention layers corresponding to all the target query vectors;
determining the training loss from the first loss and the second loss.
The present invention also provides a computer readable storage medium having stored thereon one or more programs, the one or more programs being executable by one or more processors to perform the steps of the production line failure prediction method as described above.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of predicting production line faults, the method comprising:
acquiring input sequence data, wherein the input sequence data comprises operation data of the production line at each moment within a preset time period, ordered in time sequence;
inputting the input sequence data into an initial feature extraction layer, and acquiring the initial sequence feature output by the initial feature extraction layer, wherein the initial sequence feature comprises a feature vector corresponding to the operation data at each moment;
generating N split sequence features based on the initial sequence feature, wherein N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature;
inputting the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, wherein each self-attention module comprises at least one self-attention layer, and the outputs of the self-attention modules are concatenated and then input into a prediction module to obtain the fault prediction result sequence output by the prediction module;
wherein the fault prediction result sequence comprises fault prediction results at each moment within a preset duration after the preset time period.
2. The production line fault prediction method of claim 1, wherein the generating N split sequence features based on the initial sequence features comprises:
selecting the feature vectors of the last 1/M of the (n-1)-th split sequence feature to obtain the n-th split sequence feature.
3. The production line failure prediction method of claim 1, wherein the number of self-attention layers included in the self-attention module corresponding to a first split sequence feature is higher than the number of self-attention layers included in the self-attention modules corresponding to other split sequence features.
4. The method for predicting production line faults as claimed in claim 1, wherein the self-attention module further comprises a down-sampling module, and the inputting the N split sequence features into the self-attention modules respectively to obtain the output of each self-attention module comprises:
in each of the self-attention layers, the following operations are performed on the input sequence features:
acquiring a query vector corresponding to each feature vector in the input sequence features to obtain a query set;
down-sampling the query vectors in the query set through the down-sampling module to obtain a plurality of target query vectors, wherein at least one element in each target query vector is 0, and obtaining the output of the self-attention layer based on the target query vectors.
5. The production line fault prediction method of claim 4, wherein the obtaining the output of the self-attention layer based on the target query vector comprises:
acquiring a key vector and a value vector corresponding to each feature vector in the input sequence features;
obtaining an output of the self-attention layer based on each of the target query vectors, each of the key vectors, and each of the value vectors.
6. The production line fault prediction method of claim 4, wherein the parameters of the initial feature extraction layer, the self-attention module, and the prediction module are determined based on the steps of:
determining sample training data in a plurality of training data, wherein each training data comprises sample input sequence data and a fault prediction result label corresponding to the sample input sequence data;
acquiring a sample fault prediction result sequence corresponding to the sample input sequence data according to the initial feature extraction layer, the self-attention module and the prediction module;
and acquiring training loss according to the fault prediction result label and the sample fault prediction result sequence, and updating parameters of the initial feature extraction layer, the self-attention module and the prediction module according to the training loss.
7. The production line fault prediction method of claim 6, wherein the obtaining training loss from the fault prediction result labels and the sample fault prediction result sequence comprises:
acquiring a first loss according to the difference between the fault prediction result label and the sample fault prediction result sequence;
acquiring the target query vectors corresponding to the sample input sequence data, acquiring the outputs of the self-attention layers corresponding to all the query vectors in the sample input sequence data, and acquiring a second loss according to the outputs of the self-attention layers corresponding to all the query vectors and the outputs of the self-attention layers corresponding to all the target query vectors;
determining the training loss from the first loss and the second loss.
8. A production line failure prediction apparatus, comprising:
an input data acquisition unit, configured to acquire input sequence data, the input sequence data including operation data of the production line at each moment within a preset time period, ordered in time sequence;
an initial feature extraction unit, configured to input the input sequence data into an initial feature extraction layer and acquire the initial sequence feature output by the initial feature extraction layer, the initial sequence feature including a feature vector corresponding to the operation data at each moment;
a feature splitting unit, configured to generate N split sequence features based on the initial sequence feature, where N is a positive integer greater than 1, the first split sequence feature is the same as the initial sequence feature, and the n-th split sequence feature is a partial feature vector of the (n-1)-th split sequence feature;
a prediction unit, configured to input the N split sequence features into respective self-attention modules to obtain the output of each self-attention module, where each self-attention module includes at least one self-attention layer, and to concatenate the outputs of the self-attention modules and input them into a prediction module to obtain the fault prediction result sequence output by the prediction module;
wherein the fault prediction result sequence includes fault prediction results at each moment within a preset duration after the preset time period.
9. A terminal, characterized in that the terminal comprises: a processor, a computer readable storage medium communicatively connected to the processor, the computer readable storage medium adapted to store a plurality of instructions, the processor adapted to invoke the instructions in the computer readable storage medium to perform the steps of implementing the production line failure prediction method of any of the above claims 1-7.
10. A computer readable storage medium, storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the production line failure prediction method as claimed in any one of claims 1 to 7.
CN202310103876.8A 2023-02-13 2023-02-13 Production line fault prediction method and related equipment Active CN115796407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310103876.8A CN115796407B (en) 2023-02-13 2023-02-13 Production line fault prediction method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310103876.8A CN115796407B (en) 2023-02-13 2023-02-13 Production line fault prediction method and related equipment

Publications (2)

Publication Number Publication Date
CN115796407A true CN115796407A (en) 2023-03-14
CN115796407B CN115796407B (en) 2023-05-23

Family

ID=85430975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310103876.8A Active CN115796407B (en) 2023-02-13 2023-02-13 Production line fault prediction method and related equipment

Country Status (1)

Country Link
CN (1) CN115796407B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110824587A (en) * 2019-11-01 2020-02-21 上海眼控科技股份有限公司 Image prediction method, image prediction device, computer equipment and storage medium
CN112651782A (en) * 2020-12-30 2021-04-13 中国平安人寿保险股份有限公司 Behavior prediction method, device, equipment and medium based on zoom dot product attention
CN112487820A (en) * 2021-02-05 2021-03-12 南京邮电大学 Chinese medical named entity recognition method
CN114493014A (en) * 2022-01-28 2022-05-13 湖南大学 Multivariate time series prediction method, multivariate time series prediction system, computer product and storage medium
CN114660993A (en) * 2022-05-25 2022-06-24 中科航迈数控软件(深圳)有限公司 Numerical control machine tool fault prediction method based on multi-source heterogeneous data feature dimension reduction
CN114783418A (en) * 2022-06-20 2022-07-22 天津大学 End-to-end voice recognition method and system based on sparse self-attention mechanism

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116184930A (en) * 2023-03-22 2023-05-30 中科航迈数控软件(深圳)有限公司 Fault prediction method, device, equipment and storage medium for numerical control machine tool
CN117289685A (en) * 2023-11-27 2023-12-26 青岛创新奇智科技集团股份有限公司 Production line fault prediction and self-healing method and system based on artificial intelligence
CN117289685B (en) * 2023-11-27 2024-02-02 青岛创新奇智科技集团股份有限公司 Production line fault prediction and self-healing method and system based on artificial intelligence

Also Published As

Publication number Publication date
CN115796407B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN115796407A (en) Production line fault prediction method and related equipment
US20180174036A1 (en) Hardware Accelerator for Compressed LSTM
CN111325159B (en) Fault diagnosis method, device, computer equipment and storage medium
CN114676647B (en) Numerical control machine tool part service life prediction method based on deep learning method
US11257001B2 (en) Prediction model enhancement
CN115293057B (en) Wind driven generator fault prediction method based on multi-source heterogeneous data
CN112231224A (en) Business system testing method, device, equipment and medium based on artificial intelligence
CN113255792B (en) Data anomaly point detection method, device, system and storage medium
CN114660993B (en) Numerical control machine tool fault prediction method based on multi-source heterogeneous data feature dimension reduction
CN113723956A (en) Abnormity monitoring method, device, equipment and storage medium
CN111882074A (en) Data preprocessing system, method, computer device and readable storage medium
CN117231590A (en) Fault prediction system and method for hydraulic system
CN117454190A (en) Log data analysis method and device
CN116361567A (en) Data processing method and system applied to cloud office
CN116450393A (en) Log anomaly detection method and system integrating BERT feature codes and variant transformers
CN112507059B (en) Event extraction method and device in public opinion monitoring in financial field and computer equipment
CN116821638B (en) Data analysis method and system for AI chip application optimization design
CN116658489B (en) Hydraulic system fault diagnosis method and system based on digital twinning
CN113129049B (en) File configuration method and system for model training and application
CN115601363B (en) Assembly type building product defect detection method based on small target detection algorithm
CN114385785A (en) Rapid reasoning method and system supporting high-concurrency large-scale generation type language model
CN116627761B (en) PHM modeling and modeling auxiliary system and method based on big data frame
CN117725543B (en) Multi-element time sequence anomaly prediction method, electronic equipment and storage medium
CN116980307A (en) Operation and maintenance fault analysis method and device and computer equipment
CN114186031A (en) System fault prediction method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant