CN116562311B - Operation and maintenance method and system based on natural language machine translation

Operation and maintenance method and system based on natural language machine translation

Info

Publication number
CN116562311B
CN116562311B (application CN202310826255.2A)
Authority
CN
China
Prior art keywords
maintenance
model
data
training
module
Prior art date
Legal status
Active
Application number
CN202310826255.2A
Other languages
Chinese (zh)
Other versions
CN116562311A (en)
Inventor
王步云
徐维南
胡伟
张静雯
张贻辉
刘道学
戚小东
潘成浩
李兵
张佰亿
Current Assignee
Anhui Shuzhi Construction Research Institute Co ltd
China Tiesiju Civil Engineering Group Co Ltd CTCE Group
Original Assignee
Anhui Shuzhi Construction Research Institute Co ltd
China Tiesiju Civil Engineering Group Co Ltd CTCE Group
Priority date
Filing date
Publication date
Application filed by Anhui Shuzhi Construction Research Institute Co ltd and China Tiesiju Civil Engineering Group Co Ltd (CTCE Group)
Priority to CN202310826255.2A
Publication of CN116562311A
Application granted
Publication of CN116562311B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/40 Processing or translation of natural language
    • G06F 40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0499 Feedforward networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an operation and maintenance method and system based on natural language machine translation, belonging to the technical field of information processing. Aiming at the problem that operation and maintenance systems in the prior art cannot dynamically extend data operation and maintenance requirements, the invention provides an operation and maintenance method and an operation and maintenance system based on natural language machine translation. The operation and maintenance method based on natural language machine translation comprises the following steps: setting Chinese operation and maintenance questions and operation and maintenance code answers for the data operation and maintenance domain, and screening the optimal answer; preprocessing the Chinese operation and maintenance questions with a Chinese word vector pre-training model, and performing data preprocessing on the operation and maintenance codes; compressing a Transformer model, and training with the compressed Transformer model; applying multi-head attention layer parameter quantization to the trained model to obtain the final model; and carrying out operation and maintenance work with the finally trained model. The operation and maintenance system can dynamically extend operation and maintenance requirements and manage the operation and maintenance system, with short training time and high running efficiency.

Description

Operation and maintenance method and system based on natural language machine translation
Technical Field
The invention relates to the technical field of information processing, in particular to an operation and maintenance method and system based on natural language machine translation.
Background
Traditional operation and maintenance faces massive operation and maintenance data; to stop losses quickly and make decisions, the analysis and judgment of human experts often take several hours or more. In data monitoring especially, static operation and maintenance means are usually adopted: server logs are collected by timed tasks and databases are queried, with automatic analysis applied only to fixed, predefined data problems, so operation and maintenance problems cannot be dynamically extended and analyzed. Intelligent operation and maintenance technology introduces machine learning and deep learning to dynamically extend operation and maintenance problems, realizing ask-and-answer dynamic operation and maintenance.
Existing data operation and maintenance has several technical difficulties. First, it feeds data back through timed tasks issued for specified requirements: if a requirement is added or extended, a new data operation and maintenance task must be created before the data operation and maintenance can run in the background. Second, each newly added data operation and maintenance requirement needs a new function menu on the operation and maintenance system interface to display its feedback. These extension steps are cumbersome and inefficient.
Existing operation and maintenance approaches based on natural language processing are also applied to fault elimination or direct response. For example, Chinese patent application No. 201710508456.2, published 2017-11-10, discloses a method and device for operation and maintenance troubleshooting through natural language processing, the method comprising: analyzing the fault phenomena, fault causes and fault solutions in historical data, and establishing a fault knowledge database of the association relations among fault phenomena, fault causes and fault solutions; when a fault occurs, acquiring the system logs related to the fault; extracting key fields of the fault phenomena from the fault knowledge database and matching them against the system logs; and, when a key field in the fault knowledge database is successfully matched with the system log, pushing the fault solution associated with that key field. The beneficial effects are that the established fault knowledge database allows relevant fault-handling schemes to be quickly retrieved when a fault occurs, which greatly improves fault-clearing efficiency and helps users reduce fault recovery time; however, the method cannot automatically handle a new operation and maintenance task, and can operate only after new operation and maintenance requirements are added manually.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the problem that operation and maintenance systems in the prior art cannot dynamically extend data operation and maintenance requirements, the invention provides an operation and maintenance method and an operation and maintenance system based on natural language machine translation, which can dynamically extend operation and maintenance requirements, realize management of the operation and maintenance system, and achieve short training time and high running efficiency.
2. Technical proposal
The aim of the invention is achieved by the following technical scheme.
The operation and maintenance method based on natural language machine translation comprises the following steps:
data acquisition
Setting Chinese operation and maintenance questions and operation and maintenance code answers for the data operation and maintenance domain, and screening the optimal answer;
data preprocessing
Preprocessing the Chinese operation and maintenance questions with a Chinese word vector pre-training model, and performing data preprocessing on the operation and maintenance codes;
the compressed Transfomer model is an NLP classical model, and on the machine translation task, the Transfomer performance exceeds that of RNN and CNN, and the compressed Transfomer model can achieve good effect only by a coder/decoder and can be parallelized efficiently.
Adopting a Transfomer model as a machine translation framework, wherein each full-connection feedforward neural network layer in an encoder module adopts a weight sharing mode, and each full-connection feedforward neural network layer in a block of a decoder adopts a weight sharing mode to obtain a compressed Transfomer model;
model training
Training with the compressed Transformer model;
parameter quantization
For the trained model, multi-head attention layer parameter quantization is adopted to obtain a final model;
and carrying out operation and maintenance work by adopting the finally trained model.
Furthermore, the screening in the data acquisition gives several operation and maintenance codes for the same operation and maintenance question, and scores the codes on code running efficiency and database security indexes; the answer with the highest score is the optimal answer.
Further, the data preprocessing steps for the operation and maintenance codes are as follows:
word splitting is carried out on all operation and maintenance answers, and all split words, key database tables and key table fields are unified into one translation dictionary; the dictionary length, i.e. the size of the set of split words plus database key tables and fields, is denoted dict_len;
a fixed sentence length seq_len is set for the operation and maintenance answers, and answers of insufficient length are padded with 0;
the operation and maintenance answers are one-hot encoded, the encoding dimension of each operation and maintenance answer being [seq_len, dict_len];
association matrix information between database key fields and key tables is established: where a key field belongs to a data table, one value is set at the association position, and where data tables are directly associated by a foreign key, another value is set;
the association matrix information table is queried and its values are added directly onto the operation and maintenance answer codes: where the code contains a field belonging to a data table, one value is added, and where it contains a data table directly associated by a foreign key, another value is added;
an embedding training layer is set up with initial parameters of dimension [dict_len, hid_size], and matrix multiplication with the operation and maintenance answer code changes the dimension of the answer code to [seq_len, hid_size];
position embedding information is added to the Chinese operation and maintenance questions and the operation and maintenance answers respectively;
the Chinese question embedding plus position embedding codes are used as model input, and the operation and maintenance answer embedding plus position embedding codes are used as output.
Further, the calculation formula of the position embedding information is as follows:
$PE_{(pos,\,2i)} = \sin\left(pos / 10000^{2i/hid\_size}\right)$, $PE_{(pos,\,2i+1)} = \cos\left(pos / 10000^{2i/hid\_size}\right)$
where PE is a vector with the same dimension as the word embedding, whose even and odd positions are filled by the sine and cosine terms respectively; pos is the position of the word in the sentence, with value range [0, seq_len); i is the dimension-pair index of an element within the vector, with value range [0, hid_size/2); and hid_size is the dimension of the embedding layer.
Furthermore, the Transformer model specifically comprises an encoder and a decoder. The encoder contains 6 modules, namely blocks, each with a multi-head attention layer and a fully-connected feed-forward neural network layer; the decoder contains 6 modules, each with a multi-head attention layer, a cross-attention layer and a fully-connected feed-forward neural network layer. The fully-connected feed-forward neural network layers in the 6 encoder modules adopt weight sharing, keeping their parameters consistent during training, and the fully-connected feed-forward neural network layers in the 6 decoder blocks adopt weight sharing, keeping their parameters consistent.
Further, the multi-head attention layer parameters are quantized as follows:
let W be the trained weights of the multi-head attention layer; k centroids are set as k linearly spaced values between min(W) and max(W); each weight in W is replaced with the nearest centroid value; and the centroid positions are fine-tuned by retraining the model with back-propagation through the centroids.
Further, all data sets are randomly divided into training sets and test sets according to a set proportion.
Furthermore, the multi-head attention layer parameter quantization is performed on the multi-head attention layer in a Kmeans-based manner.
A system for the above operation and maintenance method based on natural language machine translation comprises:
the data acquisition module is used for acquiring data, setting Chinese operation and maintenance questions and operation and maintenance code answers aiming at the data operation and maintenance direction, and screening optimal answers;
the data preprocessing module is used for preprocessing aiming at the Chinese operation and maintenance problem by using a Chinese word vector pre-training model and preprocessing data aiming at operation and maintenance codes;
the compression model module is used for compressing the Transformer model;
the training module is used for training with the compressed Transformer model;
the parameter quantization module adopts multi-head attention layer parameter quantization for the trained model to obtain a final model;
and the operation and maintenance module is used for carrying out operation and maintenance work by adopting a final trained model.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the operation and maintenance method based on natural language machine translation as described above.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
the scheme mainly adopts a deep learning method, applies a coder-decoder framework of a Transfomer algorithm, adopts full-connection layer weight sharing and initial parameter quantization to compress a model aiming at the characteristics of excessive parameters and overlong training time on the basis, and ensures the code translation effect on the basis of reducing parameter redundancy. The whole translation effect is good, and the efficiency is higher.
Drawings
FIG. 1 is a schematic flow diagram of an operation and maintenance system according to the present invention;
FIG. 2 is a schematic diagram of a code translation framework of the present invention;
FIG. 3 is a schematic diagram of a conventional Transformer framework;
FIG. 4 is a block diagram of the encoder and decoder of the present embodiment;
fig. 5 is a fully connected feed forward network weight sharing framework.
Detailed Description
The invention will now be described in detail with reference to the drawings and the accompanying specific examples.
Example 1
Aiming at the problem that existing operation and maintenance systems cannot dynamically extend data operation and maintenance, this scheme adopts a natural language question-answering mode: a newly added data operation and maintenance question is input into the operation and maintenance system, a cloud algorithm module translates the Chinese operation and maintenance question into a data operation and maintenance language, and the algorithm module then connects to the data operation and maintenance end, inputs the translated data operation and maintenance language, and feeds the operation and maintenance result back to the operation and maintenance system. As shown in fig. 1, the key to the intelligent operation and maintenance system realizing this extensible question-answering method is the algorithm model that translates Chinese operation and maintenance questions into operation and maintenance codes. The model mainly adopts deep learning with the Transformer encoder-decoder framework; on this basis, to address the excessive parameters and overlong training time, the model is compressed with fully-connected layer weight sharing and parameter quantization, ensuring the code translation effect while reducing parameter redundancy. The overall translation effect is good and the efficiency is higher.
The code translation framework shown in fig. 2 performs data collection, data preprocessing, model parameter compression, and the corresponding natural language recognition and Chinese-to-code translation.
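The question-to-code flow can be sketched in Python as below. This is an illustrative sketch only: the helper functions translate_question and execute_ops_code are hypothetical stand-ins for the cloud algorithm module and the data operation and maintenance end, since the scheme specifies the flow (translate, execute, feed back) but not this API.

```python
# Illustrative sketch of the dynamic question-answering flow. The two helpers
# are hypothetical stand-ins; the scheme does not define this API.

def translate_question(question_zh: str) -> str:
    """Stand-in for the Transformer translation model built in steps 1-5."""
    return "select count(*) from t_score where score > 0.9"  # placeholder output

def execute_ops_code(ops_code: str) -> str:
    """Stand-in for the data operation and maintenance end that runs the code."""
    return f"executed: {ops_code}"

def handle_new_question(question_zh: str) -> str:
    """A new question is answered without adding background tasks or menus."""
    return execute_ops_code(translate_question(question_zh))

print(handle_new_question("昨日得分大于0.9的记录有多少条?"))
```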
The main parts of the scheme comprise: 1. adding data association information to the operation and maintenance answer codes for data enhancement; 2. the design of the Transformer model compression.
The specific steps are as follows,
step one: and (5) data acquisition.
A large number of Chinese operation and maintenance questions and operation and maintenance code answers are designed for the data operation and maintenance domain.
Several operation and maintenance codes are given for the same operation and maintenance question, and the codes are scored on indexes such as code running efficiency and database security; the answer with the highest score is the optimal answer. The correspondence between operation and maintenance questions and the final optimal code answers is shown in Table 1:
Table 1: correspondence between operation and maintenance questions and optimal code answers
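A minimal Python sketch of this screening rule follows; the equal weighting of running efficiency and database security is an assumption, since the scheme does not fix the weights or the scoring scale.

```python
# Hedged sketch of answer screening: several candidate codes per question are
# scored on running efficiency and database security, and the highest-scoring
# candidate is kept. The 50/50 weights are an assumption.

def best_answer(candidates, w_eff=0.5, w_sec=0.5):
    """candidates: list of (ops_code, efficiency_score, security_score)."""
    return max(candidates, key=lambda c: w_eff * c[1] + w_sec * c[2])[0]

candidates = [
    ("select * from t_score", 0.6, 0.4),
    ("select score from t_score where score > 0.9", 0.9, 0.8),
]
print(best_answer(candidates))  # -> the higher-scoring second candidate
```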
Step two: data preprocessing
For the Chinese operation and maintenance questions, a word embedding model is used, such as the 768-dimensional Chinese word vector pre-training model Chinese_L-12_H-768_A-12. The data preprocessing steps for the operation and maintenance codes are as follows:
Word splitting is performed on all operation and maintenance answers. For example, select score from ... where score > 0.9 and date = '...' is split into [select, score, from, where, score, 0.9, and, date, =, '...'].
The split words, the key database tables and the key table fields are unified into one translation dictionary; the dictionary length, i.e. the size of the set of split words plus database key tables and fields, is denoted dict_len below. For example:
{'select': 1, 'score': 2, 'and': 3, 'from': 4, ···}.
setting a fixed sentence length for operation and maintenance answersAnd (5) supplementing 0 for the length shortage.
One-hot encoding is performed on the operation and maintenance answers; each operation and maintenance answer is encoded with dimension [seq_len, dict_len].
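The splitting, dictionary lookup, padding and one-hot encoding above can be sketched as follows; dict_len and seq_len are the reconstructed symbol names, and reserving id 0 for padding is an assumption.

```python
import numpy as np

# Sketch of the answer-side preprocessing: map tokens to dictionary ids, pad
# with 0 to the fixed sentence length, then one-hot encode to a
# [seq_len, dict_len] matrix. Reserving id 0 for padding is an assumption.

def one_hot_answer(tokens, vocab, seq_len):
    dict_len = len(vocab) + 1                   # id 0 reserved for padding
    ids = [vocab.get(t, 0) for t in tokens][:seq_len]
    ids += [0] * (seq_len - len(ids))           # pad short answers with 0
    out = np.zeros((seq_len, dict_len), dtype=np.float32)
    out[np.arange(seq_len), ids] = 1.0
    return out

vocab = {"select": 1, "score": 2, "and": 3, "from": 4}  # toy dictionary
x = one_hot_answer(["select", "score", "from"], vocab, seq_len=8)
print(x.shape)  # (8, 5)
```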
Association matrix information between database key fields and key tables is established: where a key field belongs to a data table, 1 is set at the association position, and where data tables are directly associated by a foreign key, 2 is set.
Table 2: database key field and key table association matrix information
The association matrix information table is queried and its values are added directly onto the operation and maintenance answer codes: where the code contains a field belonging to a data table, 1 is added; where it contains a data table directly associated by a foreign key, 2 is added. The operation and maintenance answer code dimension remains [seq_len, dict_len].
An embedding training layer is set up with initial parameters of dimension [dict_len, hid_size]; matrix multiplication with the operation and maintenance answer code changes the dimension of the answer code to [seq_len, hid_size].
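The data enhancement and embedding projection can be sketched as below; the association values 1 and 2 follow the text, while the random initialization of the embedding matrix and the toy shapes are assumptions.

```python
import numpy as np

# Sketch of data enhancement plus embedding: association values (1 where a
# field belongs to a table, 2 where tables are foreign-key linked) are added
# onto the one-hot answer code, which a trainable [dict_len, hid_size] matrix
# then projects to [seq_len, hid_size]. Initialization is an assumption.

rng = np.random.default_rng(0)
seq_len, dict_len, hid_size = 8, 5, 16

code = np.zeros((seq_len, dict_len), dtype=np.float32)   # one-hot answer code
assoc = np.zeros((seq_len, dict_len), dtype=np.float32)  # looked up from Table 2
assoc[2, 3] = 1.0   # position 2 holds a field belonging to the table at id 3
assoc[4, 1] = 2.0   # position 4 holds a table foreign-key linked to id 1

enhanced = code + assoc                                   # data enhancement
embed = rng.normal(size=(dict_len, hid_size)).astype(np.float32)
answer_embedding = enhanced @ embed                       # -> [seq_len, hid_size]
print(answer_embedding.shape)                             # (8, 16)
```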
Position embedding information is added to the Chinese operation and maintenance questions and the operation and maintenance answers respectively. The position encoding is computed as:
$PE_{(pos,\,2i)} = \sin\left(pos / 10000^{2i/hid\_size}\right)$, $PE_{(pos,\,2i+1)} = \cos\left(pos / 10000^{2i/hid\_size}\right)$
PE is a vector with the same dimension as the word embedding, whose even and odd positions are filled by the sine and cosine terms respectively. pos is the position of the word in the sentence, with value range [0, seq_len); i is the dimension-pair index of an element within the vector, with value range [0, hid_size/2) (for example, with word embedding dimension 512, vector elements 0 and 1 have i = 0, elements 2 and 3 have i = 1, ..., elements 510 and 511 have i = 255); hid_size is the dimension of the embedding layer.
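The formula above is the standard Transformer sinusoidal position encoding; a direct NumPy rendering:

```python
import numpy as np

# PE(pos, 2i)   = sin(pos / 10000**(2i/hid_size))
# PE(pos, 2i+1) = cos(pos / 10000**(2i/hid_size))

def position_encoding(seq_len: int, hid_size: int) -> np.ndarray:
    pos = np.arange(seq_len)[:, None]           # word positions, [0, seq_len)
    two_i = np.arange(0, hid_size, 2)[None, :]  # 2i for each dimension pair
    angle = pos / np.power(10000.0, two_i / hid_size)
    pe = np.zeros((seq_len, hid_size), dtype=np.float32)
    pe[:, 0::2] = np.sin(angle)                 # even dimensions
    pe[:, 1::2] = np.cos(angle)                 # odd dimensions
    return pe

print(position_encoding(seq_len=8, hid_size=512)[0, :4])  # [0. 1. 0. 1.]
```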
The Chinese question embedding plus position embedding codes are used as model input, and the operation and maintenance answer embedding plus position embedding codes are used as output. Of course, the specific encoding mode can be adjusted; any of the various Chinese word vector encoding modes can be used.
Step three: the Transformer model is compressed.
The model adopts the Transformer as the main machine translation framework; the conventional Transformer model framework is shown in fig. 3. Since the conventional Transformer model has excessive parameters and long training time, a compression scheme for the Transformer model is designed, improving model training speed with a minimal reduction in precision.
As a seq2seq training model, the Transformer is mainly divided into encoder and decoder modules. The encoder module contains 6 modules, namely blocks, each with a multi-head attention layer and a fully-connected feed-forward neural network layer, as shown in fig. 4; the decoder module likewise contains 6 blocks, each with a multi-head attention layer, a cross-attention layer and a fully-connected feed-forward neural network layer. Weight sharing is adopted for the fully-connected feed-forward neural network layers in the 6 encoder blocks, keeping their parameters consistent during training and reducing the parameters 6-fold relative to the original encoder framework; weight sharing is likewise adopted for the fully-connected feed-forward neural network layers in the 6 decoder blocks, keeping their parameters consistent and reducing the parameters 6-fold relative to the original decoder framework. This greatly improves the training efficiency of the compressed model. The sharing scheme is shown in fig. 5. Of course, the Transformer weight sharing may also be changed to weight sharing of other layers, referring to the optimization in the ALBERT paper.
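A minimal PyTorch sketch of the sharing scheme follows: one feed-forward network instance is reused by all 6 encoder blocks (the decoder is symmetric). The layer sizes are illustrative assumptions, and layer normalization is omitted for brevity.

```python
import torch.nn as nn

# Sketch of the compression: a single fully-connected feed-forward network
# instance is shared by all 6 encoder blocks, so its parameters are stored
# once instead of 6 times. Sizes are assumptions; layer norm omitted.

hid_size, ffn_size, n_blocks, n_heads = 512, 2048, 6, 8

shared_ffn = nn.Sequential(
    nn.Linear(hid_size, ffn_size), nn.ReLU(), nn.Linear(ffn_size, hid_size)
)

class EncoderBlock(nn.Module):
    def __init__(self, ffn: nn.Module):
        super().__init__()
        self.attn = nn.MultiheadAttention(hid_size, n_heads)  # per-block weights
        self.ffn = ffn                                        # shared weights

    def forward(self, x):
        a, _ = self.attn(x, x, x)
        return self.ffn(x + a)

encoder = nn.ModuleList(EncoderBlock(shared_ffn) for _ in range(n_blocks))
# nn.Module.parameters() deduplicates shared tensors, so the FFN counts once:
print(sum(p.numel() for p in encoder.parameters()))
```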
Step four: and (5) model training.
The specific parameter configuration of the training phase is shown in the following table:
Table 3: training-phase parameter configuration
Step five: parameter quantization
For the trained model, multi-head attention layer parameter quantization is adopted to obtain fewer parameter values while keeping substantially the same machine translation effect. The specific approach is as follows:
Let W be the trained weights of the multi-head attention layer. k centroids are set as k linearly spaced values between min(W) and max(W).
Each W weight is replaced with the nearest centroid value.
The centroid positions are fine-tuned by retraining the model with back-propagation through the centroids. In forward propagation, the centroid value stored at each weight's index is used as the weight; in back-propagation, the gradient of a centroid is the sum of the gradients of all weights in its cluster.
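A NumPy sketch of this quantization and the centroid-gradient rule, under the assumption that the gradient of a centroid sums the gradients of its cluster members:

```python
import numpy as np

# Sketch of linear-centroid quantization: k centroids are spaced evenly
# between min(W) and max(W), each weight is replaced by its nearest centroid,
# and the index matrix supports the fine-tuning pass.

def quantize(W: np.ndarray, k: int):
    centroids = np.linspace(W.min(), W.max(), k)        # k linear intervals
    idx = np.abs(W[..., None] - centroids).argmin(-1)   # nearest centroid id
    return centroids, idx                               # W_q = centroids[idx]

def centroid_grad(grad_W: np.ndarray, idx: np.ndarray, k: int) -> np.ndarray:
    """Back-propagation through centroids: sum gradients per cluster (assumed)."""
    return np.array([grad_W[idx == c].sum() for c in range(k)])

W = np.random.default_rng(0).normal(size=(4, 4))
centroids, idx = quantize(W, k=8)
print(np.abs(W - centroids[idx]).max())  # maximum quantization error
```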
All data sets are randomly divided into training and test sets at a ratio of 8:2; the final index results are shown in the table below.
Table 4: final indexes for the data set split
Meanwhile, model training time is reduced by the weight sharing of the fully-connected feed-forward neural network layers. The training server is configured with 4 GPUs, model NVIDIA Tesla V100-SXM2.
Table 5: comparison of training time between the conventional method and our method
In addition, the multi-head attention layer parameters are quantized in the Kmeans manner, reducing the parameters of the whole framework and improving running efficiency.
Table 6: comparison of GPU efficiency with and without the quantization operation
With the above approach, running efficiency is higher and training time is shortened, making the overall operation and maintenance faster.
A system for the above natural language machine translation operation and maintenance method comprises:
the data acquisition module is used for acquiring data, setting Chinese operation and maintenance questions and operation and maintenance code answers aiming at the data operation and maintenance direction, and screening optimal answers;
the data preprocessing module is used for preprocessing aiming at the Chinese operation and maintenance problem by using a Chinese word vector pre-training model and preprocessing data aiming at operation and maintenance codes;
the compression model module is used for compressing the Transformer model;
the training module is used for training with the compressed Transformer model;
the parameter quantization module adopts multi-head attention layer parameter quantization for the trained model to obtain a final model;
and the operation and maintenance module is used for carrying out operation and maintenance work by adopting a final trained model.
The storage medium for the operation and maintenance method based on natural language machine translation may be a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above operation and maintenance method based on natural language machine translation.
A specific implementation of the device can be achieved with the prior art and is not explained in more detail here; the possibilities of a corresponding implementation are explained in principle below.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures.
Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by hardware entity modules. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code before compiling must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL), of which there is not just one but many, such as Verilog. It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples include, but are not limited to, the ATMEL AT89S52 and Microchip PIC16C57 microcontrollers. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in pure computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component; or even the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices. For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.

Claims (9)

1. An operation and maintenance method based on natural language machine translation, comprising the following steps:
data acquisition
Setting Chinese operation and maintenance questions and operation and maintenance code answers for the data operation and maintenance domain, and screening the optimal answer;
data preprocessing
Preprocessing the Chinese operation and maintenance questions with a Chinese word vector pre-training model, and performing data preprocessing on the operation and maintenance codes; the data preprocessing steps for the operation and maintenance codes are as follows:
word splitting is carried out on all operation and maintenance answers, and all split words, key database tables and key table fields are unified into one translation dictionary; the dictionary length, i.e. the size of the set of split words plus database key tables and fields, is denoted dict_len;
a fixed sentence length seq_len is set for the operation and maintenance answers, and answers of insufficient length are padded with 0;
the operation and maintenance answers are one-hot encoded, the encoding dimension of each operation and maintenance answer being [seq_len, dict_len];
association matrix information between database key fields and key tables is established: where a key field belongs to a data table, one value is set at the association position, and where data tables are directly associated by a foreign key, another value is set;
the association matrix information table is queried and its values are added directly onto the operation and maintenance answer codes: where the code contains a field belonging to a data table, one value is added, and where it contains a data table directly associated by a foreign key, another value is added;
an embedding training layer is set up with initial parameters of dimension [dict_len, hid_size], and matrix multiplication with the operation and maintenance answer code changes the dimension of the answer code to [seq_len, hid_size];
Respectively adding position embedding information into the Chinese operation and maintenance questions and the operation and maintenance answers;
the Chinese question embedding plus position embedding codes are used as model input, and the operation and maintenance answer embedding plus position embedding codes are used as output;
Compressed Transformer model
The Transformer model is adopted as the machine translation framework; each fully-connected feed-forward neural network layer in the encoder modules adopts weight sharing, and each fully-connected feed-forward neural network layer in the decoder modules adopts weight sharing, obtaining the compressed Transformer model;
model training
Training with the compressed Transformer model;
parameter quantization
For the trained model, multi-head attention layer parameter quantization is adopted to obtain a final model;
and carrying out operation and maintenance work by adopting the finally trained model.
2. The operation and maintenance method based on natural language machine translation according to claim 1, wherein the screening in the data acquisition gives several operation and maintenance codes for the same operation and maintenance question and scores the codes on code running efficiency and database security indexes; the answer with the highest score is the optimal answer.
3. The natural language machine translation based operation and maintenance method according to claim 2,
the calculation formula of the position embedding information is as follows:
$PE_{(pos,\,2i)} = \sin\left(pos / 10000^{2i/hid\_size}\right)$, $PE_{(pos,\,2i+1)} = \cos\left(pos / 10000^{2i/hid\_size}\right)$
where PE is a vector with the same dimension as the word embedding, whose even and odd positions are filled by the sine and cosine terms respectively; pos is the position of the word in the sentence, with value range [0, seq_len); i is the dimension-pair index of an element within the vector, with value range [0, hid_size/2); and hid_size is the dimension of the embedding layer.
4. The operation and maintenance method based on natural language machine translation according to any one of claims 1 to 3, wherein the Transformer model specifically comprises an encoder and a decoder module; the encoder module contains 6 modules, each with a multi-head attention layer and a fully-connected feed-forward neural network layer; the decoder module contains 6 modules, each with a multi-head attention layer, a cross-attention layer and a fully-connected feed-forward neural network layer; the fully-connected feed-forward neural network layers in the 6 encoder modules adopt weight sharing, keeping their parameters consistent during training, and the fully-connected feed-forward neural network layers in the 6 decoder modules adopt weight sharing, keeping their parameters consistent.
5. The operation and maintenance method based on natural language machine translation according to claim 4, wherein
the specific mode of quantifying the parameters of the multi-head attention layer is as follows:
let W be the trained weights of the multi-head attention layer; k centroids are set as k linearly spaced values between min(W) and max(W); each weight in W is replaced with the nearest centroid value; and the centroid positions are fine-tuned by retraining the model with back-propagation through the centroids.
6. The natural language machine translation based operation and maintenance method according to claim 1, wherein all data sets are randomly divided into training sets and test sets according to a set ratio.
7. The operation and maintenance method based on natural language machine translation according to claim 1, wherein the multi-head attention layer parameter quantization is performed on the multi-head attention layer in a Kmeans-based manner.
8. A system based on the operation and maintenance method of natural language machine translation according to any one of claims 1 to 7, comprising:
the data acquisition module is used for acquiring data, setting Chinese operation and maintenance questions and operation and maintenance code answers aiming at the data operation and maintenance direction, and screening optimal answers;
the data preprocessing module is used for preprocessing aiming at the Chinese operation and maintenance problem by using a Chinese word vector pre-training model and preprocessing data aiming at operation and maintenance codes;
the compression model module is used for compressing the Transformer model;
the training module is used for training with the compressed Transformer model;
the parameter quantization module adopts multi-head attention layer parameter quantization for the trained model to obtain a final model;
and the operation and maintenance module is used for carrying out operation and maintenance work by adopting a final trained model.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the natural language machine translation based operation and maintenance method according to any one of claims 1 to 7.
CN202310826255.2A 2023-07-07 2023-07-07 Operation and maintenance method and system based on natural language machine translation Active CN116562311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310826255.2A CN116562311B (en) 2023-07-07 2023-07-07 Operation and maintenance method and system based on natural language machine translation


Publications (2)

Publication Number Publication Date
CN116562311A (en) 2023-08-08
CN116562311B (en) 2023-12-01

Family

ID=87502217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310826255.2A Active CN116562311B (en) 2023-07-07 2023-07-07 Operation and maintenance method and system based on natural language machine translation

Country Status (1)

Country Link
CN (1) CN116562311B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727592A (en) * 2017-10-31 2019-05-07 上海幻电信息科技有限公司 O&M instruction executing method, medium and terminal based on natural language speech interaction
CN109977213A (en) * 2019-03-29 2019-07-05 南京邮电大学 A kind of optimal answer selection method towards intelligent Answer System
CN111444730A (en) * 2020-03-27 2020-07-24 新疆大学 Data enhancement Weihan machine translation system training method and device based on Transformer model
CN112685434A (en) * 2020-12-21 2021-04-20 福建新大陆软件工程有限公司 Operation and maintenance question-answering method based on knowledge graph
CN113704437A (en) * 2021-09-03 2021-11-26 重庆邮电大学 Knowledge base question-answering method integrating multi-head attention mechanism and relative position coding
CN114418088A (en) * 2021-12-28 2022-04-29 南京大学 Model training method
CN114547329A (en) * 2022-01-25 2022-05-27 阿里巴巴(中国)有限公司 Method for establishing pre-training language model, semantic analysis method and device
CN115659947A (en) * 2022-10-25 2023-01-31 武汉览山科技有限公司 Multi-item selection answering method and system based on machine reading understanding and text summarization

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162799B (en) * 2018-11-28 2023-08-04 腾讯科技(深圳)有限公司 Model training method, machine translation method, and related devices and equipment
EP3819809A1 (en) * 2019-11-08 2021-05-12 PolyAI Limited A dialogue system, a method of obtaining a response from a dialogue system, and a method of training a dialogue system
US20220067486A1 (en) * 2020-09-02 2022-03-03 Sap Se Collaborative learning of question generation and question answering
KR20220118123A (en) * 2021-02-18 2022-08-25 현대자동차주식회사 Qestion and answer system and method for controlling the same


Also Published As

Publication number Publication date
CN116562311A (en) 2023-08-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant