WO2022160442A1 - Answer generation method and apparatus, electronic device, and readable storage medium - Google Patents

Answer generation method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2022160442A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
corpus
answer
text
question
Prior art date
Application number
PCT/CN2021/082863
Other languages
French (fr)
Chinese (zh)
Inventor
李雷来
王健宗
瞿晓阳
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2022160442A1 publication Critical patent/WO2022160442A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition

Definitions

  • the present application relates to the field of intelligent decision-making, and in particular, to an answer generation method, apparatus, electronic device, and readable storage medium.
  • the inventor realizes that, to obtain a well-performing intelligent question-answering model, a massive corpus is usually required to train a question-answering model with a huge number of parameters.
  • however, the training process demands substantial computing resources and takes a long time; constrained by cost and time,
  • people usually reduce the corpus and the model structure, but this leaves the performance of the trained intelligent question-answering model insufficient and the accuracy of the answers it matches low. Therefore, an answer generation method is urgently needed to improve the accuracy of answer generation.
  • the answer generation method provided in this application includes:
  • establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model;
  • obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answering model;
  • parsing an answer generation request sent by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining, from a third database, target text matching the entity recognition result;
  • inputting the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and determining the target answer corresponding to the target question based on the probability of the answer start word and the probability of the answer end word.
  • the present application also provides an answer generation device, the device comprising:
  • the first training module is used to establish a computer cluster, obtain the first corpus from the first database, and control the computer cluster to perform the first distributed training on the initial question-answer model based on the first corpus to obtain the first question-and-answer model;
  • a second training module configured to obtain a second corpus from a second database, and control the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
  • the entity recognition module is used to parse an answer generation request sent by a user through a client, obtain the target question carried by the request, perform word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtain, from a third database, target text matching the entity recognition result;
  • an answer determination module, used to input the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and to determine the target answer corresponding to the target question based on those probabilities.
  • the present application also provides an electronic device, the electronic device comprising:
  • the memory stores an answer generation program executable by the at least one processor; when executed by the at least one processor, the answer generation program enables the at least one processor to perform the steps of the answer generation method described above.
  • the present application also provides a computer-readable storage medium on which an answer generation program is stored; the answer generation program can be executed by one or more processors to implement the steps of the answer generation method described above.
  • FIG. 1 is a schematic flowchart of an answer generation method provided by an embodiment of the present application.
  • FIG. 2 is a schematic block diagram of an answer generating apparatus provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device for implementing an answer generation method provided by an embodiment of the present application.
  • the present application provides an answer generation method.
  • Referring to FIG. 1, it is a schematic flowchart of an answer generation method provided by an embodiment of the present application.
  • the method may be performed by an electronic device, which may be implemented by software and/or hardware.
  • the answer generation method includes:
  • Establish a computer cluster, obtain a first corpus from a first database, and control the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model.
  • the first corpus includes public data obtained from Baidu Know, Baidu Encyclopedia, and other channels, as well as domain-specific data crawled by crawler programs, such as data in healthcare, finance, and sports.
  • the controlling the computer cluster to perform the first distributed training on the initial question answering model based on the first corpus includes steps A11-A13:
  • A11. Set the maximum number of iteration rounds for the first distributed training, perform masking and labeling processing on the first corpus, and obtain a labeled third corpus;
  • performing masking and labeling processing on the first corpus to obtain the labeled third corpus includes steps B11-B14:
  • B11. Extract a first preset number of first texts from the first corpus, randomly adjust the order of the sentences in each first text, take the adjusted text as a first sample, take the original sentence order of the first text as the label of the first sample, and take the set of labeled first samples as a first sample set;
  • B12. Extract a second preset number of second texts from the first corpus, randomly mask a third preset number of words in each second text, take the masked text as a second sample, take the masked words as the label of the second sample, and take the set of labeled second samples as a second sample set;
  • B13. Extract a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus, randomly replace a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain a replaced text, set the label of the replaced text to a first value (for example, 0), set the label of the third texts to a second value (for example, 1), and take the set of labeled replaced texts and third texts as a third sample set;
  • B14. Take the set of the first sample set, the second sample set, and the third sample set as the third corpus.
  • A12. Obtain hardware resource information of each computing node in the computer cluster, split the third corpus into multiple sub-corpora based on the hardware resource information, and distribute the multiple sub-corpora to the respective computing nodes, so that each computing node trains the initial question-answering model on its sub-corpus;
  • the third corpus may be distributed to the computing nodes according to the quantity of a particular hardware resource (for example, graphics cards), or a total hardware resource score may be calculated for each computing node from the quantity of each hardware resource and its corresponding weight, and the third corpus distributed to the computing nodes based on the total scores.
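  • As an illustration of the weighted-score distribution just described, the following sketch splits a corpus across nodes in proportion to a weighted hardware score; the resource names and weights are assumptions, not values from the patent:

```python
RESOURCE_WEIGHTS = {"gpus": 0.7, "cpu_cores": 0.2, "ram_gb": 0.1}  # illustrative weights

def split_corpus(samples, nodes):
    # Score each node by the weighted quantities of its hardware resources,
    # then hand each node a share of the corpus proportional to its score.
    scores = [sum(w * node[r] for r, w in RESOURCE_WEIGHTS.items()) for node in nodes]
    total = sum(scores)
    shares, start = [], 0
    for score in scores[:-1]:
        end = start + round(len(samples) * score / total)
        shares.append(samples[start:end])
        start = end
    shares.append(samples[start:])   # the last node takes the remainder
    return shares

# Usage: the node with roughly twice the resources gets roughly twice the data.
nodes = [{"gpus": 8, "cpu_cores": 64, "ram_gb": 512},
         {"gpus": 4, "cpu_cores": 32, "ram_gb": 256}]
print([len(s) for s in split_corpus(list(range(10_000)), nodes)])  # ~[6667, 3333]
```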
  • the initial question-answering model is a BERT model whose structure is a stack of 12 transformer layers.
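  • As a point of reference (not part of the patent), a 12-layer BERT of this kind can be instantiated with the Hugging Face transformers library:

```python
from transformers import BertConfig, BertForPreTraining

config = BertConfig(num_hidden_layers=12)  # 12 stacked transformer layers
model = BertForPreTraining(config)         # heads for masked-word and sentence-level pre-training tasks
print(config.num_hidden_layers)            # -> 12
```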
  • A13. Receive the model gradients fed back by the computing nodes, update the model parameters of the initial question-answering model based on those gradients, and send the updated model parameters to the computing nodes, so that each computing node updates the initial question-answering model with the updated parameters and continues training based on the updated model; when the maximum number of iteration rounds is reached, the first distributed training ends.
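  • Schematically, steps A11-A13 follow a synchronous parameter-server pattern; in this sketch the server and node objects and their methods are assumed interfaces, not APIs named in the patent:

```python
def run_first_distributed_training(server, nodes, max_rounds: int):
    """Synchronous loop: nodes train locally, the server aggregates (step A13)."""
    for _ in range(max_rounds):
        # each node computes a gradient on its own sub-corpus (step A12)
        gradients = [node.compute_gradient() for node in nodes]
        # the server updates the shared parameters from the fed-back gradients
        new_params = server.update_parameters(gradients)
        # updated parameters go back out so every node resumes from the same model
        for node in nodes:
            node.load_parameters(new_params)
```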
  • the calculation formula of the model gradient is $g_{ti} = \frac{1}{T_{ti}} \sum_{j=1}^{T_{ti}} \nabla l\left(x_{t(ij)}, s_{t(ij)}\right)$, where $g_{ti}$ is the model gradient corresponding to the $i$-th computing node in the computer cluster at the $t$-th iteration, $T_{ti}$ is the total number of samples participating in training in the sub-corpus of the $i$-th computing node at the $t$-th iteration, $x_{t(ij)}$ is the $j$-th sample participating in training in that sub-corpus, $s_{t(ij)}$ is the label of the $j$-th sample, and $l(\cdot)$ is the output of the initial question-answering model.
  • the updating of the model parameters of the initial question-answering model based on the model gradient includes steps C11-C13:
  • C11. Calculate the average of the model gradients, substitute the average into the convergence parameter calculation formula to obtain the convergence parameter of the initial question-answering model, and substitute the average into the update parameter calculation formula to obtain the update parameter of the initial question-answering model;
  • the convergence parameter calculation formula is $m_t = \alpha\, m_{t-1} + \alpha_t\, g_t$, where $m_t$ is the convergence parameter at the $t$-th iteration, $\alpha$ is the first balance hyperparameter with a fixed value (usually 0.9), $m_{t-1}$ is the convergence parameter at the $(t-1)$-th iteration, $g_t$ is the average of the model gradients at the $t$-th iteration, and $\alpha_t$ is the second balance hyperparameter learned at the $t$-th iteration.
  • the update parameter calculation formula is $v_t = \beta\, v_{t-1} + \beta_t\, g_t^2$, where $v_t$ is the update parameter at the $t$-th iteration, $\beta$ is the third balance hyperparameter with a fixed value (usually 0.9), $v_{t-1}$ is the update parameter at the $(t-1)$-th iteration, $g_t$ is the average of the model gradients at the $t$-th iteration, and $\beta_t$ is the fourth balance hyperparameter learned at the $t$-th iteration.
  • the purpose of introducing the convergence parameter in this embodiment is to speed up the convergence of the model at the position/dimension where the gradient changes are small, and the purpose of introducing the update parameter is to obtain better model parameters through training.
  • C12. Substitute the convergence parameter and the update parameter into the scaling rate calculation formula to obtain the scaling rate of the initial question-answering model, and substitute the initial model parameters and initial learning rate of the initial question-answering model into the learning rate calculation formula to obtain the new learning rate of the initial question-answering model;
  • the scaling rate calculation formula is $r_t = \frac{m_t}{\sqrt{v_t} + \epsilon}$, where $r_t$ is the scaling rate at the $t$-th iteration, $m_t$ is the convergence parameter at the $t$-th iteration, $v_t$ is the update parameter at the $t$-th iteration, and $\epsilon$ is the fifth balance hyperparameter with a fixed value.
  • the scaling rate determined by the convergence parameter and the update parameter is used to adjust the scaling of the model parameters.
  • the learning rate calculation formula is $U_{ti} = u_t \cdot \min\{\max(\lVert p_{ti} \rVert,\ \gamma),\ \delta\}$, where $u_t$ is the learning rate of the initial question-answering model at the $t$-th iteration, $U_{ti}$ is the new learning rate of the $i$-th layer parameters of the initial question-answering model at the $t$-th iteration, $p_{ti}$ is the $i$-th layer model parameters of the initial question-answering model at the $t$-th iteration, $\gamma$ is the sixth balance hyperparameter with a fixed value, and $\delta$ is the seventh balance hyperparameter with a fixed value.
  • through the above formula, the learning rate is automatically adjusted according to the parameters of the current layer of the model, so that better model parameters are obtained by training and the answers matched by the final question-answering model are more accurate.
  • the model parameter calculation formula is $p_{(t+1)i} = p_{ti} - U_{ti}\left(r_t + \lambda\, p_{ti}\right)$, where $p_{(t+1)i}$ is the $i$-th layer model parameter of the initial question-answering model at the $(t+1)$-th iteration, $p_{ti}$ is the $i$-th layer model parameter at the $t$-th iteration, $U_{ti}$ is the new learning rate of the $i$-th layer parameters at the $t$-th iteration, $r_t$ is the scaling rate at the $t$-th iteration, and $\lambda$ is the eighth balance hyperparameter with a fixed value.
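  • Pulling the formulas above together, one iteration of the layer-wise update can be sketched in NumPy as follows; since the published equations render as images, the reconstruction (and the default hyperparameter values) should be read as an approximation rather than the patent's exact optimizer:

```python
import numpy as np

def update_layer(p, m_prev, v_prev, g_mean, u_t,
                 alpha=0.9, alpha_t=0.1, beta=0.9, beta_t=0.1,
                 eps=1e-6, gamma=1e-3, delta=10.0, lam=0.01):
    # convergence parameter (C11): speeds up convergence where gradients change little
    m = alpha * m_prev + alpha_t * g_mean
    # update parameter (C11): a second-moment-style accumulator
    v = beta * v_prev + beta_t * g_mean ** 2
    # scaling rate (C12)
    r = m / (np.sqrt(v) + eps)
    # layer-wise learning rate (C12): clipped norm of this layer's parameters
    U = u_t * min(max(np.linalg.norm(p), gamma), delta)
    # parameter update (lam stands in for the eighth balance hyperparameter)
    p_next = p - U * (r + lam * p)
    return p_next, m, v

# Usage: one update of a 2x2 "layer" from its averaged gradient.
p = np.ones((2, 2)); m0 = np.zeros_like(p); v0 = np.zeros_like(p)
p1, m1, v1 = update_layer(p, m0, v0, g_mean=np.full_like(p, 0.5), u_t=0.01)
```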
  • the sending of the updated model parameters to each computing node includes steps D11-D13:
  • model parameters are quantized and compressed to reduce the amount of transmitted data and improve transmission efficiency.
  • the quantization process uses float16 quantization.
  • under normal circumstances, a 4-byte (32-bit) representation is required for each parameter; using 2-byte (16-bit) representations reduces the numerical precision of the model, which is acceptable in the case of large-scale training,
  • and the network transfer speed can be doubled.
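  • A small illustration of the float16 idea: casting 32-bit parameters to 16-bit halves the bytes on the wire (NumPy is our choice of illustration, not the patent's):

```python
import numpy as np

params32 = np.random.randn(1024, 1024).astype(np.float32)
params16 = params32.astype(np.float16)      # 4-byte -> 2-byte identifiers

print(params32.nbytes, params16.nbytes)     # 4194304 vs 2097152: half the traffic
restored = params16.astype(np.float32)      # the receiver casts back before training
```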
  • the compression processing adopts sparse compression and storage, which represents a sparse matrix in a compact, dense form.
  • most model parameters exist in the form of 2-dimensional or 3-dimensional matrices, and especially after a small number of iterations these matrices are often sparse;
  • a sparse matrix, that is, a matrix containing a large number of 0 elements, can be processed by sparse row storage or sparse column storage to eliminate the 0 elements from the matrix.
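  • A sketch of sparse compression and storage using SciPy's compressed sparse row (CSR) format, which keeps only the non-zero elements; the library choice is an assumption:

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.zeros((1000, 1000), dtype=np.float32)
dense[::50, ::50] = 1.0                     # a mostly-zero parameter matrix

sparse = csr_matrix(dense)                  # stores only non-zero values + their indices
payload = sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes
print(dense.nbytes, payload)                # 4000000 vs ~7204 bytes transmitted
```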
  • the encryption public key corresponding to each computing node is stored in a fourth database, and the decryption private key is kept by each computing node itself.
  • to ensure the security of the updated model parameters,
  • the ciphertext of a standard message digest value of the updated model parameters is transmitted along with the encrypted parameters.
  • after each computing node decrypts the ciphertext data with its own private key, it calculates the message digest value of the decrypted data and compares the calculated digest with the standard message digest value to confirm that the ciphertext data has not been tampered with, further ensuring the security of the updated model parameters.
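  • A sketch of the digest check described above, using SHA-256 as the message digest; the patent names neither the digest nor the cipher, so the encryption functions are passed in as stand-ins:

```python
import hashlib

def send(params_bytes: bytes, encrypt):
    # transmit encrypted parameters together with an encrypted standard digest
    digest = hashlib.sha256(params_bytes).hexdigest()
    return encrypt(params_bytes), encrypt(digest.encode())

def receive(ct_params, ct_digest, decrypt) -> bytes:
    params_bytes = decrypt(ct_params)
    claimed = decrypt(ct_digest).decode()
    # recompute the digest and compare it with the transmitted standard digest
    if hashlib.sha256(params_bytes).hexdigest() != claimed:
        raise ValueError("model parameters were tampered with in transit")
    return params_bytes
```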
  • the process of the second distributed training is basically the same as that of the first distributed training; only the training samples and the training task differ.
  • the training samples are texts extracted from the second corpus together with questions set based on the extracted texts,
  • the label of each sample is the pre-set answer to its question,
  • the training task is to predict the answer to the question from the extracted text,
  • and the training objective is that the similarity between the predicted answer and the answer in the label is greater than a preset threshold.
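  • One plausible reading of this training objective, with token-overlap F1 as the answer similarity; the patent does not specify the similarity measure or the threshold value:

```python
def answer_similarity(predicted: str, label: str) -> float:
    # token-overlap F1 between the predicted answer and the pre-set answer
    p, l = predicted.split(), label.split()
    overlap = len(set(p) & set(l))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(l)
    return 2 * precision * recall / (precision + recall)

THRESHOLD = 0.8  # illustrative preset threshold, not a value from the patent
print(answer_similarity("within 30 days", "within 30 days") > THRESHOLD)  # True
```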
  • for example, if the target question is "how long is the claim period for health insurance",
  • the word sequence obtained after performing word segmentation on the target question is {health insurance, of, claim period, is, how long},
  • and an entity recognition model is used to perform entity recognition on the word sequence, identifying the entity name "health insurance"; the text corresponding to each entity name (for example, the instructions corresponding to each type of insurance) is pre-stored in the third database, and the instructions corresponding to health insurance in the third database are taken as the target text corresponding to the target question.
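  • A sketch of this lookup: segment the question, match a known entity, and fetch the corresponding text; here jieba and a dictionary match stand in for the patent's segmentation and entity recognition models:

```python
import jieba  # a common Chinese word-segmentation library (illustrative choice)

THIRD_DATABASE = {"健康险": "health-insurance instructions (target text)"}  # entity -> text

def target_text_for(question: str) -> str:
    words = jieba.lcut(question)          # word segmentation of the target question
    for entity in THIRD_DATABASE:         # dictionary match stands in for an NER model
        if entity in words or entity in question:
            return THIRD_DATABASE[entity]
    raise KeyError("no known entity recognized in the question")

print(target_text_for("健康险的理赔期是多久"))
```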
  • according to the answer generation method proposed by the present application: first, a computer cluster is established, a first corpus is obtained from a first database, and the computer cluster is controlled to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model; next, a second corpus is obtained from a second database, and the computer cluster is controlled to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answering model; then, the target question carried by an answer generation request is obtained, word segmentation and entity recognition are performed on the target question to obtain an entity recognition result, and target text matching the entity recognition result is obtained from a third database; finally, the target text and the target question are input into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word, and the target answer corresponding to the target question is determined based on those probabilities.
  • This solution performs the first distributed training and the second distributed training on a computer cluster, making it possible to train a question-answering model with a huge number of parameters on a massive corpus in a relatively short time. Because neither the corpus nor the model structure is reduced, the trained second question-answering model achieves high performance and the answers it generates are highly accurate. Therefore, the present application improves answer generation accuracy.
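  • For concreteness, the start/end probabilities can be decoded into a target answer by scoring every valid (start, end) pair, the conventional rule for extractive question answering; this decoding rule is assumed here, not quoted from the patent:

```python
import numpy as np

def best_span(start_probs: np.ndarray, end_probs: np.ndarray, max_len: int = 30):
    # score every valid (start, end) pair by the product of its two probabilities
    best, best_score = (0, 0), -1.0
    for s in range(len(start_probs)):
        for e in range(s, min(s + max_len, len(end_probs))):
            score = start_probs[s] * end_probs[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best  # token indices of the target answer within the target text

words = ["the", "claim", "period", "is", "30", "days"]
start = np.array([0.01, 0.02, 0.02, 0.05, 0.85, 0.05])
end   = np.array([0.01, 0.01, 0.03, 0.05, 0.10, 0.80])
s, e = best_span(start, end)
print(" ".join(words[s:e + 1]))  # -> "30 days"
```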
  • Referring to FIG. 2, it is a schematic block diagram of an answer generating apparatus according to an embodiment of the present application.
  • the answer generating apparatus 100 described in this application can be installed in an electronic device. According to the implemented functions, the answer generation apparatus 100 may include a first training module 110 , a second training module 120 , an entity recognition module 130 and an answer determination module 140 .
  • the modules described in this application may also be referred to as units, which refer to a series of computer program segments that can be executed by the processor of an electronic device and can perform fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the first training module 110 is configured to establish a computer cluster, obtain a first corpus from a first database, and control the computer cluster to perform a first distributed training on an initial question-answer model based on the first corpus to obtain a first question-answer model.
  • the first corpus includes public data obtained from Baidu Know, Baidu Encyclopedia, and other channels, as well as domain-specific data crawled by crawler programs, such as data in healthcare, finance, and sports.
  • the controlling the computer cluster to perform the first distributed training on the initial question answering model based on the first corpus includes steps A21-A23:
  • A21. Set the maximum number of iteration rounds for the first distributed training, perform masking and labeling processing on the first corpus, and obtain a labeled third corpus;
  • performing masking and labeling processing on the first corpus to obtain the labeled third corpus includes steps B21-B24:
  • B21. Extract a first preset number of first texts from the first corpus, randomly adjust the order of the sentences in each first text, take the adjusted text as a first sample, take the original sentence order of the first text as the label of the first sample, and take the set of labeled first samples as a first sample set;
  • B22. Extract a second preset number of second texts from the first corpus, randomly mask a third preset number of words in each second text, take the masked text as a second sample, take the masked words as the label of the second sample, and take the set of labeled second samples as a second sample set;
  • B23. Extract a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus, randomly replace a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain a replaced text, set the label of the replaced text to a first value (for example, 0), set the label of the third texts to a second value (for example, 1), and take the set of labeled replaced texts and third texts as a third sample set;
  • B24. Take the set of the first sample set, the second sample set, and the third sample set as the third corpus.
  • A22. Obtain hardware resource information of each computing node in the computer cluster, split the third corpus into multiple sub-corpora based on the hardware resource information, and distribute the multiple sub-corpora to the respective computing nodes, so that each computing node trains the initial question-answering model on its sub-corpus;
  • the third corpus may be distributed to the computing nodes according to the quantity of a particular hardware resource (for example, graphics cards), or a total hardware resource score may be calculated for each computing node from the quantity of each hardware resource and its corresponding weight, and the third corpus distributed to the computing nodes based on the total scores.
  • the initial question-answering model is a BERT model whose structure is a stack of 12 transformer layers.
  • A23. Receive the model gradients fed back by the computing nodes, update the model parameters of the initial question-answering model based on those gradients, and send the updated model parameters to the computing nodes, so that each computing node updates the initial question-answering model with the updated parameters and continues training based on the updated model; when the maximum number of iteration rounds is reached, the first distributed training ends.
  • the calculation formula of the model gradient is $g_{ti} = \frac{1}{T_{ti}} \sum_{j=1}^{T_{ti}} \nabla l\left(x_{t(ij)}, s_{t(ij)}\right)$, where $g_{ti}$ is the model gradient corresponding to the $i$-th computing node in the computer cluster at the $t$-th iteration, $T_{ti}$ is the total number of samples participating in training in the sub-corpus of the $i$-th computing node at the $t$-th iteration, $x_{t(ij)}$ is the $j$-th sample participating in training in that sub-corpus, $s_{t(ij)}$ is the label of the $j$-th sample, and $l(\cdot)$ is the output of the initial question-answering model.
  • the updating of the model parameters of the initial question-answering model based on the model gradient includes steps C21-C23:
  • C21. Calculate the average of the model gradients, substitute the average into the convergence parameter calculation formula to obtain the convergence parameter of the initial question-answering model, and substitute the average into the update parameter calculation formula to obtain the update parameter of the initial question-answering model;
  • the convergence parameter calculation formula is $m_t = \alpha\, m_{t-1} + \alpha_t\, g_t$, where $m_t$ is the convergence parameter at the $t$-th iteration, $\alpha$ is the first balance hyperparameter with a fixed value (usually 0.9), $m_{t-1}$ is the convergence parameter at the $(t-1)$-th iteration, $g_t$ is the average of the model gradients at the $t$-th iteration, and $\alpha_t$ is the second balance hyperparameter learned at the $t$-th iteration.
  • the update parameter calculation formula is $v_t = \beta\, v_{t-1} + \beta_t\, g_t^2$, where $v_t$ is the update parameter at the $t$-th iteration, $\beta$ is the third balance hyperparameter with a fixed value (usually 0.9), $v_{t-1}$ is the update parameter at the $(t-1)$-th iteration, $g_t$ is the average of the model gradients at the $t$-th iteration, and $\beta_t$ is the fourth balance hyperparameter learned at the $t$-th iteration.
  • the purpose of introducing the convergence parameter in this embodiment is to speed up the convergence of the model at the position/dimension where the gradient changes are small, and the purpose of introducing the update parameter is to obtain better model parameters through training.
  • C22. Substitute the convergence parameter and the update parameter into the scaling rate calculation formula to obtain the scaling rate of the initial question-answering model, and substitute the initial model parameters and initial learning rate of the initial question-answering model into the learning rate calculation formula to obtain the new learning rate of the initial question-answering model;
  • the scaling rate calculation formula is $r_t = \frac{m_t}{\sqrt{v_t} + \epsilon}$, where $r_t$ is the scaling rate at the $t$-th iteration, $m_t$ is the convergence parameter at the $t$-th iteration, $v_t$ is the update parameter at the $t$-th iteration, and $\epsilon$ is the fifth balance hyperparameter with a fixed value.
  • the scaling rate determined by the convergence parameter and the update parameter is used to adjust the scaling of the model parameters.
  • the learning rate calculation formula is $U_{ti} = u_t \cdot \min\{\max(\lVert p_{ti} \rVert,\ \gamma),\ \delta\}$, where $u_t$ is the learning rate of the initial question-answering model at the $t$-th iteration, $U_{ti}$ is the new learning rate of the $i$-th layer parameters of the initial question-answering model at the $t$-th iteration, $p_{ti}$ is the $i$-th layer model parameters of the initial question-answering model at the $t$-th iteration, $\gamma$ is the sixth balance hyperparameter with a fixed value, and $\delta$ is the seventh balance hyperparameter with a fixed value.
  • through the above formula, the learning rate is automatically adjusted according to the parameters of the current layer of the model, so that better model parameters are obtained by training and the answers matched by the final question-answering model are more accurate.
  • the model parameter calculation formula is $p_{(t+1)i} = p_{ti} - U_{ti}\left(r_t + \lambda\, p_{ti}\right)$, where $p_{(t+1)i}$ is the $i$-th layer model parameter of the initial question-answering model at the $(t+1)$-th iteration, $p_{ti}$ is the $i$-th layer model parameter at the $t$-th iteration, $U_{ti}$ is the new learning rate of the $i$-th layer parameters at the $t$-th iteration, $r_t$ is the scaling rate at the $t$-th iteration, and $\lambda$ is the eighth balance hyperparameter with a fixed value.
  • Sending the updated model parameters to each computing node includes steps D21-D23:
  • model parameters are quantized and compressed to reduce the amount of transmitted data and improve transmission efficiency.
  • the quantization process uses float16 quantization.
  • under normal circumstances, a 4-byte (32-bit) representation is required for each parameter; using 2-byte (16-bit) representations reduces the numerical precision of the model, which is acceptable in the case of large-scale training,
  • and the network transfer speed can be doubled.
  • the compression processing adopts sparse compression and storage, which represents a sparse matrix in a compact, dense form.
  • most model parameters exist in the form of 2-dimensional or 3-dimensional matrices, and especially after a small number of iterations these matrices are often sparse;
  • a sparse matrix, that is, a matrix containing a large number of 0 elements, can be processed by sparse row storage or sparse column storage to eliminate the 0 elements from the matrix.
  • the encryption public key corresponding to each computing node is stored in a fourth database, and the decryption private key is kept by each computing node itself.
  • to ensure the security of the updated model parameters,
  • the ciphertext of a standard message digest value of the updated model parameters is transmitted along with the encrypted parameters.
  • after each computing node decrypts the ciphertext data with its own private key, it calculates the message digest value of the decrypted data and compares the calculated digest with the standard message digest value to confirm that the ciphertext data has not been tampered with, further ensuring the security of the updated model parameters.
  • the second training module 120 is configured to obtain a second corpus from a second database, and control the computer cluster to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answer model;
  • the process of the second distributed training is basically the same as that of the first distributed training; only the training samples and the training task differ.
  • the training samples are texts extracted from the second corpus together with questions set based on the extracted texts,
  • the label of each sample is the pre-set answer to its question,
  • the training task is to predict the answer to the question from the extracted text,
  • and the training objective is that the similarity between the predicted answer and the answer in the label is greater than a preset threshold.
  • the entity recognition module 130 is used to parse an answer generation request sent by a user through a client, obtain the target question carried by the request, perform word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtain, from a third database, target text matching the entity recognition result.
  • for example, if the target question is "how long is the claim period for health insurance",
  • the word sequence obtained after performing word segmentation on the target question is {health insurance, of, claim period, is, how long},
  • and an entity recognition model is used to perform entity recognition on the word sequence, identifying the entity name "health insurance"; the text corresponding to each entity name (for example, the instructions corresponding to each type of insurance) is pre-stored in the third database, and the instructions corresponding to health insurance in the third database are taken as the target text corresponding to the target question.
  • the answer determination module 140 is configured to input the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and to determine the target answer corresponding to the target question based on those probabilities.
  • Referring to FIG. 3, it is a schematic structural diagram of an electronic device for implementing an answer generation method provided by an embodiment of the present application.
  • the electronic device 1 is a device that can automatically perform numerical calculation and/or information processing according to pre-set or stored instructions.
  • the electronic device 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the electronic device 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13 that can be communicatively connected to one another through a system bus; the memory 11 stores an answer generation program 10 that is executable by the processor 12.
  • FIG. 3 shows only the electronic device 1 with the components 11-13 and the answer generation program 10. Those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device 1; fewer or more components than shown may be included, some components may be combined, or the components may be arranged differently.
  • the memory 11 includes a memory and at least one type of readable storage medium.
  • the memory provides a cache for the operation of the electronic device 1;
  • the readable storage medium may be a non-volatile storage medium such as flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, or an optical disk.
  • in some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1; in other embodiments, it may also be an external storage device of the electronic device 1, such as a pluggable hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device 1.
  • the readable storage medium of the memory 11 is generally used to store the operating system and various application software installed in the electronic device 1 , for example, to store the code of the answer generation program 10 in an embodiment of the present application.
  • the memory 11 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 12 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips.
  • the processor 12 is generally used to control the overall operation of the electronic device 1, such as performing control and processing related to data interaction or communication with other devices.
  • the processor 12 is configured to run the program code or process data stored in the memory 11, for example, run the answer generation program 10 and the like.
  • the network interface 13 may include a wireless network interface or a wired network interface, and the network interface 13 is used to establish a communication connection between the electronic device 1 and a client (not shown in the figure).
  • the electronic device 1 may further include a user interface, which may include a display and an input unit such as a keyboard; optionally, the user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like.
  • the display may also be appropriately called a display screen or a display unit, which is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
  • the answer generation program 10 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions that, when run by the processor 12, can implement the steps of the answer generation method described above.
  • the modules/units integrated in the electronic device 1 may be stored in a computer-readable storage medium.
  • the computer-readable medium may be non-volatile or volatile.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
  • An answer generation program 10 is stored on the computer-readable storage medium, and the answer generation program 10 can be executed by one or more processors to implement the steps of the answer generation method described above.
  • modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of hardware plus software function modules.
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The present application relates to intelligent decision making. Disclosed is an answer generation method, comprising: establishing a computer cluster, and controlling the computer cluster to execute first distributed training on an initial question answering model on the basis of a first corpus to obtain a first question answering model; controlling the computer cluster to execute second distributed training on the first question answering model on the basis of a second corpus to obtain a second question answering model; obtaining a target question carried by an answer generation request, obtaining target text corresponding to the target question from a third database, and inputting the target text and the target question into the second question answering model to obtain the probability that each word in the target text is the answer start word for the target question and the probability that it is the answer end word; and determining, on the basis of these probabilities, a target answer corresponding to the target question. The present application further provides an answer generation apparatus, an electronic device, and a readable storage medium. According to the present application, answer generation accuracy is improved.

Description

Answer generation method, apparatus, electronic device and readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on January 28, 2021, with application number CN202110124138.2 and the title "Answer Generation Method, Apparatus, Electronic Device and Readable Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of intelligent decision-making, and in particular to an answer generation method, apparatus, electronic device, and readable storage medium.
Background
With the advent of the information age, intelligent question answering is applied ever more widely in daily life. For example, when a company develops a new product and a user consults about that product, an intelligent question-answering model can intelligently extract the answer corresponding to the user's question from the product manual and feed it back to the user.
The inventor realizes that, to obtain a well-performing intelligent question-answering model, a massive corpus is usually required to train a question-answering model with a huge number of parameters. However, the training process demands substantial computing resources and takes a long time; constrained by cost and time, people usually reduce the corpus and the model structure, but this leaves the performance of the trained intelligent question-answering model insufficient and the accuracy of the answers it matches low. An answer generation method is therefore urgently needed to improve answer generation accuracy.
Summary of the Invention
The answer generation method provided in this application includes:
establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model;
obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answering model;
parsing an answer generation request sent by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining, from a third database, target text matching the entity recognition result;
inputting the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and determining the target answer corresponding to the target question based on the probability of the answer start word and the probability of the answer end word.
The present application also provides an answer generation apparatus, the apparatus including:
a first training module, used to establish a computer cluster, obtain a first corpus from a first database, and control the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model;
a second training module, used to obtain a second corpus from a second database, and control the computer cluster to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answering model;
an entity recognition module, used to parse an answer generation request sent by a user through a client, obtain the target question carried by the request, perform word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtain, from a third database, target text matching the entity recognition result;
an answer determination module, used to input the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and to determine the target answer corresponding to the target question based on those probabilities.
The present application also provides an electronic device, the electronic device including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores an answer generation program executable by the at least one processor, the answer generation program being executed by the at least one processor to enable the at least one processor to perform the following steps:
establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model;
obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answering model;
parsing an answer generation request sent by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining, from a third database, target text matching the entity recognition result;
inputting the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and determining the target answer corresponding to the target question based on the probability of the answer start word and the probability of the answer end word.
The present application also provides a computer-readable storage medium, on which an answer generation program is stored, the answer generation program being executable by one or more processors to implement the following steps:
establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model;
obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question-answering model based on the second corpus to obtain a second question-answering model;
parsing an answer generation request sent by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining, from a third database, target text matching the entity recognition result;
inputting the target text and the target question into the second question-answering model to obtain, for each word in the target text, the probability that it is the answer start word and the probability that it is the answer end word for the target question, and determining the target answer corresponding to the target question based on the probability of the answer start word and the probability of the answer end word.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an answer generation method provided by an embodiment of the present application;
FIG. 2 is a schematic block diagram of an answer generation apparatus provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device implementing an answer generation method provided by an embodiment of the present application;
The realization of the objectives, functional characteristics, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of Embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
It should be noted that descriptions involving "first", "second", and the like in this application are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with one another, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and not to fall within the protection scope claimed in this application.
The present application provides an answer generation method. Referring to FIG. 1, a schematic flowchart of an answer generation method provided by an embodiment of the present application, the method may be performed by an electronic device, which may be implemented by software and/or hardware.
In this embodiment, the answer generation method includes:
S1. Establish a computer cluster, obtain a first corpus from a first database, and control the computer cluster to perform first distributed training on an initial question-answering model based on the first corpus to obtain a first question-answering model.
In this embodiment, a computer cluster is established by pooling multiple computers, and virtualization technology (for example, docker containers) is used to configure the training environment for the computing nodes in the cluster, enabling distributed parallel training and effectively improving training efficiency.
To obtain a well-performing question-answering model, the first corpus obtained is relatively large; it includes public data obtained from Baidu Know, Baidu Encyclopedia, and other channels, as well as domain-specific data crawled by crawler programs, such as data in healthcare, finance, and sports.
Controlling the computer cluster to perform first distributed training on the initial question-answering model based on the first corpus includes steps A11-A13:
A11. Set the maximum number of iteration rounds for the first distributed training, perform masking and labeling processing on the first corpus, and obtain a labeled third corpus;
In this embodiment, performing masking and labeling processing on the first corpus to obtain the labeled third corpus includes steps B11-B14:
B11. Extract a first preset number of first texts from the first corpus, randomly adjust the order of the sentences in each first text, take the adjusted text as a first sample, take the original sentence order of the first text as the label of the first sample, and take the set of labeled first samples as a first sample set;
B12. Extract a second preset number of second texts from the first corpus, randomly mask a third preset number of words in each second text, take the masked text as a second sample, take the masked words as the label of the second sample, and take the set of labeled second samples as a second sample set;
B13. Extract a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus, randomly replace a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain a replaced text, set the label of the replaced text to a first value (for example, 0), set the label of the third texts to a second value (for example, 1), and take the set of labeled replaced texts and third texts as a third sample set;
B14. Take the set of the first sample set, the second sample set, and the third sample set as the third corpus.
A12. Obtain hardware resource information of each computing node in the computer cluster, split the third corpus into multiple sub-corpora based on the hardware resource information, and distribute the multiple sub-corpora to the respective computing nodes, so that each computing node trains the initial question-answering model on its sub-corpus;
In this embodiment, the third corpus may be distributed to the computing nodes according to the quantity of a particular hardware resource (for example, graphics cards), or a total hardware resource score may be calculated for each computing node from the quantity of each hardware resource and its corresponding weight, and the third corpus distributed to the computing nodes based on the total scores.
In this embodiment, the initial question-answering model is a BERT model whose structure is a stack of 12 transformer layers.
A13. Receive the model gradients fed back by the computing nodes, update the model parameters of the initial question-answering model based on those gradients, and send the updated model parameters to the computing nodes, so that each computing node updates the initial question-answering model with the updated parameters and continues training based on the updated model; when the maximum number of iteration rounds is reached, the first distributed training ends.
可选的,所述模型梯度的计算公式为:Optionally, the calculation formula of the model gradient is:
Figure PCTCN2021082863-appb-000001
Figure PCTCN2021082863-appb-000001
其中,g ti为第t轮迭代时计算机集群中第i个计算节点对应的模型梯度,T ti为第t轮迭代时计算机集群中第i个计算节点的子语料中参与训练的样本的总数量,x t(ij)为第t轮迭代时计算机集群中第i个计算节点的子语料中参与训练的第j个样本,s t(ij)为第t轮迭代 时计算机集群中第i个计算节点的子语料中参与训练的第j个样本的标签,l()为初始问答模型的输出。 Among them, g ti is the model gradient corresponding to the i-th computing node in the computer cluster in the t-th iteration, and T ti is the total number of training samples in the sub-corpus of the i-th computing node in the computer cluster in the t-round iteration , x t(ij) is the j-th sample participating in training in the sub-corpus of the i-th computing node in the computer cluster in the t-th iteration, and s t(ij) is the i-th calculation in the computer cluster in the t-round iteration The label of the jth sample participating in the training in the sub-corpus of the node, and l() is the output of the initial question answering model.
Updating the model parameters of the initial question answering model based on the model gradients includes steps C11-C13:
C11. Compute the average of the model gradients; substitute the average into a convergence parameter calculation formula to obtain the convergence parameter of the initial question answering model, and substitute the average into an update parameter calculation formula to obtain the update parameter of the initial question answering model;
The convergence parameter calculation formula is:

$$m_t=\alpha\,m_{t-1}+\alpha_t\,g_t$$

where $m_t$ is the convergence parameter at the $t$-th iteration round, $\alpha$ is a first balance hyperparameter with a fixed value (typically 0.9), $m_{t-1}$ is the convergence parameter at round $t-1$, $g_t$ is the average of the model gradients at round $t$, and $\alpha_t$ is a second balance hyperparameter learned at round $t$.
The update parameter calculation formula is:

$$v_t=\beta\,v_{t-1}+\beta_t\,g_t^2$$

where $v_t$ is the update parameter at the $t$-th iteration round, $\beta$ is a third balance hyperparameter with a fixed value (typically 0.9), $v_{t-1}$ is the update parameter at round $t-1$, $g_t$ is the average of the model gradients at round $t$, and $\beta_t$ is a fourth balance hyperparameter learned at round $t$.
The convergence parameter is introduced in this embodiment to accelerate convergence along positions/dimensions where the gradient changes little; the update parameter is introduced so that better model parameters are obtained from training.
C12. Substitute the convergence parameter and the update parameter into a scaling rate calculation formula to obtain the scaling rate of the initial question answering model, and substitute the initial model parameters and the initial learning rate of the initial question answering model into a learning rate calculation formula to obtain the new learning rate of the initial question answering model;
The scaling rate calculation formula is:

$$r_t=\frac{m_t}{\sqrt{v_t}+\epsilon}$$

where $r_t$ is the scaling rate at the $t$-th iteration round, $m_t$ is the convergence parameter at round $t$, $v_t$ is the update parameter at round $t$, and $\epsilon$ is a fifth balance hyperparameter with a fixed value.
The scaling rate determined from the convergence parameter and the update parameter is used to adjust the scale of the model parameter update.
The learning rate calculation formula is:
$$U_{ti}=u_t\cdot\min\{\max(\lVert p_{ti}\rVert,\gamma),\,\delta\}$$

where $u_t$ is the learning rate of the initial question answering model at the $t$-th iteration round, $U_{ti}$ is the new learning rate of the $i$-th layer parameters at round $t$, $p_{ti}$ is the $i$-th layer model parameters at round $t$, $\gamma$ is a sixth balance hyperparameter with a fixed value, and $\delta$ is a seventh balance hyperparameter with a fixed value.
In this embodiment, the above learning rate calculation formula makes the learning rate adjust automatically with the parameters of the current layer of the model, so that better model parameters can be obtained from training and the answers matched by the final question answering model are more accurate.
C13. Substitute the initial model parameters, the scaling rate and the new learning rate into a model parameter calculation formula to obtain the new model parameters of the initial question answering model.
The model parameter calculation formula is:

$$p_{(t+1)i}=p_{ti}-U_{ti}\cdot\frac{r_t+\lambda\,p_{ti}}{\lVert r_t+\lambda\,p_{ti}\rVert}$$

where $p_{(t+1)i}$ is the $i$-th layer model parameters of the initial question answering model at round $t+1$, $p_{ti}$ is the $i$-th layer model parameters at round $t$, $U_{ti}$ is the new learning rate of the $i$-th layer parameters at round $t$, $r_t$ is the scaling rate at round $t$, and $\lambda$ is an eighth balance hyperparameter with a fixed value.
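Taken together, steps C11-C13 resemble a layer-wise adaptive (LAMB-style) optimizer. The sketch below implements the formulas as reconstructed above; the reconstruction itself, the hyperparameter defaults, and the state layout are assumptions rather than the application's verbatim algorithm. `state["m"]` and `state["v"]` are zero-initialized per-layer accumulators kept between rounds:

```python
import numpy as np

def update_parameters(params, node_grads, state, t, lr, alpha=0.9, beta=0.9,
                      alpha_t=0.1, beta_t=0.1, eps=1e-6,
                      gamma=1e-3, delta=10.0, lam=0.01):
    """LAMB-style sketch of steps C11-C13; params is {layer: ndarray}."""
    # C11: average the per-node gradients, then refresh m (convergence
    # parameter) and v (update parameter) for each layer.
    avg = {k: np.mean([g[k] for g in node_grads], axis=0) for k in params}
    new_params = {}
    for k, p in params.items():
        state["m"][k] = alpha * state["m"][k] + alpha_t * avg[k]
        state["v"][k] = beta * state["v"][k] + beta_t * avg[k] ** 2
        # C12: scaling rate and layer-wise learning rate.
        r = state["m"][k] / (np.sqrt(state["v"][k]) + eps)
        u = lr * min(max(np.linalg.norm(p), gamma), delta)
        # C13: normalized update with decoupled weight decay.
        step = r + lam * p
        new_params[k] = p - u * step / (np.linalg.norm(step) + eps)
    return new_params
```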
Sending the updated model parameters to the computing nodes includes steps D11-D13:
D11. Quantize and compress the updated model parameters to obtain compressed model parameters;
Because each layer of the model contains a large number of parameters and the overall parameter count is huge, sending the updated model parameters to the computing nodes takes a long time even over 100-Mbit network bandwidth. This embodiment therefore quantizes and compresses the updated model parameters to reduce the amount of transmitted data and improve transmission efficiency.
The quantization uses float16: a floating-point number normally requires a 4-byte representation, while a 2-byte (16-bit) representation roughly doubles network transmission speed at the cost of only a small loss of model accuracy in large-batch training.
The compression uses sparse compressed storage, converting a sparse matrix into a dense representation. Most model parameters exist in the form of 2-dimensional or 3-dimensional matrices, and especially after a small number of iteration rounds a matrix is often sparse, i.e. it contains a large number of zero elements; compressed sparse row storage or column storage can be applied to eliminate the zero elements of the matrix.
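A minimal sketch of the two steps for a 2-dimensional parameter matrix; a coordinate-format encoding is used here for brevity, with compressed sparse row storage, as the text mentions, being an equivalent alternative:

```python
import numpy as np

def quantize_and_compress(param):
    """D11 sketch: float16 quantization, then sparse coordinate encoding."""
    q = param.astype(np.float16)   # 4 bytes -> 2 bytes per value
    rows, cols = np.nonzero(q)     # keep only the non-zero entries
    return {"shape": q.shape,
            "rows": rows.astype(np.int32),
            "cols": cols.astype(np.int32),
            "data": q[rows, cols]}

def decompress(c):
    """Inverse step on the receiving node."""
    out = np.zeros(c["shape"], dtype=np.float16)
    out[c["rows"], c["cols"]] = c["data"]
    return out
```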
D12. Compute a standard message digest value of the compressed model parameters;
D13. Obtain the public key corresponding to each computing node from a fourth database, encrypt the compressed model parameters and the standard message digest value with the public key to obtain ciphertext data, and distribute the ciphertext data to the corresponding computing node.
In this embodiment, the fourth database stores the encryption public key corresponding to each computing node, while each computing node keeps its own decryption private key. Transmitting the updated model parameters in encrypted form guarantees their security, and the standard message digest value of the updated model parameters is transmitted in the same ciphertext. After decrypting the ciphertext data with its own private key, each computing node computes the message digest value of the decrypted data and compares it with the standard message digest value to confirm that the ciphertext data has not been tampered with, further guaranteeing the security of the updated model parameters.
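A hypothetical sketch of D12-D13 using the `cryptography` package; SHA-256 as the digest and a hybrid Fernet/RSA-OAEP scheme are assumptions, since the application names neither a digest algorithm nor a cipher (and RSA alone cannot encrypt a large parameter blob directly):

```python
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def encrypt_for_node(node_public_key, compressed_bytes):
    """D12-D13 sketch: digest, then encrypt payload + digest for one node."""
    digest = hashlib.sha256(compressed_bytes).digest()  # standard digest value
    sym_key = Fernet.generate_key()                     # fresh symmetric key
    ciphertext = Fernet(sym_key).encrypt(compressed_bytes + digest)
    wrapped_key = node_public_key.encrypt(              # RSA-OAEP wraps the key
        sym_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, ciphertext
```

On the receiving side, the node unwraps the symmetric key with its private key, decrypts the payload, splits off the trailing 32-byte digest, and recomputes SHA-256 over the payload to verify integrity.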
S2. Obtain a second corpus from a second database, and control the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
The process of the second distributed training is basically the same as that of the first distributed training; only the training samples and the training task differ. Each training sample consists of a text extracted from the second corpus and a question composed for that text, and the sample's label is the answer set in advance for that question. The training task is to predict the answer to the question from the extracted text, and the training objective is that the similarity between the predicted answer and the labeled answer exceeds a preset threshold.
S3. Parse the answer generation request issued by the user through the client, obtain the target question carried by the request, perform word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtain the target text matching the entity recognition result from a third database.
For example, if the target question is "How long is the claim period for health insurance?", word segmentation yields the word sequence {health insurance, of, claim period, is, how long}. Performing entity recognition over this word sequence with an entity recognition model identifies the entity name "health insurance". The third database stores in advance the text corresponding to each entity name (for example, the policy document for each insurance type), so the health insurance policy document in the third database is taken as the target text corresponding to the target question.
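A minimal sketch of S3; `segmenter`, `ner_model` and `text_db` stand in for the concrete tokenizer, entity recognition model and third database, none of which the application pins down:

```python
def retrieve_target_text(question, segmenter, ner_model, text_db):
    """S3 sketch: segment the question, recognize entities, look up the text."""
    words = segmenter(question)     # e.g. ["health insurance", "of", ...]
    entities = ner_model(words)     # e.g. ["health insurance"]
    for entity in entities:
        text = text_db.get(entity)  # one pre-stored document per entity name
        if text is not None:
            return text
    return None                     # no matching target text found
```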
S4. Input the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word of the answer to the target question and the probability that it is the ending word of the answer, and determine the target answer corresponding to the target question based on the answer starting-word probabilities and the answer ending-word probabilities.
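The application does not fix the selection rule. A common convention, shown below as an assumption, is to take the span with the highest joint start/end probability, with the end no earlier than the start:

```python
def extract_answer(words, start_probs, end_probs, max_span=30):
    """S4 sketch: pick the answer span from per-word start/end probabilities."""
    best_span, best_score = (0, 0), -1.0
    for i, p_start in enumerate(start_probs):
        for j in range(i, min(i + max_span, len(words))):
            score = p_start * end_probs[j]   # joint probability of span (i, j)
            if score > best_score:
                best_span, best_score = (i, j), score
    i, j = best_span
    return "".join(words[i:j + 1])           # concatenate tokens (Chinese text)
```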
As can be seen from the above embodiments, in the answer generation method proposed by the present application: first, a computer cluster is established, a first corpus is obtained from a first database, and the computer cluster is controlled to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model; then, a second corpus is obtained from a second database, and the computer cluster is controlled to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model; next, the target question carried by an answer generation request is obtained, word segmentation and entity recognition are performed on the target question to obtain an entity recognition result, and the target text matching the entity recognition result is obtained from a third database; finally, the target text and the target question are input into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word of the answer and the probability that it is the ending word of the answer, and the target answer corresponding to the target question is determined from these probabilities. By performing the first and second distributed training on a computer cluster, this solution trains a question answering model with a huge number of parameters on massive corpora within a relatively short time; because neither the corpora nor the model structure is reduced, the high performance of the trained second question answering model and the high accuracy of the answers it generates are guaranteed. The present application therefore improves answer generation accuracy.
FIG. 2 is a schematic block diagram of an answer generation apparatus provided by an embodiment of the present application.
The answer generation apparatus 100 described in the present application may be installed in an electronic device. According to the implemented functions, the answer generation apparatus 100 may include a first training module 110, a second training module 120, an entity recognition module 130 and an answer determination module 140. The modules described in the present application may also be referred to as units, meaning a series of computer program segments that can be executed by the processor of an electronic device to perform fixed functions and that are stored in the memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
The first training module 110 is configured to establish a computer cluster, obtain a first corpus from a first database, and control the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model.
In this embodiment, a computer cluster is established by pooling multiple computers, and virtualization technology (for example, docker containers) is used to configure the environment required for training on the computing nodes of the cluster; this enables distributed parallel training and effectively improves training efficiency.
To obtain a question answering model with good performance, the first corpus is fairly large in data volume. It includes public data obtained from channels such as Baidu Zhidao and Baidu Baike, as well as domain-specific data crawled by web crawlers, for example data from the medical, financial and sports domains.
Controlling the computer cluster to perform the first distributed training on the initial question answering model based on the first corpus includes steps A21-A23:
A21. Set the maximum iteration round of the first distributed training, and perform masking and labeling on the first corpus to obtain a labeled third corpus;
In this embodiment, performing masking and labeling on the first corpus to obtain the labeled third corpus includes steps B21-B24:
B21. Extract a first preset number of first texts from the first corpus, randomly adjust the order of the sentences in each first text, take the adjusted text as a first sample with the original sentence order of the first text as its label, and take the set of labeled first samples as a first sample set;
B22. Extract a second preset number of second texts from the first corpus, randomly mask a third preset number of words in each second text, take the masked text as a second sample with the masked words as its label, and take the set of labeled second samples as a second sample set;
B23. Extract a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus; randomly replace a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain replaced texts; set the label of each replaced text to a first value (for example, 0) and the label of each third text to a second value (for example, 1); and take the set of labeled replaced texts and third texts as a third sample set;
B24. Take the union of the first sample set, the second sample set and the third sample set as the third corpus.
A22. Obtain hardware resource information of each computing node in the computer cluster, split the third corpus into multiple sub-corpora based on the hardware resource information, and distribute the sub-corpora to the computing nodes, so that each computing node trains the initial question answering model on its sub-corpus;
In this embodiment, the third corpus may be distributed to the computing nodes according to the quantity of one particular hardware resource (for example, graphics cards); alternatively, a total hardware resource score may be computed for each computing node from the quantity of each of its hardware resources and the corresponding weights, and the third corpus then distributed to the nodes in proportion to the total scores.
In this embodiment, the initial question answering model is a BERT model whose structure is a stack of 12 transformer layers.
A23. Receive the model gradients fed back by the computing nodes, update the model parameters of the initial question answering model based on the model gradients, and send the updated model parameters to the computing nodes, so that each computing node updates the initial question answering model with the updated model parameters and continues training on the updated model; when the maximum number of iteration rounds is reached, the first distributed training ends.
Optionally, the model gradient is computed as:

$$g_{ti}=\frac{1}{T_{ti}}\sum_{j=1}^{T_{ti}}\nabla l\big(x_{t(ij)},\,s_{t(ij)}\big)$$

where $g_{ti}$ is the model gradient corresponding to the $i$-th computing node in the computer cluster at the $t$-th iteration round, $T_{ti}$ is the total number of training samples in the sub-corpus of the $i$-th computing node at round $t$, $x_{t(ij)}$ is the $j$-th training sample in that sub-corpus, $s_{t(ij)}$ is the label of that sample, and $l(\cdot)$ is the output of the initial question answering model.
Updating the model parameters of the initial question answering model based on the model gradients includes steps C21-C23:
C21. Compute the average of the model gradients; substitute the average into a convergence parameter calculation formula to obtain the convergence parameter of the initial question answering model, and substitute the average into an update parameter calculation formula to obtain the update parameter of the initial question answering model;
The convergence parameter calculation formula is:

$$m_t=\alpha\,m_{t-1}+\alpha_t\,g_t$$

where $m_t$ is the convergence parameter at the $t$-th iteration round, $\alpha$ is a first balance hyperparameter with a fixed value (typically 0.9), $m_{t-1}$ is the convergence parameter at round $t-1$, $g_t$ is the average of the model gradients at round $t$, and $\alpha_t$ is a second balance hyperparameter learned at round $t$.
The update parameter calculation formula is:

$$v_t=\beta\,v_{t-1}+\beta_t\,g_t^2$$

where $v_t$ is the update parameter at the $t$-th iteration round, $\beta$ is a third balance hyperparameter with a fixed value (typically 0.9), $v_{t-1}$ is the update parameter at round $t-1$, $g_t$ is the average of the model gradients at round $t$, and $\beta_t$ is a fourth balance hyperparameter learned at round $t$.
The convergence parameter is introduced in this embodiment to accelerate convergence along positions/dimensions where the gradient changes little; the update parameter is introduced so that better model parameters are obtained from training.
C22. Substitute the convergence parameter and the update parameter into a scaling rate calculation formula to obtain the scaling rate of the initial question answering model, and substitute the initial model parameters and the initial learning rate of the initial question answering model into a learning rate calculation formula to obtain the new learning rate of the initial question answering model;
The scaling rate calculation formula is:

$$r_t=\frac{m_t}{\sqrt{v_t}+\epsilon}$$

where $r_t$ is the scaling rate at the $t$-th iteration round, $m_t$ is the convergence parameter at round $t$, $v_t$ is the update parameter at round $t$, and $\epsilon$ is a fifth balance hyperparameter with a fixed value.
The scaling rate determined from the convergence parameter and the update parameter is used to adjust the scale of the model parameter update.
The learning rate calculation formula is:

$$U_{ti}=u_t\cdot\min\{\max(\lVert p_{ti}\rVert,\gamma),\,\delta\}$$

where $u_t$ is the learning rate of the initial question answering model at the $t$-th iteration round, $U_{ti}$ is the new learning rate of the $i$-th layer parameters at round $t$, $p_{ti}$ is the $i$-th layer model parameters at round $t$, $\gamma$ is a sixth balance hyperparameter with a fixed value, and $\delta$ is a seventh balance hyperparameter with a fixed value.
In this embodiment, the above learning rate calculation formula makes the learning rate adjust automatically with the parameters of the current layer of the model, so that better model parameters can be obtained from training and the answers matched by the final question answering model are more accurate.
C23. Substitute the initial model parameters, the scaling rate and the new learning rate into a model parameter calculation formula to obtain the new model parameters of the initial question answering model.
The model parameter calculation formula is:

$$p_{(t+1)i}=p_{ti}-U_{ti}\cdot\frac{r_t+\lambda\,p_{ti}}{\lVert r_t+\lambda\,p_{ti}\rVert}$$

where $p_{(t+1)i}$ is the $i$-th layer model parameters of the initial question answering model at round $t+1$, $p_{ti}$ is the $i$-th layer model parameters at round $t$, $U_{ti}$ is the new learning rate of the $i$-th layer parameters at round $t$, $r_t$ is the scaling rate at round $t$, and $\lambda$ is an eighth balance hyperparameter with a fixed value.
Sending the updated model parameters to the computing nodes includes steps D21-D23:
D21. Quantize and compress the updated model parameters to obtain compressed model parameters;
Because each layer of the model contains a large number of parameters and the overall parameter count is huge, sending the updated model parameters to the computing nodes takes a long time even over 100-Mbit network bandwidth. This embodiment therefore quantizes and compresses the updated model parameters to reduce the amount of transmitted data and improve transmission efficiency.
The quantization uses float16: a floating-point number normally requires a 4-byte representation, while a 2-byte (16-bit) representation roughly doubles network transmission speed at the cost of only a small loss of model accuracy in large-batch training.
The compression uses sparse compressed storage, converting a sparse matrix into a dense representation. Most model parameters exist in the form of 2-dimensional or 3-dimensional matrices, and especially after a small number of iteration rounds a matrix is often sparse, i.e. it contains a large number of zero elements; compressed sparse row storage or column storage can be applied to eliminate the zero elements of the matrix.
D22. Compute a standard message digest value of the compressed model parameters;
D23. Obtain the public key corresponding to each computing node from a fourth database, encrypt the compressed model parameters and the standard message digest value with the public key to obtain ciphertext data, and distribute the ciphertext data to the corresponding computing node.
In this embodiment, the fourth database stores the encryption public key corresponding to each computing node, while each computing node keeps its own decryption private key. Transmitting the updated model parameters in encrypted form guarantees their security, and the standard message digest value of the updated model parameters is transmitted in the same ciphertext. After decrypting the ciphertext data with its own private key, each computing node computes the message digest value of the decrypted data and compares it with the standard message digest value to confirm that the ciphertext data has not been tampered with, further guaranteeing the security of the updated model parameters.
The second training module 120 is configured to obtain a second corpus from a second database and control the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
The process of the second distributed training is basically the same as that of the first distributed training; only the training samples and the training task differ. Each training sample consists of a text extracted from the second corpus and a question composed for that text, and the sample's label is the answer set in advance for that question. The training task is to predict the answer to the question from the extracted text, and the training objective is that the similarity between the predicted answer and the labeled answer exceeds a preset threshold.
The entity recognition module 130 is configured to parse the answer generation request issued by the user through the client, obtain the target question carried by the request, perform word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtain the target text matching the entity recognition result from a third database.
For example, if the target question is "How long is the claim period for health insurance?", word segmentation yields the word sequence {health insurance, of, claim period, is, how long}. Performing entity recognition over this word sequence with an entity recognition model identifies the entity name "health insurance". The third database stores in advance the text corresponding to each entity name (for example, the policy document for each insurance type), so the health insurance policy document in the third database is taken as the target text corresponding to the target question.
The answer determination module 140 is configured to input the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word of the answer to the target question and the probability that it is the ending word of the answer, and to determine the target answer corresponding to the target question based on the answer starting-word probabilities and the answer ending-word probabilities.
FIG. 3 is a schematic structural diagram of an electronic device for implementing the answer generation method provided by an embodiment of the present application.
The electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. The electronic device 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
In this embodiment, the electronic device 1 includes, but is not limited to, a memory 11, a processor 12 and a network interface 13 that can be communicatively connected to one another through a system bus; the memory 11 stores an answer generation program 10 executable by the processor 12. FIG. 3 shows only the electronic device 1 with the components 11-13 and the answer generation program 10; those skilled in the art will understand that the structure shown in FIG. 3 does not limit the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
The memory 11 includes an internal memory and at least one type of readable storage medium. The internal memory provides a cache for the operation of the electronic device 1; the readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk or an optical disc. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1; in other embodiments, it may be an external storage device of the electronic device 1, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the electronic device 1. In this embodiment, the readable storage medium of the memory 11 is generally used to store the operating system and various application software installed on the electronic device 1, for example the code of the answer generation program 10 in an embodiment of the present application. The memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor or another data processing chip. The processor 12 is generally used to control the overall operation of the electronic device 1, for example performing control and processing related to data interaction or communication with other devices. In this embodiment, the processor 12 is used to run the program code or process the data stored in the memory 11, for example to run the answer generation program 10.
The network interface 13 may include a wireless network interface or a wired network interface, and is used to establish a communication connection between the electronic device 1 and a client (not shown in the figure).
Optionally, the electronic device 1 may further include a user interface, which may include a display and an input unit such as a keyboard; optionally, the user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (organic light-emitting diode) touch display, or the like. The display may also be appropriately referred to as a display screen or display unit, and is used to display the information processed in the electronic device 1 and a visualized user interface.
It should be understood that the embodiments are described for illustration only and do not limit the scope of the patent application to this structure.
The answer generation program 10 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions which, when run in the processor 12, can implement:
establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model;
obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
parsing an answer generation request issued by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining the target text matching the entity recognition result from a third database;
inputting the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word of the answer to the target question and the probability that it is the ending word of the answer, and determining the target answer corresponding to the target question based on the answer starting-word probabilities and the answer ending-word probabilities.
Specifically, for the specific implementation of the answer generation program 10 by the processor 12, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which is not repeated here.
Further, if the modules/units integrated in the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable medium may be volatile or non-volatile, and may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory or a read-only memory (ROM).
The computer-readable storage medium stores an answer generation program 10, which can be executed by one or more processors to implement the following steps:
establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model;
obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
parsing an answer generation request issued by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining the target text matching the entity recognition result from a third database;
inputting the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word of the answer to the target question and the probability that it is the ending word of the answer, and determining the target answer corresponding to the target question based on the answer starting-word probabilities and the answer ending-word probabilities.
In the several embodiments provided in the present application, it should be understood that the disclosed device, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; for instance, the division into modules is only a division by logical function, and other divisions are possible in actual implementation.
The modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application.
Therefore, the embodiments should be regarded in all respects as exemplary and non-restrictive, and the scope of the present application is defined by the appended claims rather than by the above description; all changes falling within the meaning and scope of the equivalents of the claims are therefore intended to be embraced in the present application. Any reference sign in the claims shall not be construed as limiting the claim concerned.
The blockchain referred to in the present application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each data block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
Furthermore, it is obvious that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or apparatuses recited in the system claims may also be implemented by one unit or apparatus through software or hardware. Terms such as "first" and "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.

Claims (20)

1. An answer generation method, wherein the method comprises:
establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model;
obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
parsing an answer generation request issued by a user through a client, obtaining the target question carried by the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining the target text matching the entity recognition result from a third database;
inputting the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word of the answer to the target question and the probability that it is the ending word of the answer, and determining the target answer corresponding to the target question based on the answer starting-word probabilities and the answer ending-word probabilities.
2. The answer generation method according to claim 1, wherein controlling the computer cluster to perform the first distributed training on the initial question answering model based on the first corpus comprises:
setting the maximum iteration round of the first distributed training, and performing masking and labeling on the first corpus to obtain a labeled third corpus;
obtaining hardware resource information of each computing node in the computer cluster, splitting the third corpus into multiple sub-corpora based on the hardware resource information, and distributing the sub-corpora to the computing nodes, so that each computing node trains the initial question answering model on its sub-corpus;
receiving the model gradients fed back by the computing nodes, updating the model parameters of the initial question answering model based on the model gradients, and sending the updated model parameters to the computing nodes, so that each computing node updates the initial question answering model with the updated model parameters and continues training on the updated model; when the maximum number of iteration rounds is reached, the first distributed training ends.
3. The answer generation method according to claim 2, wherein updating the model parameters of the initial question answering model based on the model gradients comprises:
computing the average of the model gradients, substituting the average into a convergence parameter calculation formula to obtain the convergence parameter of the initial question answering model, and substituting the average into an update parameter calculation formula to obtain the update parameter of the initial question answering model;
substituting the convergence parameter and the update parameter into a scaling rate calculation formula to obtain the scaling rate of the initial question answering model, and substituting the initial model parameters and the initial learning rate of the initial question answering model into a learning rate calculation formula to obtain the new learning rate of the initial question answering model;
substituting the initial model parameters, the scaling rate and the new learning rate into a model parameter calculation formula to obtain the new model parameters of the initial question answering model.
4. The answer generation method according to claim 2, wherein performing masking and labeling on the first corpus to obtain the labeled third corpus comprises:
extracting a first preset number of first texts from the first corpus, randomly adjusting the order of the sentences in each first text, taking the adjusted text as a first sample with the original sentence order of the first text as its label, and taking the set of labeled first samples as a first sample set;
extracting a second preset number of second texts from the first corpus, randomly masking a third preset number of words in each second text, taking the masked text as a second sample with the masked words as its label, and taking the set of labeled second samples as a second sample set;
extracting a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus, randomly replacing a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain replaced texts, setting the label of each replaced text to a first value and the label of each third text to a second value, and taking the set of labeled replaced texts and third texts as a third sample set;
taking the union of the first sample set, the second sample set and the third sample set as the third corpus.
5. The answer generation method according to claim 2, wherein sending the updated model parameters to the computing nodes comprises:
quantizing and compressing the updated model parameters to obtain compressed model parameters;
computing a standard message digest value of the compressed model parameters;
obtaining the public key corresponding to each computing node from a fourth database, encrypting the compressed model parameters and the standard message digest value with the public key to obtain ciphertext data, and distributing the ciphertext data to the corresponding computing node.
6. The answer generation method according to claim 2, wherein the model gradient is computed as:

$$g_{ti}=\frac{1}{T_{ti}}\sum_{j=1}^{T_{ti}}\nabla l\big(x_{t(ij)},\,s_{t(ij)}\big)$$

where $g_{ti}$ is the model gradient corresponding to the $i$-th computing node in the computer cluster at the $t$-th iteration round, $T_{ti}$ is the total number of training samples in the sub-corpus of the $i$-th computing node at round $t$, $x_{t(ij)}$ is the $j$-th training sample in that sub-corpus, $s_{t(ij)}$ is the label of that sample, and $l(\cdot)$ is the output of the initial question answering model.
7. An answer generation apparatus, wherein the apparatus comprises:
    a first training module, configured to establish a computer cluster, obtain a first corpus from a first database, and control the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model;
    a second training module, configured to obtain a second corpus from a second database, and control the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
    an entity recognition module, configured to parse an answer generation request issued by a user through a client, obtain the target question carried in the request, perform word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtain, from a third database, a target text matching the entity recognition result;
    an answer determination module, configured to input the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word and the probability that it is the ending word of the answer to the target question, and determine the target answer corresponding to the target question based on the starting-word and ending-word probabilities.
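The answer determination step is standard extractive-QA span selection: given per-token start and end probabilities, pick the span that scores best under both. A minimal sketch under that reading follows; the product-scoring rule and the span-length cap are common heuristics assumed here, not taken from the claims.

```python
def extract_answer(tokens, start_probs, end_probs, max_span_len=30):
    # Score every span (i, j) with i <= j < i + max_span_len by the product
    # of its start and end probabilities; return the best span's text.
    best_span, best_score = (0, 0), -1.0
    for i in range(len(tokens)):
        for j in range(i, min(i + max_span_len, len(tokens))):
            score = start_probs[i] * end_probs[j]
            if score > best_score:
                best_span, best_score = (i, j), score
    i, j = best_span
    return " ".join(tokens[i:j + 1])
```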
8. The answer generation apparatus according to claim 7, wherein controlling the computer cluster to perform first distributed training on the initial question answering model based on the first corpus comprises:
    setting a maximum number of iteration rounds for the first distributed training, and performing masking and labeling on the first corpus to obtain a labeled third corpus;
    obtaining hardware resource information of each computing node in the computer cluster, splitting the third corpus into multiple sub-corpora based on the hardware resource information, and distributing the sub-corpora to the computing nodes, so that each computing node trains the initial question answering model on its sub-corpus;
    receiving the model gradients fed back by the computing nodes, updating the model parameters of the initial question answering model based on the model gradients, and sending the updated model parameters to the computing nodes, so that each computing node updates the initial question answering model with the updated parameters and continues training; the first distributed training ends when the maximum number of iteration rounds is reached.
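Structurally this is a synchronous parameter-server loop. The sketch below shows the control flow only; `capacity`, `split_by_capacity`, `load`, `train_one_round`, and `update_parameters` are hypothetical helpers standing in for the hardware-aware split, per-node training, and the update rule detailed in claim 11.

```python
def first_distributed_training(cluster, third_corpus, params, max_rounds):
    # Split the corpus in proportion to each node's hardware capacity.
    shares = [node.capacity for node in cluster]
    for node, sub_corpus in zip(cluster, split_by_capacity(third_corpus, shares)):
        node.load(sub_corpus)
    for _ in range(max_rounds):
        # Each node trains on its sub-corpus and feeds back a gradient.
        gradients = [node.train_one_round(params) for node in cluster]
        # The coordinator updates the parameters and broadcasts them back.
        params = update_parameters(params, gradients)
    return params
```

Synchronous averaging keeps every node on identical parameters each round, which fits the claim's fixed maximum iteration count.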
9. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores an answer generation program executable by the at least one processor, and the answer generation program is executed by the at least one processor to enable the at least one processor to perform the following steps:
    establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model;
    obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
    parsing an answer generation request issued by a user through a client, obtaining the target question carried in the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining, from a third database, a target text matching the entity recognition result;
    inputting the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word and the probability that it is the ending word of the answer to the target question, and determining the target answer corresponding to the target question based on the starting-word and ending-word probabilities.
10. The electronic device according to claim 9, wherein controlling the computer cluster to perform first distributed training on the initial question answering model based on the first corpus comprises:
    setting a maximum number of iteration rounds for the first distributed training, and performing masking and labeling on the first corpus to obtain a labeled third corpus;
    obtaining hardware resource information of each computing node in the computer cluster, splitting the third corpus into multiple sub-corpora based on the hardware resource information, and distributing the sub-corpora to the computing nodes, so that each computing node trains the initial question answering model on its sub-corpus;
    receiving the model gradients fed back by the computing nodes, updating the model parameters of the initial question answering model based on the model gradients, and sending the updated model parameters to the computing nodes, so that each computing node updates the initial question answering model with the updated parameters and continues training; the first distributed training ends when the maximum number of iteration rounds is reached.
11. The electronic device according to claim 10, wherein updating the model parameters of the initial question answering model based on the model gradients comprises:
    calculating the average of the model gradients, substituting the average into a convergence parameter formula to obtain the convergence parameter of the initial question answering model, and substituting the average into an update parameter formula to obtain the update parameter of the initial question answering model;
    substituting the convergence parameter and the update parameter into a scaling rate formula to obtain the scaling rate of the initial question answering model, and substituting the initial model parameters and the initial learning rate of the initial question answering model into a learning rate formula to obtain a new learning rate for the initial question answering model;
    substituting the initial model parameters, the scaling rate, and the new learning rate into a model parameter formula to obtain new model parameters for the initial question answering model.
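The claims name these formulas without giving them. One consistent reading is a LAMB-style layer-wise adaptive step, where the "update parameter" and "convergence parameter" play the roles of first and second moments and the "scaling rate" that of a trust ratio; that mapping, and every constant below, is an assumption rather than the patent's actual formulas.

```python
import numpy as np

def lamb_style_update(theta, grad_avg, state, lr0,
                      beta1=0.9, beta2=0.999, eps=1e-6):
    # "Update parameter": moving average of the gradient mean.
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad_avg
    # "Convergence parameter": moving average of the squared gradient mean.
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad_avg ** 2
    update = state["m"] / (np.sqrt(state["v"]) + eps)
    # "Scaling rate": trust ratio between parameter norm and update norm.
    scaling = np.linalg.norm(theta) / (np.linalg.norm(update) + eps)
    # "New learning rate": base rate rescaled by the trust ratio.
    new_lr = lr0 * scaling
    # "New model parameters": one descent step with the rescaled rate.
    return theta - new_lr * update, state
```

The caller initializes `state` with zero arrays shaped like `theta`; the trust ratio keeps the step size proportional to the parameter scale, which is what makes large-batch distributed training of this kind stable.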
12. The electronic device according to claim 10, wherein performing masking and labeling on the first corpus to obtain a labeled third corpus comprises:
    extracting a first preset number of first texts from the first corpus, randomly shuffling the order of the sentences in each first text, taking the shuffled text as a first sample, taking the original sentence order of the first text as the label of the first sample, and taking the set of labeled first samples as a first sample set;
    extracting a second preset number of second texts from the first corpus, randomly masking a third preset number of words in each second text, taking the masked text as a second sample, taking the masked words as the label of the second sample, and taking the set of labeled second samples as a second sample set;
    extracting a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus, randomly replacing a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain a replaced text, setting the label of the replaced text to a first value, setting the label of the third texts to a second value, and taking the set of labeled replaced texts and third texts as a third sample set;
    taking the union of the first sample set, the second sample set, and the third sample set as the third corpus.
13. The electronic device according to claim 10, wherein sending the updated model parameters to each computing node comprises:
    performing quantization and compression on the updated model parameters to obtain compressed model parameters;
    calculating a standard message digest value of the compressed model parameters;
    obtaining the public key corresponding to each computing node from a fourth database, encrypting the compressed model parameters and the standard message digest value with the public key to obtain ciphertext data, and distributing the ciphertext data to the corresponding computing node.
14. The electronic device according to claim 10, wherein the model gradient is calculated as:
    g_{ti} = \frac{1}{T_{ti}} \sum_{j=1}^{T_{ti}} \nabla l\left(x_{t(ij)}, s_{t(ij)}\right)
    where g_{ti} is the model gradient of the i-th computing node in the computer cluster at the t-th iteration round; T_{ti} is the total number of training samples in the sub-corpus of the i-th computing node at the t-th iteration round; x_{t(ij)} is the j-th training sample in that sub-corpus; s_{t(ij)} is the label of the j-th training sample in that sub-corpus; and l() is the output of the initial question answering model.
15. A computer-readable storage medium, wherein an answer generation program is stored on the computer-readable storage medium, and the answer generation program is executable by one or more processors to perform the following steps:
    establishing a computer cluster, obtaining a first corpus from a first database, and controlling the computer cluster to perform first distributed training on an initial question answering model based on the first corpus to obtain a first question answering model;
    obtaining a second corpus from a second database, and controlling the computer cluster to perform second distributed training on the first question answering model based on the second corpus to obtain a second question answering model;
    parsing an answer generation request issued by a user through a client, obtaining the target question carried in the request, performing word segmentation and entity recognition on the target question to obtain an entity recognition result, and obtaining, from a third database, a target text matching the entity recognition result;
    inputting the target text and the target question into the second question answering model to obtain, for each word in the target text, the probability that the word is the starting word and the probability that it is the ending word of the answer to the target question, and determining the target answer corresponding to the target question based on the starting-word and ending-word probabilities.
16. The computer-readable storage medium according to claim 15, wherein controlling the computer cluster to perform first distributed training on the initial question answering model based on the first corpus comprises:
    setting a maximum number of iteration rounds for the first distributed training, and performing masking and labeling on the first corpus to obtain a labeled third corpus;
    obtaining hardware resource information of each computing node in the computer cluster, splitting the third corpus into multiple sub-corpora based on the hardware resource information, and distributing the sub-corpora to the computing nodes, so that each computing node trains the initial question answering model on its sub-corpus;
    receiving the model gradients fed back by the computing nodes, updating the model parameters of the initial question answering model based on the model gradients, and sending the updated model parameters to the computing nodes, so that each computing node updates the initial question answering model with the updated parameters and continues training; the first distributed training ends when the maximum number of iteration rounds is reached.
17. The computer-readable storage medium according to claim 16, wherein updating the model parameters of the initial question answering model based on the model gradients comprises:
    calculating the average of the model gradients, substituting the average into a convergence parameter formula to obtain the convergence parameter of the initial question answering model, and substituting the average into an update parameter formula to obtain the update parameter of the initial question answering model;
    substituting the convergence parameter and the update parameter into a scaling rate formula to obtain the scaling rate of the initial question answering model, and substituting the initial model parameters and the initial learning rate of the initial question answering model into a learning rate formula to obtain a new learning rate for the initial question answering model;
    substituting the initial model parameters, the scaling rate, and the new learning rate into a model parameter formula to obtain new model parameters for the initial question answering model.
18. The computer-readable storage medium according to claim 16, wherein performing masking and labeling on the first corpus to obtain a labeled third corpus comprises:
    extracting a first preset number of first texts from the first corpus, randomly shuffling the order of the sentences in each first text, taking the shuffled text as a first sample, taking the original sentence order of the first text as the label of the first sample, and taking the set of labeled first samples as a first sample set;
    extracting a second preset number of second texts from the first corpus, randomly masking a third preset number of words in each second text, taking the masked text as a second sample, taking the masked words as the label of the second sample, and taking the set of labeled second samples as a second sample set;
    extracting a fourth preset number of third texts and a fifth preset number of fourth texts from the first corpus, randomly replacing a sixth preset number of sentences in each fourth text with sentences extracted from other texts to obtain a replaced text, setting the label of the replaced text to a first value, setting the label of the third texts to a second value, and taking the set of labeled replaced texts and third texts as a third sample set;
    taking the union of the first sample set, the second sample set, and the third sample set as the third corpus.
19. The computer-readable storage medium according to claim 16, wherein sending the updated model parameters to each computing node comprises:
    performing quantization and compression on the updated model parameters to obtain compressed model parameters;
    calculating a standard message digest value of the compressed model parameters;
    obtaining the public key corresponding to each computing node from a fourth database, encrypting the compressed model parameters and the standard message digest value with the public key to obtain ciphertext data, and distributing the ciphertext data to the corresponding computing node.
20. The computer-readable storage medium according to claim 16, wherein the model gradient is calculated as:
    g_{ti} = \frac{1}{T_{ti}} \sum_{j=1}^{T_{ti}} \nabla l\left(x_{t(ij)}, s_{t(ij)}\right)
    where g_{ti} is the model gradient of the i-th computing node in the computer cluster at the t-th iteration round; T_{ti} is the total number of training samples in the sub-corpus of the i-th computing node at the t-th iteration round; x_{t(ij)} is the j-th training sample in that sub-corpus; s_{t(ij)} is the label of the j-th training sample in that sub-corpus; and l() is the output of the initial question answering model.
PCT/CN2021/082863 2021-01-28 2021-03-25 Answer generation method and apparatus, electronic device, and readable storage medium WO2022160442A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110124138.2A CN112800178A (en) 2021-01-28 2021-01-28 Answer generation method and device, electronic equipment and readable storage medium
CN202110124138.2 2021-01-28

Publications (1)

Publication Number Publication Date
WO2022160442A1 (en)

Family

ID=75812753

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082863 WO2022160442A1 (en) 2021-01-28 2021-03-25 Answer generation method and apparatus, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN112800178A (en)
WO (1) WO2022160442A1 (en)

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113593557B (en) * 2021-07-27 2023-09-12 中国平安人寿保险股份有限公司 Distributed session method, device, computer equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107957989A (en) * 2017-10-23 2018-04-24 阿里巴巴集团控股有限公司 Term vector processing method, device and equipment based on cluster
CN108595417A (en) * 2018-04-03 2018-09-28 成都量子矩阵科技有限公司 Distributed training method based on Word2vector
CN111241304A (en) * 2020-01-16 2020-06-05 平安科技(深圳)有限公司 Answer generation method based on deep learning, electronic device and readable storage medium
WO2020133358A1 (en) * 2018-12-29 2020-07-02 深圳市优必选科技有限公司 Chat corpus cleaning method, apparatus, computer device and storage medium
CN111400470A (en) * 2020-03-13 2020-07-10 深圳市腾讯计算机系统有限公司 Question processing method and device, computer equipment and storage medium
CN112183091A (en) * 2020-10-12 2021-01-05 深圳壹账通智能科技有限公司 Question and answer pair generation method and device, electronic equipment and readable storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20200202243A1 (en) * 2019-03-05 2020-06-25 Allegro Artificial Intelligence Ltd Balanced federated learning
CN111414457A (en) * 2020-03-20 2020-07-14 深圳前海微众银行股份有限公司 Intelligent question-answering method, device, equipment and storage medium based on federal learning
CN111931950B (en) * 2020-09-28 2021-01-26 支付宝(杭州)信息技术有限公司 Method and system for updating model parameters based on federal learning
CN112231756B (en) * 2020-10-29 2022-05-27 湖南科技学院 FL-EM-GMM medical user privacy protection method and system
CN112257873A (en) * 2020-11-11 2021-01-22 深圳前海微众银行股份有限公司 Training method, device, system, equipment and storage medium of machine learning model

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN115934922A (en) * 2023-03-09 2023-04-07 杭州心识宇宙科技有限公司 Conversation service execution method and device, storage medium and electronic equipment
CN115934922B (en) * 2023-03-09 2024-01-30 杭州心识宇宙科技有限公司 Dialogue service execution method and device, storage medium and electronic equipment
CN116523031A (en) * 2023-07-05 2023-08-01 深圳须弥云图空间科技有限公司 Training method of language generation model, language generation method and electronic equipment
CN116523031B (en) * 2023-07-05 2024-05-10 深圳须弥云图空间科技有限公司 Training method of language generation model, language generation method and electronic equipment

Also Published As

Publication number Publication date
CN112800178A (en) 2021-05-14


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21922041; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21922041; Country of ref document: EP; Kind code of ref document: A1)