CN111428448B - Text generation method, device, computer equipment and readable storage medium - Google Patents
- Publication number
- CN111428448B (application CN202010136551.6A)
- Authority
- CN
- China
- Prior art keywords
- text
- sample
- data
- generator
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a text generation method, apparatus, computer device, and readable storage medium, belonging to the field of text processing. The method generates target text data from target guidance data through a text-generation adversarial network model obtained by pre-training, solves the problem that a network cannot be updated from discrete outputs, and achieves the goal of generating text sentences from sentence-head data using an adversarial network model.
Description
Technical Field
The present invention relates to the field of text processing, and in particular, to a text generation method, apparatus, computer device, and readable storage medium.
Background
In an intelligent interview scenario, an artificial intelligence (AI) system needs both to ask a candidate preset questions and to pose open questions based on the actual situation, so as to test the candidate's ability to respond on the spot. Open questions require the AI to generate the question text with a generative model.
Current generative models mainly adopt the generative adversarial network (GAN). Because a GAN must update its parameter variables based on continuous output data, it has mainly been applied to image processing: image generation tasks include unsupervised generation, label-conditioned generation, super-resolution restoration, automatic colorization, street-view generation, and so on, and the generated images are lifelike enough that the human eye can hardly tell real from fake.
When a GAN is applied to a text generation task, at each step of the generation process the generator outputs a probability distribution over the vocabulary for the next word based on the text sequence generated so far, and then selects a word. The output is therefore discrete data, and discrete data cannot be used to train and update the network, so current GANs cannot be applied to text generation tasks.
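To make this obstacle concrete, the following minimal sketch (an illustration added for clarity, using PyTorch as an assumed framework; the patent names no library) shows that discrete word selection blocks gradient flow:

```python
# Choosing a discrete word id (argmax or sampling) yields an integer tensor
# through which no gradient can flow back to the generator's weights.
import torch

logits = torch.randn(1, 10, requires_grad=True)  # next-word scores over a 10-word vocabulary
word_id = logits.argmax(dim=-1)                  # discrete word selection
print(word_id.requires_grad)                     # False: the selection step is non-differentiable
```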
Disclosure of Invention
Aiming at the problem that existing generative adversarial networks support only continuous outputs, a text generation method, apparatus, computer device, and readable storage medium are provided, based on a text-generation adversarial network that can be updated from discrete data.
In order to achieve the above object, the present invention provides a text generation method, including the steps of:
collecting answer data generated by a business object in a question-answer scenario;
extracting the answer data and acquiring target guidance data;
generating target text data from the target guidance data through a text-generation adversarial network model obtained by pre-training;
wherein the target guidance data is sentence-head data of the target text data.
In one embodiment, before the step of generating target text data from the target guidance data through the pre-trained text-generation adversarial network model, the method includes:
obtaining a sample guidance set and a sample text set, wherein the sample guidance set comprises at least one piece of sample guidance data, the sample text set comprises at least one piece of sample text data, and the sample guidance data is sentence-head data of the sample text data;
training an initial adversarial network model according to the sample guidance set and the sample text set, and obtaining the text-generation adversarial network model.
In one embodiment, the initial adversarial network model includes a generator and a discriminator, and the step of training the initial adversarial network model according to the sample guidance set and the sample text set and obtaining the text-generation adversarial network model includes:
generating, by the generator and from at least one piece of sample guidance data in the sample guidance set, at least one piece of sample text data;
simulating the at least one piece of sample text data using Monte Carlo simulation and obtaining a plurality of pieces of sample simulated text data;
identifying the plurality of pieces of sample simulated text data by the discriminator according to target text data in the sample text set, and updating the parameter values of the generator according to the identification result;
updating the discriminator based on the updated generator and according to a loss function;
cyclically updating the generator and the discriminator until the initial adversarial network model meets a preset convergence condition, and obtaining the text-generation adversarial network model formed by the updated generator.
In one embodiment, the step of generating, by the generator and from at least one piece of sample guidance data in the sample guidance set, at least one piece of sample text data comprises:
calculating from the sample guidance data through the generator, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the sample guidance data;
calculating from the first sample word through the generator, obtaining the second sample word with the highest probability in the vocabulary, appending the second sample word after the first sample word, and cyclically executing these steps until sample text data of a preset length is obtained.
In one embodiment, the step of simulating the at least one piece of sample text data using Monte Carlo simulation and obtaining a plurality of pieces of sample simulated text data comprises:
simulating the words in each piece of sample text data one by one using Monte Carlo simulation, and generating a plurality of pieces of sample simulated text data corresponding to the sample text data.
In one embodiment, the step of identifying the plurality of pieces of sample simulated text data by the discriminator according to the target text data in the sample text set, and updating the parameter values of the generator according to the identification result, includes:
identifying the plurality of pieces of sample simulated text data by the discriminator according to the target text data in the sample text set, and obtaining a state-value function from the identification result;
calculating an objective function from the state-value function, and updating the parameter values of the generator according to the objective function.
In one embodiment, the step of generating target text data from the target guidance data through the pre-trained text-generation adversarial network model includes:
calculating from the target guidance data using the generator of the text-generation adversarial network model, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the target guidance data;
calculating from the first sample word using the generator, obtaining the second sample word with the highest probability in the vocabulary, appending the second sample word after the first sample word, and cyclically executing these steps until target text data of a preset length is obtained.
In order to achieve the above object, the present invention further provides a text generation apparatus, including:
a collection unit, configured to collect answer data generated by a business object in a question-answer scenario;
an acquisition unit, configured to extract the answer data and acquire target guidance data;
a generation unit, configured to generate target text data from the target guidance data through a text-generation adversarial network model obtained by pre-training;
wherein the target guidance data is sentence-head data of the target text data.
To achieve the above object, the present invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above method.
The beneficial effects of the technical solution are as follows:
According to the technical solution, the text generation method, apparatus, computer device, and readable storage medium generate target text data from target guidance data (such as sentence-head data) through a pre-trained text-generation adversarial network model, solving the problem that discrete outputs cannot be used to update the network and achieving the goal of generating text sentences (such as text questions) from sentence-head data.
Drawings
FIG. 1 is a method flow diagram of one embodiment of a text generation method of the present invention;
FIG. 2 is a method flow diagram of one embodiment of obtaining a text-generation adversarial network model;
FIG. 3 is a method flow diagram of one embodiment of training an initial adversarial network model based on a sample guidance set and a sample text set to obtain a text-generation adversarial network model;
FIG. 4 is a block diagram of one embodiment of a text generation apparatus according to the present invention;
FIG. 5 is a schematic diagram of the hardware architecture of an embodiment of a computer device according to the present invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions, and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The text generation method, apparatus, computer device, and readable storage medium are suitable for business fields such as insurance and finance, and provide loan, insurance, and finance systems with a way to automatically generate open text questions so that a candidate's thinking ability can be tested. The invention generates target text data from target guidance data (such as sentence-head data) through a pre-trained text-generation adversarial network model, solving the problem that discrete outputs cannot be used to update the network and achieving the goal of generating text sentences (such as text questions) from sentence-head data.
Example 1
Referring to FIG. 1, the text generation method of this embodiment includes the following steps:
S1, collecting answer data generated by a business object in a question-answer scenario;
In this step, the business object may be a consulting user, a buyer on an online transaction platform, or an interviewee in an interview process. The answer data may be collected by a collection device such as an audio receiving device, a microphone, or a mobile terminal with recording capability.
The text generation method in this embodiment is mainly applied to dialogue scenarios (with at least two users): a question text is generated based on the answer information of a target object so that the target object can answer it. For example, when the text generation method is applied to an interview scenario, an open text question is generated from keywords provided by the interviewee.
S2, extracting the answer data and acquiring target guidance data;
In step S2, semantic analysis may be performed on the answer data to extract keywords, which are used as the target guidance data; alternatively, the answer data may be parsed to extract nouns, which are used as the target guidance data.
It should be noted that the target guidance data may be keywords, or the words at the beginning of a sentence.
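A minimal sketch of this extraction step follows; the use of the jieba part-of-speech tagger and the max_words cap are illustrative assumptions, since the patent does not name a specific analyzer:

```python
# Extract nouns from the answer data to serve as target guidance data.
import jieba.posseg as pseg  # Chinese tokenizer with part-of-speech tags

def extract_guidance(answer_text, max_words=3):
    """Return up to max_words nouns from the answer text as guidance data."""
    nouns = [pair.word for pair in pseg.cut(answer_text)
             if pair.flag.startswith("n")]  # 'n*' flags mark noun categories
    return nouns[:max_words]

# Hypothetical usage: an interviewee's answer yields candidate guidance words.
print(extract_guidance("我最近在做自然语言处理的项目"))
```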
S3, generating target text data from the target guidance data through a text-generation adversarial network model obtained by pre-training;
It should be noted that the target guidance data is sentence-head data of the target text data. For example, if the target guidance data is "today", the target text data may be "how is the weather today?". The target guidance data may also be two or three words; no limitation is imposed here.
Referring to FIG. 2, before step S3 is performed, the step of obtaining the text-generation adversarial network model may include:
S31, acquiring a sample guidance set and a sample text set, wherein the sample guidance set comprises at least one piece of sample guidance data, the sample text set comprises at least one piece of sample text data, and the sample guidance data is sentence-head data of the sample text data;
In this embodiment, the sample guidance set is a sequence composed of sample guidance data (sentence-head data), and the sample text set is a sequence of real text data composed of sample text data (complete sentences). The sample guidance data is the sentence-head data of the real text data.
S32, training an initial adversarial network model according to the sample guidance set and the sample text set, and obtaining the text-generation adversarial network model.
When a generative adversarial network processes images, the pixel value at each point of the generated image is a continuous value, so the computation graph of the whole network, from the generator's weights through the generator's output to the discriminator's weights and classification output, is differentiable: errors can be back-propagated normally, and gradients and weights can be updated normally. In text generation, however, the generator actually outputs a sequence. In each round it outputs a probability distribution over the vocabulary for the next word based on the text sequence generated so far, and then selects the word with the highest probability. This selection step is not differentiable, and the generator outputs discrete tokens; during training, error back-propagation stops there, so the per-token gradient updates that image generation applies to pixel values cannot be performed, and the generator's weights cannot be updated. On the other hand, the discriminator can accept a complete text sequence as input and output whether the sentence is real or fake, but it cannot judge an unfinished sentence that the generator has produced only halfway, so it cannot supervise the generator's training on each word of the generated text sequence.
Therefore, in the training of the adversarial network model in this embodiment, to solve the non-differentiability caused by the generator's discrete output, the generation of a text sequence is treated as a sequential decision process and the policy gradient method from reinforcement learning is adopted: the discriminator's judgment result is taken as the reward, the partial text generated so far is taken as the state, the generator is taken as the agent, predicting the next word is the action, and the generator itself is the policy to be updated. This resolves the non-differentiability of the loss function under discrete outputs. To evaluate unfinished sequences, this embodiment adopts Monte Carlo search: starting from the sequence generated so far, the generator keeps generating until the sequence is complete, the discriminator judges the completed sequence, the simulation is repeated several times, and the mean of the final rewards is used as the reward estimate for the current unfinished sequence.
It should be noted that the initial adversarial network model comprises a generator and a discriminator. Referring to FIG. 3, in step S32, training the initial adversarial network model according to the sample guidance set and the sample text set and obtaining the text-generation adversarial network model includes:
By way of example and not limitation, the generator may employ a sequence-output long short-term memory network (LSTM) for generating a text sequence from a given initial state; the discriminator may employ a binary-classification LSTM that receives the generator's output text and real text and judges whether the output text is real or fake.
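A minimal PyTorch sketch of these two networks follows; the layer sizes (embedding and hidden dimensions) are illustrative assumptions rather than values specified by the patent:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """LSTM generator G_theta: maps a token sequence to next-word logits."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) ids -> logits: (batch, seq_len, vocab_size)
        output, state = self.lstm(self.embed(tokens), state)
        return self.out(output), state

class Discriminator(nn.Module):
    """Binary-classification LSTM D_phi: scores a sequence as real (1) or fake (0)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.cls = nn.Linear(hidden_dim, 1)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))           # final hidden state
        return torch.sigmoid(self.cls(h[-1])).squeeze(-1)   # (batch,) real-probabilities
```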
S321, generating at least one piece of sample text data from at least one piece of sample guidance data in the sample guidance set through the generator;
Further, step S321 may include:
calculating from the sample guidance data through the generator, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the sample guidance data;
calculating from the first sample word through the generator, obtaining the second sample word with the highest probability in the vocabulary, appending the second sample word after the first sample word, and so on cyclically until sample text data of a preset length is obtained.
In this step, the generator G_θ and the discriminator D_φ are initialized. The real text set is S = {X_{1~T}}; the sentence length of each real text in the set is T, and sentences shorter than T are zero-padded at the tail. The sample guidance set is the set of first words {y_1}.
The word set {y_1} is input into the generator G_θ. The input layer of G_θ maps each input word to the token of the corresponding word in the vocabulary and produces its embedding representation. In practice the input to G_θ is (y_1, y_2, …, y_{t-1}); from this input, G_θ outputs the probability of every word in the vocabulary being the next word, takes the word with the highest probability as y_t, and so on in a loop until the end-of-sentence word y_T, yielding a generated sample text set {Y_{1~T}} of length T (shorter sentences are zero-padded).
Here (y_1, y_2, …, y_{t-1}) denotes an incomplete sentence composed of t-1 words: y_1 is the 1st word of the sentence, y_2 the 2nd word, y_{t-1} the (t-1)-th word, and y_T the T-th (sentence-final) word.
In this step only the generator G_θ is used: a word y_1 is input, G_θ embeds y_1 and passes it to the LSTM, and the generated token sequence together with the corresponding words in the vocabulary is output, giving the generated text sequence (y_1, y_2, …, y_T).
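The decoding loop of step S321 can be sketched as follows, reusing the Generator class from the sketch above (greedy argmax decoding to a preset length T; the argument shapes are illustrative):

```python
import torch

@torch.no_grad()
def generate_greedy(generator, guidance, T):
    """guidance: (batch, 1) sentence-head token ids -> (batch, T) sequences."""
    seq = guidance
    tokens, state = guidance, None
    for _ in range(T - guidance.size(1)):
        logits, state = generator(tokens, state)                # feed the newest word
        tokens = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # highest-probability next word
        seq = torch.cat([seq, tokens], dim=1)                   # append at the end
    return seq
```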
S322, simulating the at least one piece of sample text data using Monte Carlo simulation and obtaining a plurality of pieces of sample simulated text data;
Further, step S322 includes:
simulating the words in each piece of sample text data one by one using Monte Carlo simulation, and generating a plurality of pieces of sample simulated text data corresponding to the sample text data.
In this implementation, for each sequence in the sample text set {Y_{1~T}}, taking the sequence (y_1, y_2, …, y_T) as an example, each word y_t in the sequence is traversed and N Monte Carlo simulations are performed. Unlike the earlier step of selecting the highest-probability word as y_t, here each continuation word is sampled from the multinomial distribution output by the generator G_θ, repeating until the end-of-sentence word y_T is reached, so as to obtain N different complete sample simulated text sets {Y_{1~T}^1, Y_{1~T}^2, …, Y_{1~T}^N}.
The number of simulations for words at different positions in a sentence may be the same or different.
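The Monte Carlo search of step S322 can be sketched as below, again reusing the Generator from the earlier sketch; note that continuations are drawn with torch.multinomial from the output distribution rather than taken by argmax, and the rollout count N is an illustrative choice:

```python
import torch

@torch.no_grad()
def rollout(generator, prefix, T, N=16):
    """prefix: (batch, t) partial sequence -> (N, batch, T) completed samples."""
    samples = []
    for _ in range(N):
        seq = prefix
        logits, state = generator(seq)                        # encode the prefix
        while seq.size(1) < T:
            probs = torch.softmax(logits[:, -1, :], dim=-1)   # next-word distribution
            nxt = torch.multinomial(probs, num_samples=1)     # stochastic sample, not argmax
            seq = torch.cat([seq, nxt], dim=1)
            logits, state = generator(nxt, state)
        samples.append(seq)
    return torch.stack(samples)
```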
S323, identifying the plurality of pieces of sample simulated text data through the discriminator according to the target text data in the sample text set, and updating the parameter values of the generator according to the identification result;
Further, step S323 may include:
identifying the plurality of pieces of sample simulated text data by the discriminator according to the target text data in the sample text set, and obtaining a state-value function from the identification result;
calculating an objective function from the state-value function, and updating the parameter values of the generator according to the objective function.
In one embodiment, the obtained sample simulated text set {Y_{1~T}^1, Y_{1~T}^2, …, Y_{1~T}^N} is input into the discriminator D_φ for binary classification, and each sample simulated text is compared with the corresponding real text: if they are consistent, the sample simulated text produced by the generator is judged real (label 1); otherwise it is judged fake (label 0). For a complete sentence, the output of the discriminator D_φ is taken directly as the state value; for an incomplete sentence, the discrimination results of the N complete sentences obtained by Monte Carlo simulation are averaged. In summary, the state-value function can be expressed as:

$$Q(Y_{1\sim t-1}, y_t) = \begin{cases} \dfrac{1}{N}\sum_{i=1}^{N} D_\phi\!\left(Y_{1\sim T}^{\,i}\right), & t < T \\ D_\phi\!\left(Y_{1\sim T}\right), & t = T \end{cases}$$

where i indexes the N Monte Carlo simulations.
The parameters θ of the generator G_θ are updated according to the state-value function. The objective of the generator is to produce samples realistic enough to deceive the discriminator, i.e., to maximize the reward obtained under the policy G_θ:

$$J(\theta) = \mathbb{E}\!\left[\sum_{t=1}^{T} G_\theta\!\left(y_t \mid Y_{1\sim t-1}\right) Q\!\left(Y_{1\sim t-1}, y_t\right)\right]$$

where G_θ(y_t | Y_{1~t-1}) denotes the policy output, which can essentially be regarded as a probability: the probability value of y_t in the vocabulary; Y_{1~t-1} is the sequence of all words generated before y_t. The parameter θ is the weight parameter of the generator G_θ, and the parameters of G_θ are updated on J(θ); in other words, the policy gradient comes from J(θ):

$$\theta \leftarrow \theta + \alpha_\theta \nabla_\theta J(\theta)$$

where α_θ is the learning rate.
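Putting the state-value estimate and the policy gradient together, one generator update of step S323 might be sketched as follows. It assumes the Generator/Discriminator and rollout() sketches above, with an externally created optimizer standing in for the learning rate α_θ:

```python
import torch

def generator_step(generator, discriminator, seqs, T, opt_g, N=16):
    """seqs: (batch, T) sequences produced by the generator."""
    # Rewards Q(Y_{1~t-1}, y_t): averaged rollout scores for prefixes,
    # the direct discriminator score for the complete sentence.
    with torch.no_grad():
        rewards = []
        for t in range(1, T):
            if t < T - 1:
                completed = rollout(generator, seqs[:, : t + 1], T, N)
                q = torch.stack([discriminator(s) for s in completed]).mean(dim=0)
            else:
                q = discriminator(seqs)
            rewards.append(q)
        q_values = torch.stack(rewards, dim=1)               # (batch, T-1)

    # log G_theta(y_t | Y_{1~t-1}) for each word the generator actually chose
    logits, _ = generator(seqs[:, :-1])
    log_probs = torch.log_softmax(logits, dim=-1)
    taken = log_probs.gather(2, seqs[:, 1:].unsqueeze(-1)).squeeze(-1)

    loss = -(taken * q_values).mean()                        # ascend J(theta)
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()
```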
S324, updating the discriminator based on the updated generator and according to a loss function;
In this step, the updated generator G_θ is used to generate a set of text sequences {Y_{1~T}}, and at the same time an equal number of text sequences {X_{1~T}} are selected from the real text set S = {X_{1~T}}; both are input into the discriminator D_φ for binary classification, where the loss function is the binary log loss:

$$J(\phi) = -\,\mathbb{E}_{X_{1\sim T} \sim S}\!\left[\log D_\phi\!\left(X_{1\sim T}\right)\right] - \mathbb{E}_{Y_{1\sim T} \sim G_\theta}\!\left[\log\!\left(1 - D_\phi\!\left(Y_{1\sim T}\right)\right)\right]$$

The parameters of D_φ are updated on J(φ):

$$\phi \leftarrow \phi - \alpha_\phi \nabla_\phi J(\phi)$$

where α_φ is the learning rate.
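One discriminator update of step S324 can be sketched under the binary log loss, using PyTorch's binary cross-entropy as an assumed implementation:

```python
import torch
import torch.nn as nn

def discriminator_step(discriminator, real, fake, opt_d):
    """real, fake: (batch, T) token sequences; fake comes from the generator."""
    bce = nn.BCELoss()
    loss = bce(discriminator(real), torch.ones(real.size(0))) \
         + bce(discriminator(fake), torch.zeros(fake.size(0)))
    opt_d.zero_grad(); loss.backward(); opt_d.step()
    return loss.item()
```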
S325, cyclically updating the generator and the discriminator until the initial adversarial network model meets a preset convergence condition, and obtaining the text-generation adversarial network model formed by the updated generator.
In this step, in each training round the generator is trained n_G times and the discriminator is trained n_D times, repeatedly, until the model meets the preset convergence condition. For example, n_D > n_G may be set to ensure that the discriminator can correctly guide the generator's updates.
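The alternating schedule of step S325 can be sketched as an outer loop over the two update functions above; the round count, learning rates, and n_G/n_D values are illustrative (the embodiment only suggests, for example, n_D > n_G):

```python
import torch

def train(generator, discriminator, real_batches, guidance, T,
          rounds=100, n_g=1, n_d=3):
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)      # alpha_theta
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)  # alpha_phi
    for _ in range(rounds):
        for _ in range(n_g):                       # train the generator n_G times
            seqs = generate_greedy(generator, guidance, T)
            generator_step(generator, discriminator, seqs, T, opt_g)
        for _ in range(n_d):                       # train the discriminator n_D times
            real = next(real_batches)              # batch of real sentences X_{1~T}
            fake = generate_greedy(generator, guidance, T)
            discriminator_step(discriminator, real, fake, opt_d)
```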
In step S3, the step of generating target text data from the target guidance data through the pre-trained text-generation adversarial network model includes:
calculating from the target guidance data using the generator of the text-generation adversarial network model, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the target guidance data;
calculating from the first sample word using the generator, obtaining the second sample word with the highest probability in the vocabulary, appending the second sample word after the first sample word, and so on cyclically until target text data of a preset length is obtained. In this way, target text data for questioning is generated from the answer data, achieving open question-and-answer based on the business object's answers and making it convenient to test the business object's ability to respond to open questions on the spot.
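At inference time only the retained generator is needed; a usage sketch with a hypothetical four-word vocabulary follows (all token ids and words below are illustrative, and the trained weights are assumed to be loaded):

```python
import torch

vocab = {"今天": 0, "天气": 1, "怎么样": 2, "？": 3}     # hypothetical vocabulary
id2word = {i: w for w, i in vocab.items()}

generator = Generator(vocab_size=len(vocab))             # trained weights assumed loaded
guidance = torch.tensor([[vocab["今天"]]])               # target guidance data: "今天"
seq = generate_greedy(generator, guidance, T=4)
print("".join(id2word[int(i)] for i in seq[0]))          # e.g. "今天天气怎么样？" once trained
```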
In this embodiment, the text generation method is based on an adversarial long short-term memory network and policy gradients, and uses an LSTM-based discriminator-generator structure, so the tasks of generating a text sequence and judging the authenticity of text can be accomplished accurately. Through adversarial training, the discriminator can dynamically update its own parameters and continuously improve its recognition ability while providing suitable guidance to the generator, which has more potential than evaluating generated text quality purely against static references. By borrowing the idea of reinforcement learning, the sequence generation process is converted into a sequential decision process, solving the non-differentiability of the loss function caused by discrete outputs and making adversarial training possible. Monte Carlo search is used to complete the sequence at every step by policy simulation and to score the completions in the discriminator, with the mean taken as the reward value of the current time step, solving the problem that a reward cannot be obtained directly for an unfinished sequence. In addition, only the generator part needs to be retained after the training stage, and compared with other techniques such as Gumbel-Softmax that make the discrete step differentiable, no additional parameters need to be trained and the model occupies less memory.
Example 2
As shown in FIG. 4, the present invention also provides a text generation apparatus 1, including a collection unit 11, an acquisition unit 12, and a generation unit 13, wherein:
the collection unit 11 is configured to collect answer data generated by a business object in a question-answer scenario.
The business object may be a consulting user, a buyer on an online transaction platform, or an interviewee in an interview process. The answer data may be collected by a collection device such as an audio receiving device, a microphone, or a mobile terminal with recording capability.
The text generation apparatus 1 in this embodiment is mainly applied to dialogue scenarios (with at least two users): a question text is generated based on the answer information of a target object so that the target object can answer it. For example, when the text generation apparatus 1 is applied to an interview scenario, an open text question is generated from keywords provided by the interviewee.
The acquisition unit 12 is configured to extract the answer data and acquire the target guidance data.
The acquisition unit 12 performs semantic analysis on the answer data to extract keywords, which are used as the target guidance data; alternatively, it parses the answer data to extract nouns, which are used as the target guidance data.
The generation unit 13 is configured to generate target text data from the target guidance data through a text-generation adversarial network model obtained by pre-training;
wherein the target guidance data is sentence-head data of the target text data.
Specifically, the generation unit 13 calculates from the target guidance data using the generator of the text-generation adversarial network model, obtains the first sample word with the highest probability in the vocabulary, and appends the first sample word to the end of the target guidance data;
the generator then calculates from the first sample word, obtains the second sample word with the highest probability in the vocabulary, appends the second sample word after the first sample word, and so on until target text data of a preset length is obtained.
In this embodiment, the text generation apparatus 1 is based on an adversarial long short-term memory network and policy gradients and uses an LSTM-based discriminator-generator structure, so the tasks of generating a text sequence and judging the authenticity of text can be accomplished accurately. Through adversarial training, the discriminator can dynamically update its own parameters and continuously improve its recognition ability while providing suitable guidance to the generator, which has more potential than evaluating generated text quality purely against static references. By borrowing the idea of reinforcement learning, the sequence generation process is converted into a sequential decision process, solving the non-differentiability of the loss function caused by discrete outputs and making adversarial training possible. Monte Carlo search is used to complete the sequence at every step by policy simulation and to score the completions in the discriminator, with the mean taken as the reward value of the current time step, solving the problem that a reward cannot be obtained directly for an unfinished sequence. In addition, only the generator part needs to be retained after the training stage, and compared with other techniques such as Gumbel-Softmax that make the discrete step differentiable, no additional parameters need to be trained and the model occupies less memory.
Example 3
In order to achieve the above object, the present invention further provides a computer device 2. The components of the text generation apparatus 1 of the second embodiment may be distributed across multiple computer devices 2, and the computer device 2 may be a smartphone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server, or cabinet server (either a stand-alone server or a server cluster composed of multiple servers) that executes programs, or the like. The computer device 2 of this embodiment includes at least, but is not limited to: a memory 21, a processor 23, a network interface 22, and the text generation apparatus 1, which can be communicatively connected to each other through a system bus (refer to FIG. 5).
In this embodiment, the memory 21 includes at least one type of computer-readable storage medium, such as flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, or optical disk. In some embodiments, the memory 21 may be an internal storage unit of the computer device 2, such as a hard disk or memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card provided on the computer device 2. Of course, the memory 21 may also comprise both an internal storage unit and an external storage device of the computer device 2. In this embodiment, the memory 21 is typically used to store the operating system and the various application software installed on the computer device 2, such as the program code of the text generation method of the first embodiment. Further, the memory 21 may be used to temporarily store various types of data that have been output or are to be output.
The processor 23 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 23 is typically used to control the overall operation of the computer device 2, for example, to perform control and processing related to data interaction or communication with the computer device 2. In this embodiment, the processor 23 is configured to run the program code or process the data stored in the memory 21, for example, to run the text generation apparatus 1.
The network interface 22 may comprise a wireless network interface or a wired network interface, and is typically used to establish communication connections between the computer device 2 and other computer devices 2. For example, the network interface 22 is used to connect the computer device 2 to an external terminal through a network and to establish a data transmission channel and communication connection between the computer device 2 and the external terminal. The network may be an intranet, the Internet, a Global System for Mobile communication (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or another wireless or wired network.
It is noted that FIG. 5 only shows a computer device 2 having components 21-23, but it should be understood that not all of the illustrated components must be implemented; more or fewer components may be implemented instead.
In this embodiment, the text generation apparatus 1 stored in the memory 21 may be further divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (the processor 23 in this embodiment) to complete the present invention.
Example 4
In order to achieve the above object, the present invention also provides a computer-readable storage medium, which includes various storage media such as flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server, or App application store, and on which a computer program is stored that performs the corresponding functions when executed by the processor 23. The computer-readable storage medium of this embodiment is used to store the text generation apparatus 1, and when executed by the processor 23 it implements the text generation method of the first embodiment.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structural or process transformation made using the contents of this specification and the drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.
Claims (8)
1. A text generation method based on a question-answer scenario, characterized in that the method comprises the steps of:
collecting answer data generated by a business object in a question-answer scenario;
extracting the answer data and acquiring target guidance data;
generating target text data from the target guidance data through a text-generation adversarial network model obtained by pre-training;
wherein the target guidance data is sentence-head data of the target text data;
the step of generating target text data from the target guidance data through the pre-trained text-generation adversarial network model comprises:
calculating from the target guidance data using the generator of the text-generation adversarial network model, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the target guidance data;
calculating from the first sample word using the generator, obtaining the second sample word with the highest probability in the vocabulary, and appending the second sample word after the first sample word;
cyclically executing the above steps until target text data of a preset length is obtained;
wherein the generator employs a sequence-output long short-term memory network (LSTM) for generating a text sequence from a given initial state, the discriminator employs a binary-classification long short-term memory network, and the LSTM-based discriminator-generator structure is used to generate text sequences and judge the authenticity of text;
before the step of generating target text data from the target guidance data through the pre-trained text-generation adversarial network model, the method comprises:
obtaining a sample guidance set and a sample text set, wherein the sample guidance set comprises at least one piece of sample guidance data, the sample text set comprises at least one piece of sample text data, and the sample guidance data is sentence-head data of the sample text data;
training an initial adversarial network model according to the sample guidance set and the sample text set, and obtaining the text-generation adversarial network model;
wherein the generation of a text sequence is treated as a sequential decision process using the policy gradient method from reinforcement learning, in which the discriminator's judgment result serves as the reward, the partial text generated by the generator serves as the state, the generator serves as the agent, predicting the next word is the action, and the generator is the policy to be updated.
2. The text generation method of claim 1, wherein the initial adversarial network model includes a generator and a discriminator, and the step of training the initial adversarial network model based on the sample guidance set and the sample text set and obtaining the text-generation adversarial network model includes:
generating, by the generator and from at least one piece of sample guidance data in the sample guidance set, at least one piece of sample text data;
simulating the at least one piece of sample text data using Monte Carlo simulation and obtaining a plurality of pieces of sample simulated text data;
identifying the plurality of pieces of sample simulated text data by the discriminator according to target text data in the sample text set, and updating the parameter values of the generator according to the identification result;
updating the discriminator based on the updated generator and according to a loss function;
cyclically updating the generator and the discriminator until the initial adversarial network model meets a preset convergence condition, and obtaining the text-generation adversarial network model formed by the updated generator.
3. The text generation method of claim 2, wherein the step of generating at least one piece of sample text data by the generator from at least one piece of sample guidance data in the sample guidance set comprises:
calculating from the sample guidance data through the generator, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the sample guidance data;
calculating from the first sample word through the generator, obtaining the second sample word with the highest probability in the vocabulary, and appending the second sample word after the first sample word;
cyclically executing the above steps until sample text data of a preset length is obtained.
4. The text generation method of claim 2, wherein the step of simulating the at least one piece of sample text data using Monte Carlo simulation and obtaining a plurality of pieces of sample simulated text data comprises:
simulating the words in each piece of sample text data one by one using Monte Carlo simulation, and generating a plurality of pieces of sample simulated text data corresponding to the sample text data.
5. The text generation method of claim 2, wherein the step of identifying the plurality of pieces of sample simulated text data by the discriminator according to the target text data in the sample text set, and updating the parameter values of the generator according to the identification result, comprises:
identifying the plurality of pieces of sample simulated text data by the discriminator according to the target text data in the sample text set, and obtaining a state-value function from the identification result;
calculating an objective function from the state-value function, and updating the parameter values of the generator according to the objective function.
6. A text generation apparatus based on a question-answer scenario, characterized by comprising:
a collection unit, configured to collect answer data generated by a business object in a question-answer scenario;
an acquisition unit, configured to extract the answer data and acquire target guidance data;
a generation unit, configured to generate target text data from the target guidance data through a text-generation adversarial network model obtained by pre-training;
wherein the target guidance data is sentence-head data of the target text data;
generating target text data from the target guidance data through the pre-trained text-generation adversarial network model comprises:
calculating from the target guidance data using the generator of the text-generation adversarial network model, obtaining the first sample word with the highest probability in the vocabulary, and appending the first sample word to the end of the target guidance data;
calculating from the first sample word using the generator, obtaining the second sample word with the highest probability in the vocabulary, and appending the second sample word after the first sample word;
cyclically executing the above steps until target text data of a preset length is obtained;
wherein the generator employs a sequence-output long short-term memory network (LSTM) for generating a text sequence from a given initial state, the discriminator employs a binary-classification long short-term memory network, and the LSTM-based discriminator-generator structure is used to generate text sequences and judge the authenticity of text;
the text generation apparatus is further configured to:
obtain a sample guidance set and a sample text set, wherein the sample guidance set comprises at least one piece of sample guidance data, the sample text set comprises at least one piece of sample text data, and the sample guidance data is sentence-head data of the sample text data;
train an initial adversarial network model according to the sample guidance set and the sample text set, and obtain the text-generation adversarial network model;
wherein the generation of a text sequence is treated as a sequential decision process using the policy gradient method from reinforcement learning, in which the discriminator's judgment result serves as the reward, the partial text generated by the generator serves as the state, the generator serves as the agent, predicting the next word is the action, and the generator is the policy to be updated.
7. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized by: the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 5.
8. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program implementing the steps of the method of any one of claims 1 to 5 when executed by a processor.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010136551.6A CN111428448B (en) | 2020-03-02 | 2020-03-02 | Text generation method, device, computer equipment and readable storage medium |
PCT/CN2020/118456 WO2021174827A1 (en) | 2020-03-02 | 2020-09-28 | Text generation method and appartus, computer device and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010136551.6A CN111428448B (en) | 2020-03-02 | 2020-03-02 | Text generation method, device, computer equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111428448A CN111428448A (en) | 2020-07-17 |
CN111428448B true CN111428448B (en) | 2024-05-07 |
Family
ID=71553527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010136551.6A Active CN111428448B (en) | 2020-03-02 | 2020-03-02 | Text generation method, device, computer equipment and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111428448B (en) |
WO (1) | WO2021174827A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111428448B (en) * | 2020-03-02 | 2024-05-07 | 平安科技(深圳)有限公司 | Text generation method, device, computer equipment and readable storage medium |
CN112036544A (en) * | 2020-07-31 | 2020-12-04 | 五八有限公司 | Image generation method and device |
CN112861179B (en) * | 2021-02-22 | 2023-04-07 | 中山大学 | Method for desensitizing personal digital spatial data based on text-generated countermeasure network |
CN115481630A (en) * | 2022-09-27 | 2022-12-16 | 深圳先进技术研究院 | Electronic insurance letter automatic generation method and device based on sequence countermeasure and prior reasoning |
CN116010609B (en) * | 2023-03-23 | 2023-06-09 | 山东中翰软件有限公司 | Material data classifying method and device, electronic equipment and storage medium |
CN117933268A (en) * | 2024-03-21 | 2024-04-26 | 山东大学 | End-to-end unsupervised resistance text rewriting method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106663125A (en) * | 2014-08-21 | 2017-05-10 | 国立研究开发法人情报通信研究机构 | Question sentence generation device and computer program |
CN109522411A (en) * | 2018-11-12 | 2019-03-26 | 南京德磐信息科技有限公司 | A kind of writing householder method neural network based |
CN110162595A (en) * | 2019-03-29 | 2019-08-23 | 深圳市腾讯计算机系统有限公司 | For generating the method, apparatus, equipment and readable storage medium storing program for executing of text snippet |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105654945B (en) * | 2015-10-29 | 2020-03-06 | 乐融致新电子科技(天津)有限公司 | Language model training method, device and equipment |
CN110019732B (en) * | 2017-12-27 | 2021-10-15 | 华为技术有限公司 | Intelligent question answering method and related device |
CN109062937B (en) * | 2018-06-15 | 2019-11-26 | 北京百度网讯科技有限公司 | The method of training description text generation model, the method and device for generating description text |
CN110619118B (en) * | 2019-03-28 | 2022-10-28 | 中国人民解放军战略支援部队信息工程大学 | Automatic text generation method |
CN110196899B (en) * | 2019-06-11 | 2020-07-21 | 中央民族大学 | Low-resource language question-answer corpus generating method |
CN111428448B (en) * | 2020-03-02 | 2024-05-07 | 平安科技(深圳)有限公司 | Text generation method, device, computer equipment and readable storage medium |
- 2020-03-02: CN application CN202010136551.6A filed (granted as CN111428448B, Active)
- 2020-09-28: PCT application PCT/CN2020/118456 filed (published as WO2021174827A1)
Also Published As
Publication number | Publication date |
---|---|
WO2021174827A1 (en) | 2021-09-10 |
CN111428448A (en) | 2020-07-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |