CN111581916B - Text generation method and device, electronic equipment and computer readable medium - Google Patents

Text generation method and device, electronic equipment and computer readable medium

Info

Publication number
CN111581916B
CN111581916B (application CN202010413938.1A)
Authority
CN
China
Prior art keywords
encoder
text
term
mixed
variational
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010413938.1A
Other languages
Chinese (zh)
Other versions
CN111581916A (en)
Inventor
施文娴 (Shi Wenxian)
周浩 (Zhou Hao)
李磊 (Li Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010413938.1A priority Critical patent/CN111581916B/en
Publication of CN111581916A publication Critical patent/CN111581916A/en
Application granted granted Critical
Publication of CN111581916B publication Critical patent/CN111581916B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/12 Use of codes for handling textual entities
    • G06F 40/126 Character encoding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G06F 16/355 Class or cluster creation or modification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/216 Parsing using statistical methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)

Abstract

The embodiments of the present disclosure disclose a text generation method and apparatus, an electronic device, and a computer readable medium. One embodiment of the method comprises: acquiring a source text; and inputting the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used to train the variational autoencoder comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution. This embodiment reduces mode collapse, enhances interpretability, and noticeably improves the quality of the generated target text.

Description

Text generation method and device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a text generation method and apparatus, an electronic device, and a computer-readable medium.
Background
A variational autoencoder (VAE) is widely used in fields such as text generation and image generation owing to its properties. However, mode collapse frequently occurs in variational training. For example, in a language generation task, multiple Gaussian priors tend to collapse during training and degenerate into a single Gaussian. As shown in Fig. 1, a reminder request about a meeting and a weather query are mapped to the same mode. The same mode-collapse problem is also observed in image modeling tasks.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a text generation method, apparatus, electronic device and computer readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a text generation method, including: acquiring a source text; and inputting the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
In a second aspect, some embodiments of the present disclosure provide a text generation apparatus, including: an acquisition unit configured to acquire a source text; and a generation unit configured to input the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement any of the methods described above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, where the program, when executed by a processor, implements any of the methods described above.
One of the above-described embodiments of the present disclosure has the following advantageous effects: the source text is input into a variational autoencoder to obtain the target text. The loss function used to train the variational autoencoder comprises a dispersion term, which mitigates mode collapse and induces a structured hidden space. The structured hidden space combines the advantages of discrete and continuous hidden spaces, thereby preserving model capacity while enhancing interpretability. As an example, in dialogue generation, the hidden variables can model dialogue acts or intents. In addition, the method noticeably improves the quality of the generated target text.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a visualization of different types of sentences being mapped to the same mode during variational training;
FIG. 2 is a schematic diagram of one application scenario of a text generation method according to some embodiments of the present disclosure;
FIG. 3 is a flow diagram of some embodiments of a text generation method according to the present disclosure;
FIG. 4 is a flow diagram of further embodiments of a text generation method according to the present disclosure;
FIG. 5 is a schematic structural diagram of some embodiments of a text generation apparatus according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 2 is a schematic diagram 200 of one application scenario of a text generation method according to some embodiments of the present disclosure.
The application scenario is automatic dialogue generation, which can be applied to products such as customer service robots or smart speakers. Taking a smart speaker as an example, the execution subject of the text generation method (the smart speaker) may obtain the source text 201 by recognizing the user's speech. On this basis, the source text may be input into the variational autoencoder 202 to obtain the target text 203. The variational autoencoder 202 uses a mixed exponential-family distribution as its prior, and the loss function used during training includes a dispersion term 204, where the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
With continued reference to fig. 3, a flow 300 of some embodiments of a text generation method according to the present disclosure is shown. The text generation method comprises the following steps:
step 301, a source text is obtained.
In some embodiments, the execution subject of the text generation method may obtain the source text in various ways, either locally or from other electronic devices to which it is communicatively connected. In practice, the source text may be any text, and may be determined in different ways depending on the scenario, for example by designation or by screening according to certain conditions. As an example, in an application scenario where a smart speaker converses with a user, the execution subject (the smart speaker) may obtain the text corresponding to the user's speech through speech recognition. In this case, the source text is the text corresponding to the user's speech.
Step 302, inputting the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
In some embodiments, the execution subject may input the source text into the variational autoencoder to obtain the target text. The variational autoencoder (VAE) is an unsupervised learning algorithm that produces a corresponding distribution for the input data and generates the output data after sampling from that distribution.
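As an illustration of this encode-sample-decode flow, the following minimal sketch in Python/PyTorch shows a plain VAE that maps an input to the parameters of a latent distribution, draws a reparameterized sample, and decodes it. All module names and dimensions are illustrative assumptions, not the implementation claimed here.

import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    # Minimal VAE: encoder -> (mu, log_var), reparameterized sample z, decoder -> reconstruction.
    def __init__(self, x_dim=64, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.mu_head = nn.Linear(h_dim, z_dim)
        self.logvar_head = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)   # sample from q(z|x)
        return self.dec(z), mu, log_var

vae = TinyVAE()
x = torch.randn(8, 64)            # a toy batch standing in for encoded source text
recon, mu, log_var = vae(x)
print(recon.shape)                # torch.Size([8, 64])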
Alternatively, the variational autoencoder may be a discrete Gaussian-mixture variational autoencoder or a discrete multinomial-mixture variational autoencoder.
In some embodiments, the variational autoencoder may be a mixed exponential-family variational autoencoder, that is, a variational autoencoder that uses a mixed exponential-family distribution as its prior. The mixed exponential-family distribution is obtained by mixing a plurality of exponential-family distributions, and the distributions participating in the mixture are also called mixture components. The Gaussian-mixture variational autoencoder (GMVAE) is a typical mixed exponential-family variational autoencoder. Besides the Gaussian distribution, the exponential-family distribution may also be a multinomial (categorical) distribution, a von Mises-Fisher distribution, or the like.
Taking GMVAE as an example, the reason why mixed exponential-family variational autoencoders suffer from mode collapse, and the principle and effect of introducing a dispersion term to resolve the mode collapse, are described below.
First, we analyze mode collapse theoretically and find that its cause lies in maximizing the evidence lower bound (ELBO) during training. In particular for GMVAE, maximizing the ELBO implicitly pulls the means and variances of the Gaussian mixture components together.
Based on the above analysis, we propose to introduce an additional dispersion term to resolve the mode-collapse problem. The prior of GMVAE is a Gaussian mixture distribution in which a discrete hidden variable c indexes the mixture components and a continuous hidden variable z depends on c. In this model, the marginal likelihood of a sentence x is:
p_θ(x) = Σ_c ∫ p_θ(x | z) p_η(z, c) dz    (1)
where θ denotes the parameters of the generation network, which generates x from z; p_η(z, c) is the mixture prior distribution with parameters η, and p_η(z, c) = p(c) p_η(z | c), where p(c) may be assumed to be a uniform distribution and p_η(z | c) is the Gaussian distribution of the c-th component.
In the testing phase, a mixture component c is first selected according to the prior distribution p(c). Then z is sampled from p_η(z | c) of the selected component. The generation network, with z as input, generates the sentence x through the decoder p_θ(x | z).
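The test-time generative process just described (draw c from the uniform prior p(c), draw z from the Gaussian of the selected component, then decode) can be sketched as follows; the decoder is left abstract and all names and shapes are illustrative assumptions.

import torch

def sample_from_mixture_prior(prior_mu, prior_logvar, decoder):
    # prior_mu, prior_logvar: [K, z_dim] mean and log-variance of the K Gaussian components.
    K = prior_mu.size(0)
    c = torch.randint(K, (1,)).item()                 # c ~ p(c), uniform over the K components
    mu_c, std_c = prior_mu[c], torch.exp(0.5 * prior_logvar[c])
    z = mu_c + std_c * torch.randn_like(mu_c)         # z ~ p_eta(z | c)
    return decoder(z), c                              # decoder plays the role of p_theta(x | z)

prior_mu, prior_logvar = torch.zeros(4, 16), torch.zeros(4, 16)
out, c = sample_from_mixture_prior(prior_mu, prior_logvar, decoder=lambda z: z.sum())
print(c, out)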
In the training phase, optimizing Equation (1) directly is difficult. We therefore use a variational posterior distribution q_φ(z, c | x) with parameters φ to approximate the true posterior p(z, c | x). Under the mean-field approximation, q_φ(z, c | x) = q_φ(z | x) q_φ(c | x), where the posterior q_φ(z | x) can be assumed to be a multivariate Gaussian whose mean μ_φ(x) and variance σ²_φ(x) are obtained from a neural network (the recognition network), and q_φ(c | x) may be implemented by a neural network classifier.
In some alternative implementations of some embodiments, the dispersion term may be determined as follows:
In fact, we do not optimize the marginal likelihood (Equation (1)) but rather maximize the ELBO. The ELBO can be decomposed into a reconstruction term and regularization terms for c and z:

ELBO = E_{q_φ(z|x)}[log p_θ(x | z)] − KL(q_φ(c | x) ‖ p(c)) − E_{q_φ(c|x)}[KL(q_φ(z | x) ‖ p_η(z | c))]
     = Reconstruction − R_c − R_z    (2)
where all parameters, including θ, φ and η, can be trained jointly using the reparameterization trick.
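A sketch of this decomposition for a GMVAE with diagonal-Gaussian components, using the closed-form KL divergence between diagonal Gaussians; tensor names and shapes are assumptions made for illustration.

import math
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over the last dimension.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum(-1)

def elbo_terms(log_px_z, q_c, mu_q, logvar_q, prior_mu, prior_logvar):
    # log_px_z: [B] single-sample estimate of E_q(z|x)[log p(x|z)] (reconstruction term).
    # q_c: [B, K] probabilities q_phi(c|x); mu_q, logvar_q: [B, D]; prior_mu, prior_logvar: [K, D].
    K = q_c.size(1)
    r_c = (q_c * q_c.clamp_min(1e-8).log()).sum(-1) + math.log(K)   # KL(q(c|x) || uniform p(c))
    kl_zc = gaussian_kl(mu_q.unsqueeze(1), logvar_q.unsqueeze(1),
                        prior_mu.unsqueeze(0), prior_logvar.unsqueeze(0))   # [B, K]
    r_z = (q_c * kl_zc).sum(-1)                                     # E_q(c|x)[ KL(q(z|x) || p(z|c)) ]
    elbo = log_px_z - r_c - r_z
    return elbo, r_c, r_z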
We further study the ELBO objective and find that the regularization terms of the ELBO, namely R_c and R_z, are the cause of mode collapse.
On this basis, the analysis is extended to general mixed exponential-family variational autoencoders. The marginal likelihood of a general mixed exponential-family variational autoencoder is similar to Equation (1), except that p_η(z | c) may be another exponential-family distribution, such as a categorical distribution or a von Mises-Fisher distribution, and dz is replaced by ν(dz), where ν(dz) is a base measure, for example the Lebesgue measure or a counting measure.
According to the definition of the exponential family, the probability density function of mixture component c is expressed in terms of natural parameters:

p_η(z | c) = exp(⟨η_c, φ(z)⟩ − A(η_c))

where φ(z) is a vector of functions called the sufficient statistics, η_c is the corresponding natural-parameter vector, ⟨η_c, φ(z)⟩ is the inner product of η_c and φ(z), and A(η_c) is the log-partition function that normalizes the probability density function. For example, for a Gaussian distribution, φ(z) = [z, z²], η_c = [μ_c/σ_c², −1/(2σ_c²)], and A(η_c) = μ_c²/(2σ_c²) + log σ_c (up to an additive constant).
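For the Gaussian case just given, the natural parameters η_c and the log-partition function A(η_c) can be computed numerically as in the following small sketch; the additive constant of the base measure is omitted.

import math

def gaussian_natural_params(mu, var):
    # eta = [mu / var, -1 / (2 var)] for a univariate Gaussian with sufficient statistics [z, z^2].
    return mu / var, -1.0 / (2.0 * var)

def gaussian_log_partition(eta1, eta2):
    # A(eta) = -eta1^2 / (4 eta2) - 0.5 * log(-2 eta2), equal to mu^2/(2 var) + log(sigma).
    return -eta1 ** 2 / (4.0 * eta2) - 0.5 * math.log(-2.0 * eta2)

eta1, eta2 = gaussian_natural_params(mu=1.0, var=4.0)
print(gaussian_log_partition(eta1, eta2))   # 0.818... = 1/8 + log 2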
On this basis, R_z in the ELBO can be rewritten as the sum of an average term (Average R_z) and a dispersion term (L_d):

R_z = KL(q_φ(z | x) ‖ p_η̄(z)) + L_d,   with   L_d = Σ_c q_φ(c | x) A(η_c) − A(η̄)

where p_η̄(z) is a distribution in the same exponential family as the prior but parameterized by the "averaged" parameter η̄ = Σ_c q_φ(c | x) η_c. For an exponential family, the domain of the parameter η is a convex set, so η̄ is a feasible parameter.
By the convexity of the log-partition function A, the dispersion term is always non-negative. For a minimal representation of the exponential family, A is strictly convex, so L_d approaches zero only when the parameters η_c of the different mixture components collapse to a single value or when q_φ(c | x) places all of its probability mass on one component. Note that R_c prevents q_φ(c | x) from becoming one-hot, because p(c) is assumed to be uniform. Maximizing the ELBO therefore drives the parameters of all the mixture components to become as indistinguishable as possible, which is exactly mode collapse.
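A sketch of the dispersion term L_d = Σ_c q_φ(c | x) A(η_c) − A(η̄) for univariate-Gaussian mixture components, mirroring the log-partition function above; the names are illustrative.

import torch

def log_partition(eta1, eta2):
    # Gaussian log-partition A(eta), evaluated elementwise on tensors.
    return -eta1 ** 2 / (4.0 * eta2) - 0.5 * torch.log(-2.0 * eta2)

def dispersion_term(q_c, prior_mu, prior_var):
    # q_c: [B, K] posterior over components; prior_mu, prior_var: [K] per-component parameters.
    eta1, eta2 = prior_mu / prior_var, -0.5 / prior_var          # natural parameters eta_c
    a_c = log_partition(eta1, eta2)                              # A(eta_c) for each component
    eta1_bar, eta2_bar = q_c @ eta1, q_c @ eta2                  # "averaged" parameters eta_bar
    return q_c @ a_c - log_partition(eta1_bar, eta2_bar)         # >= 0 by convexity of A

q_c = torch.tensor([[0.5, 0.5]])
print(dispersion_term(q_c, torch.tensor([-1.0, 1.0]), torch.tensor([1.0, 1.0])))   # tensor([0.5000])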
Based on the above analysis, we propose to introduce a dispersion term into the loss function, thereby resolving the mode-collapse problem.
In some optional implementations of some embodiments, the loss function further includes a dispersion control parameter β. By adjusting β, a trade-off is made between dispersion and concentration of the mixture components.
In these implementations, for x sampled from the data set D, the loss function of the variational autoencoder augments the negative ELBO with the dispersion term weighted by β:

Loss(x) = −ELBO(x) − β · L_d(x)

so that training maximizes ELBO(x) + β · L_d(x). For the Gaussian mixture prior, the natural parameters of each component are η_c = [μ_c/σ_c², −1/(2σ_c²)], as given above.
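Under the reading given above, in which the dispersion term enters the training objective with weight β, the per-example loss can be assembled as in the following sketch (an illustrative assumption about the exact form, reusing the helper functions from the previous sketches):

def dem_vae_loss(log_px_z, r_c, r_z, l_d, beta=1.0):
    # ELBO = log_px_z - r_c - r_z; the dispersion term is added back with weight beta,
    # so minimizing this loss trades off concentration (small beta) against dispersion (large beta).
    elbo = log_px_z - r_c - r_z
    return -(elbo + beta * l_d)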
In some optional implementations of some embodiments, the variational autoencoder includes an encoder and a decoder. In practice, the encoder may employ various recurrent neural networks (RNNs), such as a GRU, according to actual needs. On this basis, the encoder encodes the source text to obtain an encoded hidden state. The parameters of the posterior distributions q_φ(z | x) and q_φ(c | x) can be derived from this hidden state. As an example, the mean μ_φ and variance σ²_φ of the distribution q_φ(z | x) (assumed to be a multivariate diagonal Gaussian) can be obtained by two affine transformations of the hidden state, and the distribution q_φ(c | x) can be modeled as a nonlinear classifier that takes the hidden state as input. On this basis, z can be sampled from the mixture prior (at test time) or from the posterior (at training time) using the reparameterization trick, and z is then input into the decoder to obtain the target text, where the decoder may be a recurrent neural language model.
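A compact sketch of the encoder/decoder pipeline described in this paragraph: a GRU encoder, two affine heads for μ_φ and log σ²_φ, a nonlinear classifier for q_φ(c | x), reparameterized sampling of z, and a recurrent decoder conditioned on z. Vocabulary size, dimensions and module names are assumptions for illustration only.

import torch
import torch.nn as nn

class MixtureVAETextModel(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128, z_dim=32, n_components=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.mu_head = nn.Linear(hid_dim, z_dim)        # affine transform -> mu_phi(x)
        self.logvar_head = nn.Linear(hid_dim, z_dim)    # affine transform -> log variance
        self.c_classifier = nn.Sequential(              # nonlinear classifier -> q_phi(c|x)
            nn.Linear(hid_dim, hid_dim), nn.Tanh(), nn.Linear(hid_dim, n_components))
        self.z_to_hid = nn.Linear(z_dim, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)   # recurrent language model
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        _, h = self.encoder(self.embed(src_tokens))     # encode the source text; h: [1, B, hid_dim]
        h = h.squeeze(0)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        q_c = torch.softmax(self.c_classifier(h), dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)     # reparameterized posterior sample
        dec_out, _ = self.decoder(self.embed(tgt_tokens), self.z_to_hid(z).unsqueeze(0))
        return self.out(dec_out), mu, logvar, q_c       # logits over the target vocabulary

model = MixtureVAETextModel()
src = torch.randint(0, 1000, (4, 12))
tgt = torch.randint(0, 1000, (4, 15))
logits, mu, logvar, q_c = model(src, tgt)
print(logits.shape, q_c.shape)   # torch.Size([4, 15, 1000]) torch.Size([4, 10])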
Some embodiments of the present disclosure provide a text generation method that obtains a target text by inputting a source text into a variational autoencoder. The loss function used to train the variational autoencoder comprises a dispersion term, which mitigates mode collapse and induces a structured hidden space. The structured hidden space combines the advantages of discrete and continuous hidden spaces, thereby preserving model capacity while enhancing interpretability. As an example, in dialogue generation, the hidden variables can model dialogue acts or intents. In addition, the method noticeably improves the quality of the generated target text.
With further reference to FIG. 4, a flow diagram of further embodiments of a text generation method is shown. The process 400 of the text generation method includes the following steps:
step 401, a source text is obtained.
In some embodiments, the specific implementation of step 401 and the technical effect thereof may refer to step 301 in the embodiment corresponding to fig. 3, which is not described herein again.
Step 402, inputting the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term and a mutual information term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
In some embodiments, on the basis of the embodiments corresponding to Fig. 3, the loss function may further include a mutual information term, which characterizes the degree of association between the hidden variable and the output text of the variational autoencoder. As an example, the mutual information term I(x; c) can be determined by the following formula:

I(x; c) = E_{x~D}[ KL(q_φ(c | x) ‖ q_φ(c)) ]

where q_φ(c) can be estimated within a mini-batch as the average of q_φ(c | x) over the examples in the batch.
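A sketch of estimating the mutual information term within a mini-batch under the reading above, i.e., the average KL divergence between q_φ(c | x) and the batch-aggregated q_φ(c); the function name is illustrative.

import torch

def mutual_information_term(q_c):
    # q_c: [B, K] per-example posteriors q_phi(c|x).
    q_c = q_c.clamp_min(1e-8)
    q_marginal = q_c.mean(dim=0, keepdim=True)                     # q_phi(c) estimated on the mini-batch
    return (q_c * (q_c.log() - q_marginal.log())).sum(-1).mean()   # E_x[ KL(q(c|x) || q(c)) ]

print(mutual_information_term(torch.tensor([[0.9, 0.1], [0.1, 0.9]])))   # positive when posteriors differ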
As can be seen from fig. 4, compared with the embodiments corresponding to fig. 3, a mutual information term is added to the loss function, which further improves interpretability and further alleviates the mode-collapse problem.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a text generation apparatus, which correspond to those of the method embodiments shown in fig. 2, and which may be applied in various electronic devices in particular.
As shown in fig. 5, the text generation apparatus 500 of some embodiments includes: an acquisition unit 501 and a generation unit 502. The acquisition unit 501 is configured to acquire the source text. The generation unit 502 is configured to input the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
In some embodiments, for the specific implementations of the acquisition unit 501 and the generation unit 502 in the text generation apparatus 500 and the technical effects they bring, reference may be made to the embodiments corresponding to fig. 3, which are not described again here.
In an alternative implementation of some embodiments, the loss function further includes a dispersion control parameter for trading off dispersion against concentration of the mixture components.
In an alternative implementation of some embodiments, the dispersion term is obtained by: decomposing the evidence lower bound of the variational autoencoder into a regularization term and a reconstruction term of the hidden variable; and rewriting the regularization term based on the probability density function of the mixture components to obtain a mean term and the dispersion term.
In an alternative implementation of some embodiments, the variational autoencoder includes an encoder and a decoder, and the generation unit 502 may be further configured to: input the source text into the encoder to obtain a hidden state of the encoded source text; obtain the parameters of the posterior distributions based on the hidden state; sample the mixture components according to the parameters of the posterior distributions to obtain a hidden variable; and input the hidden variable into the decoder to obtain the target text.
In an alternative implementation of some embodiments, the variational autoencoder is a discrete Gaussian-mixture variational autoencoder or a discrete multinomial-mixture variational autoencoder.
In an alternative implementation of some embodiments, the loss function further includes a mutual information term, which characterizes the degree of association between the hidden variable and the output text of the variational autoencoder.
In some embodiments, the loss function used to train the variational autoencoder includes a dispersion term, thereby mitigating mode collapse and inducing a structured hidden space. The structured hidden space combines the advantages of discrete and continuous hidden spaces, thereby preserving model capacity while enhancing interpretability. As an example, in dialogue generation, the hidden variables can model dialogue acts or intents. In addition, the method noticeably improves the quality of the generated target text.
Referring now to fig. 6, shown is a schematic diagram of an electronic device 600 suitable for use in implementing some embodiments of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a source text; and input the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit and a generation unit. Where the names of these units do not in some cases constitute a limitation on the unit itself, for example, an acquisition unit may also be described as a "unit to acquire source text".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a text generation method including: acquiring a source text; and inputting the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
According to one or more embodiments of the present disclosure, the loss function further includes a dispersion control parameter for trading off dispersion against concentration of the mixture components.
According to one or more embodiments of the present disclosure, the dispersion term is obtained by: decomposing the evidence lower bound of the variational autoencoder into a regularization term and a reconstruction term of the hidden variable; and rewriting the regularization term based on the probability density function of the mixture components to obtain a mean term and the dispersion term.
According to one or more embodiments of the present disclosure, the variational autoencoder includes an encoder and a decoder, and inputting the source text into the variational autoencoder to obtain the target text comprises: inputting the source text into the encoder to obtain a hidden state of the encoded source text; obtaining the parameters of the posterior distributions based on the hidden state; sampling the mixture components according to the parameters of the posterior distributions to obtain a hidden variable; and inputting the hidden variable into the decoder to obtain the target text.
According to one or more embodiments of the present disclosure, the variational autoencoder is a discrete Gaussian-mixture variational autoencoder or a discrete multinomial-mixture variational autoencoder.
According to one or more embodiments of the present disclosure, the loss function further includes a mutual information term, which characterizes the degree of association between the hidden variable and the output text of the variational autoencoder.
According to one or more embodiments of the present disclosure, there is provided a text generation apparatus including: an acquisition unit configured to acquire a source text; and a generation unit configured to input the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, the loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust the dispersion tendency of the mixture components, and the mixture components are the plurality of exponential-family distributions that constitute the mixed distribution.
According to one or more embodiments of the present disclosure, the loss function further includes a dispersion control parameter for trading off dispersion against concentration of the mixture components.
According to one or more embodiments of the present disclosure, the dispersion term is obtained by: decomposing the evidence lower bound of the variational autoencoder into a regularization term and a reconstruction term of the hidden variable; and rewriting the regularization term based on the probability density function of the mixture components to obtain a mean term and the dispersion term.
According to one or more embodiments of the present disclosure, the variational autoencoder includes an encoder and a decoder, and the generation unit may be further configured to: input the source text into the encoder to obtain a hidden state of the encoded source text; obtain the parameters of the posterior distributions based on the hidden state; sample the mixture components according to the parameters of the posterior distributions to obtain a hidden variable; and input the hidden variable into the decoder to obtain the target text.
According to one or more embodiments of the present disclosure, the variational autoencoder is a discrete Gaussian-mixture variational autoencoder or a discrete multinomial-mixture variational autoencoder.
According to one or more embodiments of the present disclosure, the loss function further includes a mutual information term, which characterizes the degree of association between the hidden variable and the output text of the variational autoencoder.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as any one of the above.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept defined above, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (7)

1. A text generation method, comprising:
acquiring a source text;
inputting the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, a loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust a dispersion tendency of mixture components, the mixture components are a plurality of exponential-family distributions that constitute the mixed distribution, and the dispersion term is obtained by:
decomposing an evidence lower bound of the variational autoencoder into a regularization term and a reconstruction term of a hidden variable;
rewriting the regularization term based on a probability density function of the mixture components to obtain a mean term and the dispersion term;
wherein the variational autoencoder comprises an encoder and a decoder; and
the inputting the source text into the variational autoencoder to obtain the target text comprises:
inputting the source text into the encoder to obtain a hidden state of the encoded source text;
obtaining parameters of posterior distributions based on the hidden state;
sampling the mixture components according to the parameters of the posterior distributions to obtain a hidden variable;
and inputting the hidden variable into the decoder to obtain the target text.
2. The method of claim 1, wherein the loss function further comprises a dispersion control parameter for trading off dispersion against concentration of the mixture components.
3. The method of claim 1, wherein the variational autoencoder is a discrete Gaussian-mixture variational autoencoder or a discrete multinomial-mixture variational autoencoder.
4. The method according to any one of claims 1-3, wherein the loss function further comprises a mutual information term characterizing the degree of association between the hidden variable and the output text of the variational autoencoder.
5. A text generation apparatus comprising:
an acquisition unit configured to acquire a source text;
a generation unit configured to input the source text into a variational autoencoder to obtain a target text, wherein the variational autoencoder uses a mixed exponential-family distribution as its prior, a loss function used by the variational autoencoder during training comprises a dispersion term, the dispersion term is used to adjust a dispersion tendency of mixture components, the mixture components are a plurality of exponential-family distributions that constitute the mixed distribution, and the dispersion term is obtained by:
decomposing an evidence lower bound of the variational autoencoder into a regularization term and a reconstruction term of a hidden variable;
rewriting the regularization term based on a probability density function of the mixture components to obtain a mean term and the dispersion term;
wherein the variational autoencoder comprises an encoder and a decoder; and
the generation unit is further configured to:
input the source text into the encoder to obtain a hidden state of the encoded source text;
obtain parameters of posterior distributions based on the hidden state;
sample the mixture components according to the parameters of the posterior distributions to obtain a hidden variable;
and input the hidden variable into the decoder to obtain the target text.
6. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
7. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-4.
CN202010413938.1A 2020-05-15 2020-05-15 Text generation method and device, electronic equipment and computer readable medium Active CN111581916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010413938.1A CN111581916B (en) 2020-05-15 2020-05-15 Text generation method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010413938.1A CN111581916B (en) 2020-05-15 2020-05-15 Text generation method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111581916A CN111581916A (en) 2020-08-25
CN111581916B true CN111581916B (en) 2022-03-01

Family

ID=72123125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010413938.1A Active CN111581916B (en) 2020-05-15 2020-05-15 Text generation method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111581916B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334497A (en) * 2018-02-06 2018-07-27 北京航空航天大学 The method and apparatus for automatically generating text
CN108363685A (en) * 2017-12-25 2018-08-03 北京牡丹电子集团有限责任公司数字电视技术中心 Based on recurrence variation own coding model from media data document representation method
CN110134960A (en) * 2019-05-15 2019-08-16 北京奇艺世纪科技有限公司 A kind of generation method and relevant device of text
CN110572696A (en) * 2019-08-12 2019-12-13 浙江大学 variational self-encoder and video generation method combining generation countermeasure network
US10643131B1 (en) * 2016-05-20 2020-05-05 Deepmind Technologies Limited Training variational autoencoders to generate disentangled latent factors

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11727265B2 (en) * 2019-06-27 2023-08-15 Intel Corporation Methods and apparatus to provide machine programmed creative support to a user

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10643131B1 (en) * 2016-05-20 2020-05-05 Deepmind Technologies Limited Training variational autoencoders to generate disentangled latent factors
CN108363685A (en) * 2017-12-25 2018-08-03 北京牡丹电子集团有限责任公司数字电视技术中心 Based on recurrence variation own coding model from media data document representation method
CN108334497A (en) * 2018-02-06 2018-07-27 北京航空航天大学 The method and apparatus for automatically generating text
CN110134960A (en) * 2019-05-15 2019-08-16 北京奇艺世纪科技有限公司 A kind of generation method and relevant device of text
CN110572696A (en) * 2019-08-12 2019-12-13 浙江大学 variational self-encoder and video generation method combining generation countermeasure network

Also Published As

Publication number Publication date
CN111581916A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
US11640528B2 (en) Method, electronic device and computer readable medium for information processing for accelerating neural network training
US20200104640A1 (en) Committed information rate variational autoencoders
US10705833B2 (en) Transforming data manipulation code into data workflow
CN111198945A (en) Data processing method, device, medium and electronic equipment
US11847546B2 (en) Automatic data preprocessing
CN108491812B (en) Method and device for generating face recognition model
CN110781922A (en) Sample data generation method and device for machine learning model and electronic equipment
US20210133539A1 (en) Simulator-assisted training for interpretable generative models
CN112434620A (en) Scene character recognition method, device, equipment and computer readable medium
CN110751190A (en) Financial risk model generation method and device and electronic equipment
CN113409307A (en) Image denoising method, device and medium based on heterogeneous noise characteristics
CN111581916B (en) Text generation method and device, electronic equipment and computer readable medium
Ray et al. Minimax theory for a class of nonlinear statistical inverse problems
CN110046670B (en) Feature vector dimension reduction method and device
CN110489435B (en) Data processing method and device based on artificial intelligence and electronic equipment
CN110796170A (en) Client dynamic support model generation method and device and electronic equipment
CN116955921A (en) Time series data complement method, related equipment and storage medium
CN112102328A (en) Image segmentation processing method and system based on deep learning and electronic equipment
CN113806507B (en) Multi-label classification method, device and readable medium
CN113823312A (en) Speech enhancement model generation method and device and speech enhancement method and device
Hammad et al. Further investigation of stochastic nonlinear Hilfer-fractional integro-differential inclusions using almost sectorial operators
CN113077353B (en) Method, device, electronic equipment and medium for generating nuclear insurance conclusion
CN117495714B (en) Face image restoration method and device based on diffusion generation priori and readable medium
CN114842448B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN115223113B (en) Training sample set cleaning method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder