WO2021189890A1 - Training method and apparatus for a text generation model based on text editing technology - Google Patents
Training method and apparatus for a text generation model based on text editing technology
- Publication number
- WO2021189890A1 (PCT/CN2020/131757)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- source
- source text
- target
- generation model
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/237—Lexical tools
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Definitions
- This application belongs to the field of machine learning technology in artificial intelligence, and in particular relates to a method and device for training a text generation model based on text editing technology.
- Text generation is an important task in the field of natural language processing, and it is also a major challenge facing artificial intelligence.
- Although text generation can assist professionals in professional writing, such as legal document completion, automatic news generation, text summary generation, and text paraphrasing, the inventor realizes that the training of text generation models in the prior art relies on a large amount of data.
- High-quality text data in a specific field is relatively scarce, resulting in low accuracy of the highly semantic text generated by the text generation model.
- The embodiments of the application provide a method and device for training a text generation model based on text editing technology, which solve the problem in the prior art that a text generation model requires a large amount of high-quality text data for training before it can accurately obtain highly semantic text.
- In a first aspect, an embodiment of the present application provides a method for training a text generation model based on text editing technology, which includes: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; inputting each source text into a text generation model to be trained to obtain a second tag sequence; and adjusting the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- In a second aspect, an embodiment of the present application provides a training device for a text generation model based on text editing technology, which includes:
- a first acquiring unit, configured to acquire a preset source text set;
- an editing unit, configured to edit the source text set according to a preset text editor to obtain a target text set of the source text set;
- a first construction unit, configured to construct a vocabulary according to the source text set and the target text set;
- a processing unit, configured to process each source text according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence;
- an input unit, configured to input each source text into a text generation model to be trained to obtain a second tag sequence; and
- a first adjustment unit, configured to adjust the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- In a third aspect, an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; inputting each source text into a text generation model to be trained to obtain a second tag sequence; and adjusting the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- In a fourth aspect, the embodiments of the present application also provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the following steps: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; inputting each source text into a text generation model to be trained to obtain a second tag sequence; and adjusting the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- Through the training method of the text generation model based on text editing technology in this application, the training of the text model can still be completed when the source text set is small, the training efficiency of the text generation model is greatly improved, and the accuracy of generating highly semantic text is greatly improved.
- FIG. 1 is a schematic flowchart of a training method for a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 2 is a schematic diagram of a sub-process of a training method for a text generation model based on a text editing technology provided by an embodiment of the application;
- FIG. 3 is a schematic diagram of another sub-process of the training method of a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 4 is a schematic diagram of another sub-process of the training method of a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 5 is a schematic diagram of another sub-process of the training method of a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 6 is a schematic diagram of another sub-process of the training method of a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 7 is a schematic block diagram of a training device for a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 8 is a schematic block diagram of a subunit of a training device for a text generation model based on a text editing technology provided by an embodiment of the application;
- FIG. 9 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of the application.
- FIG. 10 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 11 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 12 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of the application;
- FIG. 13 is a schematic block diagram of a computer device provided by an embodiment of the application.
- FIG. 1 is a schematic flowchart of a training method of a text generation model based on text editing technology provided by an embodiment of the application.
- The training method of the text generation model based on text editing technology is built and run in a server. When the text generation model is trained in the server, the source text set required for training is acquired; each source text in the source text set is edited to obtain its target text; each source text is then processed with a preset vocabulary and the target text to obtain a first tag sequence, and is also input into the text generation model to be trained to obtain a second tag sequence. The configuration parameters of the model to be trained are adjusted by calculating the similarity between the first tag sequence and the second tag sequence, so that the training of the text model can still be completed even when the source text set is small, which greatly improves the training efficiency of the text generation model and the accuracy of generating highly semantic text.
- As shown in FIG. 1, the method includes steps S110 to S160.
- S110. Acquire a preset source text set.
- Specifically, the source text set is the data set used to train the text generation model; the number of texts in the source text set can be configured according to user needs and can be either a large or a small amount of text data.
- In the embodiment of the present application, a source text set with a small amount of data is used to train the text generation model.
- S120. Edit the source text set according to a preset text editor to obtain a target text set of the source text set.
- Specifically, the text editor is a text editing tool that can be used to edit each source text in the source text set to obtain a highly semantic target text; Notepad under Windows, TextEdit under Mac OS X, and vi, emacs, gedit, etc. under Linux can all be used to edit each source text in the source text set.
- the source text is "Xiao Ming was born in 1993.
- Xiao Ming was born in Shanghai
- the target text edited with a text editor is "Xiao Ming was born in Shanghai in early 1993".
- S130. Construct a vocabulary according to the source text set and the target text set.
- Specifically, the words in each target text of the target text set that do not exist in that target text's source text are used as the words in the vocabulary. In the process of constructing the vocabulary, the vocabulary usually needs to be optimized to make it as small as possible, in order to reduce the amount of computation when the vocabulary is used later; the words taken into the vocabulary are filtered according to their frequency of occurrence in the target text set. For example, words in the vocabulary that appear fewer than ten times in the target text set are eliminated to obtain an optimized vocabulary. In the embodiment of the present application, the vocabulary is stored in a blockchain after its construction is completed, which ensures the security of vocabulary storage.
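- As an illustrative aside (not part of the patent text), the vocabulary construction and frequency filtering described above can be sketched in Python as follows; the whitespace tokenizer and the ten-occurrence cutoff are assumptions standing in for whatever word segmenter and threshold an implementation actually uses.

```python
from collections import Counter

def build_vocabulary(pairs, min_freq=10):
    """Collect words that appear in a target text but not in its source text,
    then drop candidates occurring fewer than `min_freq` times in the target set."""
    counts = Counter()
    candidates = set()
    for source, target in pairs:
        target_words = target.split()       # placeholder tokenizer; real Chinese
        source_words = set(source.split())  # text would need a word segmenter
        for word in target_words:
            counts[word] += 1
            if word not in source_words:
                candidates.add(word)
    # keep only candidate words frequent enough in the whole target text set
    return {w for w in candidates if counts[w] >= min_freq}
```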
- In an embodiment, as shown in FIG. 2, step S130 includes sub-steps S131 and S132.
- S131. Construct the longest common subsequence of each source text and the target text of each source text.
- Specifically, the longest common subsequence technique is used to obtain the longest common subsequence of each source text and the target text of each source text. The longest common subsequence is defined as follows: a sequence S is called the longest common subsequence of two or more known sequences if it is a subsequence of each of them and is the longest among all sequences meeting this condition.
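- For illustration only: the longest common subsequence of two character sequences is usually computed with dynamic programming rather than by enumerating whole subsequence sets as in sub-steps S1311 and S1312 below; this sketch is one practical equivalent, not the patent's prescribed procedure.

```python
def longest_common_subsequence(source: str, target: str) -> str:
    """Character-level LCS via dynamic programming, O(len(source) * len(target))."""
    m, n = len(source), len(target)
    # dp[i][j] = length of the LCS of source[:i] and target[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if source[i - 1] == target[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # backtrack to recover the subsequence itself
    chars, i, j = [], m, n
    while i > 0 and j > 0:
        if source[i - 1] == target[j - 1]:
            chars.append(source[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(chars))

# e.g. longest_common_subsequence("小明出生于1993年。小明生在上海",
#                                 "小明于1993年初出生在上海")
```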
- In an embodiment, as shown in FIG. 3, step S131 includes sub-steps S1311 and S1312.
- S1311. Acquire the subsequence set of each source text and the subsequence set of the target text of each source text.
- Specifically, the subsequences in the subsequence set of each source text are obtained by splitting that source text without changing its character order, and the resulting subsequences are combined into the subsequence set of that source text; the subsequence set of the target text of each source text is obtained in the same way.
- S1312. Match each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain the set of common subsequences of each source text and its target text, and take the longest common subsequence in the set of common subsequences as the longest common subsequence.
- Specifically, each common subsequence in the set of common subsequences is a common subsequence of the source text and its target text, and the longest sequence in the set of common subsequences is the longest common subsequence of the source text and its target text.
- S132. Construct the vocabulary according to the target text of each source text and the longest common subsequence.
- Specifically, the longest common subsequence is that of the target text of each source text and the source text itself; through the longest common subsequence, words that do not exist in it are obtained from the target text of each source text and used as words in the vocabulary, thereby completing the construction of the vocabulary.
- In an embodiment, as shown in FIG. 4, step S132 includes sub-steps S1321 and S1322.
- S1321. Perform word segmentation on the target text of each source text to obtain the words of the target text of each source text.
- Specifically, in this embodiment the reverse maximum matching method among string-based word segmentation methods is used to segment the target text of each source text.
- The segmentation process is as follows: let L be the number of Chinese characters contained in the longest entry of a preset dictionary, and start processing from the end of the target text string. At the beginning of each cycle, the last L characters of the string are taken as the processing object and looked up in the dictionary.
- If such an L-character word exists in the dictionary, the match succeeds and the processing object is segmented as a word; if not, the first Chinese character of the processing object is removed, the remaining string becomes the new processing object, and matching is performed again until a segmentation succeeds, which completes one round of matching and segments one word. This loop repeats until all the words in the target text have been segmented.
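- A minimal sketch of the reverse maximum matching procedure just described, assuming a toy dictionary; a real system would load a full lexicon, and unmatched single characters fall through as one-character words.

```python
def reverse_maximum_match(text: str, dictionary: set) -> list:
    """Segment `text` from right to left, preferring the longest dictionary entry."""
    max_len = max(len(w) for w in dictionary)  # L: length of the longest entry
    words = []
    end = len(text)
    while end > 0:
        length = min(max_len, end)  # take the last `length` characters as the object
        while length > 1 and text[end - length:end] not in dictionary:
            length -= 1  # remove the first character of the processing object
        words.append(text[end - length:end])
        end -= length
    return list(reversed(words))

# toy_dict = {"小明", "出生", "1993", "年初", "上海"}
# reverse_maximum_match("小明于1993年初出生在上海", toy_dict)
```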
- S1322. Match the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
- Specifically, whether a word of the target text of each source text exists in the longest common subsequence is taken as the result of whether that word matches the longest common subsequence; if the match succeeds, the word is not a word in the vocabulary; if the match fails, it is taken as a word in the vocabulary.
- S140. Process each source text according to the vocabulary and the target text of each source text to obtain a first tag sequence.
- Specifically, each source text does not contain the words in the vocabulary. Each source text is annotated by marking the longest common subsequence it shares with its target text; the annotated source text is then split to obtain its characters, the characters are matched with the words in the vocabulary to obtain new words, and the matched words are spliced together to obtain the first tag sequence.
- In an embodiment, as shown in FIG. 5, step S140 includes sub-steps S141, S142, S143 and S144.
- S141. Annotate each source text according to the longest common subsequence to obtain each annotated source text.
- Specifically, the characters of each source text that belong to the longest common subsequence are marked with a first tag, recorded as the symbol "keep"; the characters that do not belong to the longest common subsequence are marked with a second tag, recorded as the symbol "delete", so that each source text becomes a text annotated with the first and second tags. For example, when the source text is "Xiao Ming was born in 1993. Xiao Ming was born in Shanghai" (小明出生于1993年。小明生在上海), the tag sequence of the annotated source text is "keep keep delete delete keep keep keep delete delete delete keep keep keep keep".
- S142. Perform word segmentation on each annotated source text to obtain the character set of each annotated source text.
- Specifically, the character set of each annotated source text is the set of characters marked with the first tag. The segmentation process is: first, the adjacent pairs of words marked with the first tag and the second tag in each source text are segmented; then the sentences marked with the first tag are separately segmented to obtain the character set of each annotated source text.
- For example, when the source text is "Xiao Ming was born in 1993. Xiao Ming was born in Shanghai" (小明出生于1993年。小明生在上海), the final character set of the annotated source text is [小, 明, 于, 1993, 年, 生, 在, 上, 海], where every character is marked with the first tag.
- S143. Match the words in the vocabulary with the characters in the character set respectively to obtain a word set.
- Specifically, each word in the vocabulary is matched with each character in the character set to form new words, and a preset dictionary is searched to determine whether each newly formed word exists in it; if it does not exist, the newly formed word is ignored, and the new words finally retained constitute the word set.
- S144. Splice the words in the word set to obtain the first tag sequence.
- Specifically, the words in the word set are spliced in the order in which the characters are arranged in each annotated source text to obtain the first tag sequence. During splicing, the order of the characters of the source text is followed, and at least one text is obtained; syntactic analysis is then performed on the spliced texts to filter out the sentence that best fits the source text, which is taken as the first tag sequence. The target text of the source text can be predicted through the first tag sequence.
- For example, when the source text is "Xiao Ming was born in 1993. Xiao Ming was born in Shanghai" (小明出生于1993年。小明生在上海), the first tag sequence is "keep keep delete delete keep keep keep 初|delete delete delete 出|keep keep keep keep", where "初|" and "出|" are tags annotated according to the words in the vocabulary.
- S150. Input each source text into the text generation model to be trained to obtain a second tag sequence.
- Specifically, the text generation model to be trained is a model with an encoder-decoder architecture, that is, the text generation model includes an encoder and a decoder. After a source text is input into the model, the second tag sequence is obtained through encoding and decoding, and the target text of the source text can be predicted through the second tag sequence.
- In the embodiment of the present application, the encoder of the text generation model adopts a pre-trained Chinese RoBERTa model composed of 12 transformer layers; the decoder adopts a single-layer transformer, which ensures accuracy while taking the inference speed of the model into account.
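- A hypothetical sketch of this encoder-decoder arrangement in PyTorch: a pre-trained 12-layer Chinese RoBERTa encoder feeding a single-layer transformer decoder that emits one tag per token. The checkpoint name "hfl/chinese-roberta-wwm-ext", the tag-vocabulary size, and the linear projection head are illustrative assumptions; the patent does not specify them.

```python
import torch.nn as nn
from transformers import BertModel  # Chinese RoBERTa checkpoints use the BERT architecture

class TextEditTagger(nn.Module):
    def __init__(self, num_tags: int, d_model: int = 768):
        super().__init__()
        # pre-trained 12-layer Chinese RoBERTa as the encoder (assumed checkpoint)
        self.encoder = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
        # single transformer decoder layer, matching the described architecture
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=12, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)
        self.tag_head = nn.Linear(d_model, num_tags)  # keep/delete/add-word tag logits

    def forward(self, input_ids, attention_mask):
        memory = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        decoded = self.decoder(tgt=memory, memory=memory)  # one tag per source position
        return self.tag_head(decoded)  # shape: (batch, seq_len, num_tags)
```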
- S160. Adjust the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- Specifically, the target text of a source text can be obtained from both the first tag sequence and the second tag sequence. The first tag sequence is obtained on the basis of the target text of the source text, whereas the second tag sequence is generated by the text generation model to be trained; that is, relative to the second tag sequence, the first tag sequence is more accurate, and the target text obtained from the first tag sequence is more accurate.
- The similarity between the first tag sequence and the second tag sequence is calculated, and the configuration parameters of the text generation model are adjusted accordingly according to the similarity, so that the second tag sequence generated by the text generation model comes closer to the first tag sequence, thereby completing the training of the text generation model.
- In an embodiment, as shown in FIG. 6, step S160 includes sub-steps S161 and S162.
- S161. Acquire the similarity between the second tag sequence and the first tag sequence.
- Specifically, when calculating the similarity between the first tag sequence and the second tag sequence, the two tag sequences need to be vectorized, and a distance calculation is then performed; the calculated distance is taken as the similarity between the second tag sequence and the first tag sequence. The longer the distance, the lower the similarity; the shorter the distance, the higher the similarity.
- In the embodiment of the present application, the Euclidean distance is used to obtain the similarity. The Euclidean distance is a commonly used distance definition, referring to the true distance between two points in an n-dimensional space, or the natural length of a vector. The Euclidean distance between the second tag sequence and the first tag sequence is calculated as $d = \sqrt{\sum_{k=1}^{n}(x_{1k}-x_{2k})^{2}}$, where n denotes the dimension of the vectors, $x_{1k}$ is the vector of the first tag sequence, and $x_{2k}$ is the vector of the second tag sequence.
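- The similarity computation of S161 then reduces to a Euclidean distance between the two vectorized tag sequences. A NumPy sketch follows; the one-hot vectorization is an assumed encoding, since the patent only states that the sequences are vectorized.

```python
import numpy as np

TAGS = {"keep": 0, "delete": 1}  # assumed tag inventory for illustration

def vectorize(tag_seq):
    """Flatten a tag sequence into a one-hot vector (assumed encoding)."""
    vec = np.zeros((len(tag_seq), len(TAGS)))
    for i, tag in enumerate(tag_seq):
        vec[i, TAGS[tag]] = 1.0
    return vec.ravel()

def euclidean_distance(first_seq, second_seq):
    """d = sqrt(sum_k (x1k - x2k)^2), matching the formula above."""
    x1, x2 = vectorize(first_seq), vectorize(second_seq)
    return float(np.sqrt(np.sum((x1 - x2) ** 2)))

# The longer the distance, the lower the similarity; a training loop would
# compare this value with the preset threshold before adjusting parameters.
```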
- S162. If the similarity is lower than a preset threshold, adjust the configuration parameters of the text generation model according to the similarity.
- Specifically, the preset threshold determines whether the parameters of the text generation model need to be adjusted so that the model can generate highly semantic text more accurately. The threshold can be set according to the actual situation and is not limited here.
- The method for training a text generation model based on text editing technology described in this application acquires a preset source text set; edits the source text set according to a preset text editor to obtain a target text set of the source text set; constructs a vocabulary according to the source text set and the target text set; processes each source text according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence; inputs each source text into the text generation model to be trained to obtain a second tag sequence; and adjusts the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- The training method of the text generation model based on text editing technology described in the present application not only greatly improves the training efficiency of the text generation model, but also improves the accuracy with which the text generation model generates highly semantic text.
- The embodiment of the present application further provides an apparatus 100 for training a text generation model based on text editing technology, which is configured to perform any embodiment of the foregoing training method.
- Specifically, FIG. 7 is a schematic block diagram of the training device 100 for a text generation model based on text editing technology provided by an embodiment of the present application.
- As shown in FIG. 7, the training device 100 for a text generation model based on text editing technology includes a first acquiring unit 110, an editing unit 120, a first construction unit 130, a processing unit 140, an input unit 150, and a first adjustment unit 160.
- The first acquiring unit 110 is configured to acquire a preset source text set.
- The editing unit 120 is configured to edit the source text set according to a preset text editor to obtain a target text set of the source text set.
- The first construction unit 130 is configured to construct a vocabulary according to the source text set and the target text set.
- In other embodiments, as shown in FIG. 8, the first construction unit 130 includes a second construction unit 131 and a third construction unit 132.
- The second construction unit 131 is configured to construct the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text.
- In other embodiments, as shown in FIG. 9, the second construction unit 131 includes a second acquiring unit 1311 and a first matching unit 1312.
- The second acquiring unit 1311 is configured to acquire the subsequence set of each source text and the subsequence set of the target text of each source text.
- The first matching unit 1312 is configured to match each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain the set of common subsequences of each source text and its target text, and to take the longest common subsequence in the set as the longest common subsequence.
- The third construction unit 132 is configured to construct the vocabulary according to the target text of each source text and the longest common subsequence.
- In other embodiments, as shown in FIG. 10, the third construction unit 132 includes a first word segmentation unit 1321 and a second matching unit 1322.
- The first word segmentation unit 1321 is configured to perform word segmentation on the target text of each source text to obtain the words of the target text of each source text.
- The second matching unit 1322 is configured to match the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
- The processing unit 140 is configured to process each source text according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence.
- In other embodiments, as shown in FIG. 11, the processing unit 140 includes an annotation unit 141, a second word segmentation unit 142, a third matching unit 143 and a splicing unit 144.
- The annotation unit 141 is configured to annotate each source text according to the longest common subsequence to obtain each annotated source text.
- The second word segmentation unit 142 is configured to perform word segmentation on each annotated source text to obtain the character set of each annotated source text.
- The third matching unit 143 is configured to match the words in the vocabulary with the characters in the character set respectively to obtain a word set.
- The splicing unit 144 is configured to splice the words in the word set to obtain the first tag sequence.
- The input unit 150 is configured to input each source text into the text generation model to be trained to obtain the second tag sequence.
- The first adjustment unit 160 is configured to adjust the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- In other embodiments, as shown in FIG. 12, the first adjustment unit 160 includes a third acquiring unit 161 and a second adjustment unit 162.
- The third acquiring unit 161 is configured to acquire the similarity between the second tag sequence and the first tag sequence.
- The second adjustment unit 162 is configured to adjust the configuration parameters of the text generation model according to the similarity if the similarity is lower than a preset threshold.
- The training device 100 for a text generation model based on text editing technology provided by the embodiment of the present application is configured to perform the foregoing method of acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence; inputting each source text into the text generation model to be trained to obtain a second tag sequence; and adjusting the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- FIG. 13 is a schematic block diagram of a computer device according to an embodiment of the present application.
- Referring to FIG. 13, the device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
- The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032.
- When the computer program 5032 is executed, it can cause the processor 502 to perform the training method of the text generation model based on text editing technology.
- The processor 502 is used to provide computing and control capabilities and supports the operation of the entire device 500.
- The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 can perform the training method of the text generation model based on text editing technology.
- The network interface 505 is used for network communication, such as providing transmission of data information.
- Those skilled in the art can understand that FIG. 13 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the device 500 to which the solution is applied; the specific device 500 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- The processor 502 is configured to run the computer program 5032 stored in the memory to implement any embodiment of the training method of the text generation model based on text editing technology.
- Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program. The computer program may be stored in a storage medium, and the storage medium may be a computer-readable storage medium.
- The computer program is executed by at least one processor in the computer system to implement the process steps of the foregoing method embodiments.
- The computer-readable storage medium may be non-volatile or volatile.
- The storage medium stores a computer program that, when executed by a processor, implements any embodiment of the training method of the text generation model based on text editing technology.
- The computer-readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disk, or any other medium that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Machine Translation (AREA)
Abstract
A method and device (100) for training a text generation model based on text editing technology. The method includes: acquiring a preset source text set (S110); editing the source text set according to a preset text editor to obtain a target text set of the source text set (S120); constructing a vocabulary according to the source text set and the target text set (S130); processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence (S140); inputting each source text into a text generation model to be trained to obtain a second tag sequence (S150); and adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence (S160). Training the text generation model through the above method not only greatly improves the training efficiency of the text generation model, but also improves the accuracy with which the text generation model generates highly semantic text.
Description
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on October 22, 2020, with application number 202011139506.2 and invention title "Training method and apparatus for a text generation model based on text editing technology", the entire content of which is incorporated herein by reference.
This application belongs to the field of machine learning technology in artificial intelligence, and in particular relates to a method and apparatus for training a text generation model based on text editing technology.
Text generation is an important task in the field of natural language processing and a major challenge facing artificial intelligence. Although text generation can assist professionals in professional writing, such as legal document completion, automatic news generation, text summary generation, and text paraphrasing, the inventor realizes that training a text generation model in the prior art relies on a large amount of data, while high-quality text data in specific fields is relatively scarce, resulting in low accuracy of the highly semantic text generated by the text generation model.
Summary of the Invention
The embodiments of this application provide a method and device for training a text generation model based on text editing technology, which solve the problem in the prior art that a text generation model requires a large amount of high-quality text data for training before it can accurately obtain highly semantic text.
In a first aspect, an embodiment of this application provides a method for training a text generation model based on text editing technology, which includes:
acquiring a preset source text set;
editing the source text set according to a preset text editor to obtain a target text set of the source text set;
constructing a vocabulary according to the source text set and the target text set;
processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence;
inputting each source text into a text generation model to be trained to obtain a second tag sequence; and
adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
In a second aspect, an embodiment of this application provides a training device for a text generation model based on text editing technology, which includes:
a first acquiring unit, configured to acquire a preset source text set;
an editing unit, configured to edit the source text set according to a preset text editor to obtain a target text set of the source text set;
a first construction unit, configured to construct a vocabulary according to the source text set and the target text set;
a processing unit, configured to process each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence;
an input unit, configured to input each source text into a text generation model to be trained to obtain a second tag sequence; and
a first adjustment unit, configured to adjust configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
In a third aspect, an embodiment of this application further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
acquiring a preset source text set;
editing the source text set according to a preset text editor to obtain a target text set of the source text set;
constructing a vocabulary according to the source text set and the target text set;
processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence;
inputting each source text into a text generation model to be trained to obtain a second tag sequence; and
adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
In a fourth aspect, an embodiment of this application further provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the following steps:
acquiring a preset source text set;
editing the source text set according to a preset text editor to obtain a target text set of the source text set;
constructing a vocabulary according to the source text set and the target text set;
processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence;
inputting each source text into a text generation model to be trained to obtain a second tag sequence; and
adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
Through the training method of the text generation model based on text editing technology described in this application, the training of the text model can still be completed when the source text set is small, the training efficiency of the text generation model is greatly improved, and the accuracy of generating highly semantic text is improved.
In order to explain the technical solutions of the embodiments of this application more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic flowchart of a training method for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 2 is a schematic diagram of a sub-process of the training method for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 3 is a schematic diagram of another sub-process of the training method for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 4 is a schematic diagram of another sub-process of the training method for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 5 is a schematic diagram of another sub-process of the training method for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 6 is a schematic diagram of another sub-process of the training method for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 7 is a schematic block diagram of a training device for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 8 is a schematic block diagram of a subunit of the training device for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 9 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 10 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 11 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 12 is a schematic block diagram of another subunit of the training device for a text generation model based on text editing technology provided by an embodiment of this application;
FIG. 13 is a schematic block diagram of a computer device provided by an embodiment of this application.
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are part of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of this application.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terms used in this specification are only for the purpose of describing particular embodiments and are not intended to limit this application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Please refer to FIG. 1, which is a schematic flowchart of the training method for a text generation model based on text editing technology provided by an embodiment of this application. The training method is built and run in a server. When the text generation model is trained in the server, the source text set required for training is acquired; each source text in the source text set is edited to obtain its target text; each source text is then processed with a preset vocabulary and the target text to obtain a first tag sequence, and is also input into the text generation model to be trained to obtain a second tag sequence. The configuration parameters of the model to be trained are adjusted by calculating the similarity between the first tag sequence and the second tag sequence, so that the training of the text model can still be completed even when the source text set is small, which greatly improves the training efficiency of the text generation model and the accuracy of generating highly semantic text.
As shown in FIG. 1, the method includes steps S110 to S160.
S110. Acquire a preset source text set.
A preset source text set is acquired. Specifically, the source text set is the data set used to train the text generation model; the number of texts in the source text set can be configured according to user needs and can be either a large or a small amount of text data. In the embodiment of this application, a source text set with a small amount of data is used to train the text generation model.
S120. Edit the source text set according to a preset text editor to obtain a target text set of the source text set.
The source text set is edited according to a preset text editor to obtain the target text set of the source text set. Specifically, the text editor is a text editing tool that can be used to edit each source text in the source text set to obtain a highly semantic target text; Notepad under Windows, TextEdit under Mac OS X, and vi, emacs, gedit, etc. under Linux can all be used to edit each source text in the source text set. For example, when the source text is "小明出生于1993年。小明生在上海" ("Xiao Ming was born in 1993. Xiao Ming was born in Shanghai"), the target text edited with a text editor is "小明于1993年初出生在上海" ("Xiao Ming was born in Shanghai in early 1993").
S130. Construct a vocabulary according to the source text set and the target text set.
A vocabulary is constructed according to the source text set and the target text set. Specifically, the words in each target text of the target text set that do not exist in that target text's source text are used as the words in the vocabulary. In the process of constructing the vocabulary, in order to reduce the amount of computation when the vocabulary is used later, the vocabulary usually needs to be optimized to make it as small as possible, and the words taken into the vocabulary are filtered according to their frequency of occurrence in the target text set; for example, words in the vocabulary that appear fewer than ten times in the target text set are eliminated to obtain an optimized vocabulary. In the embodiment of this application, the vocabulary is stored in a blockchain after its construction is completed, which ensures the security of vocabulary storage.
In an embodiment, as shown in FIG. 2, step S130 includes sub-steps S131 and S132.
S131. Construct the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text.
The longest common subsequence of each source text and the target text of each source text is constructed according to each source text and the target text of each source text. Specifically, the longest common subsequence technique is used to obtain the longest common subsequence of each source text and the target text of each source text. The longest common subsequence is defined as follows: a sequence S is called the longest common subsequence of two or more known sequences if it is a subsequence of each of them and is the longest among all sequences meeting this condition.
In an embodiment, as shown in FIG. 3, step S131 includes sub-steps S1311 and S1312.
S1311. Acquire the subsequence set of each source text and the subsequence set of the target text of each source text.
The subsequence set of each source text and the subsequence set of the target text of each source text are acquired. Specifically, the subsequences in the subsequence set of each source text are obtained by splitting that source text without changing its character order, and the resulting subsequences are combined into the subsequence set of that source text; the subsequence set of the target text of each source text is obtained in the same way.
S1312. Match each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain the set of common subsequences of each source text and the target text of each source text, and take the longest common subsequence in the set of common subsequences as the longest common subsequence.
Each subsequence in the subsequence set of each source text is matched with each subsequence in the subsequence set of the target text to obtain the set of common subsequences of each source text and the target text of each source text, and the longest common subsequence in the set of common subsequences is taken as the longest common subsequence. Specifically, every common subsequence in the set of common subsequences is a common subsequence of the source text and its target text, and the longest sequence in the set of common subsequences is the longest common subsequence of the source text and its target text.
S132. Construct the vocabulary according to the target text of each source text and the longest common subsequence.
The vocabulary is constructed according to the target text of each source text and the longest common subsequence. Specifically, the longest common subsequence is that of the target text of each source text and the source text itself; through the longest common subsequence, words that do not exist in the longest common subsequence are obtained from the target text of each source text and used as words in the vocabulary, thereby completing the construction of the vocabulary.
In an embodiment, as shown in FIG. 4, step S132 includes sub-steps S1321 and S1322.
S1321. Perform word segmentation on the target text of each source text to obtain the words of the target text of each source text.
Word segmentation is performed on the target text of each source text to obtain the words of the target text of each source text. Specifically, in this embodiment the reverse maximum matching method among string-based word segmentation methods is used to segment the target text of each source text. The segmentation process is as follows: let L be the number of Chinese characters contained in the longest entry of a preset dictionary, and start processing from the end of the target text string. At the beginning of each cycle, the last L characters of the string are taken as the processing object and looked up in the dictionary. If such an L-character word exists in the dictionary, the match succeeds and the processing object is segmented as a word; if not, the first Chinese character of the processing object is removed, the remaining string becomes the new processing object, and matching is performed again until a segmentation succeeds, which completes one round of matching and segments one word. This loop repeats until all the words in the target text have been segmented.
S1322. Match the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
The words of the target text of each source text are matched with the longest common subsequence to obtain the words constituting the vocabulary from the words of the target text of each source text. Specifically, whether a word of the target text of each source text exists in the longest common subsequence is taken as the result of whether that word matches the longest common subsequence; if the match succeeds, the word is not a word in the vocabulary; if the match fails, it is taken as a word in the vocabulary.
S140. Process each source text according to the vocabulary and the target text of each source text to obtain a first tag sequence.
Each source text is processed according to the vocabulary and the target text of each source text to obtain a first tag sequence. Specifically, each source text does not contain the words in the vocabulary. Each source text is annotated by marking the longest common subsequence it shares with its target text; the annotated source text is then split to obtain its characters, the characters are matched with the words in the vocabulary to obtain new words, and the matched words are spliced together to obtain the first tag sequence.
In an embodiment, as shown in FIG. 5, step S140 includes sub-steps S141, S142, S143 and S144.
S141. Annotate each source text according to the longest common subsequence to obtain each annotated source text.
Each source text is annotated according to the longest common subsequence to obtain each annotated source text. Specifically, the characters of each source text that belong to the longest common subsequence are marked with a first tag, recorded as the symbol "keep"; the characters that do not belong to the longest common subsequence are marked with a second tag, recorded as the symbol "delete", so that each source text becomes a text annotated with the first and second tags. For example, when the source text is "小明出生于1993年。小明生在上海", the tag sequence of the annotated source text is "keep keep delete delete keep keep keep delete delete delete keep keep keep keep".
S142. Perform word segmentation on each annotated source text to obtain the character set of each annotated source text.
Word segmentation is performed on each annotated source text to obtain the character set of each annotated source text. Specifically, the character set of each annotated source text is the set of characters marked with the first tag. The segmentation process is: first, the adjacent pairs of words marked with the first tag and the second tag in each source text are segmented; then the sentences marked with the first tag are separately segmented to obtain the character set of each annotated source text. For example, when the source text is "小明出生于1993年。小明生在上海", the tag sequence of the annotated source text is "keep keep delete delete keep keep keep delete delete delete keep keep keep keep", and the final character set of the annotated source text is [小, 明, 于, 1993, 年, 生, 在, 上, 海], where every character is marked with the first tag.
S143. Match the words in the vocabulary with the characters in the character set respectively to obtain a word set.
The words in the vocabulary are respectively matched with the characters in the character set to obtain a word set. Specifically, each word in the vocabulary is matched with each character in the character set to form new words, and the preset dictionary is then searched to determine whether each newly formed word exists in it; if it does not, the newly formed word is ignored, and the new words finally retained constitute the word set.
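As an illustrative aside (not part of the original description), the matching in S143 can be sketched as follows: each vocabulary word is combined with an annotated character, and the combination is kept only if it exists in the preset dictionary. The two concatenation orders and the toy dictionary are assumptions.

```python
def build_word_set(vocabulary, char_set, dictionary):
    """Combine vocabulary words with kept characters; retain only combinations
    that exist in the preset dictionary."""
    word_set = set()
    for word in vocabulary:
        for ch in char_set:
            for candidate in (word + ch, ch + word):  # both orders (assumed)
                if candidate in dictionary:
                    word_set.add(candidate)
    return word_set

# build_word_set({"初", "出"}, ["小", "明", "于", "1993", "年", "生", "在", "上", "海"],
#                {"年初", "出生"})  # -> {"年初", "出生"}
```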
S144. Splice the words in the word set to obtain the first tag sequence.
The words in the word set are spliced to obtain the first tag sequence. Specifically, the words in the word set are spliced in the order in which the characters are arranged in each annotated source text to obtain the first tag sequence. When splicing all the words in the word set, the splicing follows the order in which the characters of the source text are composed, and at least one text is obtained after splicing; syntactic analysis is then performed on the spliced texts to filter out the sentence that best fits the source text, which is taken as the first tag sequence. The target text of the source text can be predicted through the first tag sequence. For example, when the source text is "小明出生于1993年。小明生在上海", the first tag sequence is "keep keep delete delete keep keep keep 初|delete delete delete 出|keep keep keep keep", where "初|" and "出|" are tags annotated according to the words in the vocabulary.
S150. Input each source text into the text generation model to be trained to obtain a second tag sequence.
Each source text is input into the text generation model to be trained to obtain a second tag sequence. Specifically, the text generation model to be trained is a model with an encoder-decoder architecture, that is, the text generation model includes an encoder and a decoder. After a source text is input into the text generation model to be trained, the second tag sequence is obtained through encoding and decoding, and the target text of that source text can be predicted through the second tag sequence. In the embodiment of this application, the encoder of the text generation model adopts a pre-trained Chinese RoBERTa model, composed of 12 transformer layers; the decoder adopts a single-layer transformer, which ensures accuracy while taking the inference speed of the model into account.
S160. Adjust the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
The configuration parameters of the text generation model are adjusted according to the first tag sequence and the second tag sequence. Specifically, the target text of a source text can be obtained from both the first tag sequence and the second tag sequence; the first tag sequence is obtained on the basis of the target text of the source text, whereas the second tag sequence is generated by the text generation model to be trained, that is, relative to the second tag sequence, the target text obtained from the first tag sequence is more accurate. The similarity between the first tag sequence and the second tag sequence is calculated, and the configuration parameters of the text generation model are adjusted accordingly according to the similarity, so that the second tag sequence generated by the text generation model comes closer to the first tag sequence, thereby completing the training of the text generation model.
In an embodiment, as shown in FIG. 6, step S160 includes sub-steps S161 and S162.
S161. Acquire the similarity between the second tag sequence and the first tag sequence.
The similarity between the second tag sequence and the first tag sequence is acquired. Specifically, when calculating the similarity between the first tag sequence and the second tag sequence, the two tag sequences need to be vectorized, and a distance calculation is then performed; the calculated distance is taken as the similarity between the second tag sequence and the first tag sequence. The longer the distance, the lower the similarity; the shorter the distance, the higher the similarity. In the embodiment of this application, the Euclidean distance is used to obtain the similarity. The Euclidean distance is a commonly used distance definition, referring to the true distance between two points in an n-dimensional space, or the natural length of a vector. The Euclidean distance between the second tag sequence and the first tag sequence is calculated as $d = \sqrt{\sum_{k=1}^{n}(x_{1k}-x_{2k})^{2}}$, where n denotes the dimension of the vectors, $x_{1k}$ is the vector of the first tag sequence, and $x_{2k}$ is the vector of the second tag sequence.
S162. If the similarity is lower than a preset threshold, adjust the configuration parameters of the text generation model according to the similarity.
If the similarity is lower than the preset threshold, the configuration parameters of the text generation model are adjusted according to the similarity. Specifically, the preset threshold determines whether the parameters of the text generation model need to be adjusted so that the text generation model can generate highly semantic text more accurately. The threshold can be set according to the actual situation and is not limited here.
Through the training method for a text generation model based on text editing technology described in this application, a preset source text set is acquired; the source text set is edited according to a preset text editor to obtain a target text set of the source text set; a vocabulary is constructed according to the source text set and the target text set; each source text is processed according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence; each source text is input into the text generation model to be trained to obtain a second tag sequence; and the configuration parameters of the text generation model are adjusted according to the first tag sequence and the second tag sequence. The training method for a text generation model based on text editing technology described in this application not only greatly improves the training efficiency of the text generation model, but also improves the accuracy with which the text generation model generates highly semantic text.
An embodiment of this application further provides a training device 100 for a text generation model based on text editing technology, which is configured to perform any embodiment of the foregoing training method for a text generation model based on text editing technology. Specifically, please refer to FIG. 7, which is a schematic block diagram of the training device 100 for a text generation model based on text editing technology provided by an embodiment of this application.
As shown in FIG. 7, the training device 100 for a text generation model based on text editing technology includes a first acquiring unit 110, an editing unit 120, a first construction unit 130, a processing unit 140, an input unit 150 and a first adjustment unit 160.
The first acquiring unit 110 is configured to acquire a preset source text set.
The editing unit 120 is configured to edit the source text set according to a preset text editor to obtain a target text set of the source text set.
The first construction unit 130 is configured to construct a vocabulary according to the source text set and the target text set.
In other embodiments, as shown in FIG. 8, the first construction unit 130 includes a second construction unit 131 and a third construction unit 132.
The second construction unit 131 is configured to construct the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text.
In other embodiments, as shown in FIG. 9, the second construction unit 131 includes a second acquiring unit 1311 and a first matching unit 1312.
The second acquiring unit 1311 is configured to acquire the subsequence set of each source text and the subsequence set of the target text of each source text.
The first matching unit 1312 is configured to match each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain the set of common subsequences of each source text and its target text, and to take the longest common subsequence in the set as the longest common subsequence.
The third construction unit 132 is configured to construct the vocabulary according to the target text of each source text and the longest common subsequence.
In other embodiments, as shown in FIG. 10, the third construction unit 132 includes a first word segmentation unit 1321 and a second matching unit 1322.
The first word segmentation unit 1321 is configured to perform word segmentation on the target text of each source text to obtain the words of the target text of each source text.
The second matching unit 1322 is configured to match the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
The processing unit 140 is configured to process each source text according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence.
In other embodiments, as shown in FIG. 11, the processing unit 140 includes an annotation unit 141, a second word segmentation unit 142, a third matching unit 143 and a splicing unit 144.
The annotation unit 141 is configured to annotate each source text according to the longest common subsequence to obtain each annotated source text.
The second word segmentation unit 142 is configured to perform word segmentation on each annotated source text to obtain the character set of each annotated source text.
The third matching unit 143 is configured to match the words in the vocabulary with the characters in the character set respectively to obtain a word set.
The splicing unit 144 is configured to splice the words in the word set to obtain the first tag sequence.
The input unit 150 is configured to input each source text into the text generation model to be trained to obtain the second tag sequence.
The first adjustment unit 160 is configured to adjust the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
In other embodiments, as shown in FIG. 12, the first adjustment unit 160 includes a third acquiring unit 161 and a second adjustment unit 162.
The third acquiring unit 161 is configured to acquire the similarity between the second tag sequence and the first tag sequence.
The second adjustment unit 162 is configured to adjust the configuration parameters of the text generation model according to the similarity if the similarity is lower than a preset threshold.
The training device 100 for a text generation model based on text editing technology provided by the embodiment of this application is configured to perform the foregoing method: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text according to the vocabulary and the target text of each source text in the source text set to obtain a first tag sequence; inputting each source text into the text generation model to be trained to obtain a second tag sequence; and adjusting the configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
Please refer to FIG. 13, which is a schematic block diagram of a computer device provided by an embodiment of this application.
Referring to FIG. 13, the device 500 includes a processor 502, a memory and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. When executed, the computer program 5032 can cause the processor 502 to perform the training method of the text generation model based on text editing technology. The processor 502 is used to provide computing and control capabilities and supports the operation of the entire device 500. The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 can perform the training method of the text generation model based on text editing technology. The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art can understand that the structure shown in FIG. 13 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the device 500 to which the solution is applied; the specific device 500 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement any embodiment of the training method of the text generation model based on text editing technology.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program. The computer program may be stored in a storage medium, and the storage medium may be a computer-readable storage medium. The computer program is executed by at least one processor in the computer system to implement the process steps of the above method embodiments.
Therefore, this application further provides a computer-readable storage medium. The computer-readable storage medium may be non-volatile or volatile. The storage medium stores a computer program which, when executed by a processor, implements any embodiment of the training method of the text generation model based on text editing technology.
The computer-readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disk, or any other medium that can store program code.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other division methods in actual implementation. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus, device and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated here. The above are only specific implementations of this application, but the protection scope of this application is not limited thereto; any person skilled in the art can easily think of various equivalent modifications or replacements within the technical scope disclosed in this application, and these modifications or replacements shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Claims (20)
- A method for training a text generation model based on text editing technology, comprising the following steps: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; inputting each source text into a text generation model to be trained to obtain a second tag sequence; and adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- The method for training a text generation model based on text editing technology according to claim 1, wherein the constructing a vocabulary according to the source text set and the target text set comprises: constructing the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text; and constructing the vocabulary according to the target text of each source text and the longest common subsequence.
- The method for training a text generation model based on text editing technology according to claim 2, wherein the constructing the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text comprises: acquiring the subsequence set of each source text and the subsequence set of the target text of each source text; and matching each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain a set of common subsequences of each source text and the target text of each source text, and taking the longest common subsequence in the set of common subsequences as the longest common subsequence.
- The method for training a text generation model based on text editing technology according to claim 2, wherein the constructing the vocabulary according to the target text of each source text and the longest common subsequence comprises: performing word segmentation on the target text of each source text to obtain words of the target text of each source text; and matching the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
- The method for training a text generation model based on text editing technology according to claim 2, wherein the processing each source text according to the preset vocabulary and the target text of each source text to obtain a first tag sequence comprises: annotating each source text according to the longest common subsequence to obtain each annotated source text; performing word segmentation on each annotated source text to obtain a character set of each annotated source text; matching the words in the vocabulary with the characters in the character set respectively to obtain a word set; and splicing the words in the word set to obtain the first tag sequence.
- The method for training a text generation model based on text editing technology according to claim 5, wherein the splicing the words in the word set to obtain the first tag sequence comprises: splicing the words in the word set in the order in which the characters are arranged in each annotated source text to obtain the first tag sequence.
- The method for training a text generation model based on text editing technology according to claim 1, wherein the adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence comprises: acquiring the similarity between the second tag sequence and the first tag sequence; and if the similarity is lower than a preset threshold, adjusting the configuration parameters of the text generation model according to the similarity.
- A training device for a text generation model based on text editing technology, comprising: a first acquiring unit, configured to acquire a preset source text set; an editing unit, configured to edit the source text set according to a preset text editor to obtain a target text set of the source text set; a first construction unit, configured to construct a vocabulary according to the source text set and the target text set; a processing unit, configured to process each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; an input unit, configured to input each source text into a text generation model to be trained to obtain a second tag sequence; and a first adjustment unit, configured to adjust configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; inputting each source text into a text generation model to be trained to obtain a second tag sequence; and adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- The computer device according to claim 9, wherein the constructing a vocabulary according to the source text set and the target text set comprises: constructing the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text; and constructing the vocabulary according to the target text of each source text and the longest common subsequence.
- The computer device according to claim 10, wherein the constructing the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text comprises: acquiring the subsequence set of each source text and the subsequence set of the target text of each source text; and matching each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain a set of common subsequences of each source text and the target text of each source text, and taking the longest common subsequence in the set of common subsequences as the longest common subsequence.
- The computer device according to claim 10, wherein the constructing the vocabulary according to the target text of each source text and the longest common subsequence comprises: performing word segmentation on the target text of each source text to obtain words of the target text of each source text; and matching the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
- The computer device according to claim 10, wherein the processing each source text according to the preset vocabulary and the target text of each source text to obtain a first tag sequence comprises: annotating each source text according to the longest common subsequence to obtain each annotated source text; performing word segmentation on each annotated source text to obtain a character set of each annotated source text; matching the words in the vocabulary with the characters in the character set respectively to obtain a word set; and splicing the words in the word set to obtain the first tag sequence.
- The computer device according to claim 13, wherein the splicing the words in the word set to obtain the first tag sequence comprises: splicing the words in the word set in the order in which the characters are arranged in each annotated source text to obtain the first tag sequence.
- The computer device according to claim 9, wherein the adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence comprises: acquiring the similarity between the second tag sequence and the first tag sequence; and if the similarity is lower than a preset threshold, adjusting the configuration parameters of the text generation model according to the similarity.
- A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the following steps: acquiring a preset source text set; editing the source text set according to a preset text editor to obtain a target text set of the source text set; constructing a vocabulary according to the source text set and the target text set; processing each source text in the source text set according to the vocabulary and the target text of each source text to obtain a first tag sequence; inputting each source text into a text generation model to be trained to obtain a second tag sequence; and adjusting configuration parameters of the text generation model according to the first tag sequence and the second tag sequence.
- The computer-readable storage medium according to claim 16, wherein the constructing a vocabulary according to the source text set and the target text set comprises: constructing the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text; and constructing the vocabulary according to the target text of each source text and the longest common subsequence.
- The computer-readable storage medium according to claim 17, wherein the constructing the longest common subsequence of each source text and the target text of each source text according to each source text and the target text of each source text comprises: acquiring the subsequence set of each source text and the subsequence set of the target text of each source text; and matching each subsequence in the subsequence set of each source text with each subsequence in the subsequence set of the target text to obtain a set of common subsequences of each source text and the target text of each source text, and taking the longest common subsequence in the set of common subsequences as the longest common subsequence.
- The computer-readable storage medium according to claim 17, wherein the constructing the vocabulary according to the target text of each source text and the longest common subsequence comprises: performing word segmentation on the target text of each source text to obtain words of the target text of each source text; and matching the words of the target text of each source text with the longest common subsequence to obtain, from the words of the target text of each source text, the words constituting the vocabulary.
- The computer-readable storage medium according to claim 17, wherein the processing each source text according to the preset vocabulary and the target text of each source text to obtain a first tag sequence comprises: annotating each source text according to the longest common subsequence to obtain each annotated source text; performing word segmentation on each annotated source text to obtain a character set of each annotated source text; matching the words in the vocabulary with the characters in the character set respectively to obtain a word set; and splicing the words in the word set to obtain the first tag sequence.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011139506.2A | 2020-10-22 | 2020-10-22 | Training method and apparatus for a text generation model based on text editing technology
CN202011139506.2 | 2020-10-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021189890A1 (zh) | 2021-09-30
Family
ID=74264135
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/131757 (WO2021189890A1) | 2020-10-22 | 2020-11-26 | Training method and apparatus for a text generation model based on text editing technology
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112257456A (zh) |
WO (1) | WO2021189890A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011149B (zh) * | 2021-03-04 | 2024-05-14 | Institute of Automation, Chinese Academy of Sciences | Text error correction method and system |
CN113435183B (zh) * | 2021-06-30 | 2023-08-29 | Ping An Technology (Shenzhen) Co., Ltd. | Text generation method, apparatus and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180329886A1 (en) * | 2017-05-15 | 2018-11-15 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Artificial intelligence based method and apparatus for generating information |
CN109657051A (zh) * | 2018-11-30 | 2019-04-19 | Ping An Technology (Shenzhen) Co., Ltd. | Text summary generation method and apparatus, computer device and storage medium |
CN109933662A (zh) * | 2019-02-15 | 2019-06-25 | Beijing QIYI Century Science & Technology Co., Ltd. | Model training method, information generation method, apparatus, electronic device and computer-readable medium |
CN110097085A (zh) * | 2019-04-03 | 2019-08-06 | Alibaba Group Holding Limited | Lyric text generation method, training method, apparatus, server and storage medium |
CN110263350A (zh) * | 2019-03-08 | 2019-09-20 | Tencent Technology (Shenzhen) Co., Ltd. | Model training method and apparatus, computer-readable storage medium and computer device |
- 2020-10-22 CN CN202011139506.2A patent/CN112257456A/zh active Pending
- 2020-11-26 WO PCT/CN2020/131757 patent/WO2021189890A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112257456A (zh) | 2021-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108091328B (zh) | Artificial intelligence-based speech recognition error correction method, apparatus and readable medium | |
CN108491373B (zh) | Entity recognition method and system | |
CN107657947B (zh) | Artificial intelligence-based speech processing method and apparatus | |
KR102117160B1 (ko) | Text processing method and apparatus based on ambiguous entity words | |
CN108052499B (zh) | Artificial intelligence-based text error correction method, apparatus and computer-readable medium | |
CN114694076A (zh) | Multimodal sentiment analysis method based on multi-task learning and stacked cross-modal fusion | |
WO2019085779A1 (zh) | Machine processing and text error correction method and apparatus, computing device and storage medium | |
CN107301170B (zh) | Artificial intelligence-based sentence segmentation method and apparatus | |
US9734820B2 (en) | System and method for translating real-time speech using segmentation based on conjunction locations | |
WO2021189890A1 (zh) | Training method and apparatus for a text generation model based on text editing technology | |
US20040111264A1 (en) | Name entity extraction using language models | |
WO2021139266A1 (zh) | Fine-tuning method and apparatus for a BERT model fused with external knowledge, and computer device | |
WO2020215456A1 (zh) | Text annotation method and device based on teacher supervision | |
CN113948066B (zh) | Error correction method, system, storage medium and apparatus for real-time transcribed text | |
CN113449489B (zh) | Punctuation annotation method, apparatus, computer device and storage medium | |
CN112347241A (zh) | Abstract extraction method, apparatus, device and storage medium | |
CN111160004A (zh) | Method and apparatus for establishing a sentence-breaking model | |
CN114662476A (zh) | Character sequence recognition method fusing dictionary and character features | |
CN116468009A (zh) | Article generation method, apparatus, electronic device and storage medium | |
CN113742446A (zh) | Knowledge graph question answering method and system based on path ranking | |
CN114528394A (zh) | Text triple extraction method and apparatus based on a masked language model | |
CN113919358A (zh) | Named entity recognition method and system based on active learning | |
CN111832248A (zh) | Text normalization method, apparatus, electronic device and storage medium | |
CN114218921A (zh) | Question semantic matching method optimizing BERT | |
CN112507718B (zh) | Cross-lingual entity annotation method, apparatus, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20926689; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 20926689; Country of ref document: EP; Kind code of ref document: A1