CN113326696B - Text generation method and device - Google Patents

Text generation method and device

Info

Publication number
CN113326696B
Authority
CN
China
Prior art keywords
text
target
determining
score
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110883337.1A
Other languages
Chinese (zh)
Other versions
CN113326696A (en)
Inventor
岳祥
方强
丁文彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Century TAL Education Technology Co Ltd
Original Assignee
Beijing Century TAL Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Century TAL Education Technology Co Ltd
Priority to CN202110883337.1A
Publication of CN113326696A
Application granted
Publication of CN113326696B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/253 Grammatical analysis; Style critique
    • G06F 40/279 Recognition of textual entities

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The disclosure provides a text generation method and a text generation device, belonging to the field of text processing. The method comprises the following steps: acquiring a target text, where the target text at least comprises target prosody information and a question stem text; calling a rhyming text generation model constructed based on prosody information to process the target text and obtain at least one wrong-option text, where each wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text; and determining at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text. The method and device improve the efficiency of question setting.

Description

Text generation method and device
Technical Field
The present disclosure relates to the field of text processing, and in particular, to a text generation method and apparatus.
Background
To assess students' understanding of classical Chinese poetry, teachers design questions about the poems.
In one common question type, one line of a poem is blanked out and the remaining lines (or the line immediately before or after the blank) are given as the question stem, together with several options that include the correct line and at least one wrong option. The wrong options are close to the correct line, so a student who has not mastered the original poem may pick a wrong option, which serves the assessment purpose.
However, a wrong-option line is usually not a line from any existing poem, so a teacher must spend considerable effort and time composing it and is limited by his or her own writing ability, which makes question setting inefficient.
Disclosure of Invention
To solve the above problems in the prior art, embodiments of the present disclosure provide a text generation method and apparatus. The technical solution is as follows:
According to an aspect of the present disclosure, there is provided a text generation method, the method comprising:
acquiring a target text, where the target text at least comprises target prosody information and a question stem text;
calling a rhyming text generation model constructed based on prosody information to process the target text and obtain at least one wrong-option text, where each wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text; and
determining at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text.
According to another aspect of the present disclosure, there is provided a text generation apparatus, the apparatus comprising:
an acquisition module, configured to acquire a target text, where the target text at least comprises target prosody information and a question stem text;
a processing module, configured to call a rhyming text generation model constructed based on prosody information and process the target text to obtain at least one wrong-option text, where each wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text; and
a determining module, configured to determine at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing a program,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the text generation method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above text generation method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above text generation method.
In the embodiments of the present disclosure, because the rhyming text generation model is constructed based on prosody information, when the terminal calls the model to generate wrong-option texts, it can feed the target prosody information and the question stem text into the model and obtain the wrong-option texts as output. Since the model input includes the target prosody information, the model can exploit that information during processing, so each output wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text and thus meets the prosody requirement. The terminal can then determine at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text, which improves the efficiency of question setting.
Drawings
Further details, features and advantages of the disclosure are disclosed in the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a flow diagram of a text generation method according to an example embodiment of the present disclosure;
FIG. 2 shows a knowledge base schematic in accordance with an example embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram for determining input encoding according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a model structure diagram in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 shows a model structure diagram in accordance with an exemplary embodiment of the present disclosure;
FIG. 6 shows a model processing diagram in accordance with an exemplary embodiment of the present disclosure;
FIG. 7 illustrates a flow chart for determining a composite score according to an exemplary embodiment of the present disclosure;
FIG. 8 shows a flowchart of a training method according to an example embodiment of the present disclosure;
FIG. 9 shows a schematic block diagram of a text generation apparatus according to an example embodiment of the present disclosure;
FIG. 10 shows a schematic block diagram of a text generation apparatus according to an example embodiment of the present disclosure;
FIG. 11 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The disclosed embodiments provide a text generation method, which may be performed by a terminal, a server, and/or another device with processing capability. The method may be implemented by any one of these devices alone or by several together; for example, the terminal acquires a target text and transmits it to the server, the server generates at least one wrong-option text from the target text and returns it to the terminal, and the terminal then determines the corresponding assessment question from the received wrong-option texts. The present disclosure does not limit this.
Taking a terminal as the executing device, the text generation method is described below with reference to the flowchart shown in FIG. 1.
Step 101, a terminal acquires a target text.
The target text at least comprises target prosody information and a question stem text.
The prosody information may be final (rhyme) vocabulary, such as the simple finals a, o, e, i, u, v (for ease of encoding, ü may be written as v) and the compound finals ai, ei, ui, ao, ou, iu, ie, ve, er, an, en, in, un, vn, ang, eng, ing, ong. And/or the prosody information may be tonal-pattern information, i.e., level (ping) and oblique (ze) tones.
The question stem text may be a poem text in which the correct-option line is masked, e.g., "Before my bed the bright moonlight, [MASK][MASK][MASK][MASK][MASK]." The line corresponding to the [MASK] tokens is the masked text, which is also the text to be generated by the text generation method provided in the embodiments of the present disclosure.
In one possible embodiment, the target text to be processed may be determined before the wrong-option texts are generated. To assess how well students have mastered classical poems, the question stem texts may be drawn from all the poems that students are required to master in and out of class.
The terminal can obtain the text of each poem and, for each poem, mask each line in turn to obtain multiple question stem texts; for example, a four-line poem yields four question stem texts. The terminal can also obtain the prosody information of each poem; when the prosody information is final vocabulary, it may consist of the final of the last character of each line. For example, the prosody information of "Quiet Night Thoughts" is "angangveang". The prosody information of each poem represents the corresponding prosody rule.
For each question stem text, the terminal can combine the question stem text with its prosody information to obtain a target text. After processing every question stem text, the terminal obtains and stores multiple target texts, and can then generate the corresponding wrong-option texts based on the text generation method provided in the embodiments of the present disclosure.
When a wrong-option generation task is triggered, the terminal acquires the current target text to be processed; the prosody information in that target text is called the target prosody information. Illustratively, the target text to be processed may be "angangveang\nBefore my bed the bright moonlight, [MASK][MASK][MASK][MASK][MASK]. I raise my head and gaze at the bright moon; I lower my head and think of home.", where \n is a line-feed character that separates the prosody information from the question stem text.
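As an illustration of this data preparation, the sketch below (a minimal, assumed implementation; the patent gives no code, and the helper names are hypothetical) masks each line of a poem in turn and prefixes the finals of the line-ending characters:

```python
def build_target_texts(lines, finals):
    """lines: the poem's lines; finals: the final of each line's last
    character, e.g. ["ang", "ang", "ve", "ang"] for Quiet Night Thoughts."""
    prosody = "".join(finals)
    targets = []
    for i in range(len(lines)):
        # mask line i character by character, keep the other lines intact
        masked = [("[MASK]" * len(l)) if j == i else l for j, l in enumerate(lines)]
        stem = "。".join(masked) + "。"
        targets.append(prosody + "\n" + stem)  # "\n" separates prosody and stem
    return targets

# one target text per line of the poem; masking line 2 gives the FIG. 6 input
targets = build_target_texts(
    ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡"],
    ["ang", "ang", "ve", "ang"],
)
```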
Optionally, the target text further includes associated text of the stem text.
The associated text may be one or more of: the work's title, author, alternative title, creation period, source, literary genre, and a brief introduction. The associated text may also be any other text associated with the question stem text that can describe it, which this embodiment does not limit.
A knowledge base is a database constructed from entity-attribute pairs; the associated text of a question stem text can be stored in a knowledge base.
In a possible implementation, the terminal may obtain the associated text of a question stem text from the knowledge base. When determining the target text, it can combine the question stem text, the corresponding prosody information, and the associated text into one target text.
For example, the knowledge-base information for "Quiet Night Thoughts" is shown in the schematic diagram of FIG. 2. The terminal may add that information to the target text, which may then read: "angangveang\nQuiet Night Thoughts (a poem by Li Bai)\nIntroduction: Quiet Night Thoughts is a five-character ancient poem by the Tang-dynasty poet Li Bai. It depicts the poet's feelings as he looks up at the moon indoors on an autumn night; through imagery and contrast it expresses his longing for home while staying away, in language fresh and plain, with a lingering charm, and it has been recited through the ages.\nWork title: Quiet Night Thoughts\nAlternative title: Night Thoughts\nAuthor: Li Bai\nCreation period: High Tang\nSource: The Collected Works of Li Taibai\nLiterary genre: five-character ancient poem\nQuiet Night Thoughts\nLi Bai\nBefore my bed the bright moonlight, [MASK][MASK][MASK][MASK][MASK]. I raise my head and gaze at the bright moon; I lower my head and think of home."
In the embodiments of the present disclosure, the order of the prosody information, the question stem text, and the associated text within the target text is not limited.
When the target text includes associated text, the rhyming text generation model can draw on more information while generating, which makes the wrong-option texts more similar to the correct-option text and improves their quality.
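Extending the earlier sketch, the target text with context might be assembled as below (a minimal, assumed illustration; the field names are hypothetical stand-ins for the entity-attribute pairs of FIG. 2):

```python
KB = {  # hypothetical knowledge-base record for "Quiet Night Thoughts"
    "Work title": "静夜思",
    "Author": "李白",
    "Creation period": "High Tang",
    "Literary genre": "five-character ancient poem",
}

def build_target_with_context(prosody, stem, kb):
    # the disclosure does not fix the order of prosody, context, and stem
    context = "\n".join(f"{key}: {value}" for key, value in kb.items())
    return prosody + "\n" + context + "\n" + stem
```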
Step 102, the terminal calls the rhyming text generation model constructed based on prosody information and processes the target text to obtain at least one wrong-option text.
Each wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text.
In a possible implementation, when constructing the rhyming text generation model, the terminal may add prosody information to the model so that the model can represent prosody information. The terminal can then feed the target text to be processed into the model, process it with the rhyming text generation model, predict the masked text in the question stem text, and take the predicted texts as at least one wrong-option text.
Optionally, the rhyming text generation model includes a vocabulary.
The vocabulary stores multiple entries; the rhyming text generation model represents its input and output texts through the vocabulary, i.e., the entries in the vocabulary are the tokens the model can understand and express.
Accordingly, the processing of step 102 may be as follows: the terminal determines the input encoding of the target text from the vocabulary and the target text, and processes the input encoding with the rhyming text generation model to determine at least one wrong-option text.
In a possible implementation, after acquiring the target text to be processed, the terminal may tokenize it according to the entries stored in the vocabulary, and then encode each resulting token according to the vocabulary to obtain the corresponding input encoding.
Optionally, the vocabulary includes prosody information; adding prosody information to the rhyming text generation model means adding prosody information to the vocabulary. As shown in the flowchart of FIG. 3, the terminal may determine the input encoding of the rhyming text generation model as follows:
In step 301, the terminal determines a first vocabulary vector of the target prosody information from the prosody entries in the vocabulary.
In one possible implementation, the terminal may tokenize the target prosody information according to the prosody entries in the vocabulary, look up the vector representation of each token in the vocabulary, and use those vector representations to represent the target prosody information, obtaining its first vocabulary vector.
In step 302, the terminal determines a second vocabulary vector of the question stem text from the vocabulary entries other than the prosody entries.
In a possible implementation, the terminal may tokenize the question stem text according to the vocabulary entries other than the prosody entries, look up the vector representation of each token in the vocabulary, and use those vector representations to represent the question stem text, obtaining its second vocabulary vector.
In step 303, the terminal determines a first position vector of the target prosody information from the first position information of the target prosody information in the target text.
In a possible implementation, after tokenizing the target prosody information as above, the terminal may determine the position of each token in the target text and represent those positions as vectors, obtaining the first position vector of the target prosody information.
In step 304, the terminal determines a second position vector of the question stem text from the second position information of the question stem text in the target text.
In a possible implementation, after tokenizing the question stem text as above, the terminal may determine the position of each token in the target text and represent those positions as vectors, obtaining the second position vector of the question stem text.
In step 305, the terminal determines the input encoding of the target text from the first vocabulary vector, the second vocabulary vector, the first position vector, and the second position vector.
In a possible implementation, the terminal may combine the first vocabulary vector, the second vocabulary vector, the first position vector, and the second position vector determined above into an input vector matrix, i.e., the input encoding of the target text. The input encoding represents each token in the target text together with its position.
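A minimal sketch of this encoding step is given below, under the common assumption (not stated in the patent) that the vocabulary vectors and position vectors come from learned embedding tables and are summed per token; the toy vocabulary and sizes are illustrative:

```python
import torch
import torch.nn as nn

vocab = {"ang": 0, "ve": 1, "床": 2, "前": 3, "[MASK]": 4, "\n": 5}  # toy vocabulary
d_model, max_len = 768, 512  # assumed sizes

tok_emb = nn.Embedding(len(vocab), d_model)   # vocabulary-vector table
pos_emb = nn.Embedding(max_len, d_model)      # position-vector table

tokens = ["ang", "ang", "ve", "ang", "\n", "床", "前", "[MASK]"]
ids = torch.tensor([[vocab[t] for t in tokens]])        # (1, seq_len)
positions = torch.arange(ids.size(1)).unsqueeze(0)      # (1, seq_len)
input_encoding = tok_emb(ids) + pos_emb(positions)      # (1, seq_len, d_model)
```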
After determining the input encoding of the target text, the terminal can process the input encoding with the rhyming text generation model. For each masked character, the model outputs a probability matrix that gives the probability of selecting each entry in the vocabulary. For a masked character, a first number of tokens with the highest probabilities can be taken from its probability matrix, and the terminal can then sample a second number of tokens from them according to their probabilities, i.e., the higher a token's probability, the greater its chance of being selected. Each predicted token is fed back as model input to predict the next masked character, until all masked characters have been predicted. The terminal can then assemble the predicted tokens into texts, obtaining at least one wrong-option text.
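The loop below sketches this sampling scheme (an assumed implementation: the patent only says that the highest-probability tokens are kept and one is drawn by probability; `model` stands for any causal language model that maps token ids to next-token logits):

```python
import torch

def sample_continuation(model, ids, top_k=10, eos_id=2, max_len=20):
    """Keep the top_k most probable next tokens, sample one by probability,
    feed it back, and stop when [EOS] (eos_id) is generated."""
    for _ in range(max_len):
        logits = model(ids)[:, -1, :]              # logits for the next token
        probs = torch.softmax(logits, dim=-1)
        topk_probs, topk_ids = probs.topk(top_k, dim=-1)
        choice = torch.multinomial(topk_probs, 1)  # sample according to probability
        next_id = topk_ids.gather(-1, choice)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == eos_id:
            break
    return ids
```

Running the loop several times with different random draws yields several candidate wrong-option texts.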
For the case where the target text includes associated text, the terminal may, by the same method as above, determine a third vocabulary vector of the associated text from the vocabulary entries other than the prosody entries, and determine a third position vector of the associated text from the third position information of the associated text in the target text. The terminal then determines the input encoding of the target text from the first, second, and third vocabulary vectors together with the first, second, and third position vectors. The rest of the processing is the same as above and is not repeated here.
For example, the rhyming text generation model may be a GPT (Generative Pre-Training) model. As shown in the model structure diagram of FIG. 4, the GPT model may consist of 12 Transformer decoder blocks. As shown in the model structure diagram of FIG. 5, each decoder block consists of two sub-layers: the first consists of a masked multi-head attention module and a first residual-connection-and-normalization module, and the second consists of a fully connected layer and a second residual-connection-and-normalization module.
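One such decoder block could look as follows (a sketch of the structure FIG. 5 describes, with illustrative, assumed sizes; it is not the patent's exact implementation):

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                nn.Linear(d_ff, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        n = x.size(1)
        # causal mask: each position attends only to itself and earlier positions
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + attn_out)      # first residual connection + normalization
        x = self.norm2(x + self.ff(x))    # second residual connection + normalization
        return x
```

Stacking 12 such blocks gives the FIG. 4 structure.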
As shown in the model-processing diagram of FIG. 6, the model input is "angangveang\nQuiet Night Thoughts\nLi Bai\nBefore my bed the bright moonlight, [MASK][MASK][MASK][MASK][MASK]. I raise my head and gaze at the bright moon; I lower my head and think of home." (elided parts are shown as ellipses for clarity). The final [SOS] token marks the start of generation; on receiving it, the model begins generating the first character for the first [MASK]. The model outputs the probability of each candidate first character (for example, the probability of "疑" ("seems") may be 0.98), and a character is sampled according to these probabilities. If "疑" is sampled, it is fed to the GPT model as the next input, the model outputs the probabilities of the next character, and sampling yields the second character; the process continues until the model outputs [EOS], indicating that generation is finished, producing for example "seems like frost in the night" and "seems like the middle of the water".
The rhyming text generation model may also be any other model capable of implementing the text generation method provided in this embodiment; this embodiment does not limit the specific model structure.
Step 103, the terminal determines at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text.
In a possible implementation, after determining the at least one wrong-option text above, the terminal may combine the wrong-option texts and the correct-option text into the option content of an assessment question and use the question stem text as the stem content, constructing the corresponding question. An example is shown in Table 1 below:
Table 1: Sample question
Stem:    Before my bed the bright moonlight, (____). I raise my head and gaze at the bright moon; I lower my head and think of home.
Options: A. Seems like frost in the night   B. Seems like the middle of the water   C. Seems like frost on the ground
Answer:  C. Seems like frost on the ground
Optionally, to ensure the quality of the assessment questions, the terminal may score the generated wrong-option texts. The corresponding processing may be as follows: the terminal determines a composite score for each wrong-option text, and then determines at least one assessment question from the wrong-option texts whose composite scores satisfy the assessment condition, the question stem text, and the correct-option text of the question stem text.
The indicators of the composite score include importance, rhyming degree, and confusability.
In a possible implementation, after determining the at least one wrong-option text, the terminal may evaluate the importance, rhyming degree, and confusability of each wrong-option text to obtain its composite score. The higher the composite score, the higher the quality of the wrong-option text and the better it serves the assessment purpose of the question.
A method of determining the composite score of a wrong-option text is described below.
Optionally, before determining the composite scores, the terminal may first filter out repetitive wrong-option texts. The corresponding processing may be as follows: determine at least one text element of each wrong-option text; determine at least one text element of the question stem text; and if a wrong-option text contains a text element identical to one of the question stem text, delete that wrong-option text.
In one possible implementation, for each wrong-option text, the terminal may split the text into elements of two adjacent characters, so that, for example, a five-character line yields four such pairs; the text elements of the question stem text are determined in the same way.
The terminal may then compare the text elements of a wrong-option text with those of the line before or after the blank, and delete the wrong-option text if it contains an identical text element.
In general, two adjacent lines of a classical poem do not use the same words. This filtering therefore makes the resulting wrong-option texts better match the conventions of classical poetry and improves their quality.
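A minimal sketch of this filter (assumed; the helper names are illustrative) treats the text elements as adjacent character pairs and drops any candidate sharing one with the stem:

```python
def bigrams(text):
    """Adjacent two-character text elements; a five-character line yields
    four pairs."""
    return {text[i:i + 2] for i in range(len(text) - 1)}

def filter_candidates(candidates, stem):
    """Drop wrong-option candidates that repeat a text element of the stem."""
    stem_elements = bigrams(stem)
    return [c for c in candidates if not (bigrams(c) & stem_elements)]
```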
Optionally, as shown in the flowchart of FIG. 7, the terminal may determine the composite score of a wrong-option text as follows:
in step 701, the terminal determines a first score of the correct text.
Wherein the first score is used to represent the importance of the correct term text.
In one possible embodiment, the terminal may determine the first score according to the importance of the correct text. Namely, the importance degree of the corresponding investigation subject is determined, and the investigation importance of the subject is reflected. The higher the importance degree is, the more students are required to master the investigation subject, and the more important the generation of the corresponding error item text is.
Alternatively, the process of determining the first score of the correct term text may be as follows: acquiring a first number of correct texts in a corpus; a first score for the correct term text is determined based on the first number.
In a possible implementation manner, the terminal may count the occurrence times of the verses corresponding to the correct text in the public corpus. Then, a first score is determined by the following formula:
score_importance=log(1+appearance_cnt) (1)
wherein the content of the first and second substances,score_importancein order to obtain the first score mentioned above,appearance_cntthe number of occurrences of the verse.
The more times of appearance of the poetry, the more famous the poetry is. For such poetry, whether the students master or not needs to be intensively examined.
In step 702, the terminal determines a second score of the wrong-option text.
The second score represents the degree to which the wrong-option text rhymes within the question stem text.
In one possible implementation, the terminal can determine the prosody information of the wrong-option text, determine whether it rhymes within the question stem text, and determine the second score from the rhyming result.
Optionally, the second score of the wrong-option text may be determined as follows: acquire the target position information of the correct-option text in the question stem text; acquire the first prosody information of the correct-option text; acquire the second prosody information of the wrong-option text; and determine the second score of the wrong-option text from the target position information, the first prosody information, and the second prosody information.
In a possible implementation, the terminal may determine the target position information from the position of the correct-option text in the question stem text, i.e., which line of the stem the correct-option text is. The terminal may then acquire the prosody information of the correct-option text and the wrong-option text and determine whether the two are the same. For example, when the prosody information is final information, the terminal may obtain the finals of the last characters of the correct-option text and the wrong-option text and compare them.
Because even-numbered lines are subject to stricter rhyming requirements than odd-numbered lines, rhyming even-numbered lines can be given a higher score. The rhyme score of a wrong-option text in an even-numbered line is a first sub-score, and the rhyme score of a wrong-option text in an odd-numbered line is a second sub-score. When the terminal determines that the prosody information of the correct-option text and the wrong-option text is the same, i.e., the wrong-option text rhymes, the first sub-score is greater than the second sub-score; when the prosody information differs, i.e., the wrong-option text does not rhyme, both sub-scores are 0.
The terminal may then determine the second score of the wrong-option text from the target position information and the rhyme score, using the following formula:
score_final = (I(sent_idx % 2 == 1) * 3 + 1) * score_final_base    (2)
where score_final is the second score; sent_idx is the index of the line indicated by the target position information, an integer from 0 to N - 1, where N is the number of lines (for the second line, sent_idx = 1, so sent_idx % 2 == 1 is true); I(x) is the indicator function, equal to 1 when x is true and 0 when x is false; and score_final_base is the rhyme score, which is 1 if the wrong-option text is an even-numbered line and rhymes, 0.25 if it is an odd-numbered line and rhymes, and 0 if it does not rhyme.
In step 703, the terminal determines a third score of the wrong-option text.
The third score represents the confusability between the wrong-option text and the correct-option text.
In one possible implementation, the terminal may score the confusability from the number of characters the wrong-option text shares with the correct-option text and thereby determine the third score. The more shared characters, the more easily the wrong-option text is confused with the correct-option text, and the better the question tests mastery of the line.
Optionally, the third score of the wrong-option text may be determined as follows: determine a second number, the number of characters of the wrong-option text that are identical to those of the correct-option text at the same positions; and determine the third score from the second number and the total number of characters of the correct-option text.
In a possible implementation, for each position the terminal can take the characters of the wrong-option text and the correct-option text at that position, determine whether they are equal, and count the equal ones, obtaining the number of positions at which the two texts share the same character. The terminal may then take the ratio of this number to the total number of characters of the correct-option text as the third score, using the formula shown below:
score_confusion = same_cnt / sum_cnt    (3)
where score_confusion is the third score, same_cnt is the number of characters of the wrong-option text identical to those of the correct-option text at the same positions, and sum_cnt is the total number of characters of the correct-option text.
In step 704, the terminal determines the composite score of the wrong-option text from the first score, the second score, and the third score.
In a possible embodiment, after determining the first, second, and third scores, the terminal may add the three to obtain the composite score of the wrong-option text:
score = score_importance + score_final + score_confusion    (4)
where score is the composite score of the wrong-option text.
The above formula is only an example of this embodiment; the terminal may also take a weighted sum or weighted average of the first, second, and third scores, since the resulting composite score can still evaluate whether a wrong-option text satisfies the assessment condition. This embodiment therefore does not limit the specific formula for the composite score.
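The sketch below pulls formulas (1) to (4) together (a minimal, assumed implementation with illustrative argument names):

```python
import math

def composite_score(wrong, correct, sent_idx, appearance_cnt, rhymes):
    """wrong/correct: option texts; sent_idx: 0-based line index of the blank;
    appearance_cnt: corpus count of the correct line; rhymes: whether the
    wrong option's final matches the rhyme of the stem."""
    score_importance = math.log(1 + appearance_cnt)              # formula (1)
    base = (1.0 if sent_idx % 2 == 1 else 0.25) if rhymes else 0.0
    score_final = ((sent_idx % 2 == 1) * 3 + 1) * base           # formula (2)
    same_cnt = sum(a == b for a, b in zip(wrong, correct))
    score_confusion = same_cnt / len(correct)                    # formula (3)
    return score_importance + score_final + score_confusion     # formula (4)
```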
After determining the composite score of each wrong-option text, the terminal can sort the texts by composite score to obtain the wrong-option texts that satisfy the assessment condition. The assessment condition may be that the composite score exceeds a threshold, i.e., the wrong-option texts whose composite scores exceed the threshold are used as the wrong-option texts of the assessment question to be constructed.
Alternatively, satisfying the assessment condition may mean passing manual review: a teacher may review the sorted wrong-option texts and mark those that pass. The terminal then uses the wrong-option texts that pass manual review as the wrong-option texts of the assessment question to be constructed.
The terminal can then combine the qualifying wrong-option texts and the correct-option text into the option content of an assessment question, use the question stem text as the stem content to construct the corresponding question, and add the constructed question to a question database for users to practice on.
In the embodiments of the present disclosure, because the rhyming text generation model is constructed based on prosody information, when the terminal calls the model to generate wrong-option texts, it can feed the target prosody information and the question stem text into the model and obtain the wrong-option texts as output. Since the model input includes the target prosody information, the model can exploit that information during processing, so each output wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text and thus meets the prosody requirement. The terminal can then determine at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text, which improves the efficiency of question setting.
The rhyming text generation model used in the above embodiments may be a machine learning model, which can be trained before it is used for the above processing. A training method for the rhyming text generation model is described below with reference to the flowchart shown in FIG. 8:
step 801, the terminal acquires prosodic information.
In one possible embodiment, the prosodic information may be set by a technician in advance and divided into different words according to prosodic rules. For example, the complex vowel may be set as a vocabulary so that the rhyme text generation model can represent the complex vowel. If the complex vowel is not set as a vocabulary, the complex vowel may be divided into a plurality of vocabularies in the processing process, which affects the processing of the rhyme text generation model on the prosodic information.
When constructing the rhyme text generation model, the terminal can acquire the preset prosody information.
Step 802, the terminal adds the prosody information to the vocabulary of an initial rhyming text generation model to construct the initial rhyming text generation model.
In a possible implementation, the terminal may add the acquired prosody information to the vocabulary and set each parameter of the rhyming text generation model to an initial value, constructing the initial rhyming text generation model.
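With an off-the-shelf tokenizer, this step might look as follows (a sketch assuming the Hugging Face transformers API and a placeholder base model; the patent names neither):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

FINALS = ["a", "o", "e", "i", "u", "v", "ai", "ei", "ui", "ao", "ou", "iu",
          "ie", "ve", "er", "an", "en", "in", "un", "vn",
          "ang", "eng", "ing", "ong"]

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder base model
tokenizer.add_tokens(FINALS)                        # prosody entries in the vocabulary
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))       # new rows start at initial values
```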
In step 803, the terminal obtains training samples.
Each training sample comprises training prosody information, a training question stem text, and a training correct-option text.
In one possible embodiment, the training samples may come from a public corpus of classical poems that stores many poems, far more than all the poems primary and secondary students need to master in and out of class.
Before training the model, the terminal may prepare the data: obtain multiple poems from the corpus and determine each model input used during training, i.e., the training question stem texts and the corresponding training prosody information, by the same method as step 101 above, which this embodiment does not repeat. The model input for one pass is called a training target text; one training target text comprises the corresponding training prosody information and training question stem text.
In addition, the terminal can use the masked text as the training correct-option text, i.e., the training label used during model training.
After data preparation is complete, the terminal obtains multiple training samples and can fetch the corresponding samples whenever a training task is triggered.
Optionally, each training sample further comprises a training associated text of the training question stem text.
In a possible implementation, the terminal may also add the corresponding training associated text to the training target text; the specific processing is described in the embodiment above and is not repeated here.
Step 804, the terminal trains the initial rhyming text generation model on the training samples to obtain the trained rhyming text generation model.
In a possible implementation, for a training target text, the terminal may process it with the initial rhyming text generation model and output at least one wrong-option text; the specific processing is the same as step 102 and is not repeated here.
The terminal can then compute an adjustment via a cross-entropy loss function from each output wrong-option text and the corresponding training correct-option text, and adjust the model parameters of the initial rhyming text generation model accordingly. After many training iterations, the wrong-option texts the model outputs come closer to the corresponding training correct-option texts.
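One training step could be sketched as below (an assumed teacher-forced implementation of the cross-entropy objective; the masking convention is illustrative):

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, input_ids, label_ids):
    """input_ids: encoded training target text followed by the correct line;
    label_ids: the same sequence shifted left, with non-target positions set
    to -100 so that only the masked line contributes to the loss."""
    logits = model(input_ids)                      # (batch, seq_len, vocab_size)
    loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        label_ids.view(-1),
        ignore_index=-100,                         # skip non-label positions
    )
    optimizer.zero_grad()
    loss.backward()                                # derive the parameter adjustment
    optimizer.step()
    return loss.item()
```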
When the training end condition is met, the terminal can take the current rhyming text generation model as the trained rhyming text generation model.
The training end condition may be that the number of training iterations reaches a first threshold, and/or the model accuracy reaches a second threshold, and/or the loss function falls below a third threshold. The first, second, and third thresholds may be set empirically; the embodiments of the present disclosure do not limit the specific training end condition.
In the embodiments of the present disclosure, prosody information is added to the vocabulary when the rhyming text generation model is constructed, so the model can represent the corresponding prosody information. During prediction, because the rhyming text generation model is constructed based on prosody information, when the terminal calls the model to generate wrong-option texts, it can feed the target prosody information and the question stem text into the model and obtain the wrong-option texts as output. Since the model input includes the target prosody information, the model can exploit that information during processing, so each output wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text and thus meets the prosody requirement. The terminal can then determine at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text, which improves the efficiency of question setting.
The embodiments of the present disclosure provide a text generation apparatus for implementing the above text generation method. As shown in the schematic block diagram of FIG. 9, the apparatus comprises:
an acquisition module 901, configured to acquire a target text, where the target text at least comprises target prosody information and a question stem text;
a processing module 902, configured to call a rhyming text generation model constructed based on prosody information and process the target text to obtain at least one wrong-option text, where each wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text; and
a determining module 903, configured to determine at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text.
Optionally, the rhyming text generation model includes a vocabulary;
the processing module 902 is configured to:
determine the input encoding of the target text from the vocabulary and the target text; and
process the input encoding with the rhyming text generation model constructed based on prosody information to determine at least one wrong-option text.
Optionally, the vocabulary includes prosody information, and the processing module 902 is configured to:
determine a first vocabulary vector of the target prosody information from the prosody entries in the vocabulary;
determine a second vocabulary vector of the question stem text from the vocabulary entries other than the prosody entries;
determine a first position vector of the target prosody information from the first position information of the target prosody information in the target text;
determine a second position vector of the question stem text from the second position information of the question stem text in the target text; and
determine the input encoding of the target text from the first vocabulary vector, the second vocabulary vector, the first position vector, and the second position vector.
Optionally, the target text further includes associated text of the question stem text.
Optionally, the determining module 903 is configured to:
determine a composite score for each wrong-option text, where the indicators of the composite score include importance, rhyming degree, and confusability; and
determine at least one assessment question from the wrong-option texts whose composite scores satisfy the assessment condition, the question stem text, and the correct-option text of the question stem text.
Optionally, the determining module 903 is configured to:
determine a first score of the correct-option text, where the first score represents the importance of the correct-option text;
determine a second score of the wrong-option text, where the second score represents the degree to which the wrong-option text rhymes within the question stem text;
determine a third score of the wrong-option text, where the third score represents the confusability between the wrong-option text and the correct-option text; and
determine the composite score of the wrong-option text from the first score, the second score, and the third score.
Optionally, the determining module 903 is configured to:
acquire a first number, the number of occurrences of the correct-option text in a corpus; and
determine the first score of the correct-option text from the first number.
Optionally, the determining module 903 is configured to:
acquire the target position information of the correct-option text in the question stem text;
acquire the first prosody information of the correct-option text;
acquire the second prosody information of the wrong-option text; and
determine the second score of the wrong-option text from the target position information, the first prosody information, and the second prosody information.
Optionally, the determining module 903 is configured to:
determine a second number, the number of characters of the wrong-option text that are identical to those of the correct-option text at the same positions; and
determine the third score of the wrong-option text from the second number and the total number of characters of the correct-option text.
Optionally, the determining module 903 is further configured to:
determine at least one text element of each wrong-option text;
determine at least one text element of the question stem text; and
if a wrong-option text contains a text element identical to one of the question stem text, delete that wrong-option text.
Optionally, as shown in the schematic block diagram of the text generation apparatus in FIG. 10, the apparatus further comprises a training module 904 configured to:
acquire prosody information;
add the prosody information to the vocabulary of an initial rhyming text generation model to construct the initial rhyming text generation model;
acquire training samples, where each training sample comprises training prosody information, a training question stem text, and a training correct-option text; and
train the initial rhyming text generation model on the training samples to obtain the trained rhyming text generation model.
Optionally, each training sample further comprises a training associated text of the training question stem text.
In the embodiments of the present disclosure, because the rhyming text generation model is constructed based on prosody information, when the terminal calls the model to generate wrong-option texts, it can feed the target prosody information and the question stem text into the model and obtain the wrong-option texts as output. Since the model input includes the target prosody information, the model can exploit that information during processing, so each output wrong-option text conforms to the prosody rule that the target prosody information specifies for the question stem text and thus meets the prosody requirement. The terminal can then determine at least one assessment question from the question stem text, the at least one wrong-option text, and the correct-option text of the question stem text, which improves the efficiency of question setting.
An exemplary embodiment of the present disclosure also provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor; when executed by the at least one processor, the computer program causes the electronic device to perform a method according to an embodiment of the present disclosure.
The disclosed exemplary embodiments also provide a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
The exemplary embodiments of the present disclosure also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
Referring to FIG. 11, a block diagram of an electronic device 1100 is now described; the device may be a server or a client of the present disclosure and is an example of a hardware device to which aspects of the present disclosure can be applied. The term electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, and mainframes, as well as other suitable computers. It may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 11, the electronic device 1100 includes a computing unit 1101, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a random-access memory (RAM) 1103. The RAM 1103 can also store the various programs and data needed for the operation of the device 1100. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to one another by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in the electronic device 1100 are connected to the I/O interface 1105, including an input unit 1106, an output unit 1107, a storage unit 1108, and a communication unit 1109. The input unit 1106 may be any type of device capable of inputting information to the electronic device 1100; it may receive input numeric or character information and generate key-signal inputs related to user settings and/or function controls of the electronic device. The output unit 1107 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 1108 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1101 can be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 1101 performs the respective methods and processes described above. For example, in some embodiments, the text generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. In some embodiments, the computing unit 1101 may be configured to perform the text generation method by any other suitable means (e.g., by means of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Claims (14)

1. A method of text generation, the method comprising:
acquiring a target text, wherein the target text at least comprises target prosody information and a question stem text;
calling a rhyme text generation model constructed based on prosodic information, and processing the target text to obtain at least one error item text, wherein each error item text conforms to prosodic rules corresponding to the target prosodic information in the question stem text;
determining at least one investigation question according to the question stem text, the at least one error item text and the correct item text of the question stem text;
the rhyme text generation model comprises a vocabulary, and the vocabulary comprises the prosodic information;
the calling of the rhyme text generation model constructed based on prosodic information and the processing of the target text to obtain at least one error item text comprise:
determining an input code of the target text according to the vocabulary and the target text, wherein the input code comprises vocabulary vectors and position vectors, the vocabulary vectors at least comprise vocabulary vectors of the target prosody information and of the question stem text, and the position vectors at least comprise position vectors of the target prosody information and of the question stem text;
and processing the input code through the rhyme text generation model constructed based on the prosodic information to determine the at least one error item text.
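
(Illustrative sketch, not part of the claim language.) The flow of claim 1 can be pictured in Python as follows; the model object, its generate method, and the grouping of three distractors per question are assumptions made for this example only.

    from typing import Dict, List

    def build_investigation_questions(prosody_info: str, stem_text: str,
                                      correct_item: str, model) -> List[Dict]:
        # Target text = target prosody information + question stem text.
        # The prosody-aware model returns candidate error item texts, each of
        # which keeps the prosodic rule indicated by prosody_info in the stem.
        error_items = model.generate(prosody_info, stem_text)
        questions = []
        for i in range(0, len(error_items), 3):  # assume 3 distractors per question
            distractors = error_items[i:i + 3]
            questions.append({"stem": stem_text,
                              "options": distractors + [correct_item],
                              "answer": correct_item})
        return questions
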
2. The text generation method of claim 1, wherein the determining an input code of the target text according to the vocabulary and the target text comprises:
determining a first vocabulary vector of the target prosodic information according to the prosodic information in the vocabulary;
determining a second vocabulary vector of the question stem text according to vocabulary information other than the prosodic information in the vocabulary;
determining a first position vector of the target prosody information according to first position information of the target prosody information in the target text;
determining a second position vector of the question stem text according to second position information of the question stem text in the target text;
and determining the input code of the target text according to the first vocabulary vector, the second vocabulary vector, the first position vector and the second position vector.
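
(Illustrative sketch, not part of the claim language.) Claim 2 can be read as two embedding lookups over one shared vocabulary, summed into the input code; the prosody-tag token "<rhyme:ang>", the table sizes, and the additive combination are assumptions for this example.

    import numpy as np

    def encode_input(vocab, token_table, position_table, prosody_tokens, stem_tokens):
        # Prosody tags are ordinary vocabulary entries, so the target prosody
        # information and the question stem text share one lookup table.
        tokens = prosody_tokens + stem_tokens
        ids = [vocab[t] for t in tokens]
        vocab_vectors = token_table[ids]                         # first + second vocabulary vectors
        position_vectors = position_table[np.arange(len(ids))]  # first + second position vectors
        return vocab_vectors + position_vectors                 # input code of the target text

    # Toy example: one prosody entry followed by a five-character stem.
    vocab = {"<rhyme:ang>": 0, "床": 1, "前": 2, "明": 3, "月": 4, "光": 5}
    token_table = np.random.rand(len(vocab), 16)
    position_table = np.random.rand(32, 16)
    code = encode_input(vocab, token_table, position_table,
                        ["<rhyme:ang>"], list("床前明月光"))
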
3. The text generation method of claim 1, wherein the target text further comprises associated text of the question stem text.
4. The text generation method of claim 1, wherein the determining at least one investigation question according to the question stem text, the at least one error item text and the correct item text of the question stem text comprises:
determining a comprehensive score of any error item text, wherein indicators of the comprehensive score comprise an importance degree, a rhyme retention degree and a confusion degree;
and determining the at least one investigation question according to the question stem text, the correct item text of the question stem text, and at least one error item text whose comprehensive score meets an investigation condition.
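
(Illustrative sketch, not part of the claim language.) The selection step of claim 4 reduces to filtering candidates by their comprehensive score; the 0.5 threshold and the score_fn callback are placeholders for whatever investigation condition and scoring an implementation adopts.

    def select_distractors(error_items, score_fn, threshold=0.5):
        # Keep only error item texts whose comprehensive score satisfies the
        # investigation condition, modelled here as a minimum threshold.
        return [e for e in error_items if score_fn(e) >= threshold]
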
5. The text generation method of claim 4, wherein the determining a comprehensive score of any error item text comprises:
determining a first score of the correct item text, wherein the first score is used for representing the importance degree of the correct item text;
determining a second score of any error item text, wherein the second score is used for representing the rhyme retention degree of any error item text in the question stem text;
determining a third score of any error item text, wherein the third score is used for representing the confusion degree of any error item text and the correct item text;
and determining a comprehensive score of any error item text according to the first score, the second score and the third score.
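
(Illustrative sketch, not part of the claim language.) Claim 5 leaves the aggregation open; a weighted sum is one plausible reading, shown below with illustrative equal weights.

    def comprehensive_score(first_score, second_score, third_score,
                            weights=(1.0, 1.0, 1.0)):
        # first_score:  importance degree of the correct item text
        # second_score: rhyme retention degree of the error item text
        # third_score:  confusion degree between the two texts
        w1, w2, w3 = weights
        return w1 * first_score + w2 * second_score + w3 * third_score
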
6. The text generation method of claim 5, wherein the determining the first score of the correct item text comprises:
acquiring a first number of occurrences of the correct item text in a corpus;
and determining the first score of the correct item text according to the first number.
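
(Illustrative sketch, not part of the claim language.) One way to realize claim 6 is relative corpus frequency; the normalization is an assumption, since the claim only requires that the first score be determined from the count.

    from collections import Counter

    def first_score(correct_item, corpus_items):
        # First number = occurrences of the correct item text in the corpus;
        # importance is approximated here by its relative frequency.
        counts = Counter(corpus_items)
        return counts[correct_item] / max(sum(counts.values()), 1)
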
7. The text generation method of claim 5, wherein the determining the second score of any error item text comprises:
acquiring target position information of the correct item text in the question stem text;
acquiring first prosodic information of the correct item text;
acquiring second prosodic information of any error item text;
and determining a second score of any error item text according to the target position information, the first prosody information and the second prosody information.
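
(Illustrative sketch, not part of the claim language.) Claim 7 is sketched here under the assumptions that the target position either is or is not a rhyming position of the stem and that prosodic information reduces to a rhyme-class label; the binary 1.0/0.0 scoring is illustrative.

    def second_score(position_is_rhyming, correct_prosody, error_prosody):
        # When the correct item text occupies a rhyming position of the stem,
        # a distractor keeps the rhyme only if its rhyme class matches.
        if not position_is_rhyming:
            return 1.0
        return 1.0 if error_prosody == correct_prosody else 0.0
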
8. The text generation method of claim 5, wherein the determining the third score of any error item text comprises:
determining a second number of words of the error item text that are identical to words of the correct item text at the same positions;
and determining a third score of any error item text according to the second number and the total number of words of the correct item text.
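
(Illustrative sketch, not part of the claim language.) Claim 8 is concrete enough to sketch directly: count the characters that coincide position by position and normalize by the length of the correct item text.

    def third_score(error_item, correct_item):
        # Second number = identical characters at identical positions.
        same = sum(1 for a, b in zip(error_item, correct_item) if a == b)
        return same / max(len(correct_item), 1)  # confusion degree

For example, third_score("举头望明月", "低头思故乡") is 0.2, since only the second character coincides.
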
9. The text generation method of claim 4, wherein prior to the determining a comprehensive score of any error item text, the method further comprises:
determining at least one text element of each error item text;
determining at least one text element of the question stem text;
and if an error item text contains a text element identical to a text element of the question stem text, deleting that error item text.
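
(Illustrative sketch, not part of the claim language.) The pre-filter of claim 9, assuming a tokenize callback that splits a text into its text elements (for example, words produced by segmentation).

    def drop_overlapping_error_items(error_items, stem_text, tokenize):
        # Delete any error item text that shares a text element with the
        # question stem text, so no distractor repeats part of the stem.
        stem_elements = set(tokenize(stem_text))
        return [e for e in error_items
                if not (set(tokenize(e)) & stem_elements)]
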
10. The text generation method of claim 1, wherein the training method of the rhyme text generation model comprises:
acquiring prosodic information;
adding the prosodic information into a vocabulary of an initial rhyme text generation model to construct the initial rhyme text generation model;
acquiring training samples, wherein each training sample comprises training prosody information, a training stem text and a training correct item text;
and training the initial rhyme text generation model according to the training samples to obtain a trained rhyme text generation model.
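
(Illustrative sketch, not part of the claim language.) The training method of claim 10, with a hypothetical train_step callback standing in for whatever optimizer and loss the underlying generation model uses.

    def train_rhyme_model(base_vocab, prosody_tags, training_samples, train_step):
        # Step 1: add the prosodic information to the vocabulary of the
        # initial rhyme text generation model.
        vocab = dict(base_vocab)
        for tag in prosody_tags:
            vocab.setdefault(tag, len(vocab))
        # Step 2: each sample pairs (training prosody info + training stem text)
        # with the training correct item text as the generation target.
        for prosody, stem, correct in training_samples:
            train_step(vocab, inputs=[prosody] + list(stem), target=list(correct))
        return vocab
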
11. The method of claim 10, wherein each training sample further comprises training associated text of the training stem text.
12. An apparatus for generating text, the apparatus comprising:
the acquisition module is used for acquiring a target text, and the target text at least comprises target prosody information and a question stem text;
the processing module is used for calling a rhyme text generation model constructed based on prosodic information and processing the target text to obtain at least one error item text, wherein each error item text conforms to prosodic rules corresponding to the target prosodic information in the question stem text; the rhyme text generation model comprises a vocabulary, and the vocabulary comprises the prosodic information; the calling of the rhyme text generation model constructed based on prosodic information and the processing of the target text to obtain at least one error item text comprise: determining an input code of the target text according to the vocabulary and the target text, wherein the input code comprises vocabulary vectors and position vectors, the vocabulary vectors at least comprise vocabulary vectors of the target prosody information and of the question stem text, and the position vectors at least comprise position vectors of the target prosody information and of the question stem text; and processing the input code through the rhyme text generation model constructed based on the prosodic information to determine the at least one error item text;
and the determining module is used for determining at least one investigation question according to the question stem text, the at least one error item text and the correct item text of the question stem text.
13. An electronic device, comprising:
a processor; and
a memory for storing a program, wherein the program is stored in the memory,
wherein the program comprises instructions which, when executed by the processor, cause the processor to carry out the method according to any one of claims 1-11.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-11.
CN202110883337.1A 2021-08-03 2021-08-03 Text generation method and device Active CN113326696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110883337.1A CN113326696B (en) 2021-08-03 2021-08-03 Text generation method and device

Publications (2)

Publication Number Publication Date
CN113326696A CN113326696A (en) 2021-08-31
CN113326696B true CN113326696B (en) 2021-11-05

Family

ID=77426790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110883337.1A Active CN113326696B (en) 2021-08-03 2021-08-03 Text generation method and device

Country Status (1)

Country Link
CN (1) CN113326696B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658609B (en) * 2021-10-20 2022-01-04 北京世纪好未来教育科技有限公司 Method and device for determining keyword matching information, electronic equipment and medium
CN113742459B (en) * 2021-11-05 2022-03-04 北京世纪好未来教育科技有限公司 Vocabulary display method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7200558B2 (en) * 2001-03-08 2007-04-03 Matsushita Electric Industrial Co., Ltd. Prosody generating device, prosody generating method, and program
CN109670185B (en) * 2018-12-27 2023-06-23 北京百度网讯科技有限公司 Text generation method and device based on artificial intelligence
CN110516232B (en) * 2019-07-22 2021-06-22 北京师范大学 Automatic proposition method and system for Chinese evaluation
CN111400506B (en) * 2020-03-13 2022-07-08 思必驰科技股份有限公司 Ancient poetry proposition method and system
CN112183109B (en) * 2020-09-22 2021-06-22 甘肃农业大学 MASS-based poetry sentence generation information steganography method

Similar Documents

Publication Publication Date Title
CN111241237B (en) Intelligent question-answer data processing method and device based on operation and maintenance service
CN110808032B (en) Voice recognition method, device, computer equipment and storage medium
CN113326696B (en) Text generation method and device
WO2016032864A1 (en) Generating high-level questions from sentences
CN111310440A (en) Text error correction method, device and system
CN108597538B (en) Evaluation method and system of speech synthesis system
WO2020199600A1 (en) Sentiment polarity analysis method and related device
CN111310447A (en) Grammar error correction method, grammar error correction device, electronic equipment and storage medium
CN110019758B (en) Core element extraction method and device and electronic equipment
CN111930914A (en) Question generation method and device, electronic equipment and computer-readable storage medium
CN111695338A (en) Interview content refining method, device, equipment and medium based on artificial intelligence
CN113282701B (en) Composition material generation method and device, electronic equipment and readable storage medium
CN115309877A (en) Dialog generation method, dialog model training method and device
CN111798118B (en) Enterprise operation risk monitoring method and device
CN111369980A (en) Voice detection method and device, electronic equipment and storage medium
CN111079433A (en) Event extraction method and device and electronic equipment
CN116402166B (en) Training method and device of prediction model, electronic equipment and storage medium
US20230206007A1 (en) Method for mining conversation content and method for generating conversation content evaluation model
CN116910218A (en) Automatic excavation method and device for extended questions in knowledge base
CN117033722A (en) Financial fraud prevention knowledge dispersion method, device, equipment and storage medium
US20230103313A1 (en) User assistance system
CN112307754A (en) Statement acquisition method and device
CN116644765A (en) Speech translation method, speech translation device, electronic device, and storage medium
CN108920560B (en) Generation method, training method, device, computer readable medium and electronic equipment
CN114330285B (en) Corpus processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant