CN115438173A - Text processing device, method, apparatus, and computer-readable storage medium - Google Patents


Info

Publication number
CN115438173A
CN115438173A
Authority
CN
China
Prior art keywords
text
user
extracted
abstract
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110608858.6A
Other languages
Chinese (zh)
Inventor
张斯曼
郭垿宏
中村一成
李安新
陈岚
藤本拓
吉村健
Current Assignee
NTT Docomo Inc
Original Assignee
NTT Docomo Inc
Priority date
Filing date
Publication date
Application filed by NTT Docomo Inc filed Critical NTT Docomo Inc
Priority to CN202110608858.6A
Priority to JP2022089602A
Publication of CN115438173A
Legal status: Pending

Classifications

    • G06F16/345 Information retrieval of unstructured textual data; Browsing; Summarisation for human users
    • G06F16/334 Information retrieval of unstructured textual data; Querying; Query execution
    • G06F16/338 Information retrieval of unstructured textual data; Querying; Presentation of query results
    • G06F40/166 Handling natural language data; Text processing; Editing, e.g. inserting or deleting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a text processing apparatus, method, device, and computer-readable storage medium. The text processing apparatus includes: an acquisition module configured to acquire a first abstract text to be extracted; a user behavior information acquisition module configured to acquire user behavior information; and a processing module configured to process the first to-be-extracted abstract text using a first model to obtain an intermediate text, and to process the intermediate text based on the acquired user behavior information to generate a target abstract text.

Description

Text processing device, method, apparatus, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of text processing, and in particular, to a text processing apparatus, method, device, and computer-readable storage medium.
Background
Text abstract extraction refers to condensing and summarizing the content of a text into a concise abstract with clear meaning. When summarizing, the points each user cares about may differ, as may the desired language style (e.g., in terms of wording, word order, etc.). It is therefore desirable that, through convenient user interaction or user settings, a user can obtain the abstract he or she expects by modifying the abstract generated by the original system.
Disclosure of Invention
In view of the above, the present disclosure provides a text processing apparatus, a method, a device, and a computer-readable storage medium.
According to an aspect of the present disclosure, there is provided a text processing apparatus including: a first to-be-extracted abstract text acquisition module configured to acquire a first abstract text to be extracted; a user behavior information acquisition module configured to acquire user behavior information; and a processing module configured to process the first to-be-extracted abstract text using a first model to obtain an intermediate text, and to process the intermediate text based on the acquired user behavior information to generate a target abstract text.
According to an example of the disclosure, in a case that the user behavior information acquired by the user behavior information acquisition module is to delete first specific content in the intermediate text, the processing module deletes the first specific content in the intermediate text to generate a target abstract text; and in a case that the user behavior information acquired by the user behavior information acquisition module is to modify the first specific content in the intermediate text, the processing module provides the user with candidate recommended content for replacing the first specific content, and replaces the first specific content with the candidate recommended content selected by the user to generate a target abstract text.
According to an example of the present disclosure, the processing module provides the user with candidate recommended content replacing the first specific content for user selection according to the following steps: identifying a type of the first specific content; generating a plurality of candidate recommended contents from alternative recommended content sources according to the type of the first specific content; and sorting the plurality of candidate recommended contents according to a first predetermined rule so as to select the top N candidate recommended contents for the user to select, where N is a positive integer.
According to an example of the disclosure, the processing module scores the candidate recommended contents according to features of one or more of parts of speech of the candidate recommended contents, original word information coverage of the candidate recommended contents, additional information inclusion of the candidate recommended contents, context fluency, user portrait preference, user behavior, and domain types of the candidate recommended contents to obtain a weighted sum of feature scores of the candidate recommended contents, and sorts the candidate recommended contents according to the weighted sum.
According to an example of the present disclosure, the processing module obtains a weighted sum of the feature scores through a second predetermined rule or a first neural network.
According to an example of the present disclosure, the weighted sum of the feature scores includes a weighted sum of one or both of a base score of the features and an additional score based on the user behavior information and the first to-be-extracted summary text.
According to an example of the present disclosure, the base scores for the features use a uniform weight for all users.
According to an example of the present disclosure, the base scores for the features use different weights for different users.
According to an example of the present disclosure, the additional score is obtained by directly modifying the base score based on the user behavior information or by adding an additional feature obtained based on the first to-be-extracted digest text to the base score.
According to one example of the disclosure, the alternative recommended content sources include one or more of a synonym dictionary, a language model, a knowledge base, reference resolution, other candidates from a path search, and sentence ordering.
According to an example of the present disclosure, the type of the first specific content includes one or more of a part of speech, whether it is an entity, and whether it is a sentence.
According to an example of the disclosure, in a case that the user behavior information acquired by the user behavior information acquiring module is to add second specific content in the first abstract text to be extracted into the intermediate text, the processing module directly adds the second specific content in the first abstract text to be extracted into the intermediate text to generate a target abstract text; or the processing module takes the second specific content as key content, so that the processing module processes both the first to-be-extracted abstract text and the key content by using the first model to generate a target abstract text; or the processing module adaptively adds the second specific content in the first to-be-extracted abstract text to the intermediate text according to one or both of the similarity or the information amount between the second specific content and the intermediate text and the length of the intermediate text to generate a target abstract text.
According to an example of the disclosure, in a case where the user behavior information acquired by the user behavior information acquisition module is to acquire first additional information associated with the intermediate text but different from the intermediate text to add to the intermediate text to generate a target abstract text, the processing module provides one or more second abstract texts to be extracted to a user based on the first abstract text, and in a case where the user selects a desired second abstract text to be extracted, the processing module processes the first abstract text to be extracted and the desired second abstract text to be extracted by using a first model according to a third predetermined rule to generate the target abstract text.
According to an example of the present disclosure, the processing module searches for one or more second abstract texts to be extracted that are associated with but different from the first abstract text to be extracted, based on the key information contained in the first abstract text to be extracted and its type, and deduplicates and sorts the one or more second abstract texts to be extracted so as to provide the top M of them to the user, where M is a positive integer.
According to an example of the present disclosure, the processing module ranks the one or more second abstract texts to be extracted, which are associated with but different from the first abstract text to be extracted, in one or more dimensions according to one or more of the following fourth predetermined rules: similarity to the first abstract text to be extracted, difference in coverage from the first abstract text to be extracted, time difference from the first abstract text to be extracted, and user portrait preference.
According to an example of the disclosure, the processing module puts the first additional information obtained by processing the desired second abstract text to be extracted by using a first model at a specific position of the intermediate text according to one or more of the length, the similarity and the correlation ratio of the first abstract text to be extracted and the desired second abstract text to be extracted so as to generate a target abstract text.
According to an example of the disclosure, in a case that the user behavior information acquired by the user behavior information acquisition module is to acquire information related to third specific content in the intermediate text, the processing module provides the user with the information related to the third specific content, so that the user can select the information related to the third specific content or complement the third specific content to generate a target summary text.
According to an example of the present disclosure, the processing module processes the third specific content using a fifth predetermined rule to obtain one or more candidate contents of the third specific content, and supplements the third specific content using the one or more candidate contents of the third specific content.
According to an example of the present disclosure, the processing module sorts one or more pieces of information related to the third specific content searched from a knowledge base according to one or more of the content of that information, the type of that information, the domain of the first to-be-extracted abstract text, and a weighted sum thereof, and displays the information related to the third specific content to the user.
According to an example of the disclosure, in a case that the user behavior information acquired by the user behavior information acquiring module modifies an order of a first specific sentence contained in the intermediate text, the processing module adjusts the order of the first specific sentence and a sentence related to the first specific sentence according to the user behavior information to generate a target abstract text.
According to an example of the present disclosure, the processing module constructs a structure diagram of a sentence associated with the first specific sentence and adjusts an order of the first specific sentence and the sentence associated with the first specific sentence according to the structure diagram to generate a target abstract text.
According to an example of the disclosure, the text processing apparatus further includes a user history information obtaining module, configured to obtain history information of a user, where the processing module further processes the first to-be-extracted abstract text by using a first model based on the history information of the user to generate the target abstract text.
According to an example of the present disclosure, the text processing device further includes a user preference setting module configured to form a user-specific information table according to preference options selected when a user uses the text processing device or when the user registers with the text processing device, wherein the processing module further processes the first to-be-extracted abstract text using a first model based on the user-specific information table to generate a target abstract text.
According to an example of the present disclosure, the text processing apparatus further includes a display module configured to display to a user, for selection by the user, one or more of a target abstract text obtained based on user behavior information, a target abstract text obtained based on the user's history information, and a target abstract text obtained based on user preferences.
According to an aspect of the present disclosure, there is provided a text processing method including: acquiring a first abstract text to be extracted; acquiring user behavior information; and processing the first abstract text to be extracted by using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
According to an aspect of the present disclosure, there is provided a text processing apparatus, the apparatus including: a processor; and a memory having computer-readable program instructions stored therein, wherein when the computer-readable program instructions are executed by the processor, a text processing method is performed, the method comprising: acquiring a first abstract text to be extracted; acquiring user behavior information; and processing the first to-be-extracted abstract text by using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium for storing computer-readable instructions for causing a computer to execute a text processing method, the method including: acquiring a first abstract text to be extracted; acquiring user behavior information; and processing the first to-be-extracted abstract text by using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
By the text processing device and the text processing method, the target abstract expected by the user can be obtained through interaction with the user or setting by the user.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally indicate like parts or steps.
FIG. 1 shows a schematic diagram of a text processing apparatus according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method for providing, by a processing module, candidate recommended content to a user in place of first particular content for selection by the user, according to an embodiment of the disclosure;
FIG. 3 illustrates a schematic diagram of a processing module providing a candidate recommended content replacing a first particular content to a user for selection by the user, according to an embodiment of the disclosure;
FIG. 4 illustrates a schematic diagram of the base scores obtained for various features by a processing module, according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a processing module directly adding second specific content in the first to-be-extracted abstract text to the intermediate text to generate a target abstract text, according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram illustrating obtaining of the target abstract text by a processing module using the second specific content as key content according to an embodiment of the disclosure;
FIG. 7 shows a schematic diagram of a target abstract text generated by a processing module adding first additional information associated with but different from the intermediate text to the intermediate text, according to an embodiment of the disclosure;
FIG. 8 illustrates a schematic diagram of a target abstract text generated by a processing module adding first additional information associated with but different from the intermediate text to the intermediate text, according to another embodiment of the present disclosure;
FIGS. 9a-9b illustrate schematic diagrams of the selection of relevant information or completion of particular content by a user, according to an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of adjusting the order of sentences to generate a target abstract text in accordance with an embodiment of the present disclosure;
FIG. 11 shows another schematic diagram of adjusting the order of sentences to generate a target abstract text, in accordance with an embodiment of the present disclosure;
FIG. 12 illustrates a schematic diagram of generating a target summary text based on historical information of a user, according to an embodiment of the disclosure;
FIG. 13 is a diagram illustrating selection of a preference value when a user uses the text processing apparatus according to an embodiment of the present disclosure;
FIG. 14 is a diagram illustrating selection of a preference template when a user uses the text processing apparatus according to an embodiment of the present disclosure;
FIG. 15 shows a schematic diagram of selecting a preference value or template when a user registers the text processing device according to an embodiment of the disclosure;
FIG. 16 shows a schematic diagram illustrating the creation of a user-specific information table in accordance with an embodiment of the present disclosure;
FIG. 17 shows a schematic diagram of displaying a plurality of summary outputs to a user, in accordance with an embodiment of the present disclosure;
FIG. 18 shows a schematic diagram of obtaining a target model for a plurality of data classes, according to an embodiment of the present disclosure;
FIG. 19 shows a schematic diagram of obtaining a goal model for each of a plurality of users, in accordance with an embodiment of the present disclosure;
FIG. 20 shows a flow diagram of a text processing method according to an embodiment of the present disclosure;
FIG. 21 shows a schematic diagram of a text processing device according to an embodiment of the present disclosure;
FIG. 22 shows a schematic diagram of a computer-readable storage medium according to an embodiment of the present disclosure;
FIG. 23 is a diagram showing an example of a hardware configuration of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without any creative effort, shall fall within the protection scope of the present disclosure.
Flow charts are used herein to illustrate the steps of methods according to embodiments of the present application. It should be understood that the steps are not necessarily performed in the order shown. Rather, various steps may be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
The present disclosure provides a text processing apparatus which can obtain a summary text desired by a user through interaction with the user, thereby customizing a specific summary text for different users. The present disclosure is illustrated by way of abstract extraction.
First, a text processing apparatus 1000 for implementing an embodiment of the present disclosure is described with reference to fig. 1.
As shown in fig. 1, a text processing apparatus 1000 according to an embodiment of the present disclosure includes a first to-be-extracted abstract text acquisition module 1001, a user behavior information acquisition module 1002, and a processing module 1003. Those skilled in the art will understand that these modules may be implemented in hardware alone, in software alone, or in a combination thereof, and the present disclosure is not limited to any one of them. These units may be implemented, for example, by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit having data processing and/or instruction execution capabilities, together with corresponding computer instructions.
For example, the first to-be-extracted abstract text acquisition module 1001 may be configured to acquire the first abstract text to be extracted.
For example, the user behavior information obtaining module 1002 may be used to obtain user behavior information.
For example, the processing module 1003 may be configured to process the first to-be-extracted digest text by using a first model to obtain an intermediate text, and process the intermediate text based on the obtained user behavior information to generate a target digest text.
For example, the first text to be abstracted may be the original text information from which an abstract is to be extracted, the intermediate text may be an intermediate abstract, and the user behavior information may be deleting, modifying, or adding sentences or words in the abstract, adjusting their order, etc., so as to generate the abstract desired by the user, which is not limited herein. For example, the first model may be an existing abstract extraction model, which may include various neural network models, such as, but not limited to: convolutional neural networks (CNN) (including GoogLeNet, AlexNet, VGG networks, etc.), regions with convolutional neural networks (R-CNN), region proposal networks (RPN), recurrent neural networks (RNN), stack-based deep neural networks (S-DNN), deep belief networks (DBN), restricted Boltzmann machines (RBM), fully convolutional networks, long short-term memory (LSTM) networks, and classification networks. Additionally, a neural network model that performs a task may include sub-neural networks, and a sub-neural network may be a heterogeneous neural network and may be implemented with a heterogeneous neural network model.
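The flow of the three modules can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the summarizer stub stands in for the "first model" (which the disclosure allows to be any abstract extraction model), and all function names are hypothetical.

```python
# Hypothetical sketch of the apparatus flow: acquire text, produce an
# intermediate abstract with the "first model", then apply user behavior.

def first_model(text: str) -> str:
    """Stand-in for the first model: a naive extractive summary that
    keeps only the first sentence of the input."""
    return text.split(". ")[0].rstrip(".") + "."

def process(first_text: str, user_action) -> str:
    """Mirror of the processing module: model output, then user edit."""
    intermediate = first_model(first_text)   # intermediate abstract
    return user_action(intermediate)         # target abstract text

# Example user behavior: delete specific content from the intermediate text.
summary = process(
    "Mark Zuckerberg founded Facebook in 2004. The company is based in Menlo Park.",
    lambda s: s.replace(" in 2004", ""),
)
# summary is the target abstract with "in 2004" removed.
```

In the apparatus itself, `user_action` would be driven by the user behavior information acquisition module (deletion, modification, addition, or reordering) rather than a fixed lambda.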
Various embodiments of the text processing apparatus according to embodiments of the present disclosure will be described in detail below with reference to figs. 2 to 19.
First embodiment
For example, in a case that the user behavior information acquired by the user behavior information acquiring module is to delete the first specific content in the intermediate text, the processing module 1003 may directly delete the first specific content in the intermediate text to generate the target abstract text.
For example, in a case that the user behavior information acquired by the user behavior information acquisition module is to modify a first specific content in the intermediate text, the processing module 1003 may provide, to the user, a candidate recommended content replacing the first specific content for selection by the user, and replace the first specific content with the candidate recommended content selected by the user to generate a target abstract text.
Fig. 2 shows a flowchart of a method 200 in which the processing module 1003 provides the user with candidate recommended content replacing the first specific content for selection by the user, according to an embodiment of the present disclosure. As shown in fig. 2, the processing module 1003 may provide the candidate recommended content replacing the first specific content according to the following steps: identifying a type of the first specific content (S201); generating a plurality of candidate recommended contents from alternative recommended content sources according to the type (S202); and sorting the plurality of candidate recommended contents according to a first predetermined rule to select the top N candidate recommended contents for the user to select, where N is a positive integer (S203).
For example, for step S201, the type of the first specific content includes one or more of a part of speech, whether it is an entity, and whether it is a sentence. For example, for step S202, the alternative recommended content sources may include one or more of a synonym dictionary, a language model, a knowledge base, reference resolution, other candidates from a path search, and sentence ordering.
Table 1 illustrates the generation of a plurality of candidate recommended content from alternate recommended content sources based on the type of the first particular content.
TABLE 1

Source \ Content type           Entity   Non-entity noun/pronoun   Verb/adj./adverb   Sentence
Synonym dictionary              yes      yes                       yes                no
Language model                  yes      yes                       yes                no
Knowledge base                  yes      no                        no                 no
Reference resolution            yes      yes                       no                 no
Path search (other candidates)  yes      yes                       yes                yes
As shown in table 1, for example, the synonym dictionary may be used to provide candidate recommended contents for first specific content that is an entity, a non-entity noun/pronoun, or a verb/adjective/adverb, but not for a whole sentence; the language model may likewise be used for entities, non-entity nouns/pronouns, and verbs/adjectives/adverbs, but not for sentences; the knowledge base may be used only for entities, and not for non-entity nouns/pronouns, verbs/adjectives/adverbs, or sentences; reference resolution may be used for entities and non-entity nouns/pronouns, but not for verbs/adjectives/adverbs or sentences; the other candidates from the path search (beam search) may be used for all of these types, including sentences; and so on.
It should be appreciated that table 1 is merely an example, and that other classification approaches may be utilized to classify the first specific content into multiple categories, and then generate multiple candidate recommended contents from other suitable sources according to the multiple categories, which is not limited herein.
It should be appreciated that reference resolution in this disclosure refers to any conventional or improved method in the current field of natural language processing, and that the other candidates from the path search may be obtained with existing shortest-path search algorithms including, but not limited to, Dijkstra's algorithm, the A* algorithm, the SPFA algorithm, the Bellman-Ford algorithm, the Floyd-Warshall algorithm, and Johnson's algorithm, without limitation.
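The source-selection logic of table 1 can be sketched as a simple lookup from content type to applicable candidate sources. The type labels and source names below are assumptions made for the sketch; the disclosure leaves the concrete classification and sources open.

```python
# Illustrative routing of candidate-recommendation sources by content type,
# following the pattern of table 1 (step S202). Names are assumptions.

SOURCES_BY_TYPE = {
    # content type            -> sources consulted for candidates
    "entity":                  ["synonym_dictionary", "language_model",
                                "knowledge_base", "reference_resolution",
                                "beam_search_candidates"],
    "non_entity_noun_pronoun": ["synonym_dictionary", "language_model",
                                "reference_resolution",
                                "beam_search_candidates"],
    "verb_adjective_adverb":   ["synonym_dictionary", "language_model",
                                "beam_search_candidates"],
    "sentence":                ["beam_search_candidates"],
}

def candidate_sources(content_type: str) -> list:
    """Return the sources to query for the selected content's type."""
    return SOURCES_BY_TYPE.get(content_type, [])
```

Each returned source would then be queried for replacement candidates, which are pooled before the ranking step S203.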
Fig. 3 shows a schematic diagram of providing, by the processing module 1003, candidate recommended content replacing the first specific content to the user for selection by the user according to an embodiment of the present disclosure.
As shown in fig. 3, in a case that the first specific content selected by the user is "Zuckerberg", the processing module 1003 first identifies the type of "Zuckerberg" (for example, "entity", "noun", "not a sentence"), then generates a plurality of candidate recommended contents (i.e., a candidate recommended content list) from the synonym dictionary, the knowledge base, reference resolution, and the like according to that type, and then sorts the candidates according to a first predetermined rule so as to present the top N (for example, N = 3) candidates for the user to select.
Next, for step S203, for example, the processing module 1003 may score the multiple candidate recommended contents according to features of one or more of parts of speech of the multiple candidate recommended contents, original word information coverage of the multiple candidate recommended contents, additional information inclusion of the multiple candidate recommended contents, context fluency, user portrait preference, user behavior, and domain types of the multiple candidate recommended contents, so as to obtain a weighted sum of feature scores, and rank the multiple candidate recommended contents according to the weighted sum.
For example, the processing module 1003 may obtain the weighted sum of the feature scores through a second predetermined rule or a first neural network, where the second predetermined rule may be an appropriate rule set by human, for example, the second predetermined rule may be a non-neural network rule such as a formula, a statistical model, and the like, which is not limited herein. The first neural network may be any one of the neural networks described above, without limitation.
For example, the weighted sum of the feature scores includes a weighted sum of the base scores of the features and one or both of additional scores based on the user behavior information and on the first to-be-extracted abstract text.
Fig. 4 shows a schematic diagram of obtaining the base scores of the features by the processing module 1003 according to an embodiment of the present disclosure.
As shown in fig. 4, feature extraction is performed on the plurality of candidate recommended contents in the candidate recommended content list to obtain feature percentages such as part of speech, original word information coverage (i.e., the percentage of the candidate recommended content covering the original word/the first specific content), additional information inclusion (i.e., the percentage of the candidate recommended content including content other than the original word/the first specific content), and context fluency, and the feature percentages are then converted into vector features after feature processing. For example, the vector feature of the candidate recommended content "Mark Zuckerberg" is [0.92, 1.00, 0.10, 0.93], and the vector feature of the candidate recommended content "he" is [0.26, 0.00, 0.10, 0.32]. Next, the processing module 1003 may obtain a weighted sum of the base scores of the features of each candidate recommended content from its vector feature. For example, the weighted sum of the base scores of the candidate recommended content "Mark Zuckerberg" is 0.68, and that of the candidate recommended content "he" is 0.13. Next, the processing module 1003 may sort the plurality of candidate recommended contents by the weighted sum of the base scores, for example in descending order, so as to select the top N candidate recommended contents as needed for the user to select.
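The base-score computation described above can be sketched as follows. The feature order (part of speech, original-word coverage, additional information, context fluency) follows the text, but the weights are illustrative assumptions, so the resulting scores differ from the example values in Fig. 4.

```python
# Hypothetical sketch of the Fig. 4 base-score computation; the weights
# are illustrative assumptions, not values from the disclosure.
FEATURE_WEIGHTS = [0.4, 0.3, 0.1, 0.2]

def base_score(features):
    """Weighted sum of the per-feature scores of one candidate."""
    return sum(w * f for w, f in zip(FEATURE_WEIGHTS, features))

candidates = {
    "Mark Zuckerberg": [0.92, 1.00, 0.10, 0.93],
    "he":              [0.26, 0.00, 0.10, 0.32],
}

# Sort candidates by descending base score and keep the top N for the user.
N = 3
ranked = sorted(candidates, key=lambda c: base_score(candidates[c]),
                reverse=True)[:N]
```

With any reasonable positive weights, "Mark Zuckerberg" outranks "he", matching the ordering in the figure.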
The base scores for the various features may use a uniform weight for all users. For example, the base scores for the features may be obtained for all users using the same neural network or the same predetermined rules.
Alternatively, the base scores of the features may use different weights for different users. For example, users are classified according to user preferences, and a different neural network is then trained for each class of users to obtain the base scores of the features.
For example, the additional score may be obtained by directly modifying the base score based on the user behavior information, or by adding an additional feature obtained based on the first to-be-extracted digest text to the base score.
For example, when the base score = a×W1 + b×W2, the base score may be directly modified based on the user behavior information to yield: additional score = α×(a×W1 + b×W2), where α is a factor derived from the user behavior information. Alternatively, when the base score = a×W1 + b×W2, the additional score may be obtained by adding to the base score an extra feature term (c×W3) derived from the first to-be-extracted abstract text: additional score = a×W1 + b×W2 + c×W3.
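A minimal sketch of the two additional-score formulas, where a, b, c are feature values, W1-W3 their weights, and alpha a user-behavior scaling factor; all names are illustrative assumptions rather than symbols fixed by the disclosure:

```python
# Two ways of deriving an additional score from the base score.
def base(a, b, W1, W2):
    return a * W1 + b * W2

# (1) Directly modify the base score with a user-behavior factor alpha.
def additional_scaled(a, b, W1, W2, alpha):
    return alpha * base(a, b, W1, W2)

# (2) Add an extra feature term c*W3 derived from the first
#     to-be-extracted abstract text.
def additional_with_term(a, b, c, W1, W2, W3):
    return base(a, b, W1, W2) + c * W3
```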
As one example, additional scores may be obtained based on the user's historical selections. For example, if the user has selected candidate recommended content from the knowledge base 5 times in succession, the weight of the knowledge base may be increased accordingly when generating the summary using the first model.
As one example, in the news digest extraction field, additional scores may be obtained based on the current news situation.
For example, additional scores may be obtained by increasing, based on the news genre, the weight of words meeting the genre requirements in terms of genre, content, and the like. As an example, when the news genre is political, the weight of "former US President Trump" may be increased while the weight of "entrepreneur Trump" may be decreased.
As one example, additional scores may be obtained based on contextual redundancy information. For example, for the sentence "Tencent founder Ma Huateng will meet with officials of the Chinese anti-monopoly supervision agency this month" contained in the first abstract text to be extracted, since "Tencent" has already appeared earlier in the abstract extraction process, for the candidate recommended contents of "Ma Huateng" the weight of "father of QQ" may be set greater than that of "Tencent president".
Second embodiment
For example, in a case that the user behavior information obtained by the user behavior information obtaining module is to add the second specific content in the first to-be-extracted abstract text to the intermediate text to generate the target abstract text, the processing module 1003 may directly add the second specific content in the first to-be-extracted abstract text to the intermediate text to generate the target abstract text. For example, the second specific content may be directly added to the last position of the intermediate text to generate the target abstract text, or the second specific content may be added to the corresponding position of the intermediate text according to the position of the second specific content in the first abstract text to be extracted, so that the logical relationship of the generated target abstract text is consistent with that of the first abstract text to be extracted.
Fig. 5 is a schematic diagram illustrating that the processing module 1003 directly adds the second specific content in the first to-be-extracted abstract text to the intermediate text to generate the target abstract text according to the embodiment of the disclosure.
As shown in fig. 5, when the user desires to add the sentence "according to earlier reports by Australian media, starting from the 18th Facebook will restrict Australian users from sharing and browsing news on its platform" from the original text to the output summary, the processing module 1003 may directly add that sentence to the output summary (as shown in the summary output (after regeneration)).
Alternatively, for example, in a case that the user behavior information acquired by the user behavior information acquiring module is to add second specific content in the first to-be-extracted digest text to the intermediate text to generate a target digest text, the processing module 1003 may use the second specific content as key content, so that the processing module processes both the first to-be-extracted digest text and the key content by using the first model to generate the target digest text.
Fig. 6 is a schematic diagram illustrating that the processing module 1003 uses the second specific content as a key content to obtain the target abstract text according to the embodiment of the disclosure.
As shown in fig. 6, when the user desires to add the sentence "according to earlier reports by Australian media, starting from the 18th Facebook will restrict Australian users from browsing news on its platform" from the original text to the output abstract, the processing module 1003 may use that sentence as the key content, so that the processing module 1003 processes the first abstract text to be extracted and the key content by using the first model to generate the target abstract text.
Since direct addition or addition as key content selected by the user easily causes information redundancy, and the summary length requirement may then not be satisfied, the second specific content may instead be added adaptively by the processing module 1003.
Alternatively, for example, in a case that the user behavior information acquired by the user behavior information acquiring module is to add the second specific content in the first to-be-extracted digest text to the intermediate text to generate the target digest text, the processing module 1003 may adaptively add the second specific content in the first to-be-extracted digest text to the intermediate text to generate the target digest text according to one or both of the similarity or the information amount between the second specific content and the intermediate text and the length of the intermediate text.
For example, the processing module may adaptively and dynamically add the second specific content in the first to-be-extracted digest text to the intermediate text to generate the target digest text according to the following steps:
(1) Compare the similarity/information amount between each sentence in the intermediate text and the sentence to be added currently (the second specific content):
All sentences in the intermediate text have no/little overlapping information with the current sentence → add directly
There are sentences in the intermediate text that coincide (partially or substantially) with the current sentence; the following choices are possible:
a. Directly generate as key content
b. Put the current sentence into the original text and highlight the redundant sentence/sentence part, asking the user whether to delete it
c. For partial coincidence, deduplicate and splice the current sentence and the coincident sentence
(2) Check the abstract length after step (1); if the length requirement still cannot be met after sentence compression, choose one of the following:
a. Directly generate as key content instead
b. Order the sentences by importance (obtained by rules or a neural network model), mark the several lowest-ranked sentences (so that the remaining sentences meet the requirement), and ask the user whether to delete them
It should be appreciated that the above method steps for adaptively and dynamically adding the second specific content in the first to-be-extracted abstract text to the intermediate text by the processing module 1003 to generate the target abstract text are not limited thereto, and other suitable methods may also be adopted to add the second specific content, which is not limited herein.
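The adaptive-addition steps can be sketched as follows. Word-level Jaccard overlap, the thresholds, and the use of sentence length as a stand-in for sentence importance are all assumptions for illustration, not part of the disclosure.

```python
# Hedged sketch of the adaptive addition of the second specific content.
def overlap(s1, s2):
    """Word-level Jaccard overlap between two sentences (assumed measure)."""
    w1, w2 = set(s1.split()), set(s2.split())
    return len(w1 & w2) / len(w1 | w2) if (w1 | w2) else 0.0

def add_adaptively(intermediate, new_sentence, max_len=200, threshold=0.3):
    # Step (1): compare the new sentence with every sentence already present.
    if all(overlap(s, new_sentence) < threshold for s in intermediate):
        mode = "direct"        # no overlapping information -> add directly
    else:
        mode = "key_content"   # overlap found -> e.g. regenerate as key content
    result = intermediate + [new_sentence]
    # Step (2): check the summary length; drop the least "important"
    # sentences (importance stubbed as length) until the budget is met.
    if sum(len(s) for s in result) > max_len:
        result.sort(key=len, reverse=True)
        while len(result) > 1 and sum(len(s) for s in result) > max_len:
            result.pop()
    return mode, result
```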
Third embodiment
For example, in a case that the user behavior information obtained by the user behavior information obtaining module is to obtain first additional information associated with the intermediate text but different from the intermediate text to add to the intermediate text to generate a target abstract text, the processing module 1003 may provide one or more second abstract texts to be extracted to the user based on the first abstract text, and in a case that the user selects a desired second abstract text to be extracted, the processing module 1003 may process the first abstract text and the desired second abstract text to be extracted by using the first model according to a third predetermined rule to generate the target abstract text.
For example, the processing module 1003 may search one or more second to-be-extracted abstract texts which are associated with but different from the first to-be-extracted abstract text, based on the key information and the type contained in the first to-be-extracted abstract text, and de-duplicate and sort the one or more second to-be-extracted abstract texts to provide the top M second to-be-extracted abstract texts to the user, where M is a positive integer.
For example, the second abstract text to be extracted is a text that is associated with the first abstract text to be extracted but different from it; otherwise, redundancy of the extracted abstract may result. Generally, an associated text whose similarity falls in a middle interval may be selected as the second abstract text to be extracted.
For example, the processing module 1003 may sort one or more second abstract text to be extracted, which is associated with but different from the first abstract text to be extracted, by using one or more dimensions according to one or more of the following fourth predetermined rules: similarity with the first to-be-extracted abstract text, difference with the coverage area of the first to-be-extracted abstract text, time difference with the first to-be-extracted abstract text, and user portrait preference.
The following details the ordering of one or more second abstract text to be extracted that is associated with but distinct from the first abstract text to be extracted according to one or more dimensions:
(1) Perform similarity matching between all search results for the one or more second abstract texts to be extracted and the first abstract text to be extracted; second abstract texts whose similarity falls in a middle interval (e.g., approaching 50%) rank higher
(2) Simultaneously perform entity extraction and event extraction on the one or more second abstract texts to be extracted and the first abstract text to be extracted: compared with the first abstract text to be extracted, second abstract texts with high entity coverage, more new entities, and large event differences rank higher
(3) According to time: extract the times of the first abstract text to be extracted and of the one or more second abstract texts to be extracted; the closer the times, the higher the ranking
(4) According to the user portrait: if the user has set preferences or preferences have been mined from history information, the ranking of the one or more second abstract texts to be extracted is adjusted according to those preferences. For example:
v. The user often selects news from Xinhua Net → raise the ranking of Xinhua Net news
v. The user prefers science news → raise the ranking of news classified as science or containing science entities
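The four-dimension ranking above can be sketched as follows. The component scores are assumed to be precomputed values in [0, 1], and the weights are illustrative assumptions.

```python
# Hedged sketch of ranking candidate second abstract texts by the four
# dimensions (1)-(4) described above.
def similarity_score(sim):
    # (1) similarity in a middle interval (near 50%) ranks highest
    return 1.0 - 2.0 * abs(sim - 0.5)

def rank_second_texts(candidates, weights=(0.4, 0.3, 0.2, 0.1)):
    """candidates: dicts with precomputed per-dimension raw scores."""
    def score(c):
        return (weights[0] * similarity_score(c["similarity"])   # (1)
                + weights[1] * c["new_entity_ratio"]             # (2)
                + weights[2] * c["time_closeness"]               # (3)
                + weights[3] * c["user_preference"])             # (4)
    return sorted(candidates, key=score, reverse=True)
```

A candidate with similarity near 50% to the first text outranks a near-duplicate (similarity 95%), all else being equal.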
Next, the processing module 1003 may put the first additional information obtained by processing the desired second abstract text to be extracted by using the first model at a specific position of the intermediate text according to one or more of the length, the similarity, and the correlation ratio of the first abstract text to be extracted and the second abstract text to be extracted so as to generate a target abstract text.
The first additional information may be placed in a specific location of the intermediate text to generate the target abstract text according to one or more of the following examples:
(1) confirming the proportion of a first abstract text to be extracted (e.g., original news) and a second abstract text to be extracted (e.g., related news):
Rule settings, such as: the first abstract text to be extracted is preferred (all information of the first abstract text to be extracted is retained, and then second abstract texts to be extracted are added according to the remaining length space)
User settings, such as: the user can control the length proportion of the second abstract text to be extracted and the first abstract text to be extracted through the slide bar
The system makes an autonomous decision (this step can be performed simultaneously with step 2), such as: the method comprises the steps of processing each second abstract text to be extracted by utilizing a first model to obtain an abstract with the same length setting as the first abstract text to be extracted, sequencing the importance of sentences of a plurality of abstracts obtained from the first abstract text to be extracted and the second abstract text to be extracted, and screening P sentences meeting the final length requirement as target abstract texts, wherein P is a positive integer.
(2) And comparing each second to-be-extracted abstract text with the first to-be-extracted abstract text, and removing sentences which are overlapped or extremely similar to the first to-be-extracted abstract text.
(3) And abstracting each second abstract text to be abstracted according to the relevant proportion (such as length) to obtain first additional information.
(4) And supplementing the second abstract text to be extracted to the specific position of the first abstract text to be extracted according to the relevant proportion. With respect to the confirmation location:
Simple rules, such as: add the second abstract texts to be extracted into the first abstract text to be extracted one by one in display order
V. chronologically, such as: extracting the time in each second abstract text to be extracted, and listing each second abstract text to be extracted according to the sequence from old to new
Compare the positions of the coincident parts with the first abstract text to be extracted, such as: find the part where the first to-be-extracted abstract text and the current second to-be-extracted abstract text coincide → observe the positional relationship between the coinciding part and the extracted abstract sentences → confirm the final position according to that relationship.
V. constructing a semantic relation tree, such as: and constructing a relation tree based on semantic logic (such as by using an RST method) aiming at all abstract sentences obtained by abstract extraction, and sequencing from a root node.
V according to user behavior or preferences, such as: the second text to be extracted selected by the user is ranked to the top.
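Steps (1)-(4) above can be combined into a simple sketch: keep the first summary in full, then append the second-text summaries chronologically within the remaining length budget. The "first text preferred" rule and the timestamp field are assumptions for illustration.

```python
# Illustrative merge of the first summary with second-text summaries.
def merge_summaries(first_summary, second_summaries, max_len):
    remaining = max_len - len(first_summary)   # (1) first text preferred
    # order second-text summaries from oldest to newest by timestamp
    ordered = sorted(second_summaries, key=lambda s: s["time"])
    merged = first_summary
    for s in ordered:
        if len(s["text"]) + 1 <= remaining:    # respect the length budget
            merged += " " + s["text"]
            remaining -= len(s["text"]) + 1
    return merged
```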
Fig. 7 shows a schematic diagram of generating a target abstract text by adding first additional information associated with but different from the intermediate text to the intermediate text by the processing module 1003 according to an embodiment of the present disclosure.
As shown in fig. 7, processing module 1003 may search for one or more related news that are associated with but different from the original news based on keywords contained in the original news and rank the one or more related news using a ranking model to provide the top M (e.g., M = 3) related news to the user for selection by the user. Next, for the related news selected by the user, the processing module 1003 may compare the related news with the original news, remove the sentences that are overlapped or extremely similar to the original news, and supplement the related news digest generated after the related news with redundancy removed is subjected to digest extraction to a specific position of the original news digest output after the original news is subjected to digest extraction, so as to generate the target digest.
Fig. 8 shows a schematic diagram of generating a target abstract text by adding first additional information associated with but different from the intermediate text to the intermediate text by the processing module 1003 according to another embodiment of the present disclosure.
As shown in fig. 8, after the user selects the provided related news (as indicated by a mouse arrow), the processing module 1003 may put first additional information obtained by processing the related news using the abstract extraction model at a specific position of the intermediate text (as shown in fig. 8, an underlined portion in the target abstract text is the abstract content generated by the related news) to generate the target abstract text.
Fourth embodiment
For example, in a case that the user behavior information acquired by the user behavior information acquiring module is information related to acquiring third specific content in the intermediate text, the processing module 1003 may provide the user with information related to the third specific content, so that the user may select the information related to the third specific content or complement the third specific content to generate a target abstract text.
Fig. 9a-9b show schematic diagrams of the selection of relevant information or completion of specific content by a user according to an embodiment of the present disclosure.
As shown in fig. 9a, in a case where the user's input cursor stays at a certain position for more than a certain time, the processing module 1003 may acquire a keyword before/after the position (i.e., the third specific content, such as "Trump" in fig. 9a), and then search, for example, a knowledge base/network for information related to the entity corresponding to the keyword (such as "US President", "45th US President", "famous entrepreneur", "Republican candidate") for the user to learn from or to select to replace the keyword with the related information.
As shown in fig. 9b, in the case that the user makes an input (e.g., the input "USA" in fig. 9b), the processing module 1003 may acquire the keyword before/after the position (i.e., the third specific content in fig. 9b) together with the input, and search for the entity corresponding to the keyword from, for example, a knowledge base/network (e.g., "US President", "45th US President" in fig. 9b), so that the user can learn from the results or complete the keyword.
For example, the processing module 1003 may process the third specific content by using a fifth predetermined rule to obtain one or more candidate contents of the third specific content, and complete the third specific content with the one or more candidate contents. For example, the fifth predetermined rule may use a technique such as reference resolution, and is not limited herein.
For example, the processing module 1003 may sort one or more pieces of information related to the third specific content searched from the knowledge base according to one or more of the content of the information related to the third specific content, the type of the information related to the third specific content, the domain of the first to-be-extracted digest text, and the weighted sum thereof, and display the information related to the third specific content to the user.
For example, information related to the third specific content (hereinafter, referred to as a keyword) may be displayed to the user by the following rule:
1. Keyword identification: identify the entity(ies) or noun(s) around the cursor by the proximity principle
2. Keyword selection and completion:
Completion: (1) perform reference resolution on the selected entity or noun in the first abstract text to be extracted (original news) to select candidate words for information completion. (2) Add the news domain type.
Selection: entity classes have priority; entities with insufficient information have priority
3. Search in a knowledge base, a synonym dictionary, a language model, or the like
4. Sort and display the search results:
Principle: prefer entries containing the words entered by the user, entries with information not already contained before or after the entity, and entries conforming to the news genre (e.g., political entities for political news)
Input: candidate word, keyword, user input (possibly dynamically changing), news domain type, candidate word domain type
Method: weighted sum of feature scores (obtained by human setting or neural network)
a) Base scores (constant for all users/different weights for different users), as shown with reference to FIG. 4
b) Additional scoring based on user and current news, e.g.
V. According to user history choices, such as: increase the weight of words the user has recently selected multiple times
V. According to user actions or inputs, such as: reduce the weight of words the user has deleted
V. according to the current news situation
(1) Current news type (candidate words of the same type as the current news are weighted higher)
Example: when the news genre is political, "US President" > "entrepreneur"
(2) Contextual mentions (removing redundant information)
Example: for the sentence "Tencent founder Ma Huateng will meet with officials of the Chinese anti-monopoly supervision agency this month", among the recommended words for "Ma Huateng", "father of QQ" > "Tencent president" (because a word of similar meaning already appears earlier)
Fifth embodiment
For example, in a case that the user behavior information acquired by the user behavior information acquiring module is to modify the order of a first specific sentence contained in the intermediate text, the processing module 1003 may directly adjust the order of the first specific sentence according to the user behavior information.
Since adjusting only a single sentence is likely to cause disorder of sentence logical relationship, alternatively, for example, in a case where the user behavior information acquired by the user behavior information acquisition module is to modify the order of a first specific sentence contained in the intermediate text, the processing module 1003 may adjust the order of the first specific sentence and a sentence related to the first specific sentence according to the user behavior information.
For example, the processing module 1003 may construct a structure diagram of a statement associated with the first specific statement, and adjust the order of the first specific statement and the statement related to the first specific statement according to the structure diagram.
Fig. 10 shows a schematic diagram of adjusting the order of sentences to generate a target abstract text according to an embodiment of the present disclosure. As shown in fig. 10, for example, when the user selects sentence (4) in the original text, related sentences having a close relationship (high closeness) with sentence (4) may be extracted to construct a related-sentence subgraph (e.g., (3) → (4) → (5) in fig. 10); then, according to the position to which the user wants to move sentence (4), it is determined whether the sentences preceding and following the moved position are in the constructed related-sentence subgraph. As an example, in a case where the user wants to move sentence (4) between (1) and (2), since (1) and (2) are not in the constructed related-sentence subgraph, all sentences (3)(4)(5) in the subgraph may be moved between (1) and (2) (the user may be asked for confirmation first) in order to guarantee the logical relationship and fluency of the sentences. As another example, in a case where the user wants to move sentence (4) between sentences (5) and (6), sentence (4) can be moved directly between (5) and (6) because (5) is in the constructed related-sentence subgraph.
It should be appreciated that the closeness of the related sentences can be judged by a neural network or an existing rule (e.g., having the same entity, a close position, a conjunctive relationship, etc.), and then a related sentence subgraph is constructed with the related sentences by, for example, calculating the type of incidence relationship, the positional relationship, etc. between the sentences.
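The subgraph-based move logic of Fig. 10 can be sketched as follows. Sentence indices and the precomputed related-sentence group are illustrative assumptions; closeness judgment itself is left out.

```python
# Hedged sketch: if the target neighbour is already inside the related
# subgraph, the selected sentence moves alone; otherwise the whole
# subgraph moves to preserve the logical chain.
def move_after(order, to_move, anchor):
    """Reinsert the sentences in to_move right after sentence anchor."""
    rest = [i for i in order if i not in to_move]
    pos = rest.index(anchor) + 1
    return rest[:pos] + to_move + rest[pos:]

def plan_move(order, group, selected, anchor):
    to_move = [selected] if anchor in group else group
    return move_after(order, to_move, anchor)

# Fig. 10 example: sentences 1-6, related subgraph (3)(4)(5), user moves (4).
# After (1): (1) is outside the subgraph -> (3)(4)(5) move together.
# After (5): (5) is inside the subgraph -> (4) moves alone.
```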
FIG. 11 illustrates another schematic diagram of adjusting the order of sentences to generate a target abstract text according to an embodiment of the present disclosure. As shown in fig. 11, although the user does not select the underlined part, it is adjusted together with the bold part (the part selected by the user) because the two are associated.
Sixth embodiment
For example, the text processing apparatus may further include a user history information acquisition module configured to acquire history information of the user. For example, the user history information obtaining module may sort and mine the obtained user history information to summarize the information rules about the specific user. Next, the processing module 1003 may further process the first abstract text to be extracted by using a first model based on the historical information/information rule of the user, so as to generate the target abstract text.
By adjusting the output target abstract text based on the historical information of the user, the output target abstract can better meet the requirements of the user.
For example, the user history information acquisition module may record and refine the user's history input and information to form a user history table, such as:
frequency of occurrence of entities in the user input (when an entity frequently appears in the user input, it indicates a high degree of user attention)
The frequency of occurrence of user-specific behavior, such as "frequently deleting sentences with specific numerical values", "frequently adding the last sentence in the original text", etc.
Next, the processing module 1003 may update the user history table in real time according to a predetermined period, for example, when the frequency of occurrence of the user specific behavior exceeds a predetermined threshold or the frequency of occurrence of the entity exceeds a predetermined threshold, the user specific behavior or entity may be updated into the user history table.
Then, for a new input of the user, the processing module 1003 may process the first to-be-extracted abstract text with the corresponding user history table by using the first model to generate the target abstract text that matches the history information of the user.
In one example, the user history information may be weighted in the path search during output of the target digest text by the path search, such that the processing module may consider the user history information when processing the first to-be-extracted digest text.
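The history weighting in the path search might look like the following sketch; the boost factor, frequency threshold, and the entity name in the example table are assumptions, not values from the disclosure.

```python
# Illustrative sketch: words matching entities that appear frequently in
# the user history table receive a boosted path-search weight.
def history_weight(word, history_table, base=1.0, boost=1.5, threshold=3):
    """Return the path-search weight of `word` for this user."""
    if history_table.get(word, 0) >= threshold:
        return base * boost
    return base

# Hypothetical user history table: entity -> occurrence count.
history_table = {"Huayi": 5, "5G": 2}
```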
Fig. 12 illustrates a schematic diagram of generating a target digest text based on historical information of a user according to an embodiment of the present disclosure. As shown in fig. 12, in the acquired historical information of the user, the entity "Huayi" occurs with high frequency, so the processing module 1003 may increase the weight of the entity "Huayi" when performing abstract extraction, so that the output target abstract better meets the user's expectations.
Seventh embodiment
In one example, the text processing device may further include a user preference setting module configured to form a user-specific information table based on preference options selected by a user when using the text processing device or preference options selected when the user registers the text processing device, wherein the processing module further processes the first to-be-extracted digest text using a first model based on the user-specific information table to generate the target digest text.
For example, the user may select a preference option by checking boxes, answering questions, or the like when using or registering the text processing apparatus. Figs. 13-15 show schematic diagrams of a user selecting preference options according to embodiments of the disclosure.
Fig. 13 shows a schematic diagram of selecting a preference value when a user uses the text processing apparatus according to an embodiment of the present disclosure. As shown in fig. 13, in the case where the user selects "prefer a specific numerical value", the weight of the relevant sentences with numerical values may be increased to output relatively many relevant sentences with numerical values in the digest output.
Fig. 14 shows a schematic diagram of selecting a preference template when a user uses the text processing apparatus according to an embodiment of the present disclosure. As shown in fig. 14, a preference template ("data type" and "children's reading type" as shown in fig. 14) may contain changes in several aspects. For example, in a case where the user selects "data type", the weight of relevant sentences with numerical values may be increased; when the user selects "children's reading type", specific data and technical details may be disregarded, long sentences may be ignored or broken into short sentences, vocabulary outside a primary-school-level lexicon may be removed, or the reading style may be changed from formal to easily understandable, and the like.
Fig. 15 shows a schematic diagram of selecting a preference value or template when a user registers the text processing apparatus according to an embodiment of the present disclosure. As shown in fig. 15, the user may be presented with a preference setting table in the registration phase, which contains one or more items of user preference information. After the user fills in the table, the processing module may generate a user-specific information table for reference when generating the summary.
Eighth embodiment
For example, the processing module 1003 may also create a user-specific information table according to the manner described above. Fig. 16 shows a schematic diagram of creating a user-specific information table according to an embodiment of the present disclosure.
Next, the text processing apparatus may further include a display module to display, to the user for selection, one or more of a target abstract text obtained based on the user behavior information, a target abstract text obtained based on the history information of the user, and a target abstract text obtained based on the user preference, so that the user can more flexibly and intuitively see the target abstract texts output based on the history information and the preference settings.
FIG. 17 shows a schematic diagram of displaying multiple summary outputs to a user, according to an embodiment of the disclosure.
Further, the target abstract texts displayed to the user can be deduplicated. For example, each generated target abstract text is compared with the other target abstract texts in terms of overlap ratio, and one of any two target abstract texts having a high overlap ratio (for example, 90% or more) is deleted. For example, existing models may be used to compute the degree of overlap/similarity between different target abstract texts, which is not limited herein.
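One minimal way to realize this overlap-based deduplication is sketched below, using token-level Jaccard overlap as a stand-in for whatever overlap/similarity model is actually employed; the function names and the tokenization are assumptions for illustration.

```python
def overlap_ratio(a, b):
    """Token-level Jaccard overlap between two summaries; a stand-in for
    whatever overlap/similarity model is actually used."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 1.0

def deduplicate(summaries, threshold=0.9):
    """Keep each summary only if it overlaps every already-kept summary
    by less than `threshold` (e.g., 90%)."""
    kept = []
    for s in summaries:
        if all(overlap_ratio(s, k) < threshold for k in kept):
            kept.append(s)
    return kept
```

Keeping the first of each near-duplicate pair is one possible tie-breaking rule; a system could equally keep the higher-scoring one.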
In addition, the target abstract texts can be sorted before being displayed to the user. As one example, the target abstract texts may be ordered according to the sentence fluency of the generated texts or the user's historical selections (e.g., the frequency with which summaries from various sources are selected). As another example, the target abstract texts may be ordered by scoring them. The scoring method may be similar to the scoring method described above with reference to fig. 4, without limitation.
For example, for the scoring of multiple target abstract texts, a uniform weight may be used for all users. For example, the scores of the features are obtained for all users using the same neural network or the same predetermined rule (e.g., setting the weights of the features based on user history information and of the summary's own features to 1).
Alternatively, different weights may be used for different users for scoring the multiple target abstract texts. For example, users are classified according to user preferences or the like, and then a different neural network is trained, or different rules are applied, for each class of users to obtain the scores of the various features.
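The uniform-weight and per-class-weight scoring described above can be sketched as a weighted sum of feature scores; the feature names, class names, and numeric weights below are invented for illustration and are not specified by this disclosure.

```python
def score_summary(features, weights):
    """Weighted sum of per-feature scores for one candidate summary."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Uniform weighting: every user shares the same predetermined weights.
uniform_weights = {"fluency": 1.0, "history_match": 1.0}

# Per-class weighting: each user class gets its own weights (or its own
# trained scorer); the class names and numbers here are made up.
class_weights = {
    "data_oriented": {"fluency": 0.5, "history_match": 1.5},
    "casual_reader": {"fluency": 1.5, "history_match": 0.5},
}

features = {"fluency": 0.8, "history_match": 0.4}
uniform_score = score_summary(features, uniform_weights)
class_score = score_summary(features, class_weights["data_oriented"])
```

In the neural-network variant, the weighted sum would simply be replaced by the network's scalar output.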
Ninth embodiment
For example, the text processing apparatus may further include a user data acquisition module for acquiring user data of a plurality of users, and a training module for training the first model with the user data of the plurality of users to obtain target models for different categories.
Because the focus of different classes of users may differ, modeling the user's focus with different models can produce results better suited to the user's requirements. The present disclosure trains the first model with the user data of multiple users, so that target models for different data categories, or for each of the multiple users, may be obtained.
As an example, the user data obtaining module may classify the user data of the plurality of users into a plurality of data categories according to a first predetermined rule, a neural network classifier, or the like, and the training module trains the first model with the user data of the plurality of users to obtain target models for the plurality of data categories, wherein the first predetermined rule is related to user behavior. For example, the user behavior may indicate a preferred language type, a preference for long or short sentences, a preferred summary length, and so forth.
For example, the behavior, input characteristics, etc. of each user may be collected during a model training phase, and then the user data of the plurality of users may be classified into a plurality of data categories according to a first predetermined rule related to the user behavior, a neural network classifier, etc. For example, the first predetermined rule may be a clustering rule or a regression rule, or may be other suitable methods, which are not limited herein.
Next, a small model may be learned online based on the first model (which may also be referred to as a common model) according to different data categories (e.g., a layer is added on the first model, and the layer has different parameters for each data category) to generate a target model for a plurality of data categories.
FIG. 18 shows a schematic diagram of obtaining an object model for a plurality of data classes, according to an embodiment of the disclosure.
As shown in FIG. 18, the behavior, input features, results of selections, user feedback, etc. of various users may be collected during a model training phase, and then the user data of multiple users may be classified into multiple data categories according to, for example, clustering rules, regression rules, sample filters, etc. Next, small models can be learned online based on the common model according to different data categories (e.g., a layer is added on the common model, and the layer has different parameters for each data category) to generate dedicated models for a plurality of data categories (e.g., category 1 dedicated model, category 2 dedicated model, category 3 dedicated model shown in fig. 18), so that the trained common model can be used as a target model for a plurality of data categories.
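The "add a layer on the common model, with different parameters per data category" idea above can be sketched minimally as a frozen shared backbone plus a tiny per-category head; the class names, the two-parameter head, and the averaging backbone are illustrative assumptions, not the actual model architecture.

```python
class CommonModel:
    """Stand-in for the shared first model: maps a feature vector to a score."""
    def forward(self, x):
        return sum(x) / len(x)

class CategoryHead:
    """A small category-specific layer learned on top of the frozen common
    model; only its two parameters differ between data categories."""
    def __init__(self, common, scale, bias):
        self.common, self.scale, self.bias = common, scale, bias
    def forward(self, x):
        return self.scale * self.common.forward(x) + self.bias

common = CommonModel()                      # one shared backbone
dedicated = {                               # one light head per data category
    "category_1": CategoryHead(common, scale=1.2, bias=0.0),
    "category_2": CategoryHead(common, scale=0.8, bias=0.1),
}
```

Because only the small heads are trained online, per-category adaptation stays cheap while the common model is reused unchanged.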
Next, when the user uses the text processing apparatus again, a different dedicated model included in the target models may be selected according to the user's behavior, input characteristics, selected results, feedback/settings, and the like, to obtain the summary desired by the user.
As another example, the training module may also train the first model with user data of the plurality of users to obtain a target model for each of the plurality of users. For example, the user data of the plurality of users may represent user data of a predetermined period.
FIG. 19 shows a schematic diagram of obtaining a goal model for each of a plurality of users, according to an embodiment of the disclosure.
As shown in fig. 19, user data of respective users may be collected at a predetermined period and then classified into a plurality of data categories according to the user according to, for example, a user identification module or the like. Next, small models can be learned online based on the common model according to different data categories (e.g., a layer is added on the common model, where the layer has different parameters for each data category) to generate dedicated models for different users (e.g., the user 1 dedicated model, the user 2 dedicated model, and the user 3 dedicated model shown in FIG. 19), so that the trained common model can be used as a target model for multiple different users.
Furthermore, user data is conventionally screened using language models or the like based only on the user data and the output itself, and the target model obtained by training the first model with user data screened in this way may not be ideal. The present disclosure proposes screening user data based on the user's feedback on the output target abstract text, so as to obtain user data that meets user expectations.
For example, as shown in fig. 19, the text processing apparatus may further include a filtering module, configured to filter the obtained user data of the multiple users according to user feedback, so that the training module trains the first model by using the filtered user data of the multiple users to obtain target models for different categories. For example, the user feedback may include direct feedback and indirect feedback of the user to the generated target summary text.
For example, the filtering module may weight the characteristics related to the user feedback and the characteristics of the user data of the plurality of users to obtain filtered user data of the plurality of users with different scores. Next, the training module may select a plurality of user data with higher scores to train the first model.
For example, the filtering module may use the same weight for all users to weight and score features related to the user feedback and features of the user data of the plurality of users to obtain filtered user data of the plurality of users.
For example, the filtering module may use different weights for different users to weight and score features related to the user feedback and features of the user data of the multiple users to obtain filtered user data of the multiple users.
For example, the filtering module may filter the acquired user data of the plurality of users according to the user feedback as follows:
Feature types of user feedback:
(1) Behavioral/indirect feedback (implicit): dwell time, whether the summary was copied, whether it was modified multiple times.
(2) Direct feedback: for example, the user directly indicates a satisfaction level (dissatisfied, satisfied, very satisfied), and the like.
Screening method:
(1) Collect the user's direct/indirect feedback, convert it into relevant features according to relevant rules or models, and sort and screen the user data using those features. For example:
the user's direct feedback is converted into a relevance score, e.g., satisfied: 1, relatively satisfied: 0.8, and so on;
the speed at which the user copies the summary is converted into a relevance score, e.g., if the time taken is x seconds, the score is 1/x.
(2) The features related to the user feedback and the features of the summaries contained in the user data are weighted and scored (e.g., using the scoring method described with reference to fig. 4) to obtain screened user data with different scores. For example:
the same weights are used for all users, e.g., by training a single neural network or applying manual rules (such as setting the weight of direct feedback to 1 and the other weights to 0; if there is no direct feedback, specific weights are used, such as 1 for the implicit feedback and the summary's own features);
different weights are used for different users, e.g., by classifying users and then training a different neural network, or applying different rules, for each class of users.
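The screening steps above — convert feedback into numeric features, weight them, keep the highest-scoring samples — can be sketched as follows; the satisfaction-to-score table matches the example values in the text, while the field names and weighting scheme are assumptions for illustration.

```python
DIRECT_SCORES = {"very satisfied": 1.0, "satisfied": 0.8, "dissatisfied": 0.0}

def feedback_features(sample):
    """Convert raw direct/indirect feedback into numeric features."""
    feats = {}
    if sample.get("direct") in DIRECT_SCORES:
        feats["direct"] = DIRECT_SCORES[sample["direct"]]
    if sample.get("copy_seconds"):
        feats["copy_speed"] = 1.0 / sample["copy_seconds"]  # x seconds -> score 1/x
    return feats

def screen(samples, weights, keep):
    """Weight the feedback features of each sample and keep the top scorers
    as training data for the first model."""
    def total(s):
        return sum(weights.get(k, 0.0) * v for k, v in feedback_features(s).items())
    return sorted(samples, key=total, reverse=True)[:keep]

samples = [
    {"id": 1, "direct": "satisfied", "copy_seconds": 2},
    {"id": 2, "direct": "dissatisfied", "copy_seconds": 10},
]
best = screen(samples, weights={"direct": 1.0, "copy_speed": 0.0}, keep=1)
```

Swapping the `weights` dict per user class gives the "different weights for different users" variant.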
Various embodiments of a text processing apparatus according to embodiments of the present disclosure are described above with reference to fig. 2-19.
The following describes functions of various embodiments of the text processing apparatus in brief with reference to table 2.
TABLE 2
[Table 2 is reproduced as an image in the original publication; its text is not available here.]
Through the text processing device of the embodiment of the disclosure, the target abstract expected by the user can be obtained through interaction with the user or through user setting.
Hereinafter, a text processing method 100 according to an embodiment of the present disclosure will be described with reference to fig. 20.
FIG. 20 shows a flow diagram of a text processing method 100 according to an embodiment of the present disclosure. The method can be automatically completed by a computer and the like. For example, the method may be used to obtain summary text. For example, the method may be implemented in software, hardware, firmware, or any combination thereof, loaded and executed by a processor in a device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a network server, or the like.
As shown in fig. 20, the text processing method 100 includes the following steps S101 to S103.
In step S101, a first to-be-extracted digest text is acquired.
In step S102, user behavior information is acquired.
In step S103, the first to-be-extracted abstract text is processed by using a first model to obtain an intermediate text, and the intermediate text is processed based on the obtained user behavior information to generate a target abstract text.
For example, in step S102, in the case that the user behavior information is to delete the first specific content in the intermediate text, step S103 may delete the first specific content in the intermediate text to generate the target abstract text. It should be appreciated that steps S101 and S102 may be processed in parallel (e.g., processing S101 and S102 simultaneously), or may be processed in series (e.g., processing S101 and then S102, or processing S102 and then S101), without limitation.
For example, in step S102, in a case that the user behavior information is to modify a first specific content in the intermediate text, step S103 may provide the user with a candidate recommended content replacing the first specific content for selection by the user, and replace the first specific content with the candidate recommended content selected by the user to generate a target abstract text.
For example, step S103 may provide the candidate recommended content replacing the first specific content to the user for selection by the user according to the following steps: identifying a type of the first specific content; generating a plurality of candidate recommended contents from alternative recommended content sources according to the type of the first specific content; and sequencing the candidate recommended contents according to a first preset rule so as to select the first N candidate recommended contents for a user to select, wherein N is a positive integer.
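The three steps just listed — identify the content type, generate candidates from the alternative sources, then rank and take the top N — can be sketched as below; the toy type check, the two stand-in sources, and the length-based ranking rule are invented for illustration only.

```python
def recommend(content, sources, rank_score, n=3):
    """Identify the content type, gather candidates from each alternative
    recommended-content source, rank them, and return the top N."""
    ctype = "entity" if content.istitle() else "word"   # toy type identification
    candidates = []
    for source in sources:
        candidates.extend(source(ctype, content))
    candidates.sort(key=rank_score, reverse=True)
    return candidates[:n]

# Hypothetical sources and a trivial length-based ranking, for illustration.
lexicon = lambda ctype, c: ["large", "big"] if ctype == "word" else []
knowledge_base = lambda ctype, c: [c + " Inc."] if ctype == "entity" else []

top = recommend("huge", [lexicon, knowledge_base], rank_score=len, n=2)
```

In practice `rank_score` would be the weighted feature sum described in the following paragraphs rather than string length.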
For example, step S103 may score the multiple candidate recommended contents according to features of one or more of parts of speech of the multiple candidate recommended contents, original word information coverage of the multiple candidate recommended contents, additional information inclusion of the multiple candidate recommended contents, context fluency, user portrait preference, user behavior, and domain types of the multiple candidate recommended contents to obtain a weighted sum of feature scores, and rank the multiple candidate recommended contents according to the weighted sum.
For example, step S103 may obtain a weighted sum of the feature scores through a second predetermined rule or a first neural network.
For example, the weighted sum of the scores of the various features may include a weighted sum of one or both of the base score of the various features and an additional score based on the user behavior information and the first text to be extracted.
For example, the base scores for the various features may use a uniform weight for all users.
For example, the base scores for the various features may be weighted differently for different users.
For example, the additional score may be obtained by directly modifying the base score based on the user behavior information, or by adding an additional feature obtained based on the first to-be-extracted digest text to the base score.
For example, the alternative recommended content sources may include one or more of a lexicon, a language model, a knowledge base, coreference resolution, other candidates from path search, and sentence reordering.
For example, the type of the first specific content may include one or more of part of speech, whether it is an entity, and whether it is a sentence.
For example, in a case that the user behavior information is that the second specific content in the first to-be-extracted abstract text is added to the intermediate text, step S103 may directly add the second specific content in the first to-be-extracted abstract text to the intermediate text to generate a target abstract text; or step S103 may use the second specific content as key content, and process both the first to-be-extracted abstract text and the key content using the first model to generate the target abstract text; or step S103 may adaptively add the second specific content in the first to-be-extracted abstract text to the intermediate text, according to one or both of the similarity or information amount between the second specific content and the intermediate text and the length of the intermediate text, to generate the target abstract text.
For example, in a case where the user behavior information is to obtain first additional information associated with but different from the intermediate text to add to the intermediate text to generate the target abstract text, step S103 may provide the user with one or more second to-be-extracted abstract texts based on the first to-be-extracted abstract text, and in a case where the user selects a desired second to-be-extracted abstract text, step S103 may process the first to-be-extracted abstract text and the desired second to-be-extracted abstract text using the first model according to a third predetermined rule to generate the target abstract text.
For example, step S103 may search for one or more second to-be-extracted digest texts that are associated with but different from the first to-be-extracted digest text based on the key information and the type contained in the first to-be-extracted digest text, and deduplicate and sort the one or more second to-be-extracted digest texts to provide the top M second to-be-extracted digest texts to the user, where M is a positive integer.
For example, step S103 may sort one or more second to-be-extracted summary texts associated with but different from the first to-be-extracted summary text by one or more dimensions according to one or more of the following fourth predetermined rules: similarity with the first to-be-extracted abstract text, difference with the coverage area of the first to-be-extracted abstract text, time difference with the first to-be-extracted abstract text, and user portrait preference.
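The multi-dimension ranking of related second texts can be sketched as a weighted sum over dimension scores; the token-overlap similarity, coverage-difference, and recency formulas below are toy stand-ins for the fourth predetermined rule's real measures, and the field names are assumptions.

```python
def rank_related_texts(first_text, candidates, weights, m=2):
    """Score each candidate second text on several dimensions and return
    the top-M texts."""
    first_tokens = set(first_text.split())

    def dims(cand):
        tokens = set(cand["text"].split())
        return {
            "similarity": len(first_tokens & tokens) / len(first_tokens | tokens),
            "coverage_diff": len(tokens - first_tokens) / max(len(tokens), 1),
            "recency": 1.0 / (1 + cand["days_apart"]),
        }

    def total(cand):
        return sum(weights[k] * v for k, v in dims(cand).items())

    return [c["text"] for c in sorted(candidates, key=total, reverse=True)[:m]]
```

A user-portrait-preference dimension would slot in as one more entry in `dims`.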
For example, step S103 may put the first additional information obtained by processing the desired second abstract text to be extracted by using the first model at a specific position of the intermediate text according to one or more of the length, the similarity, and the correlation ratio of the first abstract text to be extracted and the desired second abstract text to be extracted to generate the target abstract text.
For example, in a case that the user behavior information is to acquire information related to third specific content in the intermediate text, step S103 may provide the user with the information related to the third specific content, so that the user may select the information related to the third specific content or complement the third specific content to generate the target abstract text.
For example, step S103 may utilize coreference resolution to process the third specific content to obtain one or more candidate contents of the third specific content, and utilize the one or more candidate contents of the third specific content to complement the third specific content.
For example, step S103 may sort one or more pieces of information related to the third specific content searched from the knowledge base according to one or more of the content of the information related to the third specific content, the type of the information related to the third specific content, the domain of the first text to be extracted, and the weighted sum thereof, and display the information related to the third specific content to the user.
For example, in a case that the user behavior information is to modify an order of a first specific sentence included in the intermediate text, step S103 may adjust an order of the first specific sentence and a sentence related to the first specific sentence according to the user behavior information to generate a target abstract text.
For example, step S103 may construct a structure diagram of the sentence in which the first specific sentence is associated with the first specific sentence, and adjust the order of the first specific sentence and the sentence related to the first specific sentence according to the structure diagram to generate the target abstract text.
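A minimal sketch of this structure-graph-guided reordering follows: when one sentence is moved, the sentences the graph marks as depending on it move with it, so that elaborations stay next to their antecedents. The index-based graph representation is an assumption for illustration.

```python
def reorder(sentences, graph, moved, new_pos):
    """Move sentence index `moved` to position `new_pos`, carrying along
    the sentences the structure graph marks as depending on it."""
    group = [moved] + sorted(graph.get(moved, []))
    rest = [i for i in range(len(sentences)) if i not in group]
    order = rest[:new_pos] + group + rest[new_pos:]
    return [sentences[i] for i in order]

sents = ["Intro.", "Main result.", "Detail of the result.", "Conclusion."]
graph = {1: [2]}                 # sentence 2 elaborates on sentence 1
moved_first = reorder(sents, graph, moved=1, new_pos=0)
```

Here moving the result sentence to the front also pulls its detail sentence forward, preserving coherence.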
For example, the text processing method may further include: and acquiring historical information of a user, and processing the first abstract text to be extracted by using a first model based on the historical information of the user to generate the target abstract text.
For example, the text processing method may further include: and processing the first to-be-extracted abstract text by using a first model based on the user specific information table to generate a target abstract text.
For example, the text processing method may further display one or more of a target abstract text obtained based on the user behavior information, a target abstract text obtained based on the history information of the user, and a target abstract text obtained based on the user preference to the user for selection.
Through the text processing method of the embodiment of the disclosure, the target abstract expected by the user can be obtained through interaction with the user or through user setting.
Next, a text processing device 1100 according to an embodiment of the present disclosure is described with reference to fig. 21. Fig. 21 shows a schematic diagram of a text processing device according to an embodiment of the present disclosure. Since the function of the text processing device of the present embodiment is the same as that of the method described above with reference to fig. 20, a detailed description is omitted here for simplicity.
The text processing device of the present disclosure includes a processor 1102; and a memory 1101 in which computer-readable instructions are stored that, when executed by the processor, perform a text processing method comprising: acquiring a first to-be-extracted abstract text; acquiring user behavior information; and processing the first to-be-extracted abstract text using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
For technical effects of the text processing apparatus 1000 and the text processing device 1100 in different embodiments, reference may be made to technical effects of the text processing method provided in the embodiments of the present disclosure, and details are not repeated here.
The text processing apparatus 1000 and the text processing device 1100 may be used for various appropriate electronic devices.
Fig. 22 is a schematic diagram of a computer-readable storage medium 1200 according to an embodiment of the present disclosure.
As shown in fig. 22, the present disclosure also includes a computer-readable storage medium 1200 for storing computer-readable instructions 1201, the instructions causing a computer to perform a text processing method, the method comprising: acquiring a first abstract text to be extracted; acquiring user behavior information; and processing the first to-be-extracted abstract text by using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
< hardware Structure >
The block diagrams used in the description of the above embodiments show blocks in units of functions. These functional blocks (structural units) are implemented by any combination of hardware and/or software. The means for implementing each functional block is not particularly limited. That is, each functional block may be implemented by one apparatus that is physically and/or logically integrated, or by two or more apparatuses that are physically and/or logically separated and connected directly and/or indirectly (for example, by wire and/or wirelessly).
For example, the electronic device in one embodiment of the present disclosure may function as a computer that executes the processing of the training method of the present disclosure. Fig. 23 is a diagram showing an example of a hardware configuration of the electronic device 60 according to the embodiment of the present disclosure. The electronic apparatus 60 may be physically configured as a computer device including a processor 1010, a memory 1020, a storage 1030, a communication device 1040, an input device 1050, an output device 1060, a bus 1070, and the like.
In the following description, the words "device" or the like may be replaced with circuits, devices, units, or the like. The hardware configuration of the electronic apparatus 60 may include one or more of the devices shown in the drawings, or may not include some of the devices.
For example, although only one processor 1010 is shown, there may be a plurality of processors. The processing may be executed by one processor, or may be executed by one or more processors simultaneously, sequentially, or by another method. In addition, the processor 1010 may be implemented by one or more chips.
The functions in the electronic device 60 are realized, for example, as follows: by reading predetermined software (program) into hardware such as the processor 1010 and the memory 1020, the processor 1010 performs an operation to control communication by the communication device 1040 and to control reading and/or writing of data in the memory 1020 and the memory 1030.
The processor 1010 causes, for example, an operating system to operate, thereby controlling the entire computer. The processor 1010 may be a Central Processing Unit (CPU) including an interface with a peripheral device, a control device, an arithmetic device, a register, and the like.
Further, the processor 1010 reads out a program (program code), a software module, data, and the like from the memory 1030 and/or the communication device 1040 to the memory 1020, and executes various processes according to them. As the program, a program that causes a computer to execute at least a part of the operations described in the above embodiments may be used. For example, the control unit 401 of the electronic device 60 may be implemented by a control program stored in the memory 1020 and operated by the processor 1010, and may be implemented similarly for other functional blocks.
The memory 1020 is a computer-readable recording medium, and may be configured of at least one of a Read Only Memory (ROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), a Random Access Memory (RAM), and other suitable storage media. The memory 1020 may also be referred to as a register, a cache, a main memory (primary storage), or the like. The memory 1020 may store an executable program (program code), software modules, and the like for implementing the methods according to the embodiments of the present disclosure.
The storage 1030 is a computer-readable recording medium, and may be configured of at least one of a flexible disk, a Floppy (registered trademark) disk, a magneto-optical disk (e.g., a Compact Disc Read Only Memory (CD-ROM)), a Digital Versatile Disc (DVD), a Blu-ray (registered trademark) disc, a removable disk, a hard disk drive, a smart card, a flash memory device (e.g., a card, a stick, a key drive), a magnetic stripe, a database, a server, and other suitable storage media. The storage 1030 may also be referred to as a secondary storage device.
The communication device 1040 is hardware (transmission/reception device) for performing communication between computers via a wired and/or wireless network, and is also referred to as a network device, a network controller, a network card, a communication module, or the like.
The input device 1050 is an input device (e.g., a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts input from the outside. The output device 1060 is an output device (for example, a display, a speaker, a Light Emitting Diode (LED) lamp, or the like) that performs output to the outside. The input device 1050 and the output device 1060 may be integrated (e.g., a touch panel).
The devices such as the processor 1010 and the memory 1020 are connected to each other via a bus 1070 for communicating information. The bus 1070 may be constituted by a single bus or may be constituted by different buses between devices.
In addition, the electronic Device 60 may include hardware such as a microprocessor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), and the like, and a part or all of each functional block may be implemented by the hardware. For example, the processor 1010 may be installed through at least one of these pieces of hardware.
Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or by other names, is to be broadly construed to refer to commands, command sets, code segments, program code, programs, subroutines, software modules, applications, software packages, routines, subroutines, objects, executables, threads of execution, steps, functions, and the like.
Further, software, commands, information, and the like may be transmitted or received via a transmission medium. For example, when the software is transmitted from a website, server, or other remote source using a wired technology (e.g., coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), etc.) and/or a wireless technology (e.g., infrared, microwave, etc.), the wired technology and/or wireless technology are included in the definition of transmission medium.
The embodiments and modes described in this specification may be used alone or in combination, or may be switched during execution. Note that, as long as there is no contradiction between the processing steps, sequences, flowcharts, and the like of the embodiments and the embodiments described in the present specification, the order may be changed. For example, with respect to the methods described in this specification, various elements of steps are presented in an exemplary order and are not limited to the particular order presented.
The term "according to" used in the present specification does not mean "according only" unless explicitly stated in other paragraphs. In other words, the statement "according to" means both "according to only" and "according to at least".
Any reference to elements using the designations "first", "second", etc. used in this specification is not intended to be a comprehensive limitation on the number or order of such elements. These names may be used in this specification as a convenient way of distinguishing between two or more elements. Thus, reference to a first unit and a second unit does not mean that only two units may be employed or that the first unit must precede the second unit in some manner.
When the terms "including", "including" and "comprising" and variations thereof are used in the present specification and claims, these terms are open-ended as in the term "comprising". Further, the term "or" as used in the specification or claims is not exclusive or.
Those skilled in the art will recognize that aspects of the present disclosure may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, products, or materials, or any new and useful modifications thereof. Accordingly, various aspects of the present disclosure may be carried out entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the present disclosure may be embodied as a computer product, located in one or more computer readable media, comprising computer readable program code.
The present disclosure uses specific words to describe its embodiments. Terms such as "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic described in connection with at least one embodiment of the disclosure is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the disclosure may be combined as appropriate.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
While the present disclosure has been described in detail above, it will be apparent to those skilled in the art that the present disclosure is not limited to the embodiments described in this specification. The present disclosure can be implemented with modifications and variations without departing from the spirit and scope of the present disclosure as defined by the claims. Accordingly, the description in this specification is for the purpose of illustration and is not intended to limit the present disclosure in any way.

Claims (20)

1. A text processing apparatus comprising:
a first to-be-extracted abstract text acquisition module configured to acquire a first to-be-extracted abstract text;
a user behavior information acquisition module configured to acquire user behavior information; and
a processing module configured to process the first to-be-extracted abstract text using a first model to obtain an intermediate text, and to process the intermediate text based on the acquired user behavior information to generate a target abstract text.
2. The text processing apparatus according to claim 1,
wherein, in a case where the user behavior information acquired by the user behavior information acquisition module indicates deletion of first specific content in the intermediate text, the processing module deletes the first specific content from the intermediate text to generate a target abstract text; and
in a case where the user behavior information acquired by the user behavior information acquisition module indicates modification of the first specific content in the intermediate text, the processing module provides candidate recommended content for replacing the first specific content for the user to select, and replaces the first specific content with the candidate recommended content selected by the user to generate a target abstract text.
3. The text processing apparatus according to claim 2, wherein the processing module provides the user with candidate recommended content for replacing the first specific content, for selection by the user, as follows:
identifying a type of the first specific content;
generating a plurality of candidate recommended contents from alternative recommended content sources according to the type of the first specific content; and
ranking the candidate recommended contents according to a first predetermined rule, and selecting the top N candidate recommended contents for the user to select from, where N is a positive integer.
4. The text processing apparatus according to claim 3, wherein the processing module scores each candidate recommended content on one or more features among the part of speech of the candidate recommended content, the coverage of original-word information by the candidate recommended content, the inclusion of additional information in the candidate recommended content, contextual fluency, user profile preference, user behavior, and the domain type of the candidate recommended content, obtains a weighted sum of the feature scores, and ranks the candidate recommended contents according to the weighted sum.
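As an illustrative, non-limiting sketch (the claim does not disclose concrete features, weights, or scores; all identifiers and numeric values below are hypothetical), the weighted feature scoring and ranking of claims 3–4 could be computed as:

```python
# Hypothetical sketch of the weighted-sum ranking in claims 3-4.
# Feature names, weights, and scores are illustrative assumptions,
# not values disclosed by this publication.

def rank_candidates(candidates, weights):
    """Rank candidate recommended contents by the weighted sum of their feature scores."""
    def weighted_sum(features):
        # Sum weight * score over the features scored for this candidate.
        return sum(weights[name] * score for name, score in features.items())
    return sorted(candidates, key=lambda c: weighted_sum(c["features"]), reverse=True)

weights = {"part_of_speech": 0.2, "coverage": 0.3, "fluency": 0.5}
candidates = [
    {"text": "candidate A", "features": {"part_of_speech": 0.9, "coverage": 0.4, "fluency": 0.6}},
    {"text": "candidate B", "features": {"part_of_speech": 0.5, "coverage": 0.8, "fluency": 0.9}},
]
ranked = rank_candidates(candidates, weights)
# The top-N entries of `ranked` would then be offered to the user, as in claim 3.
```

Under these assumed weights, candidate B (weighted sum 0.79) outranks candidate A (0.60).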
5. The text processing apparatus according to claim 3,
wherein the processing module obtains the weighted sum of the feature scores through a second predetermined rule or a first neural network.
6. The text processing apparatus according to claim 1, wherein, in a case where the user behavior information acquired by the user behavior information acquisition module indicates adding second specific content in the first to-be-extracted abstract text to the intermediate text,
the processing module directly adds the second specific content in the first to-be-extracted abstract text to the intermediate text to generate a target abstract text; or
the processing module treats the second specific content as key content, and processes both the first to-be-extracted abstract text and the key content using the first model to generate a target abstract text; or
the processing module adaptively adds the second specific content in the first to-be-extracted abstract text to the intermediate text, according to one or both of the similarity or amount of information between the second specific content and the intermediate text, and the length of the intermediate text, to generate a target abstract text.
7. The text processing apparatus according to claim 1, wherein, in a case where the user behavior information acquired by the user behavior information acquisition module indicates acquiring first additional information that is associated with but different from the intermediate text and adding it to the intermediate text to generate a target abstract text,
the processing module provides one or more second to-be-extracted abstract texts to the user based on the first to-be-extracted abstract text, and, in a case where the user selects a desired second to-be-extracted abstract text, the processing module processes the first to-be-extracted abstract text and the desired second to-be-extracted abstract text using the first model according to a third predetermined rule to generate the target abstract text.
8. The text processing apparatus according to claim 7, wherein the processing module searches for one or more second to-be-extracted abstract texts that are associated with but different from the first to-be-extracted abstract text, based on the key information and type contained in the first to-be-extracted abstract text, and deduplicates and ranks the one or more second to-be-extracted abstract texts to provide the top M second to-be-extracted abstract texts to the user, where M is a positive integer.
9. The text processing apparatus according to claim 8, wherein the processing module ranks the one or more second to-be-extracted abstract texts that are associated with but different from the first to-be-extracted abstract text along one or more of the following dimensions as a fourth predetermined rule: similarity to the first to-be-extracted abstract text, difference in coverage from the first to-be-extracted abstract text, time difference from the first to-be-extracted abstract text, and user profile preference.
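As a non-limiting sketch of the retrieve-deduplicate-rank flow in claims 8–9 (the scoring function, the similarity measure, and M below are illustrative assumptions, not the disclosed implementation):

```python
# Hypothetical sketch of claims 8-9: deduplicate candidate second texts,
# rank them by a scoring function, and return the top M.
# The word-overlap score is only a stand-in for the "similarity" dimension
# of claim 9; it is not disclosed by this publication.

def top_m_related(texts, score_fn, m):
    """Deduplicate candidate texts, rank them by score_fn, and return the top M."""
    seen = set()
    unique = []
    for t in texts:
        if t not in seen:  # drop verbatim duplicates
            seen.add(t)
            unique.append(t)
    return sorted(unique, key=score_fn, reverse=True)[:m]

first_text = "storm hits coastal city"

def overlap(t):
    # Number of words shared with the first to-be-extracted text.
    return len(set(t.split()) & set(first_text.split()))

related = [
    "storm damages coastal port",
    "new phone released",
    "storm damages coastal port",   # duplicate, removed by deduplication
    "city storm relief begins",
]
top2 = top_m_related(related, overlap, 2)
```

In practice the remaining claim-9 dimensions (coverage difference, time difference, user profile preference) could be folded into `score_fn` as a weighted combination.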
10. The text processing apparatus according to claim 1,
wherein, in a case where the user behavior information acquired by the user behavior information acquisition module is a request for information related to third specific content in the intermediate text, the processing module provides the information related to the third specific content for the user to select from, or supplements the third specific content, to generate a target abstract text.
11. The text processing apparatus according to claim 10, wherein the processing module processes the third specific content using a fifth predetermined rule to obtain one or more candidate contents for the third specific content, and supplements the third specific content using the one or more candidate contents.
12. The text processing apparatus according to claim 10, wherein the processing module ranks one or more pieces of information related to the third specific content retrieved from a knowledge base, according to one or more of the content of the information, the type of the information, the domain of the first to-be-extracted abstract text, and a weighted sum thereof, and displays the information related to the third specific content to the user.
13. The text processing apparatus according to claim 1,
wherein, in a case where the user behavior information acquired by the user behavior information acquisition module indicates modifying the order of a first specific sentence contained in the intermediate text, the processing module adjusts the order of the first specific sentence and the sentences related to the first specific sentence according to the user behavior information to generate a target abstract text.
14. The text processing apparatus according to claim 13, wherein the processing module constructs a structure graph of the first specific sentence and the sentences associated with the first specific sentence, and adjusts the order of the first specific sentence and the related sentences according to the structure graph to generate the target abstract text.
15. The text processing apparatus according to claim 1, further comprising a user history information acquisition module configured to acquire history information of a user,
wherein the processing module is further configured to process the first to-be-extracted abstract text using the first model based on the history information of the user to generate the target abstract text.
16. The text processing apparatus according to claim 1, further comprising a user preference setting module configured to form a user-specific information table from preference options selected when a user uses the text processing apparatus or preference options selected when the user registers with the text processing apparatus,
wherein the processing module is further configured to process the first to-be-extracted abstract text using the first model based on the user-specific information table to generate a target abstract text.
17. The text processing apparatus according to claim 1, further comprising a display module configured to display, to a user for selection, one or more of a target abstract text obtained based on user behavior information, a target abstract text obtained based on history information of the user, and a target abstract text obtained based on user preference.
18. A method of text processing, comprising:
acquiring a first to-be-extracted abstract text;
acquiring user behavior information; and
processing the first to-be-extracted abstract text using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
19. A text processing apparatus, the apparatus comprising:
a processor; and
a memory having computer-readable program instructions stored therein,
wherein the computer-readable program instructions, when executed by the processor, perform a text processing method comprising:
acquiring a first to-be-extracted abstract text;
acquiring user behavior information; and
processing the first to-be-extracted abstract text using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
20. A computer-readable storage medium storing computer-readable instructions for causing a computer to perform a text processing method comprising:
acquiring a first to-be-extracted abstract text;
acquiring user behavior information; and
processing the first to-be-extracted abstract text using a first model to obtain an intermediate text, and processing the intermediate text based on the acquired user behavior information to generate a target abstract text.
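The three steps recited in claims 18–20 (acquire a to-be-extracted text, acquire user behavior information, summarize with a first model and then revise the intermediate text) can be sketched end to end as follows. This is a non-limiting illustration: `first_model` and `apply_behavior` are hypothetical stand-ins, as the publication does not disclose a concrete model or behavior-handling algorithm.

```python
# Hypothetical end-to-end sketch of the method of claims 18-20.
# `first_model` and `apply_behavior` are illustrative placeholders only.

def first_model(text):
    # Stand-in summarizer: keep the first sentence as the "intermediate text".
    return text.split(".")[0]

def apply_behavior(intermediate, behavior):
    # Stand-in revision step: handle a "delete" behavior as in claim 2.
    if behavior.get("action") == "delete":
        return intermediate.replace(behavior["content"], "").strip()
    return intermediate

def generate_target_abstract(source_text, behavior):
    intermediate = first_model(source_text)        # first model -> intermediate text
    return apply_behavior(intermediate, behavior)  # revise using user behavior information

source = "The storm hit the coastal city on Monday night. Cleanup began Tuesday."
target = generate_target_abstract(source, {"action": "delete", "content": "on Monday night"})
```

The other behavior types in the claims (modify, add, reorder, request related information) would be further branches of `apply_behavior` in the same pattern.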
CN202110608858.6A 2021-06-01 2021-06-01 Text processing device, method, apparatus, and computer-readable storage medium Pending CN115438173A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110608858.6A CN115438173A (en) 2021-06-01 2021-06-01 Text processing device, method, apparatus, and computer-readable storage medium
JP2022089602A JP2022184830A (en) 2021-06-01 2022-06-01 Text processing apparatus, method, device, and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN115438173A true CN115438173A (en) 2022-12-06



Also Published As

Publication number Publication date
JP2022184830A (en) 2022-12-13


Legal Events

Date Code Title Description
PB01 Publication