WO2022166613A1 - Method and apparatus for identifying characters in text, readable medium, and electronic device


Info

Publication number
WO2022166613A1
WO2022166613A1 (PCT/CN2022/073126, CN2022073126W)
Authority
WO
WIPO (PCT)
Prior art keywords
word
text
recognized
character
training
Prior art date
Application number
PCT/CN2022/073126
Other languages
English (en)
French (fr)
Inventor
伍林
Original Assignee
北京有竹居网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110145123.4A external-priority patent/CN112906380B/zh
Application filed by 北京有竹居网络技术有限公司
Publication of WO2022166613A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/237: Lexical tools
    • G06F40/242: Dictionaries
    • G06F40/279: Recognition of textual entities
    • G06F40/284: Lexical analysis, e.g. tokenisation or collocates
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295: Named entity recognition

Definitions

  • the present disclosure relates to the technical field of electronic information processing, and in particular, to a method, apparatus, readable medium and electronic device for character recognition in text.
  • the present disclosure provides a method for identifying characters in text, the method comprising: acquiring each word included in the text to be recognized and a word vector corresponding to each word; determining the word vector corresponding to the associated word corresponding to each word, where the associated word is determined according to the combined word corresponding to the word, and the combined word is composed of the word and a preset number of words adjacent to the word; forming, from the word vector corresponding to each word and the word vector corresponding to the associated word corresponding to the word, a combined vector corresponding to the word, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence including the combined vector corresponding to each word in the text to be recognized; and determining, according to the combined vector sequence and a pre-trained recognition model, the character entities included in the text to be recognized.
  • the present disclosure provides an apparatus for identifying characters in text, the apparatus comprising: an acquisition module for acquiring each word included in the text to be recognized and a word vector corresponding to each word; a determination module for determining the word vector corresponding to the associated word corresponding to each word in the text to be recognized, where the associated word is determined according to the combined word corresponding to the word, and the combined word is composed of the word and a preset number of words adjacent to the word; a processing module for combining the word vector corresponding to each word and the word vector corresponding to the associated word corresponding to the word into the combined vector corresponding to the word, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence including the combined vector corresponding to each word in the text to be recognized; and a recognition module for determining, according to the combined vector sequence and a pre-trained recognition model, the character entities included in the text to be recognized.
  • the present disclosure provides a computer-readable medium on which a computer program is stored, which, when executed by a processing apparatus, implements the method described in the first aspect of the present disclosure.
  • the present disclosure provides an electronic device, comprising: a storage device on which a computer program is stored; and a processing device for executing the computer program in the storage device, so as to implement the method described in the first aspect of the present disclosure.
  • the present disclosure provides a computer program, comprising: instructions that, when executed by a processor, cause the processor to perform the method provided by the first aspect of the present disclosure.
  • the present disclosure provides a computer program product comprising instructions that, when executed by a processor, cause the processor to perform the method provided by the first aspect of the present disclosure.
  • Fig. 1 is a flowchart of a method for identifying characters in text according to an exemplary embodiment;
  • Fig. 2 is a flowchart of another method for identifying characters in text according to an exemplary embodiment;
  • Fig. 3 is a flowchart of another method for identifying characters in text according to an exemplary embodiment;
  • Fig. 4 is a flowchart of another method for identifying characters in text according to an exemplary embodiment;
  • Fig. 5 is a flowchart illustrating the training of a recognition model according to an exemplary embodiment;
  • Fig. 6 is a block diagram of an apparatus for identifying characters in text according to an exemplary embodiment;
  • Fig. 7 is a block diagram of another apparatus for identifying characters in text according to an exemplary embodiment;
  • Fig. 8 is a block diagram of another apparatus for identifying characters in text according to an exemplary embodiment;
  • Fig. 9 is a block diagram of another apparatus for identifying characters in text according to an exemplary embodiment;
  • Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment.
  • the term "including" and variations thereof denote open-ended inclusion, i.e., "including but not limited to".
  • the term "based on" means "based at least in part on".
  • the term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
  • In the related art, each character in an e-book needs to be manually marked, so the processing efficiency and accuracy are low.
  • the embodiments of the present disclosure provide a method for identifying characters in text, so as to improve the accuracy of identifying character entities.
  • Fig. 1 is a flowchart of a method for identifying characters in text according to an exemplary embodiment. As shown in Fig. 1 , the method includes the following steps 101 to 104.
  • Step 101 Obtain each word included in the text to be recognized and a word vector corresponding to each word.
  • the text to be recognized that may include character entities is first obtained.
  • the text to be recognized can be, for example, one or more sentences in a text file specified by the user (which can be understood as the specified total text mentioned later), one or more paragraphs in the text file, or one or more chapters in the text file.
  • the text file may be, for example, an e-book, or other types of files, such as news, articles on official accounts, blogs, and the like.
  • each word included in the text to be recognized and the word vector corresponding to each word are extracted.
  • the word vector corresponding to each word can be sequentially searched in the pre-trained word vector table, or the word vector corresponding to each word can be generated by using the pre-trained Word2vec model.
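As an illustrative sketch of the lookup described above (the table contents, the characters in it, and the 100-dimension are placeholders chosen for this example, not values fixed by the disclosure; a real system would load the table from a trained Word2vec model):

```python
import random

# Hypothetical pre-trained word-vector table mapping each character to a
# fixed-length vector; random placeholder values stand in for trained ones.
random.seed(0)
VECTOR_DIM = 100
word_vector_table = {ch: [random.random() for _ in range(VECTOR_DIM)]
                     for ch in "大哥还是没有消息"}

def lookup_word_vectors(text):
    """Look up the word vector for each character of the text to be recognized."""
    return [word_vector_table[ch] for ch in text]

vectors = lookup_word_vectors("大哥")
print(len(vectors), len(vectors[0]))  # one 100-dim vector per character
```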
  • Step 102 Determine the word vector corresponding to the associated word corresponding to each word in the text to be recognized, the associated word is determined according to the combined word corresponding to the word, and the combined word is composed of the word and a preset number of words adjacent to the word. composition.
  • Specifically, the combined word corresponding to the word may be determined first, and then the corresponding associated word may be determined according to the combined word, where the associated word may be one word or multiple words. Finally, the word vector corresponding to each associated word is determined: for example, it can be searched in the pre-trained word vector table, or generated by the pre-trained Word2vec model. The combined word corresponding to each word is composed of the word and a preset number of words adjacent to it, and can be understood as a word formed by the word together with its context.
  • For example, if the preset number is two, the combined word is composed of the word in the text to be recognized, the two words before it, and the two words after it.
  • the associated words can be understood as all of the combined words.
  • the associated words can also be understood as the combined words that meet specified requirements (for example, matching a preset dictionary). For example, suppose the preset number is two and the text to be recognized is: "There is still no news from Big Brother".
  • In step 103, the word vector corresponding to each word and the word vector corresponding to the associated word corresponding to the word are formed into a combined vector corresponding to the word, so as to obtain a combined vector sequence corresponding to the text to be recognized, where the combined vector sequence includes the combined vector corresponding to each word in the text to be recognized.
  • Step 104 Determine the character entity included in the text to be recognized according to the combined vector sequence and the pre-trained recognition model.
  • Specifically, the word vector corresponding to the word can be combined with the word vector corresponding to the associated word corresponding to the word to obtain the combined vector corresponding to the word, thereby obtaining the combined vector sequence corresponding to the text to be recognized. That is to say, the combined vector corresponding to each word includes the word vector corresponding to the word and the word vector corresponding to the associated word corresponding to the word.
  • For example, suppose the text to be recognized includes 20 words, each word corresponds to a 1*100-dimensional word vector, and a certain word corresponds to two associated words, each with a 1*100-dimensional word vector; then the combined vector corresponding to that word is 1*300-dimensional.
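The dimension arithmetic in this example can be checked with a small sketch; the concatenation order and the all-zero placeholder vectors are assumptions made only for illustration:

```python
VECTOR_DIM = 100

def build_combined_vector(word_vec, associated_word_vecs):
    """Concatenate a character's word vector with the word vectors of its
    associated words to form the combined vector for that character."""
    combined = list(word_vec)
    for vec in associated_word_vecs:
        combined.extend(vec)
    return combined

word_vec = [0.0] * VECTOR_DIM            # 1*100-dimensional word vector
associated = [[0.0] * VECTOR_DIM,        # two associated words,
              [0.0] * VECTOR_DIM]        # 1*100-dimensional each
combined = build_combined_vector(word_vec, associated)
print(len(combined))  # 100 + 2*100 = 300
```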
  • the combined vector sequence is input into the pre-trained recognition model to determine the role entities included in the text to be recognized according to the output of the recognition model.
  • the role entities can be zero (that is, there is no role entity in the text to be recognized), one, or multiple.
  • Character entities included in the text to be recognized may be, for example: person names, appellations, personal pronouns (e.g., you, me, she, him, etc.), anthropomorphic animals, anthropomorphic objects, and the like.
  • the recognition model can directly output the character entity, and the recognition model can also output the label of each word in the text to be recognized, and then determine the character entity according to the label of each word.
  • the recognition model labels each word in the text to be recognized, which is used to indicate whether each word belongs to a character entity.
  • the recognition model can be a deep learning model obtained by training a large number of training samples in advance.
  • For example, the structure of the recognition model may be a combination of a Transformer and a CRF (Conditional Random Field); compared with a BLSTM (Bidirectional Long Short-Term Memory) network, the recognition efficiency is higher.
  • Specifically, the combined vector sequence can be used as the input of the Transformer to obtain a feature vector output by the Transformer that characterizes the combined vector sequence, and then the feature vector can be used as the input of the CRF to obtain the annotation that the CRF outputs for each combined vector in the combined vector sequence, that is, the annotation of each word in the text to be recognized.
  • In this way, the recognition model can learn the relationship between each word and its associated words, avoiding the problems of missed characters and extra characters in the process of character entity recognition, which can improve the accuracy of the character entities marked by the recognition model.
  • In summary, the present disclosure first obtains each word in the text to be recognized and the corresponding word vector, and then determines the word vector corresponding to the associated word corresponding to each word, where the associated word is determined according to the combined word corresponding to the word.
  • The word vector corresponding to each word and the word vector corresponding to the corresponding associated word are then formed into the combined vector corresponding to the word, so as to obtain the combined vector sequence corresponding to the text to be recognized, which includes the combined vector corresponding to each word. Finally, the character entities included in the text to be recognized are determined according to the combined vector sequence and the pre-trained recognition model.
  • In this way, the present disclosure considers not only each character included in the text to be recognized but also the associated words of each character, thereby improving the accuracy of identifying character entities.
  • FIG. 2 is a flowchart of another method for identifying characters in text according to an exemplary embodiment. As shown in FIG. 2 , step 102 may include steps 1021 to 1022 .
  • Step 1021 For each character, obtain a combined word composed of the character and a preset number of characters adjacent to the character.
  • Step 1022 Among the combined words, a combined word that matches a preset word dictionary is used as an associated word corresponding to the character, and the word vector corresponding to the associated word is obtained.
  • Specifically, for each character, the combined words composed of the character and a preset number of adjacent characters can be determined first; that is, each combined word consists of the character and a preset number of characters adjacent to it.
  • For example, if the preset number is three, the combined word corresponding to the character is composed of the character, the three characters before it, and the three characters after it in the text to be recognized.
  • Then each combined word is matched against the preset word dictionary in turn. If a combined word matches, it is determined as an associated word corresponding to the character, and the word vector corresponding to the associated word is obtained. The associated words can be zero (that is, none of the combined words corresponding to the character match the word dictionary), one, or multiple.
  • The word dictionary can be understood as a dictionary that collects a large number of role entities in advance. Filtering the combined words corresponding to each character through the word dictionary removes a large amount of semantically meaningless interference while preserving the relationship between each character and its associated words, thereby improving the accuracy of identifying role entities.
  • For example, if the text to be recognized is: "The weather is good today, is Miss going out?", then for the first character "jin" (今, "now"), there are no three characters before it, so it can only be combined with the three characters after it.
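A minimal sketch of steps 1021 and 1022 under one plausible reading: the disclosure does not pin down exactly how combined words are enumerated, so this sketch takes every substring of at least two characters inside the window of a preset number of characters on each side, clipped at the text boundaries. The toy dictionary and the Chinese rendering of the "Big Brother" example are assumptions made for illustration:

```python
def candidate_combined_words(text, index, preset_number):
    """Step 1021 (one plausible reading): all substrings of length >= 2 that
    contain text[index], within `preset_number` characters on each side,
    clipped at the boundaries of the text to be recognized."""
    words = set()
    lo = max(0, index - preset_number)
    hi = min(len(text), index + preset_number + 1)
    for start in range(lo, index + 1):
        for end in range(index + 1, hi + 1):
            if end - start >= 2:
                words.add(text[start:end])
    return words

def associated_words(text, index, preset_number, word_dictionary):
    """Step 1022: keep only the combined words that match the word dictionary."""
    return candidate_combined_words(text, index, preset_number) & word_dictionary

text = "大哥还是没有消息"        # "There is still no news from Big Brother"
dictionary = {"大哥", "消息"}   # toy preset word dictionary
print(associated_words(text, 0, 2, dictionary))  # {'大哥'}
```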
  • FIG. 3 is a flowchart of another method for identifying characters in text according to an exemplary embodiment. As shown in FIG. 3 , step 104 may include steps 1041 to 1042 .
  • Step 1041 Input the combined vector sequence into the recognition model to obtain an attribute label corresponding to each word in the text to be recognized output by the recognition model, where the attribute label is used to indicate whether the word belongs to a character entity.
  • Step 1042 Determine the character entity included in the to-be-recognized text according to the attribute label corresponding to each word in the to-be-recognized text.
  • the combined vector sequence may be input into the recognition model to obtain an attribute label corresponding to each word in the text to be recognized output by the recognition model, and the attribute label can indicate whether the corresponding word belongs to a character entity.
  • For example, when the attribute label output by the recognition model is 1, the corresponding character belongs to a character entity, and when the attribute label is 0, the corresponding character does not belong to a character entity.
  • In this way, the recognition model can be understood as a seq2seq model: the input is a combined vector sequence that includes the combined vector corresponding to each word in the text to be recognized, and the output is an attribute label set that includes the attribute label corresponding to each word in the text to be recognized.
  • the text to be recognized is: "There is still no news from Big Brother”.
  • the combined vector sequence corresponding to the text to be recognized is input into the recognition model, and the attribute label set output by the recognition model is: 00001100000; the character entity can then be determined as: "Big Brother".
  • Further, in addition to indicating whether the corresponding character belongs to a character entity, the attribute tag can also indicate whether the character belongs to a single-character entity (that is, the character alone constitutes a character entity) or a multi-character entity (that is, the character entity includes multiple characters).
  • The attribute tag can also indicate the position of the character in the character entity, that is, whether the character is at the starting position, the ending position, or a middle position of the character entity.
  • If the attribute tag indicates that the character is at the starting position, the character is the first character of the character entity; if it indicates the ending position, the character is the last character of the character entity; and if it indicates a middle position, the character is any character of the character entity between the first and the last.
  • For example, when the attribute label is the letter O, the corresponding character does not belong to a character entity; when it is the letter S, the corresponding character is a single-character entity; when it is the letter B, the corresponding character is at the starting position of a multi-character entity; when it is the letter M, the corresponding character is at a middle position of a multi-character entity; and when it is the letter E, the corresponding character is at the ending position of a multi-character entity.
  • In this case, step 1042 can be: if the attribute label corresponding to a target character indicates that the target character belongs to a character entity, determine the character entity including the target character according to the position of the target character in the character entity indicated by the attribute label, where the target character is any character in the text to be recognized.
  • Specifically, if the attribute label corresponding to the target character indicates that the target character is a single-character entity, the target character can be directly used as a role entity. If the attribute label indicates that the target character belongs to a multi-character entity, the character entity containing the target character can be further determined according to the position, indicated by the attribute label, of the target character in the character entity.
  • For example, if the target character is at the starting position, the word composed of the characters from the target character up to the character at the ending position can be used as a role entity.
  • For example, if the text to be recognized is: "Miss, are you going out?", and the attribute labels corresponding to the characters output by the recognition model are: BEOSOOOOOO, then the characters labeled B and E form the character entity "Miss", and the character labeled S forms the character entity "you".
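The O/S/B/M/E labelling described above can be decoded with a short sketch; the sample text and label string are illustrative, loosely following the "Miss, are you going out" example (the exact Chinese sentence is an assumption):

```python
def decode_entities(chars, labels):
    """Decode O/S/B/M/E attribute labels into the character entities they mark.
    O: outside any entity; S: single-character entity;
    B/M/E: beginning / middle / end of a multi-character entity."""
    entities, current = [], []
    for ch, label in zip(chars, labels):
        if label == "S":
            entities.append(ch)
        elif label == "B":
            current = [ch]
        elif label == "M":
            current.append(ch)
        elif label == "E":
            current.append(ch)
            entities.append("".join(current))
            current = []
    return entities

chars = "小姐，你要出门去吗？"   # "Miss, are you going out?"
labels = "BEOSOOOOOO"
print(decode_entities(chars, labels))  # ['小姐', '你']
```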
  • the occurrences of different character entities can be counted, so that the character entity with the most occurrences can be used as the main character entity in the text to be identified. Further, among the identified character entities, the target character entity to which the dialogue sentence included in the text to be identified belongs may be determined. The following is a detailed description of how to determine the target role entity.
  • Fig. 4 is a flowchart showing another method for identifying characters in text according to an exemplary embodiment.
  • the text to be recognized includes a first text to be recognized and a second text to be recognized; the first text to be recognized corresponds to any dialogue sentence in the designated total text, and the second text to be recognized corresponds to a sentence in the designated total text whose distance from the dialogue sentence corresponding to the first text to be recognized satisfies a preset condition.
  • the method may further include: steps 105 to 107 .
  • Step 105 Determine the attribute feature corresponding to each character entity included in the text to be recognized, the attribute feature including one or more of: a first positional relationship between the character entity and the first text to be recognized, a second positional relationship between the text to which the character entity belongs and the first text to be recognized, and a dialogue attribute of the text to which the character entity belongs.
  • the text to be recognized may include a first text to be recognized and a second text to be recognized; the first text to be recognized corresponds to any dialogue sentence in the specified total text, and the second text to be recognized corresponds to a sentence in the specified total text whose distance from that dialogue sentence satisfies the preset condition.
  • the specified total text includes the text corresponding to each of the multiple sentences.
  • the specified total text may be an e-book specified by the user, or may be a chapter or a segment of an e-book.
  • Specifically, the multiple sentences included in the specified total text can be divided into two categories according to whether they include dialogue symbols: dialogue sentences and non-dialogue sentences. A dialogue symbol is used to identify a sentence as a dialogue sentence; for example, it may be a pair of double quotation marks "", and the present disclosure does not specifically limit the dialogue symbol.
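A minimal sketch of this dialogue/non-dialogue split; the concrete symbol set is an assumption, since the disclosure explicitly leaves the dialogue symbol open:

```python
# Assumed dialogue symbols: curly and straight double quotation marks.
DIALOGUE_SYMBOLS = ("“", "”", '"')

def is_dialogue_sentence(sentence):
    """A sentence counts as a dialogue sentence if it contains a dialogue symbol."""
    return any(sym in sentence for sym in DIALOGUE_SYMBOLS)

print(is_dialogue_sentence("“大哥还是没有消息。”"))  # True
print(is_dialogue_sentence("他叹了口气。"))          # False
```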
  • the text corresponding to any dialogue sentence included in the specified total text may be used as the first text to be recognized.
  • the distance between the sentence corresponding to the first text to be recognized and the sentence corresponding to the second text to be recognized satisfies a preset condition, and the second text to be recognized may correspond to one or more sentences.
  • the first text to be recognized is related to the second text to be recognized, and the second text to be recognized can also be understood as the context of the first text to be recognized.
  • the sentence corresponding to the second text to be recognized may be a dialogue sentence or a non-dialogue sentence. Taking the preset condition of being within three sentences as an example, the second text to be recognized may be the text corresponding to the three sentences before and the three sentences after the first text to be recognized in the specified total text (six sentences in total).
  • the attribute characteristic corresponding to the role entity is determined.
  • the attribute feature can be understood as a feature that can reflect the relationship between the character entity and the first text to be recognized.
  • the attribute feature may include, for example, one or more of the following: the first positional relationship between the character entity and the first text to be recognized, the second positional relationship between the text to which the character entity belongs and the first text to be recognized, the Dialogue properties of the text.
  • the first positional relationship may be used to indicate whether the character entity belongs to the first text to be recognized.
  • the second positional relationship may be used to indicate that in the specified total text, the text to which the character entity belongs is located before or after the first text to be recognized.
  • the dialog attribute can be used to indicate whether the sentence corresponding to the text to which the character entity belongs is a dialog sentence.
  • Step 106 For each character entity, input the first text to be recognized, the second text to be recognized, the character entity, and the attribute feature corresponding to the character entity into the pre-trained attribution recognition model, to obtain the matching degree between the character entity and the first text to be recognized output by the attribution recognition model.
  • Step 107 Determine the target character entity to which the dialogue sentence corresponding to the first text to be recognized belongs according to the degree of matching between each character entity and the first text to be recognized.
  • Specifically, the first text to be recognized, the second text to be recognized, each character entity, and the attribute feature corresponding to the character entity can be used as the input of the pre-trained attribution recognition model, which outputs the matching degree between the character entity and the first text to be recognized. The matching degree can be understood as the probability that the dialogue sentence corresponding to the first text to be recognized belongs to the character entity.
  • the attribution recognition model may be a deep learning model obtained by training a large number of training samples in advance, and the structure may be, for example, a combination of BLSTM+Dense_layer+softmax.
  • Specifically, the first text to be recognized and the second text to be recognized can be converted into corresponding text feature sequences (i.e., text embeddings), and the character entity can be converted into a corresponding word vector. The text feature sequences, the word vector, and the attribute feature corresponding to the character entity are then spliced together as the input of the BLSTM, to obtain a feature vector output by the BLSTM that comprehensively characterizes the first text to be recognized, the second text to be recognized, the character entity, and its attribute feature. The feature vector is then used as the input of Dense_layer, and the output of Dense_layer is used as the input of softmax to obtain the probability value output by softmax; finally, this probability value is used as the matching degree between the character entity and the first text to be recognized.
  • For example, suppose the first text to be recognized includes 20 characters, the second text to be recognized includes 50 characters, and each character corresponds to a 1*300-dimensional word vector, so that the first text to be recognized and the second text to be recognized are converted into a 70*300-dimensional text feature sequence. If the character entity corresponds to a 1*300-dimensional word vector and the attribute feature corresponding to the character entity is a 1*11-dimensional vector, then the vector input into the attribution recognition model is a 70*(300+300+11)-dimensional vector.
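The shape of the spliced input can be checked with a one-line sketch using the dimensions from this example (the splicing layout is taken from the description above; everything else is placeholder arithmetic):

```python
def attribution_input_shape(n_chars_total, text_dim, entity_dim, attr_dim):
    """Shape of the spliced input to the attribution recognition model:
    each of the n character positions carries the text feature, the entity
    word vector, and the attribute feature side by side."""
    return (n_chars_total, text_dim + entity_dim + attr_dim)

# 20 characters in the first text + 50 in the second = 70 rows;
# 300-dim text feature + 300-dim entity vector + 11-dim attribute feature.
print(attribution_input_shape(20 + 50, 300, 300, 11))  # (70, 611)
```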
  • After that, according to the matching degree between each character entity and the first text to be recognized, the target character entity to which the dialogue sentence corresponding to the first text to be recognized belongs can be determined among the character entities; that is, the attribution of the dialogue sentence corresponding to the first text to be recognized is determined to be the target character entity (in other words, it is determined that the dialogue sentence is spoken by the target character entity).
  • For example, the role entity with the highest matching degree can be used as the target role entity; alternatively, the role entities can be sorted in descending order of matching degree and a specified number (for example, three) of the top role entities can be provided to the user, who then determines the target role entity.
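A sketch of this selection step; the matching degrees and entity names are invented, and `top_k = 3` mirrors the "specified number, for example three" above:

```python
def select_target_entity(matching_degrees, top_k=3):
    """Pick the role entity with the highest matching degree, and also return
    the top-k candidates that could be offered to the user instead."""
    ranked = sorted(matching_degrees.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[0][0], [entity for entity, _ in ranked[:top_k]]

# Illustrative matching degrees, as output by the attribution recognition model.
degrees = {"大哥": 0.71, "小姐": 0.18, "你": 0.11}
best, candidates = select_target_entity(degrees)
print(best, candidates)  # 大哥 ['大哥', '小姐', '你']
```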
  • Further, the target character entity can be associated with the first text to be recognized as a label, so that in the process of recording the audio corresponding to the specified total text, when the first text to be recognized is reached, the target character entity can be determined according to the label of the first text to be recognized and the recording can be performed with the timbre assigned to the target character entity in advance.
  • In this way, in addition to the first text to be recognized, the associated second text to be recognized is also considered, so that the attribution recognition model can learn the association between the first text to be recognized and the second text to be recognized. Combined with the character entities extracted from the first and second texts to be recognized and their attribute features, the attribution recognition model can further learn the association between each character entity and the first text to be recognized, thereby determining the target character entity to which the dialogue sentence corresponding to the first text to be recognized belongs, which can improve the efficiency and accuracy of dialogue attribution recognition.
  • the attribute feature corresponding to each character entity may include various features, and the first positional relationship may be determined according to the character entity and the first text to be recognized. Then, the second positional relationship is determined according to the distance between the text to which the character entity belongs and the first text to be recognized. Finally, the dialog attribute is determined according to the text to which the character entity belongs.
  • attribute features can include 11 features:
  • Feature a is used to indicate whether the character entity belongs to the first text to be recognized. If, in the specified total text, the character entity belongs to the first text to be recognized, feature a can be represented as 0. If the character entity does not belong to the first text to be recognized, then feature a can be represented as 1 if the character entity is located after the first text to be recognized, and as -1 if it is located before the first text to be recognized.
  • Feature b is used to indicate whether the character entity belongs to the target paragraph, and the target paragraph is the paragraph to which the first text to be recognized belongs, which can be understood as whether the character entity and the first text to be recognized belong to the same paragraph. For example, if the character entity belongs to the target paragraph, the feature b can be represented as 1, and if the character entity does not belong to the target paragraph, the feature b can be represented as 0.
  • Feature c is used to indicate the rank of the character entity by distance from the first text to be recognized, that is, its position in the ordering of the distances between the texts to which the character entities belong and the first text to be recognized. For example, suppose it is determined in step 104 that the text to be recognized includes four character entities A, B, C and D, whose distances from the first text to be recognized are 2, 4, 3 and 2 sentences respectively. After sorting, their ranks are 1, 3, 2 and 1 respectively, so the feature c corresponding to B can be represented as 3.
  • the feature d is used to indicate the distance between the text to which the character entity belongs and the first text to be recognized. For example, if the distance between the text to which the character entity belongs and the first text to be recognized is 2 sentences, the feature d can be represented as 2.
  • Feature e is used to indicate whether the sentence corresponding to the text to which the character entity belongs is a dialogue sentence. For example, if the sentence corresponding to the text to which the character entity belongs is a dialogue sentence, the feature e can be represented as 1; if the sentence corresponding to the text to which the character entity belongs is not a dialog sentence, then the feature e can be represented as 0.
  • the feature f is used to indicate whether the text to which the character entity belongs includes the first dialog template.
  • Feature g is used to indicate whether the text to which the character entity belongs includes the second dialog template.
  • the feature h is used to indicate whether the text to which the character entity belongs includes the third dialog template.
  • the first dialogue template may include templates indicating the beginning of a dialogue, such as "XX said:" or "XX laughed:".
  • the second dialogue template may include templates indicating the end of a dialogue, such as "XX said." or "XX laughed.".
  • the third dialogue template may include templates indicating that a dialogue may occur, such as "said", "spoke" or "laughed". If the text includes the corresponding template, the feature can be represented as 1; if it does not, the feature can be represented as 0.
  • Feature i is used to indicate the position of the character entity among the character entities included in the text to which it belongs, that is, its ordinal number among those character entities.
  • a text includes three character entities A, B, and C in the order from left to right, then the feature i corresponding to A can be represented as 1, the feature i corresponding to B can be represented as 2, and the feature i corresponding to C can be expressed as 3.
  • the feature j is used to indicate whether a sentence before the dialogue sentence corresponding to the first text to be recognized in the specified total text is a dialogue sentence. For example, if a sentence before the dialogue sentence corresponding to the first text to be recognized is a dialogue sentence, then the feature j can be represented as 1; if the sentence before the dialogue sentence corresponding to the first text to be recognized is not a dialogue sentence, then the feature j can be represented as 0.
  • the feature k is used to indicate whether a sentence after the dialogue sentence corresponding to the first text to be recognized in the specified total text is a dialogue sentence. For example, if the sentence after the dialogue sentence corresponding to the first text to be recognized is a dialogue sentence, then the feature k can be represented as 1; if it is not a dialogue sentence, then the feature k can be represented as 0.
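  • As a minimal sketch of how some of these features might be computed (the dialogue-template patterns below are hypothetical English stand-ins for the language-specific templates described above, and the ranking follows the A, B, C, D example for feature c):

```python
import re

# Hypothetical English stand-ins for the first/second/third dialogue templates.
FIRST_TEMPLATE = re.compile(r"\w+ (said|laughed):$")    # dialogue begins
SECOND_TEMPLATE = re.compile(r"\w+ (said|laughed)\.$")  # dialogue ends
THIRD_TEMPLATE = re.compile(r"(said|laughed)")          # dialogue may occur

def template_features(sentence):
    """Features f, g, h: 1 if the sentence matches the first/second/third
    dialogue template respectively, otherwise 0."""
    return tuple(1 if p.search(sentence) else 0
                 for p in (FIRST_TEMPLATE, SECOND_TEMPLATE, THIRD_TEMPLATE))

def distance_ranks(distances):
    """Feature c: rank each character entity by its sentence distance to the
    first text to be recognized; equal distances share a rank."""
    rank = {d: i + 1 for i, d in enumerate(sorted(set(distances)))}
    return [rank[d] for d in distances]

# The A, B, C, D example above: distances of 2, 4, 3, 2 sentences
# rank as 1, 3, 2, 1, so feature c for B is 3.
```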
  • FIG. 5 is a flow chart of training a recognition model according to an exemplary embodiment. As shown in FIG. 5 , the recognition model is obtained by training in the following manner:
  • Step A: Obtain the word vector corresponding to each training word in the training text, the word vector corresponding to the training associated word corresponding to each training word, and the labeling data corresponding to the training text. The training associated word is determined according to the training combination word corresponding to the training word, the training combination word is composed of the training word and a preset number of training words adjacent to it, and the labeling data labels the character entities included in the training text.
  • Step B: For each training word, form the training combination vector corresponding to the training word from the word vector corresponding to the training word and the word vector corresponding to its training associated word, so as to obtain the training combination vector sequence corresponding to the training text, which includes the training combination vector corresponding to each training word.
  • Step C: Input the training combination vector sequence into the recognition model, and train the recognition model according to the output of the recognition model and the labeling data.
  • The training text includes multiple training words.
  • the labeling data labels the character entities included in the training text. For example, if the training text is "Miss, are you going out?", the corresponding labeling data can be BEOSOOOOO, with the labeled character entities being "Miss" and "you". After that, the word vector corresponding to each training word and the word vector corresponding to the training associated word corresponding to that training word are obtained.
  • a training combination word corresponding to the training word may be determined first, and then a corresponding training associated word may be determined according to the training combination word corresponding to the training word.
  • the training associated word can be a single word or multiple words. Finally, the word vector corresponding to the training associated word is obtained.
  • the training combination word corresponding to each training word is composed of the training word and a preset number of training words adjacent to the training word, and the training combination word can be understood as a word composed of the training word and the context. For example, if the preset number is two, the training combination word is a word composed of the training word in the training text, the previous two training words and the following two training words.
  • the training associated words can be understood as all training combined words. Training associated words can also be understood as words that meet specified requirements in the training combination words (for example, matching with a preset dictionary, etc.).
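  • The dictionary-matching case described above can be sketched as follows (the window construction and the sample lexicon are illustrative assumptions, not the disclosure's exact procedure):

```python
def associated_words(chars, i, n, lexicon):
    """For the character at index i, build the combination words formed by the
    character together with up to n adjacent characters on each side, and keep
    those that match the preset word dictionary (lexicon)."""
    combos = []
    for start in range(max(0, i - n), i + 1):
        for end in range(i + 1, min(len(chars), i + n + 1) + 1):
            if end - start > 1:  # skip the lone character itself
                combos.append("".join(chars[start:end]))
    return [w for w in combos if w in lexicon]

# e.g. associated_words(list("abcd"), 1, 1, {"ab", "bc", "cd"}) -> ["ab", "bc"]
```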
  • the word vector corresponding to the training word and the word vector corresponding to the training associated word corresponding to the training word can be combined to obtain the training combination vector corresponding to the training word, thereby obtaining the training combination vector sequence corresponding to the training text.
  • the training combination vector sequence is input into the recognition model, and the recognition model is trained according to the output of the recognition model and the labeled data. It can be understood that the output of the recognition model is the annotation of each training word in the training text.
  • the difference between the actual output of the recognition model and the labeled data can be used as the loss function of the recognition model, with the goal of reducing the loss function, and the back-propagation algorithm is used to modify the parameters of the neurons in the recognition model.
  • the parameters may be, for example, the weight (English: Weight) and the bias (English: Bias) of the neuron. The above steps are repeated until the loss function satisfies the preset condition, for example, the loss function is smaller than the preset loss threshold.
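  • The stopping criterion above (repeat until the loss falls below a preset threshold) can be sketched with a toy one-parameter model standing in for the recognition model; the learning rate, threshold, and toy loss are illustrative assumptions:

```python
def train(w, compute_loss, compute_grad, lr=0.1, loss_threshold=1e-3, max_steps=10_000):
    """Repeat: compute the loss between the model output and the labeled data,
    then update the parameters (back-propagation in the real model) until the
    loss is smaller than the preset loss threshold."""
    for _ in range(max_steps):
        if compute_loss(w) < loss_threshold:
            break
        w -= lr * compute_grad(w)
    return w

# Toy stand-in: fit a single weight w so that 2 * w approximates 6.
loss = lambda w: (2 * w - 6) ** 2
grad = lambda w: 2 * (2 * w - 6) * 2
w = train(0.0, loss, grad)  # converges towards 3.0
```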
  • the structure of the recognition model may be, for example, a combination of Transformer+CRF.
  • the Transformer can be based on the multi-head self-attention (English: Multi-head self-attention) mechanism, and can learn the degree of correlation between the combined vectors in the combined vector sequence.
  • the input size of the recognition model can be 300.
  • the number of neurons in the FFN (English: Feed Forward Network, Chinese: Feed Forward Network) included in the Transformer can be 256.
  • the number of neurons in the preprocessing network (English: Pre-net) included in the Transformer can be 256.
  • Transformer can include 8 Multi-head self-attention structures, and the number of blocks of Encoder and Decoder included in Transformer can be 1.
  • the maximum length that the recognition model can handle can be 150, that is, the combined vector sequence can include up to 150 combined vectors (the text to be recognized can include up to 150 words).
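  • Collected as a configuration sketch (the values are those stated above), together with the fixed-length constraint on the combined vector sequence; zero-padding is a common convention assumed here, not stated in the disclosure:

```python
# Hyperparameters of the Transformer+CRF recognition model as described above.
CONFIG = {
    "input_size": 300,       # dimension of each combined vector
    "ffn_units": 256,        # neurons in the Feed Forward Network
    "prenet_units": 256,     # neurons in the Pre-net
    "attention_heads": 8,    # multi-head self-attention structures
    "encoder_blocks": 1,
    "decoder_blocks": 1,
    "max_length": 150,       # at most 150 combined vectors per sequence
}

def fit_to_max_length(seq, max_length=150, dim=300):
    """Truncate a combined vector sequence to the model's maximum length,
    or pad it with zero vectors up to that length."""
    seq = seq[:max_length]
    return seq + [[0.0] * dim] * (max_length - len(seq))
```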
  • the present disclosure first obtains each word in the text to be recognized and the corresponding word vector, and then determines the word vector corresponding to the associated word corresponding to each word, where the associated word is determined according to the combination word corresponding to the word.
  • the word vector corresponding to each word and the word vector corresponding to the corresponding associated word then form the combination vector corresponding to the word, so as to obtain the combination vector sequence corresponding to the text to be recognized, which includes the combination vector corresponding to each word; finally, the character entities included in the text to be recognized are determined according to the combination vector sequence and the pre-trained recognition model.
  • the present disclosure considers each character included in the text to be identified, and also considers the associated words associated with each character, thereby improving the accuracy of identifying the character entity.
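  • One way the combination vectors described above might be formed is by concatenating each word vector with an aggregate of its associated-word vectors; the element-wise mean used here is an assumption, since the disclosure only states that the vectors are combined:

```python
def combination_vectors(word_vecs, assoc_vecs):
    """For each word, concatenate its word vector with the element-wise mean
    of the word vectors of its associated words (zeros if there are none),
    yielding the combination vector sequence for the text."""
    seq = []
    for wv, avs in zip(word_vecs, assoc_vecs):
        mean = [sum(c) / len(avs) for c in zip(*avs)] if avs else [0.0] * len(wv)
        seq.append(list(wv) + mean)
    return seq
```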
  • FIG. 6 is a block diagram of an apparatus for identifying characters in text according to an exemplary embodiment.
  • the apparatus 200 may include an acquisition module 201 , a determination module 202 , a processing module 203 and an identification module 204 .
  • the obtaining module 201 is configured to obtain each word included in the text to be recognized and a word vector corresponding to each word.
  • the determination module 202 is used to determine the word vector corresponding to the associated word corresponding to each word in the text to be recognized, where the associated word is determined according to the combination word corresponding to the word, and the combination word is composed of the word and a preset number of words adjacent to the word.
  • the processing module 203 is used to combine the word vector corresponding to each word and the word vector corresponding to the associated word corresponding to the word into a combined vector corresponding to the word, so as to obtain a combined vector sequence corresponding to the text to be recognized, and the combined vector sequence includes: The combined vector corresponding to each word in the text to be recognized.
  • the recognition module 204 is configured to determine the character entity included in the text to be recognized according to the combined vector sequence and the pre-trained recognition model.
  • FIG. 7 is a block diagram of another apparatus for identifying characters in text according to an exemplary embodiment.
  • the determining module 202 includes an acquiring sub-module 2021 and a determining sub-module 2022 .
  • the obtaining sub-module 2021 is configured to obtain, for each character, a combined word composed of the character and a preset number of characters adjacent to the character.
  • the determination sub-module 2022 is configured to use a combined word that matches a preset word dictionary in the combined word as a related word corresponding to the word, and obtain a word vector corresponding to the related word.
  • FIG. 8 is a block diagram of another apparatus for identifying characters in text according to an exemplary embodiment.
  • the identifying module 204 may include: an identifying sub-module 2041 and a processing sub-module 2042 .
  • the identification sub-module 2041 is configured to input the combined vector sequence into the identification model to obtain an attribute label corresponding to each word in the text to be identified output by the identification model, and the attribute label is used to indicate whether the word belongs to a character entity.
  • the processing sub-module 2042 is configured to determine the character entity included in the text to be recognized according to the attribute label corresponding to each word in the text to be recognized.
  • the attribute tag is also used to indicate that the position of the character in the character entity is the starting position, the ending position, or the intermediate position.
  • the processing submodule 2042 may be used to: if the attribute label corresponding to the target word indicates that the target word belongs to a character entity, determine the character entity that includes the target word according to the position of the target word in the character entity indicated by the attribute label, where the target word is any word in the text to be recognized.
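  • Decoding the attribute labels into character entities can be sketched as follows; the tag letters B/M/E/S/O are assumed from the BEOS-style annotation example given earlier (begin, middle, end, single, outside):

```python
def decode_entities(chars, tags):
    """Collect character entities from per-character attribute labels:
    B = start, M = middle, E = end of a multi-character entity,
    S = a single-character entity, O = not part of any entity."""
    entities, buf = [], []
    for ch, tag in zip(chars, tags):
        if tag == "S":
            entities.append(ch)
            buf = []
        elif tag == "B":
            buf = [ch]
        elif tag == "M" and buf:
            buf.append(ch)
        elif tag == "E" and buf:
            entities.append("".join(buf) + ch)
            buf = []
        else:  # "O" or an inconsistent tag sequence
            buf = []
    return entities

# The annotation example above: tags "BEOSOOOOO" over a nine-character
# sentence yield a two-character entity followed by a single-character one.
```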
  • FIG. 9 is a block diagram of another apparatus for identifying characters in text according to an exemplary embodiment.
  • the text to be recognized includes a first text to be recognized and a second text to be recognized, where the first text to be recognized corresponds to any dialogue sentence in the specified total text, and the second text to be recognized corresponds to a sentence in the specified total text whose distance from the dialogue sentence corresponding to the first text to be recognized satisfies a preset condition.
  • the apparatus 200 may further include: an attribute determination module 205 , an input module 206 and an attribution determination module 207 .
  • the attribute determination module 205 is used to determine, after the character entities included in the text to be recognized are determined according to the combination vector sequence and the pre-trained recognition model, the attribute features corresponding to each character entity included in the text to be recognized, the attribute features including one or more of: a first positional relationship between the character entity and the first text to be recognized, a second positional relationship between the text to which the character entity belongs and the first text to be recognized, and a dialogue attribute of the text to which the character entity belongs.
  • the input module 206 is used to input, for each character entity, the first text to be recognized, the second text to be recognized, the character entity and the attribute features corresponding to the character entity into a pre-trained attribution recognition model, to obtain the matching degree between the character entity and the first text to be recognized output by the attribution recognition model.
  • the attribution determination module 207 is configured to determine, according to the degree of matching between each character entity and the first text to be recognized, the target character entity to which the dialogue sentence corresponding to the first text to be recognized belongs.
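  • The attribution decision in the module above reduces to selecting the entity with the highest matching degree; a minimal sketch (the entity names and scores are illustrative):

```python
def target_entity(matching_degrees):
    """Return the character entity whose matching degree with the first text
    to be recognized is highest; the dialogue sentence is attributed to it."""
    return max(matching_degrees, key=matching_degrees.get)

# e.g. target_entity({"A": 0.2, "B": 0.9, "C": 0.1}) -> "B"
```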
  • the recognition model is obtained by training in the following manner:
  • Step A: Obtain the word vector corresponding to each training word in the training text, the word vector corresponding to the training associated word corresponding to each training word, and the labeling data corresponding to the training text. The training associated word is determined according to the training combination word corresponding to the training word, the training combination word is composed of the training word and a preset number of training words adjacent to it, and the labeling data labels the character entities included in the training text.
  • Step B: For each training word, form the training combination vector corresponding to the training word from the word vector corresponding to the training word and the word vector corresponding to its training associated word, so as to obtain the training combination vector sequence corresponding to the training text, which includes the training combination vector corresponding to each training word.
  • Step C: Input the training combination vector sequence into the recognition model, and train the recognition model according to the output of the recognition model and the labeling data.
  • the present disclosure first obtains each word in the text to be recognized and the corresponding word vector, and then determines the word vector corresponding to the associated word corresponding to each word, where the associated word is determined according to the combination word corresponding to the word.
  • the word vector corresponding to each word and the word vector corresponding to the corresponding associated word then form the combination vector corresponding to the word, so as to obtain the combination vector sequence corresponding to the text to be recognized, which includes the combination vector corresponding to each word; finally, the character entities included in the text to be recognized are determined according to the combination vector sequence and the pre-trained recognition model.
  • the present disclosure considers each character included in the text to be identified, and also considers the associated words associated with each character, thereby improving the accuracy of identifying the character entity.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (eg, mobile terminals such as in-vehicle navigation terminals), etc., and stationary terminals such as digital TVs, desktop computers, and the like.
  • the electronic device shown in FIG. 10 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • an electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
  • in the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored.
  • the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
  • An input/output (I/O) interface 305 is also connected to bus 304 .
  • the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 307 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 10 shows the electronic device 300 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 309, or from the storage device 308, or from the ROM 302.
  • the processing device 301 When the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • terminal devices and servers can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is made to: obtain each word included in the text to be recognized and a word vector corresponding to each word; determine the word vector corresponding to the associated word corresponding to each word in the text to be recognized, where the associated word is determined according to the combination word corresponding to the word, and the combination word is composed of the word and a preset number of words adjacent to the word; form the combination vector corresponding to each word from the word vector corresponding to the word and the word vector corresponding to the associated word corresponding to the word, so as to obtain the combination vector sequence corresponding to the text to be recognized, which includes a combination vector corresponding to each word in the text to be recognized; and determine, according to the combination vector sequence and a pre-trained recognition model, the character entities included in the text to be recognized.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected via the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments of the present disclosure may be implemented in software or hardware. Wherein, the name of the module does not constitute a limitation of the module itself under certain circumstances, for example, the acquisition module may also be described as "a module for acquiring each word and the word vector corresponding to each word".
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • more specific examples of a machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides a method for recognizing characters in text, the method comprising: acquiring each word included in the text to be recognized and a word vector corresponding to each word; determining the word vector corresponding to the associated word corresponding to each word in the text to be recognized, where the associated word is determined according to the combination word corresponding to the word, and the combination word is composed of the word and a preset number of words adjacent to the word; forming the combination vector corresponding to each word from the word vector corresponding to the word and the word vector corresponding to the associated word corresponding to the word, so as to obtain the combination vector sequence corresponding to the text to be recognized, the combination vector sequence including a combination vector corresponding to each word in the text to be recognized; and determining, according to the combination vector sequence and a pre-trained recognition model, the character entities included in the text to be recognized.
  • Example 2 provides the method of Example 1, wherein determining the word vector corresponding to the associated word corresponding to each word in the text to be recognized includes: for each word, obtaining the combination words composed of the word and a preset number of words adjacent to the word; and using the combination words that match a preset word dictionary as the associated words corresponding to the word, and obtaining the word vectors corresponding to the associated words.
  • Example 3 provides the method of Example 1, wherein determining the character entity included in the text to be recognized according to the combined vector sequence and the pre-trained recognition model includes: The combined vector sequence is input to the recognition model to obtain an attribute label corresponding to each character in the text to be recognized output by the recognition model, where the attribute label is used to indicate whether the character belongs to the character entity; according to The attribute label corresponding to each word in the text to be recognized determines the character entity included in the text to be recognized.
  • Example 4 provides the method of Example 3, wherein the attribute tag is further used to indicate that the position of the character in the character entity is a starting position, or an ending position, or an intermediate position .
  • Example 5 provides the method of Example 4, wherein determining the character entity included in the text to be recognized according to the attribute label corresponding to each word in the text to be recognized comprises: if the attribute label corresponding to a target word indicates that the target word belongs to a character entity, determining the character entity that includes the target word according to the position of the target word in the character entity indicated by the attribute label, where the target word is any word in the text to be recognized.
  • Example 6 provides the method of Example 1, the text to be recognized includes a first text to be recognized and a second text to be recognized, and the first text to be recognized corresponds to the specified total text In any dialogue sentence of , the second text to be recognized corresponds to the specified total text, and the distance between the dialogue sentences corresponding to the first text to be recognized satisfies the preset condition;
  • the method further includes: determining the attribute feature corresponding to each character entity included in the text to be recognized, the The attribute features include: the first positional relationship between the character entity and the first text to be recognized, the second positional relationship between the text to which the character entity belongs and the first text to be recognized, and the dialogue between the text to which the character entity belongs one or more of the attributes; for each character entity, input the first text to be recognized, the second text to be recognized, the character entity and the attribute characteristics corresponding to the character
  • the training After combining the vector sequence and the pre-trained recognition model, after determining the character entity included in the text to be recognized, the method further includes: determining the
  • Example 7 provides the method of any one of Examples 1 to 6, and the recognition model is obtained by training in the following manner: obtaining a word vector corresponding to each training word in the training text , the word vector corresponding to the training associated word corresponding to the training word in the training text and the labeling data corresponding to the training text, the training associated word is determined according to the training combination word corresponding to the training word, and the training combination word is determined by
  • the training word is composed of a preset number of training words adjacent to the training word, and the labeling data includes the labeling role entity included in the training text; for each training word, the word corresponding to the training word is vector, and the word vector corresponding to the training associated word corresponding to the training word, form the training combination vector corresponding to the training word, to obtain the training combination vector sequence corresponding to the training text, and the training combination vector sequence includes each The training combination vector corresponding to the training word; the training combination vector sequence is input into the recognition model, and the recognition model is trained according to the output of the
  • Example 8 provides an apparatus for character recognition in text, the apparatus comprising: an acquisition module configured to acquire each character included in the text to be recognized and the corresponding character of each character A word vector; a determination module is used to determine the word vector corresponding to the associated word corresponding to each word in the text to be recognized, the associated word is determined according to the combined word corresponding to the word, and the combined word is determined by the word and the The word is composed of a preset number of adjacent words; the processing module is used to combine the word vector corresponding to each word and the word vector corresponding to the associated word corresponding to the word into a combined vector corresponding to the word, with Obtaining a combined vector sequence corresponding to the text to be recognized, where the combined vector sequence includes a combined vector corresponding to each word in the text to be recognized; an identification module for, according to the combined vector sequence and a pre-trained recognition model, Character entities included in the text to be recognized are determined.
  • Example 9 provides a computer-readable medium having a computer program stored thereon, the program implementing the method described in any one of Examples 1 to 7 when executed by a processing apparatus.
  • Example 10 provides an electronic device, comprising: a storage device on which a computer program is stored; and a processing device for executing the computer program in the storage device to Implement the method described in any one of Examples 1 to 7.
  • Example 11 provides a computer program, comprising: instructions that, when executed by a processor, cause the processor to perform any one of Examples 1 to 7 method.
  • Example 12 provides a computer program product comprising instructions that, when executed by a processor, cause the processor to perform any of Examples 1 to 7 method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)
  • Character Discrimination (AREA)

Abstract

The present disclosure relates to a method and apparatus for recognizing roles in text, a readable medium, and an electronic device, in the technical field of electronic information processing. The method includes: obtaining each character included in a text to be recognized and a character vector corresponding to each character; determining a word vector corresponding to the associated word of each character in the text to be recognized, where the associated word is determined from the combined words corresponding to the character, and a combined word is formed by the character and a preset number of adjacent characters; composing, from the character vector of each character and the word vector of the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence including the combined vector of each character in the text; and determining, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized. The present disclosure can improve the accuracy of recognizing role entities.

Description

Method and apparatus for recognizing roles in text, readable medium, and electronic device
CROSS-REFERENCE TO RELATED APPLICATION
This application is based on and claims priority to Chinese application No. 202110145123.4, filed on February 2, 2021, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to the technical field of electronic information processing, and in particular, to a method and apparatus for recognizing roles in text, a readable medium, and an electronic device.
BACKGROUND
With the continuous development of electronic information technology, entertainment options have grown richer, and reading e-books has become a mainstream way of reading. So that users can obtain the information in an e-book by listening when reading is inconvenient, or can read and listen at the same time to take in the information both visually and aurally, corresponding audio is often pre-recorded for an e-book for users to listen to. To enrich the expressiveness of the audio, different timbres can be used during recording for the dialogue of different roles in the e-book, which requires first recognizing the different roles in the e-book.
SUMMARY
This summary is provided to introduce, in a brief form, concepts that are described in detail in the detailed description that follows. It is not intended to identify key or essential features of the claimed technical solutions, nor is it intended to limit their scope.
In a first aspect, the present disclosure provides a method for recognizing roles in text, the method including: obtaining each character included in a text to be recognized and a character vector corresponding to each character; determining a word vector corresponding to the associated word of each character in the text to be recognized, where the associated word is determined from the combined words corresponding to the character, and a combined word is formed by the character and a preset number of characters adjacent to the character; composing, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence including the combined vector corresponding to each character in the text to be recognized; and determining, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized.
In a second aspect, the present disclosure provides an apparatus for recognizing roles in text, the apparatus including: an obtaining module configured to obtain each character included in a text to be recognized and a character vector corresponding to each character; a determining module configured to determine a word vector corresponding to the associated word of each character in the text to be recognized, the associated word being determined from the combined words corresponding to the character, and a combined word being formed by the character and a preset number of characters adjacent to the character; a processing module configured to compose, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the sequence including the combined vector corresponding to each character in the text to be recognized; and a recognition module configured to determine, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized.
In a third aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processing apparatus, implements the method of the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device, including: a storage apparatus on which a computer program is stored; and a processing apparatus configured to execute the computer program in the storage apparatus to implement the method of the first aspect of the present disclosure.
In a fifth aspect, the present disclosure provides a computer program, including instructions that, when executed by a processor, cause the processor to perform the method provided in the first aspect of the present disclosure.
In a sixth aspect, the present disclosure provides a computer program product, including instructions that, when executed by a processor, cause the processor to perform the method provided in the first aspect of the present disclosure.
Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flowchart of a method for recognizing roles in text according to an exemplary embodiment;
FIG. 2 is a flowchart of another method for recognizing roles in text according to an exemplary embodiment;
FIG. 3 is a flowchart of another method for recognizing roles in text according to an exemplary embodiment;
FIG. 4 is a flowchart of another method for recognizing roles in text according to an exemplary embodiment;
FIG. 5 is a flowchart of training a recognition model according to an exemplary embodiment;
FIG. 6 is a block diagram of an apparatus for recognizing roles in text according to an exemplary embodiment;
FIG. 7 is a block diagram of another apparatus for recognizing roles in text according to an exemplary embodiment;
FIG. 8 is a block diagram of another apparatus for recognizing roles in text according to an exemplary embodiment;
FIG. 9 is a block diagram of another apparatus for recognizing roles in text according to an exemplary embodiment;
FIG. 10 is a block diagram of an electronic device according to an exemplary embodiment.
DETAILED DESCRIPTION
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "a" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The inventors of the present disclosure found that, in the related art, each role in an e-book needs to be annotated manually, which is both inefficient and inaccurate.
In view of this, embodiments of the present disclosure provide a method for recognizing roles in text, so as to improve the accuracy of recognizing role entities.
FIG. 1 is a flowchart of a method for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 1, the method includes the following steps 101 to 104.
Step 101: obtain each character included in a text to be recognized and the character vector corresponding to each character.
For example, a text to be recognized that may include role entities is obtained first. The text to be recognized may be, for example, one or more sentences in a text file specified by the user (which may be understood as the specified total text mentioned below), one or more paragraphs in the text file, or one or more chapters in a text file. The text file may be, for example, an e-book, or another type of file such as news, an official-account article, or a blog. Each character included in the text to be recognized, and the character vector corresponding to each character, are then extracted. For example, the character vector corresponding to each character may be looked up in turn in a pre-trained character vector table, or generated with a pre-trained Word2vec model.
Step 102: determine the word vector corresponding to the associated word of each character in the text to be recognized, where the associated word is determined from the combined words corresponding to the character, and a combined word is formed by the character and a preset number of adjacent characters.
For example, for each character in the text to be recognized, the combined words corresponding to the character may be determined first, and the associated word then determined from those combined words; there may be one associated word or several. Finally, the word vector corresponding to each associated word is determined, for example by looking it up in a pre-trained word vector table, or by generating it with a pre-trained Word2vec model. The combined words of a character are formed by the character and a preset number of adjacent characters, and can be understood as the words formed by the character and its context. If the preset number is two, for example, the combined words are the words formed by the character together with the two characters before it and the two characters after it in the text, i.e., the character combined with the one preceding character, the two preceding characters, the one following character, and the two following characters. Correspondingly, the associated words may be taken to be all of the combined words, or only those combined words that satisfy a specified requirement (for example, matching a preset dictionary). For example, suppose the preset number is two and the text to be recognized is "仍旧没有大哥的任何消息". For the character "哥", the combined words are: "有大哥" ("哥" with "有大"), "大哥" ("哥" with "大"), "哥的" ("哥" with "的"), and "哥的任" ("哥" with "的任"), four combined words in total. All four may be used as associated words, or only those that match a preset dictionary, e.g., "大哥".
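The windowed combined-word construction and dictionary filtering described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the tiny dictionary are our own assumptions, not part of the disclosure:

```python
def combined_words(text, index, preset_count=2):
    """Return the combined words for the character at `index`: the character
    joined with up to `preset_count` adjacent characters on each side."""
    words = []
    ch = text[index]
    for n in range(1, preset_count + 1):
        left = text[max(0, index - n):index]   # the n preceding characters
        if left:
            words.append(left + ch)
        right = text[index + 1:index + 1 + n]  # the n following characters
        if right:
            words.append(ch + right)
    return words

def associated_words(text, index, dictionary, preset_count=2):
    """Keep only the combined words that match the preset word dictionary."""
    return [w for w in combined_words(text, index, preset_count) if w in dictionary]
```

For the running example, `combined_words("仍旧没有大哥的任何消息", 5)` yields the four combined words of "哥", and filtering against a dictionary containing "大哥" keeps only "大哥".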
Step 103: compose, from the character vector of each character and the word vector of the associated word of the character, the combined vector corresponding to the character, so as to obtain the combined vector sequence corresponding to the text to be recognized, the combined vector sequence including the combined vector of each character in the text.
Step 104: determine, according to the combined vector sequence and the pre-trained recognition model, the role entities included in the text to be recognized.
For example, for each character in the text to be recognized, the character vector of the character and the word vectors of the character's associated words may be combined into the combined vector of the character, thereby obtaining the combined vector sequence of the text. That is, the combined vector of each character includes both the character vector of the character and the word vectors of its associated words. For example, if the text to be recognized includes 20 characters, each character corresponds to a 1*100-dimensional character vector, and a given character has two associated words each corresponding to a 1*100-dimensional word vector, then the combined vector of that character is 1*300-dimensional. Finally, the combined vector sequence is input into the pre-trained recognition model, and the role entities included in the text are determined according to the output of the model; there may be zero role entities (i.e., no role entity exists in the text), one, or several. A role entity may be, for example, a person's name, a form of address, a personal pronoun (e.g., 你, 我, 她, 他, 你们), a personified animal, or a personified object included in the text. Specifically, the recognition model may output the role entities directly, or it may output a label for each character in the text, from which the role entities are then determined; the model's label for each character indicates whether that character belongs to a role entity.
The recognition model may be a deep learning model trained in advance on a large number of training samples. Its structure may be, for example, a combination of a Transformer and a CRF (Conditional Random Fields), which recognizes more efficiently than a BLSTM (Bidirectional Long Short-Term Memory) + CRF combination. For example, the combined vector sequence may be used as the input of the Transformer to obtain a feature vector, output by the Transformer, that characterizes the combined vector sequence; the feature vector is then used as the input of the CRF to obtain the label that the CRF outputs for each combined vector in the sequence, i.e., the label of each character in the text. Because the combined vector sequence includes both the character vector of each character and the word vectors of each character's associated words, the recognition model can learn the relationship between each character and its associated words, avoiding missing or extra characters when recognizing role entities and improving the accuracy with which the model labels role entities.
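The vector composition just described (a character vector concatenated with one word vector per associated word) amounts to simple concatenation; a minimal sketch, with names of our own choosing:

```python
def combined_vector(char_vec, word_vecs):
    """Concatenate a character vector with the word vectors of the character's
    associated words to form the character's combined vector; a character with
    no associated word keeps just its character vector."""
    out = list(char_vec)
    for wv in word_vecs:
        out.extend(wv)
    return out
```

With a 100-dimensional character vector and two 100-dimensional word vectors, the result is 300-dimensional, matching the example above.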
To sum up, the present disclosure first obtains each character in the text to be recognized and its character vector, then determines the word vector of the associated word of each character, where the associated word is determined from the combined words corresponding to the character; the character vector of each character and the word vector of its associated word are composed into the combined vector of the character, yielding the combined vector sequence of the text, which includes the combined vector of each character; finally, the role entities included in the text are determined according to the combined vector sequence and the pre-trained recognition model. In recognizing role entities, the present disclosure considers not only each character included in the text to be recognized but also the associated words related to each character, thereby improving the accuracy of recognizing role entities.
FIG. 2 is a flowchart of another method for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 2, step 102 may include steps 1021 and 1022.
Step 1021: for each character, obtain the combined words formed by the character and a preset number of adjacent characters.
Step 1022: take, among the combined words, those that match a preset word dictionary as the associated words of the character, and obtain the word vectors corresponding to the associated words.
For example, for each character in the text to be recognized, the combined words formed by the character and a preset number of adjacent characters may be determined first; that is, a combined word is formed by the character and a preset number of characters adjacent to it. Taking a preset number of three as an example, the combined words of a character are the words formed by the character together with the three characters before it and the three characters after it. Each combined word is then matched in turn against a preset word dictionary; if it matches, the combined word is determined to be an associated word of the character, and the word vector of that associated word is obtained. There may be zero associated words (i.e., none of the character's combined words match the dictionary), one, or several. If a character has zero associated words, its combined vector is simply its character vector. The word dictionary can be understood as a dictionary in which a large number of role entities have been collected in advance; filtering the combined words of a character through the dictionary removes a great deal of semantically meaningless noise and preserves the association between each character and its associated words, improving the accuracy of recognizing role entities. For example, suppose the text to be recognized is "今天天气不错，小姐要出门吗？". For the first character "今", there are no preceding characters, so "今" may be combined with the following three characters "天", "天", "气" to obtain three combined words: "今天" ("今" with "天"), "今天天" ("今" with "天天"), and "今天天气" ("今" with "天天气"). Matching the three combined words against the word dictionary in turn yields the associated word "今天".
As another example, for the eighth character "姐", "姐" may be combined with the six surrounding characters "错", "，", "小", "要", "出", "门" to obtain six combined words: "错，小姐" ("姐" with "错，小"), "，小姐" ("姐" with "，小"), "小姐" ("姐" with "小"), "姐要" ("姐" with "要"), "姐要出" ("姐" with "要出"), and "姐要出门" ("姐" with "要出门"). Matching the six combined words against the word dictionary in turn yields the associated word "小姐". It should be noted that, in the above embodiments, punctuation marks included in the text to be recognized may be treated as characters.
FIG. 3 is a flowchart of another method for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 3, step 104 may include steps 1041 and 1042.
Step 1041: input the combined vector sequence into the recognition model to obtain the attribute label, output by the recognition model, corresponding to each character in the text to be recognized, the attribute label indicating whether the character belongs to a role entity.
Step 1042: determine the role entities included in the text to be recognized according to the attribute label corresponding to each character in the text.
For example, the combined vector sequence may first be input into the recognition model to obtain the attribute label of each character in the text, which indicates whether the corresponding character belongs to a role entity. For instance, an attribute label of 1 output by the model indicates that the corresponding character belongs to a role entity, and a label of 0 indicates that it does not. The recognition model can be understood as a seq2seq model: its input is the combined vector sequence containing the combined vector of each character in the text, and its output is the set of attribute labels containing the label of each character. For example, if the text to be recognized is "仍旧没有大哥的任何消息" and the model outputs the label set 00001100000, the role entity can be determined to be "大哥".
In one application scenario, besides indicating whether the corresponding character belongs to a role entity, the attribute label may also indicate whether the character is a single-character role entity (i.e., the character alone is a role entity) or part of a multi-character role entity (i.e., the role entity includes multiple characters). When the character belongs to a multi-character role entity, the label may further indicate the character's position within the entity, i.e., whether the character is at the starting position, the ending position, or an intermediate position of the role entity.
If the attribute label indicates that the character's position within the role entity is the starting position, the character can be understood to be the first character of the role entity; if it indicates the ending position, the character is the last character of the role entity; if it indicates an intermediate position, the character is any character in between. For example, the letter O as an attribute label may indicate that the corresponding character does not belong to a role entity; S may indicate that the character is a single-character role entity; B may indicate the starting position in a multi-character role entity; M may indicate an intermediate position in a multi-character role entity; and E may indicate the ending position in a multi-character role entity.
Correspondingly, step 1042 may be implemented as follows: if the attribute label of a target character indicates that the target character belongs to a role entity, the role entity including the target character is determined according to the position of the target character within the role entity indicated by the attribute label, the target character being any character in the text to be recognized.
For example, if the attribute label of the target character indicates that it belongs to a role entity and is a single-character role entity, the target character may be taken directly as a role entity. If the label indicates that the target character belongs to a multi-character role entity, the role entity containing the target character may be further determined from the position within the role entity indicated by the label.
If the label indicates that the target character is at the starting position of a role entity, the labels of the characters after the target character are examined in turn until a character whose label indicates the ending position is found; the word formed from the target character up to that character is then taken as a role entity. For example, if the text to be recognized is "小姐，你要出门吗" and the model outputs the labels BEOSOOOO for its characters, the two characters labeled B and E form one role entity, "小姐", and the character labeled S forms another role entity, "你"; that is, the text includes two role entities, "小姐" and "你".
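Decoding the O/S/B/M/E attribute labels into role entities, as described above, can be sketched as follows. This is an illustrative decoder under our own naming assumptions, not the disclosure's implementation:

```python
def decode_entities(text, tags):
    """Decode O/S/B/M/E attribute labels into role entities.
    'S' marks a single-character entity; 'B'..'M'..'E' marks the start,
    intermediate characters, and end of a multi-character entity; 'O' marks
    characters outside any entity."""
    entities, start = [], None
    for i, (ch, tag) in enumerate(zip(text, tags)):
        if tag == "S":
            entities.append(ch)
            start = None
        elif tag == "B":
            start = i
        elif tag == "E" and start is not None:
            entities.append(text[start:i + 1])  # span from B through E
            start = None
        elif tag == "O":
            start = None
        # 'M' leaves the open span untouched
    return entities
```

Applied to the examples in the text, the label set OOOOBEOOOOO over "仍旧没有大哥的任何消息" decodes to the single entity "大哥".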
After the role entities included in the text to be recognized have been determined, the number of occurrences of each distinct role entity can be counted, and the role entity occurring most often can be taken as the main role entity of the text. Further, among the recognized role entities, the target role entity to which a dialogue sentence included in the text belongs can be determined. How the target role entity is determined is described in detail below.
FIG. 4 is a flowchart of another method for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 4, the text to be recognized includes a first text to be recognized and a second text to be recognized; the first text to be recognized corresponds to any dialogue sentence in a specified total text, and the second text to be recognized corresponds to sentences in the specified total text whose distance from the dialogue sentence corresponding to the first text satisfies a preset condition. After step 104, the method may further include steps 105 to 107.
Step 105: determine the attribute features corresponding to each role entity included in the text to be recognized, the attribute features including one or more of: a first positional relationship between the role entity and the first text to be recognized, a second positional relationship between the text to which the role entity belongs and the first text to be recognized, and a dialogue attribute of the text to which the role entity belongs.
For example, the text to be recognized may include a first text to be recognized and a second text to be recognized, where the first text corresponds to any dialogue sentence in a specified total text, and the second text corresponds to sentences in the specified total text whose distance from the dialogue sentence corresponding to the first text satisfies a preset condition. The specified total text includes the text corresponding to each of a plurality of sentences; for example, it may be an e-book specified by the user, or a chapter or fragment of an e-book. The sentences included in the specified total text may be divided into two classes according to whether they include a dialogue symbol: dialogue sentences and non-dialogue sentences, where a dialogue symbol identifies a sentence as a dialogue sentence and may be, for example, double quotation marks "" or 「」; the present disclosure places no specific limitation on this.
The text corresponding to any dialogue sentence in the specified total text may then be taken as the first text to be recognized, and the second text to be recognized corresponding to the first text is determined. In the specified total text, the distance between the sentence corresponding to the first text and the sentences corresponding to the second text satisfies the preset condition; the second text may correspond to one or more sentences. The first and second texts can be understood as related, and the second text can be understood as the context of the first. The sentences corresponding to the second text may be dialogue sentences or non-dialogue sentences. Taking a preset condition of being within three sentences as an example, the second text to be recognized may be the text corresponding to the three sentences before and the three sentences after the first text (six sentences in total) in the specified total text.
For each role entity, the attribute features corresponding to the role entity are determined. The attribute features can be understood as features reflecting the relationship between the role entity and the first text to be recognized, and may include one or more of: a first positional relationship between the role entity and the first text, a second positional relationship between the text to which the role entity belongs and the first text, and a dialogue attribute of the text to which the role entity belongs. The first positional relationship may indicate whether the role entity belongs to the first text. The second positional relationship may indicate whether, in the specified total text, the text to which the role entity belongs is located before or after the first text. The dialogue attribute may indicate whether the sentence corresponding to the text to which the role entity belongs is a dialogue sentence.
Step 106: for each role entity, input the first text to be recognized, the second text to be recognized, the role entity, and the attribute features corresponding to the role entity into a pre-trained attribution recognition model, so as to obtain the degree of match, output by the attribution recognition model, between the role entity and the first text to be recognized.
Step 107: determine, according to the degree of match between each role entity and the first text to be recognized, the target role entity to which the dialogue sentence corresponding to the first text belongs.
For example, the first text to be recognized, the second text to be recognized, each role entity, and the attribute features of that role entity may be used as the input of a pre-trained attribution recognition model, which outputs the degree of match between the role entity and the first text; the degree of match can be understood as the probability that the dialogue sentence corresponding to the first text belongs to that role entity. The attribution recognition model may be a deep learning model trained in advance on a large number of training samples; its structure may be, for example, a combination of BLSTM + Dense_layer + softmax. For example, the first and second texts may first be converted into a corresponding text feature sequence (i.e., a text embedding), the role entity converted into a corresponding word vector, and the text feature sequence, the word vector, and the role entity's attribute features concatenated as the input of the BLSTM, so as to obtain a feature vector, output by the BLSTM, that jointly characterizes the first text, the second text, the role entity, and its attribute features. The feature vector is then used as the input of the Dense_layer, whose output is the input of the softmax; the probability value output by the softmax is taken as the degree of match between the role entity and the first text. For example, if the first text includes 20 characters and the second includes 50, and each character corresponds to a 1*300-dimensional character vector, then the first and second texts convert into a 70*300-dimensional text feature sequence. If the role entity corresponds to a 1*300-dimensional word vector and its attribute features form a 1*11-dimensional vector, then the vector input into the attribution recognition model is 70*(300+300+11)-dimensional.
Further, after the degree of match between each role entity and the first text to be recognized is obtained, the target role entity to which the dialogue sentence corresponding to the first text belongs can be determined among the one or more role entities; that is, the dialogue sentence corresponding to the first text is attributed to the target role entity (in other words, the dialogue sentence corresponding to the first text is determined to be spoken by the target role entity). For example, the role entity with the highest degree of match may be taken as the target role entity, or the role entities may be sorted in descending order of match and a specified number of the top candidates (for example, three) presented to the user, who then determines the target role entity. Further, after the target role entity is determined, it may be associated with the first text as a label, so that when recording the audio corresponding to the specified total text, upon reaching the first text, the target role entity can be determined from the label of the first text and the recording made with the timbre assigned to that role entity in advance.
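Picking the target role entity from the per-entity match scores, whether the single best entity or a descending top-k shortlist for the user, reduces to a sort. A minimal sketch with illustrative names of our own:

```python
def rank_entities(match_scores, top_k=3):
    """Given {role entity: match score} output by the attribution model,
    return the best entity and the top-k candidates in descending score order."""
    ranked = sorted(match_scores, key=match_scores.get, reverse=True)
    return ranked[0], ranked[:top_k]
```

For instance, `rank_entities({"小姐": 0.7, "你": 0.2, "大哥": 0.1})` returns "小姐" as the best candidate along with the three-entity shortlist.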
In this way, when determining the attribution of the dialogue sentence corresponding to the first text to be recognized, not only the first text itself but also the related second text is considered, so that the attribution recognition model can learn the relationship between the first and second texts; it also combines the role entities extracted from the first and second texts with their attribute features, so that the model can further learn the relationship between each role entity and the first text. The target role entity to which the dialogue sentence belongs can thus be determined, improving the efficiency and accuracy of dialogue attribution.
In one application scenario, the attribute features of each role entity may include multiple features. The first positional relationship may be determined from the role entity and the first text to be recognized; the second positional relationship may be determined from the distance between the text to which the role entity belongs and the first text; and the dialogue attribute may be determined from the text to which the role entity belongs. For example, the attribute features may include the following 11 features:
Feature a indicates whether the role entity belongs to the first text to be recognized. If, in the specified total text, the role entity belongs to the first text, feature a may be represented as 0. If it does not, feature a may be represented as 1 when the role entity is located after the first text and as -1 when it is located before it.
Feature b indicates whether the role entity belongs to the target paragraph, the target paragraph being the paragraph to which the first text to be recognized belongs; it can be understood as whether the role entity and the first text belong to the same paragraph. For example, feature b may be represented as 1 if the role entity belongs to the target paragraph and 0 otherwise.
Feature c indicates the rank of the role entity's distance from the first text to be recognized, i.e., the position of the distance between the text to which this role entity belongs and the first text among the corresponding distances of all role entities. For example, if step 104 determined that the text to be recognized includes four role entities 甲, 乙, 丙, and 丁, whose distances from the first text are 2, 4, 3, and 2 sentences respectively, the sorted ranks are 1, 3, 2, and 1, so feature c for 乙 may be represented as 3.
Feature d indicates the distance between the text to which the role entity belongs and the first text to be recognized. For example, if that distance is 2 sentences, feature d may be represented as 2.
Feature e indicates whether the sentence corresponding to the text to which the role entity belongs is a dialogue sentence: feature e may be represented as 1 if it is and 0 if it is not.
Feature f indicates whether the text to which the role entity belongs includes a first dialogue template.
Feature g indicates whether the text to which the role entity belongs includes a second dialogue template.
Feature h indicates whether the text to which the role entity belongs includes a third dialogue template.
For example, the first dialogue template may include templates indicating the start of dialogue, such as "XX说：", "XX道：", "XX笑：". The second dialogue template may include templates indicating the end of dialogue, such as "XX说。", "XX道。", "XX笑。". The third dialogue template may include templates indicating that dialogue may occur, such as "说", "道", "笑". The feature may be represented as 1 if the corresponding template is included and 0 otherwise.
Feature i indicates the position of the role entity within the text to which it belongs, i.e., which role entity it is in that text. For example, if a text includes three role entities 甲, 乙, and 丙 in left-to-right order, feature i may be represented as 1 for 甲, 2 for 乙, and 3 for 丙.
Feature j indicates whether, in the specified total text, the sentence preceding the dialogue sentence corresponding to the first text to be recognized is a dialogue sentence: feature j may be represented as 1 if it is and 0 if it is not.
Feature k indicates whether, in the specified total text, the sentence following the dialogue sentence corresponding to the first text to be recognized is a dialogue sentence: feature k may be represented as 1 if it is and 0 if it is not.
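Assembling features a-k into the 1*11 attribute vector fed to the attribution model can be sketched as below; every parameter name here is our own illustrative assumption, since the disclosure only specifies the features themselves and their order is assumed:

```python
def attribute_features(position_a, same_paragraph_b, distance_rank_c, distance_d,
                       is_dialogue_e, start_template_f, end_template_g,
                       cue_word_h, entity_index_i, prev_dialogue_j, next_dialogue_k):
    """Pack features a-k, in order, into the 1*11 attribute feature vector;
    boolean features are encoded as 0/1 as in the examples above."""
    return [position_a, int(same_paragraph_b), distance_rank_c, distance_d,
            int(is_dialogue_e), int(start_template_f), int(end_template_g),
            int(cue_word_h), entity_index_i, int(prev_dialogue_j),
            int(next_dialogue_k)]
```

This 11-dimensional vector corresponds to the 1*11 attribute-feature vector in the dimensionality example of step 106.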
FIG. 5 is a flowchart of training a recognition model according to an exemplary embodiment. As shown in FIG. 5, the recognition model is trained as follows:
Step A: obtain the character vector corresponding to each training character in a training text, the word vector corresponding to the training associated word of the training character in the training text, and the annotation data corresponding to the training text, where the training associated word is determined from the training combined words corresponding to the training character, a training combined word is formed by the training character and a preset number of adjacent training characters, and the annotation data includes the annotated role entities included in the training text.
Step B: for each training character, compose the character vector corresponding to the training character and the word vector corresponding to the training associated word of the training character into the training combined vector corresponding to the training character, so as to obtain the training combined vector sequence corresponding to the training text, the sequence including the training combined vector of each training character.
Step C: input the training combined vector sequence into the recognition model, and train the recognition model according to the output of the model and the annotation data.
For example, to train the recognition model, a training text and its corresponding annotation data need to be obtained in advance. The training text includes a plurality of training characters, and the annotation data annotates the role entities included in the training text. For example, if the training text is "小姐，你要出门吗", the corresponding annotation data may be BEOSOOOO, and the annotated role entities are "小姐" and "你". The character vector corresponding to each training character, and the word vector corresponding to the training associated word of that character, are then obtained.
For each training character, the training combined words corresponding to the character may be determined first, and the corresponding training associated word then determined from them; there may be one training associated word or several. Finally, the word vector corresponding to each training associated word is obtained. The training combined words of a training character are formed by the character and a preset number of adjacent training characters and can be understood as the words formed by the character and its context. If the preset number is two, for example, the training combined words are the words formed by the training character together with the two training characters before it and the two after it. Correspondingly, the training associated words may be taken to be all of the training combined words, or only those training combined words satisfying a specified requirement (for example, matching a preset dictionary).
For each training character in the training text, the character vector of the character and the word vector of its training associated word may be combined into the training combined vector of the character, thereby obtaining the training combined vector sequence of the training text, which includes the training combined vector of each training character. Finally, the training combined vector sequence is input into the recognition model, and the model is trained according to its output and the annotation data. It can be understood that the recognition model outputs a label for each training character in the training text; the difference between the labels actually output by the model and the annotation data can therefore be used as the model's loss function, and, with the goal of reducing this loss, the backpropagation algorithm can be used to adjust the parameters of the neurons in the model, such as their weights and biases. The above steps are repeated until the loss function satisfies a preset condition, for example until it falls below a preset loss threshold.
It should be noted that the structure of the recognition model may be, for example, a Transformer + CRF combination. The Transformer may be based on a multi-head self-attention mechanism and can learn the degree of correlation between the combined vectors in the combined vector sequence. The input size of the recognition model may be 300. The number of neurons in the FFN (Feed Forward Network) included in the Transformer may be 256, and the number of neurons in the pre-processing network (Pre-net) included in the Transformer may also be 256. The Transformer may include 8 multi-head self-attention structures, and the number of encoder and decoder blocks included in the Transformer may be 1. The maximum length the recognition model can process may be 150, i.e., the combined vector sequence may include at most 150 combined vectors (the text to be recognized may include at most 150 characters).
To sum up, the present disclosure first obtains each character in the text to be recognized and its character vector, then determines the word vector of the associated word of each character, where the associated word is determined from the combined words corresponding to the character; the character vector of each character and the word vector of its associated word are composed into the combined vector of the character, yielding the combined vector sequence of the text, which includes the combined vector of each character; finally, the role entities included in the text are determined according to the combined vector sequence and the pre-trained recognition model. In recognizing role entities, the present disclosure considers not only each character included in the text to be recognized but also the associated words related to each character, thereby improving the accuracy of recognizing role entities.
FIG. 6 is a block diagram of an apparatus for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 6, the apparatus 200 may include an obtaining module 201, a determining module 202, a processing module 203, and a recognition module 204.
The obtaining module 201 is configured to obtain each character included in a text to be recognized and the character vector corresponding to each character.
The determining module 202 is configured to determine the word vector corresponding to the associated word of each character in the text to be recognized, where the associated word is determined from the combined words corresponding to the character, and a combined word is formed by the character and a preset number of adjacent characters.
The processing module 203 is configured to compose, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, the combined vector corresponding to the character, so as to obtain the combined vector sequence corresponding to the text to be recognized, the sequence including the combined vector of each character in the text.
The recognition module 204 is configured to determine, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized.
FIG. 7 is a block diagram of another apparatus for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 7, the determining module 202 includes an obtaining sub-module 2021 and a determining sub-module 2022.
The obtaining sub-module 2021 is configured to obtain, for each character, the combined words formed by the character and a preset number of adjacent characters.
The determining sub-module 2022 is configured to take, among the combined words, those that match a preset word dictionary as the associated words of the character, and to obtain the word vectors corresponding to the associated words.
FIG. 8 is a block diagram of another apparatus for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 8, the recognition module 204 may include a recognition sub-module 2041 and a processing sub-module 2042.
The recognition sub-module 2041 is configured to input the combined vector sequence into the recognition model to obtain the attribute label, output by the recognition model, corresponding to each character in the text to be recognized, the attribute label indicating whether the character belongs to a role entity.
The processing sub-module 2042 is configured to determine the role entities included in the text to be recognized according to the attribute label corresponding to each character in the text.
In one application scenario, the attribute label is further used to indicate whether the character's position within the role entity is the starting position, the ending position, or an intermediate position.
In another application scenario, the processing sub-module 2042 may be configured to: if the attribute label of a target character indicates that the target character belongs to a role entity, determine the role entity including the target character according to the position of the target character within the role entity indicated by the attribute label, the target character being any character in the text to be recognized.
FIG. 9 is a block diagram of another apparatus for recognizing roles in text according to an exemplary embodiment. As shown in FIG. 9, the text to be recognized includes a first text to be recognized and a second text to be recognized; the first text corresponds to any dialogue sentence in a specified total text, and the second text corresponds to sentences in the specified total text whose distance from the dialogue sentence corresponding to the first text satisfies a preset condition. The apparatus 200 may further include an attribute determining module 205, an input module 206, and an attribution determining module 207.
The attribute determining module 205 is configured to determine, after the role entities included in the text to be recognized have been determined according to the combined vector sequence and the pre-trained recognition model, the attribute features corresponding to each role entity included in the text, the attribute features including one or more of: a first positional relationship between the role entity and the first text, a second positional relationship between the text to which the role entity belongs and the first text, and a dialogue attribute of the text to which the role entity belongs.
The input module 206 is configured to input, for each role entity, the first text, the second text, the role entity, and the attribute features corresponding to the role entity into a pre-trained attribution recognition model, so as to obtain the degree of match, output by the attribution recognition model, between the role entity and the first text.
The attribution determining module 207 is configured to determine, according to the degree of match between each role entity and the first text, the target role entity to which the dialogue sentence corresponding to the first text belongs.
It should be noted that, in the above embodiments, the recognition model is trained as follows:
Step A: obtain the character vector corresponding to each training character in a training text, the word vector corresponding to the training associated word of the training character in the training text, and the annotation data corresponding to the training text, where the training associated word is determined from the training combined words corresponding to the training character, a training combined word is formed by the training character and a preset number of adjacent training characters, and the annotation data includes the annotated role entities included in the training text.
Step B: for each training character, compose the character vector corresponding to the training character and the word vector corresponding to the training associated word of the training character into the training combined vector corresponding to the training character, so as to obtain the training combined vector sequence corresponding to the training text, the sequence including the training combined vector of each training character.
Step C: input the training combined vector sequence into the recognition model, and train the recognition model according to the output of the model and the annotation data.
Regarding the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments concerning the method and will not be elaborated here.
To sum up, the present disclosure first obtains each character in the text to be recognized and its character vector, then determines the word vector of the associated word of each character, where the associated word is determined from the combined words corresponding to the character; the character vector of each character and the word vector of its associated word are composed into the combined vector of the character, yielding the combined vector sequence of the text, which includes the combined vector of each character; finally, the role entities included in the text are determined according to the combined vector sequence and the pre-trained recognition model. In recognizing role entities, the present disclosure considers not only each character included in the text to be recognized but also the associated words related to each character, thereby improving the accuracy of recognizing role entities.
Referring now to FIG. 10, a schematic structural diagram is shown of an electronic device 300 (for example, the execution subject of the method for recognizing roles in text in the above embodiments) suitable for implementing embodiments of the present disclosure. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 10 is only an example and should not impose any limitation on the function and scope of use of embodiments of the present disclosure.
As shown in FIG. 10, the electronic device 300 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit) 301, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 300. The processing apparatus 301, the ROM 302, and the RAM 303 are connected to one another through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following apparatuses may be connected to the I/O interface 305: an input apparatus 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 309. The communication apparatus 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows the electronic device 300 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or present; more or fewer apparatuses may alternatively be implemented or present.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication apparatus 309, installed from the storage apparatus 308, or installed from the ROM 302. When the computer program is executed by the processing apparatus 301, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted over any appropriate medium, including but not limited to electrical wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some implementations, terminal devices and servers may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer-readable medium may be included in the electronic device, or it may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain each character included in a text to be recognized and a character vector corresponding to each character; determine a word vector corresponding to the associated word of each character in the text to be recognized, where the associated word is determined from the combined words corresponding to the character, and a combined word is formed by the character and a preset number of adjacent characters; compose, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the sequence including the combined vector corresponding to each character in the text; and determine, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the architecture, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a module does not in some cases constitute a limitation on the module itself; for example, the obtaining module may also be described as "a module that obtains each character and the character vector corresponding to each character".
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, Example 1 provides a method for recognizing roles in text, the method including: obtaining each character included in a text to be recognized and a character vector corresponding to each character; determining a word vector corresponding to the associated word of each character in the text to be recognized, where the associated word is determined from the combined words corresponding to the character, and a combined word is formed by the character and a preset number of characters adjacent to the character; composing, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence including the combined vector corresponding to each character in the text to be recognized; and determining, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized.
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, where determining the word vector corresponding to the associated word of each character in the text to be recognized includes: for each character, obtaining the combined words formed by the character and a preset number of characters adjacent to the character; and taking, among the combined words, those that match a preset word dictionary as the associated words corresponding to the character, and obtaining the word vectors corresponding to the associated words.
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 1, where determining the role entities included in the text to be recognized according to the combined vector sequence and the pre-trained recognition model includes: inputting the combined vector sequence into the recognition model to obtain an attribute label, output by the recognition model, corresponding to each character in the text to be recognized, the attribute label indicating whether the character belongs to a role entity; and determining the role entities included in the text to be recognized according to the attribute label corresponding to each character in the text to be recognized.
According to one or more embodiments of the present disclosure, Example 4 provides the method of Example 3, where the attribute label is further used to indicate whether the position of the character within the role entity is a starting position, an ending position, or an intermediate position.
According to one or more embodiments of the present disclosure, Example 5 provides the method of Example 4, where determining the role entities included in the text to be recognized according to the attribute label corresponding to each character includes: if the attribute label corresponding to a target character indicates that the target character belongs to a role entity, determining the role entity including the target character according to the position of the target character within the role entity indicated by the attribute label, the target character being any character in the text to be recognized.
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 1, where the text to be recognized includes a first text to be recognized and a second text to be recognized; the first text to be recognized corresponds to any dialogue sentence in a specified total text, and the second text to be recognized corresponds to sentences in the specified total text whose distance from the dialogue sentence corresponding to the first text to be recognized satisfies a preset condition. After the role entities included in the text to be recognized are determined according to the combined vector sequence and the pre-trained recognition model, the method further includes: determining attribute features corresponding to each role entity included in the text to be recognized, the attribute features including one or more of: a first positional relationship between the role entity and the first text to be recognized, a second positional relationship between the text to which the role entity belongs and the first text to be recognized, and a dialogue attribute of the text to which the role entity belongs; for each role entity, inputting the first text to be recognized, the second text to be recognized, the role entity, and the attribute features corresponding to the role entity into a pre-trained attribution recognition model to obtain a degree of match, output by the attribution recognition model, between the role entity and the first text to be recognized; and determining, according to the degree of match between each role entity and the first text to be recognized, the target role entity to which the dialogue sentence corresponding to the first text to be recognized belongs.
According to one or more embodiments of the present disclosure, Example 7 provides the method of any one of Examples 1 to 6, where the recognition model is trained as follows: obtaining a character vector corresponding to each training character in a training text, a word vector corresponding to the training associated word of the training character in the training text, and annotation data corresponding to the training text, where the training associated word is determined from the training combined words corresponding to the training character, a training combined word is formed by the training character and a preset number of training characters adjacent to the training character, and the annotation data includes the annotated role entities included in the training text; for each training character, composing the character vector corresponding to the training character and the word vector corresponding to the training associated word of the training character into a training combined vector corresponding to the training character, so as to obtain a training combined vector sequence corresponding to the training text, the sequence including the training combined vector corresponding to each training character; and inputting the training combined vector sequence into the recognition model and training the recognition model according to the output of the recognition model and the annotation data.
According to one or more embodiments of the present disclosure, Example 8 provides an apparatus for recognizing roles in text, the apparatus including: an obtaining module configured to obtain each character included in a text to be recognized and a character vector corresponding to each character; a determining module configured to determine a word vector corresponding to the associated word of each character in the text to be recognized, the associated word being determined from the combined words corresponding to the character, and a combined word being formed by the character and a preset number of characters adjacent to the character; a processing module configured to compose, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the sequence including the combined vector corresponding to each character in the text to be recognized; and a recognition module configured to determine, according to the combined vector sequence and a pre-trained recognition model, the role entities included in the text to be recognized.
According to one or more embodiments of the present disclosure, Example 9 provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processing apparatus, implements the method of any one of Examples 1 to 7.
According to one or more embodiments of the present disclosure, Example 10 provides an electronic device, including: a storage apparatus on which a computer program is stored; and a processing apparatus configured to execute the computer program in the storage apparatus to implement the method of any one of Examples 1 to 7.
According to one or more embodiments of the present disclosure, Example 11 provides a computer program, including instructions that, when executed by a processor, cause the processor to perform the method of any one of Examples 1 to 7.
According to one or more embodiments of the present disclosure, Example 12 provides a computer program product, including instructions that, when executed by a processor, cause the processor to perform the method of any one of Examples 1 to 7.
The above description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a specific order, this should not be understood as requiring that the operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above; rather, the specific features and acts described above are merely example forms of implementing the claims. Regarding the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments concerning the method and will not be elaborated here.

Claims (12)

  1. A method for recognizing roles in text, comprising:
    obtaining each character included in a text to be recognized and a character vector corresponding to each character;
    determining a word vector corresponding to an associated word of each character in the text to be recognized, wherein the associated word is determined from combined words corresponding to the character, and a combined word is formed by the character and a preset number of characters adjacent to the character;
    composing, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence comprising the combined vector corresponding to each character in the text to be recognized; and
    determining, according to the combined vector sequence and a pre-trained recognition model, role entities included in the text to be recognized.
  2. The method according to claim 1, wherein determining the word vector corresponding to the associated word of each character in the text to be recognized comprises:
    for each character, obtaining the combined words formed by the character and a preset number of characters adjacent to the character; and
    taking, among the combined words, those that match a preset word dictionary as the associated words corresponding to the character, and obtaining the word vectors corresponding to the associated words.
  3. The method according to claim 1, wherein determining, according to the combined vector sequence and the pre-trained recognition model, the role entities included in the text to be recognized comprises:
    inputting the combined vector sequence into the recognition model to obtain an attribute label, output by the recognition model, corresponding to each character in the text to be recognized, the attribute label indicating whether the character belongs to a role entity; and
    determining the role entities included in the text to be recognized according to the attribute label corresponding to each character in the text to be recognized.
  4. The method according to claim 3, wherein the attribute label is further used to indicate whether the position of the character within the role entity is a starting position, an ending position, or an intermediate position.
  5. The method according to claim 4, wherein determining the role entities included in the text to be recognized according to the attribute label corresponding to each character in the text to be recognized comprises:
    if the attribute label corresponding to a target character indicates that the target character belongs to a role entity, determining the role entity including the target character according to the position of the target character within the role entity indicated by the attribute label, the target character being any character in the text to be recognized.
  6. The method according to claim 1, wherein the text to be recognized comprises a first text to be recognized and a second text to be recognized, the first text to be recognized corresponds to any dialogue sentence in a specified total text, and the second text to be recognized corresponds to sentences in the specified total text whose distance from the dialogue sentence corresponding to the first text to be recognized satisfies a preset condition;
    after determining, according to the combined vector sequence and the pre-trained recognition model, the role entities included in the text to be recognized, the method further comprises:
    determining attribute features corresponding to each role entity included in the text to be recognized, the attribute features comprising one or more of: a first positional relationship between the role entity and the first text to be recognized, a second positional relationship between the text to which the role entity belongs and the first text to be recognized, and a dialogue attribute of the text to which the role entity belongs;
    for each role entity, inputting the first text to be recognized, the second text to be recognized, the role entity, and the attribute features corresponding to the role entity into a pre-trained attribution recognition model to obtain a degree of match, output by the attribution recognition model, between the role entity and the first text to be recognized; and
    determining, according to the degree of match between each role entity and the first text to be recognized, a target role entity to which the dialogue sentence corresponding to the first text to be recognized belongs.
  7. The method according to any one of claims 1-6, wherein the recognition model is trained as follows:
    obtaining a character vector corresponding to each training character in a training text, a word vector corresponding to a training associated word of the training character in the training text, and annotation data corresponding to the training text, wherein the training associated word is determined from training combined words corresponding to the training character, a training combined word is formed by the training character and a preset number of training characters adjacent to the training character, and the annotation data includes annotated role entities included in the training text;
    for each training character, composing the character vector corresponding to the training character and the word vector corresponding to the training associated word of the training character into a training combined vector corresponding to the training character, so as to obtain a training combined vector sequence corresponding to the training text, the training combined vector sequence including the training combined vector corresponding to each training character; and
    inputting the training combined vector sequence into the recognition model, and training the recognition model according to the output of the recognition model and the annotation data.
  8. An apparatus for recognizing roles in text, comprising:
    an obtaining module configured to obtain each character included in a text to be recognized and a character vector corresponding to each character;
    a determining module configured to determine a word vector corresponding to an associated word of each character in the text to be recognized, wherein the associated word is determined from combined words corresponding to the character, and a combined word is formed by the character and a preset number of characters adjacent to the character;
    a processing module configured to compose, from the character vector corresponding to each character and the word vector corresponding to the associated word of the character, a combined vector corresponding to the character, so as to obtain a combined vector sequence corresponding to the text to be recognized, the combined vector sequence comprising the combined vector corresponding to each character in the text to be recognized; and
    a recognition module configured to determine, according to the combined vector sequence and a pre-trained recognition model, role entities included in the text to be recognized.
  9. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processing apparatus, implements the method according to any one of claims 1-7.
  10. An electronic device, comprising:
    a storage apparatus on which a computer program is stored; and
    a processing apparatus configured to execute the computer program in the storage apparatus to implement the method according to any one of claims 1-7.
  11. A computer program, comprising:
    instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-7.
  12. A computer program product comprising instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-7.
PCT/CN2022/073126 2021-02-02 2022-01-21 Method and apparatus for recognizing roles in text, readable medium, and electronic device WO2022166613A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110145123.4A CN112906380B (zh) 2021-02-02 Method and apparatus for recognizing roles in text, readable medium, and electronic device
CN202110145123.4 2021-02-02

Publications (1)

Publication Number Publication Date
WO2022166613A1 true WO2022166613A1 (zh) 2022-08-11

Family

ID=76121552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/073126 WO2022166613A1 (zh) 2021-02-02 2022-01-21 文本中角色的识别方法、装置、可读介质和电子设备

Country Status (1)

Country Link
WO (1) WO2022166613A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222330A (zh) * 2019-04-26 2019-09-10 平安科技(深圳)有限公司 Semantic recognition method and apparatus, storage medium, and computer device
CN110334340A (zh) * 2019-05-06 2019-10-15 北京泰迪熊移动科技有限公司 Rule-fusion-based semantic analysis method and apparatus, and readable storage medium
CN111104800A (zh) * 2019-12-24 2020-05-05 东软集团股份有限公司 Entity recognition method, apparatus, device, storage medium, and program product
CN111368535A (zh) * 2018-12-26 2020-07-03 珠海金山网络游戏科技有限公司 Sensitive word recognition method, apparatus, and device
CN111428493A (zh) * 2020-03-06 2020-07-17 中国平安人寿保险股份有限公司 Entity relationship acquisition method, apparatus, device, and storage medium
CN111669757A (zh) * 2020-06-15 2020-09-15 国家计算机网络与信息安全管理中心 Terminal fraud call recognition method based on word vectors of call text
US20210026874A1 (en) * 2018-07-24 2021-01-28 Ntt Docomo, Inc. Document classification device and trained model
CN112906380A (zh) * 2021-02-02 2021-06-04 北京有竹居网络技术有限公司 Method and apparatus for recognizing roles in text, readable medium, and electronic device


Also Published As

Publication number Publication date
CN112906380A (zh) 2021-06-04

Similar Documents

Publication Publication Date Title
CN111177393B (zh) Knowledge graph construction method and apparatus, electronic device, and storage medium
CN108985358B (zh) Emotion recognition method, apparatus, device, and storage medium
WO2022166621A1 (zh) Dialogue attribution recognition method and apparatus, readable medium, and electronic device
CN113470619B (zh) Speech recognition method, apparatus, medium, and device
WO2017024553A1 (zh) Information sentiment analysis method and system
CN107783960A (zh) Method, apparatus, and device for extracting information
CN111274815A (zh) Method and apparatus for mining entity focus points in text
WO2020182123A1 (zh) Method and apparatus for pushing sentences
CN112883968B (zh) Image character recognition method, apparatus, medium, and electronic device
WO2022247562A1 (zh) Multimodal data retrieval method and apparatus, medium, and electronic device
CN113158656B (zh) Sarcastic content recognition method and apparatus, electronic device, and storage medium
CN113688256B (zh) Clinical knowledge base construction method and apparatus
WO2023142914A1 (zh) Date recognition method and apparatus, readable medium, and electronic device
CN111090993A (zh) Attribute alignment model training method and apparatus
WO2022161122A1 (zh) Meeting minutes processing method, apparatus, device, and medium
CN111555960A (zh) Information generation method
CN115270717A (zh) Stance detection method, apparatus, device, and medium
CN111078849A (zh) Method and apparatus for outputting information
CN116629236A (zh) To-do item extraction method, apparatus, device, and storage medium
WO2022166613A1 (zh) Method and apparatus for recognizing roles in text, readable medium, and electronic device
WO2022121859A1 (zh) Spoken language information processing method, apparatus, and electronic device
CN114490946A (zh) Similar-case retrieval method, system, and device based on the XLNet model
CN116821327A (zh) Text data processing method, apparatus, device, readable storage medium, and product
CN114492400A (zh) Title entity recognition model training method, title entity recognition method, and apparatus
CN110276001B (zh) Inventory page recognition method, apparatus, computing device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22748892

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22748892

Country of ref document: EP

Kind code of ref document: A1