CN113051384A - User portrait extraction method based on conversation and related device


Info

Publication number: CN113051384A
Application number: CN202110458709.6A
Authority: CN (China)
Prior art keywords: target, conversation, user, round, sentence
Legal status: Granted; Active
Other versions: CN113051384B (granted publication)
Other languages: Chinese (zh)
Inventors: 孙梓淇, 张智, 白祚, 莫洋
Current and original assignee: Ping An Life Insurance Company of China Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G06F16/335: Filtering based on additional data, e.g. user or group profiles
    • G06F16/35: Clustering; Classification
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/237: Lexical tools
    • G06F40/242: Dictionaries
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295: Named entity recognition
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks

Abstract

The embodiments of the present application provide a dialogue-based user portrait extraction method and a related device. The method includes: acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue; performing entity recognition on the first dialogue sentence and the second dialogue sentence; recognizing pronouns in the first dialogue sentence and the second dialogue sentence, and performing reference resolution on the recognized pronouns based on the entities recorded in a preset data table to obtain a target first dialogue sentence and a target second dialogue sentence; then performing user portrait extraction on the target first dialogue sentence and the target second dialogue sentence; and filtering out the extracted user portraits that belong to the salesman to obtain the user portraits belonging to the user in that round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue. The method and device help improve the accuracy of user portrait extraction.

Description

User portrait extraction method based on conversation and related device
Technical Field
The present application relates to the field of data analysis technologies, and in particular, to a dialogue-based user portrait extraction method and a related device.
Background
Business processing involves a large number of scenarios in which a company communicates with its clients, such as learning about a client's situation, product and service consultation, and after-sales handling. The dialogue information generated in this communication is extremely valuable for user mining and business expansion: extracting user portraits from the dialogue information supports subsequent personalized recommendation and tracking of how users use a product, and can also guide the direction of a conversation and further user portrait mining. Traditional user portrait extraction mainly mines dialogue information manually and with hand-written rules, extracting tags that reflect some of a user's personal information from the user's answers; however, user portraits extracted in this way are incomplete and describe the user with low accuracy.
Disclosure of Invention
In view of the foregoing problems, the present application provides a dialogue-based user portrait extraction method and a related device, which help improve the accuracy of user portrait extraction.
In order to achieve the above object, a first aspect of the embodiments of the present application provides a method for extracting a user portrait based on a dialog, the method including:
acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue;
performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording recognized entities to a preset data table;
identifying pronouns in the first dialogue sentence and the second dialogue sentence, and performing reference resolution on the identified pronouns based on entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule;
and filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
With reference to the first aspect, in a possible implementation manner, the performing the reference resolution on the identified pronouns based on the entities recorded in the preset data table includes:
under the condition that any one round of conversation is the first round of conversation of the multiple rounds of conversations, acquiring an entity identified in the first round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the first round of conversation;
under the condition that any one round of conversation is a target round of conversation in the multiple rounds of conversations except the first round of conversation, acquiring an entity identified in the target round of conversation and an entity identified in a historical round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the target round of conversation and the entity identified in the historical round of conversation; wherein the historical turn dialog is a dialog before the target turn dialog in the multiple turns of dialog.
With reference to the first aspect, in a possible implementation manner, the performing user portrait extraction on the target first dialogue sentence based on a first preset rule includes:
performing sensitive-word and business-script detection on the target first dialogue sentence to obtain a first candidate rule set;
performing rule matching on the target first dialogue sentence by using regular expressions to obtain a second candidate rule set;
taking an intersection of the first candidate rule set and the second candidate rule set to obtain a third candidate rule set;
and under the condition that the rule in the third candidate rule set is the first preset rule, extracting the user portrait in the target first dialog sentence.
With reference to the first aspect, in a possible implementation manner, the obtaining a first candidate rule set includes:
under the condition that no sensitive word is detected in the target first dialogue sentence and the target first dialogue sentence does not match a business script, performing rule matching on the target first dialogue sentence with a rule engine based on a multi-slot Huffman Trie tree to obtain the first candidate rule set.
With reference to the first aspect, in a possible implementation manner, the performing user portrait extraction on the target second dialogue sentence based on a second preset rule includes:
performing the sensitive-word and business-script detection on the target second dialogue sentence to obtain a fourth candidate rule set;
performing rule matching on the target second dialogue sentence by using regular expressions to obtain a fifth candidate rule set;
taking an intersection of the fourth candidate rule set and the fifth candidate rule set to obtain a sixth candidate rule set;
and under the condition that the rule in the sixth candidate rule set is the second preset rule, extracting the user portrait in the target second dialogue statement.
With reference to the first aspect, in a possible implementation manner, after obtaining the user portrait belonging to the user in any round of dialogue, the method further includes:
and performing conflict detection on the user portrait belonging to the user in any one round of conversation, and determining the target user portrait in any one round of conversation by adopting a voting strategy.
A second aspect of the embodiments of the present application provides a user portrait extracting apparatus based on a dialog, including:
the dialogue acquisition module is used for acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue;
the entity recognition module is used for carrying out entity recognition on the first dialogue sentences and the second dialogue sentences and recording recognized entities to a preset data table;
the reference resolution module is used for identifying pronouns in the first dialogue statement and the second dialogue statement and performing reference resolution on the identified pronouns based on the entities recorded in the preset data table to obtain a target first dialogue statement and a target second dialogue statement;
the portrait extraction module is used for performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and on the target second dialogue sentence based on a second preset rule;
and the portrait merging module is used for filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
A third aspect of the embodiments of the present application provides an electronic device, which includes an input device, an output device, a processor adapted to implement one or more instructions, and a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the following steps:
acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue;
performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording recognized entities to a preset data table;
identifying pronouns in the first dialogue sentence and the second dialogue sentence, and performing reference resolution on the identified pronouns based on entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule;
and filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
A fourth aspect of embodiments of the present application provides a computer storage medium having one or more instructions stored thereon, the one or more instructions adapted to be loaded by a processor and to perform the following steps:
acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue;
performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording recognized entities to a preset data table;
identifying pronouns in the first dialogue sentence and the second dialogue sentence, and performing reference resolution on the identified pronouns based on entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule;
and filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
Compared with the prior art, in the embodiments of the present application a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue are acquired; entity recognition is performed on the first dialogue sentence and the second dialogue sentence, and the recognized entities are recorded in a preset data table; pronouns in the first dialogue sentence and the second dialogue sentence are recognized, and reference resolution is performed on the recognized pronouns based on the entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence; user portrait extraction is performed on the target first dialogue sentence based on a first preset rule and on the target second dialogue sentence based on a second preset rule; and the user portraits belonging to the salesman are filtered out of those extracted from the two target dialogue sentences to obtain the user portraits belonging to the user in the round of dialogue, and the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue are merged. In this way, the user's and the salesman's dialogue sentences in the multiple rounds of dialogue are processed with entity recognition and reference resolution so that the pronouns in the dialogue are mapped to entities. This alleviates the difficulty, in user portrait extraction based on a single dialogue sentence, of accurately recognizing a pronoun as an entity, and makes the dialogue sentences in the multiple rounds of dialogue more complete. On the one hand, this raises the chance of extracting a user portrait from a single sentence, so that more user portraits are extracted and the user is portrayed more accurately; on the other hand, it supports the subsequent identity judgment on the dialogue sentences obtained after reference resolution, so that the user portraits describing the salesman are filtered out.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a network system architecture according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for extracting a user portrait based on a dialog according to an embodiment of the present application;
FIG. 3 is an exemplary diagram of user portrait extraction for a target first dialogue sentence according to an embodiment of the present application;
FIG. 4 is an exemplary diagram of user portrait extraction for a target second dialogue sentence according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating another method for extracting a user representation based on a dialog according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a user portrait extraction apparatus based on dialog according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as appearing in the specification, claims and drawings of this application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
The embodiment of the present application provides a dialogue-based user portrait extraction method that can be implemented on the network system architecture shown in fig. 1. Referring to fig. 1, the network system architecture includes a terminal and an electronic device connected through wired or wireless network communication. The terminal is the device used by the user and the salesman, and may be a mobile phone, a tablet, a computer, a Personal Digital Assistant (PDA), or the like. The terminal provides the electronic device with the dialogue sentences between the user and the salesman; these may be real-time dialogue sentences or historical dialogue sentences from log records that a developer extracts from a database. The electronic device includes at least a communication module and a processing module. The communication module integrates a digital protocol interface through which it receives the dialogue sentences submitted by the terminal and forwards them to the processing module, and the processing module performs operations such as entity recognition, reference resolution, user portrait extraction, user portrait filtering, and user portrait merging on the dialogue sentences. The electronic device may be, for example, an independent physical server, a server cluster or distributed system, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, and big data and artificial intelligence platforms.
Based on the network system architecture shown in fig. 1, the following describes in detail a dialog-based user representation extraction method provided in the embodiment of the present application with reference to other drawings.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for extracting a user portrait based on a dialog according to an embodiment of the present application, where the method is applied to an electronic device, and as shown in fig. 2, the method includes steps S21-S25:
and S21, acquiring a first conversation sentence of the user in any pair of conversations and a second conversation sentence of the salesman in any pair of conversations.
In the embodiment of the present disclosure, the multiple rounds of dialogue may be real-time dialogue generated in business communication between the user and the salesman, or dialogue records of the user and the salesman extracted from an offline log, such as dialogue records generated in a customer service system or in telephone communication. The salesman side includes but is not limited to human service staff, intelligent dialogue systems, and dialogue robots.
And S22, performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording the recognized entities to a preset data table.
In the embodiment of the disclosure, entity recognition may be performed on the first dialogue sentence and the second dialogue sentence with a keyword dictionary or with a named entity recognition model, and each recognized entity is recorded for use in subsequent entity inheritance and reference resolution.
In one possible embodiment, the performing entity recognition on the first dialog sentence and the second dialog sentence includes:
performing the following operations on either of the first dialogue sentence and the second dialogue sentence:
performing word segmentation on the dialogue sentence to obtain a word sequence; mapping the word sequence into a word vector sequence through a pre-trained or randomly initialized word embedding (Embedding) matrix; inputting the word vector sequence into a bidirectional LSTM for feature extraction to obtain a feature sequence corresponding to the dialogue sentence; and inputting the feature sequence into a CRF (conditional random field) layer to perform sentence-level sequence labeling on the word sequence, obtaining a tag sequence corresponding to the word sequence, and obtaining the entities in the dialogue sentence based on the tag sequence. For example, for the salesman's second dialogue sentence "do you have children", the word sequence after segmentation is mapped to the word vector sequence (x1, x2, x3, ..., x6) and used as the input of the bidirectional LSTM; the bidirectional LSTM concatenates, position by position, the output of the last forward layer with the output of the last backward layer to obtain the corresponding feature sequence (h1, h2, h3, ..., h6), which is then used as the input of the CRF layer. In that layer, sentence-level sequence labeling is performed under the BIO scheme: B indicates the beginning of an entity word, I indicates the inside of an entity word, and O indicates a non-entity word; the entity words may be predefined category words. The CRF layer computes the tag sequence (y1, y2, y3, ..., y6) and the probability of each word being an entity word, and a word whose probability is greater than or equal to a preset value is determined to be an entity word, so that the entities in the second dialogue sentence are recognized.
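As an illustration only, the following Python sketch shows the feature-extraction stage just described, assuming PyTorch is available. The CRF layer is abbreviated here to a per-token emission projection over the BIO tags; a full implementation would add CRF transition scores and Viterbi decoding. All class and variable names are hypothetical, not from the patent.

```python
import torch
import torch.nn as nn

BIO_TAGS = ["B", "I", "O"]  # B: entity word begins, I: inside an entity word, O: non-entity word

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=64):
        super().__init__()
        # Word embedding matrix; may be pre-trained or randomly initialized
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM: forward and backward outputs are concatenated per position
        self.lstm = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        # Emission scores over the BIO tag set (stand-in for the full CRF layer)
        self.emissions = nn.Linear(2 * hidden_dim, len(BIO_TAGS))

    def forward(self, token_ids):
        x = self.embedding(token_ids)   # (batch, seq_len, embed_dim): (x1, ..., x6)
        h, _ = self.lstm(x)             # (batch, seq_len, 2 * hidden_dim): (h1, ..., h6)
        return self.emissions(h)        # per-token BIO scores for the CRF to decode

# Toy usage: a six-token input such as the segmented sentence "do you have children"
model = BiLSTMEncoder()
token_ids = torch.randint(0, 5000, (1, 6))
print(model(token_ids).shape)  # torch.Size([1, 6, 3])
```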
And S23, recognizing pronouns in the first dialogue statement and the second dialogue statement, and performing reference resolution on the recognized pronouns based on the entities recorded in the preset data table to obtain a target first dialogue statement and a target second dialogue statement.
In the embodiment of the present disclosure, the target first dialogue sentence is the dialogue sentence obtained by performing reference resolution on the pronouns in the first dialogue sentence; similarly, the target second dialogue sentence is the dialogue sentence obtained by performing reference resolution on the pronouns in the second dialogue sentence. The preset data table is used to store the entities recognized in each round of the multiple rounds of dialogue, and the pronouns can be recognized with a keyword dictionary, regular expressions, and the like.
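A minimal sketch of the pronoun recognition mentioned above, combining a keyword dictionary with a regular expression; the English pronoun list is an illustrative assumption, since the patent targets Chinese dialogue.

```python
import re

# Hypothetical pronoun dictionary compiled into a regular expression;
# a production system would use the pronouns of the dialogue language.
PRONOUNS = re.compile(r"\b(he|she|it|they|him|her|them)\b", re.IGNORECASE)

def find_pronouns(sentence: str) -> list:
    """Return every pronoun occurrence found in the sentence."""
    return PRONOUNS.findall(sentence)

print(find_pronouns("He said they would call her"))  # ['He', 'they', 'her']
```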
In a possible implementation manner, the performing the reference resolution on the identified pronouns based on the entities recorded in the preset data table includes:
under the condition that any one round of conversation is the first round of conversation of the multiple rounds of conversations, acquiring an entity identified in the first round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the first round of conversation;
under the condition that any one round of conversation is a target round of conversation in the multiple rounds of conversations except the first round of conversation, acquiring an entity identified in the target round of conversation and an entity identified in a historical round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the target round of conversation and the entity identified in the historical round of conversation; wherein the history turn dialog refers to a dialog before the target turn dialog in the multiple turns of dialog.
It is to be understood that when the round of dialogue is the first round, only the entities recognized in the first round are recorded in the preset data table; when it is a non-first round (i.e., the target round of dialogue), the entities recognized in the target round and in the history rounds before it are recorded in the preset data table. A reference resolution model is used to resolve the pronouns: it pairs each recognized pronoun with the entities recorded in the preset data table, computes a score for each pair, and takes the entity in the highest-scoring pair as the antecedent of the pronoun to complete the reference resolution. For example: A: Do you have children? B: He is quite tall already. Pairing the pronoun "he" with the entities in the preset data table shows that "he" actually refers to the "child" in A's dialogue sentence, and the target first dialogue sentence obtained after reference resolution may be "(My) child is quite tall already".
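The following sketch shows, under stated assumptions, the round-dependent candidate lookup and the pair-and-score resolution described above. The scoring function stands in for the trained reference resolution model and is purely hypothetical.

```python
from typing import Callable

def candidate_entities(data_table: dict, round_idx: int) -> list:
    """data_table maps a round index to the entities recognized in that round."""
    if round_idx == 0:
        # First round: only its own entities are recorded in the table
        return list(data_table.get(0, []))
    entities = []
    for r in range(round_idx + 1):  # target round plus all history rounds before it
        entities.extend(data_table.get(r, []))
    return entities

def resolve(pronoun: str, sentence: str, data_table: dict, round_idx: int,
            score: Callable[[str, str, str], float]) -> str:
    """Pair the pronoun with every candidate entity, score each pair, and
    substitute the highest-scoring antecedent for the pronoun."""
    candidates = candidate_entities(data_table, round_idx)
    best = max(candidates, key=lambda entity: score(pronoun, entity, sentence))
    return sentence.replace(pronoun, best, 1)

# Toy usage with a trivial scorer standing in for the resolution model
table = {0: ["child"], 1: ["school"]}
scorer = lambda pron, ent, sent: 1.0 if ent == "child" else 0.5
print(resolve("he", "he is quite tall already", table, 1, scorer))
# -> "child is quite tall already"
```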
S24, performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule.
In the embodiment of the present disclosure, the first preset rule is a matching rule designed for the user identity, and the second preset rule is a matching rule designed for the salesman identity.
In one possible embodiment, as shown in fig. 3, the user portrait extraction on the target first dialogue sentence based on the first preset rule includes steps S31-S34:
S31, performing sensitive-word and business-script detection on the target first dialogue sentence to obtain a first candidate rule set;
S32, performing rule matching on the target first dialogue sentence by using regular expressions to obtain a second candidate rule set;
S33, taking the intersection of the first candidate rule set and the second candidate rule set to obtain a third candidate rule set;
S34, extracting the user portrait in the target first dialogue sentence under the condition that the rules in the third candidate rule set belong to the first preset rule.
As for obtaining the first candidate rule set: if no sensitive word is detected in the target first dialogue sentence and the target first dialogue sentence does not match a business script, a rule engine based on a multi-slot Huffman Trie tree is used to perform rule matching on the target first dialogue sentence to obtain the first candidate rule set.
In one possible embodiment, as shown in fig. 4, the user portrait extraction on the target second dialogue sentence based on the second preset rule includes steps S41-S44:
S41, performing the sensitive-word and business-script detection on the target second dialogue sentence to obtain a fourth candidate rule set;
S42, performing rule matching on the target second dialogue sentence by using regular expressions to obtain a fifth candidate rule set;
S43, taking the intersection of the fourth candidate rule set and the fifth candidate rule set to obtain a sixth candidate rule set;
S44, extracting the user portrait in the target second dialogue sentence under the condition that the rules in the sixth candidate rule set belong to the second preset rule.
As for obtaining the fourth candidate rule set: if no sensitive word is detected in the target second dialogue sentence and the target second dialogue sentence does not match a business script, the rule engine based on the multi-slot Huffman Trie tree is used to perform rule matching on the target second dialogue sentence to obtain the fourth candidate rule set.
Specifically, different rules are used to match the user's dialogue sentences and the salesman's dialogue sentences. Before rule matching, sensitive-word and business-script detection is performed on the dialogue sentences, because some sensitive words and business scripts are preset from which no user portrait may be extracted; for example, opening phrases such as "I heard that / a friend of mine / a certain relative of mine" would contaminate the mined user portrait, so they can be classified as business-script sentences and excluded. Optionally, regular expressions can be used for the sensitive-word and business-script detection. If the target first dialogue sentence and the target second dialogue sentence contain no sensitive words and do not belong to the business-script class, rule matching is performed with a rule engine based on a multi-slot Huffman Trie tree. This rule engine defines rule templates in advance; for the target first dialogue sentence and the target second dialogue sentence, the corresponding slots are matched first. For example, in "Beijing in winter is really nice", "winter" hits the season slot and "Beijing" hits the place slot. For each hit slot, the leaf nodes of the corresponding Huffman Trie tree are searched recursively within the slot to obtain the rule set it contains, and the candidate rule set is obtained by combining and intersecting the rule sets of the slots. Although the rule engine based on the multi-slot Huffman Trie tree optimizes rule-matching performance, there is business logic it cannot cover, and the rule templates supported by regular expressions have wider coverage; therefore regular expressions are used to match the dialogue sentences again as a logical supplement. Since the same rule may appear in the candidate rule sets obtained by the two passes of rule matching (namely the first and second candidate rule sets, and the fourth and fifth candidate rule sets), the intersection is taken to deduplicate the shared rules. Finally, the third candidate rule set is taken as the rules hit by the target first dialogue sentence and the sixth candidate rule set as the rules hit by the target second dialogue sentence; whether the rules in the third candidate rule set belong to the first preset rule is judged, and if so, user portrait extraction is performed on the target first dialogue sentence; whether the rules in the sixth candidate rule set belong to the second preset rule is judged, and if so, user portrait extraction is performed on the target second dialogue sentence.
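A much-simplified sketch of the two-pass matching in steps S31-S33, using flat dictionaries where the patented engine walks a multi-slot Huffman Trie tree; the slot lexicons, rule names, and patterns are invented for illustration.

```python
import re

# Pass 0: sensitive words / business scripts from which no portrait may be mined
SENSITIVE = re.compile(r"i heard that|a friend of mine|a relative of mine")
# Slot lexicons and the rule sets reachable under each slot (flattened here;
# the real engine searches the leaf nodes of the trie within each hit slot)
SLOT_LEXICON = {"season": {"winter", "summer"}, "place": {"Beijing", "Shanghai"}}
SLOT_RULES = {"season": {"rule_season_pref"}, "place": {"rule_location"}}
# Pass 2: regular-expression rule templates used as the logical supplement
REGEX_RULES = {"rule_location": re.compile(r"(Beijing|Shanghai)"),
               "rule_season_pref": re.compile(r"winter .* (nice|good)")}

def first_candidate_set(sentence: str) -> set:
    if SENSITIVE.search(sentence.lower()):
        return set()  # sensitive word / business script: extract nothing
    hits = set()
    for slot, words in SLOT_LEXICON.items():
        if any(w in sentence for w in words):
            hits |= SLOT_RULES[slot]  # rules found under the hit slot
    return hits

def second_candidate_set(sentence: str) -> set:
    return {name for name, rx in REGEX_RULES.items() if rx.search(sentence)}

sentence = "Beijing in winter is really nice"
third_candidate_set = first_candidate_set(sentence) & second_candidate_set(sentence)
print(third_candidate_set)  # rules hit by both passes, per step S33
```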
Alternatively, a trained natural language processing model may be used to perform user portrait extraction on the target first dialogue sentence and the target second dialogue sentence. The target first dialogue sentence is preprocessed, where the preprocessing includes but is not limited to error correction, simplified/traditional Chinese conversion, and special-symbol handling; the preprocessed target first dialogue sentence is then input into the trained natural language processing model for tag classification to obtain the tags of at least one type of user portrait, and the corresponding user portraits are obtained from the tags. The user portraits may concern gender, marital status, whether the user has children, and so on. Each tag is represented as a one-hot vector with at least two dimensions. Taking "gender" as an example, it is flattened into two dimensions: for a male the first dimension is 1 and the second is 0; for a female the second dimension is 1 and the first is 0. As another example, if the business particularly cares about marital and child status, that portrait can be divided into dimensions such as "unmarried without children, unmarried with children, married without children, married with children". Similarly to the target first dialogue sentence, user portrait extraction can also be performed on the target second dialogue sentence with the above natural language processing model, which is not repeated here. In this embodiment, the tags and tag dimensions of the user portrait can be set according to business requirements, which makes user portrait extraction more flexible.
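A small sketch of the flattened one-hot tag scheme described above; the tag names and dimension lists are set per business need and are hypothetical here.

```python
# Each portrait tag is flattened into named dimensions and encoded one-hot
LABEL_DIMS = {
    "gender": ["male", "female"],
    "marital_children": ["unmarried_no_children", "unmarried_with_children",
                         "married_no_children", "married_with_children"],
}

def one_hot(tag: str, value: str) -> list:
    dims = LABEL_DIMS[tag]
    return [1 if v == value else 0 for v in dims]

print(one_hot("gender", "male"))    # [1, 0]
print(one_hot("gender", "female"))  # [0, 1]
```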
And S25, filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
In the disclosed embodiment, the rules for the user's dialogue sentences are relatively loose: as long as the target first dialogue sentence hits the first preset rule, the user portrait extracted from it is taken to be a portrait of the user. The rules for the salesman's dialogue sentences are relatively strict: besides hitting the second preset rule, the subject of the target second dialogue sentence is also restricted. If the subject of the target second dialogue sentence does not contain preset words such as "you", the sentence is considered to describe the salesman himself, and the user portrait extracted from it is a portrait of the salesman and is filtered out. Only the user portraits belonging to the user are kept for each single sentence, and finally the target user portraits belonging to the user extracted from the multiple rounds of dialogue are merged to obtain the user's complete user portrait.
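A minimal sketch of the identity filter just described, assuming a simple subject test on the salesman's sentence; the subject-word set is illustrative, and a real system would use the preset words of the dialogue language.

```python
from typing import Optional

# Preset subject words indicating the salesman is talking about the user
USER_SUBJECT_WORDS = {"you", "your"}

def keep_portrait(second_sentence: str, portrait: dict) -> Optional[dict]:
    """Keep a portrait mined from the salesman's sentence only when the
    sentence addresses the user; otherwise it describes the salesman."""
    tokens = second_sentence.lower().split()
    if any(w in tokens for w in USER_SUBJECT_WORDS):
        return portrait
    return None  # portrait of the salesman himself: filter it out

print(keep_portrait("do you have children", {"has_children": True}))      # kept
print(keep_portrait("my daughter is in school", {"has_children": True}))  # None
```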
In a possible embodiment, after obtaining the user portrait belonging to the user in any round of dialogue, the method further includes:
and performing conflict detection on the user portrait belonging to the user in any one round of conversation, and determining the target user portrait in any one round of conversation by adopting a voting strategy.
The target user portrait refers to the user portrait belonging to the user that is obtained after conflict detection. For example, if the user portrait of the user is recognized from 5 sentences of a certain round of dialogue, of which 3 suggest that the user is female and 1 suggests that the user is male, then by majority vote the target user portrait in that round of dialogue is female.
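A one-function sketch of the voting strategy in this example; ties and empty inputs are ignored for brevity.

```python
from collections import Counter

def vote(values: list) -> str:
    """Return the portrait value extracted most often within the round."""
    return Counter(values).most_common(1)[0][0]

print(vote(["female", "female", "female", "male"]))  # female
```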
It can be seen that in the embodiment of the application, a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue are acquired; entity recognition is performed on the first dialogue sentence and the second dialogue sentence, and the recognized entities are recorded in a preset data table; pronouns in the first dialogue sentence and the second dialogue sentence are recognized, and reference resolution is performed on the recognized pronouns based on the entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence; user portrait extraction is performed on the target first dialogue sentence based on a first preset rule and on the target second dialogue sentence based on a second preset rule; and the user portraits belonging to the salesman are filtered out of those extracted from the two target dialogue sentences to obtain the user portraits belonging to the user in the round of dialogue, and the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue are merged. In this way, the user's and the salesman's dialogue sentences in the multiple rounds of dialogue are processed with entity recognition and reference resolution so that the pronouns in the dialogue are mapped to entities. This alleviates the difficulty, in user portrait extraction based on a single dialogue sentence, of accurately recognizing a pronoun as an entity, and makes the dialogue sentences in the multiple rounds of dialogue more complete. On the one hand, this raises the chance of extracting a user portrait from a single sentence, so that more user portraits are extracted and the user is portrayed more accurately; on the other hand, it supports the subsequent identity judgment on the dialogue sentences obtained after reference resolution, so that the user portraits describing the salesman are filtered out.
Referring to fig. 5, fig. 5 is a flowchart illustrating another method for extracting a user portrait based on a dialog according to an embodiment of the present application, which can also be implemented based on the network system architecture shown in fig. 1, as shown in fig. 5, including steps S51-S57:
S51, acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of the multiple rounds of dialogue;
S52, performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording the recognized entities in a preset data table;
S53, recognizing pronouns in the first dialogue sentence and the second dialogue sentence;
if the round of dialogue is the first round of the multiple rounds of dialogue, executing step S54; if the round of dialogue is a target round other than the first round in the multiple rounds of dialogue, executing step S55;
S54, acquiring the entities recognized in the first round of dialogue from the preset data table, and performing reference resolution on the recognized pronouns based on the entities recognized in the first round of dialogue to obtain a target first dialogue sentence and a target second dialogue sentence;
S55, acquiring the entities recognized in the target round of dialogue and the entities recognized in the history rounds of dialogue from the preset data table, and performing reference resolution on the recognized pronouns based on those entities to obtain a target first dialogue sentence and a target second dialogue sentence;
wherein a history round of dialogue is a dialogue before the target round of dialogue in the multiple rounds of dialogue;
S56, performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule;
and S57, filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
The specific implementation of steps S51-S57 has been described in the embodiment shown in fig. 2, and can achieve the same or similar beneficial effects, and therefore, in order to avoid repetition, the detailed description is omitted here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a user portrait extracting apparatus based on a dialog according to an embodiment of the present application, as shown in fig. 6, the apparatus includes:
the dialogue acquisition module 61 is used for acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue;
an entity recognition module 62, configured to perform entity recognition on the first dialog sentence and the second dialog sentence, and record the recognized entities in a preset data table;
a reference resolution module 63, configured to recognize pronouns in the first dialogue sentence and the second dialogue sentence, and perform reference resolution on the recognized pronouns based on entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
a portrait extraction module 64, configured to perform user portrait extraction on the target first dialogue sentence based on a first preset rule, and on the target second dialogue sentence based on a second preset rule;
and a portrait merging module 65, configured to filter out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence, obtain the user portraits belonging to the user in the round of dialogue, and merge the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
In a possible implementation manner, in the aspect of performing reference resolution on the identified pronouns based on the entities recorded in the preset data table, the reference resolution module 63 is specifically configured to:
under the condition that any one round of conversation is the first round of conversation of the multiple rounds of conversations, acquiring an entity identified in the first round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the first round of conversation;
under the condition that any one round of conversation is a target round of conversation in the multiple rounds of conversations except the first round of conversation, acquiring an entity identified in the target round of conversation and an entity identified in a historical round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the target round of conversation and the entity identified in the historical round of conversation; wherein the historical turn dialog is a dialog before the target turn dialog in the multiple turns of dialog.
In one possible implementation, in terms of performing user portrait extraction on the target first dialogue sentence based on a first preset rule, the portrait extraction module 64 is specifically configured to:
performing sensitive-word and business-script detection on the target first dialogue sentence to obtain a first candidate rule set;
performing rule matching on the target first dialogue sentence by using regular expressions to obtain a second candidate rule set;
taking an intersection of the first candidate rule set and the second candidate rule set to obtain a third candidate rule set;
and under the condition that the rule in the third candidate rule set is the first preset rule, extracting the user portrait in the target first dialog sentence.
In one possible implementation, in terms of obtaining the first candidate rule set, the portrait extraction module 64 is specifically configured to:
under the condition that no sensitive word is detected in the target first dialogue sentence and the target first dialogue sentence does not match a business script, performing rule matching on the target first dialogue sentence with the rule engine based on the multi-slot Huffman Trie tree to obtain the first candidate rule set.
In a possible implementation manner, in terms of performing user portrait extraction on the target second dialogue sentence based on the second preset rule, the portrait extraction module 64 is specifically configured to:
performing the sensitive-word and business-script detection on the target second dialogue sentence to obtain a fourth candidate rule set;
performing rule matching on the target second dialogue sentence by using regular expressions to obtain a fifth candidate rule set;
taking an intersection of the fourth candidate rule set and the fifth candidate rule set to obtain a sixth candidate rule set;
and under the condition that the rule in the sixth candidate rule set is the second preset rule, extracting the user portrait in the target second dialogue statement.
In one possible implementation, the portrait merging module 65 is further configured to:
and performing conflict detection on the user portrait belonging to the user in any one round of conversation, and determining the target user portrait in any one round of conversation by adopting a voting strategy.
According to an embodiment of the present application, the units of the dialogue-based user portrait extraction apparatus shown in fig. 6 may be combined into one or several additional units, or one or some of the units may be further split into multiple functionally smaller units, which achieves the same operation without affecting the technical effects of the embodiments of the present application. The units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the dialogue-based user portrait extraction apparatus may also include other units, and in practical applications these functions may be realized with the assistance of other units and through the cooperation of multiple units.
According to another embodiment of the present application, the dialogue-based user portrait extraction apparatus shown in fig. 6 may be constructed, and the dialogue-based user portrait extraction method of the embodiments of the present application implemented, by running a computer program (including program code) capable of executing the steps of the corresponding method shown in fig. 2 or fig. 5 on a general-purpose computing device, such as a computer, that includes a Central Processing Unit (CPU), a random access storage medium (RAM), a read-only storage medium (ROM) and other processing and storage elements. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and run on the above computing device via the computer-readable recording medium.
Based on the description of the method embodiment and the device embodiment, the embodiment of the application further provides an electronic device. Referring to fig. 7, the electronic device includes at least a processor 71, an input device 72, an output device 73, and a computer storage medium 74. The processor 71, input device 72, output device 73, and computer storage medium 74 within the electronic device may be connected by a bus or other means.
A computer storage medium 74 may be stored in the memory of the electronic device, the computer storage medium 74 being used to store a computer program comprising program instructions, the processor 71 being used to execute the program instructions stored by the computer storage medium 74. The processor 71 (or CPU) is a computing core and a control core of the electronic device, and is adapted to implement one or more instructions, and in particular, is adapted to load and execute the one or more instructions so as to implement a corresponding method flow or a corresponding function.
In one embodiment, the processor 71 of the electronic device provided by the embodiment of the present application may be configured to perform a series of dialogue-based user portrait extraction steps:
acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any round of dialogue of multiple rounds of dialogue;
performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording recognized entities to a preset data table;
identifying pronouns in the first dialogue sentence and the second dialogue sentence, and performing reference resolution on the identified pronouns based on entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule;
and filtering out the user portraits belonging to the salesman from those extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in the round of dialogue, and merging the target user portraits belonging to the user extracted from each round of the multiple rounds of dialogue.
In still another embodiment, the processor 71 performs the reference resolution of the identified pronouns based on the entities recorded in the preset data table, including:
under the condition that any one round of conversation is the first round of conversation of the multiple rounds of conversations, acquiring an entity identified in the first round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the first round of conversation;
under the condition that any one round of conversation is a target round of conversation in the multiple rounds of conversations except the first round of conversation, acquiring an entity identified in the target round of conversation and an entity identified in a historical round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entity identified in the target round of conversation and the entity identified in the historical round of conversation; wherein the historical turn dialog is a dialog before the target turn dialog in the multiple turns of dialog.
In another embodiment, the processor 71 performs the user portrait extraction on the target first dialogue sentence based on the first preset rule, including:
performing sensitive-word and business-script detection on the target first dialogue sentence to obtain a first candidate rule set;
performing rule matching on the target first dialogue sentence by using regular expressions to obtain a second candidate rule set;
taking an intersection of the first candidate rule set and the second candidate rule set to obtain a third candidate rule set;
and under the condition that the rule in the third candidate rule set is the first preset rule, extracting the user portrait in the target first dialog sentence.
In yet another embodiment, the processor 71 obtains the first candidate rule set as follows:
under the condition that no sensitive word is detected in the target first dialogue sentence and the target first dialogue sentence does not match a business script, performing rule matching on the target first dialogue sentence with the rule engine based on the multi-slot Huffman Trie tree to obtain the first candidate rule set.
In another embodiment, the processor 71 performs the user profile extraction of the target second spoken sentence based on the second preset rule, including:
performing the sensitive word and the service conversational detection on the target second pair of spoken sentences to obtain a fourth candidate rule set;
carrying out rule matching on the target second pair of the spoken sentences by adopting a regular expression to obtain a fifth candidate rule set;
taking an intersection of the fourth candidate rule set and the fifth candidate rule set to obtain a sixth candidate rule set;
and under the condition that the rule in the sixth candidate rule set is the second preset rule, extracting the user portrait in the target second dialogue statement.
In yet another embodiment, after obtaining the user portrait belonging to the user in any round of dialogue, the processor 71 is further configured to:
and performing conflict detection on the user portrait belonging to the user in any one round of conversation, and determining the target user portrait in any one round of conversation by adopting a voting strategy.
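As a sketch of one possible reading of this step, a conflict may be treated as two different values extracted for the same attribute within a round, broken by majority vote; the tie-breaking details are assumptions, since the embodiment does not spell them out.

```python
# A minimal sketch of conflict detection and voting: a conflict is taken
# to be two different values for the same attribute within one round of
# conversation, resolved by majority vote over the extracted values.
from collections import Counter

def vote_portrait(portrait_items):
    """Resolve conflicting attribute values by majority vote."""
    by_attr = {}
    for attr, val in portrait_items:
        by_attr.setdefault(attr, []).append(val)
    resolved = {}
    for attr, vals in by_attr.items():
        if len(set(vals)) > 1:  # conflict detected for this attribute
            resolved[attr] = Counter(vals).most_common(1)[0][0]
        else:
            resolved[attr] = vals[0]
    return resolved

items = [("age", "32"), ("age", "32"), ("age", "33"), ("city", "Shenzhen")]
print(vote_portrait(items))  # {'age': '32', 'city': 'Shenzhen'}
```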
By way of example, the electronic device includes, but is not limited to, a processor 71, an input device 72, an output device 73, and a computer storage medium 74. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of an electronic device and does not constitute a limitation of the electronic device, which may include more or fewer components than those shown, a combination of certain components, or different components.
It should be noted that, since the processor 71 of the electronic device implements the steps of the above dialogue-based user portrait extraction method by executing the computer program, all the embodiments of the dialogue-based user portrait extraction method are applicable to the electronic device and achieve the same or similar beneficial effects.
An embodiment of the present application further provides a computer storage medium (Memory), which is a memory device in an electronic device and is used to store programs and data. It is understood that the computer storage medium here may include a storage medium built into the terminal and may also include an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores the operating system of the terminal. One or more instructions suitable for being loaded and executed by the processor 71, which may be one or more computer programs (including program code), are also stored in this storage space. The computer storage medium may be a high-speed RAM memory, or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the processor 71. In one embodiment, one or more instructions stored in the computer storage medium may be loaded and executed by the processor 71 to perform the corresponding steps of the dialogue-based user portrait extraction method described above.
Illustratively, the computer program of the computer storage medium includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, since the computer program of the computer storage medium is executed by the processor to implement the steps of the above dialogue-based user portrait extraction method, all the embodiments of the dialogue-based user portrait extraction method are applicable to the computer storage medium and achieve the same or similar beneficial effects.
The foregoing detailed description has used specific examples to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A dialogue-based user portrait extraction method, the method comprising:
acquiring a first dialogue sentence of a user and a second dialogue sentence of a salesman in any one round of conversation of multiple rounds of conversation;
performing entity recognition on the first dialogue sentence and the second dialogue sentence, and recording the recognized entities in a preset data table;
identifying pronouns in the first dialogue sentence and the second dialogue sentence, and performing reference resolution on the identified pronouns based on the entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
performing user portrait extraction on the target first dialogue sentence based on a first preset rule, and performing user portrait extraction on the target second dialogue sentence based on a second preset rule;
and filtering out the user portraits belonging to the salesman from the user portraits extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in any one round of conversation, and merging the target user portraits belonging to the user extracted from each round of conversation in the multiple rounds of conversation.
2. The method according to claim 1, wherein the performing reference resolution on the identified pronouns based on the entities recorded in the preset data table comprises:
when any one round of conversation is the first round of the multiple rounds of conversation, acquiring the entities identified in the first round of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entities identified in the first round of conversation;
when any one round of conversation is a target round of conversation other than the first round in the multiple rounds of conversation, acquiring the entities identified in the target round of conversation and the entities identified in historical rounds of conversation from the preset data table, and performing reference resolution on the identified pronouns based on the entities identified in the target round of conversation and the entities identified in the historical rounds of conversation; wherein a historical round of conversation is a round that precedes the target round of conversation in the multiple rounds of conversation.
3. The method according to claim 1, wherein the performing user portrait extraction on the target first dialogue sentence based on the first preset rule comprises:
performing sensitive word and business script detection on the target first dialogue sentence to obtain a first candidate rule set;
performing rule matching on the target first dialogue sentence using regular expressions to obtain a second candidate rule set;
taking the intersection of the first candidate rule set and the second candidate rule set to obtain a third candidate rule set;
and, when a rule in the third candidate rule set is the first preset rule, extracting the user portrait from the target first dialogue sentence.
4. The method according to claim 3, wherein obtaining the first candidate rule set comprises:
when no sensitive word is detected in the target first dialogue sentence and the target first dialogue sentence does not match a business script, performing rule matching on the target first dialogue sentence with a rule engine based on a multi-slot Huffman Trie to obtain the first candidate rule set.
5. The method according to claim 3, wherein the performing user portrait extraction on the target second dialogue sentence based on the second preset rule comprises:
performing the sensitive word and business script detection on the target second dialogue sentence to obtain a fourth candidate rule set;
performing rule matching on the target second dialogue sentence using regular expressions to obtain a fifth candidate rule set;
taking the intersection of the fourth candidate rule set and the fifth candidate rule set to obtain a sixth candidate rule set;
and, when a rule in the sixth candidate rule set is the second preset rule, extracting the user portrait from the target second dialogue sentence.
6. The method according to any one of claims 1 to 5, wherein after obtaining the user portraits belonging to the user in any one round of conversation, the method further comprises:
performing conflict detection on the user portraits belonging to the user in the round of conversation, and determining the target user portrait of that round using a voting strategy.
7. A dialogue-based user portrait extraction apparatus, the apparatus comprising:
a dialogue acquisition module, configured to acquire a first dialogue sentence of a user and a second dialogue sentence of a salesman in any one round of conversation of multiple rounds of conversation;
an entity recognition module, configured to perform entity recognition on the first dialogue sentence and the second dialogue sentence, and record the recognized entities in a preset data table;
a reference resolution module, configured to identify pronouns in the first dialogue sentence and the second dialogue sentence, and perform reference resolution on the identified pronouns based on the entities recorded in the preset data table to obtain a target first dialogue sentence and a target second dialogue sentence;
a portrait extraction module, configured to perform user portrait extraction on the target first dialogue sentence based on a first preset rule and perform user portrait extraction on the target second dialogue sentence based on a second preset rule;
and a portrait merging module, configured to filter out the user portraits belonging to the salesman from the user portraits extracted from the target first dialogue sentence and the target second dialogue sentence to obtain the user portraits belonging to the user in any one round of conversation, and merge the target user portraits belonging to the user extracted from each round of conversation in the multiple rounds of conversation.
8. The apparatus according to claim 7, wherein, in performing the reference resolution on the identified pronouns based on the entities recorded in the preset data table, the reference resolution module is specifically configured to:
when any one round of conversation is the first round of the multiple rounds of conversation, acquire the entities identified in the first round of conversation from the preset data table, and perform reference resolution on the identified pronouns based on the entities identified in the first round of conversation;
when any one round of conversation is a target round of conversation other than the first round in the multiple rounds of conversation, acquire the entities identified in the target round of conversation and the entities identified in historical rounds of conversation from the preset data table, and perform reference resolution on the identified pronouns based on the entities identified in the target round of conversation and the entities identified in the historical rounds of conversation; wherein a historical round of conversation is a round that precedes the target round of conversation in the multiple rounds of conversation.
9. An electronic device, comprising an input device and an output device, and further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to perform the method of any of claims 1-6.
10. A computer storage medium having stored thereon one or more instructions adapted to be loaded by a processor and to perform the method of any of claims 1-6.
CN202110458709.6A 2021-04-26 2021-04-26 User portrait extraction method based on dialogue and related device Active CN113051384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110458709.6A CN113051384B (en) 2021-04-26 2021-04-26 User portrait extraction method based on dialogue and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110458709.6A CN113051384B (en) 2021-04-26 2021-04-26 User portrait extraction method based on dialogue and related device

Publications (2)

Publication Number Publication Date
CN113051384A true CN113051384A (en) 2021-06-29
CN113051384B CN113051384B (en) 2023-09-19

Family

ID=76520534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110458709.6A Active CN113051384B (en) 2021-04-26 2021-04-26 User portrait extraction method based on dialogue and related device

Country Status (1)

Country Link
CN (1) CN113051384B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140257793A1 (en) * 2013-03-11 2014-09-11 Nuance Communications, Inc. Communicating Context Across Different Components of Multi-Modal Dialog Applications
CN110377715A (en) * 2019-07-23 2019-10-25 天津汇智星源信息技术有限公司 Reasoning type accurate intelligent answering method based on legal knowledge map
CN111914076A (en) * 2020-08-06 2020-11-10 平安科技(深圳)有限公司 User image construction method, system, terminal and storage medium based on man-machine conversation
CN112183060A (en) * 2020-09-28 2021-01-05 重庆工商大学 Reference resolution method of multi-round dialogue system
CN112231556A (en) * 2020-10-13 2021-01-15 中国平安人寿保险股份有限公司 User image drawing method, device, equipment and medium based on conversation scene

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115618877A (en) * 2022-12-20 2023-01-17 北京仁科互动网络技术有限公司 User portrait label determination method and device and electronic equipment
CN117556802A (en) * 2024-01-12 2024-02-13 碳丝路文化传播(成都)有限公司 User portrait method, device, equipment and medium based on large language model
CN117556802B (en) * 2024-01-12 2024-04-05 碳丝路文化传播(成都)有限公司 User portrait method, device, equipment and medium based on large language model

Also Published As

Publication number Publication date
CN113051384B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
WO2019200923A1 (en) Pinyin-based semantic recognition method and device and human-machine conversation system
CN112100349B (en) Multi-round dialogue method and device, electronic equipment and storage medium
CN108287858B (en) Semantic extraction method and device for natural language
US10438586B2 (en) Voice dialog device and voice dialog method
KR102222317B1 (en) Speech recognition method, electronic device, and computer storage medium
CN110020009B (en) Online question and answer method, device and system
CN111046133A (en) Question-answering method, question-answering equipment, storage medium and device based on atlas knowledge base
CN111445898B (en) Language identification method and device, electronic equipment and storage medium
CN111223476B (en) Method and device for extracting voice feature vector, computer equipment and storage medium
CN113051384A (en) User portrait extraction method based on conversation and related device
CN112632248A (en) Question answering method, device, computer equipment and storage medium
CN114218945A (en) Entity identification method, device, server and storage medium
CN113326702A (en) Semantic recognition method and device, electronic equipment and storage medium
CN113948090B (en) Voice detection method, session recording product and computer storage medium
CN111508497B (en) Speech recognition method, device, electronic equipment and storage medium
US11132999B2 (en) Information processing device, information processing method, and non-transitory computer readable storage medium
US11615787B2 (en) Dialogue system and method of controlling the same
CN111324712A (en) Dialogue reply method and server
CN115691503A (en) Voice recognition method and device, electronic equipment and storage medium
CN113421573B (en) Identity recognition model training method, identity recognition method and device
CN111680514A (en) Information processing and model training method, device, equipment and storage medium
CN114242047A (en) Voice processing method and device, electronic equipment and storage medium
CN112037772A (en) Multi-mode-based response obligation detection method, system and device
CN111626059A (en) Information processing method and device
CN113255361B (en) Automatic voice content detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant