CN106776936B - Intelligent interaction method and system - Google Patents

Intelligent interaction method and system

Info

Publication number
CN106776936B
Authority
CN
China
Prior art keywords
information
response
intention
user
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611093525.XA
Other languages
Chinese (zh)
Other versions
CN106776936A
Inventor
蔡丽婷
朱频频
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xiaoi Robot Technology Co Ltd
Original Assignee
Shanghai Xiaoi Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xiaoi Robot Technology Co Ltd
Priority: CN201611093525.XA (CN106776936B)
Priority: US15/390,538 (US11106983B2)
Publication of CN106776936A
Application granted
Publication of CN106776936B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/04: Inference or reasoning models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/242: Query formulation
    • G06F16/2423: Interactive query statement specification based on a database schema
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33: Querying
    • G06F16/335: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation
    • G06N5/022: Knowledge engineering; Knowledge acquisition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Abstract

Embodiments of the invention provide an intelligent interaction method and system, addressing the problem that existing intelligent interaction approaches respond only according to the acquired intention information, so that the response mode is too uniform and the interaction effect is poor. The intelligent interaction method comprises the following steps: acquiring user static information and current request information; performing intention analysis on the current request information to obtain intention information corresponding to the current request information; acquiring interaction background information corresponding to the user static information; and acquiring response information according to the intention information and the interaction background information and sending it to the user.

Description

Intelligent interaction method and system
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent interaction method and an intelligent interaction system.
Background
With the continuous development of artificial intelligence technology and rising expectations for interaction experience, intelligent interaction is gradually beginning to replace some traditional human-computer interaction modes and has become a research hotspot. However, existing intelligent interaction approaches can only perform a simple semantic analysis of the current request information to obtain approximate intention information and then determine the response information according to that intention information alone. In other words, the current request information available for semantic analysis is limited to the standard questions stored in a knowledge base; such interaction is rigid and the user experience is poor. Moreover, even when intention information corresponding to the current request information is acquired, it does not necessarily represent the user's real idea. For example, in a telephone customer-service scenario for urging payment of a credit card bill, if the current request information matches the standard question "I will pay immediately", the acquired intention information may be "ready to repay". But if the user's prior credit record is poor, the user's real intention is likely that payment still will not be made. In that case, slowing the frequency and strength of payment reminders purely on the basis of the "ready to repay" intention information will not achieve a good collection effect. Therefore, responding only according to the acquired intention information makes the response mode too uniform and cannot achieve a good interaction effect.
Disclosure of Invention
In view of this, embodiments of the present invention provide an intelligent interaction method and system that address the problem that existing intelligent interaction methods respond only according to the acquired intention information, making the response mode too uniform and the interaction effect poor.
An intelligent interaction method provided by an embodiment of the present invention includes:
acquiring user static information and current request information;
performing intention analysis on the current request information to obtain intention information corresponding to the current request information;
acquiring interactive background information corresponding to the user static information; and
acquiring response information according to the intention information and the interaction background information and sending it to the user.
An intelligent interactive system provided in an embodiment of the present invention includes:
the interaction module is configured to execute an interaction process and acquire user static information and current request information;
the knowledge base is configured to store intention information, interaction background information and response information;
the intention analysis module is configured to perform intention analysis on the current request information acquired by the interaction module and acquire intention information corresponding to the current request information from the knowledge base;
the background acquisition module is configured to acquire interaction background information corresponding to the user static information from the knowledge base; and
the response decision module is configured to acquire corresponding response information from the knowledge base according to the intention information and the interaction background information, and to send the response information to the user through the interaction module.
According to the intelligent interaction method and system provided by embodiments of the invention, in addition to the intention information corresponding to the current request information, the interaction background information corresponding to the user static information is obtained, and the two are combined to determine the corresponding response information. Because the interaction background information corresponds to the user's static information, combining semantic-level intention information with the interaction background information at the level of the user static information allows the user's real idea to be judged more accurately, and the resulting response information differs according to the static information of different users. More intelligent and diversified response modes are thereby achieved, and the intelligent interaction effect is improved.
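For concreteness, the four steps of the method can be sketched as a single function. This is a minimal illustration in Python, not part of the patent; the helper names (analyse_intention, get_background, decide_response) are placeholders for the steps described above.

def intelligent_interaction_round(user_static_info, current_request,
                                  analyse_intention, get_background, decide_response):
    # Step 101: user static information and current request information are the inputs.
    intention = analyse_intention(current_request)        # step 102: intention analysis
    background = get_background(user_static_info)         # step 103: interaction background
    response = decide_response(intention, background)     # step 104: decide the response
    return response                                       # response information sent to the user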
Drawings
Fig. 1 is a schematic flow chart of an intelligent interaction method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart illustrating the process of obtaining intention information in an intelligent interaction method according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart illustrating a process of acquiring interaction background information according to user static information in an intelligent interaction method according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart illustrating a process of acquiring corresponding response information according to intention information and interaction background information corresponding to user static information in the intelligent interaction method according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an intelligent interactive system according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an intelligent interactive system according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of an intelligent interaction method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step 101: and acquiring the static information and the current request information of the user.
The current request information is a message which is sent by the user and contains the intention of the user on the semantic level, and the message can be in a text or voice format. And generating corresponding response information according to the current request information and returning the response information to the user to complete the intelligent interaction process of one level. However, the current request information does not necessarily represent the real intention of the user, so that the static information of the user needs to be acquired to assist in judging the real intention of the user and give more reasonable response information.
The user static information is some static information related to the user, and may be obtained through user input, for example, through a multi-level interactive process, or may be directly obtained through a third party, for example, directly obtained from user data stored in a bank. For understanding, taking the business scenario of credit card payment as an example, the content of the user static information can be shown in the following table one:
Table 1
[Table 1 is provided as an image in the original publication; it groups the user static information into the six static information categories described below.]
As shown in Table 1, the user static information is divided into six static information categories: credit card business attribute, user identity information, credit card type, current account period, total amount owed, and current repayment amount. The static information may already be classified when it is obtained, or the classification may be performed after the user static information is obtained. The number of static information categories can also be adjusted for the specific business scenario, and the invention does not limit the specific form of the classification. The obtained user static information is only raw data related to the user's credit record; in order to obtain the credit standing it reflects, the interaction background information corresponding to the user static information needs to be acquired as the basis for subsequently determining reasonable response information.
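A minimal sketch (Python) of how user static information with the six categories of Table 1 might be represented; the field names and example values are illustrative assumptions, since the table image is not reproduced here.

from dataclasses import dataclass

@dataclass
class UserStaticInfo:
    credit_card_business_attribute: str   # e.g. "credit card arrears"
    user_identity_info: str
    credit_card_type: str
    current_account_period: str           # e.g. "M2"
    total_amount_owed: float
    current_repayment_amount: float

# Illustrative example only:
example_user = UserStaticInfo("credit card arrears", "user 001", "standard card",
                              "M2", 5000.0, 1000.0)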
Step 102: and analyzing the intention of the current request information to obtain intention information corresponding to the current request information.
The intention information corresponds to the intention reflected by the semantic level of the current request information, but does not necessarily represent the real idea of the user, so that the corresponding response information needs to be comprehensively determined by subsequently combining the interaction background information corresponding to the static information of the user.
The intention information can be acquired by text analysis. Specifically, the text content of the currently requested information may be first matched with a plurality of preset semantic templates to determine a matched semantic template, and the matching process may be implemented by means of text similarity calculation. As shown in fig. 2, the manner of acquiring the intention information may specifically include the following steps:
step 1021: and performing similarity calculation between the text content of the current request information and a plurality of preset semantic templates, and then taking the semantic template with the highest similarity as a matched semantic template.
The current request information sent by the user is not a standard question but a modified form of the standard question, namely an expanded question. Therefore, for intelligent semantic recognition, there is a need in the knowledge base for an expanded query of standard queries that is slightly different from the standard query expression but expresses the same meaning. Therefore, in an embodiment of the present invention, a semantic template may be a set of one or more abstract semantic expressions representing a certain semantic content, and is generated by a developer according to a predetermined rule in combination with the semantic content, that is, a semantic template may describe statements of multiple different expression modes of the corresponding semantic content, so as to cope with multiple possible variations of the text content of the currently requested information. Therefore, the text content of the current request information is matched with the preset semantic template, and the limitation of identifying the user information by using the standard question which can only describe one expression mode is avoided.
Each abstract semantic expression may mainly include semantic component words and semantic rule words. The semantic component words stand for semantic components, which can express a wide variety of specific semantics when filled with corresponding values (i.e., content).
The semantic component words of an abstract semantic expression may include:
[concept]: a word or phrase representing a subject or object component.
For example: "color ring" in "how to activate the color ring".
[action]: a word or phrase representing an action component.
For example: "handle" in "how to handle a credit card".
[attribute]: a word or phrase representing an attribute component.
For example: "color" in "which colors does the iPhone have".
[adjective]: a word or phrase representing a modifier component.
For example: "cheap" in "which brand of refrigerator is cheap".
Some examples of major abstract semantic categories are:
Concept explanation: what is [concept]
Attribute constitution: what is the [attribute] of [concept]
Behavior mode: how does [concept] [action]
Behavior place: where does [concept] [action]
Behavior reason: why does [concept] [action]
Behavior prediction: will [concept] [action] or not
Behavior judgment: does [concept] have [attribute] or not
Attribute status: whether the [attribute] of [concept] is [adjective]
Attribute judgment: whether [concept] is [attribute]
Attribute reason: why the [attribute] of [concept] is so [adjective]
Concept comparison: what is the difference between [concept1] and [concept2]
Attribute comparison: what is the difference between the [attribute] of [concept1] and the [attribute] of [concept2]
The components of a question at the abstract semantic level can generally be judged by part-of-speech tagging: the part of speech corresponding to [concept] is a noun, the part of speech corresponding to [action] is a verb, the part of speech corresponding to [attribute] is a noun, and the part of speech corresponding to [adjective] is an adjective.
Taking the abstract semantic category "behavior mode", whose pattern is "how does [concept] [action]", as an example, the abstract semantic set of this category may include a plurality of abstract semantic expressions:
Abstract semantic category: behavior mode
Abstract semantic expressions:
a. [concept] [need|should]? how [then]? [proceed]? [action]
b. {[concept] ~ [action]}
c. [concept] [action] [method|manner|step]?
d. [what|whether there is] [by|using] [concept] [action] [method]?
e. how to [action] [concept]
The abstract semantic expressions a to e above all describe the abstract semantic category "behavior mode". The semantic symbol "|" represents an "or" relationship, and the semantic symbol "?" indicates that the preceding component is optional.
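As an illustration of how such templates could be handled, the following Python sketch compiles a simplified template written with the "|" and "?" symbols above into a regular expression and extracts the component words from a matching sentence. Regex-based matching is an implementation assumption made only for this illustration; the patent itself matches templates by similarity calculation (step 1021).

import re

def compile_template(template: str) -> "re.Pattern":
    # Turn e.g. "how to [action] [concept]" into a regular expression.
    parts = []
    for token in template.split():
        optional = token.endswith("?")
        token = token.rstrip("?")
        if token.startswith("[") and token.endswith("]"):
            inner = token[1:-1]
            if inner in {"concept", "action", "attribute", "adjective"}:
                part = r"(?P<%s>\w+)" % inner      # semantic component slot
            else:
                part = "(?:%s)" % inner            # semantic rule words, e.g. "need|should"
        else:
            part = re.escape(token)                # literal word
        if optional:
            part += "?"                            # "?" marks an optional component
        parts.append(part)
    return re.compile(r"\s*".join(parts))

template = compile_template("how to [action] [concept]")
match = template.search("how to activate ringback")
if match:
    print(match.group("action"), match.group("concept"))   # activate ringback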
It should be understood that, although some examples of semantic component words, semantic rule words, and semantic symbols are given above, the specific content and part of speech of the semantic component words, the specific content and part of speech of the semantic rule words, and the definition and collocation of the semantic symbols may be preset by a developer according to the specific interaction service scenario applied by the intelligent interaction method, which is not limited in the present invention.
In an embodiment of the present invention, the similarity calculation used to determine the matched semantic template from the text content of the current request information may adopt one or more of the following methods: edit distance, n-gram, Jaro-Winkler, and Soundex. In a further embodiment, once the semantic component words and semantic rule words in the text content of the current request information are identified, the semantic component words and semantic rule words contained in the current request information and in the semantic templates may also be converted into simplified text strings to improve the efficiency of the semantic similarity calculation.
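A minimal sketch of the similarity step using one of the measures named above (edit distance, normalised to a score in [0, 1]); the other measures would simply be alternative scoring functions. This is illustrative only; the patent leaves the choice of measure open.

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def similarity(a: str, b: str) -> float:
    # 1.0 means identical strings, 0.0 means completely different.
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def best_matching_template(request_text: str, templates: list) -> str:
    # Step 1021: pick the preset semantic template with the highest similarity.
    return max(templates, key=lambda t: similarity(request_text, t))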
In an embodiment of the present invention, as described above, the semantic template may be composed of semantic component words and semantic rule words, and the semantic component words and the semantic rule words are related to parts of speech of the words in the semantic template and grammatical relations between the words, so the similarity calculation process may specifically be: the method comprises the steps of firstly identifying words, parts of speech and grammatical relations of the words in a current request information text, then identifying semantic component words and semantic rule words according to the parts of speech and the grammatical relations of the words, and then introducing the identified semantic component words and semantic rule words into a vector space model to calculate a plurality of similarities between the text content of the current request information and a plurality of preset semantic templates. In an embodiment of the present invention, words, parts of speech of the words, and grammatical relations among the words in the text content of the currently requested information may be identified by one or more of the following word segmentation methods: hidden markov model method, forward maximum matching method, reverse maximum matching method and named entity recognition method.
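The vector space model step can similarly be sketched as a cosine similarity over the identified words; this assumes the text has already been segmented and the semantic component and rule words extracted, and is an illustrative simplification rather than the exact computation used in the embodiment.

import math
from collections import Counter

def cosine_similarity(words_a, words_b):
    # Bag-of-words cosine similarity between two lists of identified words.
    va, vb = Counter(words_a), Counter(words_b)
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# e.g. comparing the words of a request against those of a template expansion
print(cosine_similarity(["credit", "card", "handle"], ["credit", "card", "apply"]))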
In an embodiment of the present invention, as described above, a semantic template may be a set of several abstract semantic expressions representing a certain semantic content. In that case one semantic template can describe statements of many different ways of expressing the corresponding semantic content, and so corresponds to multiple expanded questions of the same standard question. Therefore, when calculating the semantic similarity between the text content of the current request information and the preset semantic templates, the similarity between that text content and at least one expanded question derived from each of the preset semantic templates is calculated, and the semantic template corresponding to the expanded question with the highest similarity is taken as the matched semantic template. These expanded questions may be obtained from the semantic component words and/or semantic rule words and/or semantic symbols contained in the semantic template.
Step 1022: and acquiring intention information corresponding to the matched semantic template.
After a semantic template matched with the text content of the current request information is found, intention information corresponding to the matched semantic template can be obtained, wherein the corresponding relation between the semantic template and the intention information is pre-established, and the same intention information can correspond to one or more semantic templates. As shown in table two below.
Table 2
[Table 2 is provided as an image in the original publication; it lists example semantic templates and the intention information each corresponds to.]
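A minimal sketch of the pre-established correspondence between semantic templates and intention information (Table 2), in which several templates may map to the same intention; the concrete entries below are illustrative assumptions, since the table image is not reproduced here.

TEMPLATE_TO_INTENTION = {
    "how to [action] [concept]":           "ask how to handle a business",
    "[concept] [action] [method|manner]?": "ask how to handle a business",
    "I will pay immediately":              "ready to repay",
    "I have no money to pay":              "no money to repay",
}

def intention_of(matched_template: str) -> str:
    # Step 1022: look up the intention information of the matched template.
    return TEMPLATE_TO_INTENTION[matched_template]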
In an embodiment of the present invention, if the current request information is a voice message and a text similarity calculation mode is required to obtain a matched semantic template, the current request information needs to be converted into a text message first.
Step 103: and acquiring interactive background information corresponding to the user static information.
What the interaction context information reflects is information related to the static attributes of the user itself, independent of the semantic content of the currently requested information. The interactive background information can be directly obtained by a third party or obtained in real time according to the static information of the user.
Still taking a business scenario of credit card payment promotion as an example, the user static information is only original data related to the credit record of the user, and in order to obtain the credit degree of the user reflected by the original data, the interactive background information corresponding to the user static information needs to be obtained as a basis for subsequently determining reasonable response information. The interactive context information that is available from the user static information shown in table one above can be shown as table three below:
watch III
The interactive background information shown in the third table includes four interactive background items, namely credit card business attribute, current account period, debt historical state and repayment historical state, wherein specific content of each interactive background item includes a plurality of interactive background contents, for example, the interactive background item "repayment historical state" may include three interactive background contents, namely "completely unreleased", "partially repayed" and "interval repayment", and the interactive background item "credit card business attribute" may include other interactive background contents, such as "new credit card application", "credit card limit inquiry", "credit card repayment" and "credit card cancellation", in addition to the interactive background content of "credit card debt". However, the number and content of the specific interactive background items and the interactive background contents can be adjusted according to the specific service scenario, which is not limited in the present invention.
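A minimal sketch of the interaction background structure just described: each interaction background item has a set of possible interaction background contents, and one user's interaction background selects one content per item. Item and content names follow the text; the account-period values are illustrative assumptions.

INTERACTION_BACKGROUND_ITEMS = {
    "credit card business attribute": {
        "credit card arrears", "new credit card application",
        "credit card limit inquiry", "credit card repayment",
        "credit card cancellation",
    },
    "current account period": {"M1", "M2"},
    "debt history state": {"never in debt", "once in debt"},
    "repayment history state": {
        "completely unpaid", "partially repaid", "intermittent repayment",
    },
}

# One user's interaction background, e.g.:
example_background = {
    "credit card business attribute": "credit card arrears",
    "current account period": "M2",
    "debt history state": "never in debt",
    "repayment history state": "partially repaid",
}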
In an embodiment of the present invention, the interactive background information is not directly obtained according to the user static information, but the user static information needs to be classified first, and then the interactive background information corresponding to the user static information is obtained based on the classification result. Specifically, as shown in fig. 3, the step of acquiring the interaction background information may include the following steps:
step 1031: the user static information is divided into at least one static information category.
For the above credit card collection scenario, the division into static information classifications is the process of obtaining the content shown in Table 1 above. The specific division can be realized with big data and a classification model, and the details are not repeated here.
Step 1032: and determining all interactive background contents matched with the user static information according to the static information classification included in the user static information, wherein each interactive background content is determined according to at least one static information classification.
Specifically, when all the static information classifications of the user static information are acquired, some of the static information classifications can be directly matched with the interactive background content of one interactive background item, that is, the interactive background content can be directly determined only according to one static information classification. For example, the interactive background content "credit card arrears" of the interactive background item "credit card business attribute" can be directly determined by classifying the specific content "credit card arrears" of the "credit card business attribute" according to the static information. Some interactive background contents can be determined according to a plurality of static information classifications, for example, the interactive background contents of the interactive background item "repayment history state" can be determined only by integrating the contents of three static information classifications of "current account period", "total amount of arrears", and "current repayment amount".
In an embodiment of the present invention, obtaining the interaction background content from the user static information may be implemented through a pre-established correspondence between the two, and this correspondence may be generated by classification training on a big data platform using a training set provided by business experts. As the actual business characteristics change, a new training set can be submitted for classification training to form a new correspondence.
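A minimal rule-based sketch of step 1032. In the embodiment the correspondence is obtained by classification training on a big data platform; the hard-coded rules below are illustrative assumptions only, showing one content matched directly from a single classification and one derived from several classifications together.

def repayment_history_state(static_info) -> str:
    # Derived from several classifications together (current account period,
    # total amount owed, current repayment amount). Purely illustrative rules;
    # a real rule would also look at repayment across earlier account periods.
    paid = static_info["current repayment amount"]
    owed = static_info["total amount owed"]
    if paid == 0:
        return "completely unpaid"
    if paid < owed:
        return "partially repaid"
    return "intermittent repayment"

def interaction_background(static_info) -> dict:
    return {
        # matched directly from a single static information classification
        "credit card business attribute": static_info["credit card business attribute"],
        "current account period": static_info["current account period"],
        # determined from several static information classifications
        "repayment history state": repayment_history_state(static_info),
    }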
It should be understood that the specific content of the interaction context information is related to the specific content of the user static information, and the user static information may contain different data contents according to the applicable application scenario. The invention does not limit the specific content of the user static information and the corresponding interactive background information.
Step 104: and acquiring and sending response information to the user according to the intention information and the interactive background information.
The interaction background information and the intention information are combined to judge the possible real idea of the user more accurately. The response information thus obtained will also differ according to the different user static information possessed by different users. Therefore, more diversified response modes and better intelligent interaction effect are realized.
Fig. 4 is a schematic flow chart illustrating a process of acquiring corresponding response information according to intention information and interaction background information corresponding to user static information in the intelligent interaction method according to an embodiment of the present invention. As shown in fig. 4, the process of acquiring the response information includes the following steps:
step 1041: and acquiring a corresponding response identifier according to the intention information and the interactive background information.
In an embodiment of the present invention, the response identifier may be specifically a response mood identifier, and the response mood identifier may be divided into at least two categories according to mood from light to heavy. Still taking the above-mentioned business scenario of credit card payment promotion as an example, the specific content of the response tone mark can be shown in the following table four:
watch four
Figure BDA0001168452460000121
As shown in Table 4, for the same intention information "no money to repay", the response mood identifiers determined under different interaction background information are different. Specifically, when the interaction background information determined from the user static information is "M2 period, never in debt, partially repaid", it indicates that the current user has a good credit record and has already repaid part of the bill in the M1 period, and the failure to repay now may be due to other factors; the determined response mood identifier may then be "friendly reminder". When the interaction background information determined from the user static information is "M2 period, once in debt, completely unpaid", the user's credit record is poor and nothing was repaid in the previous M1 period; the corresponding response mood identifier may then be "stern reminder".
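A minimal sketch of step 1041 as a pre-established lookup from (intention information, interaction background information) to a response mood identifier. The two entries mirror the Table 4 examples in the paragraph above; in the embodiment this correspondence would be built with big data classification and clustering techniques.

RESPONSE_MOOD = {
    ("no money to repay", ("M2", "never in debt", "partially repaid")): "friendly reminder",
    ("no money to repay", ("M2", "once in debt", "completely unpaid")): "stern reminder",
}

def response_identifier(intention: str, background: tuple) -> str:
    return RESPONSE_MOOD[(intention, background)]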
Step 1042: and acquiring corresponding response information according to the intention information and the response identifier.
After the response identifier is obtained, the response information which needs to be finally replied to the user can be determined according to the intention information and the response identifier. In an embodiment of the present invention, when the interaction mode with the user is implemented based on voice and the response message is in a text form, the response message needs to be converted into a voice message and sent to the user.
Taking the response tone identifiers shown in the above table as an example, the specific content of the finally determined response message may be shown in the following table five:
watch five
Figure BDA0001168452460000131
Introducing the response identifier to determine the response information thus realizes a more flexible response mode and, compared with determining the response information from the intention information alone, achieves a more intelligent interaction effect.
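A minimal sketch of step 1042: the final response information looked up from the intention information and the response mood identifier. The reply texts are illustrative paraphrases of the collection scenario, not quotations from Table 5, and the optional text-to-speech step is only indicated by a comment.

RESPONSE_INFO = {
    ("no money to repay", "friendly reminder"):
        "We kindly remind you that part of your bill is still outstanding; "
        "please repay it when convenient.",
    ("no money to repay", "stern reminder"):
        "Your account is overdue. You must remember to repay your arrears "
        "within three days.",
}

def response_information(intention: str, identifier: str) -> str:
    text = RESPONSE_INFO[(intention, identifier)]
    # If the interaction is voice-based, the text would additionally be
    # converted into a voice message before being sent to the user.
    return text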
In an embodiment of the present invention, the response information may also be obtained directly from the response identifier without referring to the intention information. For example, for the intention information "no money to repay", when the interaction background information shows that the current user has a good credit record, the determined response identifier may specifically be "friendly repayment reminder" rather than simply "friendly reminder". The response information "you must remember to repay your arrears within three days" can then be determined directly from the response identifier without referring to the intention information "no money to repay". In this case the specific content of the response identifier is more diversified and corresponds to the intention information at a semantic level rather than being a simple quantitative variable, so only the response identifier needs to be consulted when determining the corresponding response information.
It should be understood that, in order to complete the above process of acquiring the response identifier and the response information, the corresponding relationship between the intention information and the interaction background information and the response identifier, the corresponding relationship between the intention information and the response identifier and the response information, and the corresponding relationship between the response identifier and the response information may be established in advance. The specific construction form of the correspondence relationship can be realized by big data classification and clustering technology, and is not described herein again.
It should be understood that although in the above embodiments the response identifier is a response mood identifier, in other application scenarios the response identifier may correspond to other content related to the specific interaction service. For example, the response identifier may further include: a response pitch identifier, divided into at least two categories from low pitch to high pitch; and/or a response speech-rate identifier, divided into at least two categories from slow to fast; and/or a response volume identifier, divided into at least two categories from low volume to high volume. The specific content of the response identifier is not limited by the present invention.
In addition, it should be understood that the intelligent interaction method provided by the embodiments of the present invention actually implements a highly intelligent interaction strategy: each round of interaction can be regarded as a process of acquiring intention information from the current request information, obtaining the corresponding response information by combining the intention information with the interaction background information, and feeding the response information back to the user. After receiving the response information, the user sends new current request information, a new round of acquiring and feeding back response information begins, and the cycle repeats until the whole intelligent interaction process is completed. It should also be understood that the intelligent interaction method can be applied to different service interaction scenarios; according to the specific needs of a scenario, developers may adjust the order of the steps, omit some of them, or concretize the specific content of concepts such as the semantic templates, intention information, user static information, interaction background information, response identifiers, and response information, which the present invention does not limit.
Fig. 5 is a schematic structural diagram of an intelligent interactive system according to an embodiment of the present invention. As shown in fig. 5, the intelligent interactive system 50 includes: an interaction module 51, a knowledge base 52, an intention analysis module 53, a context acquisition module 54 and a response decision module 55. The knowledge base 52 stores therein intention information, interaction context information, and response information. The interactive module 51 is used for executing an interactive process to obtain the user static information and the current request information. The intention analysis module 53 performs intention analysis on the current request information acquired by the interaction module 51, and acquires intention information corresponding to the current request information from the knowledge base 52. The background obtaining module 54 obtains the interactive background information corresponding to the user static information from the knowledge base 52. The response decision module 55 obtains corresponding response information from the knowledge base 52 according to the intention information obtained by the intention analysis module 53 and the interaction background information obtained by the background obtaining module 54, and sends the response information to the user through the interaction module 51.
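A minimal sketch of how the modules of Fig. 5 could be wired together. Only the module roles come from the text; the class name, method names, and the knowledge-base interface are illustrative assumptions.

class IntelligentInteractiveSystem:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base                                  # knowledge base 52

    def handle(self, user_static_info, current_request):
        # interaction module 51: receives the static information and the request
        intention = self.analyse_intention(current_request)       # intention analysis module 53
        background = self.get_background(user_static_info)        # background acquisition module 54
        response = self.decide_response(intention, background)    # response decision module 55
        return response                                           # returned via the interaction module 51

    def analyse_intention(self, request_text):
        template = self.kb.best_matching_template(request_text)
        return self.kb.intention_of(template)

    def get_background(self, static_info):
        return self.kb.background_of(static_info)

    def decide_response(self, intention, background):
        identifier = self.kb.response_identifier(intention, background)
        return self.kb.response_information(intention, identifier)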
In an embodiment of the present invention, the intention information obtaining manner of the intention analyzing module 53 can be realized by a text analyzing manner. Specifically, the text content of the currently requested information may be first matched with a plurality of preset semantic templates to determine a matched semantic template, and the matching process may be implemented by means of text similarity calculation.
Therefore, the intelligent interactive system 50 provided by the embodiment of the present invention combines the intention information of the semantic layer and the interactive background information of the user static information layer, so as to more accurately judge the real idea of the user, and the obtained response information is also different according to the different user static information possessed by different users. Therefore, more intelligent and diversified response modes are realized, and the intelligent interaction effect is improved.
In an embodiment of the present invention, the knowledge base 52 may store pre-established correspondence between the intention information and the interaction background information and the response identifier, and store pre-established correspondence between the intention information and the response identifier and the response information. At this time, as shown in fig. 6, the answer decision module 55 may include: a response identification acquisition unit 551 and a control unit 552. The response identifier obtaining unit 551 obtains a corresponding response identifier according to the intention information and the interaction background information based on the knowledge base 52; the control unit 552 acquires the corresponding response information from the intention information and the response identification based on the knowledge base 52.
In another embodiment of the present invention, the knowledge base 52 may store a correspondence relationship between pre-established intention information and interaction background information and the response identifier, and store a correspondence relationship between pre-established response identifier and response information. At this time, the response identifier obtaining unit 551 in the response decision module 55 obtains a corresponding response identifier according to the intention information and the interaction background information based on the knowledge base 52; the control unit 552 acquires the corresponding response information from the response identification based on the knowledge base 52. Therefore, the response identification is introduced to determine the response information, a more flexible response mode is realized, and compared with the method of determining the response information only according to the intention information, a more intelligent interaction effect can be realized.
In an embodiment of the present invention, the correspondence between the intention information and the interaction background information and the response identifier, and/or the correspondence between the intention information and the response identifier and the response information, and/or the correspondence between the response identifier and the response information are established in advance by a big data classification and clustering technique.
In an embodiment of the present invention, the response identifier may correspond to an attribute associated with the specific interactive service content. For example, the response identifier may specifically include: the answer tone mark is divided into at least two categories according to tone from light to heavy; and/or, response tone marks which are divided into at least two categories according to tone from low to high; and/or, response speech rate identification, the response speech rate identification is divided into at least two categories according to the speech rate from slow to fast; and/or, response volume marks which are divided into at least two categories according to the volume from small to large. However, the specific content of the response identifier is not limited in the present invention.
In an embodiment of the present invention, in order to obtain the interactive background information reflected behind the user static information, the user static information needs to be classified first, and at this time, the background obtaining module 54 may be further configured to classify the user static information, and obtain the interactive background information corresponding to the user static information from the knowledge base 52 based on the classification result.
In an embodiment of the present invention, the interaction context information may include at least one interaction context item, wherein each interaction context item includes at least one interaction context content.
In an embodiment of the invention, as shown in fig. 6, the background obtaining module 54 may include: a classification unit 541 and an acquisition unit 542. The classifying unit 541 is configured to divide the user static information into at least one static information classification, and the obtaining unit 542 determines, according to the static information classification included in the user static information, all interactive background contents matched with the user static information, where each interactive background content is determined according to the at least one static information classification. Specifically, when the obtaining unit 542 obtains all the static information classifications of the static information of the user, some static information classifications may be directly matched to the interactive background content of one interactive background item, that is, the interactive background content may be directly determined only according to one static information classification, and some interactive background contents may be determined only according to a plurality of static information classifications.
In an embodiment of the present invention, the user static information may be divided into one or more of the following static information categories: credit card business attribute, user identity information, credit card type, current account period, total amount owed, and current repayment amount; and/or the interaction background information may include one or more of the following interaction background items: credit card business attribute, current account period, debt history state, and repayment history state. The credit card business attribute may include one or more of the following interaction background contents: credit card arrears, new credit card application, credit card limit inquiry, credit card repayment, and credit card cancellation; the debt history state may include the following two interaction background contents: never in debt and once in debt; and the repayment history state may include one or more of the following interaction background contents: completely unpaid, partially repaid, and intermittent repayment. It should be understood that the specific content of the interaction background information is related to the specific content of the user static information, and the user static information may contain different data contents depending on the applicable application scenario. The present invention does not limit the specific content of the user static information and the corresponding interaction background information.
In an embodiment of the present invention, the interaction module 51 may obtain the user static information by receiving user input or by interacting with a third party, for example, directly from user data stored by a bank.
In an embodiment of the present invention, as shown in fig. 6, the intention analysis module 53 may include: a matching unit 531 and an intent determination unit 532. The matching unit 531 is configured to match the text content of the current request information with a plurality of preset semantic templates to determine a matched semantic template, and the intention determining unit 532 is configured to obtain intention information corresponding to the matched semantic template, where a correspondence between the preset semantic template and the intention information is pre-established and stored in the knowledge base 52, and the same intention information corresponds to one or more semantic templates. The current request information sent by the user is not a standard question but a modified form of the standard question, namely an expanded question. Therefore, for intelligent semantic recognition, there is a need in the knowledge base for an expanded query of standard queries that is slightly different from the standard query expression but expresses the same meaning. In an embodiment of the present invention, the semantic template may be a set of one or more abstract semantic expressions representing a certain semantic content, and is generated by a developer according to a predetermined rule in combination with the semantic content, that is, a semantic template may describe statements of multiple different expression modes of the corresponding semantic content, so as to cope with multiple possible variations of text content of the current request information. Therefore, the text content of the current request information is matched with the preset semantic template, and the limitation of identifying the user information by using the standard question which can only describe one expression mode is avoided.
In an embodiment of the present invention, the process of determining the matched semantic template by the matching unit 531 according to the text content of the current request information may be implemented by a similarity calculation process, at this time, the matching unit 531 calculates the similarity between the text content of the current request information and a plurality of preset semantic templates, and uses the semantic template with the highest similarity as the matched semantic template.
In an embodiment of the present invention, the similarity calculation performed by the matching unit 531 may be implemented with one or more of the following methods: edit distance, n-gram, Jaro-Winkler, and Soundex. In a further embodiment, once the semantic component words and semantic rule words in the text content of the current request information are identified, the semantic component words and semantic rule words contained in the current request information and in the semantic templates may also be converted into simplified text strings to improve the efficiency of the semantic similarity calculation.
In an embodiment of the present invention, the current request information is a voice message, and the intention analysis module 53 needs to perform intention analysis on text content; in this case the interaction module 51 includes a text conversion unit configured to convert the current request information into a text message.
In an embodiment of the present invention, when the interaction with the user is voice-based and the obtained response information is in text form, the interaction module 51 may include a voice conversion unit configured to convert the response information into a voice message and send it to the user.
It should be understood that each module or unit described in the intelligent interactive system 50 provided by the above embodiments corresponds to one of the method steps described above. Thus, the operations and features described in the foregoing method steps are also applicable to the intelligent interactive system 50 and the corresponding modules and units included therein, and repeated contents are not repeated herein.
The teachings of the present invention can also be implemented as a computer program product on a computer-readable storage medium, comprising computer program code which, when executed by a processor, enables the processor to implement the intelligent interaction method according to the embodiments of the present invention. The computer storage medium may be any tangible medium, such as a floppy disk, a CD-ROM, a DVD, a hard drive, or even a network medium.
It should be understood that although one implementation form of the embodiments of the present invention described above may be a computer program product, the method or apparatus of the embodiments of the present invention may be implemented in software, hardware, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. It will be appreciated by those of ordinary skill in the art that the methods and apparatus described above may be implemented using computer executable instructions and/or embodied in processor control code, such code provided, for example, on a carrier medium such as a disk, CD or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The methods and apparatus of the present invention may be implemented in hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, or in software for execution by various types of processors, or in a combination of hardware circuitry and software, such as firmware.
It should be understood that although several modules or units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to exemplary embodiments of the invention, the features and functions of two or more modules/units described above may be implemented in one module/unit, whereas the features and functions of one module/unit described above may be further divided into implementations by a plurality of modules/units. Furthermore, some of the modules/units described above may be omitted in some application scenarios.
It is to be understood that the description has described only some of the key, not necessarily essential, techniques and features, and may not have described features that could be implemented by those skilled in the art, in order not to obscure the embodiments of the invention.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and the like that are within the spirit and principle of the present invention are included in the present invention.

Claims (22)

1. An intelligent interaction method, comprising:
acquiring user static information and current request information;
performing intention analysis on the current request information to obtain intention information corresponding to the current request information;
acquiring interactive background information corresponding to the user static information;
acquiring a corresponding response identifier according to the intention information and the interactive background information;
acquiring corresponding response information according to the response identification;
sending the response information to the user,
wherein the correspondence from the intention information and the interaction background information to the response identifier is pre-established, and the correspondence from the response identifier to the response information is pre-established;
the interaction background information comprises at least one interaction background item, wherein each interaction background item comprises at least one interaction background content;
wherein the user static information is divided into one or more of the following static information categories: credit card business attribute, user identity information, credit card type, current account period, total amount owed, and current repayment amount; and/or,
the interaction background information comprises one or more of the following interaction background items: credit card business attribute, current account period, debt history state, and repayment history state;
wherein the credit card business attribute comprises one or more of the following interaction background contents: credit card arrears, new credit card application, credit card limit inquiry, credit card repayment, and credit card cancellation;
the debt history state comprises the following two interaction background contents: never in debt and once in debt;
the repayment history state comprises one or more of the following interaction background contents: completely unpaid, partially repaid, and intermittent repayment.
2. The method of claim 1, wherein the obtaining the corresponding response information according to the response identifier comprises:
acquiring corresponding response information according to the intention information and the response identification;
and the corresponding relation between the intention information and the response identification and the response information is pre-established.
3. The method according to claim 1 or 2, wherein the response identification comprises:
a response mood identifier, divided into at least two categories according to mood from light to heavy; and/or,
a response pitch identifier, divided into at least two categories according to pitch from low to high; and/or,
a response speech-rate identifier, divided into at least two categories according to speech rate from slow to fast; and/or,
a response volume identifier, divided into at least two categories according to volume from low to high.
4. The method according to claim 2, characterized in that the correspondence between the intention information and the interaction context information and the response identification, and/or the correspondence between the intention information and the response identification and the response information, and/or the correspondence between the response identification and the response information is pre-established by big data classification and clustering techniques.
5. The method of claim 1 or 2, wherein obtaining interaction context information corresponding to the user static information comprises:
classifying the user static information, and acquiring the interaction background information corresponding to the user static information based on a result of the classification.
6. The method according to claim 5, wherein the classifying the user static information and the acquiring the interaction background information corresponding to the user static information based on the result of the classification comprise:
dividing the user static information into at least one static information category; and
determining all interaction background contents matched with the user static information according to the static information categories included in the user static information, wherein each interaction background content is determined according to at least one static information category.
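A minimal sketch of this category-to-content mapping, assuming hypothetical rules in which the arrears history state is derived from the total amount owed and the repayment history state from the amounts owed and paid together; field names and thresholds are illustrative only.

```python
def classify_static_info(static_info: dict) -> list:
    contents = []
    owed = static_info.get("total_amount_owed", 0)
    paid = static_info.get("current_amount_paid", 0)

    # Arrears history state: derived from the total amount owed.
    contents.append("previously_in_arrears" if owed > 0 else "never_in_arrears")

    # Repayment history state: derived from the amounts owed and paid together.
    if owed > 0 and paid == 0:
        contents.append("complete_non_payment")
    elif 0 < paid < owed:
        contents.append("partial_payment")
    return contents

print(classify_static_info({"total_amount_owed": 800, "current_amount_paid": 200}))
# -> ['previously_in_arrears', 'partial_payment']
```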
7. The method according to claim 1 or 2, wherein the user static information is obtained by user input or by a third party.
8. The method according to claim 1 or 2, wherein the performing the intention analysis on the current request information to obtain intention information corresponding to the current request information comprises:
matching the text content of the current request information with a plurality of preset semantic templates to determine a matched semantic template; and
acquiring intention information corresponding to the matched semantic template;
wherein a correspondence between the semantic templates and the intention information is pre-established, and the same intention information corresponds to one or more semantic templates.
9. The method of claim 8, wherein matching the text content of the current request information with a plurality of preset semantic templates to determine a matched semantic template comprises:
performing a similarity calculation between the text content of the current request information and the plurality of preset semantic templates; and
taking the semantic template with the highest similarity as the matched semantic template.
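Claims 8 and 9 leave the similarity measure open. The sketch below uses a simple word-overlap (Jaccard) score purely for illustration, with hypothetical templates and intention labels; any other similarity measure would fit the claim equally well.

```python
import re

# Hypothetical semantic templates mapped to intention information.
SEMANTIC_TEMPLATES = {
    "when is my credit card bill due": "query_repayment_date",
    "how do i apply for a new credit card": "apply_new_card",
    "what is my credit card limit": "query_credit_limit",
}

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two strings."""
    wa = set(re.findall(r"[a-z]+", a.lower()))
    wb = set(re.findall(r"[a-z]+", b.lower()))
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_intention(request_text: str) -> str:
    # Take the template with the highest similarity as the matched template.
    best_template = max(SEMANTIC_TEMPLATES, key=lambda t: jaccard(request_text, t))
    return SEMANTIC_TEMPLATES[best_template]

print(match_intention("When is the bill for my credit card due?"))  # -> query_repayment_date
```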
10. The method of claim 8, wherein the current request information is a voice message, and the method further comprises:
converting the current request information into a text message.
11. The method of claim 1 or 2, further comprising:
converting the response information into a voice message and sending the voice message to the user.
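Claims 10 and 11 describe the speech-to-text and text-to-speech conversions around the core pipeline. The placeholder functions below are hypothetical stand-ins for whatever ASR and TTS engines a deployment actually uses; neither the function names nor the fixed strings come from the claims.

```python
def speech_to_text(audio: bytes) -> str:
    """Hypothetical stand-in for a real speech recognizer (claim 10)."""
    return "when is my credit card bill due"

def text_to_speech(text: str) -> bytes:
    """Hypothetical stand-in for a real speech synthesizer (claim 11)."""
    return text.encode("utf-8")

def handle_voice_request(audio: bytes) -> bytes:
    request_text = speech_to_text(audio)
    # A real system would run the claim-1 pipeline here (intention analysis,
    # interaction background lookup, response identifier and response lookup).
    response_text = "Your current bill is due on the 5th."
    return text_to_speech(response_text)

print(handle_voice_request(b"\x00\x01"))
```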
12. An intelligent interaction system, comprising:
an interaction module configured to execute an interaction process and acquire user static information and current request information;
a knowledge base configured to store intention information, interaction background information, and response information, to store a pre-established correspondence from the intention information and the interaction background information to a response identifier, and to store a pre-established correspondence from the response identifier to the response information;
an intention analysis module configured to perform intention analysis on the current request information acquired by the interaction module and acquire intention information corresponding to the current request information from the knowledge base;
a background acquisition module configured to acquire interaction background information corresponding to the user static information from the knowledge base; and
a response decision module comprising:
a response identifier acquisition unit configured to acquire a corresponding response identifier according to the intention information and the interaction background information based on the knowledge base; and
a control unit configured to acquire corresponding response information according to the response identifier based on the knowledge base and send the response information to the user through the interaction module;
wherein the interaction background information comprises at least one interaction background item, and each interaction background item comprises at least one interaction background content;
wherein the user static information is divided into one or more of the following static information categories: credit card transaction attributes, user identity information, credit card type, current account date, total amount owed, and current amount paid; and/or,
the interaction background information comprises one or more of the following interaction background items: credit card service attribute, current account period, arrears history state, and repayment history state;
wherein the credit card service attribute comprises one or more of the following interaction background contents: credit card arrears, new credit card application, credit card limit inquiry, credit card repayment, and credit card cancellation;
the arrears history state comprises the following two interaction background contents: never in arrears and previously in arrears;
the repayment history state comprises one or more of the following interaction background contents: complete non-payment, partial payment, and intermittent payment.
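To make the module decomposition of claim 12 concrete, the sketch below arranges the knowledge base and the analysis/decision modules as small Python classes; the class names, method names, placeholder logic, and sample data are illustrative assumptions rather than the claimed implementation.

```python
class KnowledgeBase:
    """Holds the pre-established correspondences named in claim 12 (all sample data is hypothetical)."""
    def __init__(self, templates, response_id_map, response_info_map):
        self.templates = templates                  # semantic template -> intention information
        self.response_id_map = response_id_map      # (intention, background) -> response identifier
        self.response_info_map = response_info_map  # response identifier -> response information

class IntentionAnalysisModule:
    def __init__(self, kb):
        self.kb = kb

    def analyze(self, request_text):
        # Placeholder for the semantic-template matching of claims 19 and 20.
        words = set(request_text.lower().split())
        best = max(self.kb.templates, key=lambda t: len(words & set(t.split())))
        return self.kb.templates[best]

class BackgroundAcquisitionModule:
    def acquire(self, static_info):
        # Placeholder classification of user static information (claims 16 and 17).
        return "previously_in_arrears" if static_info.get("total_amount_owed", 0) > 0 else "never_in_arrears"

class ResponseDecisionModule:
    def __init__(self, kb):
        self.kb = kb

    def decide(self, intention, background):
        response_id = self.kb.response_id_map[(intention, background)]  # response identifier acquisition unit
        return self.kb.response_info_map[response_id]                   # control unit

kb = KnowledgeBase(
    templates={"when is my credit card bill due": "query_repayment_date"},
    response_id_map={("query_repayment_date", "previously_in_arrears"): "tone_serious",
                     ("query_repayment_date", "never_in_arrears"): "tone_light"},
    response_info_map={"tone_serious": "Please note the due date; late payment affects your credit record.",
                       "tone_light": "Your bill is due on the 5th."},
)
intention = IntentionAnalysisModule(kb).analyze("When is my bill due?")
background = BackgroundAcquisitionModule().acquire({"total_amount_owed": 500})
print(ResponseDecisionModule(kb).decide(intention, background))
```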
13. The system of claim 12, wherein the knowledge base is further configured to store a pre-established correspondence from the intention information and the response identifier to the response information; and
the control unit is configured to acquire the corresponding response information according to the intention information and the response identifier based on the knowledge base.
14. The system according to claim 12 or 13, wherein the response identifier comprises one or more of a response tone identifier, a response pitch identifier, a response speech-rate identifier, and a response volume identifier, wherein:
the response tone identifiers are divided into at least two categories according to tone, from light to heavy; and/or,
the response pitch identifiers are divided into at least two categories according to pitch, from low to high; and/or,
the response speech-rate identifiers are divided into at least two categories according to speech rate, from slow to fast; and/or,
the response volume identifiers are divided into at least two categories according to volume, from soft to loud.
15. The system according to claim 13, wherein the correspondence between the intention information and the interaction background information and the response identifier, and/or the correspondence between the intention information and the response identifier and the response information, and/or the correspondence between the response identifier and the response information, is pre-established by big data classification and clustering techniques.
16. The system according to claim 12 or 13, wherein the background acquisition module is further configured to classify the user static information and acquire the interaction background information corresponding to the user static information from the knowledge base based on a result of the classification.
17. The system of claim 16, wherein the background acquisition module comprises:
a classification unit configured to divide the user static information into at least one static information category; and
an acquisition unit configured to determine all interaction background contents matched with the user static information according to the static information categories included in the user static information, wherein each interaction background content is determined according to at least one static information category.
18. The system of claim 12 or 13, wherein the interaction module is further configured to obtain the user static information by receiving user input or by interacting with a third party.
19. The system of claim 12 or 13, wherein the intention analysis module comprises:
a matching unit configured to match the text content of the current request information with a plurality of preset semantic templates to determine a matched semantic template; and
an intention determining unit configured to acquire intention information corresponding to the matched semantic template;
wherein a correspondence between the preset semantic templates and the intention information is pre-established and stored in the knowledge base, and the same intention information corresponds to one or more semantic templates.
20. The system according to claim 19, wherein the matching unit is further configured to perform a similarity calculation between the text content of the current request information and the plurality of preset semantic templates, and to take the semantic template with the highest similarity as the matched semantic template.
21. The system of claim 19, wherein the current request information is a voice message, and the interaction module comprises:
a text conversion unit configured to convert the current request information into a text message.
22. The system of claim 12 or 13, wherein the interaction module comprises:
a voice conversion unit configured to convert the response information into a voice message and send the voice message to the user.
CN201611093525.XA 2016-12-01 2016-12-01 Intelligent interaction method and system Active CN106776936B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611093525.XA CN106776936B (en) 2016-12-01 2016-12-01 Intelligent interaction method and system
US15/390,538 US11106983B2 (en) 2016-12-01 2016-12-26 Intelligent interaction method and intelligent interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611093525.XA CN106776936B (en) 2016-12-01 2016-12-01 Intelligent interaction method and system

Publications (2)

Publication Number Publication Date
CN106776936A CN106776936A (en) 2017-05-31
CN106776936B true CN106776936B (en) 2020-02-18

Family

ID=58882693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611093525.XA Active CN106776936B (en) 2016-12-01 2016-12-01 Intelligent interaction method and system

Country Status (2)

Country Link
US (1) US11106983B2 (en)
CN (1) CN106776936B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107688667A (en) * 2017-09-30 2018-02-13 平安科技(深圳)有限公司 Intelligent robot client service method, electronic installation and computer-readable recording medium
US11113608B2 (en) 2017-10-30 2021-09-07 Accenture Global Solutions Limited Hybrid bot framework for enterprises
CN108320738B (en) * 2017-12-18 2021-03-02 上海科大讯飞信息科技有限公司 Voice data processing method and device, storage medium and electronic equipment
CN110085262A (en) * 2018-01-26 2019-08-02 上海智臻智能网络科技股份有限公司 Voice mood exchange method, computer equipment and computer readable storage medium
CN110085221A (en) * 2018-01-26 2019-08-02 上海智臻智能网络科技股份有限公司 Speech emotional exchange method, computer equipment and computer readable storage medium
CN110085220A (en) * 2018-01-26 2019-08-02 上海智臻智能网络科技股份有限公司 Intelligent interaction device
CN108197115B (en) 2018-01-26 2022-04-22 上海智臻智能网络科技股份有限公司 Intelligent interaction method and device, computer equipment and computer readable storage medium
CN110689393B (en) * 2018-07-06 2022-08-02 阿里巴巴集团控股有限公司 Man-machine interaction method, device, system and storage medium
CN109376847A (en) * 2018-08-31 2019-02-22 深圳壹账通智能科技有限公司 User's intension recognizing method, device, terminal and computer readable storage medium
CN109451188B (en) * 2018-11-29 2022-03-18 平安科技(深圳)有限公司 Method and device for differential self-help response, computer equipment and storage medium
CN109857843B (en) * 2018-12-25 2023-01-17 科大讯飞股份有限公司 Interaction method and system based on document
CN111611358A (en) * 2019-02-25 2020-09-01 北京嘀嘀无限科技发展有限公司 Information interaction method and device, electronic equipment and storage medium
CN111737670B (en) * 2019-03-25 2023-08-18 广州汽车集团股份有限公司 Method, system and vehicle-mounted multimedia device for multi-mode data collaborative man-machine interaction
CN110457454A (en) * 2019-07-12 2019-11-15 卓尔智联(武汉)研究院有限公司 A kind of dialogue method, server, conversational system and storage medium
CN111223485A (en) * 2019-12-19 2020-06-02 深圳壹账通智能科技有限公司 Intelligent interaction method and device, electronic equipment and storage medium
CN111428017B (en) * 2020-03-24 2022-12-02 科大讯飞股份有限公司 Human-computer interaction optimization method and related device
CN111611468B (en) * 2020-04-29 2023-08-25 百度在线网络技术(北京)有限公司 Page interaction method and device and electronic equipment
CN112530422A (en) * 2020-11-04 2021-03-19 联想(北京)有限公司 Response processing method, intelligent device and storage medium
CN113012687B (en) * 2021-03-05 2022-05-13 北京嘀嘀无限科技发展有限公司 Information interaction method and device and electronic equipment
CN113096657A (en) * 2021-03-30 2021-07-09 西安云湾科技有限公司 Intelligent interaction system and method based on Internet of things products
CN113792125B (en) * 2021-08-25 2024-04-02 北京库睿科技有限公司 Intelligent retrieval ordering method and device based on text relevance and user intention
CN113705245B (en) * 2021-09-01 2022-09-27 北京邮电大学 Semantic communication method, device, system, computer equipment and storage medium
CN114969282B (en) * 2022-05-05 2024-02-06 迈吉客科技(北京)有限公司 Intelligent interaction method based on rich media knowledge graph multi-modal emotion analysis model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880645A (en) * 2012-08-24 2013-01-16 上海云叟网络科技有限公司 Semantic intelligent search method
CN103198155A (en) * 2013-04-27 2013-07-10 俞志晨 Mobile terminal based smart question answering interaction system and method
CN104731895A (en) * 2015-03-18 2015-06-24 北京京东尚科信息技术有限公司 Auto-answer method and device
CN105895087A (en) * 2016-03-24 2016-08-24 海信集团有限公司 Voice recognition method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002023783A (en) * 2000-07-13 2002-01-25 Fujitsu Ltd Conversation processing system
US9711137B2 (en) * 2011-11-10 2017-07-18 At&T Intellectual Property I, Lp Network-based background expert
WO2015065976A1 (en) * 2013-10-28 2015-05-07 Nant Holdings Ip, Llc Intent engines systems and method
US9390706B2 (en) * 2014-06-19 2016-07-12 Mattersight Corporation Personality-based intelligent personal assistant system and methods


Also Published As

Publication number Publication date
US11106983B2 (en) 2021-08-31
US20180157959A1 (en) 2018-06-07
CN106776936A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106776936B (en) Intelligent interaction method and system
US11568855B2 (en) System and method for defining dialog intents and building zero-shot intent recognition models
US11488601B2 (en) Dependency graph conversation modeling for use in conducting human-to-computer dialog sessions with a computer-implemented automated assistant
CN106649825B (en) Voice interaction system and creation method and device thereof
US9582757B1 (en) Scalable curation system
CN102880649B Customized information processing method and system
CN114424185A (en) Stop word data augmentation for natural language processing
CN111708869B (en) Processing method and device for man-machine conversation
CN111177569A (en) Recommendation processing method, device and equipment based on artificial intelligence
JP2023530423A (en) Entity-Level Data Augmentation in Chatbots for Robust Named Entity Recognition
CN110162675B (en) Method and device for generating answer sentence, computer readable medium and electronic device
CN116127020A (en) Method for training generated large language model and searching method based on model
US8315874B2 (en) Voice user interface authoring tool
US20230186033A1 (en) Guided text generation for task-oriented dialogue
CN110210038A Core entity determination method and system, server and computer-readable medium
CN112579733B (en) Rule matching method, rule matching device, storage medium and electronic equipment
Windiatmoko et al. Developing FB chatbot based on deep learning using RASA framework for university enquiries
CN116635862A (en) Outside domain data augmentation for natural language processing
CN114997288A (en) Design resource association method
CN113051380A (en) Information generation method and device, electronic equipment and storage medium
CN114020886A (en) Speech intention recognition method, device, equipment and storage medium
CN116186219A (en) Man-machine dialogue interaction method, system and storage medium
CN116304014A (en) Method for training entity type recognition model, entity type recognition method and device
US20220165257A1 (en) Neural sentence generator for virtual assistants
CN112487154B (en) Intelligent search method based on natural language

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Intelligent interaction methods and systems

Effective date of registration: 20230223

Granted publication date: 20200218

Pledgee: China Construction Bank Corporation Shanghai No.5 Sub-branch

Pledgor: SHANGHAI XIAOI ROBOT TECHNOLOGY Co.,Ltd.

Registration number: Y2023980033272