WO2019001127A1 - Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device - Google Patents

Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device

Info

Publication number
WO2019001127A1
WO2019001127A1 (PCT/CN2018/084879, CN2018084879W)
Authority
WO
WIPO (PCT)
Prior art keywords
emotional
corpus
character
determined
role
Prior art date
Application number
PCT/CN2018/084879
Other languages
French (fr)
Chinese (zh)
Inventor
伏英娜
金宇林
雷宇
Original Assignee
迈吉客科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 迈吉客科技(北京)有限公司 filed Critical 迈吉客科技(北京)有限公司
Publication of WO2019001127A1 publication Critical patent/WO2019001127A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Definitions

  • The invention relates to the field of intelligent audio and video control, and in particular to a virtual character-based artificial intelligence interaction method and an artificial intelligence interaction device.
  • An intelligent assistant product involves the processes of interaction information input, interaction information response, and interaction information output.
  • For example, Microsoft's Cortana assistant product and Apple's Siri assistant product first obtain the input speech as text through speech recognition technology, then use NLP technology (natural language processing technology) to extract keywords from the interaction information in the recognized text.
  • A knowledge list (or knowledge base) is then searched by keyword matching or fuzzy matching to respond to the input interaction information; the matched knowledge content is returned as a message reply, which is output as interaction information in the form of speech or text.
  • Because the knowledge list (or knowledge base) used in the response process is formatted and contains only the necessary knowledge content, the interaction information output formed from the message replies to the same query lacks emotional features. The assistant product's interaction information output is therefore dull, cannot reflect the contextual attributes of the interaction, and gives a poor human-computer interaction experience.
  • In view of this, embodiments of the present invention provide a virtual character-based artificial intelligence interaction method and an artificial intelligence interaction device, which address the technical problem that the interaction effect of an artificial intelligence assistant character is poor because interaction information responses lack emotional features.
  • The virtual character-based artificial intelligence interaction method of the invention comprises: obtaining corpus resources of a character from a corpus and vectorizing them to form a role corpus of the character; and
  • performing natural language semantic recognition on the role corpus through a neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character.
  • The virtual character-based artificial intelligence interaction device of the invention comprises:
  • a corpus phrase generating module, configured to obtain corpus resources of a character from a corpus and vectorize them to form a role corpus of the character; and
  • a semantic recognition module, configured to perform natural language semantic recognition on the role corpus through a neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character.
  • The virtual character-based artificial intelligence interaction device of the invention comprises:
  • a memory for storing program code of the processing procedure of the virtual character-based artificial intelligence interaction method described above; and
  • a processor for executing the program code.
  • The virtual character-based artificial intelligence interaction method and artificial intelligence interaction device create an artificial intelligence assistant with a character's personality through natural semantic recognition, greatly reducing manual involvement and development workload while producing an artificial intelligence assistant with more natural emotional effects.
  • FIG. 1 is a flowchart of a method for artificial intelligence interaction based on virtual characters according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of forming the determined character, the determined character's personality, and the determined character's emotion in the virtual character-based artificial intelligence interaction method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of the emotional interaction information output of the virtual character-based artificial intelligence interaction method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a virtual character-based artificial intelligence interaction device or program module.
  • FIG. 1 is a flowchart of a virtual character-based artificial intelligence interaction method according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
  • Step 100: Obtain the corpus resources of a character from a corpus and vectorize them to form the role corpus of the character.
  • The corpus resources of a character include literary works containing the determined character, such as novels, scripts, poems, or lines, that can be retrieved from the corpus.
  • The carrier of a literary work may be text or audio, and audio works can be converted into corresponding text through speech recognition technology.
  • The corpus resources of a character may be a literary work containing one character or a literary work containing several characters. Several characters may converse with one another in the same literary work, or may be cited in different literary works.
  • Vectorizing the corpus resources of a character includes preprocessing such as sentence splitting, word segmentation, and desensitization, followed by applying a technique such as word2vec (text to vector) to vectorize all words in the corpus resources while preserving their order.
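  • A minimal sketch of the vectorization step described above, assuming jieba for word segmentation and gensim's Word2Vec implementation (the library choices and the sample corpus are illustrative assumptions, not specified by the patent):

```python
# Hypothetical sketch: segment a character's corpus resources and vectorize the
# words with word2vec while preserving word order. jieba and gensim are assumed
# tools; the patent only names the word2vec technique.
import jieba
from gensim.models import Word2Vec

raw_lines = [
    "角色甲说：今天的天气真好。",
    "角色甲说：我有点害怕打雷。",
]

# Sentence splitting and word segmentation (desensitization omitted for brevity).
sentences = [list(jieba.cut(line)) for line in raw_lines]

# Train a small word2vec model over the segmented corpus.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

# Keep each sentence as an ordered list of word vectors, preserving word order.
role_corpus = [[model.wv[word] for word in sent] for sent in sentences]
```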
  • Step 200: Perform natural language semantic recognition on the role corpus through a neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character.
  • The neural network model used for natural language semantic recognition of the role corpus may be a CNN (convolutional neural network) model, an RNN (recurrent neural network) model, or a DNN (deep neural network) model.
  • The content of natural language semantic recognition includes the intrinsic relationships of sentence order and word order, the relation between word frequency and degree of emotional expression, the relation between sentences and characters, and the relation between the character's personality and emotion and sentence structure, that is, the quantifiable internal emotional connections surrounding the determined character in the literary works.
  • Step 300: Combine the sentence emotional features of the determined character with the interaction information response, based on an emotional judgment of the interaction information input, to form the emotional interaction information output.
  • The neural network model extracts intrinsic relational information between corpus items from the role corpus, quantifies the character's personality into specific language structure features, and works with the conventional interaction information response to form an interaction information response with contextually related emotional expression.
  • The tone and emotional expression of the interaction information output thus adapt to changes in the language mood of the interaction information input, so the knowledge content returned as the response can be adjusted in time to the user's emotional changes during the interaction, increasing the affinity of the intelligent assistant product and improving the language interaction experience.
  • The virtual character-based artificial intelligence interaction method of this embodiment creates an AI assistant with a character's personality through natural semantic recognition, greatly reduces manual involvement and development workload, and can produce an artificial intelligence assistant with more natural emotional effects.
  • FIG. 2 is a flowchart of forming the determined character, the determined character's personality, and the determined character's emotion in the virtual character-based artificial intelligence interaction method according to an embodiment of the present invention. As shown in FIG. 2, forming the determined character includes:
  • Step 210: Process the role corpus through a first neural network model and output corpus phrases.
  • In this embodiment, the first neural network model includes a CNN model and a softmax classifier (maximum class classifier). The CNN model includes an input layer, convolution layers, pooling layers, and a fully connected layer; the data produced by the CNN model are passed through the softmax classifier to form all of the corpus phrases contained in the role corpus.
  • The CNN model ensures that the intrinsic relational information between words in the role corpus is not lost during classification, and the softmax classifier ensures that the output data are discretized into corpus phrases.
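  • A minimal sketch of the first neural network model described above (input layer, convolution, pooling, fully connected layer, then a softmax classifier), assuming PyTorch; the layer sizes, vocabulary size, and class count are illustrative assumptions, not values given by the patent:

```python
# Hypothetical PyTorch sketch of the first neural network model: a text CNN whose
# softmax output discretizes role-corpus input into corpus-phrase classes.
import torch
import torch.nn as nn

class PhraseCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, num_classes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # input layer
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3)    # convolution layer
        self.pool = nn.AdaptiveMaxPool1d(1)                     # pooling layer
        self.fc = nn.Linear(128, num_classes)                   # fully connected layer

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)
        logits = self.fc(x)
        # softmax classifier: discrete distribution over corpus-phrase classes
        return torch.softmax(logits, dim=-1)

model = PhraseCNN(vocab_size=5000)
probs = model(torch.randint(0, 5000, (8, 20)))   # a batch of 8 token sequences of length 20
```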
  • Step 220: Perform part-of-speech tagging on the words in the corpus phrases.
  • Part of speech classifies words according to their characteristics.
  • The words of modern Chinese can be divided into two categories comprising twelve parts of speech.
  • One category is content words: nouns, verbs, adjectives, numerals, measure words, and pronouns.
  • The other category is function words: adverbs, prepositions, conjunctions, particles, interjections, and onomatopoeia.
  • The part-of-speech tagging algorithm may use HanLP or Jieba.
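  • As one illustrative option (the patent names the Jieba and HanLP algorithms but gives no code), part-of-speech tagging with jieba might look like the following; the sample sentence is a made-up example:

```python
# Hypothetical example of part-of-speech tagging with jieba's posseg module.
import jieba.posseg as pseg

sentence = "角色甲今天非常高兴地回答了问题。"
for word, flag in pseg.cut(sentence):
    # flag is jieba's part-of-speech tag, e.g. 'n' (noun), 'v' (verb), 'd' (adverb)
    print(word, flag)
```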
  • Step 230: Perform statistics on the corpus phrases according to the part-of-speech tags to form determined characters, and form a determined character list.
  • The part-of-speech tags further reflect the natural language structure in the corpus phrases, that is, the attributes and connection structures of subject, predicate, object, attributive, adverbial, and complement.
  • By making statistics on the specific meanings of subjects, predicates, and objects, the determined characters referred to by the subjects can be obtained, and a list of all determined characters can be formed.
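  • A minimal illustration of the statistics in step 230, assuming POS-tagged phrases are already available; the subject heuristic, tag names, threshold, and sample data are hypothetical:

```python
# Hypothetical illustration: count how often each person-name noun appears in
# subject position across tagged corpus phrases and keep the frequent ones as
# determined characters.
from collections import Counter

# Each phrase is a list of (word, pos_tag) pairs produced by the tagger.
tagged_phrases = [
    [("小明", "nr"), ("说", "v"), ("你好", "l")],
    [("小明", "nr"), ("害怕", "v"), ("打雷", "n")],
    [("天气", "n"), ("很", "d"), ("好", "a")],
]

subject_counts = Counter(
    word
    for phrase in tagged_phrases
    for word, tag in phrase[:1]   # crude subject heuristic: the first token
    if tag == "nr"                # 'nr' is the person-name tag in jieba's tag set
)

determined_characters = [name for name, count in subject_counts.items() if count >= 2]
print(determined_characters)   # e.g. ['小明']
```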
  • As shown in FIG. 2, on the basis of the above embodiment, forming the sentence emotional features of the determined character in this embodiment of the present invention further includes:
  • Step 210: Process the role corpus through the first neural network model and output corpus phrases.
  • In this embodiment, the first neural network model includes a CNN model and a softmax classifier (maximum class classifier). The CNN model includes an input layer, convolution layers, pooling layers, and a fully connected layer; the data produced by the CNN model are passed through the softmax classifier to form all of the corpus phrases contained in the role corpus.
  • The CNN model ensures that the intrinsic relational information between words in the role corpus is not lost during classification, and the softmax classifier ensures that the output data are discretized into corpus phrases.
  • Step 220: Perform part-of-speech tagging on the words in the corpus phrases.
  • Part of speech classifies words according to their characteristics.
  • The words of modern Chinese can be divided into two categories comprising twelve parts of speech.
  • One category is content words: nouns, verbs, adjectives, numerals, measure words, and pronouns.
  • The other category is function words: adverbs, prepositions, conjunctions, particles, interjections, and onomatopoeia.
  • The part-of-speech tagging algorithm may use HanLP or Jieba.
  • Step 230: Perform statistics on the corpus phrases according to the part-of-speech tags to form determined characters, and form a determined character list.
  • The part-of-speech tags further reflect the natural language structure in the corpus phrases, that is, the attributes and connection structures of subject, predicate, object, attributive, adverbial, and complement.
  • By making statistics on the specific meanings of subjects, predicates, and objects, the determined characters referred to by the subjects can be obtained, and a list of all determined characters can be formed.
  • Step 240: Classify the corpus phrases according to the determined characters to form the role corpus phrases of each determined character, and obtain leading dialogue keywords from the role corpus phrases of the determined character.
  • The role corpus phrases of a determined character include the dialogue content associated with that character and exclude corpus phrases unrelated to it.
  • The dialogue content of each determined character can be assembled based on the part-of-speech tags and the determined character list.
  • The leading dialogue keywords in the role corpus phrases are the words, expressions, or short sentences habitually used by the determined character, identified by features such as frequency.
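  • A sketch of how leading dialogue keywords might be selected by frequency for one determined character in step 240; the data, threshold, and helper name are illustrative assumptions:

```python
# Hypothetical sketch: collect the corpus phrases attributed to one determined
# character and take its most frequent words as leading dialogue keywords.
from collections import Counter

character_phrases = {
    "小明": [["哎呀", "真的", "吗"], ["哎呀", "我", "不", "信"], ["真的", "太", "好", "了"]],
}

def leading_keywords(name, top_k=3):
    counts = Counter(word for phrase in character_phrases[name] for word in phrase)
    return [word for word, _ in counts.most_common(top_k)]

print(leading_keywords("小明"))   # e.g. ['哎呀', '真的', '吗']
```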
  • Step 270: Combine the keyword (or word) list in the existing interaction information response with the leading dialogue keywords of the determined character to form a proprietary keyword list of the determined character.
  • In this embodiment, the proprietary keyword list associates the response information in the existing interaction information response with the dialogue of the determined character, so that the response information and the character's feature information can be processed and fused in a unified way.
  • As shown in FIG. 2, on the basis of the above embodiment, forming the sentence emotional features of the determined character in this embodiment of the present invention further includes:
  • Step 250: Process the role corpus phrases of the determined character through a second neural network model, classify them by emotional feature into emotional corpus phrases of the determined character, and thereby form the sentence emotional features of the determined character.
  • In this embodiment, the second neural network model includes a CNN model and SVM classifiers (support vector machine classifiers). The CNN model includes an input layer, convolution layers, pooling layers, and a fully connected layer; the data it produces are passed through two cascaded SVM classifiers to form emotional corpus phrases of different emotion categories.
  • The SVM classifiers ensure accurate emotion classification based on the intrinsic emotional connections formed by the second neural network model. For example, with six basic emotion categories (joy, anger, sorrow, surprise, fear, and love), the SVM classifiers, as a supervised learning method, first separate the extreme sorrow and love emotional corpus phrases using an emotion dictionary; the first SVM classifier then splits the remaining phrases into joy-surprise and fear-anger groups, and the second SVM classifier further divides these into four categories of emotional corpus phrases: joy, surprise, fear, and anger.
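  • A simplified sketch of the cascaded SVM classification described above, assuming scikit-learn and pre-computed feature vectors standing in for the CNN model's output; the feature dimensions, labels, and training data are placeholders, not values from the patent:

```python
# Hypothetical cascade: the first SVM splits phrases into a joy/surprise group and
# a fear/anger group; a second SVM per group resolves the final emotion.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(40, 16))          # placeholder CNN feature vectors
group_labels = np.tile([0, 0, 1, 1], 10)      # 0 = joy/surprise, 1 = fear/anger
fine_labels = np.tile([0, 1, 0, 1], 10)       # class within each group

svm_stage1 = SVC(kernel="rbf").fit(features, group_labels)
svm_stage2 = {
    g: SVC(kernel="rbf").fit(features[group_labels == g], fine_labels[group_labels == g])
    for g in (0, 1)
}

def classify_emotion(x):
    group = int(svm_stage1.predict([x])[0])
    fine = int(svm_stage2[group].predict([x])[0])
    names = {(0, 0): "joy", (0, 1): "surprise", (1, 0): "fear", (1, 1): "anger"}
    return names[(group, fine)]

print(classify_emotion(features[0]))
```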
  • As shown in FIG. 2, on the basis of the above embodiment, forming the personality traits of the determined character in this embodiment of the present invention further includes:
  • Step 260: Perform emotion frequency statistics on the emotional corpus phrases of the determined character to form the main personality traits of the determined character.
  • An emotion dictionary is used to count the emotion words and their frequencies in the emotional corpus phrases of the determined character, forming emotion-frequency histogram data; the histogram data are input into an SVM classifier to obtain a personality-trait classification.
  • For example, the personality-trait classification may include:
  • Type A personality is emotionally stable, with balanced social adaptability and orientation, but average intellectual performance, average initiative, and weak communication ability;
  • Type B personality is extroverted, emotionally unstable, poorly adapted socially, impatient, and has strained interpersonal relationships;
  • Type C personality is introverted, emotionally stable, and socially well adapted, but generally passive;
  • Type D personality is extroverted, with good or average social adaptability, good interpersonal relationships, and organizational ability;
  • Type E personality is introverted, emotionally unstable, with poor or average social adaptability and weak social skills, but often good at independent thinking and inclined to research.
  • The personality classification with the largest weight is used as the personality trait of the determined character.
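  • An illustrative sketch of step 260, assuming scikit-learn: emotion-word counts are turned into a histogram and fed to an SVM that predicts one of the five personality types; the training histograms and labels below are made up for the example:

```python
# Hypothetical sketch: classify an emotion-frequency histogram (counts of joy,
# surprise, fear, anger, sorrow, love words) into personality types A-E.
import numpy as np
from sklearn.svm import SVC

train_histograms = np.array([
    [12, 3, 1, 1, 2, 5],   # mostly joy/love       -> "A"
    [4, 8, 2, 9, 1, 1],    # surprise/anger heavy  -> "B"
    [3, 1, 2, 1, 6, 4],    # calm, some sorrow     -> "C"
    [9, 6, 1, 2, 1, 7],    # outgoing mix          -> "D"
    [2, 1, 7, 2, 8, 1],    # fear/sorrow heavy     -> "E"
] * 4)                     # repeated rows as toy training data
train_labels = np.array(["A", "B", "C", "D", "E"] * 4)

personality_svm = SVC(kernel="linear").fit(train_histograms, train_labels)

character_histogram = np.array([[10, 4, 1, 1, 2, 6]])
print(personality_svm.predict(character_histogram))   # e.g. ['A']
```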
  • As shown in FIG. 2, on the basis of the above embodiment, forming the sentence emotional features of the determined character in this embodiment of the present invention further includes:
  • Step 280: Perform statistics on the role corpus phrases of the determined character according to natural language structure, and form an emotional language structure corresponding to the sentence emotional features of the determined character.
  • This embodiment uses natural language structure as the statistical basis, counts the frequencies of different language structures in the emotional corpus phrases of the determined character, and takes the high-frequency language structures as the character's language habits under different emotional features. A dialogue sample template of the determined character is then formed from the language habits under each emotional feature.
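  • One way the language-structure statistics of step 280 might be realized is to count part-of-speech sequence patterns per emotion and keep the most frequent pattern as that emotion's habitual structure; the pattern strings below are illustrative assumptions:

```python
# Hypothetical sketch: for each emotion, count the POS-sequence patterns of the
# character's emotional phrases and keep the most frequent one as the habitual
# structure, which can then seed a dialogue sample template.
from collections import Counter, defaultdict

# (emotion, POS pattern of the phrase) pairs derived from tagged emotional phrases.
emotion_patterns = [
    ("joy", "pron+adv+adj"), ("joy", "pron+adv+adj"), ("joy", "interj+pron+verb"),
    ("anger", "pron+verb+noun"), ("anger", "pron+verb+noun"), ("anger", "adv+verb"),
]

pattern_counts = defaultdict(Counter)
for emotion, pattern in emotion_patterns:
    pattern_counts[emotion][pattern] += 1

emotional_language_structure = {
    emotion: counts.most_common(1)[0][0] for emotion, counts in pattern_counts.items()
}
print(emotional_language_structure)
# {'joy': 'pron+adv+adj', 'anger': 'pron+verb+noun'}
```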
  • As shown in FIG. 2, on the basis of the above embodiment, forming the sentence emotional features of the determined character in this embodiment of the present invention further includes:
  • Step 290: Perform frequency statistics on the proper nouns in the corpus phrases and their contextual associations, and form an associated-noun expression corpus for the proper nouns.
  • Proper nouns in the corpus phrases are identified with reference to the keyword list in the existing interaction information response and to frequency statistics over the corresponding information content associated with that list; the contextual associations of the proper nouns are determined with reference to the grammatical relations of sentences and words in the corpus phrases and in the corresponding information content associated with the keyword list. When a proper noun is triggered during the interaction information input of the intelligent assistant product, the associated nouns and context related to that proper noun are output as part of the response.
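  • A sketch of the proper-noun context statistics in step 290, with toy data; the window size, phrases, and proper-noun set are assumptions:

```python
# Hypothetical sketch: for each proper noun, count which words co-occur within a
# small context window, producing an associated-noun expression corpus.
from collections import Counter, defaultdict

phrases = [
    ["小明", "喜欢", "故宫", "的", "红墙"],
    ["故宫", "的", "红墙", "很", "漂亮"],
]
proper_nouns = {"故宫"}
window = 2

associations = defaultdict(Counter)
for phrase in phrases:
    for i, word in enumerate(phrase):
        if word in proper_nouns:
            context = phrase[max(0, i - window):i] + phrase[i + 1:i + 1 + window]
            associations[word].update(context)

print(associations["故宫"].most_common(3))   # e.g. [('的', 2), ('红墙', 2), ('喜欢', 1)]
```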
  • FIG. 3 is a flowchart of outputting emotional interaction information of a virtual character based artificial intelligence interaction method according to an embodiment of the present invention.
  • As shown in FIG. 3, in this embodiment of the present invention, combining the sentence emotional features of the determined character with the interaction information response through an emotional judgment of the interaction information input includes:
  • Step 310: Receive the text words or emotion control information of the interaction information input.
  • Step 320: Extract the text words of the interaction information input and determine the emotional feature of the determined character by matching them against the emotional corpus phrases of the determined character.
  • Step 350: Obtain the emotional language structure of the determined character from the determined emotional feature, and combine the leading dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to that emotional language structure.
  • In this way, the emotional corpus phrases are combined with the standard response information in the existing interaction information response, adding the determined character's emotional expression factor to the standard response information.
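  • A highly simplified sketch of steps 310 to 350: the input text is matched against the character's emotional corpus phrases to pick an emotion, and that emotion's language habits (template plus leading keyword) are wrapped around the standard response; every data structure here is an illustrative assumption:

```python
# Hypothetical end-to-end sketch: detect the emotion of the input text, then wrap
# the standard knowledge-base reply in the determined character's emotional style.
emotional_corpus = {
    "joy": {"真好", "太棒了", "开心"},
    "anger": {"讨厌", "生气", "烦"},
}
emotion_templates = {            # emotional language structure per emotion
    "joy": "{keyword}，{reply}呢！",
    "anger": "哼，{reply}。",
}
leading_keyword = "哎呀"          # from the character's proprietary keyword list

def detect_emotion(text):
    for emotion, words in emotional_corpus.items():
        if any(w in text for w in words):
            return emotion
    return "joy"                 # fallback emotion for this toy example

def emotional_reply(user_text, standard_reply):
    emotion = detect_emotion(user_text)
    return emotion_templates[emotion].format(keyword=leading_keyword, reply=standard_reply)

print(emotional_reply("今天天气真好", "今天晴，26 度"))   # 哎呀，今天晴，26 度呢！
```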
  • In this embodiment of the present invention, combining the sentence emotional features of the determined character with the interaction information response through an emotional judgment of the interaction information input may also include:
  • Step 310: Receive the text words or emotion control information of the interaction information input.
  • Step 320: Extract the text words of the interaction information input and determine the emotional feature of the determined character by matching them against the emotional corpus phrases of the determined character.
  • Step 340: Extract the text words of the interaction information input and, by matching them against the proper nouns in the corpus phrases, determine the associated-noun expression corpus of the determined character's proper nouns.
  • Step 350: Obtain the emotional language structure of the determined character from the determined emotional feature, and combine the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to that emotional language structure.
  • In this way, the proper-noun associated-noun expression corpus is combined with the standard response information in the existing interaction information response, and the associated context in the determined character's corpus phrases supplies the character's emotional expression factor.
  • In this embodiment of the present invention, combining the sentence emotional features of the determined character with the interaction information response through an emotional judgment of the interaction information input may also include:
  • Step 310: Receive the text words or emotion control information of the interaction information input.
  • Step 330: Extract the emotion control information of the interaction information input and match it against the determined character to determine the emotional feature of the determined character.
  • Step 350: Obtain the emotional language structure of the determined character from the determined emotional feature, and combine the leading dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to that emotional language structure.
  • Alternatively, in a further step, the emotional language structure of the determined character is obtained from the determined emotional feature, and the associated-noun expression corpus of the determined character's proper nouns is combined with the standard response information in the existing interaction information response according to that emotional language structure.
  • This further step may replace step 350, or the two steps may both be included, arranged in parallel in the processing logic.
  • In this embodiment, the determined character and the determined emotion can be continuously supplied as controllable weight information through the interaction information input, so that the degree of the character's emotional expression can be adjusted immediately according to the user's experience and feeling.
  • Feedback is formed from the emotional responses exchanged between the user and the virtual determined character, providing a positive or negative feedback effect on the character's continuing emotional expression and thereby further enhancing the human-computer interaction between the determined character and the user.
  • FIG. 4 is a schematic structural diagram of an artificial intelligence interactive device or program module based on a virtual character.
  • As shown in FIG. 4, the virtual character-based artificial intelligence interaction device, or the program module deployed in a processor, includes:
  • the corpus phrase generating module 10, configured to obtain corpus resources of a character from the corpus and vectorize them to form a role corpus of the character;
  • the semantic recognition module 20, configured to perform natural language semantic recognition on the role corpus through the neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character; and
  • the emotion combining module 30, configured to combine the sentence emotional features of the determined character with the interaction information response, based on an emotional judgment of the interaction information input, to form the emotional interaction information output.
  • the semantic recognition module 20 includes:
  • the corpus phrase generating unit 21 is configured to process the role corpus through the first neural network model and output the corpus phrase.
  • the part of speech tagging unit 22 is configured to perform part-of-speech tagging on words in the corpus phrase.
  • the determined character generating unit 23, configured to perform statistics on the corpus phrases according to the part-of-speech tags to form determined characters and form a determined character list;
  • the role corpus phrase generating unit 24, configured to classify the corpus phrases according to the determined characters, form the role corpus phrases of each determined character, and obtain leading dialogue keywords from the role corpus phrases of the determined character;
  • the emotional corpus phrase generating unit 25, configured to process the role corpus phrases of the determined character through the second neural network model, classify them by emotional feature into emotional corpus phrases of the determined character, and form the sentence emotional features of the determined character;
  • the personality trait generating unit 26, configured to perform emotion frequency statistics on the emotional corpus phrases of the determined character to form the main personality traits of the determined character;
  • the proprietary keyword list generating unit 27, configured to combine the keyword (or word) list in the existing interaction information response with the leading dialogue keywords of the determined character to form a proprietary keyword list of the determined character;
  • the emotional language structure generating unit 28, configured to perform statistics on the role corpus phrases of the determined character according to natural language structure and form an emotional language structure corresponding to the sentence emotional features of the determined character; and
  • the proper noun generating unit 29, configured to perform frequency statistics on the proper nouns in the corpus phrases and their contextual associations and form an associated-noun expression corpus for the proper nouns.
  • the emotion combining module 30 includes:
  • the emotion information extracting unit 31 is configured to receive text vocabulary or emotion control information input by the interaction information.
  • the emotion feature recognition unit 32 is configured to extract a text vocabulary input by the interaction information, and determine an emotional feature of the character by matching and extracting the emotional corpus phrase of the character.
  • the emotion feature control unit 33 is configured to extract emotion control information input by the interaction information, match the determined role, and determine the emotional feature of the character.
  • the proper noun association unit 34, configured to extract the text words of the interaction information input and, by matching them against the proper nouns in the corpus phrases, determine the associated-noun expression corpus of the determined character's proper nouns; and
  • the response information emotion generating unit 35, configured to obtain the emotional language structure of the determined character from the determined emotional feature and combine the leading dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to that emotional language structure.
  • The emotion combining module 30 further includes a second response information emotion generating unit, configured to obtain the emotional language structure of the determined character from the determined emotional feature and combine the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to that emotional language structure.
  • the second response information emotion generating unit may replace the response information emotion generating unit 35.
  • the second response information emotion generating unit is disposed in parallel with the response information emotion generating unit 35.
  • a memory for storing program code of the processing procedure of the virtual character-based artificial intelligence interaction method of the above embodiment; and
  • a processor for executing the program code of the processing procedure of the virtual character-based artificial intelligence interaction method of the above embodiment.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • In actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • The technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • The virtual character-based artificial intelligence interaction method and device of the embodiments of the invention create an artificial intelligence assistant with a character's personality through natural semantic recognition, greatly reduce manual involvement and development workload, and can produce an artificial intelligence assistant with more natural emotional effects. They can be widely used in production and everyday environments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

Provided by the present invention is a virtual character-based artificial intelligence interaction method that is used to solve the technical problem wherein the interactive effect for artificial intelligence in the role of an assistant is poor due to a lack of emotional characteristics in interaction information responses. The embodiments of the present invention comprise: acquiring, from a corpus, a corpus resource for a role and performing vectorization processing so as to form a role corpus for the role; by means of a neural network model, performing natural language semantic recognition on the role corpus to form a determined role, and forming an emotional characteristic for statements by the determined role, then forming a personality characteristic for the determined role. The virtual character-based artificial intelligence interaction method of the embodiments of the present application combines text words or emotional control information of an interaction information input, and, by means of a manner of natural semantic recognition, creates an artificial helper having a personality; the present invention reduces human involvement on a large-scale, reducing the workload in development while producing an artificial intelligence helper having more natural emotions. The present invention further comprises a virtual character-based artificial intelligence interaction device.

Description

Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device
The present invention claims priority to the application filed by the applicant on June 26, 2017, with application number CN2017104987389, entitled "An artificial intelligence interaction method and artificial intelligence interaction device". The entire contents of the above application are hereby incorporated by reference.
Technical field
The invention relates to the field of intelligent audio and video control, and in particular to a virtual character-based artificial intelligence interaction method and an artificial intelligence interaction device.
Background of the invention
Intelligent assistant products involve the processes of interaction information input, interaction information response, and interaction information output. For example, Microsoft's Cortana assistant product and Apple's Siri assistant product first obtain the input speech as text through speech recognition technology, then use NLP technology (natural language processing technology) to extract keywords from the interaction information in the recognized text; a knowledge list (or knowledge base) is searched by keyword matching or fuzzy matching to respond to the input interaction information, and the matched knowledge content is returned as a message reply, which is output as interaction information in the form of speech or text.
During the interaction information response, because the knowledge list (or knowledge base) used is formatted and contains only the necessary knowledge content, the interaction information output formed from the message replies to the same query lacks emotional features; the intelligent assistant's interaction information output is therefore dull, cannot reflect the contextual attributes of the interaction, and gives a poor human-computer interaction experience.
Summary of the invention
In view of this, embodiments of the present invention provide a virtual character-based artificial intelligence interaction method and an artificial intelligence interaction device, which address the technical problem that the interaction effect of an artificial intelligence assistant character is poor because interaction information responses lack emotional features.
The virtual character-based artificial intelligence interaction method of the invention comprises:
obtaining corpus resources of a character from a corpus and vectorizing them to form a role corpus of the character; and
performing natural language semantic recognition on the role corpus through a neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character.
The virtual character-based artificial intelligence interaction device of the invention comprises:
a corpus phrase generating module, configured to obtain corpus resources of a character from a corpus and vectorize them to form a role corpus of the character; and
a semantic recognition module, configured to perform natural language semantic recognition on the role corpus through a neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character.
The virtual character-based artificial intelligence interaction device of the invention comprises:
a memory for storing program code of the processing procedure of the virtual character-based artificial intelligence interaction method described above; and
a processor for executing the program code.
The virtual character-based artificial intelligence interaction method and artificial intelligence interaction device of the embodiments of the present invention create an artificial intelligence assistant with a character's personality through natural semantic recognition, greatly reducing manual involvement and development workload while producing an artificial intelligence assistant with more natural emotional effects.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings may be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of a virtual character-based artificial intelligence interaction method according to an embodiment of the present invention.
FIG. 2 is a flowchart of forming the determined character, the determined character's personality, and the determined character's emotion in the virtual character-based artificial intelligence interaction method according to an embodiment of the present invention.
FIG. 3 is a flowchart of the emotional interaction information output of the virtual character-based artificial intelligence interaction method according to an embodiment of the present invention.
FIG. 4 is a schematic structural diagram of a virtual character-based artificial intelligence interaction device or program module.
Mode for carrying out the invention
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The step numbers in the drawings are used only as reference signs for the steps and do not indicate the order of execution.
FIG. 1 is a flowchart of a virtual character-based artificial intelligence interaction method according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
Step 100: Obtain the corpus resources of a character from a corpus and vectorize them to form the role corpus of the character.
The corpus resources of a character include literary works containing the determined character, such as novels, scripts, poems, or lines, that can be retrieved from the corpus. The carrier of a literary work may be text or audio, and audio works can be converted into corresponding text through speech recognition technology. The corpus resources of a character may be a literary work containing one character or a literary work containing several characters. Several characters may converse with one another in the same literary work, or may be cited in different literary works.
Vectorizing the corpus resources of a character includes preprocessing such as sentence splitting, word segmentation, and desensitization, followed by applying a technique such as word2vec (text to vector) to vectorize all words in the corpus resources while preserving their order.
Step 200: Perform natural language semantic recognition on the role corpus through a neural network model to form a determined character, form sentence emotional features of the determined character, and form personality traits of the determined character.
The neural network model used for natural language semantic recognition of the role corpus may be a CNN (convolutional neural network) model, an RNN (recurrent neural network) model, or a DNN (deep neural network) model. The content of natural language semantic recognition includes the intrinsic relationships of sentence order and word order, the relation between word frequency and degree of emotional expression, the relation between sentences and characters, and the relation between the character's personality and emotion and sentence structure, that is, the quantifiable internal emotional connections surrounding the determined character in the literary works.
Step 300: Combine the sentence emotional features of the determined character with the interaction information response, based on an emotional judgment of the interaction information input, to form the emotional interaction information output.
The neural network model extracts intrinsic relational information between corpus items from the role corpus, quantifies the character's personality into specific language structure features, and works with the conventional interaction information response to form an interaction information response with contextually related emotional expression. The tone and emotional expression of the interaction information output thus adapt to changes in the language mood of the interaction information input, so the knowledge content returned as the response can be adjusted in time to the user's emotional changes during the interaction, increasing the affinity of the intelligent assistant product and improving the language interaction experience.
The virtual character-based artificial intelligence interaction method of this embodiment creates an AI assistant with a character's personality through natural semantic recognition, greatly reduces manual involvement and development workload, and can produce an artificial intelligence assistant with more natural emotional effects.
FIG. 2 is a flowchart of forming the determined character, the determined character's personality, and the determined character's emotion in the virtual character-based artificial intelligence interaction method according to an embodiment of the present invention. As shown in FIG. 2, forming the determined character includes:
Step 210: Process the role corpus through a first neural network model and output corpus phrases.
In this embodiment, the first neural network model includes a CNN model and a softmax classifier (maximum class classifier). The CNN model includes an input layer, convolution layers, pooling layers, and a fully connected layer; the data produced by the CNN model are passed through the softmax classifier to form all of the corpus phrases contained in the role corpus. The CNN model ensures that the intrinsic relational information between words in the role corpus is not lost during classification, and the softmax classifier ensures that the output data are discretized into corpus phrases.
Step 220: Perform part-of-speech tagging on the words in the corpus phrases.
Part of speech classifies words according to their characteristics. The words of modern Chinese can be divided into two categories comprising twelve parts of speech. One category is content words: nouns, verbs, adjectives, numerals, measure words, and pronouns. The other category is function words: adverbs, prepositions, conjunctions, particles, interjections, and onomatopoeia. The part-of-speech tagging algorithm may use HanLP or Jieba.
Step 230: Perform statistics on the corpus phrases according to the part-of-speech tags to form determined characters, and form a determined character list.
The part-of-speech tags further reflect the natural language structure in the corpus phrases, that is, the attributes and connection structures of subject, predicate, object, attributive, adverbial, and complement. By making statistics on the specific meanings of subjects, predicates, and objects, the determined characters referred to by the subjects can be obtained, and a list of all determined characters can be formed.
As shown in FIG. 2, on the basis of the above embodiment, forming the sentence emotional features of the determined character in this embodiment of the present invention further includes:
Step 210: Process the role corpus through the first neural network model and output corpus phrases.
In this embodiment, the first neural network model includes a CNN model and a softmax classifier (maximum class classifier). The CNN model includes an input layer, convolution layers, pooling layers, and a fully connected layer; the data produced by the CNN model are passed through the softmax classifier to form all of the corpus phrases contained in the role corpus. The CNN model ensures that the intrinsic relational information between words in the role corpus is not lost during classification, and the softmax classifier ensures that the output data are discretized into corpus phrases.
Step 220: Perform part-of-speech tagging on the words in the corpus phrases.
Part of speech classifies words according to their characteristics. The words of modern Chinese can be divided into two categories comprising twelve parts of speech. One category is content words: nouns, verbs, adjectives, numerals, measure words, and pronouns. The other category is function words: adverbs, prepositions, conjunctions, particles, interjections, and onomatopoeia. The part-of-speech tagging algorithm may use HanLP or Jieba.
Step 230: Perform statistics on the corpus phrases according to the part-of-speech tags to form determined characters, and form a determined character list.
The part-of-speech tags further reflect the natural language structure in the corpus phrases, that is, the attributes and connection structures of subject, predicate, object, attributive, adverbial, and complement. By making statistics on the specific meanings of subjects, predicates, and objects, the determined characters referred to by the subjects can be obtained, and a list of all determined characters can be formed.
Step 240: Classify the corpus phrases according to the determined characters to form the role corpus phrases of each determined character, and obtain leading dialogue keywords from the role corpus phrases of the determined character.
The role corpus phrases of a determined character include the dialogue content associated with that character and exclude corpus phrases unrelated to it. The dialogue content of each determined character can be assembled based on the part-of-speech tags and the determined character list. The leading dialogue keywords in the role corpus phrases are the words, expressions, or short sentences habitually used by the determined character, identified by features such as frequency.
Step 270: Combine the keyword (or word) list in the existing interaction information response with the leading dialogue keywords of the determined character to form a proprietary keyword list of the determined character.
In this embodiment, the proprietary keyword list associates the response information in the existing interaction information response with the dialogue of the determined character, so that the response information and the character's feature information can be processed and fused in a unified way.
As shown in FIG. 2, on the basis of the above embodiment, forming the sentence emotional features of the determined character in this embodiment of the present invention further includes:
Step 250: Process the role corpus phrases of the determined character through a second neural network model, classify them by emotional feature into emotional corpus phrases of the determined character, and thereby form the sentence emotional features of the determined character.
In this embodiment, the second neural network model includes a CNN model and SVM classifiers (support vector machine classifiers). The CNN model includes an input layer, convolution layers, pooling layers, and a fully connected layer; the data it produces are passed through two cascaded SVM classifiers to form emotional corpus phrases of different emotion categories. The SVM classifiers ensure accurate emotion classification based on the intrinsic emotional connections formed by the second neural network model. For example, with six basic emotion categories (joy, anger, sorrow, surprise, fear, and love), the SVM classifiers, as a supervised learning method, first separate the extreme sorrow and love emotional corpus phrases using an emotion dictionary; the first SVM classifier then splits the remaining phrases into joy-surprise and fear-anger groups, and the second SVM classifier further divides these into four categories of emotional corpus phrases: joy, surprise, fear, and anger.
As shown in FIG. 2, on the basis of the above embodiment, forming the personality traits of the determined character in this embodiment of the present invention further includes:
Step 260: Perform emotion frequency statistics on the emotional corpus phrases of the determined character to form the main personality traits of the determined character.
An emotion dictionary is used to count the emotion words and their frequencies in the emotional corpus phrases of the determined character, forming emotion-frequency histogram data; the histogram data are input into an SVM classifier to obtain a personality-trait classification. For example, the personality-trait classification includes:
Type A: emotionally stable, with balanced social adaptability and orientation, but average intellectual performance, average initiative, and weak communication skills;
Type B: extroverted, emotionally unstable, poorly adapted socially, impatient under pressure, with strained interpersonal relationships;
Type C: introverted, emotionally stable, and well adapted socially, but generally passive;
Type D: extroverted, with good or average social adaptability, good interpersonal relationships, and organizational ability;
Type E: introverted, emotionally unstable, with poor or average social adaptability, not sociable, but often good at independent thinking and inclined to dig into problems.
The personality classification with the largest weight is taken as the determined character's personality feature.
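A minimal sketch of step 260 and the histogram-based classification follows, assuming a toy emotion dictionary, synthetic training histograms, and illustrative type labels A–E; none of these details come from the source:

```python
from collections import Counter

import numpy as np
from sklearn.svm import SVC

# Illustrative emotion dictionary; a real one would be much larger.
EMOTION_DICT = {
    "happy": "joy", "delighted": "joy",
    "astonished": "surprise", "amazed": "surprise",
    "afraid": "fear", "terrified": "fear",
    "furious": "anger", "annoyed": "anger",
}
EMOTIONS = ["joy", "surprise", "fear", "anger"]

def emotion_histogram(phrases):
    """Count emotion-word occurrences over a character's corpus phrases
    and return a normalized frequency histogram."""
    counts = Counter()
    for phrase in phrases:
        for word in phrase.lower().split():
            if word in EMOTION_DICT:
                counts[EMOTION_DICT[word]] += 1
    total = sum(counts.values()) or 1
    return np.array([counts[e] / total for e in EMOTIONS])

# Toy training data: histograms labelled with personality types A-E.
rng = np.random.default_rng(1)
X_train = rng.dirichlet(np.ones(4), size=50)
y_train = rng.choice(list("ABCDE"), size=50)
personality_svm = SVC(kernel="linear").fit(X_train, y_train)

phrases = ["I am so happy today", "that noise made me terrified"]
hist = emotion_histogram(phrases)
# The predicted (highest-weight) class is taken as the character's trait.
print(personality_svm.predict(hist.reshape(1, -1))[0])
```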
As shown in FIG. 2, on the basis of the foregoing embodiment, forming the sentence emotional features of the determined character in the embodiment of the present invention further includes:
Step 280: Perform statistics on the determined character's character corpus phrases by natural language structure to form emotional language structures corresponding to the determined character's sentence emotional features.
The embodiment of the present invention takes natural language structures as the statistical basis, counts the frequencies of the different language structures in the determined character's emotional corpus phrases, and takes the high-frequency structures as the determined character's language habits under the different emotional features. Dialogue sample templates for the determined character are then formed from the language habits under each emotional feature.
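A minimal sketch of this frequency statistic, assuming a toy part-of-speech lexicon stands in for a real natural-language-structure analyzer (the source does not name a specific tagger or structure inventory):

```python
from collections import Counter

# Toy tagger: a real system would use a proper POS tagger or parser here.
TOY_TAGS = {
    "i": "PRON", "you": "PRON", "we": "PRON",
    "love": "VERB", "fear": "VERB", "found": "VERB",
    "the": "DET", "a": "DET",
    "treasure": "NOUN", "storm": "NOUN", "dark": "ADJ",
}

def pattern(sentence):
    """Reduce a sentence to a coarse tag pattern such as 'PRON-VERB-DET-NOUN'."""
    return "-".join(TOY_TAGS.get(w, "X") for w in sentence.lower().split())

def emotional_language_structures(emotion_phrases):
    """For each emotion, count tag patterns and keep the most frequent one
    as the character's language habit under that emotion."""
    habits = {}
    for emotion, phrases in emotion_phrases.items():
        counts = Counter(pattern(p) for p in phrases)
        habits[emotion] = counts.most_common(1)[0][0] if counts else None
    return habits

corpus = {
    "joy": ["we found the treasure", "i love a storm"],
    "fear": ["i fear the dark", "i fear the storm"],
}
print(emotional_language_structures(corpus))
```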
As shown in FIG. 2, on the basis of the foregoing embodiment, forming the sentence emotional features of the determined character in the embodiment of the present invention further includes:
Step 290: Perform frequency statistics on the proper nouns in the corpus phrases and their contextual associations to form an associated-noun expression corpus for the proper nouns.
The proper nouns in the corpus phrases are identified with reference to the keyword list in the existing interaction information response and to frequency statistics over the information content associated with that keyword list; their contextual associations are identified with reference to the grammatical relations between sentences and words in the corpus phrases and in the information content associated with the keyword list. When a proper noun is triggered during the interaction information input of the intelligent assistant product, the associated nouns and context related to that proper noun are output as part of the response.
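A minimal sketch of step 290 and the triggered output, assuming a hand-listed set of proper nouns and a fixed co-occurrence window; a real system would derive the proper nouns and their grammatical context from the keyword list and associated content as described above:

```python
from collections import Counter, defaultdict

PROPER_NOUNS = {"Excalibur", "Camelot"}  # illustrative list

def build_association_corpus(phrases, window=2):
    """Collect context words that co-occur with each proper noun and keep
    the most frequent ones as its associated-noun expression corpus."""
    context_counts = defaultdict(Counter)
    for phrase in phrases:
        words = phrase.split()
        for i, word in enumerate(words):
            if word in PROPER_NOUNS:
                lo, hi = max(0, i - window), min(len(words), i + window + 1)
                for neighbor in words[lo:i] + words[i + 1:hi]:
                    context_counts[word][neighbor] += 1
    return {noun: [w for w, _ in c.most_common(3)]
            for noun, c in context_counts.items()}

associations = build_association_corpus([
    "the sword Excalibur sleeps beneath the lake",
    "Excalibur sleeps until the rightful king returns",
])

def respond_to_input(user_text):
    """If the user's input triggers a proper noun, return its associated words."""
    for word in user_text.split():
        if word in associations:
            return f"{word} ... {' '.join(associations[word])}"
    return None

print(respond_to_input("tell me about Excalibur"))
```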
FIG. 3 is a flowchart of the emotional interaction information output of the virtual-character-based artificial intelligence interaction method according to an embodiment of the present invention. As shown in FIG. 3, in the embodiment of the present invention, combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input includes:
Step 310: Receive the text vocabulary or emotion control information of the interaction information input.
Step 320: Extract the text vocabulary of the interaction information input, and extract the determined character's emotional feature by matching it against the determined character's emotional corpus phrases.
Step 350: Obtain the determined character's emotional language structure from the determined character's emotional feature, and combine the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
The embodiment of the present invention combines the emotional corpus phrases with the standard response information in the existing interaction information response, adding the determined character's emotional expression factors to the standard response information.
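A minimal sketch of steps 320 and 350, assuming the emotional language structures are simple slot templates and the emotional corpus phrases are plain word sets; both are illustrative stand-ins, not the source's representation:

```python
# Illustrative emotion templates: a slot for a character keyword and a slot
# for the standard response retrieved from the existing knowledge base.
EMOTION_TEMPLATES = {
    "joy": "Oh {keyword}! {response} Isn't that wonderful?",
    "fear": "{keyword}... {response} Please be careful.",
}

CHARACTER_EMOTION_PHRASES = {
    "joy": {"treasure", "wonderful", "sunny"},
    "fear": {"storm", "dark", "danger"},
}

def detect_emotion(user_text):
    """Match the input vocabulary against the character's emotional corpus phrases."""
    words = set(user_text.lower().split())
    for emotion, vocab in CHARACTER_EMOTION_PHRASES.items():
        if words & vocab:
            return emotion
    return "joy"  # fall back to a default mood

def emotional_response(user_text, standard_response, keyword):
    """Wrap the standard response in the character's emotional language structure."""
    emotion = detect_emotion(user_text)
    return EMOTION_TEMPLATES[emotion].format(keyword=keyword, response=standard_response)

print(emotional_response(
    "a storm is coming tonight",
    "Rain is expected after 8 pm.",
    "captain",
))
```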
As shown in FIG. 3, in the embodiment of the present invention, combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input includes:
Step 310: Receive the text vocabulary or emotion control information of the interaction information input.
Step 320: Extract the text vocabulary of the interaction information input, and extract the determined character's emotional feature by matching it against the determined character's emotional corpus phrases.
Step 340: Extract the text vocabulary of the interaction information input, and extract the associated-noun expression corpus of the determined character's proper nouns by matching it against the proper nouns in the corpus phrases.
Step 350: Obtain the determined character's emotional language structure from the determined character's emotional feature, and combine the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
The embodiment of the present invention combines the proper-noun associated-noun expression corpus with the standard response information in the existing interaction information response, using the associated context in the determined character's corpus phrases to add the determined character's emotional expression factors.
As shown in FIG. 3, in the embodiment of the present invention, combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input includes:
Step 310: Receive the text vocabulary or emotion control information of the interaction information input.
Step 330: Extract the emotion control information of the interaction information input, and match it to the determined character and the determined character's emotional feature.
Step 350: Obtain the determined character's emotional language structure from the determined character's emotional feature, and combine the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
An embodiment of the present invention further includes an additional step:
obtaining the determined character's emotional language structure from the determined character's emotional feature, and combining the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
The additional step may be used to replace step 350.
In an embodiment of the present invention, the additional step and step 350 may both be included, the two steps occupying the same parallel position in the logical processing flow.
In the embodiment of the present invention, the determined character and the determined emotion can be continuously supplied as controllable weight information through the interaction information input, so that the degree to which the determined character expresses the determined emotion is adjusted in real time according to the user's experience and perception. Feedback is thus formed from the emotional responses between the user and the virtual determined character, exerting a controllable positive or negative feedback influence on the determined character's ongoing emotional expression and further enhancing the human-computer interaction between the character and the user.
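A minimal sketch of such controllable weighting, assuming the weight is a single scalar per character and emotion that is nudged up or down by positive or negative feedback (the source does not specify an update rule):

```python
class EmotionWeightController:
    """Keep a per-character, per-emotion expression weight in [0, 1] and
    nudge it from user feedback, so the character's emotional display
    strengthens or softens over the course of the conversation."""

    def __init__(self, initial=0.5, step=0.1):
        self.weights = {}
        self.initial = initial
        self.step = step

    def feedback(self, character, emotion, positive):
        key = (character, emotion)
        w = self.weights.get(key, self.initial)
        w += self.step if positive else -self.step
        self.weights[key] = min(1.0, max(0.0, w))
        return self.weights[key]

controller = EmotionWeightController()
print(controller.feedback("captain", "joy", positive=True))   # 0.6
print(controller.feedback("captain", "joy", positive=False))  # 0.5
```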
FIG. 4 is a schematic structural diagram of the virtual-character-based artificial intelligence interaction device or program modules. As shown in FIG. 4, the virtual-character-based artificial intelligence interaction device, or the program modules deployed in a processor, includes:
a corpus phrase generation module 10, configured to obtain a character's corpus resources from a corpus and vectorize them to form the character's character corpus;
a semantic recognition module 20, configured to perform natural language semantic recognition on the character corpus through neural network models to form a determined character, the determined character's sentence emotional features, and the determined character's personality features; and
an emotion combination module 30, configured to combine the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input to form the emotional interaction information output.
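As a rough illustration of how these three modules could be chained, class names, method names, and the toy data below are all hypothetical and not part of the disclosure:

```python
class CorpusPhraseGenerator:
    """Module 10: fetch a character's corpus resources and vectorize them."""
    def build_character_corpus(self, corpus):
        return [phrase.lower().split() for phrase in corpus]

class SemanticRecognizer:
    """Module 20: identify the character and its emotional/personality features."""
    def analyze(self, character_corpus):
        return {"character": "captain", "emotion_phrases": character_corpus}

class EmotionCombiner:
    """Module 30: merge the character's emotional features into the response."""
    def respond(self, profile, user_text, standard_response):
        # Emotion judgment on user_text would select a template here
        # (omitted for brevity in this sketch).
        return f"({profile['character']}) {standard_response}"

# Wiring the three modules into one interaction pipeline.
corpus = ["We found the treasure!", "A storm is coming."]
profile = SemanticRecognizer().analyze(
    CorpusPhraseGenerator().build_character_corpus(corpus))
print(EmotionCombiner().respond(profile, "what about tonight?", "Rain is expected."))
```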
The semantic recognition module 20 includes:
a corpus phrase generation unit 21, configured to process the character corpus through the first neural network model and output corpus phrases;
a part-of-speech tagging unit 22, configured to tag the parts of speech of the words in the corpus phrases;
a determined character generation unit 23, configured to perform statistics on the corpus phrases according to the part-of-speech tags to form determined characters and to form a determined character list;
a character corpus phrase generation unit 24, configured to classify the corpus phrases according to the determined character to form the determined character's character corpus phrases, and to obtain leading-dialogue keywords from the determined character's character corpus phrases;
an emotional corpus phrase generation unit 25, configured to process the determined character's character corpus phrases through the second neural network model, forming from them the determined character's emotional corpus phrases classified by emotional feature, thereby forming the determined character's sentence emotional features;
a personality feature generation unit 26, configured to perform emotion frequency statistics on the determined character's emotional corpus phrases to form the determined character's main personality features;
a proprietary keyword list generation unit 27, configured to combine the keyword (or word) list in the existing interaction information response with the determined character's leading-dialogue keywords to form the determined character's proprietary keyword list;
an emotional language structure generation unit 28, configured to perform statistics on the determined character's character corpus phrases by natural language structure to form emotional language structures corresponding to the determined character's sentence emotional features; and
a proper noun generation unit 29, configured to perform frequency statistics on the proper nouns in the corpus phrases and their contextual associations to form an associated-noun expression corpus for the proper nouns.
The emotion combination module 30 includes:
an emotion information extraction unit 31, configured to receive the text vocabulary or emotion control information of the interaction information input;
an emotional feature recognition unit 32, configured to extract the text vocabulary of the interaction information input and extract the determined character's emotional feature by matching it against the determined character's emotional corpus phrases;
an emotional feature control unit 33, configured to extract the emotion control information of the interaction information input and match it to the determined character and the determined character's emotional feature;
a proper noun association unit 34, configured to extract the text vocabulary of the interaction information input and extract the associated-noun expression corpus of the determined character's proper nouns by matching it against the proper nouns in the corpus phrases; and
a response information emotion generation unit 35, configured to obtain the determined character's emotional language structure from the determined character's emotional feature and combine the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
In an embodiment of the present invention, the emotion combination module 30 further includes a second response information emotion generation unit, configured to obtain the determined character's emotional language structure from the determined character's emotional feature and combine the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
In an embodiment of the present invention, the second response information emotion generation unit may replace the response information emotion generation unit 35.
In an embodiment of the present invention, the second response information emotion generation unit is arranged in parallel with the response information emotion generation unit 35.
For the specific implementation and beneficial effects of the virtual-character-based artificial intelligence interaction device in the embodiment of the present invention, reference may be made to the virtual-character-based artificial intelligence interaction method, which is not repeated here.
The virtual-character-based artificial intelligence interaction device of the embodiment of the present invention includes:
a memory, configured to store program code for the processing procedure of the virtual-character-based artificial intelligence interaction method of the foregoing embodiments; and
a processor, configured to execute the program code for the processing procedure of the virtual-character-based artificial intelligence interaction method of the foregoing embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as standalone products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Industrial applicability
The virtual-character-based artificial intelligence interaction method and device of the embodiments of the present invention create an artificial intelligence assistant with a character's personality through natural semantic recognition, greatly reducing manual involvement and the development workload while producing an artificial intelligence assistant with more natural emotional effects. They can be widely applied in production and everyday environments.

Claims (23)

  1. A virtual-character-based artificial intelligence interaction method, comprising:
    obtaining a character's corpus resources from a corpus and vectorizing them to form the character's character corpus; and
    performing natural language semantic recognition on the character corpus through neural network models to form a determined character, the determined character's sentence emotional features, and the determined character's personality features.
  2. The virtual-character-based artificial intelligence interaction method according to claim 1, wherein forming the determined character comprises:
    processing the character corpus through a first neural network model and outputting corpus phrases;
    tagging the parts of speech of the words in the corpus phrases; and
    performing statistics on the corpus phrases according to the part-of-speech tags to form the determined character.
  3. The virtual-character-based artificial intelligence interaction method according to claim 2, wherein forming the determined character's sentence emotional features comprises:
    classifying the corpus phrases according to the determined character to form the determined character's character corpus phrases, and obtaining leading-dialogue keywords from the determined character's character corpus phrases; and
    combining the keyword list in the existing interaction information response with the determined character's leading-dialogue keywords to form the determined character's proprietary keyword list.
  4. The virtual-character-based artificial intelligence interaction method according to claim 3, wherein forming the determined character's sentence emotional features comprises:
    processing the determined character's character corpus phrases through a second neural network model, forming from the character corpus phrases the determined character's emotional corpus phrases classified by emotional feature, thereby forming the determined character's sentence emotional features; and
    performing statistics on the determined character's character corpus phrases by natural language structure to form emotional language structures corresponding to the determined character's sentence emotional features.
  5. The virtual-character-based artificial intelligence interaction method according to claim 4, wherein forming the determined character's personality features comprises:
    performing emotion frequency statistics on the determined character's emotional corpus phrases to form the determined character's personality features.
  6. The virtual-character-based artificial intelligence interaction method according to claim 2, wherein forming the determined character's sentence emotional features comprises:
    performing frequency statistics on the proper nouns in the corpus phrases and their contextual associations to form an associated-noun expression corpus for the proper nouns.
  7. The virtual-character-based artificial intelligence interaction method according to any one of claims 1 to 6, further comprising:
    combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input to form an emotional interaction information output.
  8. The virtual-character-based artificial intelligence interaction method according to claim 7, wherein combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input comprises:
    receiving the text vocabulary or emotion control information of the interaction information input;
    extracting the text vocabulary of the interaction information input, and extracting the determined character's emotional feature by matching it against the determined character's emotional corpus phrases; and
    obtaining the determined character's emotional language structure from the determined character's emotional feature, and combining the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
  9. The virtual-character-based artificial intelligence interaction method according to claim 7, wherein combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input comprises:
    receiving the text vocabulary or emotion control information of the interaction information input;
    extracting the text vocabulary of the interaction information input, and extracting the determined character's emotional feature by matching it against the determined character's emotional corpus phrases;
    extracting the text vocabulary of the interaction information input, and extracting the associated-noun expression corpus of the determined character's proper nouns by matching it against the proper nouns in the corpus phrases; and
    obtaining the determined character's emotional language structure from the determined character's emotional feature, and combining the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
  10. The virtual-character-based artificial intelligence interaction method according to claim 7, wherein combining the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input comprises:
    receiving the text vocabulary or emotion control information of the interaction information input;
    extracting the emotion control information of the interaction information input, and matching it to the determined character and the determined character's emotional feature; and
    obtaining the determined character's emotional language structure from the determined character's emotional feature, and combining the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
  11. The virtual-character-based artificial intelligence interaction method according to any one of claims 8 to 10, wherein the step of obtaining the determined character's emotional language structure from the determined character's emotional feature and combining the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure is replaced with:
    obtaining the determined character's emotional language structure from the determined character's emotional feature, and combining the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
  12. A virtual-character-based artificial intelligence interaction device, comprising:
    a corpus phrase generation module, configured to obtain a character's corpus resources from a corpus and vectorize them to form the character's character corpus; and
    a semantic recognition module, configured to perform natural language semantic recognition on the character corpus through neural network models to form a determined character, the determined character's sentence emotional features, and the determined character's personality features.
  13. The virtual-character-based artificial intelligence interaction device according to claim 12, wherein the semantic recognition module comprises:
    a corpus phrase generation unit, configured to process the character corpus through a first neural network model and output corpus phrases;
    a part-of-speech tagging unit, configured to tag the parts of speech of the words in the corpus phrases; and
    a determined character generation unit, configured to perform statistics on the corpus phrases according to the part-of-speech tags to form the determined character.
  14. The virtual-character-based artificial intelligence interaction device according to claim 13, wherein the semantic recognition module further comprises:
    a character corpus phrase generation unit, configured to classify the corpus phrases according to the determined character to form the determined character's character corpus phrases, and to obtain leading-dialogue keywords from the determined character's character corpus phrases; and
    a proprietary keyword list generation unit, configured to combine the keyword list in the existing interaction information response with the determined character's leading-dialogue keywords to form the determined character's proprietary keyword list.
  15. The virtual-character-based artificial intelligence interaction device according to claim 14, wherein the semantic recognition module further comprises:
    an emotional corpus phrase generation unit, configured to process the determined character's character corpus phrases through a second neural network model, forming from the character corpus phrases the determined character's emotional corpus phrases classified by emotional feature, thereby forming the determined character's sentence emotional features; and
    an emotional language structure generation unit, configured to perform statistics on the determined character's character corpus phrases by natural language structure to form emotional language structures corresponding to the determined character's sentence emotional features.
  16. The virtual-character-based artificial intelligence interaction device according to claim 15, wherein the semantic recognition module further comprises:
    a personality feature generation unit, configured to perform emotion frequency statistics on the determined character's emotional corpus phrases to form the determined character's personality features.
  17. The virtual-character-based artificial intelligence interaction device according to claim 13, wherein the semantic recognition module further comprises:
    a proper noun generation unit, configured to perform frequency statistics on the proper nouns in the corpus phrases and their contextual associations to form an associated-noun expression corpus for the proper nouns.
  18. The virtual-character-based artificial intelligence interaction device according to any one of claims 12 to 17, further comprising:
    an emotion combination module, configured to combine the determined character's sentence emotional features with the interaction information response through emotion judgment on the interaction information input to form an emotional interaction information output.
  19. The virtual-character-based artificial intelligence interaction device according to claim 18, wherein the emotion combination module further comprises:
    an emotion information extraction unit, configured to receive the text vocabulary or emotion control information of the interaction information input;
    an emotional feature recognition unit, configured to extract the text vocabulary of the interaction information input and extract the determined character's emotional feature by matching it against the determined character's emotional corpus phrases; and
    a response information emotion generation unit, configured to obtain the determined character's emotional language structure from the determined character's emotional feature and combine the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
  20. The virtual-character-based artificial intelligence interaction device according to claim 18, wherein the emotion combination module further comprises:
    an emotion information extraction unit, configured to receive the text vocabulary or emotion control information of the interaction information input;
    an emotional feature recognition unit, configured to extract the text vocabulary of the interaction information input and extract the determined character's emotional feature by matching it against the determined character's emotional corpus phrases;
    a proper noun association unit, configured to extract the text vocabulary of the interaction information input and extract the associated-noun expression corpus of the determined character's proper nouns by matching it against the proper nouns in the corpus phrases; and
    a response information emotion generation unit, configured to obtain the determined character's emotional language structure from the determined character's emotional feature and combine the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
  21. The virtual-character-based artificial intelligence interaction device according to claim 18, wherein the emotion combination module further comprises:
    an emotion information extraction unit, configured to receive the text vocabulary or emotion control information of the interaction information input;
    an emotional feature control unit, configured to extract the emotion control information of the interaction information input and match it to the determined character and the determined character's emotional feature; and
    a response information emotion generation unit, configured to obtain the determined character's emotional language structure from the determined character's emotional feature and combine the leading-dialogue keywords in the determined character's proprietary keyword list with the standard response information in the existing interaction information response according to the emotional language structure.
  22. The virtual-character-based artificial intelligence interaction device according to any one of claims 19 to 21, wherein the response information emotion generation unit is replaced with:
    a second response information emotion generation unit, configured to obtain the determined character's emotional language structure from the determined character's emotional feature and combine the associated-noun expression corpus of the determined character's proper nouns with the standard response information in the existing interaction information response according to the emotional language structure.
  23. A virtual-character-based artificial intelligence interaction device, comprising:
    a memory, configured to store program code for the processing procedure of the virtual-character-based artificial intelligence interaction method according to any one of claims 1 to 11; and
    a processor, configured to execute the program code.
PCT/CN2018/084879 2017-06-26 2018-04-27 Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device WO2019001127A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710498738.9A CN107480122B (en) 2017-06-26 2017-06-26 Artificial intelligence interaction method and artificial intelligence interaction device
CN201710498738.9 2017-06-26

Publications (1)

Publication Number Publication Date
WO2019001127A1 true WO2019001127A1 (en) 2019-01-03

Family

ID=60594960

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084879 WO2019001127A1 (en) 2017-06-26 2018-04-27 Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device

Country Status (2)

Country Link
CN (1) CN107480122B (en)
WO (1) WO2019001127A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639227A (en) * 2020-05-26 2020-09-08 广东小天才科技有限公司 Spoken language control method of virtual character, electronic device and storage medium
CN112182173A (en) * 2020-09-23 2021-01-05 支付宝(杭州)信息技术有限公司 Human-computer interaction method and device based on virtual life and electronic equipment
US11590432B2 (en) 2020-09-30 2023-02-28 Universal City Studios Llc Interactive display with special effects assembly
CN116483983A (en) * 2023-06-25 2023-07-25 启智元慧(杭州)科技有限公司 Method and related equipment for generating emotion change quantity of virtual character

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480122B (en) * 2017-06-26 2020-05-08 迈吉客科技(北京)有限公司 Artificial intelligence interaction method and artificial intelligence interaction device
CN108470188B (en) * 2018-02-26 2022-04-22 北京物灵智能科技有限公司 Interaction method based on image analysis and electronic equipment
CN108804411B (en) * 2018-04-09 2019-10-29 平安科技(深圳)有限公司 A kind of semantic role analysis method, computer readable storage medium and terminal device
CN110413984A (en) * 2018-04-27 2019-11-05 北京海马轻帆娱乐科技有限公司 A kind of Emotion identification method and device
CN109377324A (en) * 2018-12-05 2019-02-22 河北工业大学 A kind of technical need docking business model system based on artificial intelligence
CN109584858A (en) * 2019-01-08 2019-04-05 武汉西山艺创文化有限公司 A kind of virtual dubbing method and its device based on AI artificial intelligence
CN112035714A (en) * 2019-06-03 2020-12-04 鲨鱼快游网络技术(北京)有限公司 Man-machine conversation method based on character companions
CN110689078A (en) * 2019-09-29 2020-01-14 浙江连信科技有限公司 Man-machine interaction method and device based on personality classification model and computer equipment
CN112487184A (en) * 2020-11-26 2021-03-12 北京智源人工智能研究院 User character judging method and device, memory and electronic equipment
CN112989822B (en) * 2021-04-16 2021-08-27 北京世纪好未来教育科技有限公司 Method, device, electronic equipment and storage medium for recognizing sentence categories in conversation
CN114969282B (en) * 2022-05-05 2024-02-06 迈吉客科技(北京)有限公司 Intelligent interaction method based on rich media knowledge graph multi-modal emotion analysis model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297789A (en) * 2016-08-19 2017-01-04 北京光年无限科技有限公司 The personalized interaction method of intelligent robot and interactive system
CN106294726A (en) * 2016-08-09 2017-01-04 北京光年无限科技有限公司 Based on the processing method and processing device that robot role is mutual
CN106855853A (en) * 2016-12-28 2017-06-16 成都数联铭品科技有限公司 Entity relation extraction system based on deep neural network
CN106874472A (en) * 2017-02-16 2017-06-20 深圳追科技有限公司 A kind of anthropomorphic robot's client service method
CN107480122A (en) * 2017-06-26 2017-12-15 迈吉客科技(北京)有限公司 A kind of artificial intelligence exchange method and artificial intelligence interactive device


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639227A (en) * 2020-05-26 2020-09-08 广东小天才科技有限公司 Spoken language control method of virtual character, electronic device and storage medium
CN111639227B (en) * 2020-05-26 2023-09-22 广东小天才科技有限公司 Spoken language control method of virtual character, electronic equipment and storage medium
CN112182173A (en) * 2020-09-23 2021-01-05 支付宝(杭州)信息技术有限公司 Human-computer interaction method and device based on virtual life and electronic equipment
US11590432B2 (en) 2020-09-30 2023-02-28 Universal City Studios Llc Interactive display with special effects assembly
CN116483983A (en) * 2023-06-25 2023-07-25 启智元慧(杭州)科技有限公司 Method and related equipment for generating emotion change quantity of virtual character
CN116483983B (en) * 2023-06-25 2023-08-29 启智元慧(杭州)科技有限公司 Method and related equipment for generating emotion change quantity of virtual character

Also Published As

Publication number Publication date
CN107480122A (en) 2017-12-15
CN107480122B (en) 2020-05-08

Similar Documents

Publication Publication Date Title
WO2019001127A1 (en) Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device
CN111368609A (en) Voice interaction method based on emotion engine technology, intelligent terminal and storage medium
Becker et al. Avaya: Sentiment analysis on twitter with self-training and polarity lexicon expansion
Amin et al. A survey on approaches to computational humor generation
KR101627428B1 (en) Method for establishing syntactic analysis model using deep learning and apparatus for perforing the method
Rahimi et al. An overview on extractive text summarization
Yang et al. Sentiment analysis of Weibo comment texts based on extended vocabulary and convolutional neural network
Kavitha et al. Chatbot for healthcare system using Artificial Intelligence
US9653078B2 (en) Response generation method, response generation apparatus, and response generation program
Atmadja et al. Comparison on the rule based method and statistical based method on emotion classification for Indonesian Twitter text
Hirat et al. A survey on emotion detection techniques using text in blogposts
CN114528919A (en) Natural language processing method and device and computer equipment
Chang et al. A METHOD OF FINE-GRAINED SHORT TEXT SENTIMENT ANALYSIS BASED ON MACHINE LEARNING.
CN111339772B (en) Russian text emotion analysis method, electronic device and storage medium
Ashok et al. Sarcasm detection using genetic optimization on LSTM with CNN
Pate et al. Grammar induction from (lots of) words alone
CN106776557B (en) Emotional state memory identification method and device of emotional robot
Bouchekif et al. EPITA-ADAPT at SemEval-2019 Task 3: Detecting emotions in textual conversations using deep learning models combination
Sayeedunnisa et al. Sarcasm detection: a contemporary research affirmation of recent literature
Moy et al. Hate speech detection in English and non-English languages: A review of techniques and challenges
Dayalani et al. Emoticon-based unsupervised sentiment classifier for polarity analysis in tweets
Ptaszynski et al. Emotive or non-emotive: that is the question
TW202119259A (en) Message feedback method for conversational system which greatly increases its richness to have more human nature and can be applied to the demands of various fields
Malandrakis et al. Affective language model adaptation via corpus selection
Chenal et al. Predicting sentential semantic compatibility for aggregation in text-to-text generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18824351

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 18/02/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18824351

Country of ref document: EP

Kind code of ref document: A1