CN108536802A - Interaction method and device based on children's emotion - Google Patents
Interaction method and device based on children's emotion
- Publication number
- CN108536802A CN108536802A CN201810290987.3A CN201810290987A CN108536802A CN 108536802 A CN108536802 A CN 108536802A CN 201810290987 A CN201810290987 A CN 201810290987A CN 108536802 A CN108536802 A CN 108536802A
- Authority
- CN
- China
- Prior art keywords
- emotional characteristics
- children
- emotion
- child user
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Machine Translation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides an interaction method, apparatus, electronic device, and readable storage medium based on children's emotion. Voice interaction information of a child user is received; interaction content is determined from the voice interaction information, and the child user's speech features are determined from the same information. The child user's emotional characteristics are then determined according to the interaction content; a degree value of the emotional characteristics is determined according to the speech features; and a response strategy is determined according to the emotional characteristics and their degree value. The method accurately grasps the child's emotion, improves the coherence and fluency of exchanges with the child, and ensures benign guidance of the child's emotional state.
Description
Technical field
The present invention relates to emotion recognition technology, and in particular to an interaction method and device based on children's emotion, and to an electronic device and readable storage medium applying the method.
Background technology
With the rise of artificial intelligence, human-computer interaction devices are applied ever more widely in daily life.

Take a children's story machine as an example. The interaction mode generally adopted by prior-art story machines is to return results matching the literal intent of the child's utterance, in a simple question-and-answer exchange. For example, the story machine asks the child "What story would you like to hear?", the child answers "I want to hear Little Red Riding Hood", and the story machine looks up the story resource matching the keyword "Little Red Riding Hood" in its database and plays it for the child.

This interaction mode between child and story machine has two drawbacks. On the one hand, because a child's powers of expression are limited, the child may not state the desired story title accurately, so the story machine selects and plays a story essentially at random, which may not be the one the child wanted to hear. On the other hand, children tend to treat a story machine that can talk as a friend and share their feelings with it; yet the story machine always answers in relatively fixed language, cannot give an answer suited to the child's emotion, and may even give irrelevant answers that upset the child.

The interaction mode of existing children's story machines is therefore mechanical and stiff, does not understand children well, and is not conducive to cultivating the communication abilities of children in the formative period of language development.
Summary of the invention
To solve the above problems in the prior art, the present invention provides an interaction method, apparatus, electronic device, and readable storage medium based on children's emotion. By analyzing both the content and the speech features of a child user's voice interaction information, the child's emotion is determined accurately and a corresponding response strategy is given according to that emotion. This accurately grasps the child's emotion, improves the coherence and fluency of exchanges with the child, and ensures benign guidance of the child's emotional state.
In a first aspect, an embodiment of the present invention provides an interaction method based on children's emotion, including:

receiving voice interaction information of a child user;

determining interaction content from the voice interaction information;

determining speech features of the child user from the voice interaction information;

determining emotional characteristics of the child user according to the interaction content;

determining a degree value of the emotional characteristics according to the speech features; and

determining a response strategy according to the emotional characteristics and the degree value of the emotional characteristics.
Optionally, determining the interaction content from the voice interaction information includes:

converting the voice interaction information into text information; and

performing semantic analysis on the text information and extracting topic words and/or emotion words from it, to obtain the interaction content.

Correspondingly, determining the emotional characteristics of the child user according to the interaction content includes:

determining the emotional characteristics of the child user from the topic words and/or emotion words on the basis of a preset recognition rule.
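The extraction step above can be sketched as follows — a minimal hypothetical illustration; the word lists and the whitespace tokenizer are invented stand-ins for a real word segmenter and the children's dictionary described below:

```python
# Hypothetical sketch of extracting topic words and emotion words from the
# transcribed text. The lexicons below are invented examples; a real system
# would use a proper segmenter and the full children's dictionary.
TOPIC_WORDS = {"story", "song", "princess", "kitten"}
EMOTION_WORDS = {"happy", "sad", "angry", "scared", "hate", "like"}

def extract_interaction_content(text):
    """Return (topic_words, emotion_words) found in the text, in order."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    topics = [t for t in tokens if t in TOPIC_WORDS]
    emotions = [t for t in tokens if t in EMOTION_WORDS]
    return topics, emotions
```

For example, `extract_interaction_content("I am happy, tell me a story")` would yield the topic word "story" and the emotion word "happy" as the interaction content.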
Optionally, the method further includes:

establishing a children's dictionary based on children's language;

where the children's language includes at least reduplicated-word expressions used by children and anthropomorphized expressions for animals, and the children's dictionary includes the gloss of each children's-language item, its gloss in combination with context, and its emotion tag, the emotion tag identifying a positive, negative, or neutral emotion category.
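One possible in-memory shape for such a dictionary — all entries here are invented for illustration, not taken from the patent:

```python
# Illustrative children's-dictionary entries: each maps a children's-language
# item to its gloss, a context-dependent gloss, and an emotion tag
# (positive / negative / neutral). All entries are invented examples.
CHILD_LEXICON = {
    "meow-meow": {"gloss": "cat", "context_gloss": {"zoo": "tiger cub"},
                  "emotion": "neutral"},
    "yummy-yummy": {"gloss": "tasty food", "context_gloss": {},
                    "emotion": "positive"},
    "owie": {"gloss": "pain, injury", "context_gloss": {},
             "emotion": "negative"},
}

def emotion_tag(word):
    """Look up a word's emotion tag; unknown words default to neutral."""
    entry = CHILD_LEXICON.get(word)
    return entry["emotion"] if entry else "neutral"
```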
Optionally, determining the emotional characteristics of the child user from the topic words and/or emotion words on the basis of the preset recognition rule includes:

determining, in the children's dictionary, the emotion tag corresponding to each topic word and/or each emotion word;

if the determined emotion tags include a positive emotion category and no negative emotion category, determining that the emotional characteristics of the child user are positive;

if the determined emotion tags include a negative emotion category and no positive emotion category, determining that the emotional characteristics of the child user are negative;

if the determined emotion tags include only the neutral emotion category, determining that the emotional characteristics of the child user are neutral; and

if the determined emotion tags include both positive and negative emotion categories, determining the emotional characteristics of the child user according to the word order of the topic words and/or emotion words in the voice interaction information and the contextual meaning.
Optionally, the method further includes:

obtaining labeled children's emotion data and training a children's emotion recognition model.

Correspondingly, determining the emotional characteristics of the child user from the topic words and/or emotion words on the basis of the preset recognition rule includes:

inputting the topic words and/or emotion words into the children's emotion recognition model, which identifies the emotional characteristics of the child user.
Optionally, determining the speech features of the child user from the voice interaction information includes:

determining, from the voice interaction information, at least one of the following speech features: speech intensity, speech rate, and intonation.
Optionally, determining the degree value of the emotional characteristics according to the speech features includes:

taking the voice interaction information as a whole as the statistical object, determining the average value of the speech features of the voice interaction information; and

determining the degree value of the emotional characteristics according to the average value of the speech features.
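Treating the whole utterance as the statistical object might look like the following sketch; the feature names and per-frame representation are assumptions:

```python
def feature_averages(frames):
    """Average each speech feature over all analysis frames of the
    utterance, taken as a single statistical object."""
    keys = ("intensity", "rate", "pitch")
    return {k: sum(f[k] for f in frames) / len(frames) for k in keys}
```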
Optionally, determining the degree value of the emotional characteristics according to the speech features includes:

determining the speech features of each topic word and/or emotion word in the voice interaction information;

weighting the speech features of the voice interaction information according to weight values assigned to different parts of speech, to obtain a weighted average of the speech features of the voice interaction information; and

determining the degree value of the emotional characteristics according to the weighted average of the speech features.
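A sketch of the part-of-speech-weighted variant; the weight table is an invented assumption, not a value from the patent:

```python
POS_WEIGHTS = {"emotion": 2.0, "topic": 1.0, "other": 0.5}  # assumed weights

def weighted_feature_average(words):
    """words: (pos_class, feature_value) pairs, one per word; return the
    weighted mean of the feature, weighting by part-of-speech class."""
    total = sum(POS_WEIGHTS[p] for p, _ in words)
    return sum(POS_WEIGHTS[p] * v for p, v in words) / total
```

With these assumed weights, an emotion word's speech feature contributes four times as much as a filler word's, so the weighted average tracks the emotionally salient parts of the utterance.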
Optionally, the response strategy includes a heart-to-heart chat response mode and/or an audio resource playback mode, and determining the response strategy according to the emotional characteristics and their degree value includes:

if the emotional characteristics are negative and their degree value exceeds a preset threshold, determining that the response strategy is the heart-to-heart chat response mode, or determining that the response strategy is to respond first in the heart-to-heart chat response mode and then in the audio resource playback mode.
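This selection rule can be sketched directly; the threshold value and the strategy labels are assumptions for illustration:

```python
NEGATIVE_THRESHOLD = 0.6  # assumed preset threshold

def choose_response(emotion, degree):
    """Strongly negative emotion triggers a comforting chat first, then
    audio playback; otherwise play the requested audio resource."""
    if emotion == "negative" and degree > NEGATIVE_THRESHOLD:
        return ("chat", "audio")   # chat response mode first, then audio
    return ("audio",)
```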
Optionally, the method further includes:

determining a user portrait of the child user, the user portrait including at least one of the following features: attribute information of the child user, the child user's interaction history, the child user's habitual expressions, the child user's daily routine, the audio resources the child user likes, and the association between geographical location and the child user; and

optimizing the response strategy according to the determined user portrait of the child user.
Optionally, the method further includes:

obtaining the time information and/or location information at which the voice interaction information is received;

determining the scene the child user is currently in, according to the time information and/or location information and based on the user portrait; and

optimizing the response strategy according to the current scene.
Optionally, the method further includes:

generating an emotion analysis report for the child user at a predetermined period.
In a second aspect, an embodiment of the present invention provides an interaction device based on children's emotion, including:

a receiving module, configured to receive voice interaction information of a child user; and

a determining module, configured to determine interaction content from the voice interaction information; determine speech features of the child user from the voice interaction information; determine emotional characteristics of the child user according to the interaction content; determine a degree value of the emotional characteristics according to the speech features; and determine a response strategy according to the emotional characteristics and the degree value of the emotional characteristics.
Optionally, the determining module includes:

a conversion submodule, configured to convert the voice interaction information into text information;

an analysis submodule, configured to perform semantic analysis on the text information;

an extraction submodule, configured to extract topic words and/or emotion words from the text information, to obtain the interaction content; and

an emotional-characteristics determining submodule, configured to determine the emotional characteristics of the child user from the topic words and/or emotion words on the basis of a preset recognition rule.
Optionally, the device further includes:

a dictionary module, configured to establish a children's dictionary based on children's language;

where the children's language includes at least reduplicated-word expressions used by children and anthropomorphized expressions for animals, and the children's dictionary includes the gloss of each children's-language item, its gloss in combination with context, and its emotion tag, the emotion tag identifying a positive, negative, or neutral emotion category.
Optionally, the emotional-characteristics determining submodule is specifically configured to: determine, in the children's dictionary, the emotion tag corresponding to each topic word and/or each emotion word;

when the determined emotion tags include a positive emotion category and no negative emotion category, determine that the emotional characteristics of the child user are positive;

when the determined emotion tags include a negative emotion category and no positive emotion category, determine that the emotional characteristics of the child user are negative;

when the determined emotion tags include only the neutral emotion category, determine that the emotional characteristics of the child user are neutral; and

when the determined emotion tags include both positive and negative emotion categories, determine the emotional characteristics of the child user according to the word order of the topic words and/or emotion words in the voice interaction information and the contextual meaning.
Optionally, the device further includes:

a recognition model module, configured to obtain labeled children's emotion data and train a children's emotion recognition model.

Correspondingly, the emotional-characteristics determining submodule is specifically configured to input the topic words and/or emotion words into the children's emotion recognition model of the recognition model module, which identifies the emotional characteristics of the child user.
Optionally, the determining module includes:

a speech-feature determining submodule, configured to determine, from the voice interaction information, at least one of the following speech features: speech intensity, speech rate, and intonation.
Optionally, the determining module includes:

a first degree-value determining submodule, configured to take the voice interaction information as a whole as the statistical object, determine the average value of the speech features of the voice interaction information, and determine the degree value of the emotional characteristics according to that average value.
Optionally, the determining module includes:

a second degree-value determining submodule, configured to determine the speech features of each topic word and/or emotion word in the voice interaction information; weight the speech features of the voice interaction information according to weight values assigned to different parts of speech, to obtain a weighted average of the speech features of the voice interaction information; and determine the degree value of the emotional characteristics according to that weighted average.
Optionally, the response strategy includes a heart-to-heart chat response mode and/or an audio resource playback mode, and the determining module includes:

a first determining submodule, configured to, when the emotional characteristics are negative and their degree value exceeds a preset threshold, determine that the response strategy is the heart-to-heart chat response mode, or determine that the response strategy is to respond first in the heart-to-heart chat response mode and then in the audio resource playback mode.
Optionally, the determining module further includes:

a user-portrait determining submodule, configured to determine a user portrait of the child user, the user portrait including at least one of the following features: attribute information of the child user, the child user's interaction history, the child user's habitual expressions, the child user's daily routine, the audio resources the child user likes, and the association between geographical location and the child user; and

an optimization submodule, configured to optimize the response strategy according to the user portrait of the child user determined by the user-portrait determining submodule.
Optionally, the device further includes:

an acquisition module, configured to obtain the time information and/or location information at which the voice interaction information is received;

and the determining module further includes:

a scene determining submodule, configured to determine the scene the child user is currently in, according to the time information and/or location information and based on the user portrait;

the optimization submodule being further configured to optimize the response strategy according to the current scene.
Optionally, the device further includes:

a generating module, configured to generate an emotion analysis report for the child user at a predetermined period.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a program, where the program is stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the method of the first aspect.

In a fourth aspect, an embodiment of the present invention provides an electronic-device-readable storage medium storing a program that causes an electronic device to execute the method of the first aspect.
With the interaction method, apparatus, electronic device, and readable storage medium based on children's emotion provided by the present invention, voice interaction information of a child user is received; interaction content is determined from the voice interaction information; the child user's speech features are determined from the voice interaction information; the child user's emotional characteristics are then determined according to the interaction content; a degree value of the emotional characteristics is determined according to the speech features; and a response strategy is determined according to the emotional characteristics and their degree value. This accurately grasps the child's emotion, improves the coherence and fluency of exchanges with the child, and ensures benign guidance of the child's emotional state.
Description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a flowchart of the interaction method based on children's emotion according to an exemplary embodiment of the present invention;

Fig. 2 is a flowchart of the interaction method based on children's emotion according to another exemplary embodiment of the present invention;

Fig. 3 is a structural schematic diagram of the interaction device based on children's emotion according to an exemplary embodiment of the present invention;

Fig. 4 is a structural schematic diagram of the interaction device based on children's emotion according to another exemplary embodiment of the present invention;

Fig. 5a is a structural schematic diagram of the electronic device according to an exemplary embodiment of the present invention;

Fig. 5b is a structural schematic diagram of the electronic device according to another exemplary embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", and the like (if present) in the description, claims, and drawings are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described here. Moreover, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to that process, method, product, or device.
It should be noted that the technical solution of the present invention is described in detail below with specific embodiments. The following specific embodiments can be combined with one another, and identical or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a flowchart of the interaction method based on children's emotion according to an exemplary embodiment of the present invention. As shown in Fig. 1, an embodiment of the present invention provides an interaction method based on children's emotion. The method can be executed by any device capable of performing it, and that device can be implemented in software and/or hardware. In this embodiment, the device executing the interaction method based on children's emotion can be integrated in the user's electronic device, or in a cloud server that exchanges network data with the user's electronic device. The electronic device includes, but is not limited to, children's interactive devices (story machines, smartwatches, interactive robots), smartphones, tablet computers, portable computers, desktop computers, and the like. The device receives the child's voice interaction information, analyzes and identifies it with the interaction method based on children's emotion of the present invention, and obtains a response strategy matching the child user's current emotion, so as to communicate smoothly with the child in a manner that better understands the child's emotion. The cloud server may include one or more servers performing functions such as computation, analysis, and storage. Taking a story machine as the device executing the method: the story machine receives the child's voice interaction information and sends it to the cloud server; after receiving it, the cloud server analyzes and identifies the information with the interaction method based on children's emotion of the present invention, obtains a response strategy matching the child user's current emotion, and feeds the response strategy back to the story machine over the network, so that the story machine communicates smoothly with the child in a manner that better understands the child's emotion. In the embodiments below, the executing entity is illustrated with a story machine as an example. The interaction method based on children's emotion of this embodiment includes:
Step 101: receive voice interaction information of a child user.

In this embodiment, an audio receiving component in the story machine, such as a microphone, receives the child user's voice interaction information, i.e., the voice information uttered by the child and sensed by the story machine. The voice interaction information can include command language the child addresses to the story machine, for example "I want to hear the story of Princess Sofia" or "let's have a chat"; it can also consist only of sounds the child makes, for example crying, laughter, or sighs such as "eh~~" or "hm, eh~~~".
Step 102: determine interaction content from the voice interaction information.

In this embodiment, the interaction content is the intent identified from the voice interaction information. For example, when a child user expresses the wish to hear the story of Princess Sofia, the child may not express it as precisely as an adult would; the expression may be interspersed with pauses, drawn-out sounds, reduplicated words, and other disturbing factors. The voice interaction information the child utters may be "I ... uh ... Sofia ... meow~~~~meow~~~~~ ... hear ...". The story machine then needs to determine the interaction content from this voice interaction information. The interaction content can be what can be expressed in text form, for example "I hear uh Sofia meow meow"; the story machine can also apply preset recognition rules, such as removing interjections, simplifying reduplicated words, and adjusting word order, to determine from the voice interaction information the interaction content "I hear Sofia, meow". The story machine can thus determine the interaction content from the voice interaction information and identify its meaning, so as to determine the child's intent. As for how the interaction content is obtained from the voice interaction information, prior-art techniques such as semantic recognition and word segmentation can be used; this embodiment does not specifically limit this.
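Two of the normalization rules mentioned above — removing interjections and collapsing reduplicated words — can be sketched as follows; the interjection list is an invented example:

```python
INTERJECTIONS = {"uh", "um", "er", "eh"}  # assumed filler-word list

def normalize_child_utterance(tokens):
    """Drop interjections and collapse immediate reduplications
    ('meow meow' -> 'meow'), per the simplification rules above."""
    out = []
    for tok in tokens:
        if tok in INTERJECTIONS:
            continue
        if out and out[-1] == tok:
            continue  # collapse the reduplicated word
        out.append(tok)
    return out
```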
Step 103: determine speech features of the child user from the voice interaction information.

In this embodiment, speech features are features that assess and describe sound. They may include sound intensity, reflecting the acoustic energy, and loudness, reflecting how strong the sound is perceived to be; for example, at a fixed frequency, the stronger the intensity, the greater the loudness. They may also include pitch, reflecting how high or low the auditory system perceives the sound frequency to be; the fundamental period, which can identify the speaker's gender; the signal-to-noise ratio; and other features. The feature dimensions included in the speech features can be determined by those skilled in the art according to recognition needs; this embodiment does not specifically limit them. Analyzing the speech features helps determine attribute information of the child user, for example whether the child is a boy or a girl; the vocal characteristics reflected by the speech features can also determine the child's approximate age range, providing predictive data for building the user portrait. In addition, when the same sentence is expressed with different loudness, the speech features can also reflect the speaker's emotional state. For example, if a child says "I want to hear the story of Princess Sofia" in a soft intonation, it can be determined that the child's emotion is calm and that the child truly intends to hear the story. If the child expresses the identical content "I want to hear the story of Princess Sofia" in a shouting, screaming intonation, it is likely that the child has just quarreled with someone and is very angry; although the child says they want to hear the story, the true situation may be that the child does not really want to hear a story, but is using the story machine as an outlet to vent emotion. Playing the story at that moment would be quite inappropriate; instead, other ways of communicating should be adopted according to the child's emotion at that moment, to soothe the child's anger. Therefore, by determining the child's speech features from the voice interaction information, the child's emotional state can be grasped more accurately.
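Two of the simplest such features — an intensity proxy and a speaking-rate estimate — might be computed as follows; the formulas are common signal-processing conventions, not taken from the patent:

```python
import math

def rms_intensity(samples):
    """Root-mean-square amplitude, a crude proxy for speech intensity."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def speaking_rate(word_count, duration_s):
    """Words per second, one possible speech-rate feature."""
    return word_count / duration_s
```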
Step 104: determine the emotional characteristics of the child user according to the interaction content.

In this embodiment, speech is usually the most direct expression of emotion. Emotions can be divided into broad classes, roughly summarized as joy, anger, sorrow, and happiness; by recognizing the words used, the speaker's emotion can be grasped relatively accurately. For example, when expressing the emotion of "happiness", a child user may say "I am very happy today"; the interaction content "happy" determined in step 102 can then preliminarily determine that the child's emotional characteristics tend toward happiness. When expressing the emotion of "anger", the child may say "I hate you"; the interaction content "hate" determined in step 102 can then preliminarily determine that the child's emotional characteristics tend toward unhappiness or anger. The emotional characteristics can be classified into finer categories derived from joy, anger, sorrow, and happiness; for example, happiness can be subdivided into gladness and liking, and anger into rage and hatred. In addition, the emotional characteristics can include fear and dread. Subdividing the emotional characteristics into classes or subclasses improves the accuracy of emotion recognition.
Step 105: determine the degree value of the emotional characteristics according to the speech features.
In this embodiment, as described in step 103, features such as pitch, tone, and speech rate can reflect the speaker's mood to a certain extent. In example a, a mother gently tells her child, "Eat up, dear~": this reflects, to some extent, that time is not very pressing; she may only want the child to be more attentive while eating, as a habit of communication. In example b, however, the mother shouts, "Hurry! Up! And! Eat!!!": what this likely reflects is that the mother's mood is anger, or that time really is very short. Therefore, based on the speech features, a degree value can be assessed for the emotional characteristics determined in step 104, such as the characteristic "anxious." For the situation of example a above, the degree value of the mood "anxious" is lower than the degree value in example b; that is, example a reflects the emotional characteristics "mildly anxious" and "time not pressing," while example b reflects the emotional characteristics "very anxious" and "very tense."
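The degree assessment described above could be sketched as scoring how far each speech feature exceeds a neutral baseline. The feature set, baseline values, and scoring formula here are assumptions made for illustration, not the patent's method:

```python
# Hedged sketch of step 105: map speech features to a 0..1 degree value.
def degree_value(volume_db, rate_wps, pitch_hz,
                 baseline=(55.0, 2.0, 220.0)):
    """Average, over features, of how far each exceeds its baseline."""
    features = (volume_db, rate_wps, pitch_hz)
    ratios = [max(0.0, f / b - 1.0) for f, b in zip(features, baseline)]
    return min(1.0, sum(ratios) / len(ratios))

calm = degree_value(50, 1.8, 210)    # example a: gentle reminder
shout = degree_value(80, 3.5, 320)   # example b: shouting
assert shout > calm
```

Under these assumed baselines, the gentle utterance scores 0 while the shouted one scores well above it, matching the "mildly anxious" vs. "very anxious" distinction in the text.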
Step 106: determine the response strategy according to the emotional characteristics and their degree value.
In this embodiment, a response strategy that matches the interaction content and suits the child's mood is determined according to the emotional characteristics obtained from the interaction content and the degree value of those characteristics obtained from the speech features. For example, suppose the voice interaction information received from the child user is the child crying and saying, "I'm unhappy today, boo-hoo~": the interaction content determined from it is "I'm unhappy today, boo-hoo"; from this interaction content, the child's emotional characteristics are determined to be moods such as "unhappy" and "sad." Based on the crying expressed in the interaction information, the child's speech features are determined to be features identifiable as "sobbing" (e.g., the intensity of the sobbing sounds and their duration); according to these "sobbing" speech features, the degree value of the emotional characteristics "unhappy" and "sad" is determined to be "moderately sad" or "extremely unhappy." The response strategy adopted by the story machine can then be to talk with the child caringly, asking the child, for example, "Why are you crying?", or to play the child's favorite story, so as to relieve the sad mood in a targeted way.
The interaction method based on user emotion provided by this embodiment of the present invention receives the voice interaction information of a child user; determines the interaction content from the voice interaction information; determines the speech features of the child user from the voice interaction information; then determines the emotional characteristics of the child user according to the interaction content; determines the degree value of the emotional characteristics according to the speech features; and determines the response strategy according to the emotional characteristics and their degree value. This achieves an accurate grasp of the child's mood, improves the harmony and smoothness of communication with the child, and ensures benign guidance of the child's emotions.
Fig. 2 is a flowchart of an interaction method based on user emotion according to another exemplary embodiment of the present invention. On the basis of the embodiment shown in Fig. 1, the interaction method based on user emotion of this embodiment specifically includes:
Step 201: receive the voice interaction information of the child user.
Step 202: convert the voice interaction information into text information.
In this embodiment, prior-art technology for converting speech with semantic information into words may be used to form the text information. Because text information generally occupies less storage space and fewer processor resources during computation, translating the voice interaction information into the corresponding text helps improve the processing efficiency of the mood determination process and helps ensure the accuracy of the mood analysis. For example, suppose the child's voice interaction information received is "I went to kindergarten today, ha~~~ha~~~~ha~~~ha~~~~": the converted text information can be "I went to kindergarten today ha ha." The sustained laughter "ha~~~ha~~~~ha~~~ha~~~~" is converted on the principle of retaining the language elements that can be accurately recognized: for example, the laughter may be recognized as the text of one or more "ha" tokens, in which case a corresponding number of "ha" tokens are retained in the text information.
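The laughter normalization just described might look like the following sketch, which collapses a sustained laugh into a bounded number of "ha" tokens while keeping the rest of the utterance. The regular expression and the cap of three repeats are illustrative assumptions:

```python
import re

# Sketch of step 202's laughter handling: keep recognizable "ha" tokens,
# drop the unrecognizable trailing noise. Cap and pattern are assumptions.
def normalize_laughter(text, max_repeats=3):
    return re.sub(r"(?:ha[\s~]*){2,}",
                  lambda m: "ha " * max_repeats,
                  text).strip()

print(normalize_laughter("I went to kindergarten today ha~~ ha~~ ha~~ ha~~"))
# -> "I went to kindergarten today ha ha ha"
```

For Chinese input the token would be the character 哈 rather than "ha", but the shape of the rule is the same.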
Step 203: perform semantic analysis on the text information, and extract topic words and/or emotion words from the text information to obtain the interaction content.
In this embodiment, elements with analytical value, such as topic words and emotion words, are extracted from the converted text information by performing semantic analysis on it. The semantic analysis may use prior-art word segmentation techniques, segmenting the text information at one or more of the following granularities: single Chinese character, single character, word, or phrase. Then, according to features such as part of speech or grammar within the segments, the topic words that can express the subject of the sentence and/or the emotion words that can express the speaker's sentiment orientation are extracted. Taking the example in step 202, the child says "I went to kindergarten today ha": the topic word extracted can be "kindergarten," and the emotion words can be modal particles such as "ha." Through the topic word "kindergarten," content related to the kindergarten can then be included in the response strategy; through modal particles such as "ha," it can be judged that the child's present mood is fairly happy, so cheerful content can be used in the response strategy.
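A toy version of this extraction step might segment the text and filter tokens by part of speech. The word lists and part-of-speech tags below are invented for illustration; a real system would use a proper segmenter, especially for Chinese text:

```python
# Sketch of step 203: split the text, then separate topic words from
# emotion words by assumed part of speech.
TOPIC_POS = {"noun"}
EMOTION_POS = {"particle", "adjective"}
LEXICON = {  # token -> assumed part of speech
    "kindergarten": "noun", "today": "noun", "went": "verb",
    "ha": "particle", "i": "pronoun",
}

def extract(text):
    tokens = text.lower().split()
    topics = [t for t in tokens if LEXICON.get(t) in TOPIC_POS]
    emotions = [t for t in tokens if LEXICON.get(t) in EMOTION_POS]
    return topics, emotions

print(extract("I went kindergarten today ha ha"))
# (['kindergarten', 'today'], ['ha', 'ha'])
```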
Step 204: determine the emotional characteristics of the child user according to the topic words and/or emotion words, based on a preset recognition rule.
In this embodiment, the preset recognition rule can be to look up, in a children's dictionary, the emotional characteristics corresponding to the topic words and/or emotion words. The so-called children's dictionary is established on the basis of children's language; that is, the children's dictionary is a dictionary formed for the speech habits of children's language.
Children's language may at least include: children's reduplicated expressions, personified expressions for animals, and so on. The children's dictionary can include: paraphrases of children's language, paraphrases of children's language combined with context, emotion labels of children's language, and so on. The emotion label of children's language can be used to identify the positive, negative, or neutral emotion class of the language the child expresses.
For example, children like reduplication, such as doubling "good" into "goody-good" or saying "eat the fruity-fruit." Besides the paraphrase of "good," the children's dictionary can therefore also contain a paraphrase of the doubled form "goody-good," for instance "very good, like very much." As for the so-called paraphrase of children's language combined with context: children's expressions are usually not as rigorous as adults' communication habits; commendatory and derogatory senses may be mixed up, and word order may be scrambled, so the meaning determined purely from the literal words obtained after segmentation may not be what the child intended. The dictionary can therefore paraphrase a word based on its context. For example, if the child says "Mommy sits on the fa-so," the word "fa-so" may have no paraphrase of its own in the children's dictionary; but from the "sits" preceding "fa-so" it can be judged that "fa-so" is "sofa" with its order reversed, and combining the context, "fa-so" can then be paraphrased correctly. For another example, if the child says "I so-don't-like you, dear little kitty," then judging only from the "don't" one would determine that the child means dislike; based on the context, however, "so-don't-like" here actually expresses liking. If the children's dictionary records multiple paraphrases of "so-don't-like," such as "like" and "dislike," one can further use the context "dear little kitty" to determine that what "so-don't-like" expresses in this sentence is the moods "liking" and "joy." Accordingly, the entry "so-don't-like" in the children's dictionary can carry emotion labels such as "liking," "joy," and "happy," and the emotion label indicates that the child's present mood belongs to the positive emotion class. Those skilled in the art can further subdivide the positive, negative, and neutral emotion classes according to statistical data, so as to obtain a richer and more accurate children's dictionary and improve the accuracy of emotion recognition.
Based on the children's dictionary, the emotion label corresponding to each topic word and/or each emotion word can be determined in the children's dictionary.
a) If the determined emotion labels include no negative emotion class but include a positive emotion class, the emotional characteristics of the child user are determined to be positive.
Specifically, there may be multiple topic words and emotion words extracted from the voice interaction information; each is then looked up in the children's dictionary to determine the emotional characteristics of each word. If the emotional characteristics expressed by every word are positive, or either positive or neutral, the emotion expressed by the whole sentence can be determined to be positive. For example, the child says "The kitty is so pretty today": the topic words extracted can be "kitty" and "today," and the emotion words can be "so" and "pretty." Based on the children's dictionary, the emotion labels of "kitty" and "today" can be neutral, while the emotion labels of "so" and "pretty" can be positive; accordingly, the emotional characteristics expressed by the whole sentence are determined to be positive.
b) If the determined emotion labels include no positive emotion class but include a negative emotion class, the emotional characteristics of the child user are determined to be negative.
For example, the child says "I hate the kitty": the topic word extracted can be "kitty." Here "I" may be determined to be a topic word by the topic-word extraction rules, or may be recognized as a personal pronoun and ignored; those skilled in the art can decide according to preset semantic analysis rules, and this embodiment does not specifically limit this. The emotion word in the sentence is "hate." Based on the children's dictionary, the emotion labels of "kitty" and "I" can be neutral, while the emotion label of "hate" can be negative; accordingly, the emotional characteristics expressed by the whole sentence are determined to be negative.
c) If the determined emotion labels include only the neutral emotion class, the emotional characteristics of the child user are determined to be neutral.
For example, the child says "I went to kindergarten today": the topic words extracted can be "today" and "kindergarten," or "I," "today," "went," and "kindergarten"; the sentence contains no emotion word. Based on the children's dictionary, all of these words can be labeled as neutral, so the emotional characteristics expressed by the whole sentence are determined to be neutral.
d) If the determined emotion labels include both a positive emotion class and a negative emotion class, the emotional characteristics of the child user are determined according to the word order of the topic words and/or emotion words in the voice interaction information, based on the context semantics.
For example, the child says "I so dislike the kitty": the topic word extracted can be "kitty," and the emotion words can be "so" and "dislike." Based on the children's dictionary, the emotion label of "kitty" can be neutral, the emotion label of "so" can be positive, and the emotion label of "dislike" can be negative. In this case, according to the word order of "so" and "dislike" in the original sentence, and based on the context semantics, "so dislike" is determined to be a negative emotional characteristic. It should be noted that the emotion of "so dislike" can either be determined to be negative by the recognition method in d), or, according to children's communication habits, the context-combined paraphrases contained in the children's dictionary may identify "so dislike" as a whole phrase carrying the emotional characteristic "dislike."
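Rules a) through d) can be sketched as a small decision function over the per-word labels. The label names are placeholders, and the context-based resolution of rule d) is stubbed out as a callback, since the patent leaves its details to word order and context semantics:

```python
# Sketch of step 204's rules a)-d): combine per-word emotion labels into
# a sentence-level emotional characteristic.
POS, NEG, NEU = "positive", "negative", "neutral"

def sentence_emotion(labels, context_resolver=None):
    has_pos, has_neg = POS in labels, NEG in labels
    if has_pos and not has_neg:
        return POS                  # rule a): only positive (and neutral)
    if has_neg and not has_pos:
        return NEG                  # rule b): only negative (and neutral)
    if not has_pos and not has_neg:
        return NEU                  # rule c): neutral only
    # rule d): both present -> defer to word order / context semantics
    return context_resolver(labels) if context_resolver else NEU

assert sentence_emotion([NEU, NEU, POS, POS]) == POS   # "kitty so pretty"
assert sentence_emotion([NEU, NEG]) == NEG             # "hate kitty"
assert sentence_emotion([NEU, NEU]) == NEU             # "went kindergarten"
assert sentence_emotion([POS, NEG], lambda ls: NEG) == NEG  # "so dislike"
```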
Besides looking up, based on the children's dictionary, the emotional characteristics corresponding to the topic words and/or emotion words, the preset recognition rule can also be based on a children's emotion recognition model that recognizes the moods expressed in children's language.
First, the children's emotion recognition model needs to be established: children's mood-labeled data is obtained, and the children's emotion recognition model is obtained by training. Then the topic words and/or emotion words are input into the children's emotion recognition model, which recognizes the emotional characteristics of the child user.
The former children's dictionary approach is a dictionary-rule lookup derived from a large amount of statistical data. However, language is ever-changing, and keeping up with the evolution of popular expressions as they appear means that updating the children's dictionary faces considerable challenges. Therefore, advanced algorithms such as neural networks can be used to train on large quantities of children's expressions, so that the model continually learns children's ways of speaking and becomes a children's emotion recognition model capable of recognizing the moods expressed in children's language. This achieves accurate recognition of children's moods, and the model can be updated through continual learning and training to improve recognition accuracy.
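To make the model-based alternative concrete, here is a deliberately tiny bag-of-words classifier trained on mood-labeled utterances, in the spirit of the labeled-data training described above. The patent proposes neural networks on large data; this naive-Bayes-style sketch, with invented training samples, only illustrates the train-then-predict shape of the pipeline:

```python
from collections import Counter, defaultdict
import math

def train(samples):
    """samples: list of (tokens, label). Returns per-label word counts."""
    word_counts, label_counts = defaultdict(Counter), Counter()
    for tokens, label in samples:
        label_counts[label] += 1
        word_counts[label].update(tokens)
    return word_counts, label_counts

def predict(model, tokens):
    """Pick the label with the highest add-one-smoothed log score."""
    word_counts, label_counts = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        vocab = sum(word_counts[label].values()) + 1
        score = math.log(n / total)
        score += sum(math.log((word_counts[label][t] + 1) / vocab)
                     for t in tokens)
        if score > best_score:
            best, best_score = label, score
    return best

model = train([(["hate", "cat"], "negative"),
               (["happy", "today"], "positive"),
               (["like", "kitty"], "positive")])
print(predict(model, ["hate", "you"]))  # "negative"
```

The continual-update idea in the text corresponds to periodically re-running `train` (or fine-tuning a real model) on newly labeled utterances.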
Step 205: determine at least one of the following speech features in the voice interaction information: voice intensity, speech rate, and intonation.
In this embodiment, voice intensity can include sound characteristics such as sound intensity, loudness, and pitch.
Speech rate, through the number of words contained per unit of time, can reflect emotional expressions of the child such as nervousness, urgency, cheerfulness, and excitement.
Intonation is the melody of speech: the rise and fall of pitch and the variation of stress. Intonation can express the speaker's attitude or implication; the same sentence spoken with different intonation can mean different things. For example, if the child says "I have to do homework today" with a flat or rather droning intonation, this may indicate that the child is not very willing to do homework and is in an unhappy mood; with a rising intonation, the sentence takes on the effect of the question "I have to do homework today?".
Therefore, by recognizing the voice intensity, speech rate, intonation, and so on in the voice interaction information, the mood tendency of the child user can be clarified.
Step 206: determine the degree value of the emotional characteristics according to the speech features.
In this embodiment, when determining the degree value of the emotional characteristics, statistics can be computed over the speech features of the whole sentence, or over the speech features of each word in the sentence. Specifically:
In the first way, the voice interaction information is taken as a whole as the statistical object; the average value of the speech features corresponding to the voice interaction information is determined, and the degree value of the emotional characteristics is determined according to that average value.
In the second way, the speech features of each topic word and/or emotion word in the voice interaction information are determined; the speech features of the voice interaction information are weighted according to the weight values of the different parts of speech, yielding the weighted average of the speech features corresponding to the voice interaction information; the degree value of the emotional characteristics is then determined according to that weighted average.
The two ways each have their strengths. The first way computes the average of the speech features from the expression of the whole sentence, for example from the speech waveform; the average can be compared with a preset threshold to determine the degree value of the emotional characteristics it reflects. Its benefit is a small and simple computation, which effectively improves the efficiency of determining the degree value. However, because the averaging is based on the expression of the whole sentence, a sentence in which only individual words show strongly changed intonation or voice intensity may have that reinforcement weakened, averaged out, when the speech features of the whole sentence are averaged. The second way extracts the speech features of each topic word and/or emotion word in the sentence, assigns different weight values according to each word's part of speech, weights each word, and computes a weighted average over the sentence to obtain the degree value. The degree value determined this way is more accurate; but since the speech features of every word must be determined and weighted, the amount of computation is somewhat larger than in the first way, while the accuracy rate improves. For example, the child user says "I do NOT!!! like eating radish," using a high pitch on "not"; and "not," having the part of speech of a negation word, is given a larger weight value, so that after the weighted average is computed, the degree value of the negative emotion expressed by this sentence is enhanced.
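The second way can be sketched as a part-of-speech-weighted average over per-word features. The weights, tags, and 0..1 per-word feature values below are assumptions made for illustration:

```python
# Sketch of step 206, second way: weight per-word speech features by
# part of speech, then average. Weights and features are illustrative.
POS_WEIGHTS = {"negation": 3.0, "adjective": 2.0, "noun": 1.0, "other": 0.5}

def weighted_degree(words):
    """words: list of (token, pos, feature), feature being a normalized
    0..1 combination of that word's pitch/intensity/rate."""
    num = sum(POS_WEIGHTS.get(pos, 0.5) * f for _, pos, f in words)
    den = sum(POS_WEIGHTS.get(pos, 0.5) for _, pos, _ in words)
    return num / den

utterance = [("i", "other", 0.2), ("not", "negation", 0.9),
             ("like", "adjective", 0.3), ("radish", "noun", 0.2)]
plain_avg = sum(f for _, _, f in utterance) / len(utterance)
weighted = weighted_degree(utterance)
assert weighted > plain_avg  # the stressed negation word dominates
```

This reproduces the effect in the example: the high-pitched "not" pulls the weighted degree above the plain average that the first way would compute.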
Step 207: determine the response strategy according to the emotional characteristics and their degree value.
In this embodiment, the response strategy may include: a heart-to-heart dialogue response mode and/or an audio resource playback mode. Then, if the emotional characteristics are negative and their degree value exceeds a preset threshold, the response strategy may be determined to be the heart-to-heart dialogue response mode; or the response strategy may be determined to be responding first in the heart-to-heart dialogue response mode, and then in the audio resource playback mode.
That is, when the child's mood is very low, the response should take a more caring form, for example engaging the child in dialogue to understand the reason for its low spirits. If, at such a moment, the child is answered directly with music or a story, the child may feel that its mood is being ignored and become even more depressed. Therefore, for negative emotional characteristics, the response strategy can be the heart-to-heart dialogue response mode; or first talk with the child heart-to-heart, and then, following the change in its mood, interleave audio resources (such as songs or stories the child likes) to soothe the mood.
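The strategy selection just described reduces to a small branch on the emotion class and its degree. The threshold value and strategy names below are placeholders, and the choice of "comfort first, audio afterwards" for strongly negative moods follows the text's reasoning:

```python
# Sketch of step 207: choose a response strategy from the emotional
# characteristic and its degree value. Threshold is an assumption.
DIALOGUE, AUDIO = "heart_to_heart_dialogue", "audio_playback"

def response_strategy(emotion, degree, threshold=0.6):
    if emotion == "negative" and degree > threshold:
        # strongly negative: comfort first, play audio only afterwards
        return [DIALOGUE, AUDIO]
    if emotion == "negative":
        return [DIALOGUE]
    return [AUDIO]

assert response_strategy("negative", 0.9) == [DIALOGUE, AUDIO]
assert response_strategy("positive", 0.9) == [AUDIO]
```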
Step 208: optimize the response strategy according to the determined user portrait of the child user and/or according to the scene the child user is currently in.
In this embodiment, besides determining the response strategy by analyzing the child's voice interaction information as in steps 201 to 207, the response strategy can also be determined in combination with the child's user portrait (e.g., whether the child is a boy or a girl, what the child likes, what the child dislikes, and so on). As to how the user portrait is determined, those skilled in the art may use prior-art methods of determining user portraits, for example by analyzing the user's previous interaction records; this embodiment does not specifically limit this.
The user portrait may include at least one of the following features: attribute information of the child user, the history interaction records of the child user, the habitual expressions of the child user, the daily schedule of the child user, the audio resources the child user favors, the association between geographical location and the child user, and so on. The response strategy is then optimized according to the determined user portrait of the child user.
In addition, the story machine can obtain the time information and/or location information at which the voice interaction information was received; according to the time information and/or location information, and based on the user portrait, the scene the child user is currently in is determined; the response strategy is then optimized according to the current scene.
For example, suppose that according to steps 201 to 207 the child's current mood is determined to be happy, but the location information obtained at that moment is the child's home, the time information is 9 p.m., and the user portrait shows that the child usually goes to bed around 9:30 p.m. The response strategy determined in step 207 might be to play a piece of cheerful music; considering, however, that the current scene is the time before the child falls asleep, a more soothing song can be selected from among the cheerful music, so as to help the child fall asleep on time around 9:30. Thus, by combining considerations from multiple dimensions (such as user portrait, time, and place), the response strategy can be further optimized so that, while suiting the child user's mood, it helps guide that mood in a more benign direction.
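The bedtime example can be sketched as two small functions: one infers the scene from time, location, and portrait, and one adjusts the strategy for that scene. The scene rules, the one-hour bedtime window, and the strategy names are all illustrative assumptions:

```python
# Sketch of step 208: scene inference and scene-aware strategy adjustment.
def current_scene(hour, location, portrait):
    bedtime = portrait.get("bedtime_hour", 21.5)
    if location == "home" and bedtime - 1 <= hour < bedtime + 1:
        return "before_sleep"
    return "default"

def optimize(strategy, scene):
    if scene == "before_sleep" and strategy == "cheerful_music":
        return "soothing_music"  # keep the happy mood, lower the energy
    return strategy

scene = current_scene(21.0, "home", {"bedtime_hour": 21.5})
assert optimize("cheerful_music", scene) == "soothing_music"
```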
Optionally, a mood analysis report of the child user can also be generated according to a predetermined period, so that parents can better understand their own child.
Fig. 3 is a structural schematic diagram of an interaction device based on children's mood according to an exemplary embodiment of the present invention. The device can be any of various electronic devices supporting information communication, including but not limited to children's interactive devices (story machines, smart watches, interactive robots), smartphones, tablet computers, portable computers, desktop computers, and so on. The interaction device based on children's mood can also be arranged in a cloud server: the cloud server exchanges network data with electronic devices such as children's interactive devices (story machines, smart watches, interactive robots), smartphones, tablet computers, portable computers, and desktop computers, thereby providing the service of response strategies for children's moods. The cloud server may include one or more servers for performing functions such as computation, analysis, and storage. Taking a story machine as the children's interactive device: the story machine receives the child's voice interaction information and sends it to the cloud server; after receiving the voice interaction information, the cloud server analyzes and recognizes it based on the interaction method based on children's mood in the foregoing embodiments, obtains a response strategy suiting the child user's current mood, and feeds the response strategy back to the story machine over the network, so that the story machine communicates smoothly with the child in a way that better understands the child's mood. The interaction device based on children's mood can be implemented by software, hardware, or a combination of the two. As shown in Fig. 3, the device includes:
Receiving module 31, configured to receive the voice interaction information of the child user.
Determining module 32, configured to determine the interaction content in the voice interaction information; determine the speech features of the child user in the voice interaction information; determine the emotional characteristics of the child user according to the interaction content; determine the degree value of the emotional characteristics according to the speech features; and determine the response strategy according to the emotional characteristics and their degree value.
The interaction device based on children's mood provided by this embodiment can perform the method embodiment shown in Fig. 1 above; its implementation principle and technical effects are similar and are not repeated here.
The interaction device based on user emotion provided by this embodiment receives the voice interaction information of a child user; determines the interaction content from the voice interaction information; determines the speech features of the child user from the voice interaction information; then determines the emotional characteristics of the child user according to the interaction content; determines the degree value of the emotional characteristics according to the speech features; and determines the response strategy according to the emotional characteristics and their degree value. This achieves an accurate grasp of the child's mood, improves the harmony and smoothness of communication with the child, and ensures benign guidance of the child's emotions.
Fig. 4 is a structural schematic diagram of an interaction device based on children's mood according to another exemplary embodiment of the present invention. As shown in Fig. 4, on the basis of the above embodiments, the device further includes:
Determining module 32, including:
Transform submodule 321, configured to convert the voice interaction information into text information.
Analysis submodule 322, configured to perform semantic analysis on the text information.
Extracting submodule 323, configured to extract topic words and/or emotion words from the text information to obtain the interaction content.
Emotional characteristics determination submodule 324, configured to determine the emotional characteristics of the child user according to the topic words and/or emotion words, based on a preset recognition rule.
Optionally, the device further includes:
Lexicon module 33, configured to establish the children's dictionary based on children's language.
The children's language at least includes: children's reduplicated expressions and personified expressions for animals. The children's dictionary includes: paraphrases of children's language, paraphrases of children's language combined with context, and emotion labels of children's language.
The emotion label of children's language is used to identify the positive emotion class, the negative emotion class, or the neutral emotion class.
Optionally, the emotional characteristics determination submodule 324 is specifically configured to determine, in the children's dictionary, the emotion label corresponding to each topic word and/or each emotion word:
when the determined emotion labels include no negative emotion class but include a positive emotion class, determining that the emotional characteristics of the child user are positive;
when the determined emotion labels include no positive emotion class but include a negative emotion class, determining that the emotional characteristics of the child user are negative;
when the determined emotion labels include only the neutral emotion class, determining that the emotional characteristics of the child user are neutral;
when the determined emotion labels include both a positive emotion class and a negative emotion class, determining the emotional characteristics of the child user according to the word order of the topic words and/or emotion words in the voice interaction information, based on the context semantics.
Optionally, the device further includes:
Identification model module 34, configured to obtain children's mood-labeled data and train the children's emotion recognition model.
Correspondingly, the emotional characteristics determination submodule 324 is specifically configured to input the topic words and/or emotion words into the children's emotion recognition model of the identification model module, which recognizes the emotional characteristics of the child user.
Optionally, the determining module 32 includes:
Speech feature determination submodule 325, configured to determine at least one of the following speech features in the voice interaction information: voice intensity, speech rate, and intonation.
Optionally, the determining module 32 includes:
First degree value determination submodule 326, configured to take the voice interaction information as a whole as the statistical object, determine the average value of the speech features corresponding to the voice interaction information, and determine the degree value of the emotional characteristics according to the average value of the speech features.
Optionally, the determining module 32 includes:
Second degree value determination submodule 327, configured to determine the speech features of each topic word and/or emotion word in the voice interaction information; weight the speech features of the voice interaction information according to the weight values of the different parts of speech to obtain the weighted average of the speech features corresponding to the voice interaction information; and determine the degree value of the emotional characteristics according to the weighted average of the speech features.
Optionally, the response strategy includes a heart-to-heart dialogue response mode and/or an audio resource playing mode; the determining module 32 includes:
a first determination sub-module 328, configured to, when the emotional characteristics are negative and their degree value exceeds a preset threshold, determine that the response strategy is the heart-to-heart dialogue response mode, or determine that the response strategy is to respond first in the heart-to-heart dialogue response mode and then in the audio resource playing mode.
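The branch above can be sketched as a small decision function; the threshold value and strategy labels are hypothetical:

```python
def choose_response_strategy(emotion, degree, threshold=0.6):
    """Sketch of the first determination sub-module: for strongly negative
    emotion, comfort first (heart-to-heart dialogue) and then play audio;
    otherwise respond with audio only. threshold is a hypothetical value."""
    if emotion == "negative" and degree > threshold:
        return ["heart_to_heart_dialogue", "audio_resource_playing"]
    return ["audio_resource_playing"]

print(choose_response_strategy("negative", 0.8))  # comfort first, then audio
print(choose_response_strategy("positive", 0.9))  # audio only
```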
Optionally, the determining module 32 further includes:
a user portrait determination sub-module 329, configured to determine the user portrait of the child user, where the user portrait includes at least one of the following features: attribute information of the child user, historical interaction records of the child user, habitual expressions of the child user, the daily routine of the child user, audio resources favored by the child user, and associations between geographical locations and the child user; and
an optimization sub-module 330, configured to optimize the response strategy according to the user portrait of the child user determined by the user portrait determination sub-module.
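One possible (hypothetical) way the portrait could refine a strategy, for illustration only — here, steering the audio mode toward a resource the portrait records as a favorite:

```python
def optimize_strategy(strategy, portrait):
    """Sketch: bias the audio choice toward resources the child already
    likes, as recorded in a (hypothetical) user-portrait dict. The input
    strategy dict is not modified."""
    optimized = dict(strategy)
    favorites = portrait.get("favorite_audio", [])
    if favorites and optimized.get("mode") == "audio_resource_playing":
        optimized["resource"] = favorites[0]
    return optimized

portrait = {"favorite_audio": ["lullaby_01"], "habitual_terms": ["doggy"]}
result = optimize_strategy({"mode": "audio_resource_playing"}, portrait)
print(result["resource"])  # lullaby_01
```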
Optionally, the device further includes:
an acquisition module 35, configured to obtain the time information and/or location information at which the voice interaction information is received.
The determining module 32 further includes:
a scene determination sub-module 331, configured to determine, according to the time information and/or location information and based on the user portrait, the scene in which the child user is currently located.
The optimization sub-module 330 is further configured to optimize the response strategy according to the current scene.
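A toy sketch of scene determination from time and location; the scene labels and cut-off times are invented for illustration, and the real sub-module would also consult the user portrait (e.g. the child's daily routine):

```python
from datetime import time

def current_scene(t, at_home=True):
    """Map time-of-day and a coarse location flag to a scene label.
    The 20:00 bedtime cut-off and the labels are hypothetical."""
    if at_home and t >= time(20, 0):
        return "bedtime"
    if not at_home:
        return "outdoors"
    return "playtime"

print(current_scene(time(21, 30)))        # bedtime
print(current_scene(time(10, 0), False))  # outdoors
```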
Optionally, the device further includes:
a generation module 36, configured to generate an emotion analysis report of the child user according to a predetermined period.
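The periodic report could aggregate the per-interaction results collected over one period; this is a minimal sketch assuming the report simply counts emotions and averages the degree of the negative ones (the patent does not specify the report's contents):

```python
from collections import Counter

def mood_report(records):
    """Aggregate (emotion, degree) records from one period into a simple
    report: counts per emotion and the mean degree of negative ones."""
    counts = Counter(emotion for emotion, _ in records)
    negatives = [d for e, d in records if e == "negative"]
    avg_neg = sum(negatives) / len(negatives) if negatives else 0.0
    return {"counts": dict(counts), "avg_negative_degree": avg_neg}

week = [("positive", 0.3), ("negative", 0.8), ("negative", 0.6)]
report = mood_report(week)
print(report["counts"]["negative"], round(report["avg_negative_degree"], 2))  # 2 0.7
```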
The interaction device based on user emotion provided in this embodiment can execute the method embodiment shown in Fig. 2; its implementation principle and technical effect are similar and are not repeated here.
Fig. 5a is a structural schematic diagram of an electronic device of the present invention shown in an exemplary embodiment. The electronic device 500 includes a processing unit 502 and a communication unit 503. The processing unit 502 controls and manages the actions of the electronic device 500; for example, it supports the electronic device 500 in executing steps 102-106 of Fig. 1, steps 202-208 of Fig. 2, and/or other processes of the technology described in the present invention. The communication unit 503 handles communication between the electronic device 500 and other network entities, and can also support the electronic device 500 in executing step 101 of Fig. 1 or step 201 of Fig. 2. The electronic device 500 may further include a storage unit 501 for storing the program code and data of the electronic device 500.
The processing unit 502 may be a processor or a controller, such as a CPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in the present disclosure. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 503 may be a communication interface, a transceiver, a transceiver circuit, or the like, where "communication interface" is a collective term that may include one or more interfaces. The storage unit 501 may be a memory.
When the processing unit 502 is a processor, the communication unit 503 is a communication interface, and the storage unit 501 is a memory, the electronic device involved in the present invention may be the electronic device 510 shown in Fig. 5b.
As shown in Fig. 5b, the electronic device 510 includes a processor 512, a communication interface 513, and a memory 511. Optionally, the electronic device 510 may further include a bus 514, through which the communication interface 513, the processor 512, and the memory 511 may be interconnected. The bus 514 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in Fig. 5b, but this does not mean that there is only one bus or only one type of bus.
In addition, a program is stored in the memory 511 and configured to be executed by the processor 512; the program includes instructions for executing the method described in any of the above embodiments.
An embodiment of the present invention further provides an electronic-device-readable storage medium storing a program that causes a server to execute the interaction method based on user emotion provided by any of the foregoing embodiments. The readable storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only used to illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features; such modifications or replacements do not remove the essence of the corresponding technical solutions from the scope of the technical solutions of the various embodiments of the present invention.
Claims (26)
1. An interaction method based on children's emotion, characterized by comprising:
receiving voice interaction information of a child user;
determining interaction content in the voice interaction information;
determining speech features of the child user in the voice interaction information;
determining emotional characteristics of the child user according to the interaction content;
determining a degree value of the emotional characteristics according to the speech features; and
determining a response strategy according to the emotional characteristics and the degree value of the emotional characteristics.
2. The method according to claim 1, characterized in that determining the interaction content in the voice interaction information comprises:
converting the voice interaction information into text information; and
performing semantic analysis on the text information and extracting topic words and/or emotion words from the text information to obtain the interaction content;
correspondingly, determining the emotional characteristics of the child user according to the interaction content comprises:
determining the emotional characteristics of the child user according to the topic words and/or emotion words, based on a preset recognition rule.
3. The method according to claim 2, characterized by further comprising:
establishing a children's dictionary based on children's language;
wherein the children's language includes at least reduplicated-word expressions of children and personified expressions of animals; the children's dictionary includes paraphrases of the children's language, paraphrases of the children's language in context, and emotion marks of the children's language; and
the emotion marks of the children's language are used to identify a positive emotion category, a negative emotion category, and a neutral emotion category.
4. The method according to claim 3, characterized in that determining the emotional characteristics of the child user according to the topic words and/or emotion words based on the preset recognition rule comprises:
determining, in the children's dictionary, the emotion mark corresponding to each topic word and/or each emotion word;
if the determined emotion marks include the positive emotion category but not the negative emotion category, determining that the emotional characteristics of the child user are positive;
if the determined emotion marks include the negative emotion category but not the positive emotion category, determining that the emotional characteristics of the child user are negative;
if the determined emotion marks include only the neutral emotion category, determining that the emotional characteristics of the child user are neutral; and
if the determined emotion marks include both the positive emotion category and the negative emotion category, determining the emotional characteristics of the child user according to the word order of the topic words and/or emotion words in the voice interaction information and based on the contextual semantics.
5. The method according to claim 2, characterized by further comprising:
obtaining children's emotion annotation data and training a children's emotion recognition model;
correspondingly, determining the emotional characteristics of the child user according to the topic words and/or emotion words based on the preset recognition rule comprises:
inputting the topic words and/or emotion words into the children's emotion recognition model and obtaining the emotional characteristics of the child user through recognition.
6. The method according to claim 2, characterized in that determining the speech features of the child user in the voice interaction information comprises:
determining at least one of the following speech features in the voice interaction information: speech intensity, speaking rate, and intonation.
7. The method according to claim 2, characterized in that determining the degree value of the emotional characteristics according to the speech features comprises:
taking the voice interaction information as a whole as the statistical object and determining the average value of the speech features corresponding to the voice interaction information; and
determining the degree value of the emotional characteristics according to the average value of the speech features.
8. The method according to claim 2, characterized in that determining the degree value of the emotional characteristics according to the speech features comprises:
determining the speech features of each topic word and/or emotion word in the voice interaction information;
performing a weighted calculation on the speech features of the voice interaction information according to the weight values of the different parts of speech, obtaining a weighted average of the speech features corresponding to the voice interaction information; and
determining the degree value of the emotional characteristics according to the weighted average of the speech features.
9. The method according to claim 1, characterized in that the response strategy includes a heart-to-heart dialogue response mode and/or an audio resource playing mode, and determining the response strategy according to the emotional characteristics and the degree value of the emotional characteristics comprises:
if the emotional characteristics are negative and their degree value exceeds a preset threshold, determining that the response strategy is the heart-to-heart dialogue response mode, or determining that the response strategy is to respond first in the heart-to-heart dialogue response mode and then in the audio resource playing mode.
10. The method according to claim 1, characterized by further comprising:
determining a user portrait of the child user, wherein the user portrait includes at least one of the following features: attribute information of the child user, historical interaction records of the child user, habitual expressions of the child user, the daily routine of the child user, audio resources favored by the child user, and associations between geographical locations and the child user; and
optimizing the response strategy according to the determined user portrait of the child user.
11. The method according to claim 10, characterized by further comprising:
obtaining the time information and/or location information at which the voice interaction information is received;
determining, according to the time information and/or location information and based on the user portrait, the scene in which the child user is currently located; and
optimizing the response strategy according to the current scene.
12. The method according to claim 1, characterized by further comprising:
generating an emotion analysis report of the child user according to a predetermined period.
13. An interaction device based on children's emotion, characterized by comprising:
a receiving module, configured to receive voice interaction information of a child user; and
a determining module, configured to determine interaction content in the voice interaction information; determine speech features of the child user in the voice interaction information; determine emotional characteristics of the child user according to the interaction content; determine a degree value of the emotional characteristics according to the speech features; and determine a response strategy according to the emotional characteristics and the degree value of the emotional characteristics.
14. The device according to claim 13, characterized in that the determining module includes:
a conversion sub-module, configured to convert the voice interaction information into text information;
an analysis sub-module, configured to perform semantic analysis on the text information;
an extraction sub-module, configured to extract topic words and/or emotion words from the text information to obtain the interaction content; and
an emotional characteristics determination sub-module, configured to determine the emotional characteristics of the child user according to the topic words and/or emotion words, based on a preset recognition rule.
15. The device according to claim 14, characterized by further comprising:
a dictionary module, configured to establish a children's dictionary based on children's language;
wherein the children's language includes at least reduplicated-word expressions of children and personified expressions of animals; the children's dictionary includes paraphrases of the children's language, paraphrases of the children's language in context, and emotion marks of the children's language; and
the emotion marks of the children's language are used to identify a positive emotion category, a negative emotion category, and a neutral emotion category.
16. The device according to claim 15, characterized in that the emotional characteristics determination sub-module is specifically configured to:
determine, in the children's dictionary, the emotion mark corresponding to each topic word and/or each emotion word;
when the determined emotion marks include the positive emotion category but not the negative emotion category, determine that the emotional characteristics of the child user are positive;
when the determined emotion marks include the negative emotion category but not the positive emotion category, determine that the emotional characteristics of the child user are negative;
when the determined emotion marks include only the neutral emotion category, determine that the emotional characteristics of the child user are neutral; and
when the determined emotion marks include both the positive emotion category and the negative emotion category, determine the emotional characteristics of the child user according to the word order of the topic words and/or emotion words in the voice interaction information and based on the contextual semantics.
17. The device according to claim 14, characterized by further comprising:
a recognition model module, configured to obtain children's emotion annotation data and train a children's emotion recognition model;
correspondingly, the emotional characteristics determination sub-module is specifically configured to input the topic words and/or emotion words into the children's emotion recognition model of the recognition model module and obtain the emotional characteristics of the child user through recognition.
18. The device according to claim 14, characterized in that the determining module includes:
a speech feature determination sub-module, configured to determine at least one of the following speech features in the voice interaction information: speech intensity, speaking rate, and intonation.
19. The device according to claim 14, characterized in that the determining module includes:
a first degree value determination sub-module, configured to take the voice interaction information as a whole as the statistical object, determine the average value of the speech features corresponding to the voice interaction information, and determine the degree value of the emotional characteristics according to the average value of the speech features.
20. The device according to claim 14, characterized in that the determining module includes:
a second degree value determination sub-module, configured to determine the speech features of each topic word and/or emotion word in the voice interaction information; perform a weighted calculation on the speech features of the voice interaction information according to the weight values of the different parts of speech, obtaining a weighted average of the speech features corresponding to the voice interaction information; and determine the degree value of the emotional characteristics according to the weighted average of the speech features.
21. The device according to claim 13, characterized in that the response strategy includes a heart-to-heart dialogue response mode and/or an audio resource playing mode, and the determining module includes:
a first determination sub-module, configured to, when the emotional characteristics are negative and their degree value exceeds a preset threshold, determine that the response strategy is the heart-to-heart dialogue response mode, or determine that the response strategy is to respond first in the heart-to-heart dialogue response mode and then in the audio resource playing mode.
22. The device according to claim 13, characterized in that the determining module further includes:
a user portrait determination sub-module, configured to determine a user portrait of the child user, wherein the user portrait includes at least one of the following features: attribute information of the child user, historical interaction records of the child user, habitual expressions of the child user, the daily routine of the child user, audio resources favored by the child user, and associations between geographical locations and the child user; and
an optimization sub-module, configured to optimize the response strategy according to the user portrait of the child user determined by the user portrait determination sub-module.
23. The device according to claim 22, characterized in that the device further includes:
an acquisition module, configured to obtain the time information and/or location information at which the voice interaction information is received;
the determining module further includes:
a scene determination sub-module, configured to determine, according to the time information and/or location information and based on the user portrait, the scene in which the child user is currently located; and
the optimization sub-module is further configured to optimize the response strategy according to the current scene.
24. The device according to claim 13, characterized by further comprising:
a generation module, configured to generate an emotion analysis report of the child user according to a predetermined period.
25. An electronic device, characterized by comprising: a processor; a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, the program including instructions for executing the method according to any one of claims 1-12.
26. An electronic-device-readable storage medium, characterized in that the electronic-device-readable storage medium stores a program that causes an electronic device to execute the method according to any one of claims 1-12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810290987.3A CN108536802B (en) | 2018-03-30 | 2018-03-30 | Interaction method and device based on child emotion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108536802A true CN108536802A (en) | 2018-09-14 |
CN108536802B CN108536802B (en) | 2020-01-14 |
Family
ID=63482290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810290987.3A Active CN108536802B (en) | 2018-03-30 | 2018-03-30 | Interaction method and device based on child emotion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108536802B (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109243582A (en) * | 2018-09-19 | 2019-01-18 | 江苏金惠甫山软件科技有限公司 | The human-computer interaction motion management method and system of knowledge based graphical spectrum technology |
CN109344229A (en) * | 2018-09-18 | 2019-02-15 | 深圳壹账通智能科技有限公司 | Method, apparatus, computer equipment and the storage medium of dialog analysis evaluation |
CN109408708A (en) * | 2018-09-25 | 2019-03-01 | 平安科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium that user recommends |
CN109473122A (en) * | 2018-11-12 | 2019-03-15 | 平安科技(深圳)有限公司 | Mood analysis method, device and terminal device based on detection model |
CN109547332A (en) * | 2018-11-22 | 2019-03-29 | 腾讯科技(深圳)有限公司 | Communication session interaction method and device, and computer equipment |
CN109670166A (en) * | 2018-09-26 | 2019-04-23 | 平安科技(深圳)有限公司 | Collection householder method, device, equipment and storage medium based on speech recognition |
CN109754810A (en) * | 2019-02-21 | 2019-05-14 | 珠海格力电器股份有限公司 | A kind of sound control method, device, storage medium and air-conditioning |
CN109871807A (en) * | 2019-02-21 | 2019-06-11 | 百度在线网络技术(北京)有限公司 | Face image processing process and device |
CN109885277A (en) * | 2019-02-26 | 2019-06-14 | 百度在线网络技术(北京)有限公司 | Human-computer interaction device, mthods, systems and devices |
CN109935242A (en) * | 2019-01-10 | 2019-06-25 | 上海言通网络科技有限公司 | Formula speech processing system and method can be interrupted |
CN109933782A (en) * | 2018-12-03 | 2019-06-25 | 阿里巴巴集团控股有限公司 | User emotion prediction technique and device |
CN110246519A (en) * | 2019-07-25 | 2019-09-17 | 深圳智慧林网络科技有限公司 | Emotion identification method, equipment and computer readable storage medium |
CN110264791A (en) * | 2019-05-30 | 2019-09-20 | 合肥阿拉丁智能科技有限公司 | Wrist-watch robot automtion autonomous operation system |
CN110502609A (en) * | 2019-07-11 | 2019-11-26 | 江苏心涧康科技有限公司 | A kind of method, apparatus and company robot of adjusting mood |
CN111048075A (en) * | 2018-10-11 | 2020-04-21 | 上海智臻智能网络科技股份有限公司 | Intelligent customer service system and intelligent customer service robot |
CN111143529A (en) * | 2019-12-24 | 2020-05-12 | 北京赤金智娱科技有限公司 | Method and equipment for carrying out conversation with conversation robot |
CN111696556A (en) * | 2020-07-13 | 2020-09-22 | 上海茂声智能科技有限公司 | Method, system, equipment and storage medium for analyzing user conversation emotion |
WO2020253128A1 (en) * | 2019-06-17 | 2020-12-24 | 深圳壹账通智能科技有限公司 | Voice recognition-based communication service method, apparatus, computer device, and storage medium |
CN112329431A (en) * | 2019-08-01 | 2021-02-05 | 中国移动通信集团上海有限公司 | Audio and video data processing method and device and storage medium |
CN112017668B (en) * | 2020-10-30 | 2021-09-24 | 北京淇瑀信息科技有限公司 | Intelligent voice conversation method, device and system based on real-time emotion detection |
CN113661036A (en) * | 2019-04-16 | 2021-11-16 | 索尼集团公司 | Information processing apparatus, information processing method, and program |
CN113693600A (en) * | 2021-08-27 | 2021-11-26 | 安徽淘云科技股份有限公司 | Robot and method for psychology analysis of children based on behavior psychology |
CN114566145A (en) * | 2022-03-04 | 2022-05-31 | 河南云迹智能技术有限公司 | Data interaction method, system and medium |
WO2022126734A1 (en) * | 2020-12-14 | 2022-06-23 | 美的集团股份有限公司 | Voice interaction processing method and apparatus, electronic device, and storage medium |
CN115460317A (en) * | 2022-09-05 | 2022-12-09 | 西安万像电子科技有限公司 | Emotion recognition and voice feedback method, device, medium and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110184721A1 (en) * | 2006-03-03 | 2011-07-28 | International Business Machines Corporation | Communicating Across Voice and Text Channels with Emotion Preservation |
CN103456314A (en) * | 2013-09-03 | 2013-12-18 | 广州创维平面显示科技有限公司 | Emotion recognition method and device |
CN103593054A (en) * | 2013-11-25 | 2014-02-19 | 北京光年无限科技有限公司 | Question-answering system combining emotion recognition and output |
CN104635574A (en) * | 2014-12-15 | 2015-05-20 | 山东大学 | Infant-oriented early-education accompanying and tending robot system |
CN106683672A (en) * | 2016-12-21 | 2017-05-17 | 竹间智能科技(上海)有限公司 | Intelligent dialogue method and system based on emotion and semantics |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344229A (en) * | 2018-09-18 | 2019-02-15 | 深圳壹账通智能科技有限公司 | Method, apparatus, computer equipment and the storage medium of dialog analysis evaluation |
WO2020057014A1 (en) * | 2018-09-18 | 2020-03-26 | 深圳壹账通智能科技有限公司 | Dialogue analysis and evaluation method and apparatus, computer device and storage medium |
CN109243582A (en) * | 2018-09-19 | 2019-01-18 | 江苏金惠甫山软件科技有限公司 | The human-computer interaction motion management method and system of knowledge based graphical spectrum technology |
CN109408708A (en) * | 2018-09-25 | 2019-03-01 | 平安科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium that user recommends |
CN109670166A (en) * | 2018-09-26 | 2019-04-23 | 平安科技(深圳)有限公司 | Collection householder method, device, equipment and storage medium based on speech recognition |
CN111048075A (en) * | 2018-10-11 | 2020-04-21 | 上海智臻智能网络科技股份有限公司 | Intelligent customer service system and intelligent customer service robot |
CN109473122A (en) * | 2018-11-12 | 2019-03-15 | 平安科技(深圳)有限公司 | Mood analysis method, device and terminal device based on detection model |
CN109547332A (en) * | 2018-11-22 | 2019-03-29 | 腾讯科技(深圳)有限公司 | Communication session interaction method and device, and computer equipment |
CN109547332B (en) * | 2018-11-22 | 2022-05-13 | 腾讯科技(深圳)有限公司 | Communication session interaction method and device, and computer equipment |
CN109933782B (en) * | 2018-12-03 | 2023-11-28 | 创新先进技术有限公司 | User emotion prediction method and device |
CN109933782A (en) * | 2018-12-03 | 2019-06-25 | 阿里巴巴集团控股有限公司 | User emotion prediction technique and device |
CN109935242A (en) * | 2019-01-10 | 2019-06-25 | 上海言通网络科技有限公司 | Formula speech processing system and method can be interrupted |
CN109871807B (en) * | 2019-02-21 | 2023-02-10 | 百度在线网络技术(北京)有限公司 | Face image processing method and device |
CN109871807A (en) * | 2019-02-21 | 2019-06-11 | 百度在线网络技术(北京)有限公司 | Face image processing process and device |
CN109754810A (en) * | 2019-02-21 | 2019-05-14 | 珠海格力电器股份有限公司 | A kind of sound control method, device, storage medium and air-conditioning |
CN109885277A (en) * | 2019-02-26 | 2019-06-14 | 百度在线网络技术(北京)有限公司 | Human-computer interaction device, mthods, systems and devices |
CN113661036A (en) * | 2019-04-16 | 2021-11-16 | 索尼集团公司 | Information processing apparatus, information processing method, and program |
CN110264791A (en) * | 2019-05-30 | 2019-09-20 | 合肥阿拉丁智能科技有限公司 | Wrist-watch robot automtion autonomous operation system |
WO2020253128A1 (en) * | 2019-06-17 | 2020-12-24 | 深圳壹账通智能科技有限公司 | Voice recognition-based communication service method, apparatus, computer device, and storage medium |
CN110502609A (en) * | 2019-07-11 | 2019-11-26 | 江苏心涧康科技有限公司 | A kind of method, apparatus and company robot of adjusting mood |
CN110502609B (en) * | 2019-07-11 | 2022-03-08 | 江苏心涧康科技有限公司 | Method and device for adjusting emotion and accompanying robot |
CN110246519A (en) * | 2019-07-25 | 2019-09-17 | 深圳智慧林网络科技有限公司 | Emotion identification method, equipment and computer readable storage medium |
CN112329431A (en) * | 2019-08-01 | 2021-02-05 | 中国移动通信集团上海有限公司 | Audio and video data processing method and device and storage medium |
CN112329431B (en) * | 2019-08-01 | 2023-07-04 | 中国移动通信集团上海有限公司 | Audio and video data processing method, equipment and storage medium |
CN111143529A (en) * | 2019-12-24 | 2020-05-12 | 北京赤金智娱科技有限公司 | Method and device for conversing with a dialogue robot |
CN111696556A (en) * | 2020-07-13 | 2020-09-22 | 上海茂声智能科技有限公司 | Method, system, equipment and storage medium for analyzing user conversation emotion |
CN111696556B (en) * | 2020-07-13 | 2023-05-16 | 上海茂声智能科技有限公司 | Method, system, equipment and storage medium for analyzing user dialogue emotion |
CN112017668B (en) * | 2020-10-30 | 2021-09-24 | 北京淇瑀信息科技有限公司 | Intelligent voice conversation method, device and system based on real-time emotion detection |
WO2022126734A1 (en) * | 2020-12-14 | 2022-06-23 | 美的集团股份有限公司 | Voice interaction processing method and apparatus, electronic device, and storage medium |
CN113693600A (en) * | 2021-08-27 | 2021-11-26 | 安徽淘云科技股份有限公司 | Robot and method for child psychological analysis based on behavioral psychology |
CN114566145A (en) * | 2022-03-04 | 2022-05-31 | 河南云迹智能技术有限公司 | Data interaction method, system and medium |
CN115460317A (en) * | 2022-09-05 | 2022-12-09 | 西安万像电子科技有限公司 | Emotion recognition and voice feedback method, device, medium, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN108536802B (en) | 2020-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108536802A (en) | Interaction method and device based on children's emotion | |
KR102627948B1 (en) | Automated assistants that accommodate multiple age groups and/or vocabulary levels | |
US11645547B2 (en) | Human-machine interactive method and device based on artificial intelligence | |
US6795808B1 (en) | User interface/entertainment device that simulates personal interaction and charges external database with relevant data | |
Kurumada et al. | Pragmatic interpretation of contrastive prosody: It looks like speech adaptation | |
Berg | Modelling of natural dialogues in the context of speech-based information and control systems | |
CN106653052A (en) | Virtual human face animation generation method and device | |
CN109189980A (en) | Method and electronic device for voice interaction with a user |
KR20020067592A (en) | User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality | |
KR20020067590A (en) | Environment-responsive user interface/entertainment device that simulates personal interaction | |
KR20020067591A (en) | Self-updating user interface/entertainment device that simulates personal interaction | |
KR20210070213A (en) | Voice user interface | |
CN111145721A (en) | Personalized prompt language generation method, device and equipment | |
KR20040014123A (en) | Emotion and Motion Extracting System of Virtual Human and its Method | |
Pittermann et al. | Handling emotions in human-computer dialogues | |
CN108986785A (en) | Text adaptation method and device |
KR102101311B1 (en) | Method and apparatus for providing virtual reality including virtual pet | |
Mairesse | Learning to adapt in dialogue systems: data-driven models for personality recognition and generation. | |
Edlund | In search for the conversational homunculus: serving to understand spoken human face-to-face interaction | |
Padilha | Modelling turn-taking in a simulation of small group discussion | |
JP6930781B1 (en) | Learning method and content playback device | |
JP2019139282A (en) | Interaction control device capable of carrying out interaction about contents, program and method | |
Missaoui et al. | Interactive Storytelling for Children: A Case-study of Design and Development Considerations for Ethical Conversational AI |
Simeoni et al. | EMPATHIC VOICE: ENABLING EMOTIONAL INTELLIGENCE IN VIRTUAL ASSISTANTS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2021-05-18

- Address after: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing
- Patentee after: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.
- Patentee after: Shanghai Xiaodu Technology Co.,Ltd.
- Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing
- Patentee before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.