CN110020013A - Emotion information presentation device and electronic equipment - Google Patents


Info

Publication number
CN110020013A
CN110020013A (application CN201711328630.1A)
Authority
CN
China
Prior art keywords
emotion
presented
mode
vocabulary
affective style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711328630.1A
Other languages
Chinese (zh)
Inventor
Wang Hui (王慧)
Wang Yuning (王豫宁)
Zhu Pinpin (朱频频)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xiaoi Robot Technology Co Ltd
Shanghai Zhizhen Intelligent Network Technology Co Ltd
Original Assignee
Shanghai Zhizhen Intelligent Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhizhen Intelligent Network Technology Co Ltd filed Critical Shanghai Zhizhen Intelligent Network Technology Co Ltd
Priority to CN201711328630.1A priority Critical patent/CN110020013A/en
Priority to US16/052,345 priority patent/US10783329B2/en
Publication of CN110020013A publication Critical patent/CN110020013A/en
Priority to US16/992,284 priority patent/US11455472B2/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/90335 Query processing
    • G06F16/9038 Presentation of query results
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides an emotion information presentation device and an electronic device. The emotion information presentation device includes: an acquisition module for obtaining a first emotion presentation instruction; and a presentation module for performing emotion presentation according to at least one first emotion presentation modality when the at least one first emotion presentation modality meets an emotion presentation condition, and for determining, according to changes in user demand, dynamic changes in background control, and/or changes in application-scenario requirements, that the at least one first emotion presentation modality does not meet the emotion presentation condition, adjusting the at least one first emotion presentation modality in the first emotion presentation instruction to obtain at least one second emotion presentation modality in a second emotion presentation instruction, and performing emotion presentation according to the at least one second emotion presentation modality. The present invention realizes a multimodal emotion presentation mode and thereby improves the user experience.

Description

Emotion information presentation device and electronic equipment
Technical field
The present invention relates to the fields of natural language processing and artificial intelligence, and in particular to an emotion information presentation device and an electronic device.
Background art
With the continuous development of artificial intelligence technology and people's ever-higher expectations of interactive experience, intelligent interaction has gradually begun to replace some traditional human-computer interaction modes and has become a research hotspot.
At present, the prior art mainly focuses on recognizing emotion signals to obtain a certain affective state, or merely observes the user's expressions, movements, and the like to present feedback of a similar or opposite emotion. Such a presentation modality is single, and the user experience is poor.
Summary of the invention
In view of this, embodiments of the present invention provide an emotion information presentation device and an electronic device that can solve the above technical problems.
An embodiment of the present invention provides an emotion information presentation device, comprising:
an acquisition module for obtaining a first emotion presentation instruction, wherein the first emotion presentation instruction includes at least one first emotion presentation modality and at least one emotion type, and the at least one first emotion presentation modality includes at least one of a text emotion presentation modality, a sound emotion presentation modality, an image emotion presentation modality, a video emotion presentation modality, and a mechanical-movement emotion presentation modality; and
a presentation module for performing emotion presentation according to the at least one first emotion presentation modality when the at least one first emotion presentation modality meets an emotion presentation condition; and for determining, according to changes in user demand, dynamic changes in background control, and/or changes in application-scenario requirements, that the at least one first emotion presentation modality does not meet the emotion presentation condition, adjusting the at least one first emotion presentation modality in the first emotion presentation instruction, obtaining at least one second emotion presentation modality in a second emotion presentation instruction, and performing emotion presentation according to the at least one second emotion presentation modality.
Optionally, the presentation module searches an emotion presentation database according to the at least one emotion type to determine at least one emotion word corresponding to each emotion type in the at least one emotion type, and presents the at least one emotion word.
Optionally, each emotion type in the at least one emotion type corresponds to multiple emotion words, and the first emotion presentation instruction further includes: an emotional intensity corresponding to each emotion type in the at least one emotion type and/or an emotion polarity corresponding to each emotion type in the at least one emotion type, wherein the presentation module selects the at least one emotion word from the multiple emotion words according to the emotional intensity and/or the emotion polarity.
Optionally, the at least one emotion word is divided into different levels according to different emotional intensities.
Optionally, each emotion word in the at least one emotion word has one or more emotion types, and the same emotion word in the at least one emotion word has different emotion types and emotional intensities under different application scenarios.
Optionally, the emotion word is a compound emotion word comprising a combination of multiple words, wherein no individual word in the compound emotion word has the emotion-type attribute by itself.
Optionally, the presentation module performs, according to each emotion presentation modality in the at least one first emotion presentation modality, emotion presentation of an emotion type not specified by the first emotion presentation instruction, wherein the emotional intensity corresponding to the unspecified emotion type is lower than the emotional intensity corresponding to the at least one emotion type, or the emotion polarity of the unspecified emotion type is consistent with the emotion polarity of the at least one emotion type.
Optionally, the presentation module determines the magnitude of the emotional intensity of at least one emotion type in an emotion presentation text composed of at least one emotion word, and judges, based on the magnitude of the emotional intensity, whether the emotional intensity of the at least one emotion type meets the first emotion presentation instruction, wherein the emotional intensity of the i-th emotion type in the emotion presentation text can be calculated by the following formula:
round[(n/N) * 1/(1 + exp(-n + 1)) * max{a1, a2, ..., an}],
wherein round(X) denotes rounding X to the nearest integer; n denotes the number of emotion words of the i-th emotion type; N denotes the total number of emotion words in the emotion presentation text; M denotes the number of emotion types among the N emotion words; exp(x) denotes the exponential function with the natural constant e as base; a1, a2, ..., an denote the emotional intensities of the n emotion words for the i-th emotion type; max{a1, a2, ..., an} denotes the maximum of these emotional intensities; and n, N, and M are positive integers.
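As an illustration, the intensity formula above can be sketched in Python. The function and variable names are our own, not from the patent; this is a minimal sketch assuming the intensities a1..an are passed as a list:

```python
import math

def emotion_intensity(word_intensities, total_words):
    """Compute round[(n/N) * 1/(1+exp(-n+1)) * max{a1..an}] for one
    emotion type, where word_intensities holds a1..an for the n words
    of that type and total_words is N, the number of emotion words in
    the whole presentation text."""
    n = len(word_intensities)
    if n == 0 or total_words == 0:
        return 0  # no words of this type: no measurable intensity
    raw = (n / total_words) \
        * (1.0 / (1.0 + math.exp(-n + 1))) \
        * max(word_intensities)
    return round(raw)  # round(X): nearest integer
```

For example, two words of one type with intensities 2 and 3 in a four-word text give round(0.5 · 0.731 · 3) = 1.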
Optionally, the emotion polarity includes one or more of: positive (commendatory), negative (derogatory), and neutral.
An embodiment of the present invention further provides an electronic device comprising the above emotion information presentation device.
In the technical solutions provided by the embodiments of the present invention, a first emotion presentation instruction is obtained, the instruction including at least one first emotion presentation modality and at least one emotion type, the at least one first emotion presentation modality including at least one of a text emotion presentation modality, a sound emotion presentation modality, an image emotion presentation modality, a video emotion presentation modality, and a mechanical-movement emotion presentation modality; emotion presentation of one or more of the at least one emotion type is then performed according to each emotion presentation modality in the at least one first emotion presentation modality. A multimodal emotion presentation mode is thereby realized, which improves the user experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present invention.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 is a block diagram of an emotion information presentation device according to an exemplary embodiment of the present invention.
Fig. 2 is a block diagram of an emotion information presentation device according to another exemplary embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Emotion presentation is the final form of expression of an affective-computing user interface, and builds on the results of sentiment analysis recognition and emotion understanding (parsing). Based on an emotion presentation instruction decision process, emotion presentation can provide intelligent emotional feedback on the user's current state and deliver it to the user through an emotion output device.
Fig. 1 is a block diagram of an emotion information presentation device 300 according to an exemplary embodiment of the present invention. As shown in Fig. 1, the emotion information presentation device 300 includes:
an acquisition module 310 for obtaining a first emotion presentation instruction, wherein the first emotion presentation instruction includes at least one first emotion presentation modality and at least one emotion type, and the at least one first emotion presentation modality includes at least one of a text emotion presentation modality, a sound emotion presentation modality, an image emotion presentation modality, a video emotion presentation modality, and a mechanical-movement emotion presentation modality; and
a presentation module 320 for performing emotion presentation according to the at least one first emotion presentation modality when the at least one first emotion presentation modality meets an emotion presentation condition; and for determining, according to changes in user demand, dynamic changes in background control, and/or changes in application-scenario requirements, that the at least one first emotion presentation modality does not meet the emotion presentation condition, adjusting the at least one first emotion presentation modality in the first emotion presentation instruction, obtaining at least one second emotion presentation modality in a second emotion presentation instruction, and performing emotion presentation according to the at least one second emotion presentation modality.
In the technical solution provided by the embodiments of the present invention, a first emotion presentation instruction is obtained, the instruction including at least one first emotion presentation modality and at least one emotion type, the at least one first emotion presentation modality including a text emotion presentation modality; emotion presentation of one or more of the at least one emotion type is performed according to each emotion presentation modality in the at least one first emotion presentation modality, realizing a multimodal emotion presentation mode and thereby improving the user experience.
In embodiments of the present invention, the first emotion presentation instruction may be obtained by performing sentiment analysis recognition on emotion information, or may be determined directly by manual setting; the present invention does not limit this. For example, when a certain specific emotion is to be presented, a robot does not need to recognize the user's emotion, but presents it directly according to a manually set emotion presentation instruction.
Here, the input modality of the emotion information may include, but is not limited to, one or more of text, voice, image, gesture, and the like. For example, the user may input emotion information only in the form of text, or in a combination of text and voice; emotion information such as the user's facial expressions, speech intonation, and body movements may even be extracted by an acquisition device.
The first emotion presentation instruction is the output of emotion-intention understanding and emotion presentation instruction decision in the affective-computing user interface. The emotion presentation instruction should have an explicit, executable meaning and be easy to understand and accept. The content of the first emotion presentation instruction may include at least one first emotion presentation modality and at least one emotion type.
Specifically, the first emotion presentation modality may include a text emotion presentation modality, and may also include at least one of a sound emotion presentation modality, an image emotion presentation modality, a video emotion presentation modality, and a mechanical-movement emotion presentation modality; the present invention does not limit this. It should be noted that the final emotion presentation may use only a single emotion presentation modality, such as the text emotion presentation modality, or a combination of several emotion presentation modalities, such as the combination of the text and sound emotion presentation modalities, or the combination of the text, sound, and image emotion presentation modalities.
An emotion type (also referred to as an emotion component) can be represented by a categorical emotion model or a dimensional emotion model. The affective states of a categorical emotion model are discrete, so it is also called a discrete emotion model; a region and/or a set of at least one point in a multidimensional emotion space can be defined as one emotion type of the categorical emotion model. A dimensional emotion model constructs a multidimensional emotion space in which each dimension corresponds to a psychologically defined emotional factor; under a dimensional emotion model, an affective state is represented by coordinate values in the emotion space. In addition, a dimensional emotion model may be continuous or discrete.
Specifically, the discrete emotion model is the principal and recommended form of emotion types. The emotions presented by emotion information can be classified according to field and application scenario, and the emotion types in different fields or application scenarios may be the same or different. For example, in the general field, the commonly adopted basic emotion classification system defines a multidimensional emotion space with six dimensions: Joy, Sadness, Anger, Surprise, Fear, and Disgust. In the customer-service field, common emotion types may include, but are not limited to, joy, sadness, comfort, and dissuasion; in the companion-care field, common emotion types may include, but are not limited to, joy, sadness, curiosity, comfort, encouragement, and dissuasion.
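The per-field emotion type sets described above can be sketched as scenario-keyed tables. This is a minimal illustration using the examples from the text; the scenario keys and fallback behavior are our own assumptions:

```python
# Six basic emotion types of the general-field classification system.
BASIC_EMOTIONS = {"joy", "sadness", "anger", "surprise", "fear", "disgust"}

# Hypothetical scenario-to-emotion-type tables following the examples above.
SCENARIO_EMOTIONS = {
    "general": BASIC_EMOTIONS,
    "customer_service": {"joy", "sadness", "comfort", "dissuasion"},
    "companion_care": {"joy", "sadness", "curiosity", "comfort",
                       "encouragement", "dissuasion"},
}

def emotion_types_for(scenario):
    # Fall back to the general-field basic emotions for unknown scenarios.
    return SCENARIO_EMOTIONS.get(scenario, BASIC_EMOTIONS)
```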
The dimensional emotion model is a supplement to the emotion types and is currently used only for situations of continuous dynamic change and subsequent affective computation, for example where parameters need to be fine-tuned in real time or where the computation of the contextual affective state has far-reaching influence. The advantage of the dimensional emotion model is that it facilitates computation and fine-tuning, but for subsequent use it must be matched with the application parameters used for presentation.
In addition, each field has emotion types of primary concern (the emotion types the field focuses on when obtaining user information through emotion recognition) and emotion types that are primarily presented (the emotion types in emotion presentation or interaction instructions). The two may be two different groups of emotion categories (categorical emotion model) or different ranges of emotion dimensions (dimensional emotion model). In some application scenarios, a certain emotion instruction decision process is required to determine the primarily presented emotion types corresponding to the emotion types of primary concern in the field.
When the first emotion presentation instruction includes multiple emotion presentation modalities, the text emotion presentation modality is preferentially used to present the at least one emotion type, and then one or more of the sound, image, video, and mechanical-movement emotion presentation modalities are used to supplement the presentation of the at least one emotion type. Here, the emotion types presented as a supplement may be the emotion types not presented by the text emotion presentation modality, or the emotion types for which the emotional intensity and/or emotion polarity presented by the text emotion presentation modality does not meet the requirements of the first emotion presentation instruction.
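The text-first rule above can be sketched as a small planning function. All names are illustrative, and the supplement ordering is our own assumption rather than something the patent fixes:

```python
TEXT = "text"
SUPPLEMENT_ORDER = ("sound", "image", "video", "mechanical_movement")

def plan_modalities(instruction_modalities, types_unmet_by_text):
    """Text presents what it can; emotion types text leaves unmet
    (not presented, or presented at the wrong intensity/polarity)
    are assigned to supplementary modalities available in the
    instruction."""
    plan = {}
    if TEXT in instruction_modalities:
        plan[TEXT] = "primary"
    for modality in SUPPLEMENT_ORDER:
        if types_unmet_by_text and modality in instruction_modalities:
            plan[modality] = "supplement"
    return plan
```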
It should be noted that the first emotion presentation instruction may specify one or more emotion types, which can be ranked according to the intensity of each emotion type to determine the relative importance of each emotion type during emotion presentation. Specifically, if the emotional intensity of an emotion type is less than a preset emotional-intensity threshold, it may be required that, during emotion presentation, its emotional intensity not exceed that of the emotion types in the first emotion presentation instruction whose emotional intensities are greater than or equal to the threshold.
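The threshold-based ranking just described can be sketched as follows; the function signature and the primary/secondary split are our own illustration:

```python
def rank_emotion_types(type_intensities, threshold):
    """Split emotion types into primary (intensity >= threshold) and
    secondary (below threshold), each ordered by descending intensity.
    Secondary types may not be presented more strongly than primary ones."""
    primary = sorted(
        (t for t, v in type_intensities.items() if v >= threshold),
        key=lambda t: -type_intensities[t])
    secondary = sorted(
        (t for t, v in type_intensities.items() if v < threshold),
        key=lambda t: -type_intensities[t])
    return primary, secondary
```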
In embodiments of the present invention, the selection of the emotion presentation modality depends on the following factors: the emotion output device and its application state (for example, whether there is a display for text or images and whether a loudspeaker is connected), the interaction scenario type (for example, daily chat or business consultation), the dialogue type (for example, answers to frequently asked questions mainly reply with text, while navigation is mainly image-based, supplemented by voice), and so on.
Further, the output mode of the emotion presentation depends on the emotion presentation modality. For example, if the first emotion presentation modality is the text emotion presentation modality, the output mode of the final emotion presentation is text; if the first emotion presentation modality is mainly the text emotion presentation modality supplemented by the sound emotion presentation modality, the output mode of the final emotion presentation is a combination of text and voice. That is, the output of the emotion presentation may include only one emotion presentation modality or a combination of several emotion presentation modalities; the present invention does not limit this.
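The device-state check described above (display present, loudspeaker connected, and so on) can be sketched as a capability filter. The capability names are hypothetical:

```python
def output_mode(planned_modalities, device_caps):
    """Keep only the modalities the output device can actually render,
    e.g. drop sound when no loudspeaker is connected."""
    renderable = {
        "text": device_caps.get("display", False),
        "image": device_caps.get("display", False),
        "video": device_caps.get("display", False),
        "sound": device_caps.get("loudspeaker", False),
        "mechanical_movement": device_caps.get("actuators", False),
    }
    return [m for m in planned_modalities if renderable.get(m, False)]
```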
In the technical solution provided by the embodiments of the present invention, a first emotion presentation instruction is obtained, wherein the first emotion presentation instruction includes at least one first emotion presentation modality and at least one emotion type, the at least one first emotion presentation modality includes a text emotion presentation modality, and emotion presentation of one or more of the at least one emotion type is performed according to each emotion presentation modality in the at least one first emotion presentation modality, realizing text-based multimodal emotion presentation and thereby improving the user experience.
In another embodiment of the present invention, performing emotion presentation of one or more of the at least one emotion type according to each emotion presentation modality in the at least one first emotion presentation modality includes: searching an emotion presentation database according to the at least one emotion type to determine at least one emotion word corresponding to each emotion type in the at least one emotion type; and presenting the at least one emotion word.
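A minimal sketch of such a database lookup, assuming a simple in-memory mapping from emotion type to candidate words (the table contents are illustrative, not from the patent):

```python
# Toy emotion presentation database: emotion type -> candidate words.
EMOTION_DB = {
    "joy": ["happy", "excited", "delighted"],
    "sadness": ["sad", "gloomy", "painful"],
}

def lookup_emotion_words(emotion_types, db=EMOTION_DB):
    """Return the candidate emotion words for each requested emotion
    type; an unknown type yields an empty candidate list."""
    return {t: db.get(t, []) for t in emotion_types}
```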
Specifically, the emotion presentation database may be preset by manual labeling, learned from big data, obtained through semi-supervised human-machine collaboration combining partial learning with partial manual work, or even obtained by training an entire interactive system on a large amount of emotional dialogue data. It should be noted that the emotion presentation database allows online learning and updating.
The emotion words and their emotion type, emotional intensity, and emotion polarity parameters may be stored in the emotion presentation database, or may be obtained through an external interface. In addition, the emotion presentation database includes sets of emotion words and corresponding parameters for multiple application scenarios, so the emotion words can be switched and adjusted according to the actual application situation.
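One way to model a database record carrying these per-scenario parameters is a small dataclass; the field names and the scenario-switching helper are our own assumptions:

```python
from dataclasses import dataclass

@dataclass
class EmotionWordEntry:
    """One database record: a word plus its parameters for one scenario."""
    word: str
    emotion_type: str
    intensity: int
    polarity: str   # "positive" | "negative" | "neutral"
    scenario: str

def for_scenario(entries, scenario):
    # Switching application scenarios selects the matching parameter set.
    return [e for e in entries if e.scenario == scenario]
```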
Emotion words can be classified according to the user affective states of concern under a given application scenario. That is, the emotion type, emotional intensity, and emotion polarity of the same emotion word depend on the application scenario. For example, in the general field with no special application requirements, the Chinese emotion words can be classified according to the above six basic emotion types, yielding the emotion types and corresponding example words and phrases shown in Table 1.
Table 1
Number  Emotion type  Example words
1       Joy           happy, good, excited, delighted, terrific, ...
2       Sadness       sad, painful, gloomy, grief-stricken, ...
3       Anger         indignant, irritated, angry, furious, ...
4       Surprise      strange, surprised, astonishing, dumbfounded, ...
5       Fear          flustered, panicked, at a loss, startled, ...
6       Disgust       dislike, hateful, loathe, resent, blame, apologetic, ...
It should be noted that the example words in Table 1 are recommended examples of emotion words classified by the main emotion types under general-field application scenarios. The above six emotion types are not fixed; in practical applications, the emotion types of the emotion words can be adjusted according to the application scenario, for example by adding emotion types of special concern or deleting emotion types with no particular application.
In addition, the same emotion word may have different interpretations in different contexts and thus express different emotions; that is, its emotion type, emotion polarity, and the like may change. Therefore, emotion disambiguation needs to be performed on the same emotion word according to the application scenario and context to determine its emotion type.
Specifically, the Chinese emotion words can be emotionally annotated automatically, manually, or by a combination of both. For words with multiple emotion types, emotion disambiguation can be performed based on part of speech, emotion frequency, Bayesian models, and the like. Furthermore, the emotion type of an emotion word in context can also be judged by constructing a context-dependent feature set.
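A toy illustration of context-feature disambiguation: pick the reading whose context cue words overlap most with the sentence. This is a stand-in for the part-of-speech, frequency, and Bayesian methods mentioned above; the cue sets and emotion labels are invented for the example:

```python
# Hypothetical context cues for two readings of the same word.
CONTEXT_CUES = {
    ("blame", "disgust"): {"fault", "wrong", "accuse"},
    ("blame", "remorse"): {"sorry", "apologetic", "myself"},
}

def disambiguate(word, context_tokens):
    """Return the emotion label whose cue set best overlaps the context."""
    best_type, best_score = None, -1
    for (w, emo_type), cues in CONTEXT_CUES.items():
        if w != word:
            continue
        score = len(cues & set(context_tokens))
        if score > best_score:
            best_type, best_score = emo_type, score
    return best_type
```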
In another embodiment of the present invention, each emotion type in the at least one emotion type corresponds to multiple emotion words, and the first emotion presentation instruction further includes: the emotional intensity corresponding to each emotion type in the at least one emotion type and/or the emotion polarity corresponding to each emotion type in the at least one emotion type. Searching the emotion presentation database according to the at least one emotion type to determine the at least one emotion word corresponding to each emotion type includes: selecting the at least one emotion word from the multiple emotion words according to the emotional intensity and/or the emotion polarity.
Specifically, each emotion type may correspond to multiple emotion words, and the content of the first emotion presentation instruction may include the emotional intensity and/or emotion polarity corresponding to each emotion type; at least one emotion word is then selected from the multiple emotion words according to the emotional intensity and/or emotion polarity.
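The selection step can be sketched as a filter over a type's candidate words; the candidate representation (word mapped to an intensity/polarity pair) is our own illustration:

```python
def select_words(candidates, want_intensity, want_polarity=None):
    """Filter one emotion type's candidate words by the intensity and
    (optionally) polarity carried in the presentation instruction.
    candidates maps word -> (intensity, polarity)."""
    return [w for w, (i, p) in candidates.items()
            if i == want_intensity
            and (want_polarity is None or p == want_polarity)]
```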
Here, emotional intensity is a psychological factor describing the selective tendency a person develops toward things as part of an emotion; in this application it is used to describe the intensity level of a certain emotion. Emotional intensity can be divided into different intensity levels according to the application scenario, for example 2 levels (i.e., with and without emotional intensity), 3 levels (i.e., low, medium, and high emotional intensity), or more; the present invention does not limit this.
Under a specific application scenario, the emotion type and emotional intensity of the same emotion word are in one-to-one correspondence. In practical applications, the emotional intensity of the first emotion presentation instruction must first be graded, because this emotional intensity determines the intensity level at which the emotion is finally presented; the intensity grading of the emotion words is then determined according to the intensity levels of the first emotion presentation instruction. It should be noted that the emotional intensity in the present invention is determined by the emotion presentation instruction decision process. It should also be noted that the emotional-intensity analysis must match the intensity levels of the first emotion presentation instruction, and the correspondence between the two can be obtained through a certain operation rule.
The emotion polarity may include one or more of positive, negative, and neutral. Each emotion type specified by the first emotion presentation instruction corresponds to one or more emotion polarities. Specifically, taking the emotion type "Disgust" in Table 1 as an example, among the example words corresponding to this emotion type, the polarity of "blame" is negative, while the polarity of "apologetic" is neutral. It should be noted that the emotion polarity in the present invention is determined by the emotion presentation instruction decision process, which may be a process that generates presentation instructions according to one or more of the user's affective state, interaction intention, application scenario, and other information. Alternatively, the emotion presentation instruction decision process may adjust the emotion polarity according to the application scenario and user demand, and actively decide on an emotion presentation instruction without capturing the user's affective state and intention information; for example, regardless of the user's state and intention, a reception robot may always present the "Joy" emotion.
In another embodiment of the present invention, the at least one emotion vocabulary item is divided into different levels according to different emotional intensities.
Specifically, the grading of the emotion vocabulary is finer than the grading of the emotional intensity specified by the first emotion presentation instruction, so the presentation rule is less strict and its result converges more easily. That is, the emotion vocabulary has more levels than the emotional intensity, but it must still be possible to map those levels, through certain operation rules, onto the emotional intensity specified by the first emotion presentation instruction, without exceeding the upper and lower bounds of the specified intensity grading.
As an example, assume that the emotional intensity given by the first emotion presentation instruction is graded as presentation intensity level 0 (low), level 1 (medium) and level 2 (high), while the emotion vocabulary is graded as vocabulary intensity levels 0 through 5. The operation rule then needs to map the intensity of the emotion vocabulary in the current text (vocabulary intensity levels 0 to 5) onto the intensity of the first emotion presentation instruction (presentation intensity levels 0 to 2), and must not exceed the range of that intensity. If a presentation intensity of level -1 or level 3 appears, the range of the first emotion presentation instruction has been exceeded, which indicates that the matching rule or the intensity grading is unreasonable.
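The level mapping described above can be sketched as a small function. The linear proportional rule and the function name are assumptions for illustration; the patent only requires that some operation rule map the finer vocabulary scale onto the coarser presentation scale without leaving its bounds.

```python
def map_vocab_to_presentation_level(vocab_level: int,
                                    vocab_max: int = 5,
                                    presentation_max: int = 2) -> int:
    """Proportionally compress a vocabulary intensity level (0..vocab_max)
    into the presentation intensity range (0..presentation_max)."""
    if not 0 <= vocab_level <= vocab_max:
        raise ValueError("vocabulary intensity outside the graded range")
    # Linear proportional mapping, rounded to the nearest presentation level.
    level = round(vocab_level * presentation_max / vocab_max)
    # Guard: the result must stay within the instruction's intensity bounds,
    # i.e. no level -1 or level 3 may ever be produced.
    assert 0 <= level <= presentation_max
    return level
```

With these scales, vocabulary levels 0-5 land on presentation levels 0, 0 or 1, 1, 2 and 2, so the result can never fall outside the instruction's range.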
It should be noted that it is usually recommended to grade the emotional intensity of the emotion presentation instruction first, because that intensity determines the level at which the emotion is finally presented; after the intensity levels of the instruction are determined, the intensity grading of the emotion vocabulary is then determined.
In another embodiment of the present invention, each item of the at least one emotion vocabulary includes one or more affective types, and the same vocabulary item has different affective types and emotional intensities under different application scenarios.
Specifically, each emotion vocabulary item possesses one or more affective types, and the same item may have different affective types and emotional intensities under different application scenarios. Taking the emotion word "good" as an example, when its affective type is "happy" its feeling polarity is commendatory, and when its affective type is "angry" its feeling polarity is derogatory.
In addition, the same emotion word may have different senses in different contexts and thus express different emotions; that is, its affective type, feeling polarity and so on may change. It is therefore necessary to perform emotion disambiguation on the word according to the application scenario and the context in order to determine its affective type.
Specifically, Chinese emotion vocabulary can be annotated with emotions automatically, manually, or by a combination of both. For words with multiple affective types, emotion disambiguation can be performed based on part of speech, emotion frequency, Bayesian models and the like; the affective type of a word in context can also be judged by constructing a context-dependent feature set.
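A minimal sketch of context-based disambiguation for a word with several candidate affective types. The lexicon entries, cue words and scoring weights below are invented placeholders, not the patent's data; a real system would use the Bayesian or feature-set methods mentioned above.

```python
# word -> {candidate affective type: prior frequency} (illustrative values)
CANDIDATES = {
    "good": {"happy": 0.7, "angry": 0.3},
}
# context word -> the affective type it supports (illustrative values)
CONTEXT_CUES = {
    "thanks": "happy", "great": "happy", "enough": "angry",
}

def disambiguate(word, context):
    """Pick the candidate type with the highest prior-plus-context score."""
    scores = dict(CANDIDATES.get(word, {}))
    if not scores:
        return "neutral"          # no candidates: no affective type
    for cue in context:
        label = CONTEXT_CUES.get(cue)
        if label in scores:
            scores[label] += 1.0  # one context cue outweighs the prior
    return max(scores, key=scores.get)
```

Without context the prior wins ("good" reads as "happy"); a cue such as "enough" flips the same word to "angry", mirroring the scenario-dependent behaviour described above.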
In another embodiment of the present invention, the emotion vocabulary is a polynary (multi-word) emotion vocabulary comprising a combination of several words, where no word in the combination has the affective type attribute on its own.
Specifically, a word may carry no affective type by itself, yet several words combined may carry a certain affective type and can be used to convey emotion information; such a combination of words is called a polynary emotion vocabulary. Polynary emotion vocabularies can be obtained from a preset emotion semantic database, or through preset logic rules or an external interface; the invention is not limited in this regard.
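The idea can be sketched as a scan over a token list against a lexicon of word combinations; the entries below are invented English examples of phrases whose members are individually neutral.

```python
# combination -> (affective type, intensity); illustrative entries only
MULTIWORD_LEXICON = {
    ("over", "the", "moon"): ("happy", 4),
    ("down", "in", "the", "dumps"): ("sad", 3),
}

def find_multiword_emotions(tokens):
    """Return (combination, affective type, intensity) for every known
    multi-word emotion vocabulary found in the token list."""
    hits = []
    for combo, (etype, strength) in MULTIWORD_LEXICON.items():
        k = len(combo)
        for i in range(len(tokens) - k + 1):
            if tuple(tokens[i:i + k]) == combo:
                hits.append((combo, etype, strength))
    return hits
```

"over", "the" and "moon" have no affective type alone, but the combination is detected as "happy", which is exactly the polynary-vocabulary behaviour described above.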
In another embodiment of the present invention, the presentation module performs, according to each of the at least one first emotion presentation mode, the emotion presentation of affective types not specified by the first emotion presentation instruction, where the emotional intensity corresponding to an unspecified affective type is lower than the emotional intensity corresponding to the at least one specified affective type, or the feeling polarity of the unspecified affective type is consistent with that of the at least one specified affective type.
Specifically, apart from the affective types specified in the first emotion presentation instruction, the other affective types in the text are assigned, according to a set intensity correspondence or formula, emotional intensities lower than those of all the affective types specified in the instruction. In other words, the intensities corresponding to the unspecified affective types do not affect the emotion presentation of the affective types in the first emotion presentation instruction.
In another embodiment of the present invention, the presentation module determines the emotional intensity of at least one affective type in the emotion presentation text composed of the at least one emotion vocabulary item, and judges, based on that intensity, whether the emotional intensity of the at least one affective type satisfies the first emotion presentation instruction, where the emotional intensity of the i-th affective type in the emotion presentation text can be calculated by the following formula:
round[n/N × 1/(1+exp(-n+1)) × max{a1, a2, …, an}],

where round(X) denotes rounding X to the nearest integer, n denotes the number of emotion words of the i-th affective type, N denotes the number of emotion words in the emotion presentation text, M denotes the number of affective types among the N emotion words, exp(x) denotes the exponential function with the natural constant e as its base, a1, a2, …, an denote the emotional intensities of the n emotion words under the affective type, and max{a1, a2, …, an} denotes the maximum of those intensities, where n, N and M are positive integers.
Specifically, let N=5, M=1 and n=5 in the above formula, with max{a1, a2, a3, a4, a5}=5; then the emotional intensity of the affective type is 5. Here N=5 means there are 5 emotion words in the text in total, and M=1 means the 5 emotion words share a single affective type, so only one calculation is needed to obtain the emotional intensity of the affective type of the text.
Optionally, let N=5 and M=3 in the above formula. For the first emotion A, if n=3 and max{a1, a2, a3}=4, the emotional intensity of the affective type of emotion A is 2; for the first emotion B, if n=1 and max{b1}=4, the emotional intensity of the affective type of emotion B is 1; for the first emotion C, if n=1 and max{c1}=2, the emotional intensity of the affective type of emotion C is 0. Here N=5 means there are 5 emotion words in the text in total, and M=3 means the 5 emotion words cover three affective types, so three calculations are needed to obtain the emotional intensities of the affective types of the text.
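The intensity formula can be transcribed directly, assuming round() means rounding to the nearest integer. This is a sketch of the formula as stated, evaluated per affective type.

```python
import math

def type_intensity(n, N, intensities):
    """Intensity of one affective type in a text of N emotion words,
    n of which belong to this type with intensities a1..an:
    round(n/N * 1/(1+e^(-n+1)) * max{a1..an})."""
    return round(n / N * 1.0 / (1.0 + math.exp(-n + 1)) * max(intensities))
```

For the single-type case (n=N=5, max=5) the formula yields 5, and for emotion A (n=3, max=4) and emotion C (n=1, max=2) with N=5 it yields 2 and 0, matching the worked values above.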
Meanwhile, the feeling polarity of the i-th affective type in the text can be calculated by the following formula:

B = Sum(x1×(a1/max{a}), x2×(a2/max{a}), …, xn×(an/max{a}))/n,

where Sum(X) denotes the summation of X, max{a} denotes the maximum emotional intensity over all emotion words of the M affective types, a1, a2, …, an denote the emotional intensities of the n emotion words under the affective type concerned, and x1, x2, …, xn denote the feeling polarities of those n emotion words.
It should be noted that the above formula needs to be evaluated separately for each of the M affective types to obtain the feeling polarity under each type.
Further, if B > 0.5, the feeling polarity is commendatory; if B < -0.5, the feeling polarity is derogatory; and if 0.5 ≥ B ≥ -0.5, the feeling polarity is neutral.
It should be noted that the feeling polarities may be quantized as follows: commendatory is +1, derogatory is -1 and neutral is 0, and the values can be adjusted as needed. It should also be noted that the feeling polarity of an affective type is not allowed to change abruptly, for example from commendatory to derogatory or from derogatory to commendatory.
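Combining the polarity formula with the quantization (+1/0/-1) and the thresholds, the per-type polarity can be sketched as follows; passing max{a} explicitly reflects that it is taken over all emotion words of all M types, not only the type being scored.

```python
def type_polarity(polarities, intensities, a_max=None):
    """B = mean over the type's n words of x_i * (a_i / max{a}),
    with x_i in {+1, 0, -1}; returns (B, polarity label)."""
    if a_max is None:
        a_max = max(intensities)  # fallback: per-type maximum
    n = len(polarities)
    B = sum(x * (a / a_max) for x, a in zip(polarities, intensities)) / n
    if B > 0.5:
        return B, "commendatory"
    if B < -0.5:
        return B, "derogatory"
    return B, "neutral"
```

Three uniformly commendatory words give B = 1 (commendatory); one commendatory and one derogatory word of equal intensity cancel to B = 0 (neutral), consistent with the thresholds above.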
In another embodiment of the present invention, performing the emotion presentation of one or more of the at least one affective type according to each of the at least one first emotion presentation mode comprises: when the at least one first emotion presentation mode meets the emotion presentation condition, performing the emotion presentation according to the at least one first emotion presentation mode.
Specifically, a first emotion presentation mode meets the emotion presentation condition when both the emotion output device and the user output device support that presentation mode, for example text, voice or pictures. Taking bank customer service as an example, suppose a user wants to query the address of a certain bank. The emotion strategy module first generates a first emotion presentation instruction based on the user's emotion information; in that instruction, the primary presentation mode of the first emotion presentation mode is "text" and the secondary presentation modes are "image" and "voice". The emotion output device and the user output device are then detected; if both are found to support the three presentation modes of text, image and voice, the bank's address is presented to the user primarily as text, supplemented by image and voice.
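The device check described above reduces to a set intersection; this sketch assumes device capabilities are available as collections of mode names, which is an illustrative simplification of the detection step.

```python
def modes_satisfied(instruction_modes, emotion_device_caps, user_device_caps):
    """A presentation instruction meets the emotion presentation condition
    only if every requested mode is supported by BOTH the emotion output
    device and the user output device."""
    supported = set(emotion_device_caps) & set(user_device_caps)
    return all(m in supported for m in instruction_modes)
```

In the bank example, ["text", "image", "voice"] passes when both devices support all three modes, and fails as soon as either device lacks one of them.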
In another embodiment of the present invention, when determining that the at least one first emotion presentation mode does not meet the emotion presentation condition, the presentation module generates a second emotion presentation instruction according to the first emotion presentation instruction; the second emotion presentation instruction includes at least one second emotion presentation mode obtained by adjusting the at least one first emotion presentation mode, and the emotion presentation is performed based on the at least one second emotion presentation mode.
Specifically, the at least one first emotion presentation mode fails to meet the emotion presentation condition when at least one of the emotion output device and the user output device does not support the presentation mode, or when that mode needs to be changed temporarily due to a dynamic change (for example an output device failure, a change in user demand, a dynamic change in background control and/or a change in the application scenario's requirements). In this case, the at least one first emotion presentation mode needs to be adjusted to obtain at least one second emotion presentation mode, based on which the emotion presentation is performed.
Here, the process of adjusting the at least one first emotion presentation mode may be called secondary adjustment of the emotion presentation mode. The secondary adjustment can temporarily adjust the output policy and priority of the emotion presentation modes according to the dynamic change, so as to debug errors, optimize, and preferentially select emotion presentation modes.
The at least one second emotion presentation mode may include at least one of a text emotion presentation mode, a sound emotion presentation mode, an image emotion presentation mode, a video emotion presentation mode and a mechanical-movement emotion presentation mode.
In another embodiment of the present invention, generating the second emotion presentation instruction according to the first emotion presentation instruction when the at least one first emotion presentation mode is determined not to meet the emotion presentation condition comprises: determining that the at least one first emotion presentation mode does not meet the emotion presentation condition upon detecting that a user output device failure affects the presentation of the first emotion presentation mode, or that the user output device does not support the presentation of the first emotion presentation mode; and adjusting at least one first emotion presentation mode in the first emotion presentation instruction to obtain at least one second emotion presentation mode in the second emotion presentation instruction.
Specifically, the cases in which the at least one first emotion presentation mode does not meet the emotion presentation condition may include, but are not limited to, a user output device failure affecting the presentation of the first emotion presentation mode, and the user output device not supporting that presentation. Therefore, when it is determined that the condition is not met, at least one first emotion presentation mode in the first emotion presentation instruction needs to be adjusted to obtain at least one second emotion presentation mode in the second emotion presentation instruction.
Here, again taking bank customer service as an example, suppose a user wants to query the address of a certain bank. The emotion strategy module first generates a first emotion presentation instruction based on the user's emotion information; in that instruction, the primary presentation mode of the first emotion presentation mode is "text", the secondary presentation modes are "image" and "voice", the affective type is "pleasure" and the emotional intensity is "medium". The emotion output device and the user output device are then detected. If the user output device is found not to support picture (i.e. map) display, the first emotion presentation mode is considered not to meet the emotion presentation condition, so it is adjusted to obtain a second emotion presentation mode whose primary presentation mode is "text" and secondary presentation mode is "voice", with the affective type "pleasure" and the emotional intensity "medium". Finally, the bank's address is presented to the user primarily as text, supplemented by voice, and the user is prompted that the map cannot currently be displayed, or that the map display failed and can be viewed on another device.
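The secondary adjustment in the bank example can be sketched as deriving a second instruction from the first by dropping unsupported modes while keeping the affective type and intensity unchanged. The dict layout and the notice text are assumptions for illustration.

```python
def adjust_instruction(first, supported_modes):
    """Derive a second presentation instruction from the first: drop the
    modes the devices cannot render, keep affect fields, and record a
    user-facing notice for each dropped mode."""
    second = dict(first)
    second["modes"] = [m for m in first["modes"] if m in supported_modes]
    if not second["modes"]:
        raise RuntimeError("no presentation mode left after adjustment")
    dropped = [m for m in first["modes"] if m not in supported_modes]
    second["notice"] = [f"{m} display unavailable on this device"
                        for m in dropped]
    return second
```

With modes ["text", "image", "voice"] and a device supporting only text and voice, the result keeps "pleasure"/"medium" intact, drops "image", and carries a prompt, mirroring the map scenario above.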
Optionally, as another embodiment, generating the second emotion presentation instruction according to the first emotion presentation instruction when the at least one first emotion presentation mode is determined not to meet the emotion presentation condition comprises: determining that the at least one first emotion presentation mode does not meet the emotion presentation condition according to a change in user demand, a dynamic change in background control and/or a change in the application scenario's requirements; and adjusting at least one first emotion presentation mode in the first emotion presentation instruction to obtain at least one second emotion presentation mode in the second emotion presentation instruction.
Specifically, the cases in which the at least one first emotion presentation mode does not meet the emotion presentation condition may also include, but are not limited to, a change in user demand, a dynamic change in background control and/or a change in the application scenario's requirements. Therefore, when it is determined that the condition is not met, at least one first emotion presentation mode in the first emotion presentation instruction needs to be adjusted to obtain at least one second emotion presentation mode in the second emotion presentation instruction.
Here, again taking bank customer service as an example, suppose a user wants to query the address of a certain bank. The emotion strategy module first generates a first emotion presentation instruction based on the user's emotion information; in that instruction, the primary presentation mode of the first emotion presentation mode is "text", the secondary presentation mode is "voice", the affective type is "pleasure" and the emotional intensity is "medium". Then, upon receiving a user request to show the bank's address as a combination of text and map, it is determined that the first emotion presentation mode does not meet the emotion presentation condition, and the first emotion presentation mode is adjusted accordingly to obtain a second emotion presentation mode whose primary presentation mode is "text" and secondary presentation mode is "image", with the affective type "pleasure" and the emotional intensity "medium". Finally, the bank's address is presented to the user primarily as text, supplemented by image.
For an emotion presentation that does not meet the emotion presentation instruction, feedback must be given to the dialogue system to readjust the output, and the judgment is repeated until the output text meets the emotion presentation instruction. The dialogue system's feedback adjustment may include, but is not limited to, the following two approaches: one is to directly replace individual emotion words in the current sentence without adjusting the sentence pattern, so as to reach the presentation standard of the emotion presentation instruction; this approach suits cases where the affective type and emotional intensity differ only slightly. The other requires the dialogue system to regenerate the sentence; this approach suits cases where the affective type and emotional intensity differ greatly.
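The choice between the two feedback strategies can be sketched as a simple decision rule. The intensity-gap threshold of 1 level is an assumed quantification of "differ only slightly"; the patent does not fix a numeric boundary.

```python
def choose_adjustment(type_matches, intensity_gap):
    """Pick a feedback-adjustment strategy: small mismatches are fixed by
    replacing individual emotion words in the current sentence; large ones
    require the dialogue system to regenerate the sentence."""
    if type_matches and abs(intensity_gap) <= 1:
        return "replace-words"
    return "regenerate-sentence"
```

A matching affective type that is one intensity level off only needs word replacement; a wrong affective type, or a gap of several levels, triggers full regeneration.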
It should be noted that the first emotion presentation mode of the present invention is based on the text emotion presentation mode, but a sound emotion presentation mode, an image emotion presentation mode, a video emotion presentation mode, a mechanical-movement emotion presentation mode and the like may be selected or added for emotion presentation according to user demand, the application scenario and so on.
Specifically, the sound emotion presentation mode may include voice broadcast based on text content, and may also include music, other sounds and the like; the invention is not limited in this regard. In this case, the emotion presentation database stores not only the emotion vocabulary corresponding to the different affective types under the application scenario (used to analyze the affective type of the text corresponding to the voice), but also the audio parameters corresponding to the different affective types (such as fundamental frequency, formants, energy features, harmonic-to-noise ratio, voiced frame count features, mel cepstral coefficients and the like), or the audio features and parameters corresponding to particular affective types extracted through training.
Further, the affective type of a voice broadcast derives from two parts, namely the affective type A of the broadcast text and the affective type B of the audio signal; combining A and B yields the affective type of the voice broadcast. For example, the affective type and emotional intensity of the voice broadcast may be the average (or a weighted sum or similar operation) of the affective types and emotional intensities of A and B. Sound without text information (music, other sounds and the like) can, on the one hand, be classified by various audio parameters; on the other hand, part of the audio data can be manually labeled, features extracted, and the affective type and emotional intensity of the sound judged by supervised learning.
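The averaging/weighted-sum combination of the text affect A and the audio affect B can be sketched for the intensity dimension as follows; the equal default weights are one of the combinations the text mentions, and weighted summation is a drop-in alternative.

```python
def combine_voice_affect(text_intensity, audio_intensity,
                         w_text=0.5, w_audio=0.5):
    """Combine the intensity of the broadcast text (A) with the intensity
    of the audio signal (B) by a weighted sum; equal weights give the
    plain average mentioned in the text."""
    return w_text * text_intensity + w_audio * audio_intensity
```

With text intensity 4 and audio intensity 2, the plain average gives 3.0, while weighting the text 3:1 over the audio gives 3.5.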
The image emotion presentation mode may include, but is not limited to, faces, picture expressions, icons, patterns, animations, videos and the like. In this case, the emotion presentation database needs to store the image parameters corresponding to the different affective types. The affective type and emotional intensity of image data can be obtained by automatic detection combined with manual annotation, or judged by extracting features through supervised learning.
The mechanical-movement emotion presentation mode may include, but is not limited to, the activities and movements of each part of a robot, the mechanical movements of various hardware output devices, and the like. In this case, the emotion presentation database needs to store the activity and movement parameters corresponding to the different affective types. These parameters can be stored in the database in advance, or extended and updated through online learning; the invention is not limited in this regard. After receiving the emotion presentation instruction, the mechanical-movement emotion presentation mode can select and execute suitable activity and movement schemes according to the affective type and emotional intensity of the instruction. It should be noted that the output of the mechanical-movement emotion presentation mode must take safety into consideration.
All the above optional technical solutions can be combined in any manner to form optional embodiments of the present invention, which are not described here one by one.
Fig. 2 is a block diagram of an emotion information presentation device 400 according to another exemplary embodiment of the present invention. As shown in Fig. 2, the device 400 includes:
an obtaining module 410, configured to obtain the emotion information of a user.
In an embodiment of the present invention, the user's emotion information can be obtained through text, voice, images, gestures and other means.
An identification module 420, configured to perform emotion recognition on the emotion information to obtain an affective type.
In an embodiment of the present invention, word segmentation is performed on the emotion information according to preset segmentation rules to obtain multiple emotion words. Here, the segmentation rules may include any of the forward maximum matching method, the reverse maximum matching method, word-by-word traversal, and word frequency statistics. The segmentation may use one or more of the bidirectional maximum matching method, the Viterbi algorithm, the Hidden Markov Model (HMM) algorithm and the Conditional Random Field (CRF) algorithm.
Then, similarity is computed between the multiple emotion words and the multiple preset emotion words stored in the emotion vocabulary semantic base, and the emotion word with the highest similarity is taken as the matched emotion word.
Specifically, if the text contains an emotion word present in the emotion vocabulary semantic base, the corresponding affective type and emotional intensity are extracted directly. If the text contains no emotion word from the semantic base, similarity is computed one by one between the segmentation results and the contents of the semantic base, or an attention mechanism is introduced to select several key words from the segmentation results for similarity computation against the semantic base; if the similarity exceeds a certain threshold, the affective type and emotional intensity of the most similar word in the semantic base are used as those of the word in question. If no emotion word is found in the existing semantic base and no similarity exceeds the threshold, the text is considered to contain no emotion word, so the output affective type is empty or neutral and the emotional intensity is zero. It should be noted that this output needs to match the emotion presentation instruction decision process; that is, the decision process must allow the affective type to be empty or neutral.
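The lookup-then-similarity fallback can be sketched as below. The lexicon entries, the 0.6 threshold and the character-overlap measure are stand-in assumptions; a real implementation would use one of the VSM / LSI / attribute-theory / Hamming-distance similarity methods.

```python
# illustrative lexicon: word -> (affective type, intensity)
LEXICON = {"delighted": ("happy", 4), "furious": ("angry", 5)}

def char_overlap(a, b):
    """Toy similarity: share of common characters (stand-in for VSM etc.)."""
    common = len(set(a) & set(b))
    return common / max(len(set(a)), len(set(b)))

def lookup(word, threshold=0.6):
    """Exact lexicon hit -> stored type/intensity; otherwise the most
    similar entry above the threshold; otherwise neutral with intensity 0."""
    if word in LEXICON:
        return LEXICON[word]
    best = max(LEXICON, key=lambda w: char_overlap(word, w))
    if char_overlap(word, best) >= threshold:
        return LEXICON[best]
    return ("neutral", 0)
```

"furious" is an exact hit; "furiously" falls back to the most similar entry; a word with no sufficiently similar entry yields the empty/neutral output with intensity zero, as required by the decision process.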
Here, the similarity computation may use one or a combination of a method based on the Vector Space Model (VSM), a method based on the Latent Semantic Indexing (LSI) model, a semantic similarity method based on attribute theory, and a semantic similarity method based on the Hamming distance.
Further, the affective type is obtained based on the matched emotion word. Besides the affective type, the emotional intensity, the feeling polarity and the like can also be obtained.
A parsing module 430, configured to perform intention analysis on the emotion information based on the affective type.
In an embodiment of the present invention, the affective type and emotional intensity, and optionally the feeling polarity, are obtained based on the parsing of the intention and the preset emotion presentation instruction decision process. The intention analysis may be obtained from text or by capturing the user's movements; the invention is not limited in this regard. Specifically, the intention may be obtained by performing word segmentation, punctuation processing or word combination on the text information of the emotion information, from the semantic content of the emotion and the user information, or by capturing emotion information such as the user's expressions and movements; the invention is not limited in this regard.
An instruction generation module 440, configured to generate a first emotion presentation instruction based on the intention and the preset emotion presentation instruction decision process, the first emotion presentation instruction including at least one first emotion presentation mode and at least one affective type, the at least one first emotion presentation mode including a text emotion presentation mode.
In an embodiment of the present invention, the emotion presentation instruction decision process is the process of generating the emotion presentation instruction from the affective state (affective type) obtained by emotion recognition, the intent information, the context and other contents.
A judgment module 450, configured to judge whether the at least one first emotion presentation mode meets the emotion presentation condition; if it does, the emotion presentation of one or more of the at least one affective type is performed according to each of the at least one first emotion presentation mode; if it does not, a second emotion presentation instruction is generated according to the first emotion presentation instruction, the second emotion presentation instruction including at least one second emotion presentation mode obtained by adjusting the at least one first emotion presentation mode.
A presentation module 460, configured to perform emotion presentation based on the at least one second emotion presentation mode.
According to the technical solution provided by the embodiments of the present invention, whether the first emotion presentation mode meets the emotion presentation condition is judged, and the final emotion presentation mode is adjusted based on the judgment result, which improves real-time performance and further improves the user experience.
This embodiment further provides an electronic device including any of the above emotion information presentation devices.
The electronic device may be a mobile phone, a robot, a notebook, a tablet computer or the like.
The emotion information presentation device in the electronic device is not described in detail here; please refer to the preceding description.
Those of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An emotion information presentation apparatus, characterized by comprising:
an obtaining module, configured to obtain a first emotion presentation instruction, wherein the first emotion presentation instruction comprises at least one first emotion presentation mode and at least one emotion type, and the at least one first emotion presentation mode comprises at least one of a text emotion presentation mode, a sound emotion presentation mode, an image emotion presentation mode, a video emotion presentation mode, and a mechanical-movement emotion presentation mode; and
a presentation module, configured to: when the at least one first emotion presentation mode satisfies an emotion presentation condition, perform emotion presentation according to the at least one first emotion presentation mode; and, when it is determined according to a change in user demand, a dynamic change in background control, and/or a change in application scenario demand that the at least one first emotion presentation mode does not satisfy the emotion presentation condition, adjust the at least one first emotion presentation mode in the first emotion presentation instruction to obtain at least one second emotion presentation mode in a second emotion presentation instruction, and perform emotion presentation according to the at least one second emotion presentation mode.
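The fallback logic of claim 1 — present with the first modes if they satisfy the presentation condition, otherwise adjust and present with the resulting second modes — can be illustrated with a minimal sketch. All names here (`present_emotion`, the instruction dictionary keys, the example modes) are illustrative and not taken from the patent text:

```python
def present_emotion(instruction, meets_condition, adjust):
    """Sketch of the presentation module of claim 1: use the first
    emotion presentation modes if they meet the presentation condition;
    otherwise adjust them (e.g., per user-demand, background-control, or
    application-scenario changes) and use the resulting second modes."""
    modes = instruction["modes"]
    if meets_condition(modes):
        return ("first", modes)
    second_modes = adjust(modes)
    return ("second", second_modes)

# Example: the text mode fails the condition, so the module falls
# back to an adjusted (second) mode list containing the sound mode.
result = present_emotion(
    {"modes": ["text"], "types": ["joy"]},
    meets_condition=lambda ms: "sound" in ms,
    adjust=lambda ms: ["sound"],
)
print(result)  # ('second', ['sound'])
```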
2. The emotion information presentation apparatus according to claim 1, wherein the presentation module searches an emotion presentation database according to the at least one emotion type to determine at least one emotion vocabulary corresponding to each emotion type in the at least one emotion type, and presents the at least one emotion vocabulary.
3. The emotion information presentation apparatus according to claim 2, wherein each emotion type in the at least one emotion type corresponds to multiple emotion vocabularies, and the first emotion presentation instruction further comprises: an emotional intensity corresponding to each emotion type in the at least one emotion type and/or an emotional polarity corresponding to each emotion type in the at least one emotion type,
wherein the presentation module selects the at least one emotion vocabulary from the multiple emotion vocabularies according to the emotional intensity and/or the emotional polarity.
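The vocabulary selection of claim 3 — filtering a type's candidate vocabularies by intensity and/or polarity — might look like the following sketch. The function name, the dictionary fields, and the sample lexicon entries are all hypothetical, chosen only to illustrate the filtering step:

```python
def select_vocabulary(candidates, target_intensity=None, target_polarity=None):
    """Filter candidate emotion vocabularies by emotional intensity
    and/or emotional polarity, as described in claim 3. Each candidate
    is a dict with 'word', 'intensity' (a level), and 'polarity'."""
    selected = []
    for entry in candidates:
        if target_polarity is not None and entry["polarity"] != target_polarity:
            continue
        if target_intensity is not None and entry["intensity"] != target_intensity:
            continue
        selected.append(entry["word"])
    return selected

# A toy emotion presentation database for one emotion type.
lexicon = [
    {"word": "delighted", "intensity": 3, "polarity": "positive"},
    {"word": "pleased",   "intensity": 2, "polarity": "positive"},
    {"word": "gloomy",    "intensity": 2, "polarity": "negative"},
]
print(select_vocabulary(lexicon, target_intensity=2, target_polarity="positive"))
# prints ['pleased']
```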
4. The emotion information presentation apparatus according to claim 2, wherein the at least one emotion vocabulary is divided into different levels according to different emotional intensities.
5. The emotion information presentation apparatus according to claim 4, wherein each emotion vocabulary in the at least one emotion vocabulary comprises one or more emotion types, and the same emotion vocabulary in the at least one emotion vocabulary has different emotion types and emotional intensities in different application scenarios.
6. The emotion information presentation apparatus according to claim 4, wherein the emotion vocabulary is a multi-element emotion vocabulary, the multi-element emotion vocabulary comprises a combination of multiple words, and each word in the multi-element emotion vocabulary does not individually have the emotion type attribute.
7. The emotion information presentation apparatus according to claim 4, wherein the presentation module performs, according to each emotion presentation mode in the at least one first emotion presentation mode, emotion presentation of an emotion type not specified by the first emotion presentation instruction, wherein the emotional intensity corresponding to the unspecified emotion type is lower than the emotional intensity corresponding to the at least one emotion type, or the emotional polarity of the unspecified emotion type is consistent with the emotional polarity of the at least one emotion type.
8. The emotion information presentation apparatus according to claim 7, wherein the presentation module determines the magnitude of the emotional intensity of at least one of the emotion types presented in the emotion presentation text composed of the at least one emotion vocabulary, and judges, based on the magnitude of the emotional intensity, whether the emotional intensity of the at least one emotion type satisfies the first emotion presentation instruction, wherein the emotional intensity of the i-th emotion type presented in the text can be calculated by the following formula:
round[(n/N) * 1/(1 + exp(-n + 1)) * max{a1, a2, ..., an}],
wherein round(X) denotes rounding X to the nearest integer, n denotes the number of emotion vocabularies belonging to the i-th emotion type, N denotes the number of emotion vocabularies in the emotion presentation text, M denotes the number of emotion types among the N emotion vocabularies, exp(x) denotes the exponential function with the natural constant e as the base, a1, a2, ..., an denote the emotional intensities of the n emotion vocabularies with respect to their respective emotion types among the M emotion types, max{a1, a2, ..., an} denotes the maximum of those emotional intensities, and n, N, and M are positive integers.
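The intensity formula of claim 8 can be sketched directly in code. The function name and parameter names below are illustrative; note that Python's built-in `round` uses round-half-to-even, which matches "rounding to the nearest integer" except exactly at .5 boundaries:

```python
import math

def emotion_intensity(type_intensities, total_vocab_count):
    """Compute the presented intensity of one emotion type per claim 8:
    round[(n/N) * 1/(1 + exp(-n + 1)) * max{a1, ..., an}].

    type_intensities: intensities a1..an of the n emotion vocabularies
                      belonging to this emotion type.
    total_vocab_count: N, the number of emotion vocabularies in the
                       whole emotion presentation text.
    """
    n = len(type_intensities)
    if n == 0 or total_vocab_count == 0:
        return 0
    weight = n / total_vocab_count             # n/N: this type's share of the text
    damping = 1.0 / (1.0 + math.exp(-n + 1))   # sigmoid term in the vocabulary count
    peak = max(type_intensities)               # max{a1, ..., an}
    return round(weight * damping * peak)

# Example: 2 vocabularies of one type with intensities 3 and 5,
# out of N = 4 vocabularies in the presented text:
# round[(2/4) * 1/(1 + e^-1) * 5] = round[1.83] = 2
print(emotion_intensity([3, 5], 4))  # prints 2
```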
9. The emotion information presentation apparatus according to claim 4 or 7, wherein the emotional polarity comprises one or more of the following: commendatory, derogatory, and neutral.
10. An electronic device, characterized by comprising the emotion information presentation apparatus according to any one of claims 1 to 9.
CN201711328630.1A 2017-12-07 2017-12-13 Emotion information presentation device and electronic equipment Pending CN110020013A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201711328630.1A CN110020013A (en) 2017-12-13 2017-12-13 Emotion information presentation device and electronic equipment
US16/052,345 US10783329B2 (en) 2017-12-07 2018-08-01 Method, device and computer readable storage medium for presenting emotion
US16/992,284 US11455472B2 (en) 2017-12-07 2020-08-13 Method, device and computer readable storage medium for presenting emotion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711328630.1A CN110020013A (en) 2017-12-13 2017-12-13 Emotion information presentation device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110020013A true CN110020013A (en) 2019-07-16

Family

ID=67186888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711328630.1A Pending CN110020013A (en) 2017-12-07 2017-12-13 Emotion information presentation device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110020013A (en)

Similar Documents

Publication Publication Date Title
Alexanderson et al. Listen, denoise, action! audio-driven motion synthesis with diffusion models
Alexanderson et al. Style‐controllable speech‐driven gesture synthesis using normalising flows
CN108227932B (en) Interaction intention determination method and device, computer equipment and storage medium
US20220093101A1 (en) Dialog management for multiple users
CN108334583B (en) Emotion interaction method and device, computer readable storage medium and computer equipment
US11455472B2 (en) Method, device and computer readable storage medium for presenting emotion
Levine et al. Real-time prosody-driven synthesis of body language
Levine et al. Gesture controllers
CN106548773B (en) Child user searching method and device based on artificial intelligence
JP2022524944A (en) Interaction methods, devices, electronic devices and storage media
CN111368609A (en) Voice interaction method based on emotion engine technology, intelligent terminal and storage medium
CN108326855A (en) A kind of exchange method of robot, device, equipment and storage medium
CN109410927A (en) Offline order word parses the audio recognition method combined, device and system with cloud
CN108877336A (en) Teaching method, cloud service platform and tutoring system based on augmented reality
Liu et al. Speech emotion recognition based on convolutional neural network with attention-based bidirectional long short-term memory network and multi-task learning
WO2007098560A1 (en) An emotion recognition system and method
KR20020067591A (en) Self-updating user interface/entertainment device that simulates personal interaction
KR20190089451A (en) Electronic device for providing image related with text and operation method thereof
CN114911932A (en) Heterogeneous graph structure multi-conversation person emotion analysis method based on theme semantic enhancement
US20210005218A1 (en) Nonverbal information generation apparatus, method, and program
CN116958342A (en) Method for generating actions of virtual image, method and device for constructing action library
CN109918636A (en) Kansei Information Processing method, computer equipment and computer readable storage medium
CN107943299A (en) Emotion rendering method and device, computer equipment and computer-readable recording medium
Cafaro et al. Selecting and expressing communicative functions in a SAIBA-compliant agent framework
CN109919292A (en) Kansei Information Processing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20190716