CN114816036A - Emotion processing method, device and medium - Google Patents

Emotion processing method, device and medium

Info

Publication number
CN114816036A
CN114816036A (application CN202110071046.2A)
Authority
CN
China
Prior art keywords
user
parameters
emotion
determining
parameter
Prior art date
Legal status
Pending
Application number
CN202110071046.2A
Other languages
Chinese (zh)
Inventor
韩秦
蔡泓
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202110071046.2A priority Critical patent/CN114816036A/en
Publication of CN114816036A publication Critical patent/CN114816036A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides an emotion processing method, device and medium. The method specifically comprises the following steps: determining a plurality of user parameters, the plurality of user parameters including at least two of the following parameters: input content, physical parameters and environmental parameters; and determining the emotion information of the user according to the plurality of user parameters. The embodiment of the invention can enhance the protection of the user's private information and improve the accuracy of emotion recognition.

Description

Emotion processing method, device and medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an emotion processing method, device and medium.
Background
Emotion is a state that combines human feelings, thoughts and behaviors, and includes a human psychological response to external or self-stimulation and also includes a physiological response accompanying such a psychological response.
With the development of human-computer interaction technology, traditional human-computer interaction has gradually evolved toward intelligent and natural interaction. The focus of human-computer interaction has likewise shifted from defining interaction modes and designing interaction semantics to paying attention to the user's emotion and, further, mining the user's implicit needs. One of the tasks of achieving natural human-computer interaction is to enable the computer to naturally perceive the user's emotion during interaction and track changes in that emotion, so that it can communicate and interact with the user more proactively, or infer the user's underlying intention. Therefore, emotion recognition is of great significance for natural interaction.
Current emotion recognition methods include: image recognition methods, voice recognition methods, and the like. The image recognition method requires installing a camera to capture images of the user, and the voice recognition method requires a recording device to collect the user's voice. Because both the collection of images and the collection of voice involve the user's private information, current emotion recognition methods easily compromise the privacy security of the user.
Disclosure of Invention
Embodiments of the present invention provide an emotion processing method, apparatus, and medium, which can enhance the protection effect of private information of a user, and can improve the accuracy of emotion recognition.
In order to solve the above problem, an embodiment of the present invention discloses an emotion processing method, including:
determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: input content, physical parameters and environmental parameters;
and determining the emotion information of the user according to the various user parameters.
On the other hand, the embodiment of the invention discloses an emotion processing device, which comprises:
the user parameter determining module is used for determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: input content, physical parameters and environmental parameters; and
and the emotion determining module is used for determining the emotion information of the user according to the various user parameters.
In yet another aspect, an embodiment of the present invention discloses an apparatus for emotion processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: input content, physical parameters and environmental parameters;
and determining the emotion information of the user according to the various user parameters.
In yet another aspect, embodiments of the invention disclose a machine-readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform a method of emotion processing as described in one or more of the preceding.
The embodiment of the invention has the following advantages:
according to the embodiment of the invention, the emotion information of the user is determined according to at least two user parameters of the input content, the body parameter and the environment parameter. The user parameters adopted by the embodiment of the invention do not relate to the privacy information of the user such as images, sounds and the like, so that the protection effect of the privacy information of the user can be enhanced.
In addition, according to the embodiment of the invention, the emotion information of the user is determined according to various user parameters, and the emotion information of the user can be more accurately identified, so that the emotion identification accuracy can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic illustration of an application environment of a method of emotion handling according to an embodiment of the present invention;
FIG. 2 is a flow chart of steps of a first embodiment of a mood processing method of the invention;
FIG. 3 is a flow chart of the steps of a second embodiment of the emotion handling method of the present invention;
FIG. 4 is a block diagram of an embodiment of an emotion processing apparatus of the present invention;
FIG. 5 is a block diagram of an apparatus 800 for emotion processing of the present invention; and
fig. 6 is a schematic structural diagram of a server in some embodiments of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the technical problem that current emotion recognition methods affect the privacy security of the user, the embodiment of the invention provides an emotion processing scheme which can determine a plurality of user parameters, the plurality of user parameters including at least two of the following parameters: input content, physical parameters and environmental parameters; and determine the emotion information of the user according to the plurality of user parameters.
According to the embodiment of the invention, the emotion information of the user is determined according to at least two user parameters of the input content, the body parameter and the environment parameter. The user parameters adopted by the embodiment of the invention do not relate to the privacy information of the user such as images, sounds and the like, so that the protection effect of the privacy information of the user can be enhanced.
In addition, according to the embodiment of the invention, the emotion information of the user is determined according to various user parameters, and the emotion information of the user can be more accurately identified, so that the emotion identification accuracy can be improved.
The emotion processing method provided by the embodiment of the invention can be applied to Application environments such as websites and/or APPs (applications) to realize accurate emotion recognition. The embodiment of the invention can also realize accurate recommendation based on the emotion information obtained by identification.
In the embodiment of the invention, emotion refers to psychological experiences such as joy, anger, sorrow, happiness and fear, and such experiences are a reflection of a person's attitude toward objective things. Emotions have positive and negative properties. Things that can meet a person's needs cause positive experiences such as happiness and satisfaction; things that do not meet a person's needs cause negative experiences such as anger, hate and sadness; things unrelated to a person's needs tend to cause no particular emotion at all. A positive emotion may increase a person's capacity for activity, while a negative emotion may decrease it.
In an alternative embodiment of the invention, the user emotion may comprise: positive emotions, which are constructive and proactive, or negative emotions, which are destructive and demotivating. Negative emotions may include, but are not limited to: anxiety, tension, anger, depression, sadness, fear, etc. Positive emotions may include, but are not limited to: joy, optimism, confidence, appreciation, relaxation, etc. Optionally, the user emotion may further include: neutral emotions, which may include, but are not limited to, blandness, indifference, surprise, and the like.
The emotion processing method provided by the embodiment of the present invention can be applied to the application environment shown in fig. 1, as shown in fig. 1, the client 100 and the server 200 are located in a wired or wireless network, and the client 100 and the server 200 perform data interaction through the wired or wireless network.
Optionally, the client 100 may run on a terminal, which specifically includes but is not limited to: smart phones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, car-mounted computers, desktop computers, set-top boxes, smart televisions, wearable devices, and the like.
The client 100 may correspond to a website, or APP (Application). The client 100 may correspond to an application program such as an input method APP or an instant messaging APP.
The server side of the embodiment of the invention can be a cloud server side. A cloud server is a simple, efficient, safe and reliable computing service whose processing capacity can be elastically scaled. The resource allocation of the cloud server is dynamic, so its processing capacity can be scaled up or down elastically.
The embodiment of the invention can be applied to input method programs with various input modes such as keyboard symbols, handwriting, voice and the like. Taking the keyboard symbol input mode as an example, the user may perform text input through a code character string, and the input string may refer to the code character string input by the user. In the field of input methods, for input method programs in, for example, Chinese, Japanese, Korean or other languages, the input string entered by the user may generally be converted into candidates in the corresponding language. Hereinafter, Chinese is mainly taken as an example; other languages such as Japanese and Korean can be handled similarly. It is to be understood that Chinese input methods may include, but are not limited to, full Pinyin, simple Pinyin, strokes, Wubi (five-stroke), etc., and the embodiment of the present invention does not limit the input method program corresponding to a specific language.
Taking Chinese input as an example, the types of the code character string may include: Pinyin strings, stroke strings, and the like. Taking English input as an example, the types of the code character string may include: alphabetic strings, and the like.
In practical applications, for the keyboard symbol input mode, a user may enter the input string through a physical keyboard or a virtual keyboard. For example, for a terminal with a touch screen, a virtual keyboard may be provided in the input interface, and the input string is entered by triggering the virtual keys included in the virtual keyboard. Optionally, examples of the virtual keyboard may include: a 9-key keyboard, a 26-key keyboard, etc. Moreover, in addition to the virtual keys corresponding to letters, the input interface may also be provided with symbol keys, numeric keys, and function keys such as a Chinese-English switching key, or with toolbar keys; it can be understood that the embodiment of the present invention does not limit the specific keys included in the input interface.
According to some embodiments, the input string may include, but is not limited to: a key symbol or a combination of a plurality of key symbols input by a user through a key. The key symbol may specifically include: pinyin, strokes, kana, etc.
In the embodiment of the invention, a candidate item can be used to represent one or more characters provided by the input method program for the user to select. A candidate item can be text in a language such as Chinese, English or Japanese, or a symbol combination in the form of an emoticon, expression or picture. The above-mentioned emoticons include, but are not limited to, drawings composed of lines, symbols and words; for example: ":P", ":O", etc.
The embodiment of the invention can start the input method program in response to an invocation operation in any application scene. Alternatively, the invocation operation may be a trigger operation on an input window or the like. The input window may include: an input box, etc. For example, if a click operation on an input box is received, the input method program is invoked.
Optionally, after the input method program is invoked, an input interface may be displayed, so that the user can enter the input content through the input interface. The input interface may include an input keyboard, which typically includes a plurality of keys. The keys may include: character keys and function keys. The function keys may include: a settings key, a search key, an enter key, etc. The character keys may further include: letter keys, numeric keys, symbol keys, and the like.
Method embodiment one
Referring to fig. 2, a flowchart illustrating steps of a first embodiment of an emotion processing method according to the present invention is shown, which may specifically include the following steps:
step 201, determining a plurality of user parameters; the plurality of user parameters may include at least two of the following parameters: input content, physical parameters and environmental parameters;
step 202, determining the emotion information of the user according to the various user parameters.
At least one step of the embodiment shown in fig. 2 may be performed by the server and/or the client, although the embodiment of the present invention does not limit the specific execution subject of each step.
In step 201, the user parameters may characterize parameters associated with the user. The user parameters of the embodiment of the invention can have timeliness so as to improve the accuracy of emotion recognition. For example, the user parameter may be a user parameter within a preset time period. The preset time period may be a time period in which a time interval from the current time is a preset time interval. The preset time interval can be determined by one skilled in the art according to the actual application requirement, for example, the preset time interval can be 0-60 minutes, etc.
The input content may represent content input by means of keyboard input, voice input, clipboard paste, etc. The input content can reflect the emotion of the user.
For example, input contents such as "haha" or "hip-hop" may reflect a "happy" emotion. For another example, input contents such as "hard to pass" can reflect a "sad" emotion. Input contents such as "i am angry" may reflect an "angry" emotion. Alternatively, input contents such as "going to an unfamiliar place, feeling timid" or "walking alone at night" may reflect a "fear" emotion, and so on.
Optionally, the input content may include: the chat content is, for example, chat content of friends or chat content of group chat. Alternatively, the input content may include: emoticon content or picture content.
The embodiment of the invention can determine the mapping relation between the input content and the emotion information.
Optionally, emotion labels may be added for the words in the input content; in this way, the emotion information corresponding to the input content can be determined according to the emotion label of the vocabulary included in the input content.
Optionally, semantic analysis may be performed on the input content, and the emotion information corresponding to the input content is determined according to the result of the semantic analysis.
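For illustration only, the following minimal Python sketch (not part of the original disclosure) shows one possible way to determine emotion information from input content using an emotion-labeled vocabulary; the lexicon, labels and function name are hypothetical examples.

```python
# Hypothetical sketch: look up emotion labels for vocabulary found in the input
# content and return the most frequent label; the lexicon below is illustrative.
from collections import Counter

EMOTION_LEXICON = {
    "haha": "happy",
    "hard to pass": "sad",
    "i am angry": "angry",
}

def emotion_from_input(text: str) -> str:
    """Count the emotion labels of matched vocabulary and return the majority label."""
    lowered = text.lower()
    hits = Counter(label for phrase, label in EMOTION_LEXICON.items() if phrase in lowered)
    return hits.most_common(1)[0][0] if hits else "neutral"

print(emotion_from_input("haha, that was great"))  # -> "happy"
```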
The body parameter can represent a physical parameter of the user during the input process; it reflects the user's bodily reactions as transmitted to the terminal, and the emotion of the user during input can be determined from it. The body parameters may include at least one of the following parameters: an input speed parameter, an input force parameter and a jitter parameter during the input process.
For example, when the user is "sad", the input speed tends to be slow due to physical listlessness. For another example, when the user is "crying", slight body shaking is transmitted to the terminal through the hand and can be reflected in the terminal's jitter parameters. When the user is angry, the input force tends to be large, and so on.
The jitter parameters may further include: whether jitter occurs, a jitter amplitude, and a jitter frequency.
According to one embodiment, the body parameters may be collected using sensors built into or out of the terminal.
For example, the terminal may be provided with a Touch screen having a pressure sensing function, such as a 3D Touch (Three Dimensions Touch) Touch screen, or the terminal may further be provided with a control for pressure sensing in a common Touch screen; the touch screen or control described above may be used to collect input force parameters.
As another example, the shake parameters may be collected using an acceleration sensor or a gyroscope or a level meter in the terminal.
According to another embodiment, the input speed parameter may be determined based on the time the user has been on the key. Alternatively, the input speed parameter may be determined according to the number of words input by the user per unit time.
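As an illustrative sketch (the helper names and thresholds are assumptions, not the patent's implementation), the input speed parameter and the jitter parameter could be derived roughly as follows:

```python
# Hypothetical sketch: input speed from keystroke timestamps (seconds) and a
# simple shake measure from accelerometer magnitude samples.
import statistics

def input_speed(keystroke_times):
    """Keystrokes per second over the observed input session."""
    if len(keystroke_times) < 2:
        return 0.0
    duration = keystroke_times[-1] - keystroke_times[0]
    return (len(keystroke_times) - 1) / duration if duration > 0 else 0.0

def jitter_amplitude(accel_samples):
    """Standard deviation of acceleration magnitude as a rough shake amplitude."""
    return statistics.pstdev(accel_samples) if accel_samples else 0.0

print(input_speed([0.0, 0.4, 0.9, 1.5]))         # ~2 keys per second
print(jitter_amplitude([9.8, 10.4, 9.1, 10.9]))  # larger value -> stronger shaking
```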
In the embodiment of the invention, the environment parameter can be used for representing the environment information of the terminal when the user performs input. Changes in environmental parameters may cause changes in emotion, and can therefore serve as a basis for determining the emotion information.
Optionally, the environmental parameter may include at least one of the following parameters: a time parameter, a location parameter, and a weather parameter.
A change in the time parameter may cause a change in mood for the same user. The time parameters may include a time period parameter, which may include: a getting-up period parameter, a working period parameter, a noon-break period parameter, a golden period parameter (evening prime time), a sleep period parameter (the period in which the user starts sleeping at night), and the like.
Most users are irritable when they have just gotten up, so the emotion corresponding to the getting-up period parameter may be "angry". Most users are in an energetic state during the working period, so the emotion corresponding to the working period parameter may be "happy". Alternatively, most users are sleepy during the noon break or the sleep period, so the emotion corresponding to the noon-break period parameter or the sleep period parameter may be "sad", "down" or "tired". Alternatively, most users are in a relaxed state during the golden period (7-9 pm), so the emotion corresponding to the golden period parameter may be "happy".
The embodiment of the invention can determine the emotion corresponding to the time period parameter by utilizing the rule between the time period parameter and the emotion information.
It will be appreciated that different users may correspond to different time period parameters. According to the embodiment of the invention, the time period parameters corresponding to a user can be determined according to the user's behavior. For example, the getting-up period parameter or the sleep period parameter may be determined according to the power-off time or the screen-off time of the user's terminal. For another example, the getting-up period parameter or the sleep period parameter can be determined according to the user's input content.
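A minimal sketch of classifying the current time into a time period parameter is shown below; the period boundaries are illustrative assumptions, and in practice the getting-up and sleep boundaries could be estimated per user from screen-off or power-off times as described above.

```python
# Hypothetical sketch: map the current time to a coarse time period parameter.
from datetime import datetime, time

def time_period(now: datetime) -> str:
    t = now.time()
    if time(7, 0) <= t < time(9, 0):
        return "getting-up period"
    if time(9, 0) <= t < time(12, 0) or time(14, 0) <= t < time(18, 0):
        return "working period"
    if time(12, 0) <= t < time(14, 0):
        return "noon-break period"
    if time(19, 0) <= t < time(21, 0):
        return "golden period"
    return "sleep period" if (t >= time(23, 0) or t < time(7, 0)) else "other"

print(time_period(datetime(2021, 1, 19, 20, 30)))  # -> "golden period"
```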
For most users, changes in location parameters can cause changes in mood.
According to an embodiment, when the location parameter of the user is relatively fixed, the mood of the user is also typically more stable, e.g. stably "happy". As another example, when the user is moving at high speed, the user is usually anxious, and the mood of the user may be "hurried" or "anxious" or the like.
According to another embodiment, the user state may be determined according to the location parameter of the user, and the user state may include: a working state or a non-working state. For example, if the location parameter of the user indicates that the user is at the workplace, the user is in the working state. Or, if the location parameter indicates that the user is not at the workplace, the user is in a non-working state.
The embodiment of the invention can determine the mapping relation between the user state and the emotion information. For example, the working state corresponds to a "tense" or "stressed" mood, while the non-working state corresponds to a "relaxed" mood, etc.
The weather parameters may include: temperature parameters, humidity parameters, illumination intensity parameters, wind power grade parameters, air quality parameters and the like.
When the weather corresponding to the weather parameter is pleasant, the mood of the user is usually a neutral or positive mood. When the weather corresponding to the weather parameter does not suit the user, the mood of the user is usually a negative emotion.
While the detailed parameters of the input content, the physical parameters and the environmental parameters are described above, it is understood that those skilled in the art can adopt any combination of the input content, the physical parameters and the environmental parameters according to the actual application requirements.
In step 202, the emotion information of the user is determined according to various user parameters, and the emotion information of the user can be more accurately identified, so that the accuracy of emotion identification can be improved.
The embodiment of the invention can provide the following technical scheme for determining the emotion information of the user:
the technical scheme 1,
In technical scheme 1, the determining of the emotion information of the user specifically includes: fusing the multiple user parameters to obtain fused user parameters; and determining emotion information of the user according to the fused user parameters.
The embodiment of the invention fuses a plurality of user parameters, so that the obtained fused user parameters can reflect information carrying the plurality of user parameters; on the basis, the emotion information is determined according to the fusion user parameters, and the accuracy of the emotion information can be improved.
The embodiment of the present invention does not limit the specific fusion manner, for example, the fusion manner may include: weighted average, product, etc.
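As an illustration of the weighted-average fusion manner, the following Python sketch assumes each user parameter has first been scored onto a common [-1, 1] scale; the weights and names are hypothetical, not values given by the patent.

```python
# Hypothetical sketch: weighted average of per-parameter emotion scores.
def fuse_parameters(scores, weights=None):
    """scores: e.g. {"input_content": 0.8, "body": -0.2, ...} on a common [-1, 1] scale."""
    weights = weights or {"input_content": 0.5, "body": 0.3, "environment": 0.2}
    total = sum(weights.get(name, 0.0) for name in scores)
    if total == 0:
        return 0.0
    return sum(s * weights.get(name, 0.0) for name, s in scores.items()) / total

fused = fuse_parameters({"input_content": 0.8, "body": -0.2, "environment": 0.4})
print(fused)  # single fused user parameter, e.g. 0.42
```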
In an optional embodiment of the present invention, the determining the emotion information of the user according to the fused user parameter specifically includes: determining a target coordinate area corresponding to the fusion user parameter; and determining the emotion information of the user according to the emotion information corresponding to the target coordinate area.
The embodiment of the invention can preset the coordinate area corresponding to the emotion information, and different emotion information can correspond to different coordinate areas.
For example, the emotional information includes: joy, anger, sadness, fear, and blankness, the coordinate regions may include: four quadrants, joy, anger, sadness, and fear, respectively correspond to the regions of the different quadrants. Happiness, anger, sadness and fear correspond to the region of the corresponding quadrant far from the origin, while blandness corresponds to the region of the four quadrants near the origin.
It is understood that joy, anger, sadness and fear correspond to the regions of different quadrants, respectively, but as an alternative embodiment, joy, anger, sadness and fear may correspond to the regions of the same quadrant, respectively. Those skilled in the art can determine the coordinate areas corresponding to the emotion information according to the actual application requirements.
The embodiment of the invention can also preset the mapping relation between the user parameter and the coordinate area.
Optionally, sample data may be collected, which may include: the method comprises the steps that a first preset number of user parameters and corresponding annotation emotion information are obtained; thus, the mapping relation between the user parameter and the coordinate area can be determined according to the sample data. Assuming that the coordinate axis is XOY, the user parameter corresponds to an X axis, and the emotion information corresponds to a Y axis, a coordinate region corresponding to the user parameter may be determined according to a data point corresponding to the sample data.
In an optional embodiment of the present invention, an X coordinate value corresponding to the user parameter and a Y coordinate value corresponding to the emotion information may be determined according to the coordinate region corresponding to the emotion information, so that a data point corresponding to the sample data may be obtained. For example, if "happy" corresponds to the first quadrant, the Y-coordinate value corresponding to "happy" is positive, and the X-coordinate value corresponding to "happy" is positive.
According to the embodiment of the invention, the mapping relation between the user parameter and the coordinate area can be inquired according to the fusion user parameter so as to obtain the target coordinate area corresponding to the fusion user parameter; furthermore, the emotion information of the user can be determined according to the emotion information corresponding to the target coordinate area.
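A minimal sketch of the coordinate-area lookup is given below, assuming the fused user parameter has already been mapped to an (x, y) point; the quadrant assignment and the near-origin radius are illustrative assumptions.

```python
# Hypothetical sketch: map a point in the XOY plane to emotion information.
import math

QUADRANT_EMOTION = {1: "joy", 2: "anger", 3: "sadness", 4: "fear"}

def emotion_from_point(x, y, neutral_radius=0.2):
    if math.hypot(x, y) < neutral_radius:  # region of the four quadrants near the origin
        return "blandness"
    if x >= 0 and y >= 0:
        return QUADRANT_EMOTION[1]
    if x < 0 and y >= 0:
        return QUADRANT_EMOTION[2]
    if x < 0 and y < 0:
        return QUADRANT_EMOTION[3]
    return QUADRANT_EMOTION[4]

print(emotion_from_point(0.42, 0.6))  # -> "joy"
```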
Technical scheme 2
In technical scheme 2, the determining of the emotion information of the user specifically includes: determining a plurality of sub-emotion information respectively corresponding to the plurality of user parameters; and fusing the multiple kinds of emotion information to obtain the emotion information of the user.
The embodiment of the invention can determine various sub-emotion information respectively corresponding to various user parameters according to the mapping relation between the user parameters and the emotion information.
For example, in a case where the input content includes "hip-hop", the corresponding sub-emotion information may be "happy". In a case where the input force of the user is large, the corresponding sub-emotion information may be "angry". For another example, in a case where the time parameter is the sleep period parameter, the corresponding sub-emotion information may be "tired". It can be understood that the embodiments of the present invention do not limit the mapping relationship between user parameters and emotion information.
The fusion mode of the multiple kinds of emotion information may include: accumulation mode, weighted average mode, product mode, etc.
Alternatively, the same or similar pieces of sub-emotion information may be accumulated.
Alternatively, a selection may be made among different pieces of sub-emotion information. For example, different pieces of sub-emotion information may be selected according to the priorities of the user parameters to which they correspond.
For example, the input content includes "hip-hop" and the time parameter is the sleep period parameter; if the sub-emotion information corresponding to the input content is "happy" and the sub-emotion information corresponding to the sleep period parameter is "tired", the priority of the input content may be considered higher than that of the sleep period parameter, so the emotion may be determined to be "happy".
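The following sketch illustrates technical scheme 2 under assumptions: identical sub-emotion labels are accumulated, and ties are broken by a hypothetical per-parameter priority (input content > body > environment); the names are examples, not the patent's concrete values.

```python
# Hypothetical sketch: fuse sub-emotion information by accumulation and priority.
from collections import Counter

PRIORITY = {"input_content": 3, "body": 2, "environment": 1}

def fuse_sub_emotions(sub_emotions):
    """sub_emotions: e.g. {"input_content": "happy", "environment": "tired"}"""
    counts = Counter(sub_emotions.values())
    best = counts.most_common(1)[0][1]
    tied = {label for label, count in counts.items() if count == best}
    if len(tied) == 1:
        return tied.pop()
    # Tie: fall back to the label from the highest-priority parameter.
    for name in sorted(sub_emotions, key=lambda n: PRIORITY.get(n, 0), reverse=True):
        if sub_emotions[name] in tied:
            return sub_emotions[name]
    return "neutral"

print(fuse_sub_emotions({"input_content": "happy", "environment": "tired"}))  # -> "happy"
```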
Technical scheme 3
In technical scheme 3, the determining of the emotion information of the user specifically includes: and inputting the various user parameters into an emotion recognition model, and determining emotion information of the user according to output information of the emotion recognition model.
The training data of the emotion recognition model may include:
training data 1 and labeled emotion information corresponding to the multiple user parameters respectively; or
Training data 2, labeled emotion information corresponding to the multiple user parameters, or
Training data 3, feedback data of the user for recommended content in case of the plurality of user parameters.
According to the embodiment of the invention, the mathematical model can be trained according to the training data to obtain the emotion recognition model, so that the emotion recognition model has emotion recognition capability.
The emotion recognition model may characterize a mapping relationship between input data (various user parameters) and output information (emotion information). The output information may include: emotional information, or a probability of emotional information.
The training data 1 may include: labeled emotion information respectively corresponding to different user parameters, for example, the labeled emotion information corresponding to the input content, the labeled emotion information corresponding to the physical parameters, the labeled emotion information corresponding to the environmental parameters, and the like. According to the embodiment of the invention, the influence of each user parameter on the emotion information can be determined from training data 1, and further the comprehensive influence of the plurality of user parameters on the emotion information can be determined.
The labeled emotion information of the training data 2 and the training data 3 can reflect the comprehensive influence of various user parameters on the emotion information. The embodiment of the invention can determine the influence weight of various user parameters on the emotion information based on the learning of the training data, thereby more accurately carrying out emotion recognition.
The training data 2 may include: labeled emotion information determined jointly for the plurality of user parameters, for example the labeled emotion information corresponding to a given combination of input content, physical parameters and environmental parameters.
The training data 3 may include: feedback data of the user for recommended content in case of various user parameters. Because the recommended content corresponds to the emotional information, the embodiment of the invention can determine whether the user is satisfied with the recommended content according to the feedback data, and further determine the emotional information of the user according to the judgment result.
For example, under a given set of user parameters, the recommended content is content corresponding to emotion information A. If the user is satisfied with the recommended content, the feedback data may include a selection operation performed on the recommended content; in this case, the user's emotion information may be considered consistent with emotion information A, and the record may be used as positive sample data for training. Alternatively, if the user is not satisfied with the recommended content, the feedback data may include no selection operation being performed on the recommended content, or the display of the recommended content being closed; in this case, the user's emotion information is considered inconsistent with emotion information A, and the record may be used as negative sample data for training.
A mathematical model is a scientific or engineering model constructed using mathematical logic and mathematical language; it is a mathematical structure that expresses, exactly or approximately, the characteristics or quantitative dependencies of a certain object system, described by means of mathematical symbols. A mathematical model may be one or a set of algebraic, differential, integral or statistical equations, and combinations thereof, by which the interrelationships or causal relationships between the variables of the system are described quantitatively or qualitatively. Besides models described by equations, there are also models described by other mathematical tools, such as algebra, geometry, topology and mathematical logic. The mathematical model describes the behavior and characteristics of the system rather than its actual structure. The mathematical model can be trained using machine learning or deep learning methods; machine learning methods may include: linear regression, decision trees, random forests, etc., and deep learning methods may include: Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and so on.
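For illustration, the sketch below trains a random forest (one of the machine-learning methods listed above) on hand-made example rows; the feature layout, labels and values are assumptions, not the patent's concrete model or data.

```python
# Hypothetical sketch: train an emotion recognition model on fused feature rows.
from sklearn.ensemble import RandomForestClassifier

# Each row: [input-content score, input speed, input force, shake amplitude,
#            hour of day, at-workplace flag, temperature]
X_train = [
    [0.8, 2.5, 0.3, 0.1, 10, 1, 22],
    [-0.6, 0.8, 0.9, 0.6, 23, 0, 15],
    [0.1, 1.5, 0.4, 0.2, 13, 1, 20],
]
y_train = ["happy", "angry", "tired"]  # labeled emotion information

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(model.predict([[0.7, 2.2, 0.3, 0.1, 20, 0, 21]]))  # predicted emotion information
```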
The process of determining the emotion information of the user is described in detail through technical solutions 1 to 3, and it can be understood that a person skilled in the art may adopt any one or a combination of technical solutions 1 to 3 according to actual application requirements, or may adopt other technical solutions of determining the emotion information of the user according to actual application requirements, and the embodiment of the present invention does not limit the specific process of determining the emotion information of the user.
The emotion information obtained by the embodiment of the invention can be applied to various scenes such as recommendation, intelligent interaction and the like.
For example, in a recommendation scenario, recommended content matching emotion information may be recommended to a user, and the recommended content may include: advertisements, articles, media content, and the like.
In an intelligent interactive scene, conversation content matched with the emotion information can be provided to the user based on intelligent chatting. For example, when the emotion information of the user indicates a low mood, the chat robot may output the conversation content "Let me sing you a song".
In summary, the emotion processing method according to the embodiment of the present invention determines the emotion information of the user according to at least two user parameters among the input content, the physical parameter, and the environmental parameter. The user parameters adopted by the embodiment of the invention do not relate to the privacy information of the user such as images, sounds and the like, so that the protection effect of the privacy information of the user can be enhanced.
In addition, according to the embodiment of the invention, the emotion information of the user is determined according to various user parameters, and the emotion information of the user can be more accurately identified, so that the emotion identification accuracy can be improved.
Method embodiment two
Referring to fig. 3, a flowchart illustrating steps of a second embodiment of the emotion processing method of the present invention is shown, which may specifically include the following steps:
step 301, determining a plurality of user parameters; the plurality of user parameters may include at least two of the following parameters: input content, physical parameters and environmental parameters;
step 302, determining the emotion information of the user according to the various user parameters.
With respect to the first embodiment of the method shown in fig. 2, the method of this embodiment may further include:
step 303, determining the target recommended content corresponding to the user according to the emotion information of the user.
The user's needs for content are often different in different contexts, e.g. the user likes fast-paced music when excited and prefers calm music when feeling low. The embodiment of the invention recommends target recommended content matched with the emotion information to the user, thereby recommending content to the user according to the emotion information, improving the recommendation accuracy and improving the user experience.
In an embodiment of the present invention, when the user emotion is happy, the recommended content may include: fast-paced music, light-hearted sketches, etc.; alternatively, when the user emotion is calm, the recommended content may include: soothing music, narrative movies, commentary, etc.; alternatively, when the user emotion is a negative emotion, the recommended content may include: sad music, inspirational movies, joke segments, etc.
In an optional embodiment of the present invention, recommendation of the target recommended content may be performed in a scenario of an input method. Accordingly, the target recommended content may include at least one of the following recommended contents:
association candidates corresponding to the input content;
expression candidates corresponding to selected text;
skin; and
corpus candidates.
The association function is an extended function of the input method; it reduces the number of active inputs and key presses by the user and makes the input method more intelligent. Currently, the input method may recommend corresponding association candidates for the input content. For example, for the input content "you have not eaten", association candidates such as "eaten", "not yet" and "eating" are recommended.
According to the embodiment of the invention, the association candidate corresponding to the input content can be determined according to the emotion information, so as to improve the accuracy of the association candidates. For example, for the input content "you have not eaten", when the emotion information is a negative emotion, association candidates such as "do not want to eat" or "not in the mood to eat" are provided; alternatively, when the emotion information is a positive emotion, an association candidate such as "let's eat together" is provided.
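A minimal sketch of emotion-conditioned association candidates is shown below; the lookup table and names are hypothetical examples, not the input method's actual data.

```python
# Hypothetical sketch: choose association candidates conditioned on emotion.
ASSOCIATION_TABLE = {
    ("you have not eaten", "negative"): ["do not want to eat", "not in the mood to eat"],
    ("you have not eaten", "positive"): ["let's eat together"],
}

def associate(input_content, emotion, default=("not yet", "eaten")):
    """Return association candidates for the input content under the given emotion."""
    return list(ASSOCIATION_TABLE.get((input_content, emotion), default))

print(associate("you have not eaten", "negative"))
```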
The expression candidates corresponding to the selected text can be used for providing a matching sticker for the selected text, so as to make input more interesting. According to the embodiment of the invention, the expression candidate corresponding to the selected text is determined according to the emotion information, which can improve the accuracy of the expression candidates. For example, if the selected text is "spreading", expression candidates of "crying and selling melons" or "frowning and selling melons" are provided when the emotion information is a negative emotion; alternatively, when the emotion information is a positive emotion, an expression candidate of "smiling and selling melons" is provided.
Skin may refer to an interface of an application. In the context of an input method, the skin may serve as a background for the input interface.
The input method may provide a skin library for selection by the user. According to one embodiment, in the case that the user browses the skin library, the skin corresponding to the emotion information may be preferentially presented.
According to another embodiment, the skin of the input interface may be switched depending on the mood information. In case of a change in mood information, a skin switch may be triggered. For example, in the case where the emotional information is low, a warm skin is provided.
The corpus candidates may provide a second preset number of quotation candidates for the user to use. A quotation may refer to a record of a person's remarks, typically taken from formal speech, and usually captures the sentences and linguistic features of what the person said over a period of time. Quotations can also refer to new expressions coined by famous people or netizens, or arising from social events, which resonate widely and have a certain power of spread.
For example, when the emotion information indicates a low mood, positive-energy quotation candidates are provided; as another example, when the emotion information is happy, humorous quotation candidates and the like are provided.
In the embodiment of the present invention, optionally, the target recommended content may also be displayed. For example, in the candidate area of the input method, the association candidates corresponding to the input content or the expression candidates corresponding to the selected text are displayed. For another example, the skin of the input interface is switched according to the real-time emotion information. Alternatively, quotation candidates corresponding to the emotion information are displayed on the quotation interface. It is understood that the embodiment of the present invention does not limit the specific presentation of the target recommended content.
In conclusion, the emotion processing method of the embodiment of the invention recommends the target recommendation content matched with the emotion information to the user, thereby realizing the recommendation of the content to the user according to the emotion information, improving the recommendation accuracy and improving the user experience.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described action sequences, because according to the present invention some steps may be performed in other orders or simultaneously. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required for an embodiment of the invention.
Device embodiment
Referring to fig. 4, a block diagram of an embodiment of an emotion processing apparatus of the present invention is shown, which may specifically include: a user parameter determination module 401 and an emotion determination module 402.
The user parameter determining module 401 is configured to determine a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: input content, physical parameters and environmental parameters; and
an emotion determining module 402, configured to determine emotion information of the user according to the multiple user parameters.
Optionally, the physical parameter may include at least one of the following parameters:
an input speed parameter, an input force parameter and a jitter parameter during the input process.
Optionally, the environmental parameter may include at least one of the following parameters:
a time parameter, a location parameter, and a weather parameter.
Optionally, the emotion determining module 402 may include:
the parameter fusion module is used for fusing the user parameters to obtain fused user parameters;
and the emotion information determining module is used for determining the emotion information of the user according to the fused user parameters.
Optionally, the emotion information determining module may include:
the region determining module is used for determining a target coordinate region corresponding to the fusion user parameter;
and the region-based emotion determining module is used for determining the emotion information of the user according to the emotion information corresponding to the target coordinate region.
Optionally, the emotion determining module 402 may include:
the sub-emotion determining module is used for determining a plurality of pieces of sub-emotion information respectively corresponding to the plurality of user parameters;
and the sub-emotion information fusion module is used for fusing the plurality of pieces of sub-emotion information to obtain the emotion information of the user.
Optionally, the emotion determining module 402 may include:
the emotion recognition module is used for inputting the various user parameters into an emotion recognition model and determining emotion information of the user according to output information of the emotion recognition model; the training data of the emotion recognition model may include: the annotation emotion information corresponding to the various user parameters respectively, or the annotation emotion information corresponding to the various user parameters together, or feedback data of the user on the recommended content under the condition of the various user parameters.
Optionally, the apparatus may further include:
and the recommended content determining module is used for determining the target recommended content corresponding to the user according to the emotion information of the user.
Optionally, the target recommended content may include at least one of the following recommended contents:
association candidates corresponding to the input content;
expression candidates corresponding to selected text;
skin; and
corpus candidates.
Optionally, the emotion information may include any one of the following information:
joy, anger, sadness, fear, and blankness.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present invention provides an apparatus for emotion processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for: determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: input content, physical parameters and environmental parameters; and determining the emotion information of the user according to the plurality of user parameters.
Fig. 5 is a block diagram illustrating an apparatus 800 for emotion processing according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing elements 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice data processing mode. The received audio signal may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor assembly 814 may also detect a change in position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 6 is a schematic diagram of a server in some embodiments of the invention. The server 1900, which may vary widely in configuration or performance, may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors), memory 1932, and one or more storage media 1930 (e.g., one or more mass storage devices) that store applications 1942 or data 1944. The memory 1932 and the storage medium 1930 may be transient or persistent storage. The program stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 1922 may be arranged to communicate with the storage medium 1930 to execute the series of instruction operations in the storage medium 1930 on the server 1900.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a device (a server or a terminal), enable the device to perform the emotion processing method shown in Fig. 2 or Fig. 3.
A non-transitory computer-readable storage medium in which instructions, when executed by a processor of an apparatus (a server or a terminal), enable the apparatus to perform a method of emotion processing, the method comprising: determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters; and determining the emotion information of the user according to the various user parameters.
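As a non-limiting illustration of the method performed by such instructions, the following sketch shows how a device might collect at least two kinds of user parameters (input content, physical parameters, environmental parameters) and derive emotion information from them. It is only a toy example under assumed names, keyword lists, and thresholds (UserParameters, determine_emotion), not the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserParameters:
    input_content: Optional[str] = None   # input content typed by the user
    typing_speed: Optional[float] = None  # physical: input speed (characters per second)
    key_pressure: Optional[float] = None  # physical: input force, normalized to 0..1
    jitter: Optional[float] = None        # physical: hand-shake amplitude, normalized to 0..1
    hour_of_day: Optional[int] = None     # environmental: time parameter
    weather: Optional[str] = None         # environmental: weather parameter

def determine_emotion(p: UserParameters) -> str:
    """Toy rule-based combination of at least two parameter types into one emotion label."""
    score = 0.0
    if p.input_content:
        # crude lexical cue from the input content
        if any(w in p.input_content for w in ("great", "thanks", "love")):
            score += 1.0
        if any(w in p.input_content for w in ("hate", "angry", "terrible")):
            score -= 1.0
    if p.typing_speed is not None and p.key_pressure is not None:
        # fast, forceful typing is treated here as leaning toward anger
        if p.typing_speed > 6.0 and p.key_pressure > 0.7:
            score -= 0.5
    if p.weather == "sunny":
        score += 0.2
    if score > 0.5:
        return "joy"
    if score < -0.5:
        return "anger"
    return "blankness"

if __name__ == "__main__":
    params = UserParameters(input_content="thanks a lot", typing_speed=3.0,
                            key_pressure=0.4, weather="sunny", hour_of_day=10)
    print(determine_emotion(params))  # joy
```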
The embodiments of the invention disclose A1, an emotion processing method, the method comprising the following steps:
determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters;
and determining the emotion information of the user according to the various user parameters.
A2, the method according to A1, the physical parameters comprising at least one of:
inputting a speed parameter, an input force parameter and a jitter parameter in the input process.
A3, the method of A1, the environmental parameters including at least one of:
a time parameter, a location parameter, and a weather parameter.
A4, the method according to any one of A1 to A3, the determining emotional information of the user, comprising:
fusing the multiple user parameters to obtain fused user parameters;
and determining emotion information of the user according to the fused user parameters.
A5, the method according to A4, wherein the determining emotional information of the user according to the fused user parameters includes:
determining a target coordinate area corresponding to the fusion user parameter;
and determining the emotion information of the user according to the emotion information corresponding to the target coordinate area.
A6, the method according to any one of A1 to A3, the determining emotional information of the user, comprising:
determining a plurality of sub-emotion information respectively corresponding to the plurality of user parameters;
and fusing the plurality of sub-emotion information to obtain the emotion information of the user.
A7, the method according to any one of A1 to A3, the determining emotional information of the user, comprising:
inputting the various user parameters into an emotion recognition model, and determining emotion information of the user according to output information of the emotion recognition model; the training data of the emotion recognition model includes: the annotation emotion information corresponding to the multiple user parameters respectively, or the annotation emotion information corresponding to the multiple user parameters together, or feedback data of the user on the recommended content under the condition of the multiple user parameters.
A8, the method of any one of A1 to A3, the method further comprising:
and determining target recommendation content corresponding to the user according to the emotion information of the user.
A9, according to the method of A8, the target recommended content includes at least one of the following recommended content:
association candidates corresponding to the input content;
expression candidates corresponding to a selected text;
a skin; and
phrase candidates.
A10, the method of any of A1 to A3, the emotional information comprising any of the following information:
joy, anger, sadness, fear, and blankness.
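The fusion approach of embodiments A4 and A5 above can be pictured as projecting the fused user parameters onto a coordinate and returning the emotion attached to the target coordinate area that contains it. The sketch below assumes a two-dimensional (valence, arousal) coordinate; the weights, area boundaries, and labels are placeholders for illustration, not the patented mapping. Splitting the fusion step from the area lookup keeps the parameter weighting separate from the emotion vocabulary, which is one way to read the two steps of A5.

```python
from typing import Dict, Tuple

# Each area is (valence_min, valence_max, arousal_min, arousal_max) -> emotion.
EMOTION_AREAS: Dict[Tuple[float, float, float, float], str] = {
    ( 0.0, 1.0, 0.0, 1.0): "joy",
    (-1.0, 0.0, 0.5, 1.0): "anger",
    (-1.0, 0.0, 0.0, 0.5): "sadness",
}

def fuse_parameters(text_sentiment: float, typing_speed: float, jitter: float) -> Tuple[float, float]:
    """Fuse heterogeneous user parameters into a single (valence, arousal) coordinate."""
    valence = max(-1.0, min(1.0, text_sentiment))                     # driven by the input content
    arousal = max(0.0, min(1.0, 0.6 * typing_speed + 0.4 * jitter))   # driven by physical parameters
    return valence, arousal

def emotion_from_area(coord: Tuple[float, float]) -> str:
    """Return the emotion attached to the target coordinate area containing the coordinate."""
    v, a = coord
    for (v0, v1, a0, a1), emotion in EMOTION_AREAS.items():
        if v0 <= v <= v1 and a0 <= a <= a1:
            return emotion
    return "blankness"  # no area matched

# A negative input typed quickly with noticeable jitter lands in the "anger" area.
print(emotion_from_area(fuse_parameters(text_sentiment=-0.8, typing_speed=0.9, jitter=0.7)))
```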
The embodiments of the invention disclose B11, a data processing apparatus, the apparatus comprising:
the user parameter determining module is used for determining various user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters; and
and the emotion determining module is used for determining the emotion information of the user according to the various user parameters.
B12, the apparatus according to B11, the physical parameters comprising at least one of the following:
inputting a speed parameter, an input force parameter and a jitter parameter in the input process.
B13, the apparatus according to B11, the environmental parameters including at least one of:
a time parameter, a location parameter, and a weather parameter.
B14, the apparatus of any of B11 to B13, the emotion determination module comprising:
the parameter fusion module is used for fusing the user parameters to obtain fused user parameters;
and the emotion information determining module is used for determining the emotion information of the user according to the fused user parameters.
B15, the apparatus of B14, the emotion information determination module comprising:
the region determining module is used for determining a target coordinate region corresponding to the fusion user parameter;
and the region-based emotion determining module is used for determining the emotion information of the user according to the emotion information corresponding to the target coordinate region.
B16, the apparatus of any of B11-B13, the emotion determination module comprising:
the sub-emotion information determining module is used for determining a plurality of sub-emotion information respectively corresponding to the plurality of user parameters;
and the sub-emotion information fusion module is used for fusing the various sub-emotion information to obtain the emotion information of the user.
B17, the apparatus of any of B11-B13, the emotion determination module comprising:
the emotion recognition module is used for inputting the various user parameters into an emotion recognition model and determining emotion information of the user according to output information of the emotion recognition model; the training data of the emotion recognition model includes: the annotation emotion information corresponding to the multiple user parameters respectively, or the annotation emotion information corresponding to the multiple user parameters together, or feedback data of the user on the recommended content under the condition of the multiple user parameters.
B18, the apparatus of any one of B11 to B13, the apparatus further comprising:
and the recommended content determining module is used for determining the target recommended content corresponding to the user according to the emotion information of the user.
B19, the device according to B18, the target recommended content comprises at least one of the following recommended content:
association candidates corresponding to the input content;
expression candidates corresponding to a selected text;
a skin; and
phrase candidates.
B20, the apparatus of any of B11 to B13, the emotional information comprising any of the following information:
joy, anger, sadness, fear, and blankness.
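Embodiment A6 and the corresponding module B16 above take the opposite route: each kind of user parameter first yields its own sub-emotion, and the sub-emotions are then fused. A minimal sketch, assuming per-parameter rule classifiers and confidence-weighted voting (both assumptions, not the disclosed fusion):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

SubEmotion = Tuple[str, float]  # (label, confidence)

def emotion_from_text(text: str) -> SubEmotion:
    # sub-emotion derived from the input content
    return ("joy", 0.8) if "thanks" in text else ("blankness", 0.4)

def emotion_from_physical(typing_speed: float, jitter: float) -> SubEmotion:
    # sub-emotion derived from the physical parameters
    return ("anger", 0.7) if typing_speed > 0.8 and jitter > 0.6 else ("blankness", 0.4)

def emotion_from_environment(hour_of_day: int) -> SubEmotion:
    # sub-emotion derived from the environmental parameters
    return ("sadness", 0.3) if hour_of_day >= 23 else ("blankness", 0.2)

def fuse_sub_emotions(sub_emotions: List[SubEmotion]) -> str:
    """Fuse the per-parameter sub-emotions by confidence-weighted voting."""
    votes: Dict[str, float] = defaultdict(float)
    for label, confidence in sub_emotions:
        votes[label] += confidence
    return max(votes, key=votes.get)

subs = [
    emotion_from_text("thanks!"),
    emotion_from_physical(0.3, 0.2),
    emotion_from_environment(10),
]
print(fuse_sub_emotions(subs))  # joy (0.8) outweighs blankness (0.4 + 0.2)
```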
The embodiments of the invention disclose C21, an apparatus for emotion processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs comprise instructions for:
determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters;
and determining the emotion information of the user according to the various user parameters.
C22, the apparatus according to C21, the physical parameters comprising at least one of the following:
inputting a speed parameter, an input force parameter and a jitter parameter in the input process.
C23, the apparatus according to C21, the environmental parameters including at least one of:
a time parameter, a location parameter, and a weather parameter.
C24, the apparatus of any of C21 to C23, the determining emotional information of the user, comprising:
fusing the multiple user parameters to obtain fused user parameters;
and determining emotion information of the user according to the fused user parameters.
C25, the apparatus according to C24, the determining emotional information of the user according to the fused user parameter includes:
determining a target coordinate area corresponding to the fusion user parameter;
and determining the emotion information of the user according to the emotion information corresponding to the target coordinate area.
C26, the apparatus of any of C21 to C23, the determining emotional information of the user, comprising:
determining a plurality of sub-emotion information respectively corresponding to the plurality of user parameters;
and fusing the plurality of sub-emotion information to obtain the emotion information of the user.
C27, the apparatus of any of C21 to C23, the determining emotional information of the user, comprising:
inputting the various user parameters into an emotion recognition model, and determining emotion information of the user according to output information of the emotion recognition model; the training data of the emotion recognition model includes: the annotation emotion information corresponding to the multiple user parameters respectively, or the annotation emotion information corresponding to the multiple user parameters together, or feedback data of the user on the recommended content under the condition of the multiple user parameters.
C28, the apparatus of any of C21 to C23, wherein the one or more programs further comprise instructions for:
and determining target recommendation content corresponding to the user according to the emotion information of the user.
C29, the apparatus according to C28, the target recommended content includes at least one of the following recommended content:
association candidates corresponding to the input content;
expression candidates corresponding to a selected text;
a skin; and
phrase candidates.
C30, the apparatus of any of C21 to C23, the emotional information comprising any of the following information:
joy, anger, sadness, fear, and blankness.
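Embodiments A7 and C27 above list three possible sources of training data for the emotion recognition model: labels annotated per parameter, labels annotated for the parameters jointly, and user feedback on recommended content. The sketch below shows one assumed way to normalize such records into training examples; the record layout, the majority vote, and the treatment of ignored recommendations are illustrative only.

```python
from typing import Dict, List

def make_training_examples(records: List[Dict]) -> List[Dict]:
    """Turn annotation / feedback records into (features, label) training examples."""
    examples = []
    for r in records:
        features = {
            "input_content": r.get("input_content", ""),
            "typing_speed": r.get("typing_speed", 0.0),
            "hour_of_day": r.get("hour_of_day", 0),
        }
        if "joint_label" in r:
            # one emotion annotated for the parameters taken together
            label = r["joint_label"]
        elif "per_parameter_labels" in r:
            # one emotion annotated per parameter: take the majority label
            labels = r["per_parameter_labels"]
            label = max(set(labels), key=labels.count)
        elif "feedback" in r:
            # feedback on recommended content: accepted content confirms the emotion it targeted
            label = r["recommended_for"] if r["feedback"] == "accepted" else "blankness"
        else:
            continue
        examples.append({"features": features, "label": label})
    return examples

raw = [
    {"input_content": "thanks", "typing_speed": 3.0, "joint_label": "joy"},
    {"input_content": "whatever", "per_parameter_labels": ["anger", "anger", "blankness"]},
    {"input_content": "ok", "feedback": "accepted", "recommended_for": "sadness"},
]
print([e["label"] for e in make_training_examples(raw)])  # ['joy', 'anger', 'sadness']
```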
The embodiments of the present invention disclose D31, a machine-readable medium having instructions stored thereon, which, when executed by one or more processors, cause an apparatus to perform the emotion processing method described in one or more of A1 to A10.
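Finally, embodiments A8 and A9 above (and the corresponding B18/B19 and C28/C29) describe recommending content based on the determined emotion. The following is a hedged sketch of such a mapping from emotion to association candidates, expression candidates, a skin, and phrase candidates; the table contents are invented for the example and do not come from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RecommendedContent:
    association_candidates: List[str] = field(default_factory=list)  # follow-ups for the input content
    expression_candidates: List[str] = field(default_factory=list)   # emoji for the selected text
    skin: str = "default"                                            # input-method skin
    phrase_candidates: List[str] = field(default_factory=list)       # ready-made phrases

RECOMMENDATION_TABLE: Dict[str, RecommendedContent] = {
    "joy": RecommendedContent(["so happy", "let's celebrate"], ["😄", "🎉"], "bright", ["Sounds great!"]),
    "sadness": RecommendedContent(["it's okay"], ["😢"], "soft", ["Take care of yourself."]),
    "anger": RecommendedContent(["let me think"], ["😤"], "calm", ["Let's talk about this later."]),
}

def recommend(emotion: str) -> RecommendedContent:
    """Map the determined emotion to target recommended content."""
    return RECOMMENDATION_TABLE.get(emotion, RecommendedContent())

print(recommend("joy").expression_candidates)  # ['😄', '🎉']
```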
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
The emotion processing method, apparatus, and medium provided by the invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.

Claims (10)

1. A method of emotion processing, the method comprising:
determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters;
and determining the emotion information of the user according to the various user parameters.
2. The method of claim 1, wherein the physical parameters include at least one of:
inputting a speed parameter, an input force parameter, and a jitter parameter during input.
3. The method of claim 1, wherein the environmental parameter comprises at least one of:
a time parameter, a location parameter, and a weather parameter.
4. The method according to any one of claims 1 to 3, wherein the determining emotional information of the user comprises:
fusing the multiple user parameters to obtain fused user parameters;
and determining emotion information of the user according to the fused user parameters.
5. The method according to claim 4, wherein the determining emotional information of the user according to the fused user parameter comprises:
determining a target coordinate area corresponding to the fusion user parameter;
and determining the emotion information of the user according to the emotion information corresponding to the target coordinate area.
6. The method according to any one of claims 1 to 3, wherein the determining emotional information of the user comprises:
determining various sub-emotion information respectively corresponding to the various user parameters;
and fusing the plurality of sub-emotion information to obtain the emotion information of the user.
7. The method according to any one of claims 1 to 3, wherein the determining emotional information of the user comprises:
inputting the various user parameters into an emotion recognition model, and determining emotion information of the user according to output information of the emotion recognition model; the training data of the emotion recognition model includes: the annotation emotion information corresponding to the multiple user parameters respectively, or the annotation emotion information corresponding to the multiple user parameters together, or feedback data of the user on the recommended content under the condition of the multiple user parameters.
8. A data processing apparatus, comprising:
the user parameter determining module is used for determining various user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters; and
and the emotion determining module is used for determining the emotion information of the user according to the various user parameters.
9. An apparatus for emotion processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs comprise instructions for:
determining a plurality of user parameters; the plurality of user parameters includes at least two of the following parameters: inputting content, physical parameters and environmental parameters;
and determining the emotion information of the user according to the various user parameters.
10. A machine-readable medium having instructions stored thereon, which, when executed by one or more processors, cause an apparatus to perform the emotion processing method as recited in one or more of claims 1 to 7.
CN202110071046.2A 2021-01-19 2021-01-19 Emotion processing method, device and medium Pending CN114816036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110071046.2A CN114816036A (en) 2021-01-19 2021-01-19 Emotion processing method, device and medium

Publications (1)

Publication Number Publication Date
CN114816036A true CN114816036A (en) 2022-07-29

Family

ID=82524004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110071046.2A Pending CN114816036A (en) 2021-01-19 2021-01-19 Emotion processing method, device and medium

Country Status (1)

Country Link
CN (1) CN114816036A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115437510A (en) * 2022-09-23 2022-12-06 联想(北京)有限公司 Data display method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103926997A (en) * 2013-01-11 2014-07-16 北京三星通信技术研究有限公司 Method for determining emotional information based on user input and terminal
CN106469297A (en) * 2016-08-31 2017-03-01 北京小米移动软件有限公司 Emotion identification method, device and terminal unit
KR20180115551A (en) * 2017-04-13 2018-10-23 젤릭스 주식회사 A Robot capable of emotional expression and operation method thereof
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
CN109804389A (en) * 2016-10-12 2019-05-24 微软技术许可有限责任公司 Emotional state is extracted from device data
CN110147729A (en) * 2019-04-16 2019-08-20 深圳壹账通智能科技有限公司 User emotion recognition methods, device, computer equipment and storage medium
CN111241822A (en) * 2020-01-03 2020-06-05 北京搜狗科技发展有限公司 Emotion discovery and dispersion method and device under input scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination